
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 03 Apr 2026 17:04:47 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Our ongoing commitment to privacy for the 1.1.1.1 public DNS resolver]]></title>
            <link>https://blog.cloudflare.com/1111-privacy-examination-2026/</link>
            <pubDate>Wed, 01 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Eight years ago, we launched 1.1.1.1 to build a faster, more private Internet. Today, we’re sharing the results of our latest independent examination. The result: our privacy protections are working exactly as promised. ]]></description>
            <content:encoded><![CDATA[ <p>Exactly 8 years ago today, <a href="https://blog.cloudflare.com/announcing-1111/"><u>we launched the 1.1.1.1 public DNS resolver</u></a>, with the intention of building the world’s <a href="https://www.dnsperf.com/#!dns-resolvers"><u>fastest</u></a> resolver — and the most private one. We knew that trust is everything for a service that handles the "phonebook of the Internet." That’s why, at launch, we made a unique commitment to publicly confirm that we are doing what we said we would do with personal data. In 2020, we <a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/"><u>hired an independent firm to check our work</u></a>, instead of just asking you to take our word for it. We shared our intention to repeat such examinations in the future. We also called on other providers to do the same, but, as far as we are aware, no other major public resolver has had its DNS privacy practices independently examined.</p><p>At the time of the 2020 review, the 1.1.1.1 resolver was less than two years old, and the purpose of the examination was to verify that our systems made good on all the commitments we made about how our 1.1.1.1 resolver functioned, even commitments that did not impact personal data or user privacy.</p><p>Since then, Cloudflare’s technology stack has grown significantly in both scale and complexity. For example, we <a href="https://blog.cloudflare.com/big-pineapple-intro/"><u>built an entirely new platform</u></a> that powers our 1.1.1.1 resolver and other DNS systems. So we felt it was vital to once again subject our systems, and our 1.1.1.1 resolver privacy commitments in particular, to a rigorous independent review.</p><p>Today, we are sharing the results of our most recent privacy examination, performed by the same Big 4 accounting firm. The independent examination report is available on our <a href="https://www.cloudflare.com/trust-hub/compliance-resources/"><u>compliance page</u></a>.</p><p>Following the conclusion of the 2024 calendar year, we began the comprehensive process of collecting and preparing evidence for our independent auditors. The examination took several months and required many teams across Cloudflare to provide supporting evidence of our privacy controls in action. Now that the independent auditors have completed the examination, we're pleased to share the final report, which provides assurance that our commitments were met: our systems are as private as promised. Most importantly, <b>our core privacy guarantees for the 1.1.1.1 resolver remain unchanged and confirmed by independent review:</b></p><ul><li><p><b>Cloudflare will not sell or share public resolver users’ personal data with third parties or use personal data from the public resolver to target any user with advertisements.</b></p></li><li><p><b>Cloudflare will only retain or use what is being asked, not information that will identify who is asking it.</b></p></li><li><p><b>Source IP addresses are anonymized and deleted within 25 hours.</b></p></li></ul><p>We also want to be transparent about two points. 
First: as we explained in <a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/"><u>our 2020 blog announcing the results of our previous examination</u></a>, randomly sampled network packets (at most 0.05% of all traffic, including the querying IP address of 1.1.1.1 public resolver users) are used solely for network troubleshooting and attack mitigation.</p><p>Second: the scope of this examination focused exclusively on our privacy commitments. Back in 2020, our first examination reviewed all of our representations: not only our privacy commitments, but also our description of how we would handle anonymized transaction and debug log data (“Public Resolver Logs”) for the legitimate operation of our Public Resolver and research purposes. Over time, our uses of this data, such as powering <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> (released after our initial 1.1.1.1 examination), have changed how we treat those logs, though with no impact on personal information or personal privacy.</p><p><a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/"><u>As we noted with the first review six years ago</u></a>: we’ve never wanted to know what individuals do on the Internet, and we’ve taken technical steps to ensure we can’t. At Cloudflare, we believe privacy should be the default. By proactively undergoing these independent examinations, we hope to set a standard for the rest of the industry. We believe every user, whether they are browsing the web directly or deploying an AI agent on their behalf, deserves an Internet that doesn't track their movements. And further, Cloudflare steadfastly stands behind the commitment in our <a href="https://www.cloudflare.com/privacypolicy/"><u>Privacy Policy</u></a> that we will not combine any information collected from DNS queries to the 1.1.1.1 resolver with any other Cloudflare or third-party data in any way that can be used to identify individual end users.</p><p>As always, we thank you for trusting 1.1.1.1 to be your gateway to the Internet. Details of the 1.1.1.1 resolver privacy examination and our accountant’s report can be found on Cloudflare’s <a href="https://www.cloudflare.com/trust-hub/compliance-resources/"><u>Certifications and compliance resources page</u></a>. Visit <a href="https://developers.cloudflare.com/1.1.1.1/"><u>https://developers.cloudflare.com/1.1.1.1/</u></a> to learn more about how to get started with the Internet's fastest, privacy-first DNS resolver.</p> ]]></content:encoded>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">VOddnCi9jbM6zHOay1HCN</guid>
            <dc:creator>Rory Malone</dc:creator>
            <dc:creator>Hannes Gerhart</dc:creator>
            <dc:creator>Leah Romm</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cable cuts, storms, and DNS: a look at Internet disruptions in Q4 2025]]></title>
            <link>https://blog.cloudflare.com/q4-2025-internet-disruption-summary/</link>
            <pubDate>Mon, 26 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ The last quarter of 2025 brought several notable disruptions to Internet connectivity. Cloudflare Radar data reveals the impact of cable cuts, power outages, extreme weather, technical problems, and more. ]]></description>
            <content:encoded><![CDATA[ <p>In 2025, we <a href="https://radar.cloudflare.com/outage-center?dateStart=2025-01-01&amp;dateEnd=2025-12-31"><u>observed over 180 Internet disruptions</u></a> spurred by a variety of causes – some were brief and partial, while others were complete outages lasting for days. In the fourth quarter, we tracked only a single <a href="#government-directed"><u>government-directed</u></a> Internet shutdown, but multiple <a href="#cable-cuts"><u>cable cuts</u></a> wreaked havoc on connectivity in several countries. <a href="#power-outages"><u>Power outages</u></a> and <a href="#weather"><u>extreme weather</u></a> disrupted Internet services in multiple places, and the ongoing <a href="#military-action"><u>conflict</u></a> in Ukraine impacted connectivity there as well. As always, a number of the disruptions we observed were due to <a href="#known-or-unspecified-technical-problems"><u>technical problems</u></a> – with some acknowledged by the relevant providers, while others had unknown causes. In addition, incidents at several hyperscaler <a href="#cloud-platforms"><u>cloud platforms</u></a> and <a href="#cloudflare"><u>Cloudflare</u></a> impacted the availability of websites and applications.  </p><p>This post is intended as a summary overview of observed and confirmed disruptions and is not an exhaustive or complete list of issues that have occurred during the quarter. These anomalies are detected through significant deviations from expected traffic patterns observed across our network. Check out the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a> for a full list of verified anomalies and confirmed outages. </p>
    <div>
      <h2>Government-directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p><a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4df6i7hjk25"><u>The Internet was shut down in Tanzania</u></a> on October 29 as <a href="https://www.theguardian.com/world/2025/oct/29/tanzania-election-president-samia-suluhu-hassan-poised-to-retain-power"><u>violent protests</u></a> took place during the country’s presidential election. Traffic initially fell around 12:30 local time (09:30 UTC), dropping more than 90% lower than the previous week. The disruption lasted approximately 26 hours, with <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4qec7zdnt2u"><u>traffic beginning to return</u></a> around 14:30 local time (11:30 UTC) on October 30. However, that restoration <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4gjngzck72u"><u>proved to be quite brief</u></a>, with a significant decrease in traffic occurring around 16:15 local time (13:15 UTC), approximately two hours after it returned. This second near-complete outage lasted until November 3, <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4g47vasfm2u"><u>when traffic aggressively returned</u></a> after 17:00 local time (14:00 UTC). Nominal drops in <a href="https://radar.cloudflare.com/routing/tz?dateStart=2025-10-29&amp;dateEnd=2025-11-04#announced-ip-address-space"><u>announced IPv4 and IPv6 address space</u></a> were also observed during the shutdown, but there was never a complete loss of announcements, which would have signified a total disconnection of the country from the Internet. (<a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous systems</u></a> announce IP address space to other Internet providers, letting them know what blocks of IP addresses they are responsible for.)</p><p>Tanzania’s president later <a href="https://apnews.com/article/tanzania-samia-suluhu-hassan-internet-shutdown-october-election-1ec66b897e7809865d8971699a7284e0"><u>expressed sympathy</u></a> for the members of the diplomatic community and foreigners residing in the country regarding the impact of the Internet shutdown. Internet and social media services were also <a href="https://www.dw.com/en/tanzania-internet-slowdown-comes-at-a-high-cost/a-55512732"><u>restricted in 2020</u></a> ahead of the country’s general elections.</p>
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Digicel Haiti</h3>
      <a href="#digicel-haiti">
        
      </a>
    </div>
    <p>Digicel Haiti is unfortunately no stranger to Internet disruptions caused by cable cuts, and the network experienced two more such incidents during the fourth quarter. On October 16, traffic from <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> began to fall at 14:30 local time (18:30 UTC), reaching near zero at 16:00 local time (20:00 UTC). A translated <a href="https://x.com/jpbrun30/status/1978920959089230003"><u>X post from the company’s Director General</u></a> noted: “<i>We advise our clientele that @DigicelHT is experiencing 2 cuts on its international fiber optic infrastructure.</i>” Traffic began to recover after 17:00 local time (21:00 UTC), and reached expected levels within the following hour. At 17:33 local time (21:33 UTC), the Director General <a href="https://x.com/jpbrun30/status/1978937426841063504"><u>posted</u></a> that “<i>the first fiber on the international infrastructure has been repaired</i>” and service had been restored.</p><p>On November 25, another translated <a href="https://x.com/jpbrun30/status/1993283730467963345"><u>X post from the provider’s Director General</u></a> stated that its “<i>international optical fiber infrastructure on National Road 1</i>” had been cut. We observed traffic dropping on Digicel’s network approximately an hour earlier, with a complete outage observed between 02:00 - 08:00 local time (07:00 - 13:00 UTC). A <a href="https://x.com/jpbrun30/status/1993309357438910484"><u>follow-on X post</u></a> at 08:22 local time (13:22 UTC) stated that all services had been restored.</p>
    <div>
      <h3>Cybernet/StormFiber (Pakistan)</h3>
      <a href="#cybernet-stormfiber-pakistan">
        
      </a>
    </div>
    <p>At 17:30 local time (12:30 UTC) on October 20, Internet traffic for <a href="https://radar.cloudflare.com/as9541"><u>Cybernet/StormFiber (AS9541)</u></a> dropped sharply, falling approximately 50% below the level seen at the same time a week prior. At the same time, the network’s announced IPv4 address space dropped by over a third. The cause of these shifts was damage to the <a href="https://www.submarinecablemap.com/submarine-cable/peace-cable"><u>PEACE</u></a> submarine cable, which suffered a cut in the Red Sea near Sudan. </p><p>PEACE is one of several submarine cable systems (including <a href="https://www.submarinecablemap.com/submarine-cable/imewe"><u>IMEWE</u></a> and <a href="https://www.submarinecablemap.com/submarine-cable/seamewe-4"><u>SEA-ME-WE-4</u></a>) that carry international Internet traffic for Pakistani providers. Although the provider <a href="https://profit.pakistantoday.com.pk/2025/10/24/stormfiber-pledges-full-restoration-by-monday-after-weeklong-internet-disruptions/"><u>pledged to fully restore service</u></a> by October 27, traffic and announced IPv4 address space had already recovered to near expected levels by around 02:00 local time on October 21 (21:00 UTC on October 20).</p>
    <div>
      <h3>Camtel, MTN Cameroon, Orange Cameroun</h3>
      <a href="#camtel-mtn-cameroon-orange-cameroun">
        
      </a>
    </div>
    <p>Unusual traffic patterns observed across multiple Internet providers in Cameroon on October 23 were reportedly caused by problems on the <a href="https://www.submarinecablemap.com/submarine-cable/west-africa-cable-system-wacs"><u>WACS (West Africa Cable System)</u></a> submarine cable, which connects countries along the west coast of Africa to Portugal. </p><p>A (translated) <a href="https://teleasu.tv/internet-graves-perturbations-observees-ce-jeudi-23-octobre-2025/"><u>published report</u></a> stated that MTN informed subscribers that “<i>following an incident on the WACS fiber optic cable, Internet service is temporarily disrupted</i>” and Orange Cameroun informed subscribers that “<i>due to an incident on the international access fiber, Internet service is disrupted.</i>” An <a href="https://x.com/Camtelonline/status/1981424170316464390"><u>X post from Camtel</u></a> stated “<i>Cameroon Telecommunications (CAMTEL) wishes to inform the public that a technical incident involving WACS cable equipment in Batoke (LIMBE) occurred in the early hours of 23 October 2025, causing Internet connectivity disruptions throughout the country.</i>” </p><p>Traffic across the impacted providers initially fell at around 05:00 local time (04:00 UTC) before recovering to expected levels around 22:00 local time (21:00 UTC). Traffic across these networks was quite volatile during the day, dropping 90-99% at times. It isn’t clear what caused the visible spikiness in the traffic pattern—possibly attempts to shift Internet traffic to <a href="https://www.submarinecablemap.com/country/cameroon"><u>other submarine cable systems that connect to Cameroon</u></a>. Announced IP address space from <a href="https://radar.cloudflare.com/routing/as30992?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>MTN Cameroon</u></a> and <a href="https://radar.cloudflare.com/routing/as36912?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Orange Cameroun</u></a> dropped during this period as well, although <a href="https://radar.cloudflare.com/routing/as15964?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Camtel’s</u></a> announced IP address space did not change.</p><p>Connectivity in the <a href="https://radar.cloudflare.com/cf"><u>Central African Republic</u></a> and <a href="https://radar.cloudflare.com/cg"><u>Republic of Congo</u></a> was also reportedly impacted by the WACS issues.</p>



    <div>
      <h3>Claro Dominicana</h3>
      <a href="#claro-dominicana">
        
      </a>
    </div>
    <p>On December 9, we saw traffic from <a href="https://radar.cloudflare.com/as6400"><u>Claro Dominicana (AS6400)</u></a>, an Internet provider in the Dominican Republic, drop sharply around 12:15 local time (16:15 UTC). Traffic levels fell again around 14:15 local time (18:15 UTC), bottoming out 77% lower than the previous week before quickly returning to expected levels. The connectivity disruption was likely caused by two fiber optic outages, as an <a href="https://x.com/ClaroRD/status/1998468046311002183"><u>X post from the provider</u></a> during the outage noted that they were “causing intermittency and slowness in some services.” A <a href="https://x.com/ClaroRD/status/1998496113838764343"><u>subsequent post on X</u></a> from Claro stated that technicians had restored Internet services nationwide by repairing the severed fiber optic cables.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Dominican Republic</h3>
      <a href="#dominican-republic">
        
      </a>
    </div>
    <p>According to a (translated) <a href="https://x.com/ETED_RD/status/1988326178219061450"><u>X post from the Empresa de Transmisión Eléctrica Dominicana</u></a> (ETED), a transmission line outage caused an interruption in electrical service in the <a href="https://radar.cloudflare.com/do"><u>Dominican Republic</u></a> on November 11. This power outage impacted Internet traffic from the country, resulting in a <a href="https://noc.social/@cloudflareradar/115533081511310085"><u>nearly 50% drop in traffic</u></a> compared to the prior week, starting at 13:15 local time (17:15 UTC). Traffic levels remained lower until approximately 02:00 local time (06:00 UTC) on November 12, with a later <a href="https://x.com/ETED_RD/status/1988575130990330153"><u>(translated) X post from ETED</u></a> noting “<i>At 2:20 a.m. we have completed the recovery of the national electrical system, supplying 96% of the demand…</i>”</p><p>A subsequent <a href="https://dominicantoday.com/dr/local/2025/11/27/manual-line-disconnection-triggered-nationwide-blackout-report-says/"><u>technical report found</u></a> that “<i>the blackout began at the 138 kV San Pedro de Macorís I substation, where a live line was manually disconnected, triggering a high-intensity short circuit. Protection systems responded immediately, but the fault caused several nearby lines to disconnect, separating 575 MW of generation in the eastern region from the rest of the grid. The imbalance caused major power plants to trip automatically as part of their built-in safety mechanisms.</i>”</p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>On December 9, a <a href="https://www.tuko.co.ke/kenya/612181-kenya-power-reveals-7-pm-nationwide-blackout-multiple-regions/"><u>major power outage</u></a> impacted multiple regions across <a href="https://radar.cloudflare.com/ke"><u>Kenya</u></a>. Kenya Power explained that the outage “<i>was triggered by an incident on the regional Kenya-Uganda interconnected power network, which caused a disturbance on the Kenyan side of the system</i>” and claimed that “<i>[p]ower was restored to most of the affected areas within approximately 30 minutes.</i>” However, impacts to Internet connectivity lasted for nearly four hours, between 19:15 - 23:00 local time (16:15 - 20:00 UTC). The power outage caused traffic to drop as much as 18% at a national level, with the traffic shifts most visible in <a href="https://radar.cloudflare.com/traffic/7668902"><u>Nakuru County</u></a> and <a href="https://radar.cloudflare.com/traffic/192709"><u>Kiambu County</u></a>.</p>


    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Odesa, Ukraine</h3>
      <a href="#odesa-ukraine">
        
      </a>
    </div>
    <p><a href="https://odessa-journal.com/russia-carried-out-a-massive-drone-attack-on-the-odessa-region"><u>Russian drone strikes</u></a> on the <a href="https://radar.cloudflare.com/traffic/698738"><u>Odesa region</u></a> in <a href="https://radar.cloudflare.com/ua"><u>Ukraine</u></a> on December 12 damaged warehouses and energy infrastructure, with the latter causing power outages in parts of the region. Those outages disrupted Internet connectivity, resulting in <a href="https://x.com/CloudflareRadar/status/2000993223406211327?s=20"><u>traffic dropping by as much as 57%</u></a> as compared to the prior week. After the initial drop at midnight on December 13 (22:00 UTC on December 12), traffic gradually recovered over the following several days, returning to expected levels around 14:30 local time (12:30 UTC) on December 16.</p>
    <div>
      <h2>Weather</h2>
      <a href="#weather">
        
      </a>
    </div>
    
    <div>
      <h3>Jamaica</h3>
      <a href="#jamaica">
        
      </a>
    </div>
    <p><a href="https://www.nytimes.com/live/2025/10/28/weather/hurricane-melissa-jamaica-landfall?smid=url-share#df989e67-a90e-50fb-92d0-8d5d52f76e84"><u>Hurricane Melissa</u></a> made landfall on <a href="https://radar.cloudflare.com/jm"><u>Jamaica</u></a> on October 28 and left a trail of damage and destruction in its path. Associated <a href="https://www.jamaicaobserver.com/2025/10/28/eyeonmelissa-35-jps-customers-without-power/"><u>power outages</u></a> and infrastructure damage impacted Internet connectivity, causing traffic to initially <a href="https://x.com/CloudflareRadar/status/1983266694715084866"><u>drop by approximately half</u></a>, <a href="https://x.com/CloudflareRadar/status/1983217966347866383"><u>starting</u></a> around 06:15 local time (11:15 UTC), ultimately reaching as much as <a href="https://x.com/CloudflareRadar/status/1983357587707048103"><u>70% lower</u></a> than the previous week. Internet traffic from Jamaica remained well below pre-hurricane levels for several days, and ultimately started to make greater progress towards expected levels <a href="https://x.com/CloudflareRadar/status/1985708253872107713?s=20"><u>during the morning of November 4</u></a>. It can often take weeks or months for Internet traffic from a country to return to “normal” levels following storms that cause massive and widespread damage – while power may be largely restored within several days, damage to physical infrastructure takes significantly longer to address.</p>
    <div>
      <h3>Sri Lanka &amp; Indonesia</h3>
      <a href="#sri-lanka-indonesia">
        
      </a>
    </div>
    <p>On November 26, <a href="https://apnews.com/article/indonesia-sri-lanka-thailand-malaysia-floods-landsides-aa9947df1f6192a3c6c72ef58659d4d2"><u>Cyclone Senyar</u></a> caused catastrophic floods and landslides in <a href="https://radar.cloudflare.com/lk"><u>Sri Lanka</u></a> and <a href="https://radar.cloudflare.com/id"><u>Indonesia</u></a>, killing over 1,000 people and damaging telecommunications and power infrastructure across these countries. The infrastructure damage resulted in <a href="https://x.com/CloudflareRadar/status/1996233525989720083"><u>disruptions to Internet connectivity</u></a>, and resultant lower traffic levels, across multiple regions.</p><p>In Sri Lanka, regions outside the main Western Province were the most affected, and several provinces saw traffic drop <a href="https://x.com/CloudflareRadar/status/1996233528032301513"><u>between 80% and 95%</u></a> as compared to the prior week, including <a href="https://radar.cloudflare.com/traffic/1232860?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Western</u></a>, <a href="https://radar.cloudflare.com/traffic/1227618?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Southern</u></a>, <a href="https://radar.cloudflare.com/traffic/1225265?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Uva</u></a>, <a href="https://radar.cloudflare.com/traffic/8133521?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Eastern</u></a>, <a href="https://radar.cloudflare.com/traffic/7671049?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Northern</u></a>, <a href="https://radar.cloudflare.com/traffic/1232870?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Central</u></a>, and <a href="https://radar.cloudflare.com/traffic/1228435?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Sabaragamuwa</u></a>.</p>

<p>In <a href="https://x.com/CloudflareRadar/status/1996233530267885938"><u>Indonesia</u></a>, <a href="https://radar.cloudflare.com/traffic/1215638?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Aceh</u></a> and the Sumatra regions saw the biggest Internet disruptions. In Aceh, traffic initially dropped over 75% as compared to the previous week. In Sumatra, <a href="https://radar.cloudflare.com/traffic/1213642?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Sumatra</u></a> was the most affected, with an early 30% drop as compared to the previous week, before starting to recover more actively the following week.</p>


    <div>
      <h2>Known or unspecified technical problems</h2>
      <a href="#known-or-unspecified-technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Smartfren (Indonesia)</h3>
      <a href="#smartfren-indonesia">
        
      </a>
    </div>
    <p>On October 3, subscribers to Indonesian Internet provider <a href="https://radar.cloudflare.com/as18004"><u>Smartfren (AS18004</u></a>) experienced a service disruption. The issues were <a href="https://x.com/smartfrenworld/status/1973957300466643203"><u>acknowledged by the provider in an X post</u></a>, which stated (in translation), “<i>Currently, telephone, SMS and data services are experiencing problems in several areas.</i>” Traffic from the provider fell as much as 84%, starting around 09:00 local time (02:00 UTC). The disruption lasted for approximately eight hours, as traffic returned to expected levels around 17:00 local time (10:00 UTC). Smartfren did not provide any additional information on what caused the service problems.</p>
    <div>
      <h3>Vodafone UK</h3>
      <a href="#vodafone-uk">
        
      </a>
    </div>
    <p>Major British Internet provider Vodafone UK (<a href="https://radar.cloudflare.com/as5378"><u>AS5378</u></a> &amp; <a href="https://radar.cloudflare.com/as25135"><u>AS25135</u></a>) experienced a brief service outage on October 13. At 15:00 local time (14:00 UTC), traffic on both Vodafone <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> dropped to zero. Announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as5378?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS5378</u></a> fell by 75%, while announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as25135?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS25135</u></a> disappeared entirely. Both Internet traffic and address space recovered two hours later, returning to expected levels around 17:00 local time (16:00 UTC). Vodafone did not provide any information on its social media channels about the cause of the outage, and its <a href="https://www.vodafone.co.uk/network/status-checker"><u>network status checker page</u></a> was also unavailable during the outage.</p>






    <div>
      <h3>Fastweb (Italy)</h3>
      <a href="#fastweb-italy">
        
      </a>
    </div>
    <p>According to a <a href="https://tg24.sky.it/tecnologia/2025/10/22/fastweb-down-problemi-internet-oggi"><u>published report</u></a>, a DNS resolution issue disrupted Internet services for customers of Italian provider <a href="https://radar.cloudflare.com/as12874"><u>Fastweb (AS12874)</u></a> on October 22, causing observed traffic volumes to drop by over 75%. Fastweb <a href="https://www.firstonline.info/en/fastweb-down-oggi-internet-bloccato-in-tutta-italia-migliaia-di-segnalazioni/"><u>acknowledged the issue</u></a>, which impacted wired Internet customers between 09:30 - 13:00 local time (08:30 - 12:00 UTC).</p><p>Although not an Internet outage caused by connectivity failure, the impact of DNS resolution issues on Internet traffic is very similar. When a provider’s <a href="https://www.cloudflare.com/learning/dns/dns-server-types/"><u>DNS resolver</u></a> is experiencing problems, switching to a service like Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS resolver</u></a> will often restore connectivity.</p>
    <div>
      <h3>SBIN, MTN Benin, Etisalat Benin</h3>
      <a href="#sbin-mtn-benin-etisalat-benin">
        
      </a>
    </div>
    <p>On December 7, a concurrent drop in traffic was observed across <a href="https://radar.cloudflare.com/as28683"><u>SBIN (AS28683)</u></a>, <a href="https://radar.cloudflare.com/as37424"><u>MTN Benin (AS37424)</u></a>, and <a href="https://radar.cloudflare.com/as37136"><u>Etisalat Benin (AS37136)</u></a>. Between 18:30 - 19:30 local time (17:30 - 18:30 UTC), traffic dropped as much as 80% as compared to the prior week at a country level, nearly 100% at Etisalat and MTN, and over 80% at SBIN.</p><p>While an <a href="https://www.reuters.com/world/africa/soldiers-benins-national-television-claim-have-seized-power-2025-12-07/"><u>attempted coup</u></a> had taken place earlier in the day, it is unclear whether the observed Internet disruption was related in any way. From a routing perspective, all three impacted networks share <a href="https://radar.cloudflare.com/as174"><u>Cogent (AS174)</u></a> as an upstream provider, so a localized issue at Cogent may have contributed to the brief outage.  </p>



    <div>
      <h3>Cellcom Israel</h3>
      <a href="#cellcom-israel">
        
      </a>
    </div>
    <p>According to a <a href="https://www.ynetnews.com/article/2gpt1kt35"><u>reported announcement</u></a> from Israeli provider <a href="https://radar.cloudflare.com/as1680"><u>Cellcom (AS1680)</u></a>, on December 18, there was “<i>a malfunction affecting Internet connectivity that is impacting some of our customers.</i>” This malfunction dropped traffic nearly 70% as compared to the prior week, and occurred between 09:30 - 11:00 local time (07:30 - 09:00 UTC). The “malfunction” may have been a DNS failure, according to a <a href="https://www.israelnationalnews.com/news/419552"><u>published report</u></a>.</p>
    <div>
      <h3>Partner Communications (Israel)</h3>
      <a href="#partner-communications-israel">
        
      </a>
    </div>
    <p>Closing out 2025, on December 30, a major technical failure at Israeli provider <a href="https://radar.cloudflare.com/as12400"><u>Partner Communications (AS12400)</u></a> <a href="https://www.ynetnews.com/tech-and-digital/article/hjewkibnwe"><u>disrupted</u></a> mobile, TV, and Internet services across the country. Internet traffic from Partner fell by two-thirds as compared to the previous week between 14:00 - 15:00 local time (12:00 - 13:00 UTC). During the outage, queries to Cloudflare’s 1.1.1.1 public DNS resolver spiked, suggesting that the problem may have been related to Partner’s DNS infrastructure. However, the provider did not publicly confirm what caused the outage.</p>




    <div>
      <h2>Cloud Platforms</h2>
      <a href="#cloud-platforms">
        
      </a>
    </div>
    <p>During the fourth quarter, we launched a new <a href="https://radar.cloudflare.com/cloud-observatory"><u>Cloud Observatory</u></a> page on Radar that tracks availability and performance issues at a region level across hyperscaler cloud platforms, including <a href="https://radar.cloudflare.com/cloud-observatory/amazon"><u>Amazon Web Services</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/microsoft"><u>Microsoft Azure</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/google"><u>Google Cloud Platform</u></a>, and <a href="https://radar.cloudflare.com/cloud-observatory/oracle"><u>Oracle Cloud Infrastructure</u></a>.</p>
    <div>
      <h3>Amazon Web Services</h3>
      <a href="#amazon-web-services">
        
      </a>
    </div>
    <p>On October 20, the Amazon Web Services us-east-1 region in Northern Virginia experienced “<a href="https://health.aws.amazon.com/health/status?eventID=arn:aws:health:us-east-1::event/MULTIPLE_SERVICES/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE_BA540_514A652BE1A"><u>increased error rates and latencies</u></a>” that affected multiple services within the region. The issues impacted not only customers with public-facing Web sites and applications that rely on infrastructure within the region, but also Cloudflare customers that have origin resources hosted in us-east-1.</p><p>We began to see the impact of the problems around 06:30 UTC, as the share of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#success-rate"><u>error</u></a> (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status#server_error_responses"><u>5xx-class</u></a>) responses began to climb, reaching as high as 17% around 08:00 UTC. The number of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#connection-failures"><u>failures encountered when attempting to connect to origins</u></a> in us-east-1 climbed as well, peaking around 12:00 UTC.</p>

<p>The impact could also be clearly seen in key network performance metrics, which remained elevated throughout the incident, returning to normal levels just before the end of the incident, around 23:00 UTC. Both <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tcp-handshake-duration"><u>TCP</u></a> and <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tls-handshake-duration"><u>TLS</u></a> handshake durations got progressively worse throughout the incident—these metrics measure the amount of time needed for Cloudflare to establish TCP and TLS connections respectively with customer origin servers in us-east-1. In addition, the amount of time elapsed before Cloudflare <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1/#response-header-receive-duration"><u>received response headers</u></a> from the origin increased significantly during the first several hours of the incident, before gradually returning to expected levels.  </p>





    <div>
      <h3>Microsoft Azure</h3>
      <a href="#microsoft-azure">
        
      </a>
    </div>
    <p>On October 29, Microsoft Azure experienced an <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>incident</u></a> impacting <a href="https://azure.microsoft.com/en-us/products/frontdoor"><u>Azure Front Door</u></a>, its content delivery network service. According to <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>Azure's report on the incident</u></a>, “<i>A specific sequence of customer configuration changes, performed across two different control plane build versions, resulted in incompatible customer configuration metadata being generated. These customer configuration changes themselves were valid and non-malicious – however they produced metadata that, when deployed to edge site servers, exposed a latent bug in the data plane. This incompatibility triggered a crash during asynchronous processing within the data plane service.</i>”</p><p>The incident report marked the start time at 15:41 UTC, although we observed the volume of <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#connection-failures"><u>failed connection attempts</u></a> to Azure-hosted origins begin to climb about 45 minutes prior. The TCP and TLS handshake metrics also became more volatile during the incident period, with <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tcp-handshake-duration"><u>TCP handshakes</u></a> taking over 50% longer at times, and <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tls-handshake-duration"><u>TLS handshakes</u></a> taking nearly 200% longer at peak. The impacted metrics began to improve after 20:00 UTC, and according to Microsoft, the incident ended at 00:05 UTC on October 30.</p>



    <div>
      <h2>Cloudflare</h2>
      <a href="#cloudflare">
        
      </a>
    </div>
    <p>In addition to the outages discussed above, Cloudflare also experienced two disruptions during the fourth quarter. While these were not Internet outages in the classic sense, they did prevent users from accessing Web sites and applications delivered and protected by Cloudflare when they occurred.</p><p>The first incident took place on November 18, and was caused by a software failure: a change to one of our database systems' permissions led the database to output duplicate entries into a “feature file” used by our Bot Management system. Additional details, including a root cause analysis and timeline, can be found in the associated <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>blog post</u></a>.</p><p>The second incident occurred on December 5, and impacted a subset of customers, accounting for approximately 28% of all HTTP traffic served by Cloudflare. It was triggered by changes to our request body parsing logic, made while attempting to detect and mitigate a newly disclosed industry-wide React Server Components vulnerability. A post-mortem <a href="https://blog.cloudflare.com/5-december-2025-outage/"><u>blog post</u></a> contains additional details, including a root cause analysis and timeline.</p><p>For more information about the work underway at Cloudflare to prevent outages like these from happening again, check out our <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>blog post</u></a> detailing “Code Orange: Fail Small.”</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The disruptions observed in the fourth quarter underscore the importance of real-time data in maintaining global connectivity. Whether it’s a government-ordered shutdown or a minor technical issue, transparency allows the technical community to respond faster and more effectively. We will continue to track these shifts on Cloudflare Radar, providing the insights needed to navigate the complexities of modern networking. We share our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via email.</p><p>As a reminder, while these blog posts feature graphs from <a href="https://radar.cloudflare.com/"><u>Radar</u></a> and the <a href="https://radar.cloudflare.com/explorer"><u>Radar Data Explorer</u></a>, the underlying data is available from our <a href="https://developers.cloudflare.com/api/resources/radar/"><u>API</u></a>. You can use the API to retrieve data to do your own local monitoring or analysis, or you can use the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/radar#cloudflare-radar-mcp-server-"><u>Radar MCP server</u></a> to incorporate Radar data into your AI tools.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[Microsoft Azure]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">6dRT0oOSVcyQzjnZCkzH7S</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[What came first: the CNAME or the A record?]]></title>
            <link>https://blog.cloudflare.com/cname-a-record-order-dns-standards/</link>
            <pubDate>Wed, 14 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[ A recent change to 1.1.1.1 accidentally altered the order of CNAME records in DNS responses, breaking resolution for some clients. This post explores the technical root cause, examines the source code of affected resolvers, and dives into the inherent ambiguities of the DNS RFCs.   ]]></description>
            <content:encoded><![CDATA[ <p>On January 8, 2026, a routine update to 1.1.1.1 aimed at reducing memory usage accidentally triggered a wave of DNS resolution failures for users across the Internet. The root cause wasn't an attack or an outage, but a subtle shift in the order of records within our DNS responses.  </p><p>While most modern software treats the order of records in DNS responses as irrelevant, we discovered that some implementations expect CNAME records to appear before everything else. When that order changed, resolution started failing. This post explores the code change that caused the shift, why it broke specific DNS clients, and the 40-year-old protocol ambiguity that makes the "correct" order of a DNS response difficult to define.</p>
    <div>
      <h2>Timeline</h2>
      <a href="#timeline">
        
      </a>
    </div>
    <p><i>All timestamps referenced are in Coordinated Universal Time (UTC).</i></p><table><tr><th><p><b>Time</b></p></th><th><p><b>Description</b></p></th></tr><tr><td><p>2025-12-02</p></td><td><p>The record reordering is introduced to the 1.1.1.1 codebase</p></td></tr><tr><td><p>2025-12-10</p></td><td><p>The change is released to our testing environment</p></td></tr><tr><td><p>2026-01-07 23:48</p></td><td><p>A global release containing the change starts</p></td></tr><tr><td><p>2026-01-08 17:40</p></td><td><p>The release reaches 90% of servers</p></td></tr><tr><td><p>2026-01-08 18:19</p></td><td><p>Incident is declared</p></td></tr><tr><td><p>2026-01-08 18:27</p></td><td><p>The release is reverted</p></td></tr><tr><td><p>2026-01-08 19:55</p></td><td><p>Revert is completed. Impact ends</p></td></tr></table>
    <div>
      <h2>What happened?</h2>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>While making some improvements to lower the memory usage of our cache implementation, we introduced a subtle change to CNAME record ordering. The change was introduced on December 2, 2025, released to our testing environment on December 10, and began deployment on January 7, 2026.</p>
    <div>
      <h3>How DNS CNAME chains work</h3>
      <a href="#how-dns-cname-chains-work">
        
      </a>
    </div>
    <p>When you query for a domain like <code>www.example.com</code>, you might get a <a href="https://www.cloudflare.com/learning/dns/dns-records/dns-cname-record/"><u>CNAME (Canonical Name)</u></a> record that indicates one name is an alias for another name. It’s the job of public resolvers, such as <a href="https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/"><u>1.1.1.1</u></a>, to follow this chain of aliases until it reaches a final response:</p><p><code>www.example.com → cdn.example.com → server.cdn-provider.com → 198.51.100.1</code></p><p>As 1.1.1.1 traverses this chain, it caches every intermediate record. Each record in the chain has its own <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>TTL (Time-To-Live)</u></a>, indicating how long we can cache it. Not all the TTLs in a CNAME chain need to be the same:</p>
            <pre><code>www.example.com → cdn.example.com (TTL: 3600 seconds) # Still cached
cdn.example.com → 198.51.100.1    (TTL: 300 seconds)  # Expired</code></pre>
            <p>When one or more records in a CNAME chain expire, it’s considered partially expired. Fortunately, since parts of the chain are still in our cache, we don’t have to resolve the entire CNAME chain again — only the part that has expired. In our example above, we would take the still valid <code>www.example.com → cdn.example.com</code> chain, and only resolve the expired <code>cdn.example.com</code> <a href="https://www.cloudflare.com/learning/dns/dns-records/dns-a-record/"><u>A record</u></a>. Once that’s done, we combine the existing CNAME chain and the newly resolved records into a single response.</p>
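            <p>To illustrate (with hypothetical types; this is a sketch, not the actual 1.1.1.1 cache code), handling a partially expired chain amounts to locating the first record whose TTL has lapsed, serving the still-valid prefix from cache, and re-resolving from the expired name onward:</p>
            <pre><code>struct CachedRecord {
    owner: String,   // e.g. "www.example.com."
    expires_at: u64, // absolute expiry as a Unix timestamp
}

/// Splits a cached CNAME chain into its still-valid prefix and,
/// if any record has expired, the name that must be re-resolved.
fn split_chain(chain: &amp;[CachedRecord], now: u64) -&gt; (&amp;[CachedRecord], Option&lt;&amp;str&gt;) {
    match chain.iter().position(|r| r.expires_at &lt;= now) {
        Some(i) =&gt; (&amp;chain[..i], Some(chain[i].owner.as_str())),
        None =&gt; (chain, None), // fully valid, nothing to re-resolve
    }
}
</code></pre>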
    <div>
      <h3>The logic change</h3>
      <a href="#the-logic-change">
        
      </a>
    </div>
    <p>The code that merges these two chains is where the change occurred. Previously, the code would create a new list, insert the existing CNAME chain, and then append the new records:</p>
            <pre><code>impl PartialChain {
    /// Merges records to the cache entry to make the cached records complete.
    pub fn fill_cache(&amp;self, entry: &amp;mut CacheEntry) {
        let mut answer_rrs = Vec::with_capacity(entry.answer.len() + self.records.len());
        answer_rrs.extend_from_slice(&amp;self.records); // CNAMEs first
        answer_rrs.extend_from_slice(&amp;entry.answer); // Then A/AAAA records
        entry.answer = answer_rrs;
    }
}
</code></pre>
            <p>However, to save some memory allocations and copies, the code was changed to instead append the CNAMEs to the existing answer list:</p>
            <pre><code>impl PartialChain {
    /// Merges records to the cache entry to make the cached records complete.
    pub fn fill_cache(&amp;self, entry: &amp;mut CacheEntry) {
        entry.answer.extend_from_slice(&amp;self.records); // CNAMEs last
    }
}
</code></pre>
            <p>As a result, the responses that 1.1.1.1 returned now sometimes had the CNAME records appearing at the bottom, after the final resolved answer.</p>
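            <p>For illustration only (a sketch, not the fix that ultimately shipped; per the timeline above, the release was simply reverted), the memory saving and the original ordering are not mutually exclusive. For example, <code>Vec::splice</code> can reuse the existing buffer while keeping the CNAME chain first:</p>
            <pre><code>impl PartialChain {
    /// Merges records to the cache entry to make the cached records complete.
    pub fn fill_cache(&amp;self, entry: &amp;mut CacheEntry) {
        // Splice the CNAMEs into the front of the existing answer Vec.
        // Existing elements are shifted, but no second Vec is built.
        entry.answer.splice(0..0, self.records.iter().cloned());
    }
}
</code></pre>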
    <div>
      <h3>Why this caused impact</h3>
      <a href="#why-this-caused-impact">
        
      </a>
    </div>
    <p>When DNS clients receive a response with a CNAME chain in the answer section, they also need to follow this chain to find out that <code>www.example.com</code> points to <code>198.51.100.1</code>. Some DNS client implementations handle this by keeping track of the expected name for the records as they’re iterated sequentially. When a CNAME is encountered, the expected name is updated:</p>
            <pre><code>;; QUESTION SECTION:
;; www.example.com.        IN    A

;; ANSWER SECTION:
www.example.com.    3600   IN    CNAME  cdn.example.com.
cdn.example.com.    300    IN    A      198.51.100.1
</code></pre>
            <p></p><ol><li><p>Find records for <code>www.example.com</code></p></li><li><p>Encounter <code>www.example.com. CNAME cdn.example.com</code></p></li><li><p>Find records for <code>cdn.example.com</code></p></li><li><p>Encounter <code>cdn.example.com. A 198.51.100.1</code></p></li></ol><p>When the CNAME suddenly appears at the bottom, this no longer works:</p>
            <pre><code>;; QUESTION SECTION:
;; www.example.com.	       IN    A

;; ANSWER SECTION:
cdn.example.com.    300    IN    A      198.51.100.1
www.example.com.    3600   IN    CNAME  cdn.example.com.
</code></pre>
            <p></p><ol><li><p>Find records for <code>www.example.com</code></p></li><li><p>Ignore <code>cdn.example.com. A 198.51.100.1</code> as it doesn’t match the expected name</p></li><li><p>Encounter <code>www.example.com. CNAME cdn.example.com</code></p></li><li><p>Find records for <code>cdn.example.com</code></p></li><li><p>No more records are present, so the response is considered empty</p></li></ol><p>One such implementation that broke is the <a href="https://man7.org/linux/man-pages/man3/getaddrinfo.3.html"><code><u>getaddrinfo</u></code></a> function in glibc, which is commonly used on Linux for DNS resolution. When looking at its <code>getanswer_r</code> implementation, we can indeed see it expects to find the CNAME records before any answers:</p>
            <pre><code>for (; ancount &gt; 0; --ancount)
  {
    // ... parsing DNS records ...
    
    if (rr.rtype == T_CNAME)
      {
        /* Record the CNAME target as the new expected name. */
        int n = __ns_name_unpack (c.begin, c.end, rr.rdata,
                                  name_buffer, sizeof (name_buffer));
        expected_name = name_buffer;  // Update what we're looking for
      }
    else if (rr.rtype == qtype
             &amp;&amp; __ns_samebinaryname (rr.rname, expected_name)  // Must match!
             &amp;&amp; rr.rdlength == rrtype_to_rdata_length (qtype))
      {
        /* Address record matches - store it */
        ptrlist_add (addresses, (char *) alloc_buffer_next (abuf, uint32_t));
        alloc_buffer_copy_bytes (abuf, rr.rdata, rr.rdlength);
      }
  }
</code></pre>
            <p>Another notable affected implementation was the DNSC process in three models of Cisco Ethernet switches. Switches that had been configured to use 1.1.1.1 experienced spontaneous reboot loops when they received a response containing the reordered CNAMEs. <a href="https://www.cisco.com/c/en/us/support/docs/smb/switches/Catalyst-switches/kmgmt3846-cbs-reboot-with-fatal-error-from-dnsc-process.html"><u>Cisco has published a service document describing the issue</u></a>.</p>
    <div>
      <h3>Not all implementations break</h3>
      <a href="#not-all-implementations-break">
        
      </a>
    </div>
    <p>Most DNS clients don’t have this issue. For example, <a href="https://www.freedesktop.org/software/systemd/man/latest/systemd-resolved.service.html"><u>systemd-resolved</u></a> first parses the records into an ordered set:</p>
            <pre><code>typedef struct DnsAnswerItem {
        DnsResourceRecord *rr; // The actual record
        DnsAnswerFlags flags;  // Which section it came from
        // ... other metadata
} DnsAnswerItem;


typedef struct DnsAnswer {
        unsigned n_ref;
        OrderedSet *items;
} DnsAnswer;
</code></pre>
            <p>When following a CNAME chain it can then search the entire answer set, even if the CNAME records don’t appear at the top.</p>
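            <p>A minimal sketch of this order-insensitive approach (hypothetical Rust types, simplified to a single record per owner name, whereas a real answer section can carry entire RRsets):</p>
            <pre><code>use std::collections::HashMap;
use std::net::Ipv4Addr;

enum Rdata {
    Cname(String),
    A(Ipv4Addr),
}

fn resolve_chain(answers: &amp;[(String, Rdata)], qname: &amp;str) -&gt; Option&lt;Ipv4Addr&gt; {
    // Index the whole answer section by owner name first...
    let mut by_owner: HashMap&lt;&amp;str, &amp;Rdata&gt; = HashMap::new();
    for (owner, rdata) in answers {
        by_owner.insert(owner.as_str(), rdata);
    }
    // ...then follow aliases by lookup rather than by position,
    // bounding the walk to guard against CNAME loops.
    let mut name = qname;
    for _ in 0..16 {
        match by_owner.get(name)? {
            Rdata::Cname(target) =&gt; name = target.as_str(),
            Rdata::A(addr) =&gt; return Some(*addr),
        }
    }
    None
}
</code></pre>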
    <div>
      <h2>What the RFC says</h2>
      <a href="#what-the-rfc-says">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/html/rfc1034"><u>RFC 1034</u></a>, published in 1987, defines much of the behavior of the DNS protocol, and should give us an answer on whether the order of CNAME records matters. <a href="https://datatracker.ietf.org/doc/html/rfc1034#section-4.3.1"><u>Section 4.3.1</u></a> contains the following text:</p><blockquote><p>If recursive service is requested and available, the recursive response to a query will be one of the following:</p><p>- The answer to the query, possibly preface by one or more CNAME RRs that specify aliases encountered on the way to an answer.</p></blockquote><p>While "possibly preface" can be interpreted as a requirement for CNAME records to appear before everything else, it does not use normative key words, such as <a href="https://datatracker.ietf.org/doc/html/rfc2119"><u>MUST and SHOULD</u></a> that modern RFCs use to express requirements. This isn’t a flaw in RFC 1034, but simply a result of its age. <a href="https://datatracker.ietf.org/doc/html/rfc2119"><u>RFC 2119</u></a>, which standardized these key words, was published in 1997, 10 years <i>after</i> RFC 1034.</p><p>In our case, we did originally implement the specification so that CNAMEs appear first. However, we did not have any tests asserting the behavior remains consistent due to the ambiguous language in the RFC.</p>
    <div>
      <h3>The subtle distinction: RRsets vs RRs in message sections</h3>
      <a href="#the-subtle-distinction-rrsets-vs-rrs-in-message-sections">
        
      </a>
    </div>
    <p>To understand why this ambiguity exists, we need to understand a subtle but important distinction in DNS terminology.</p><p>RFC 1034 <a href="https://datatracker.ietf.org/doc/html/rfc1034#section-3.6"><u>section 3.6</u></a> defines Resource Record Sets (RRsets) as collections of records with the same name, type, and class. For RRsets, the specification is clear about ordering:</p><blockquote><p>The order of RRs in a set is not significant, and need not be preserved by name servers, resolvers, or other parts of the DNS.</p></blockquote><p>However, RFC 1034 doesn’t clearly specify how message sections relate to RRsets. While modern DNS specifications have shown that message sections can indeed contain multiple RRsets (consider <a href="https://www.cloudflare.com/learning/dns/dnssec/how-dnssec-works/">DNSSEC</a> responses with signatures), RFC 1034 doesn’t describe message sections in those terms. Instead, it treats message sections as containing individual Resource Records (RRs).</p><p>The problem is that the RFC primarily discusses ordering in the context of RRsets but doesn't specify the ordering of different RRsets relative to each other within a message section. This is where the ambiguity lives.</p><p>RFC 1034 <a href="https://datatracker.ietf.org/doc/html/rfc1034#section-6.2.1"><u>section 6.2.1</u></a> includes an example that demonstrates this ambiguity further. It mentions that the order of Resource Records (RRs) is not significant either:</p><blockquote><p>The difference in ordering of the RRs in the answer section is not significant.</p></blockquote><p>However, this example only shows two A records for the same name within the same RRset. It doesn't address whether this applies to different record types like CNAMEs and A records.</p>
    <div>
      <h2>CNAME chain ordering</h2>
      <a href="#cname-chain-ordering">
        
      </a>
    </div>
    <p>It turns out that this issue extends beyond putting CNAME records before other record types. Even when CNAMEs appear before other records, sequential parsing can still break if the CNAME chain itself is out of order. Consider the following response:</p>
            <pre><code>;; QUESTION SECTION:
;; www.example.com.              IN    A

;; ANSWER SECTION:
cdn.example.com.           3600  IN    CNAME  server.cdn-provider.com.
www.example.com.           3600  IN    CNAME  cdn.example.com.
server.cdn-provider.com.   300   IN    A      198.51.100.1
</code></pre>
            <p>Each CNAME belongs to a different RRset, as they have different owners, so the statement about RRset order being insignificant doesn’t apply here.</p><p>However, RFC 1034 doesn't specify that CNAME chains must appear in any particular order. There's no requirement that <code>www.example.com. CNAME cdn.example.com.</code> must appear before <code>cdn.example.com. CNAME server.cdn-provider.com.</code>. With sequential parsing, the same issue occurs:</p><ol><li><p>Find records for <code>www.example.com</code></p></li><li><p>Ignore <code>cdn.example.com. CNAME server.cdn-provider.com.</code> as it doesn’t match the expected name</p></li><li><p>Encounter <code>www.example.com. CNAME cdn.example.com</code></p></li><li><p>Find records for <code>cdn.example.com</code></p></li><li><p>Ignore <code>server.cdn-provider.com. A 198.51.100.1</code> as it doesn’t match the expected name</p></li></ol><p>Having already scanned past the only record for <code>cdn.example.com</code>, a sequential parser reaches the end of the answer section without finding an address, and resolution fails.</p>
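            <p>One way for a client to be robust against both problems is to index the answer section by owner name and then walk the chain, instead of scanning records sequentially. Here is a minimal sketch in Python (illustrative only, not any production resolver):</p>
            <pre><code>def resolve_chain(question_name, records):
    """Follow a CNAME chain by name lookup, ignoring record order."""
    # Index every record by its owner name so ordering no longer matters.
    by_owner = {}
    for owner, rtype, rdata in records:
        by_owner.setdefault(owner.lower(), []).append((rtype, rdata))

    name, seen = question_name.lower(), set()
    while name not in seen:  # guard against CNAME loops
        seen.add(name)
        rrs = by_owner.get(name, [])
        answers = [d for t, d in rrs if t == "A"]
        if answers:
            return answers
        cnames = [d for t, d in rrs if t == "CNAME"]
        if not cnames:
            return []  # broken chain: no answer and no alias to follow
        name = cnames[0].lower()  # restart the lookup at the new name
    return []

# The out-of-order response above now resolves correctly:
records = [
    ("cdn.example.com.", "CNAME", "server.cdn-provider.com."),
    ("www.example.com.", "CNAME", "cdn.example.com."),
    ("server.cdn-provider.com.", "A", "198.51.100.1"),
]
assert resolve_chain("www.example.com.", records) == ["198.51.100.1"]
</code></pre>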
    <div>
      <h2>What should resolvers do?</h2>
      <a href="#what-should-resolvers-do">
        
      </a>
    </div>
    <p>RFC 1034 section 5 describes resolver behavior. <a href="https://datatracker.ietf.org/doc/html/rfc1034#section-5.2.2"><u>Section 5.2.2</u></a> specifically addresses how resolvers should handle aliases (CNAMEs): </p><blockquote><p>In most cases a resolver simply restarts the query at the new name when it encounters a CNAME.</p></blockquote><p>This suggests that resolvers should restart the query upon finding a CNAME, regardless of where it appears in the response. However, it's important to distinguish between different types of resolvers:</p><ul><li><p>Recursive resolvers, like 1.1.1.1, are full DNS resolvers that perform recursive resolution by querying authoritative nameservers</p></li><li><p>Stub resolvers, like glibc’s getaddrinfo, are simplified local interfaces that forward queries to recursive resolvers and process the responses</p></li></ul><p>The RFC sections on resolver behavior were primarily written with full resolvers in mind, not the simplified stub resolvers that most applications actually use. Some stub resolvers evidently don’t implement certain parts of the spec, such as the CNAME-restart logic described in the RFC. </p>
    <div>
      <h2>The DNSSEC specifications provide contrast</h2>
      <a href="#the-dnssec-specifications-provide-contrast">
        
      </a>
    </div>
    <p>Later DNS specifications demonstrate a different approach to defining record ordering. <a href="https://datatracker.ietf.org/doc/html/rfc4035"><u>RFC 4035</u></a>, which defines protocol modifications for <a href="https://www.cloudflare.com/learning/dns/dnssec/how-dnssec-works/"><u>DNSSEC</u></a>, uses more explicit language:</p><blockquote><p>When placing a signed RRset in the Answer section, the name server MUST also place its RRSIG RRs in the Answer section. The RRSIG RRs have a higher priority for inclusion than any other RRsets that may have to be included.</p></blockquote><p>The specification uses "MUST" and explicitly defines "higher priority" for <a href="https://www.cloudflare.com/learning/dns/dnssec/how-dnssec-works/"><u>RRSIG</u></a> records. However, "higher priority for inclusion" refers to whether RRSIGs should be included in the response, not where they should appear. This provides unambiguous guidance to implementers about record inclusion in DNSSEC contexts, while not mandating any particular behavior around record ordering.</p><p>For unsigned zones, however, the ambiguity from RFC 1034 remains. The word "preface" has guided implementation behavior for nearly four decades, but it has never been formally specified as a requirement.</p>
    <div>
      <h2>Do CNAME records come first?</h2>
      <a href="#do-cname-records-come-first">
        
      </a>
    </div>
    <p>While in our interpretation the RFCs do not require CNAMEs to appear in any particular order, it’s clear that at least some widely deployed DNS clients rely on that ordering. As some systems using these clients might be updated infrequently, or never updated at all, we believe it’s best to require CNAME records to appear in order, before any other records.</p><p>Based on what we have learned during this incident, we have reverted the CNAME re-ordering and do not intend to change the order in the future.</p><p>To prevent any future incidents or confusion, we have written a proposal in the form of an <a href="https://www.ietf.org/participate/ids/"><u>Internet-Draft</u></a> to be discussed at the IETF. If consensus is reached on the clarified behavior, this would become an RFC that explicitly defines how to correctly handle CNAMEs in DNS responses, helping us and the wider DNS community navigate the protocol. The proposal can be found at <a href="https://datatracker.ietf.org/doc/draft-jabley-dnsop-ordered-answer-section/">https://datatracker.ietf.org/doc/draft-jabley-dnsop-ordered-answer-section</a>. If you have suggestions or feedback, we would love to hear your opinions, most usefully via the <a href="https://datatracker.ietf.org/wg/dnsop/about/"><u>DNSOP working group</u></a> at the IETF.</p> ]]></content:encoded>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Resolver]]></category>
            <category><![CDATA[Standards]]></category>
            <category><![CDATA[Bugs]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">3fP84BsxwSxKr7ffpmVO6s</guid>
            <dc:creator>Sebastiaan Neuteboom</dc:creator>
        </item>
        <item>
            <title><![CDATA[Fresh insights from old data: corroborating reports of Turkmenistan IP unblocking and firewall testing]]></title>
            <link>https://blog.cloudflare.com/fresh-insights-from-old-data-corroborating-reports-of-turkmenistan-ip/</link>
            <pubDate>Mon, 03 Nov 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare used historical data to investigate reports of potential new firewall tests in Turkmenistan. Shifts in TCP resets/timeouts across ASNs corroborate large-scale network control system changes.
 ]]></description>
            <content:encoded><![CDATA[ <p>Here at Cloudflare, we frequently use and write about data in the present. But sometimes understanding the present begins with digging into the past.</p><p>We recently learned of a 2024 <a href="https://turkmen.news/internet-amnistiya-v-turkmenistane-razblokirovany-3-milliarda-ip-adresov-hostingi-i-cdn/"><u>turkmen.news article</u></a> (available in Russian) that reports <a href="https://radar.cloudflare.com/tm"><u>Turkmenistan</u></a> experienced “an unprecedented easing in blocking,” causing over 3 billion previously blocked IP addresses to become reachable. The same article reports that one of the reasons for unblocking IP addresses was that Turkmenistan may have been testing a new firewall. (The Turkmen government’s tight control over the country’s Internet access <a href="https://www.bbc.com/news/world-asia-16095369"><u>is well-documented</u></a>.)</p><p>Indeed, <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> shows a surge of requests coming from Turkmenistan around the same time, as we’ll show below. But we had an additional question: does the firewall activity show up on Radar as well? Two years ago, we launched the <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>dashboard on Radar</u></a> to give a window into the TCP connections to Cloudflare that close due to resets and timeouts. These stand out because, according to the TCP specification, resets and timeouts are ungraceful mechanisms for closing TCP connections.</p><p>In this blog post, we go back in time to share what Cloudflare saw in connection resets and timeouts. We must remind our readers that, as passive observers, we face <a href="https://blog.cloudflare.com/connection-tampering/#limitations-of-our-data"><u>limitations on what we can glean from the data</u></a>. For example, our data can’t reveal attribution. Even so, the ability to observe our environment <a href="https://blog.cloudflare.com/tricky-internet-measurement/"><u>can be insightful</u></a>. In a recent example, our visibility into resets and timeouts helped corroborate reports of large-scale <a href="https://blog.cloudflare.com/russian-internet-users-are-unable-to-access-the-open-internet/"><u>blocking and traffic tampering by Russia</u></a>.</p>
    <div>
      <h3>Turkmenistan requests where there were none before</h3>
      <a href="#turkmenistan-requests-where-there-were-none-before">
        
      </a>
    </div>
    <p>Let’s look first at the number of requests, since those should increase if IP addresses are unblocked. In mid-June 2024, Cloudflare saw a noticeable increase in HTTP requests, consistent with <a href="https://turkmen.news/internet-amnistiya-v-turkmenistane-razblokirovany-3-milliarda-ip-adresov-hostingi-i-cdn/"><u>reports</u></a> of Turkmenistan unblocking IPs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Kqaxxjv9g52RVMWg92AYu/e57468cf523702cadd634c34775be033/BLOG_3069_2.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/traffic/tm?dateStart=2024-06-01&amp;dateEnd=2024-06-30"><sup>Cloudflare Radar</sup></a></p>
    <div>
      <h3>Overall TCP resets and timeouts</h3>
      <a href="#overall-tcp-resets-and-timeouts">
        
      </a>
    </div>
    <p>The Transmission Control Protocol (TCP) is a lower-layer mechanism used to create a connection between clients and servers, and carries <a href="https://radar.cloudflare.com/adoption-and-usage#http1x-vs-http2-vs-http3"><u>70% of HTTP traffic</u></a> to Cloudflare. A TCP connection works <a href="https://blog.cloudflare.com/connection-tampering/#explaining-tampering-with-telephone-calls"><u>much like a telephone call</u></a> between humans, who follow graceful conventions to end a call—and who are acutely aware that conventions are broken if a call ends abruptly.</p><p>TCP also defines conventions to end the connection gracefully, and we developed <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>mechanisms to detect</u></a> when those conventions aren’t followed. An ungraceful end is triggered by a reset instruction or a timeout. Some ungraceful ends are due to benign artifacts of software design or human user behaviours. However, sometimes they are exploited by <a href="https://blog.cloudflare.com/connection-tampering"><u>third parties to close connections</u></a>, in everything from school and enterprise firewalls and filtering software, to zero-rating on mobile plans, to nation-state filtering.</p><p>When we look at connections from Turkmenistan, we see that on June 13, 2024, the combined proportion of the four coloured regions increases; each coloured region represents ungraceful ends at a distinct stage of the connection lifetime. In addition to the combined increase, the relative proportions between stages (or colours) change as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hNDpdNS9lDPKg3jFHigiL/ff3de33af7974c5d32ba421cbbc3c42e/BLOG_3069_3.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>Further changes appeared in the weeks that followed. Among them were an increase in Post-PSH (orange) anomalies starting around July 4; a reduction in Post-ACK (light blue) anomalies around July 13; and an increase in anomalies later in connections (green) starting July 22.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6IavKOkF7tB02MtNqJPqqD/f08c78f65894e751b7c9fce9820dee85/BLOG_3069_4.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2024-07-01&amp;dateEnd=2024-07-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>The shifts above <i>could</i> be explained by a large firewall system. It’s important to keep in mind that data in each of the connection stages (captured by the four coloured regions in the graphs) can be explained by browser implementations or user actions. However, shifts at this scale would require a great number of browsers or users doing the same thing. Similarly, individual changes in behaviour would be lost unless they occurred in large numbers at the same time.</p>
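    <p>To build intuition for those four stages, the sketch below labels an ungracefully-closed connection by the stage it reached. This is a simplified illustration in Python: the stage names mirror the Radar dashboard, but the inputs and the packet-count cut-off are assumptions, not the production classification rules.</p>
    <pre><code>def classify_close(handshake_done, saw_data, client_packets, close_kind):
    """Label an ungracefully-closed TCP connection by its stage."""
    if close_kind not in ("reset", "timeout"):
        return "graceful"    # FIN-based closes are not anomalies
    if not handshake_done:
        return "post-syn"    # ended during the three-way handshake
    if not saw_data:
        return "post-ack"    # handshake completed, but no data arrived
    if client_packets < 10:  # assumed cut-off for "early" closes
        return "post-psh"    # ended soon after the first data packet
    return "later"           # ended later in the connection lifetime

# A client that completes the handshake but whose first data packet is
# dropped upstream (before reaching us) would surface as a Post-ACK
# anomaly, since we never see the data:
print(classify_close(True, False, 2, "timeout"))  # -> post-ack
</code></pre>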
    <div>
      <h3>Digging down to individual networks</h3>
      <a href="#digging-down-to-individual-networks">
        
      </a>
    </div>
    <p>We’ve learned that it can be helpful to look at the data for individual networks to reveal common patterns between different networks in different regions <a href="https://blog.cloudflare.com/tcp-resets-timeouts/#zero-rating-in-mobile-networks"><u>operated by single entities</u></a>. </p><p>Looking at individual networks within Turkmenistan, trends and timelines appear more pronounced. July 22 in particular sees greater proportions of anomalies associated with the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>Server Name Indication</u></a>, or domain name, rather than the IP address (dark blue), although the connection stage where the anomalies appear varies by individual network.</p><p>The general Turkmenistan trends are largely mirrored in connections from <a href="https://radar.cloudflare.com/as20661"><u>AS20661 (TurkmenTelecom)</u></a>, indicating that this <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> (AS) accounts for <a href="https://radar.cloudflare.com/tm#autonomous-systems"><u>a large proportion of Turkmenistan’s traffic</u></a> to Cloudflare’s network. There is a notable reduction in Post-ACK (light blue) anomalies starting around July 26.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ukNOB1CYUAPW2s7ofdqMK/7d1dca367374db90627413e2c40a6ee3/BLOG_3069_5.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>A different picture emerges from <a href="https://radar.cloudflare.com/as51495"><u>AS51495 (Ashgabat City Telephone Network)</u></a>. Post-ACK anomalies almost completely disappear on July 12, corresponding with an increase in anomalies during the Post-PSH stage. An increase of anomalies in the Later (green) connection stage on July 22 is apparent for this AS as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7btBYWx2VVVg0MH10yY9ot/17e87bf94f97b1cd43139e432f189770/BLOG_3069_6.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>Finally, for <a href="https://radar.cloudflare.com/as59974"><u>AS59974 (Altyn Asyr)</u></a>, you can see below that there is a clear spike in Post-ACK anomalies starting July 22. This is the stage of the connection where a firewall could have seen the SNI and chosen to drop the packets immediately, so that they never reach Cloudflare’s servers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pxUHjzkRwnbmaSsgkhiKd/b56fbc84e2fdcd8b889b6e8b3a68dc40/BLOG_3069_7.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p>
    <div>
      <h3>Timeouts and resets in context, never isolation</h3>
      <a href="#timeouts-and-resets-in-context-never-isolation">
        
      </a>
    </div>
    <p>We’ve previously discussed <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>how to use the resets and timeouts</u></a> data carefully because, while useful, it can also be misinterpreted. Radar’s data on resets and timeouts is unique among operators, but in isolation it’s incomplete and subject to human bias.</p><p>Take the figure above for AS59974, where Post-ACK (light blue) anomalies markedly increased on July 22. The Radar view is proportional, meaning that the increase in proportion could be explained by greater numbers of anomalies – but could also be explained, for example, by a smaller number of valid requests. Indeed, looking at the HTTP request levels for the same AS, there was a similarly pronounced drop starting on the same day, as shown below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PAYPpcFeInis6zo4lWrSx/f28a1f84fbe5b1c21659911b11331c30/BLOG_3069_8.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>If we look at the same two graphs before July 22, however, rates of reset and timeout anomalies do not appear to mirror the very large shifts up and down in HTTP requests.</p>
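    <p>A toy calculation, with invented numbers purely for illustration, makes the caveat concrete: the anomaly share can rise sharply even when the absolute number of anomalies does not move at all.</p>
    <pre><code># Anomaly counts hold steady while valid requests fall sharply.
before = {"anomalies": 1_000, "valid": 9_000}
after = {"anomalies": 1_000, "valid": 1_000}

def share(counts):
    return counts["anomalies"] / (counts["anomalies"] + counts["valid"])

print(f"before: {share(before):.0%}")  # before: 10%
print(f"after:  {share(after):.0%}")   # after:  50%
</code></pre>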
    <div>
      <h3>Looking ahead can also mean looking behind</h3>
      <a href="#looking-ahead-can-also-mean-looking-behind">
        
      </a>
    </div>
    <p>The Radar charts above offer a way to analyze news events from a different angle, by looking at requests and TCP connection resets and timeouts. Does this data tell us definitively that new firewalls were being tested in Turkmenistan? No. But the trends in the data are consistent with what we could expect to see if that were the case.</p><p>When thinking about ways to use the resets and timeouts data going forward, we’d encourage also looking at the data in retrospect, or even further into the past, to improve context.</p><p>A natural question might be, for example, “If Turkmenistan stopped blocking IPs in mid-2024, what did the data say beforehand?” The figure below captures October and November 2023. (The red-shaded region contains missing data due to the <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage"><u>Nov. 2 Cloudflare control plane and metrics outage</u></a>.) Signals about the Internet in Turkmenistan were evolving well before the <a href="https://turkmen.news/internet-amnistiya-v-turkmenistane-razblokirovany-3-milliarda-ip-adresov-hostingi-i-cdn/"><u>news article</u></a> that prompted us to look.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2W4MfieKNV24PmvynAAIfO/af42a2328059eb15fba0619372973887/BLOG_3069_9.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>To learn more, see our guide about <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>how to use the resets and timeouts data available on Radar</u></a>, as well as the technical details about our <a href="https://blog.cloudflare.com/connection-tampering/"><u>third-party tampering measurement</u></a> and some perspectives by a former <a href="https://blog.cloudflare.com/experience-of-data-at-scale/"><u>intern who helped drive</u></a> the study.</p><p>We’re proud to offer a unique view of TCP connection anomalies on Radar. It’s a testament to the long-lived benefits that emerge when approaching <a href="https://blog.cloudflare.com/tricky-internet-measurement/"><u>Internet measurement as a science</u></a>. In keeping with the open spirit of science, we’ve also shared how we <a href="https://blog.cloudflare.com/tricky-internet-measurement/"><u>detect and log resets and timeouts</u></a> so that others, whether hobbyists or other large operators, can reproduce the observability on their own servers.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">404c64k0KinGRYZkfe0xum</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Online outages: Q3 2025 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q3-2025-internet-disruption-summary/</link>
            <pubDate>Tue, 28 Oct 2025 12:00:00 GMT</pubDate>
            <description><![CDATA[ In Q3 2025, we observed Internet disruptions around the world resulting from government directed shutdowns, power outages, cable cuts, a cyberattack, an earthquake, a fire, and technical problems, as well as several with unexplained causes. ]]></description>
            <content:encoded><![CDATA[ <p>In the third quarter, we observed Internet disruptions with a wide variety of known causes, as well as several with <a href="#no-definitive-cause"><u>no definitive or published cause</u></a>. Once again, we unfortunately saw a number of <a href="#government-directed-shutdowns"><u>government-directed shutdowns</u></a>, including exam-related shutdowns in <a href="#sudan"><u>Sudan</u></a>, <a href="#syria"><u>Syria</u></a>, and <a href="#iraq"><u>Iraq</u></a>. <a href="#fiber-optic-cable-damage"><u>Cable cuts</u></a>, both submarine and terrestrial, caused Internet outages, including one caused by a <a href="#texas-united-states"><u>stray bullet</u></a>. <a href="#gibraltar"><u>A rogue contractor</u></a>, among other events, caused power outages that impacted Internet connectivity. Damage from an <a href="#earthquake"><u>earthquake</u></a> and a <a href="#fire-causes-infrastructure-damage"><u>fire</u></a> caused service disruptions, as did a targeted <a href="#targeted-cyberattack"><u>cyberattack</u></a>. And a myriad of <a href="#technical-problems"><u>technical issues</u></a>, including issues with <a href="#china"><u>China’s Great Firewall</u></a>, resulted in traffic losses across multiple countries.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>. These anomalies are detected through significant deviations from expected traffic patterns observed across our network. Note that both bytes-based and request-based traffic graphs are used within the post to illustrate the impact of the observed disruptions — the choice of metric to include was generally made based on which better illustrated the impact of the disruption.</p>
    <div>
      <h2>Government-directed shutdowns</h2>
      <a href="#government-directed-shutdowns">
        
      </a>
    </div>
    
    <div>
      <h3>Sudan</h3>
      <a href="#sudan">
        
      </a>
    </div>
    <p>Regular drops in traffic from <a href="https://radar.cloudflare.com/sd"><u>Sudan</u></a> were observed between 12:00-15:00 UTC (14:00-17:00 local time) each day from July 7-10. Partial outages were observed at <a href="https://radar.cloudflare.com/traffic/as15706?dateStart=2025-07-06&amp;dateEnd=2025-07-12#http-traffic"><u>Sudatel (AS15706)</u></a>, and near-complete outages at <a href="https://radar.cloudflare.com/traffic/as36998?dateStart=2025-07-06&amp;dateEnd=2025-07-12#http-traffic"><u>SDN Mobitel (AS36998)</u></a> and <a href="https://radar.cloudflare.com/traffic/as36972?dateStart=2025-07-06&amp;dateEnd=2025-07-12#http-traffic"><u>MTN Sudan (AS36972)</u></a>. Similar drops were also seen in traffic to our <a href="https://1.1.1.1/dns"><u>1.1.1.1 DNS resolver</u></a> from these impacted <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a>.</p><p>We have observed Sudan implementing government-directed Internet shutdowns in the past (<a href="https://blog.cloudflare.com/sudans-exam-related-internet-shutdowns/"><u>2021</u></a>, <a href="https://blog.cloudflare.com/syria-sudan-algeria-exam-internet-shutdown/#sudan"><u>2022</u></a>), and given that the timing aligns with the last four days of <a href="https://www.suna-sd.net/posts/ministry-of-education-publishes-schedule-for-postponed-2024-secondary-school-certificate-examinations"><u>postponed 2024 secondary school certificate examinations</u></a>, in addition to fitting the pattern of short-duration disruptions repeating across multiple days, we believe that these drops in traffic were exam-related shutdowns as well. </p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>In our <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#syria"><u>second quarter post</u></a>, we covered the cellular connectivity-focused exam-related Internet shutdowns that <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> chose to implement this year in an effort to limit their impact. During the second quarter, the shutdowns associated with the “Basic Education Certificate” took place on June 21, 24, and 29 between 05:15 - 06:00 UTC (08:15 - 09:00 local time). Exams and associated shutdowns for the “Secondary Education Certificate” were scheduled to take place between July 12 and August 3, and during that period, we observed six additional Internet disruptions in Syria on July 12, 17, 21, 28, 31, and August 3, as shown in the graph below.</p><p>At the end of the exam period, the <a href="https://t.me/TrbyaGov/2352"><u>Syrian Ministry of Education posted a Telegram message</u></a> that was presumably intended to justify the shutdowns, and the focus on cellular connectivity. Translated, it said in part:</p><p>“<i>As part of its efforts to ensure the integrity of the examination process, and in coordination with relevant authorities, the Ministry of Education was able to uncover organized exam cheating networks in three examination centers in Lattakia Governorate. These networks used advanced electronic technologies and devices in their attempt to manipulate the exam process.</i></p><p><i>The network was seized in cooperation with the Lattakia Education Directorate, following close monitoring and detection of suspicious attempts. It was found that members of the network used small earphones, wireless communication devices, and mobile phones equipped with advanced transmission and reception technologies, which contradict educational values and violate the integrity of the examination process and the principle of justice.</i>”</p>
    <div>
      <h3>Venezuela </h3>
      <a href="#venezuela">
        
      </a>
    </div>
    <p>A slightly more unusual government-directed shutdown took place in <a href="https://radar.cloudflare.com/ve"><u>Venezuela</u></a> on August 18 when Venezuelan provider <a href="https://radar.cloudflare.com/as22313"><u>SuperCable (AS22313)</u></a> ceased service. An <a href="https://x.com/vesinfiltro/status/1957601745321783746"><u>X post</u></a> from Venezuelan industry watcher <a href="https://vesinfiltro.org/"><u>VE sin Filtro</u></a> shared a notification from <a href="https://conatel.gob.ve/"><u>CONATEL, the National Commission of Telecommunications in Venezuela</u></a>, that notified SuperCable that as of March 14, 2025, its authority to operate in the country had been revoked, and established a 60-day transition period so that users could find another provider. Another <a href="https://x.com/vesinfiltro/status/1957595268221632929"><u>X post from VE sin Filtro</u></a> shared an email that SuperCable subscribers received from the company announcing the end of the service, and noted that half an hour after the email was sent, subscribers were left without Internet connectivity. Traffic began to fall at 15:00 UTC (11:00 local time), and was gone after 15:30 UTC (11:30 local time). Connectivity remained shut down through the end of the quarter.</p><p>Interestingly, we did not see a corresponding full loss of announced IP address space when traffic disappeared. However, such full losses did occur between <a href="https://radar.cloudflare.com/routing/as22313?dateStart=2025-08-17&amp;dateEnd=2025-08-23"><u>August 19-21</u></a>, and again briefly on <a href="https://radar.cloudflare.com/routing/as22313?dateStart=2025-09-14&amp;dateEnd=2025-09-20"><u>September 16</u></a>. The number of announced /24s (blocks of 256 IPv4 addresses) fell from 95 to 63 on <a href="https://radar.cloudflare.com/routing/as22313?dateStart=2025-09-24&amp;dateEnd=2025-09-30"><u>September 25</u></a>, and remained at that level through the end of the quarter.</p>
    <div>
      <h3>Iraq</h3>
      <a href="#iraq">
        
      </a>
    </div>
    <p>Similar to Syria, we covered the latest rounds of exam-related Internet shutdowns in <a href="https://radar.cloudflare.com/iq"><u>Iraq</u></a> in our <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#iraq"><u>second quarter blog post</u></a>. In that post, we noted that the shutdowns in the main part of the country ran until July 3 for <a href="https://www.facebook.com/Iraq.Ministry.of.Education/posts/pfbid0a7VuMttRxdoGWwuaymy38LcZw9jscz3Dfxup4aUue2LeRBPuU2c7vnDsZKbgCkE2l"><u>preparatory school exams</u></a>, and through July 6 in the Kurdistan region. These can be seen in the graph below.</p><p>The <a href="https://pulse.internetsociety.org/en/shutdowns/exams-shutdown-kurdistan-iraq-25-august-2025/"><u>Kurdistan Regional Government in Iraq ordered Internet services to be suspended</u></a> on August 23 between 03:30 and 04:45 UTC (6:30-7:45 local time), and again every Saturday, Monday, and Wednesday until September 8 to prevent cheating on the <a href="https://www.kurdistan24.net/ckb/story/859388/%D9%88%DB%95%D8%B2%D8%A7%D8%B1%DB%95%D8%AA%DB%8C-%DA%AF%D9%88%D8%A7%D8%B3%D8%AA%D9%86%DB%95%D9%88%DB%95-%D9%84%DB%95-%DA%95%DB%86%DA%98%D8%A7%D9%86%DB%8C-%D8%AA%D8%A7%D9%82%DB%8C%DA%A9%D8%B1%D8%AF%D9%86%DB%95%D9%88%DB%95%DA%A9%D8%A7%D9%86%DB%8C-%D9%BE%DB%86%D9%84%DB%8C-12-%D9%87%DB%8E%DA%B5%DB%95%DA%A9%D8%A7%D9%86%DB%8C-%D8%A6%DB%8C%D9%86%D8%AA%DB%95%D8%B1%D9%86%DB%8E%D8%AA-%DA%95%D8%A7%D8%AF%DB%95%DA%AF%DB%8C%D8%B1%DB%8E%D9%86"><u>second round of grade 12 exams</u></a>. Similar to last quarter, <a href="https://radar.cloudflare.com/as206206"><u>KNET (AS206206)</u></a>, <a href="https://radar.cloudflare.com/as21277"><u>Newroz Telecom (AS21277)</u></a>, <a href="https://radar.cloudflare.com/as48492"><u>IQ Online (AS48492)</u></a>, and <a href="https://radar.cloudflare.com/as59625"><u>KorekTel (AS59625)</u></a> were impacted by the ordered shutdowns.</p><p>In the main part of the country, starting on August 26, the latest round of <a href="https://pulse.internetsociety.org/en/shutdowns/internet-shutdown-for-iraq-exam-26-august-2025/"><u>Internet shutdowns for high school exams</u></a> began, scheduled through September 13, taking place between 03:00-05:00 UTC (06:00-08:00 local time). Networks impacted by these shutdowns included <a href="https://radar.cloudflare.com/traffic/as199739"><u>Earthlink (AS199739)</u></a>, <a href="https://radar.cloudflare.com/traffic/as51684"><u>Asiacell (AS51684)</u></a>, <a href="https://radar.cloudflare.com/traffic/as59588"><u>Zainas (AS59588)</u></a>, <a href="https://radar.cloudflare.com/traffic/as58322"><u>Halasat (AS58322)</u></a>, and <a href="https://radar.cloudflare.com/traffic/as203214"><u>HulumTele (AS203214)</u></a>.</p>
    <div>
      <h3>Afghanistan</h3>
      <a href="#afghanistan">
        
      </a>
    </div>
    <p>In mid-September, the Taliban <a href="https://amu.tv/200798/"><u>ordered the shutdown of fiber optic Internet connectivity</u></a> in multiple provinces across <a href="https://radar.cloudflare.com/af"><u>Afghanistan</u></a>, as part of a drive to “prevent immorality”. It was the first such ban issued since the Taliban took full control of the country in August 2021. As many as <a href="https://amu.tv/200798/"><u>15 provinces</u></a> experienced shutdowns, and these regional shutdowns <a href="https://www.afghanstudiescenter.org/taliban-internet-shutdown-blocks-thousands-of-afghan-students-from-online-classes/"><u>blocked</u></a> Afghan students from attending online classes, <a href="https://theweek.com/world-news/afghanistan-taliban-high-speed-internet-women-education"><u>impacted</u></a> commerce and banking, and <a href="https://www.dw.com/en/afghanistan-whats-at-stake-as-taliban-cut-internet/a-74043564"><u>limited access</u></a> to government agencies and institutions such as passport and registration offices and customs offices.</p><p>Less than two weeks later, just after 11:30 UTC (16:00 local time) on Monday, September 29, 2025, subscribers of wired Internet providers in <a href="https://radar.cloudflare.com/traffic/af"><u>Afghanistan</u></a> experienced a <a href="https://x.com/CloudflareRadar/status/1972649804821057727"><u>brief service interruption</u></a>, lasting until just before 12:00 UTC (16:30 local time). Mobile providers <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=&amp;dt=1d&amp;asn=as131284&amp;compAsn=as38742&amp;timeCompare=2025-09-21"><u>Afghan Wireless (AS38742) and Etisalat (AS131284)</u></a> remained available during that period. However, just after 12:30 UTC (17:00 local time), the Internet was <a href="https://x.com/CloudflareRadar/status/1972682041759076637"><u>completely shut down</u></a>, taking the country entirely offline.</p><p>These shutdowns are reviewed in more detail in our September 30 blog post, <a href="https://blog.cloudflare.com/nationwide-internet-shutdown-in-afghanistan/"><i><u>Nationwide Internet shutdown in Afghanistan extends localized disruptions</u></i></a>. Connectivity was restored around 11:45 UTC (16:15 local time) on October 1.</p>
    <div>
      <h2>Fiber optic cable damage</h2>
      <a href="#fiber-optic-cable-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Dominican Republic</h3>
      <a href="#dominican-republic">
        
      </a>
    </div>
    <p>On July 7, a <a href="https://x.com/ClaroRD/status/1942286349006168091"><u>post on X from Claro</u></a> alerted subscribers to a service disruption caused by damage to two fiber optic cables. According to a <a href="https://x.com/ClaroRD/status/1942368212160516305"><u>subsequent post</u></a>, one was damaged by work being done by <a href="http://coraavega.gob.do"><u>CORAAVEGA</u></a> (La Vega Water and Sewerage Corporation) and the other by work being done by the Dominican Electric Transmission Company. As a result of the damage, traffic from <a href="https://radar.cloudflare.com/as6400"><u>Claro (AS6400)</u></a> began to drop just before 16:00 UTC (12:00 local time), falling by just over two-thirds compared to the prior week. Claro’s technicians were able to quickly locate the faults and repair them, with traffic recovering around 18:00 UTC (14:00 local time).</p>
    <div>
      <h3>Angola</h3>
      <a href="#angola">
        
      </a>
    </div>
    <p>Between 12:45-15:45 UTC (13:45-16:45 local time) on July 19, users in <a href="https://radar.cloudflare.com/ao"><u>Angola</u></a> experienced an Internet disruption, with <a href="https://radar.cloudflare.com/as37119"><u>Unitel Angola (AS37119)</u></a> experiencing <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=as37119&amp;dt=2025-07-19_2025-07-19&amp;timeCompare=2025-07-12#query"><u>as much as a 95% drop in traffic</u></a> as compared to the previous week, and <a href="https://radar.cloudflare.com/traffic/as327932?dateStart=2025-07-19&amp;dateEnd=2025-07-19"><u>Connectis (AS327932)</u></a> suffering a complete outage. According to an <a href="https://x.com/unitelao/status/1946644209370358120"><u>X post from Unitel Angola</u></a>, it “<i>was caused by a disruption at our partner Angola Cables, resulting from public road works that affected the national fiber optic interconnections.</i>”</p><p>However, the timing of the disruption coincided with protests over the rise in diesel fuel prices, and local non-governmental organizations <a href="https://www.verangola.net/va/en/072025/Society/45242/Angolan-NGOs-consider-internet-shutdown-during-Saturday%27s-protests-a-dictatorial-measure.htm"><u>disputed</u></a> Unitel Angola’s explanation, <a href="https://myemail.constantcontact.com/STATEMENT-OF-REPUDIATION--ON-THE-INTERNET-SHUTDOWN-DURING-THE-DEMONSTRATIONS-OF-JULY-19-.html"><u>claiming</u></a> that it was actually due to a government-directed Internet shutdown. Multiple Angolan network providers experienced a drop in announced IP address space during the period the Internet disruption occurred, and analysis of routing information for these networks finds that they share <a href="https://radar.cloudflare.com/as37468"><u>Angola Cables (AS37468)</u></a> as an upstream provider, lending some credence to the explanation from Unitel Angola.</p>
    <div>
      <h3>Haiti</h3>
      <a href="#haiti">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> is no stranger to Internet disruptions caused by damage to both terrestrial and submarine cables, experiencing such problems during the <a href="https://blog.cloudflare.com/q1-2025-internet-disruption-summary/#haiti"><u>first</u></a> and <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#digicel-haiti"><u>second</u></a> quarters of 2025, as well as <a href="https://blog.cloudflare.com/q1-2024-internet-disruption-summary/#digicel-haiti"><u>first</u></a>, <a href="https://blog.cloudflare.com/q2-2024-internet-disruption-summary/#haiti"><u>second</u></a>, and <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#haiti"><u>third</u></a> quarters of 2024. The most recent such disruption occurred on August 26, when they experienced two different cuts on their fiber optic infrastructure, <a href="https://x.com/jpbrun30/status/1960437559558869220"><u>according to an X post</u></a> from the company’s Director General. Traffic <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as27653&amp;dt=2025-08-26_2025-08-26&amp;timeCompare=2025-08-19#result"><u>dropped by approximately 80%</u></a> during the disruption, which lasted from 19:30-23:00 UTC (15:30-19:00 UTC).</p>
    <div>
      <h3>Pakistan &amp; United Arab Emirates</h3>
      <a href="#pakistan-united-arab-emirates">
        
      </a>
    </div>
    <p>Telegeography’s <a href="https://www.submarinecablemap.com/"><u>Submarine Cable Map</u></a> shows that the Red Sea has a high density of submarine cables that carry data between Europe, Africa, and Asia. Cuts to these cables <a href="https://www.wired.com/story/houthi-internet-cables-ship-anchor-path/"><u>can significantly impact connectivity</u></a>, ranging from increased latency on international connections to complete outages. The impacts may only affect a single country, or they may disrupt multiple countries connected to a damaged cable. On September 6, <a href="https://radar.cloudflare.com/as17557"><u>Pakistan Telecom (AS17557)</u></a> <a href="https://x.com/PTCLOfficial/status/1964203180876521559"><u>posted a message on X</u></a> that stated “<i>We would like to inform that submarine cable cuts have occurred in Saudi waters near Jeddah, impacting partial bandwidth capacity on </i><a href="https://www.submarinecablemap.com/submarine-cable/seamewe-4"><i><u>SMW4</u></i></a><i> and </i><a href="https://www.submarinecablemap.com/submarine-cable/imewe"><i><u>IMEWE</u></i></a><i> systems. As a result, internet users in Pakistan may experience some service degradation during peak hours.</i>” (Initial reports that the cable cuts occurred near Jeddah were apparently incorrect, as the <a href="https://www.linkedin.com/feed/update/urn:li:activity:7379509758598406144?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7379509758598406144%2C7379684775701245952%29&amp;dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287379684775701245952%2Curn%3Ali%3Aactivity%3A7379509758598406144%29"><u>damage occurred in Yemeni waters</u></a>.)</p><p>Looking at the impact in Pakistan, we observed traffic drop by 25-30% in Sindh and Punjab between 12:00-20:00 UTC (17:00 - 01:00 local time).</p><p>In the <a href="https://radar.cloudflare.com/ae"><u>United Arab Emirates</u></a>, Etisalat alerted customers via <a href="https://x.com/eAndUAE/status/1964655864117346578"><u>a post on X</u></a> that they “<i>may experience slowness in data services due to an interruption in the international submarine cables.</i>” Between 11:00-22:00 UTC (15:00-02:00 local time) on September 6, traffic from <a href="https://radar.cloudflare.com/as8966"><u>AS8966 (Etisalat)</u></a> <a href="https://x.com/CloudflareRadar/status/1964727360764469339"><u>dropped as much as 28%</u></a>.</p><p>Also in the UAE, service provider <a href="https://radar.cloudflare.com/as15802"><u>du (AS15802)</u></a> told their customers via a post on X that “<i>You may experience some slowness in our data services due to an International submarine cable cut.</i>” This slowness is visible in Radar’s Internet quality metrics for the network between 11:00-22:00 UTC (15:00-02:00 local time) on September 6, with <a href="https://radar.cloudflare.com/quality/as15802?dateStart=2025-09-06&amp;dateEnd=2025-09-06#bandwidth"><u>median bandwidth</u></a> dropping by more than half, from 25 Mbps to as low as 9.8 Mbps, and <a href="https://radar.cloudflare.com/quality/as15802?dateStart=2025-09-06&amp;dateEnd=2025-09-06#latency"><u>median latency</u></a> doubling from 30 ms to over 60 ms.</p><p>The graphs below provide <a href="https://x.com/CloudflareRadar/status/1964817678541205758"><u>another view of the impact</u></a> of the cable cuts, based on Cloudflare network probes from New Delhi (del-c) to London (lhr-a) and from Bombay (bom-c) to Frankfurt (fra-a). For the former pair of data centers, mean latency grew by approximately 20%, and for the latter pair, by approximately 30%, starting around 23:00 UTC on September 5. (The stable latency line at the bottom of both graphs represents probes going over the Cloudflare backbone, which was not impacted by the cable cuts.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/MqZmljASqeJlMQO4UFUDw/eb067e32492eecb151eb3d8f4db89bf4/image24.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5C9XAWuaBwASAibBbN5HV4/778c2ad24adaea37f3e0e04c59250fc3/image32.png" />
          </figure>
    <div>
      <h3>Texas, United States</h3>
      <a href="#texas-united-states">
        
      </a>
    </div>
    <p>Fiber optic cables are frequently damaged by errant ship anchors (submarine) or construction equipment (terrestrial), but on September 26, <a href="https://www.wfaa.com/article/tech/stray-bullet-caused-major-spectrum-outages-north-texas/287-e72cdefc-6a0a-4a1e-b181-6d02bc60b732"><u>a stray bullet damaged a cable</u></a> in the Dallas, Texas area, disrupting Internet connectivity for <a href="https://radar.cloudflare.com/as11427"><u>Spectrum (AS11427)</u></a> customers. Spectrum <a href="https://x.com/Ask_Spectrum/status/1971651914283851975"><u>acknowledged the service interruption</u></a> in a post on X, followed by <a href="https://x.com/Ask_Spectrum/status/1971722840279077229"><u>another post</u></a> four and a half hours later stating that the issue had been resolved. Although neither post cited the bullet as the cause of the disruption, <a href="https://www.wfaa.com/article/tech/stray-bullet-caused-major-spectrum-outages-north-texas/287-e72cdefc-6a0a-4a1e-b181-6d02bc60b732"><u>news reports</u></a> attributed the claim to a Spectrum spokesperson. Overall, the disruption was fairly minor, lasting for just two hours between 18:00-20:00 UTC (13:00-15:00 local time), with traffic dropping less than 25% as compared to the prior week.</p>
    <div>
      <h3>South Africa</h3>
      <a href="#south-africa">
        
      </a>
    </div>
    <p>“Major cable breaks” disrupted Internet connectivity for customers of <a href="https://radar.cloudflare.com/as37457"><u>Telkom (AS37457)</u></a> in <a href="https://radar.cloudflare.com/za"><u>South Africa</u></a> on September 27. Although Telkom acknowledged the <a href="https://x.com/TelkomZA/status/1971901592413913294"><u>initial service disruption</u></a> and its <a href="https://x.com/TelkomZA/status/1971921589316080109"><u>subsequent resolution</u></a> in posts on X, it didn’t provide any information about the cause in these posts. However, it apparently later <a href="https://mybroadband.co.za/news/cellular/612245-telkom-network-suffers-national-outage.html"><u>issued a statement</u></a>, stating “<i>Telkom confirms that mobile voice and data services, which were disrupted earlier on Saturday due to major cable breaks, have now been fully restored nationwide.</i>” The disruption lasted six hours, from 08:00-14:00 UTC (10:00-16:00 local time), with traffic dropping as much as 50% as compared to the previous week.</p>
    <div>
      <h2>Power outages cause Internet disruptions</h2>
      <a href="#power-outages-cause-internet-disruptions">
        
      </a>
    </div>
    
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p>A reported <a href="https://x.com/airtel_tanzania/status/1940072844446359845"><u>power outage at one of Airtel Tanzania's data centers</u></a> on July 1 resulted in a multi-hour disruption in connectivity for its mobile customers. The service interruption occurred between 11:30-18:00 UTC (14:30-21:00 local time), with traffic dropping on <a href="https://radar.cloudflare.com/as37133"><u>Airtel Tanzania (AS37133)</u></a> by as much as 40% as compared to the previous week.</p>
    <div>
      <h3>Czech Republic</h3>
      <a href="#czech-republic">
        
      </a>
    </div>
    <p>According to the Industry and Trade Ministry in the <a href="https://radar.cloudflare.com/cz"><u>Czech Republic</u></a>, <a href="https://www.reuters.com/world/europe/czech-republic-hit-by-major-power-outage-2025-07-04/"><u>a fallen power cable caused a widespread power outage</u></a> on July 4. This power outage impacted Internet connectivity within the country, with <a href="https://x.com/CloudflareRadar/status/1941237676730089797"><u>traffic dropping</u></a> by as much as 32%. Traffic fell just after the power outage began at 10:00 UTC (12:00 local time), and although it was <a href="https://www.reuters.com/world/europe/czech-republic-hit-by-major-power-outage-2025-07-04/"><u>“nearly fully resolved”</u></a> by 16:00 UTC (18:00 local time), traffic did not return to expected levels until closer to 20:00 UTC (22:00 local time). This trailing traffic recovery aligns with a <a href="https://www.expats.cz/czech-news/article/czechia-picks-up-the-pieces-after-power-outage-why-it-happened-and-what-the-future-holds"><u>published report</u></a> that noted “<i>While ČEPS, the national transmission system operator, restored full grid functionality by mid-afternoon, tens of thousands remained without electricity into the evening.</i>”</p>
    <div>
      <h3>St. Vincent and the Grenadines</h3>
      <a href="#st-vincent-and-the-grenadines">
        
      </a>
    </div>
    <p>On <a href="https://radar.cloudflare.com/vc"><u>St. Vincent and the Grenadines</u></a>, the St Vincent Electricity Services Limited (VINLEC) <a href="https://www.facebook.com/VINLECSVG/posts/st-vincent-electricity-services-limited-vinlec-experienced-a-system-failure-at-a/1308214567765820/"><u>stated in a Facebook post</u></a> that a “system failure” caused a power outage that affected customers on mainland St. Vincent. According to <a href="https://www.vinlec.com/"><u>VINLEC</u></a>, the system failed at approximately 11:30 local time on August 16 (03:30 UTC on August 17), and power was restored to all customers just after 04:00 local time on August 17 (08:00 UTC). During the four-hour power outage, which also disrupted Internet connectivity, traffic dropped by as much as 80% below expected levels.</p>
    <div>
      <h3>Curaçao</h3>
      <a href="#curacao">
        
      </a>
    </div>
    <p>In <a href="https://radar.cloudflare.com/cw"><u>Curaçao</u></a>, a series of Facebook posts from <a href="https://www.aqualectra.com/"><u>Aqualectra</u></a>, the island’s water and power company, <a href="https://www.facebook.com/AqualectraUtilityCuracao/posts/pfbid02wBV7CqovjuSTX52NCpYVqKAjzGkgoAurCUVnrVDCqKEA8hNpyRoh96SaGTUQ7C8Ll"><u>confirmed</u></a> that there was a power outage, and provided updates on the <a href="https://www.facebook.com/AqualectraUtilityCuracao/posts/pfbid017xNQW9sbLnmXEHo3y8mU22cbKtdzYXoKfVL7fFJ1pomMTHitty5wg5ZjN1YnMDgl"><u>progress</u></a> towards <a href="https://www.facebook.com/AqualectraUtilityCuracao/posts/pfbid021MAkFoaSVZiN8inieUxryV3ACVhZy1bjkSmp5MgG5PgceSWZ1X6i6SJAD7z1gM32l"><u>restoration</u></a>. The impact of the power outage to Internet connectivity was visible in traffic disruptions across several Internet service providers, including <a href="https://radar.cloudflare.com/as52233"><u>Flow (AS52233)</u></a> and <a href="https://radar.cloudflare.com/as11081"><u>UTS (AS11081)</u></a>. The observed disruptions lasted for most of the day, with traffic dropping around 06:45 UTC (02:45 local time) and recovering to expected levels around 23:45 UTC (19:45 local time). During the disruption, <a href="https://bsky.app/profile/radar.cloudflare.com/post/3lxf4cn53cv2p"><u>the country's traffic dropped by over 80%</u></a> as compared to the previous week, with Flow experiencing a near complete outage.</p>
    <div>
      <h3>Cuba</h3>
      <a href="#cuba">
        
      </a>
    </div>
    <p>Wide-scale power outages occur all too frequently in <a href="https://radar.cloudflare.com/cu"><u>Cuba</u></a>, and when power is lost, Internet connectivity follows. We have <a href="https://www.google.com/search?q=cuba+power+outage+site%3Ablog.cloudflare.com"><u>covered many such events in this series of blog posts</u></a> over the last several years, and the latest occurred on September 10. That morning, <a href="https://x.com/OSDE_UNE/status/1965770929675608214"><u>an X post</u></a> from the <a href="https://www.unionelectrica.cu/"><u>Unión Eléctrica de Cuba</u></a> reported the collapse of the national electric power system at 09:14 local time (13:14 UTC) following the unexpected shutdown of the <a href="https://www.gem.wiki/Antonio_Guiteras_Thermoelectric_Power_Plant_(CTE)"><u>Antonio Guiteras Thermoelectric Power Plant (CTE)</u></a>. The island’s Internet traffic dropped by nearly 60% (as compared to expected levels) almost immediately, and remained lower than normal for over a day, returning to expected levels around 17:15 UTC on September 11 (13:15 local time) when the Ministerio de Energía y Minas de Cuba <a href="https://x.com/EnergiaMinasCub/status/1966191043952410754"><u>posted on X</u></a> that the national electric system had been restored.</p>
    <div>
      <h3>Gibraltar</h3>
      <a href="#gibraltar">
        
      </a>
    </div>
    <p>A contractor cutting through three high voltage cables caused a nationwide power outage in <a href="https://radar.cloudflare.com/gi"><u>Gibraltar</u></a> on September 16, according to a <a href="https://www.facebook.com/gibraltargovernment/posts/pfbid0ZDLtEtVEYwSgKGn6J3eWgvneMo1mhB6cTrhHpTgLKhguL9ZqB5qfT4ijrUDsqFhrl"><u>Facebook post from the Gibraltar government</u></a>. This power outage resulted in a disruption to Internet traffic between 11:15-18:30 UTC (13:15-20:30 local time), with traffic <a href="https://bsky.app/profile/radar.cloudflare.com/post/3lyykvuty7c2s"><u>falling as much as 80%</u></a> below the previous week.</p>
    <div>
      <h2>Earthquake</h2>
      <a href="#earthquake">
        
      </a>
    </div>
    
    <div>
      <h3>Kamchatka Peninsula, Russia</h3>
      <a href="#kamchatka-peninsula-russia">
        
      </a>
    </div>
    <p>A <a href="https://earthquake.usgs.gov/earthquakes/eventpage/us6000qw60/executive"><u>magnitude 8.8 earthquake</u></a> struck the <a href="https://radar.cloudflare.com/traffic/2125072"><u>Kamchatka Peninsula</u></a> in <a href="https://radar.cloudflare.com/ru"><u>Russia</u></a> at 23:24 UTC on July 29 (11:24 local time on July 30), and was powerful enough to trigger <a href="https://www.reuters.com/business/environment/huge-quake-russia-triggers-tsunami-warnings-around-pacific-2025-07-30/"><u>tsunami warnings</u></a> for <a href="https://radar.cloudflare.com/jp"><u>Japan</u></a>, <a href="https://radar.cloudflare.com/traffic/5879092"><u>Alaska</u></a>, <a href="https://radar.cloudflare.com/traffic/5855797"><u>Hawaii</u></a>, <a href="https://radar.cloudflare.com/gu"><u>Guam</u></a>, and other Russian regions. The graphs below show that there was an immediate impact to Internet traffic across several networks in the region, including <a href="https://radar.cloudflare.com/as12389"><u>Rostelecom (AS12389)</u></a> and <a href="https://radar.cloudflare.com/as42742"><u>InterkamService (AS42742)</u></a>, where traffic dropped by 75% or more. While traffic started to recover almost immediately across both providers, traffic on Rostelecom approached expected levels much more quickly than on InterkamService.</p>
    <div>
      <h2>Targeted cyberattack</h2>
      <a href="#targeted-cyberattack">
        
      </a>
    </div>
    
    <div>
      <h3>Yemen</h3>
      <a href="#yemen">
        
      </a>
    </div>
    <p>A <a href="https://www.yemenmonitor.com/en/Details/ArtMID/908/ArticleID/147420"><u>cyberattack targeting Houthi-controlled YemenNet</u></a> <a href="https://radar.cloudflare.com/as30873"><u>(AS30873)</u></a> on August 11 briefly disrupted connectivity across the network in <a href="https://radar.cloudflare.com/ye"><u>Yemen</u></a>. A significant drop in traffic occurred at around 14:15 UTC (17:15 local time), recovering by 15:00 UTC (18:00 local time). This observed drop in traffic aligns with the reported timing and duration of the attack, which was focused on YemenNet’s ADSL infrastructure.</p><p>The attack also apparently impacted YemenNet’s routing, as announced IPv4 address space began to decline as the attack commenced. Although the attack ended within an hour after it started, announced address space remained depressed for approximately an additional hour, reaching as low as 510 /24s (blocks of 256 IPv4 addresses) being announced, down from a “steady state” of 870 /24s.</p>
    <div>
      <h2>Fire causes infrastructure damage</h2>
      <a href="#fire-causes-infrastructure-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Egypt</h3>
      <a href="#egypt">
        
      </a>
    </div>
    <p>A <a href="https://english.alarabiya.net/News/north-africa/2025/07/07/a-fire-at-a-telecom-company-in-cairo-injures-14-and-temporarily-disrupts-service"><u>fire at the Ramses Central Exchange in Cairo, Egypt</u></a> on July 7 disrupted telecommunications services for a number of providers with infrastructure in the facility. The fire broke out in a Telecom Egypt equipment room, and impacted connectivity across multiple providers, including <a href="https://radar.cloudflare.com/as36992"><u>Etisalat (AS36992)</u></a>, <a href="https://radar.cloudflare.com/as37069"><u>Mobinil (AS37069)</u></a>, <a href="https://radar.cloudflare.com/as24863"><u>Orange Egypt (AS24863)</u></a>, and <a href="https://radar.cloudflare.com/as24835"><u>Vodafone Egypt (AS24835)</u></a>. Internet traffic across these providers initially dropped at 14:30 UTC (17:30 local time). Recovery to expected levels varied across the providers, with Etisalat recovering by July 9, Vodafone and Mobinil by July 10, and Orange Egypt on July 11.</p><p>On July 10, Telecom Egypt <a href="https://www.zawya.com/en/economy/north-africa/telecom-egypt-restores-services-after-ramses-central-fire-s2msr114"><u>announced</u></a> that services affected by the fire had been restored, after operations were transferred to alternative exchanges.</p>
    <div>
      <h2>Technical problems</h2>
      <a href="#technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Starlink</h3>
      <a href="#starlink">
        
      </a>
    </div>
    <p>Global satellite Internet service provider <a href="https://radar.cloudflare.com/as14593"><u>Starlink (AS14593)</u></a> acknowledged a July 24 network outage through a <a href="https://x.com/Starlink/status/1948474586699571518"><u>post on X</u></a>. The Vice President of Network Engineering at SpaceX explained, in a <a href="https://x.com/michaelnicollsx/status/1948509258024452488"><u>subsequent X post</u></a>, that “<i>The outage was due to failure of key internal software services that operate the core network.</i>”</p><p>Traffic initially dropped around 19:15 UTC, and the disruption lasted approximately 2.5 hours. The impact of the Starlink outage was particularly noticeable in countries including <a href="https://x.com/CloudflareRadar/status/1948491791574986771"><u>Yemen and Sudan</u></a>, where traffic dropped by approximately 50%, as well as in <a href="https://x.com/CloudflareRadar/status/1948497510235820236"><u>Zimbabwe, South Sudan, and Chad</u></a>.</p>
    <div>
      <h3>China</h3>
      <a href="#china">
        
      </a>
    </div>
    <p>At around 16:30 UTC on August 19 (00:30 local time on August 20), we observed an anomalous 25% drop in <a href="https://radar.cloudflare.com/cn"><u>China’s</u></a> Internet traffic. Our analysis of related metrics found that this disruption caused a drop in the share of IPv4 traffic, as well as a spike in the share of HTTP traffic (meaning that HTTPS traffic share had fallen), as shown in the graphs below.</p><p>Further analysis also found the share of <a href="https://blog.cloudflare.com/tcp-resets-timeouts/#sources-of-anomalous-connections"><u>TCP connections terminated in the Post SYN stage</u></a> doubled during the observed outage, from 39% to 78%, as shown below. The cause of these unusual observations was ultimately uncovered by a <a href="https://gfw.report/blog/gfw_unconditional_rst_20250820/en/"><u>Great Firewall Report blog post</u></a>, which stated, in part: “<i>Between approximately 00:34 and 01:48 (Beijing Time, UTC+8) on August 20, 2025, the Great Firewall of China (GFW) exhibited anomalous behavior by unconditionally injecting forged TCP RST+ACK packets to disrupt all connections on TCP port 443. This incident caused massive disruption of the Internet connections between China and the rest of the world. … The responsible device does not match the fingerprints of any known GFW devices, suggesting that </i><b><i>the incident was caused by either a new GFW device or a known device operating in a novel or misconfigured state</i></b><i>.</i>” This explanation is consistent with the anomalies visible in the Radar graphs.</p>
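    <p>For context on the “Post SYN” terminology: Radar classifies reset connections by how far they progressed before being torn down. The sketch below (a simplified illustration with hypothetical field names, not our production pipeline) shows the idea. A connection reset during the handshake lands in the Post SYN stage, which is exactly the signature that mass injection of forged RST+ACK packets would leave behind.</p>
    <pre><code># Simplified illustration of classifying where in its lifecycle a TCP
# connection was terminated, as seen from the server side. Field names
# are hypothetical; this is not Cloudflare's production logic.
def termination_stage(handshake_complete, payload_bytes, ended_by_reset):
    if not ended_by_reset:
        return "normal close"
    if not handshake_complete:
        return "Post SYN"   # reset arrived mid-handshake
    if payload_bytes == 0:
        return "Post ACK"   # handshake finished, reset before any data
    return "Post PSH"       # reset after data had already flowed

# An injected RST+ACK racing the real handshake shows up as "Post SYN":
print(termination_stage(False, 0, True))
</code></pre>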
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>Subscribers of <a href="https://radar.cloudflare.com/as23674"><u>Nayatel (AS23674)</u></a> experienced an approximately 90-minute disruption to Internet connectivity on September 24, due to a <a href="https://x.com/nayatelpk/status/1970791157404954809"><u>reported outage at an upstream provider</u></a>. Traffic dropped by as much as 57% between around 09:15-10:45 UTC (14:15-15:45 local time). <a href="https://radar.cloudflare.com/as38193"><u>Transworld (AS38193)</u></a> is one of several <a href="https://radar.cloudflare.com/routing/as23674?dateStart=2025-09-24&amp;dateEnd=2025-09-24#connectivity"><u>upstream providers</u></a> to Nayatel, and a more significant drop in traffic is visible for that network, lasting from around 09:15-12:15 UTC (14:15-17:15 local time). The Nayatel disruption was likely less significant than the one seen at Transworld because Transworld is upstream of only a portion of the prefixes originated by Nayatel — traffic from other Nayatel prefixes was carried by other providers that remained available.</p>
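    <p>That last point can be made concrete: if you know which upstream(s) each of an origin network’s prefixes is reachable through, the share of prefixes that depend on a single upstream bounds how much of the origin’s traffic that upstream’s outage can take down. A hypothetical sketch, using invented data rather than actual routing tables:</p>
    <pre><code># Hypothetical sketch: estimate how exposed an origin AS is to a single
# upstream, given which upstream(s) each of its prefixes is routed through.
# The mapping below is invented for illustration, not real routing data.
prefix_upstreams = {
    "203.0.113.0/24":  {"AS38193"},              # single-homed to Transworld
    "198.51.100.0/24": {"AS38193", "AS64500"},   # multihomed
    "192.0.2.0/24":    {"AS64500"},              # different upstream entirely
}

def exposure(upstream):
    """Fraction of prefixes whose only path is via the given upstream."""
    sole = sum(1 for ups in prefix_upstreams.values() if ups == {upstream})
    return sole / len(prefix_upstreams)

print(f"Single-homed exposure to AS38193: {exposure('AS38193'):.0%}")  # 33%
</code></pre>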
    <div>
      <h2>No definitive cause</h2>
      <a href="#no-definitive-cause">
        
      </a>
    </div>
    
    <div>
      <h3>Iran</h3>
      <a href="#iran">
        
      </a>
    </div>
    <p>Several weeks after experiencing a <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#iran"><u>full Internet shutdown</u></a>, <a href="https://radar.cloudflare.com/ir"><u>Iran</u></a> again experienced a sudden drop in Internet traffic around 21:00 UTC on July 5 (00:30 local time on July 6), with <a href="https://x.com/CloudflareRadar/status/1941640046005617038"><u>traffic falling 80%</u></a> as compared to the prior week. While most of the “unknown” disruptions covered in this series of posts are observed but have no associated acknowledgement or explanation, this disruption had multiple competing explanations.</p><p>A <a href="https://www.iranintl.com/en/202507067645"><u>published report</u></a> noted “<i>IRNA, Iran’s official news agency, cited the state-run Telecommunications Infrastructure Company, reporting a national-level disruption in international connectivity that affected most internet service providers Saturday night. Yet government officials have not publicly addressed the cause.</i>” However, posts from civil society groups that follow Internet connectivity in Iran (<a href="https://github.com/net4people/bbs/issues/497"><u>net4people</u></a>, <a href="https://x.com/filterbaan/status/1941628644125724793"><u>FilterWatch</u></a>) suggested that the disruption was again due to an intentional shutdown. And a <a href="https://x.com/filterbaan/status/1941628644125724793"><u>post thread on X</u></a> referenced, and disputed, a claim that the disruption was due to a DDoS attack. Unfortunately, no definitive root cause for this disruption could be found.</p>
    <div>
      <h3>Colombia</h3>
      <a href="#colombia">
        
      </a>
    </div>
    <p>Customers of Claro Colombia experienced an Internet disruption that lasted just over 30 minutes on August 6, with <a href="https://x.com/CloudflareRadar/status/1953168943423864954"><u>traffic falling two-thirds or more</u></a> as compared to the prior week between 16:45 - 17:20 UTC. The disruption affected multiple ASNs owned by Claro, including <a href="https://radar.cloudflare.com/as10620"><u>AS10620</u></a>, <a href="https://radar.cloudflare.com/as14080"><u>AS14080</u></a>, and <a href="https://radar.cloudflare.com/as26611"><u>AS26611</u></a>. (The Telmex Colombia and Comcel names shown in the graphs below are historical – Telmex and Comcel <a href="https://es.wikipedia.org/wiki/Claro_(Colombia)"><u>merged in 2012</u></a> and have operated under the Claro brand since then.) Claro did not acknowledge the disruption on social media, nor did it provide any explanation for it.</p>
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>A near-complete outage at <a href="https://radar.cloudflare.com/pk"><u>Pakistani</u></a> backbone provider <a href="https://radar.cloudflare.com/as17557"><u>PTCL (AS17557)</u></a> caused traffic from the network provider to drop 90% at 16:10 UTC (21:10 local time) on August 19. PTCL acknowledged the issue in a <a href="https://x.com/PTCLOfficial/status/1957873019084255347"><u>post on X</u></a>, noting “<i>We are currently facing data connectivity challenges on our PTCL and Ufone services.</i>” Although they <a href="https://x.com/PTCLOfficial/status/1957977425377391076"><u>published a subsequent post</u></a> several hours later after service was restored, they did not provide any additional information about the cause of the outage. However, <a href="https://bloompakistan.com/nationwide-internet-disruption-hits-pakistan-ptcl-ufone-nayatel-services-severely-affected/"><u>one published report</u></a> claimed “<i>The disruption was primarily caused by a technical fault in PTCL’s fiber optic infrastructure.</i>” while <a href="https://bloompakistan.com/nationwide-internet-disruption-hits-pakistan-ptcl-ufone-nayatel-services-severely-affected/"><u>another report</u></a> claimed “<i>According to industry sources, the internet disruption in Pakistan may be connected to a technical fault in the fiber optic backbone or issues with main internet providers responsible for international online traffic.</i>”</p><p>Interestingly, <a href="https://radar.cloudflare.com/dns/as17557?dateStart=2025-08-19&amp;dateEnd=2025-08-19#dns-query-volume"><u>traffic from PTCL to Cloudflare’s 1.1.1.1 DNS resolver</u></a> spiked as the outage began, and the <a href="https://radar.cloudflare.com/dns/as17557?dateStart=2025-08-19&amp;dateEnd=2025-08-19#dns-transport-protocol"><u>share of requests made over UDP</u></a> grew from 94% to 99%. In addition, <a href="https://radar.cloudflare.com/routing/as17557?dateStart=2025-08-19&amp;dateEnd=2025-08-19"><u>routing data</u></a> shows that there was also a small drop in announced IPv4 address space coincident with the outage. However, these additional observations do not necessarily confirm a “technical fault in PTCL’s fiber optic infrastructure” as the ultimate cause of the disruption.</p>
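    <p>Observations like these can be reproduced programmatically: the data behind the linked Radar graphs is exposed through the Cloudflare Radar API. A minimal sketch follows; the endpoint path, parameters, and response shape are written from memory and should be checked against the current API documentation.</p>
    <pre><code># Minimal sketch of pulling Radar traffic data for an ASN via the API.
# Requires a Cloudflare API token in the CLOUDFLARE_API_TOKEN environment
# variable; verify the endpoint and parameters against the Radar API docs.
import os
import requests

BASE = "https://api.cloudflare.com/client/v4/radar"
headers = {"Authorization": f"Bearer {os.environ['CLOUDFLARE_API_TOKEN']}"}

resp = requests.get(
    f"{BASE}/netflows/timeseries",
    params={"asn": "17557", "dateRange": "1d"},  # PTCL, last 24 hours
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"])
</code></pre>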
    <div>
      <h3>South Africa</h3>
      <a href="#south-africa">
        
      </a>
    </div>
    <p>To their credit, <a href="https://radar.cloudflare.com/za"><u>South African</u></a> provider <a href="https://radar.cloudflare.com/as37053"><u>RSAWEB (AS37053)</u></a> <a href="https://netnotice.rsaweb.co.za/cmfe4mzqc0001ngqrbyfq0waj"><u>quickly acknowledged an issue</u></a> with their FTTx and Enterprise connectivity on September 10, but neither their initial post nor subsequent updates provided any information on the cause of the problem. Whatever the cause, it resulted in a near-complete loss of Internet traffic from RSAWEB between 15:00 and 16:30 UTC (17:00 - 18:30 local time).</p><p>Routing data also shows a loss of just two announced /24 address blocks concurrent with the outage, dropping from 470 to 468. Unless all of RSAWEB’s outbound traffic was flowing through this limited amount of IP address space, it seems unusual that the withdrawal of just 512 IPv4 addresses from the routing table would have such a significant impact on the network’s traffic.</p>
    <div>
      <h3>SpaceX Starlink</h3>
      <a href="#spacex-starlink">
        
      </a>
    </div>
    <p>After experiencing a <a href="#starlink"><u>brief disruption in July</u></a> due to a software failure, <a href="https://radar.cloudflare.com/as14593"><u>Starlink (AS14593)</u></a> suffered another short disruption between 04:00-05:00 UTC on September 15. Although Starlink generally acknowledges disruptions to their global network on <a href="https://x.com/Starlink"><u>their X account</u></a>, often providing a root cause, in this case they <a href="https://www.datacenterdynamics.com/en/news/starlink-suffers-brief-monday-outage-globally/"><u>apparently published an acknowledgement</u></a> on X, but deleted it after the issue was resolved. In addition to the drop in traffic, we observed a concurrent drop in announced IPv4 address space and spike in BGP announcements (likely withdrawals), suggesting that the disruption may have been caused by a network-related issue.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The recent <a href="https://blog.cloudflare.com/new-regional-internet-traffic-and-certificate-transparency-insights-on-radar/"><u>launch of regional traffic insights</u></a> on Radar brings yet another perspective to our ability to investigate observed Internet traffic anomalies. We can now drill down at regional and network levels, and explore the effects on DNS traffic, connection bandwidth and latency, TCP connection tampering, and announced IP address space, helping us understand the impact of such events. And while these blog posts feature graphs from <a href="https://radar.cloudflare.com/"><u>Radar</u></a> and the <a href="https://radar.cloudflare.com/explorer"><u>Radar Data Explorer</u></a>, the underlying data is available from our <a href="https://developers.cloudflare.com/api/resources/radar/"><u>rich API</u></a>. You can use the API to retrieve data for your own local monitoring or analysis, or the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/radar#cloudflare-radar-mcp-server-"><u>Radar MCP server</u></a> to incorporate Radar data into your AI tools.</p><p>The Cloudflare Radar team is constantly monitoring for Internet disruptions, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>email</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <guid isPermaLink="false">6d4g6SeHoMoMsnUve0rdrq</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[How a volunteer-run wildfire site in Portugal stayed online during DDoS attacks]]></title>
            <link>https://blog.cloudflare.com/wildfire-fogos-pt-portugal-ddos-attack/</link>
            <pubDate>Thu, 21 Aug 2025 17:28:00 GMT</pubDate>
            <description><![CDATA[ Fogos.pt, a volunteer-run wildfire tracker in Portugal, grew from a side project into a critical national resource used by citizens, media, and government. During the 2025 fire season, it was hit by DDoS attacks. ]]></description>
            <content:encoded><![CDATA[ <p>On July 31, 2025, just as Portugal entered the peak of another intense wildfire season, João Pina, also known as <a href="https://x.com/tomahock"><u>Tomahock</u></a>, received an automated alert from Cloudflare. His volunteer-run project, <a href="https://fogos.pt"><u>fogos.pt</u></a>, now a trusted source of real-time wildfire information for millions across Portugal, was under attack.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dgHHbPyF5op5kCreLO8Zz/b69e125f95751f5dd056d1145604fcd2/BLOG-2934_2.png" />
          </figure><p><sub>One of several alerts </sub><a href="http://fogos.pt"><sub><u>fogos.pt</u></sub></a><sub> received related to the DDoS attack</sub></p><p>What started in 2015 as a late-night side project with friends around a dinner table in Aveiro has grown into a critical public resource. During wildfires, the site is where firefighters, journalists, citizens, and even government agencies go to understand what’s happening on the ground. Over the years, fogos.pt has evolved from parsing PDFs into visual maps to a full-featured app and website with historical data, weather overlays, and more. It’s also part of Project Galileo, Cloudflare’s initiative to protect vulnerable but important public interest sites at no cost.</p><p>Wildfires are not just a Portuguese challenge. They are frequent across southern Europe (Spain and Greece are currently also under alert), in California and Australia, and in Canada, which in 2023 faced <a href="https://en.wikipedia.org/wiki/2023_Canadian_wildfires"><u>record-setting</u></a> fires. In all these cases, reliable information can be crucial, sometimes life-saving. Other organizations offering similar public services can also apply to join <a href="https://www.cloudflare.com/galileo/"><u>Project Galileo</u></a> to receive protection and handle heavy traffic.</p>
    <div>
      <h2>A side project that became a national reference</h2>
      <a href="#a-side-project-that-became-a-national-reference">
        
      </a>
    </div>
    <p>Fogos.pt began with a simple question: why was fire data only available in hard-to-read PDF documents? João and a group of friends, including volunteer firefighters, decided to build something better. They pulled the data, geolocated the fire reports, and visualized them on a map.</p><p>Soon, thousands of people were using it. Then tens of thousands. Today, fogos.pt is integrated into official communications, including mentions from the Portuguese government on social media and direct links from the national wildfire information portal (<a href="https://www.sgifr.gov.pt/"><u>SGIFR.gov.pt</u></a>).</p><p>In 2018, fogos.pt formally joined forces with <a href="https://vost.pt"><u>VOST Portugal</u></a>, a digital volunteer organization that was also an early participant in our <a href="https://www.cloudflare.com/galileo/"><u>Project Galileo</u></a> — its <a href="https://www.cloudflare.com/case-studies/vost-portugal/"><u>story was featured in an earlier case study</u></a>. João Pina is also a co-founder of VOSTPT. Together, they created a complementary model: fogos.pt provides data and the platform; VOSTPT validates and communicates it to the public in real time during emergencies.</p><p>It’s an operation run entirely by volunteers, with no funding, no formal team — just passion, and the help of partners.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NjIxtp7YJjI8IPkDTdVtC/1a14e97700ab05992c1ea0610747d624/BLOG-2934_3.jpg" />
          </figure><p><sub>Homepage of fogos.pt on August 20, 2025, highlighting a major wildfire near Piódão in central Portugal.</sub></p>
    <div>
      <h3>Under attack during fire season</h3>
      <a href="#under-attack-during-fire-season">
        
      </a>
    </div>
    <p>On July 31 and August 1, 2025, two Distributed Denial of Service (DDoS) attacks targeted fogos.pt. Cloudflare automatically detected and mitigated both attacks.</p><p><b>July 31 attack:</b></p><ul><li><p>Duration: 7 minutes</p></li><li><p>Peak: 33,000 requests per second at 11:27 UTC</p></li><li><p>Bandwidth: 1.7 Gbps (max)</p></li></ul><p>What the attack looked like in requests per second:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HF7TpL7tF66oK6plP5N7T/a2bce9539e21b216b8d3ae1fd7885623/BLOG-2934_4.png" />
          </figure><p><b>August 1 attack:</b></p><ul><li><p>Duration: 5 minutes</p></li><li><p>Peak: 31,000 requests per second at 10:24 UTC</p></li><li><p>Bandwidth: 849 Mbps (max)</p></li></ul><p>What the attack looked like in requests per second, from our perspective:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iaaqt3cvSbjQ5M9cODkhH/6202d16fc65aeeb510ba761317f0f43f/BLOG-2934_5.png" />
          </figure><p>By Cloudflare’s standards, these were small. For comparison, last year we mitigated an attack exceeding <a href="https://blog.cloudflare.com/exploring-internet-traffic-shifts-and-cyber-attacks-during-the-2024-us-election/"><u>700,000 requests per second</u></a> against a high-profile US election campaign site. But for a civic project like fogos.pt, even tens of thousands of requests per second — if unprotected — can be enough to take services offline at the worst possible time.</p><p>Attackers typically use three main methods for DDoS attacks:</p><ul><li><p>IoT devices: hacked cameras, routers, or smart gadgets sending traffic.</p></li><li><p>Proxies: open or misconfigured servers, residential proxy networks, or anonymity tools that hide attackers’ IPs.</p></li><li><p>Cloud machines: compromised or rented servers from cloud providers.</p></li></ul><p>The July 31 attack likely relied on open proxies, with much of the traffic arriving unencrypted (a common sign of proxy-based attacks). The August 1 attack, in contrast, came largely from cloud machines, matching patterns we see from botnets that exploit cloud infrastructure.</p><p>These attacks were blocked without disruption. Cloudflare’s autonomous mitigation systems kicked in, and email alerts were automatically sent to João and the team. No downtime, no manual intervention required.</p>
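    <p>Rate limiting is one of the building blocks that helps absorb floods like these. For intuition only, here is a textbook token-bucket sketch (not Cloudflare’s actual mitigation logic): it admits a steady request rate plus a small burst, and drops everything beyond that budget.</p>
    <pre><code># Textbook token-bucket rate limiter, for intuition only.
# This is not how Cloudflare's DDoS mitigation is implemented.
import time

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # steady-state requests allowed per second
        self.capacity = burst         # extra headroom for short bursts
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True               # request served
        return False                  # request dropped: flood exceeds the budget

bucket = TokenBucket(rate_per_sec=100, burst=50)
served = sum(bucket.allow() for _ in range(33_000))  # a 33,000-request burst
print(f"served {served} of 33,000 requests")
</code></pre>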
    <div>
      <h3>The role of Project Galileo: traffic surges</h3>
      <a href="#the-role-of-project-galileo-traffic-surges">
        
      </a>
    </div>
    <p>Fogos.pt has used Cloudflare’s free services since the beginning, starting with DNS and gradually expanding to DDoS mitigation, caching, rate limiting, and more. The site joined Project Galileo, which protects journalists, human rights defenders, and public service projects, to get stronger, upgraded features and service at no cost.</p><blockquote><p><i>“Without Cloudflare, the site would have gone down many times during fire season,” says João Pina. “We use almost every product — but protection against attacks is critical.”</i></p></blockquote>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NGImat2Q9nujadgXBf22K/96e0aca2752f135e86efdb25d6502a18/BLOG-2934_6.png" />
          </figure><p><sub>August 11, 2025: detail of the area of interest of a wildfire in central Portugal.</sub></p><p>Traffic to fogos.pt surges when wildfires hit the news or get mentioned by authorities. These spikes can bring tens of thousands of visitors per day. And as attention grows, so does the risk. Attacks can be used to silence or disrupt critical services, or simply as distractions for more malicious activity. In August 2025, the site often had close to 60,000 people browsing at the same time, with around 40,000 being the norm across the web and app services.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5dNqwHSVBjXdWZqA5jkJiq/f2eed592d0e09df61e14285a0167197c/BLOG-2934_7.png" />
          </figure><p>In just two weeks (with an August 15 peak of almost 70 million requests), fogos.pt handled over 550 million requests (more than 25 million per day), 9 TB of data transfer, nearly 100 million page views, 15 million visits, and 240 million API calls. A massive load for a volunteer-run project, as the next screenshot from the <a href="http://fogos.pt"><u>fogos.pt</u></a> team shows:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Ofxc7GGgKgWiEbcj4JEiv/2368a8f6ec344d77a044c0a1b371201a/BLOG-2934_8.png" />
          </figure><p>In a time when timely wildfire updates can mean the difference between safety and danger, keeping the site online is essential. </p>
    <div>
      <h3>Built by community, supported by allies</h3>
      <a href="#built-by-community-supported-by-allies">
        
      </a>
    </div>
    <p>Fogos.pt is a reminder of what’s possible when public service meets technology, and why we launched Project Galileo: to protect the digital infrastructure that keeps people informed and safe. Built with no formal funding or full-time team, it runs on volunteers, partners, and a shared sense of purpose, an authenticity that João Pina believes is why it works, and why it matters.</p><p>And while this story is about Portugal, wildfires are a global challenge. Other organizations providing critical public services can also apply to join <a href="https://www.cloudflare.com/galileo/"><u>Project Galileo</u></a> and receive this protection.</p><p>From a dinner-table idea by an engineer to critical national infrastructure, fogos.pt shows the Internet at its best. Cloudflare is proud to help protect it.</p> ]]></content:encoded>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Portugal]]></category>
            <guid isPermaLink="false">44bwGeajQNVHyhbL6x3f1p</guid>
            <dc:creator>João Tomé</dc:creator>
        </item>
        <item>
            <title><![CDATA[Shutdown season: the Q2 2025 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q2-2025-internet-disruption-summary/</link>
            <pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ In Q2 2025, we observed Internet disruptions around the world resulting from government-directed shutdowns, power outages, cable damage, a cyberattack, and technical problems. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s <a href="https://www.cloudflare.com/network/"><u>network</u></a> currently spans more than 330 cities in over 125 countries, and we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions at both a local and national level, as well as at a network level.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>. Note that both bytes-based and request-based traffic graphs are used within the post to illustrate the impact of the observed disruptions — the choice of metric was generally made based on which better illustrated the impact of the disruption.</p><p>In our <a href="https://blog.cloudflare.com/q1-2025-internet-disruption-summary/"><u>Q1 2025 summary post</u></a>, we noted that we had not observed any government-directed Internet shutdowns during the quarter. Unfortunately, that forward progress was short-lived — in the second quarter of 2025, we observed <a href="#government-directed-shutdowns"><u>shutdowns</u></a> in Libya, Iran, Iraq, Syria, and Panama. The Internet’s reliance on a stable electric grid was made abundantly clear during the quarter, with a massive <a href="#power-outages-lead-to-internet-outages"><u>power outage</u></a> impacting Spain and Portugal disrupting connectivity within those countries. Fiber optic <a href="#fiber-optic-cable-damage"><u>cable cuts</u></a> impacted providers in Haiti and Malawi, major North American providers saw <a href="#technical-problems"><u>technical problems</u></a> disrupt Internet traffic, and a Russian provider was once again targeted by a significant <a href="#cyberattack-impact"><u>cyberattack</u></a>, knocking the network offline. Unfortunately, official attribution of an Internet outage’s root cause isn’t always available — and we observed several significant, yet <a href="#unexplained-disruptions"><u>unexplained</u></a>, Internet outages during the quarter.</p>
    <div>
      <h2>Government-directed shutdowns</h2>
      <a href="#government-directed-shutdowns">
        
      </a>
    </div>
    
    <div>
      <h3>Libya</h3>
      <a href="#libya">
        
      </a>
    </div>
    <p>On May 16, Internet disruptions were observed across multiple <a href="https://radar.cloudflare.com/traffic/ly"><u>Libyan</u></a> network providers, with connectivity reportedly shut down in response to <a href="https://libyareview.com/55698/protestors-face-internet-shutdown-in-libyan-capital/"><u>public protests</u></a> against the Government of National Unity. Starting at 13:30 UTC (15:30 local time), traffic dropped by more than 50% as compared to the prior week at <a href="https://radar.cloudflare.com/traffic/as329129?dateStart=2025-05-16&amp;dateEnd=2025-05-17#traffic-trends"><u>Libyan International Company for Technology (AS329129)</u></a>, <a href="https://radar.cloudflare.com/traffic/as328539?dateStart=2025-05-16&amp;dateEnd=2025-05-17#traffic-trends"><u>Giga Communication (AS328539)</u></a>, <a href="https://radar.cloudflare.com/traffic/as37284?dateStart=2025-05-16&amp;dateEnd=2025-05-17#traffic-trends"><u>Aljeel Aljadeed for Technology (AS37284)</u></a>, and <a href="https://radar.cloudflare.com/traffic/as328733?dateStart=2025-05-16&amp;dateEnd=2025-05-17#traffic-trends"><u>Awal Telecom (AS328733)</u></a>, with the latter experiencing a complete outage. Lower traffic volumes were observed until around 00:00 UTC (02:00 local time), with traffic on the affected networks recovering within an hour or so of that time. Giga Communication (AS328539) experienced a second disruption on May 17 between 02:00 - 11:30 UTC (04:00 - 13:30 local time).</p>
    <div>
      <h3>Iran</h3>
      <a href="#iran">
        
      </a>
    </div>
    <p>Multiple Internet shutdowns occurred in <a href="https://radar.cloudflare.com/traffic/ir"><u>Iran</u></a> in June following Israel’s initial <a href="https://apnews.com/article/iran-explosions-israel-tehran-00234a06e5128a8aceb406b140297299"><u>attacks on the country’s nuclear sites</u></a>. The first, on June 13, occurred between 07:15 - 09:45 UTC (10:45 - 13:15 local time). Iran’s Ministry of Communications <a href="https://x.com/itiransite/status/1933475023244648514"><u>issued a statement</u></a> that announced the shutdown: “<i>In light of the country's special circumstances and based on the measures taken by the competent authorities, temporary restrictions have been imposed on the country's Internet. It is obvious that these restrictions will be lifted once normal conditions are restored.</i>” This shutdown order impacted network providers including <a href="https://radar.cloudflare.com/traffic/as24631?dateStart=2025-06-13&amp;dateEnd=2025-06-13#http-traffic"><u>FanapTelecom (AS24631)</u></a>, Rasana (<a href="https://radar.cloudflare.com/traffic/as205647?dateStart=2025-06-13&amp;dateEnd=2025-06-13#http-traffic"><u>AS205647</u></a> and <a href="https://radar.cloudflare.com/traffic/as31549?dateStart=2025-06-13&amp;dateEnd=2025-06-13#http-traffic"><u>AS31549</u></a>), <a href="https://radar.cloudflare.com/traffic/as197207?dateStart=2025-06-13&amp;dateEnd=2025-06-13#http-traffic"><u>MCCI (AS197207)</u></a>, and <a href="https://radar.cloudflare.com/traffic/as58224?dateStart=2025-06-13&amp;dateEnd=2025-06-13#http-traffic"><u>TCI (AS58224)</u></a>, as well as others.</p><p>On June 17, Internet connectivity was again restricted, this time <a href="https://x.com/Digiato/status/1934561401185432046"><u>reportedly in an effort to “ward off cyber attacks”</u></a>, according to a government spokesperson. This second round of shutdowns began at 17:30 local time (14:00 UTC), impacting multiple networks. 
Traffic recovered at 15:30 UTC (19:00 local time) on <a href="https://radar.cloudflare.com/traffic/as24631?dateStart=2025-06-17&amp;dateEnd=2025-06-17#http-traffic"><u>FanapTelecom (AS24631)</u></a> and <a href="https://radar.cloudflare.com/traffic/as16322?dateStart=2025-06-17&amp;dateEnd=2025-06-17#http-traffic"><u>Pars Online (AS16322)</u></a>, at 20:00 UTC (23:30 local time) on <a href="https://radar.cloudflare.com/traffic/as197207?dateStart=2025-06-17&amp;dateEnd=2025-06-17#http-traffic"><u>MCCI (AS197207)</u></a> and <a href="https://radar.cloudflare.com/traffic/as44244?dateStart=2025-06-17&amp;dateEnd=2025-06-17#http-traffic"><u>IranCell (AS44244)</u></a>, at 22:00 UTC on June 17 (01:30 on June 18 local time) on <a href="https://radar.cloudflare.com/traffic/as57218?dateStart=2025-06-17&amp;dateEnd=2025-06-17#http-traffic"><u>RighTel (AS57218)</u></a>, and at 06:00 UTC on June 18 (09:30 local time) on Rasana (<a href="https://radar.cloudflare.com/traffic/as31549?dateStart=2025-06-17&amp;dateEnd=2025-06-18#http-traffic"><u>AS31549</u></a> and <a href="https://radar.cloudflare.com/traffic/as205647?dateStart=2025-06-17&amp;dateEnd=2025-06-18#http-traffic"><u>AS205647</u></a>).</p><p>During these initial Internet shutdowns, incoming Internet traffic was <a href="https://filter.watch/english/2025/06/19/network-monitoring-june-iran-internet-status-week-1-of-israel-iran-war/"><u>reportedly</u></a> also blocked, and user access was limited to Iran’s domestic <a href="https://en.wikipedia.org/wiki/National_Information_Network"><u>“National Information Network” (NIN)</u></a>.</p><p>Just a day later, on June 18, an extended third shutdown was put into place, this one lasting from 12:50 UTC (16:20 local time) through 05:00 UTC (08:30 local time) on June 25. Once again, the shutdown was <a href="https://techcrunch.com/2025/06/20/irans-government-says-it-shut-down-internet-to-protect-against-cyberattacks/"><u>reportedly implemented as a means of protecting against cyberattacks</u></a>, with a government spokesperson commenting “<i>We have previously stated that if necessary, we will certainly switch to a national internet and restrict global internet access. Security is our main concern, and we are witnessing cyberattacks on the country’s critical infrastructure and disruptions in the functioning of banks. Many of the enemy’s drones are managed and controlled via the internet, and a large amount of information is exchanged this way. A cryptocurrency exchange was also hacked, and considering all these issues, we have decided to impose Internet restrictions.</i>” This shutdown resulted in a near-complete loss of traffic through 02:00 UTC (05:30 local time) on June 21, when some traffic recovery was observed, though at levels remaining well-below pre-shutdown volumes. Traffic from this partial recovery settled into a consistent cycle for several days, until returning to expected levels on June 25. The same network providers impacted by the previous shutdowns were affected by this one as well.</p>
    <div>
      <h3>Iraq</h3>
      <a href="#iraq">
        
      </a>
    </div>
    <p>Consistent with measures taken over the past several years (<a href="https://blog.cloudflare.com/syria-iraq-algeria-exam-internet-shutdown/"><u>2024</u></a>, <a href="https://blog.cloudflare.com/exam-internet-shutdowns-iraq-algeria/"><u>2023</u></a>, <a href="https://blog.cloudflare.com/q2-2022-internet-disruption-summary/#schools-in-internets-out"><u>2022</u></a>), governments in <a href="https://radar.cloudflare.com/traffic/iq"><u>Iraq</u></a> again implemented regular Internet shutdowns in an effort to prevent cheating on national exams. (We say “governments” here because the shutdowns took place both in the main part of the country and in the Iraqi Kurdistan region in the northern part of the country.)</p><p>Occurring between 03:00 - 05:00 UTC (<a href="https://www.moc.gov.iq/?article=1015"><u>06:00 - 08:00 local time</u></a>) at the request of the Ministry of Education, the shutdowns in the main part of the country started on May 20 and ran through June 4 for middle school exams, and from June 14 until July 3 for <a href="https://www.facebook.com/Iraq.Ministry.of.Education/posts/pfbid0a7VuMttRxdoGWwuaymy38LcZw9jscz3Dfxup4aUue2LeRBPuU2c7vnDsZKbgCkE2l"><u>preparatory school exams</u></a>. Network providers that implemented the shutdowns included <a href="https://radar.cloudflare.com/traffic/as199739"><u>Earthlink (AS199739)</u></a>, <a href="https://radar.cloudflare.com/traffic/as51684"><u>Asiacell (AS51684)</u></a>, <a href="https://radar.cloudflare.com/traffic/as59588"><u>Zainas (AS59588)</u></a>, <a href="https://radar.cloudflare.com/traffic/as58322"><u>Halasat (AS58322)</u></a>, and <a href="https://radar.cloudflare.com/traffic/as203214"><u>HulumTele (AS203214)</u></a>.</p><p>In the Kurdistan region, the shutdowns began June 1, and ran through July 6, <a href="https://x.com/TwanaOth/status/1930380416374002119"><u>taking place between 03:30 - 04:30 UTC (06:30 - 07:30 local time)</u></a> on Wednesdays and Sundays. Network providers that implemented the shutdowns included <a href="https://radar.cloudflare.com/traffic/as48492"><u>IQ Online (AS48492)</u></a>, <a href="https://radar.cloudflare.com/traffic/as59625"><u>KorekTel (AS59625)</u></a>, <a href="https://radar.cloudflare.com/traffic/as21277"><u>Newroz Telecom (AS21277)</u></a>, and <a href="https://radar.cloudflare.com/traffic/as206206"><u>KNET (AS206206)</u></a>.</p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>As Iraq does, <a href="https://radar.cloudflare.com/traffic/sy"><u>Syria</u></a> also implements nationwide Internet shutdowns to prevent cheating on exams, and has been doing so for several years (<a href="https://blog.cloudflare.com/syria-exam-related-internet-shutdowns/"><u>2021</u></a>, <a href="https://blog.cloudflare.com/syria-sudan-algeria-exam-internet-shutdown/"><u>2022</u></a>, <a href="https://blog.cloudflare.com/q2-2023-internet-disruption-summary/#syria"><u>2023</u></a>, <a href="https://blog.cloudflare.com/syria-iraq-algeria-exam-internet-shutdown/"><u>2024</u></a>). However, in contrast to previous years, in 2025, the government only ordered the cutoff of cellular connectivity, with a <a href="https://t.me/TrbyaGov/1869"><u>published statement</u></a> noting (translated) “<i>As part of our commitment to ensuring the integrity of public examinations and safeguarding the future of our dear students, and based on our national responsibility to secure a fair and transparent examination environment, </i><b><i>a temporary cellular communications blackout will be implemented in areas near examination centers across the Syrian Arab Republic</i></b><i>. … The cellular communications blackout will be implemented exclusively within the narrowest possible geographical and timeframe, during the time students are in exam halls.</i>”</p><p>During the second quarter, the shutdowns associated with the “Basic Education Certificate” took place on June 21, 24, and 29 between 05:15 - 06:00 UTC (08:15 - 09:00 local time). Exams and associated shutdowns for the “Secondary Education Certificate” are scheduled to take place between July 12 and August 3.</p><p>Because these shutdowns only impacted mobile connectivity, they only resulted in a partial drop in announced IP address space, as opposed to a more complete loss <a href="https://radar.cloudflare.com/routing/sy?dateStart=2024-05-19&amp;dateEnd=2024-06-15#announced-ip-address-space"><u>as seen in previous years</u></a>.</p>
    <div>
      <h3>Panama</h3>
      <a href="#panama">
        
      </a>
    </div>
    <p>On June 21, an <a href="https://x.com/AsepPanama/status/1936462415278854469"><u>X post</u></a> from <a href="https://asep.gob.pa/"><u>ASEP Panamá</u></a> (the telecommunications regulating agency) announced that (translated) “<i>...in compliance with Cabinet Decree No. 27 of June 20, 2025, and by formal instruction from the Ministry of Government, the temporary suspension of mobile telephony and residential internet services in the province of Bocas del Toro has been coordinated.</i>” The suspension, according to the post, was supposed to be in place until June 25; however, a <a href="https://x.com/AsepPanama/status/1937982698624057637"><u>subsequent X post</u></a> noted that it would be extended until Sunday, June 29, 2025.</p><p>The suspension of Internet connectivity was <a href="https://www.ipandetec.org/panama/panama-debe-restablecer-internet-bocas/"><u>implemented in response to</u></a> protests and demonstrations against reforms to the Social Security Fund, retirement, and pensions, specifically in the province of Bocas del Toro.</p><p>The graph below shows an effective loss of traffic from <a href="https://radar.cloudflare.com/traffic/as18809"><u>Cable Onda (AS18809)</u></a> in Bocas del Toro, <a href="https://radar.cloudflare.com/traffic/pa"><u>Panama</u></a> around 03:30 UTC on June 21 (22:30 local time on June 20), recovering around 06:00 UTC (01:00 local time) on June 30. The recovery is in line with the <a href="https://x.com/AsepPanama/status/1939682983440421070"><u>final related X post</u></a> from ASEP, which noted (translated) “<i>... Internet and cellular telephone services in the province of Bocas del Toro have been restored as of 12:01 a.m. on Monday, June 30…</i>”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hqsqe4t1DRZHzqWXiMzZr/a1186cdc13145745fafb4e9869b4481e/Jun_30_-_Panama_-_Bocas_del_Toro_-_AS18809-_1200px.png" />
            
            </figure>
    <div>
      <h2>Power outages lead to Internet outages</h2>
      <a href="#power-outages-lead-to-internet-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Portugal &amp; Spain</h3>
      <a href="#portugal-spain">
        
      </a>
    </div>
    <p>The big power outage story during the second quarter was the massive outage across much of <a href="https://radar.cloudflare.com/traffic/pt"><u>Portugal</u></a> and <a href="https://radar.cloudflare.com/traffic/es"><u>Spain</u></a> on April 28. The impact of the event was covered in detail in the <a href="https://blog.cloudflare.com/how-power-outage-in-portugal-spain-impacted-internet/"><i><u>How the April 28, 2025, power outage in Portugal and Spain impacted Internet traffic and connectivity</u></i><u> blog post</u></a>, which explored shifts in traffic at a country/network/regional level, as well as how the power outage impacted network quality and announced IP address space.</p><p>In Portugal, Internet traffic dropped as the power grid failed, with traffic immediately falling by around 50% as compared to the previous week, and dropping to approximately 90% below the previous week within the next five hours.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67Ep5vwyVnCunhfFHqlIGI/ec1d3eacdddc905bfa3a0aedf714c82f/BLOG-2817_2.png" />
          </figure><p>In Spain, Internet traffic dropped as the power grid failed, with traffic immediately dropping by around 60% as compared to the previous week, falling to approximately 80% below the previous week within the next five hours.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mo4BDO1G0U42ibybDKwVY/857b84436db9db2aa5a7f46f17293923/Screenshot_2025-07-18_at_10.45.07%C3%A2__AM.png" />
          </figure><p>In both countries, traffic returned to expected levels around 01:00 local time (midnight UTC) on April 29. More details about the outage can be found in the blog post linked above.</p>
    <div>
      <h3>Morocco</h3>
      <a href="#morocco">
        
      </a>
    </div>
    <p>It appears that <a href="https://radar.cloudflare.com/traffic/ma"><u>Morocco</u></a> may have also been impacted in some fashion by the Portugal/Spain power outage, or at least Orange Maroc was. In a <a href="https://x.com/OrangeMaroc/status/1916866583047147690"><u>post on X</u></a>, the provider stated (translated) “<i>Internet traffic has been disrupted following a massive power outage in Spain and Portugal, which is affecting international connections.</i>” <a href="https://radar.cloudflare.com/traffic/as36925?dateStart=2025-04-28&amp;dateEnd=2025-04-29"><u>Traffic from the network (AS36925)</u></a> fell sharply around 12:00 UTC (13:00 local time), 90 minutes after the power outage began, with a full outage beginning around 15:00 UTC (16:00 local time). Traffic returned to expected levels around 23:30 UTC on April 28 (00:30 local time on April 29).</p>
    <div>
      <h3>Puerto Rico</h3>
      <a href="#puerto-rico">
        
      </a>
    </div>
    <p><a href="https://genera-pr.com/sobre-nosotros"><u>Genera PR</u></a>, a power company in <a href="https://radar.cloudflare.com/traffic/pr"><u>Puerto Rico</u></a>, <a href="https://x.com/Genera_PR/status/1912562399741100112"><u>posted on X</u></a> on April 16 that they had (translated) <i>“...experienced a massive power outage across the island due to the unexpected shutdown of all generating plants, including those of Genera PR and other private generators. This situation has caused a significant disruption to electrical service…</i>” <a href="https://lumapr.com/"><u>Luma Energy</u></a>, the private power company that is responsible for power distribution and power transmission in Puerto Rico, <a href="https://x.com/lumaenergypr/status/1912554580400812243"><u>published their own X post</u></a> that stated (translated) “<i>Approximately at 12:40pm, an event was recorded that affects the service island-wide.</i>”</p><p>Although the reported power outage was “massive” and “island-wide”, it did not have an outsized impact on <a href="https://radar.cloudflare.com/traffic/pr?dateStart=2025-04-13&amp;dateEnd=2025-04-19#traffic-trends">Puerto Rico’s Internet traffic</a>, which <a href="https://bsky.app/profile/radar.cloudflare.com/post/3lmxq2gfxtg2c"><u>initially dropped by about 40%</u></a>. Over the next several days, both companies published multiple updates to their X accounts detailing the progress being made in restoring service. By 15:00 UTC (11:00 local time) on April 18, traffic had returned to expected levels, in line with a post from Luma Energy that noted (translated) “<i>As of 10:00 a.m. on April 18, and thanks to LUMA’s extraordinary response and the tireless efforts of the island’s workforce—in coordination with the Puerto Rico government and generating companies—LUMA has restored electric service to 1,450,367 customers, representing 98.8% of total customers, in less than 38 hours since the island-wide outage began.</i>”</p><p>As seen in the graphs below, the power outage not only impacted end-user connectivity, driving the observed drop in traffic, but also had some impact on local Internet infrastructure, with some disturbance visible to <a href="https://radar.cloudflare.com/routing/pr?dateStart=2025-04-13&amp;dateEnd=2025-04-19#announced-ip-address-space">announced IP address space</a>.</p>
    <div>
      <h3>Saint Kitts and Nevis</h3>
      <a href="#saint-kitts-and-nevis">
        
      </a>
    </div>
    <p>A <a href="https://www.facebook.com/skelecltd/posts/pfbid09PDXSuw7U9X3V83rvUSz7kLGnL77bqwstYgKXbkRZQQPeGDCw2pffiP1nRkRsEAxl"><u>Facebook post</u></a> from <a href="https://www.skelec.kn/"><u>SKELEC (The St. Kitts Electricity Company)</u></a> on May 9 alerted customers on <a href="https://radar.cloudflare.com/traffic/kn?dateStart=2025-05-09&amp;dateEnd=2025-05-09#traffic-trends"><u>St. Kitts and Nevis</u></a> that “<i>...a fault developed at our Needsmust Power Plant resulting in an island wide outage. Restoration has begun, and complete restoration will be in two hours.</i>” The post was published at 17:31 UTC (13:31 local time), approximately 30 minutes after the island’s Internet traffic initially dropped. Traffic recovery initially began around 17:45 UTC (13:45 local time), well within the two-hour estimate for complete power restoration. However, Internet traffic did not fully return to expected levels until 20:15 UTC (16:15 local time).</p>
    <div>
      <h3>North Macedonia</h3>
      <a href="#north-macedonia">
        
      </a>
    </div>
    <p>On May 18, it was <a href="https://seenews.com/news/voltage-spike-causes-power-outage-in-north-macedonia-1275427"><u>reported</u></a> that “<i>High voltages in the regional 400 kV network amid low consumption caused a short-term outage in </i><a href="https://radar.cloudflare.com/traffic/mk?dateStart=2025-05-18&amp;dateEnd=2025-05-18#traffic-trends"><i><u>North Macedonia</u></i></a><i>'s 110 kV transmission network…</i>”, according to state-owned power company <a href="https://en.wikipedia.org/wiki/MEPSO"><u>MEPSO</u></a>. While the outage reportedly impacted most of the country, MEPSO also noted that the country’s power supply was normalized within an hour after the outage began. Although brief, the power outage caused the country’s Internet traffic to drop by nearly 60% as compared to the previous week during the disruption, which occurred between 03:00 - 04:45 UTC (05:00 - 06:45 local time).</p>
    <div>
      <h3>Maldives</h3>
      <a href="#maldives">
        
      </a>
    </div>
    <p>On June 1, Internet traffic in the <a href="https://radar.cloudflare.com/traffic/mv?dateStart=2025-06-01&amp;dateEnd=2025-06-01#traffic-trends"><u>Maldives</u></a> dropped by nearly half as compared to the previous week when a <a href="https://mvrepublic.com/news/widespread-power-outage-causes-disruptions-across-greater-male/"><u>widespread power outage</u></a> affected the Greater Malé region. Local Internet service providers including <a href="https://x.com/OoredooMaldives/status/1929108987187970176"><u>Ooredoo</u></a> and <a href="https://x.com/Dhiraagu/status/1929095794659103186"><u>Dhiraagu</u></a> took to social media to warn subscribers of potential interruptions to both fixed and mobile broadband connections. At a country level, Internet traffic was disrupted between 07:30 - 13:00 UTC (12:30 - 18:00 local time).</p><p>The power outage also had a modest impact on Internet infrastructure, as <a href="https://radar.cloudflare.com/routing/mv?dateStart=2025-06-01&amp;dateEnd=2025-06-01#announced-ip-address-space">announced IPv4 address space</a> saw a nominal drop (from 355 to 350 /24s) that began shortly after the initial drop in traffic was observed, but returned to normal as the disruption ended.</p>
    <div>
      <h3>Curaçao</h3>
      <a href="#curacao">
        
      </a>
    </div>
    <p>A near-complete Internet outage at provider <a href="https://radar.cloudflare.com/traffic/as52233?dateStart=2025-06-14&amp;dateEnd=2025-06-15"><u>Flow Curaçao (AS52233)</u></a> on June 14-15 <a href="https://www.curacaochronicle.com/post/opinion/flows-internet-outage-sparks-outrage-and-urgent-call-for-infrastructure-reform/"><u>sparked outrage</u></a> and <a href="https://www.curacaochronicle.com/post/local/curacaos-telecom-regulator-demands-answers-from-flow-after-major-internet-outage/"><u>demands for answers</u></a> from the country’s telecommunications regulator. Flow’s Internet traffic dropped significantly at 18:00 UTC (14:00 local time) on June 14, falling further in the following hours. Signs of recovery became visible around 11:00 UTC (07:00 local time) on June 15, with more complete recovery occurring at 14:00 UTC (10:00 local time). A <a href="https://www.facebook.com/FlowBarbados/posts/pfbid02iGV1LYdNajMprF8Anvgh3KzMZbc2k9BVbVdN4C8mrVZDdcoUEhiib23TQYgisrAxl"><u>Facebook post from Flow Barbados</u></a>, published on June 18, referenced a local disruption that began on June 14, but pointed to a commercial power outage at one of their key regional network facilities in Curaçao, which was likely the driver of this Internet outage.</p>
    <div>
      <h2>Fiber optic cable damage</h2>
      <a href="#fiber-optic-cable-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Digicel Haiti</h3>
      <a href="#digicel-haiti">
        
      </a>
    </div>
    <p>Two instances of damage to its fiber optic infrastructure caused a complete Internet outage at <a href="https://radar.cloudflare.com/traffic/as27653?dateStart=2025-05-28&amp;dateEnd=2025-05-29#http-traffic"><u>Digicel Haiti (AS27653)</u></a> beginning at 21:00 UTC (17:00 local time) on May 28, according to a (translated) <a href="https://x.com/jpbrun30/status/1927845676408258762"><u>X post</u></a> from the company’s Director General. The cable damage took the network completely off the Internet, as <a href="https://radar.cloudflare.com/routing/as27653?dateStart=2025-05-28&amp;dateEnd=2025-05-29#announced-ip-address-space">announced IPv4 and IPv6 address space</a> also dropped to zero. Digicel Haiti remained offline until 00:45 UTC on May 29 (20:45 local time on May 28), when both traffic and announced IP address space returned to expected levels.</p>
    <div>
      <h3>Airtel Malawi</h3>
      <a href="#airtel-malawi">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/traffic/as37440?dateStart=2025-06-24&amp;dateEnd=2025-06-24#traffic-trends"><u>Airtel Malawi (AS37440)</u></a> experienced a 90-minute Internet outage on June 24, <a href="https://x.com/AirtelMalawiPlc/status/1937591557684916436"><u>caused by ongoing vandalism on their fiber network</u></a>. Although traffic effectively disappeared between 12:30 - 14:00 UTC (14:30 - 16:00 local time), the network remained at least partially online as at least some of the network’s <a href="https://radar.cloudflare.com/routing/as37440?dateStart=2025-06-24&amp;dateEnd=2025-06-24#announced-ip-address-space">IPv4 address space</a> continued to be announced to the Internet.  Announced IPv6 address space, however, fell to zero during the duration of the outage.</p>
    <div>
      <h2>Technical problems</h2>
      <a href="#technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Bell Canada</h3>
      <a href="#bell-canada">
        
      </a>
    </div>
    <p>A <a href="https://x.com/Bell_Support/status/1925225503507591222"><u>router update</u></a> gone awry disrupted Internet service for <a href="https://radar.cloudflare.com/traffic/as577?dateStart=2025-05-21&amp;dateEnd=2025-05-21#http-traffic"><u>Bell Canada (AS577)</u></a> customers in Ontario and Quebec on May 21. An <a href="https://x.com/Bell_Support/status/1925187984543883486"><u>initial X post from the provider</u></a>, posted at 13:52 UTC (09:52 local time), alerted customers to the service interruption. The post trailed the start of the disruption by approximately a half hour, as traffic dropped around 13:15 UTC (09:15 local time), falling by as much as 70% as compared to the same time a week prior. Request traffic to Cloudflare’s <a href="https://radar.cloudflare.com/dns/as577?dateStart=2025-05-21&amp;dateEnd=2025-05-21#dns-query-volume"><u>1.1.1.1 DNS Resolver</u></a> also saw a significant drop. A negligible decline in <a href="https://radar.cloudflare.com/routing/as577?dateStart=2025-05-21&amp;dateEnd=2025-05-21#announced-ip-address-space"><u>announced IPv4 address space</u></a> was also observed.</p><p>The disruption was fairly short-lived, with traffic returning to expected levels just an hour later. A subsequent <a href="https://x.com/Bell_Support/status/1925225503507591222"><u>X post</u></a> confirmed that services had been fully restored by 15:00 UTC (11:00 local time), with <a href="https://x.com/Bell_Support/status/1925225526198776050"><u>another post</u></a> noting that the initial update had been rolled back quickly to restore service. </p>
    <div>
      <h3>Lumen/CenturyLink </h3>
      <a href="#lumen-centurylink">
        
      </a>
    </div>
    <p>Across parts of the <a href="https://radar.cloudflare.com/traffic/us"><u>United States</u></a>, <a href="https://radar.cloudflare.com/traffic/as209?dateStart=2025-06-19&amp;dateEnd=2025-06-20#traffic-trends"><u>Lumen/CenturyLink (AS209)</u></a> customers experienced a widespread Internet service disruption on June 19. Traffic volumes dropped by over 50% as compared to the prior week starting around 21:45 UTC. The disruption lasted only a couple of hours, with traffic returning to normal by 00:00 UTC on June 20.</p><p>Social media posts from affected subscribers suggested that the problem might have been DNS related, as those who switched their DNS resolver to Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1</u></a> were once again able to access the Internet. The graph below shows that <a href="https://radar.cloudflare.com/explorer?dataSet=dns&amp;loc=as209&amp;dt=2025-06-19_2025-06-20&amp;timeCompare=2025-06-12#result">traffic to 1.1.1.1 from Lumen/CenturyLink</a> exceeded levels seen the previous week as the disruption began, and remained elevated through June 20. Problems with an Internet service provider’s DNS resolver can appear to subscribers like an Internet outage, as they become unable to access anything requiring a DNS lookup (effectively, all Internet resources), ultimately resulting in a drop in traffic to those resources (from the affected user base), as seen in the graph above.</p>
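    <p>The workaround those subscribers used is easy to verify from an affected machine: resolve the same name through the system’s default resolver and through 1.1.1.1, then compare. A small sketch using the third-party dnspython package (equivalent dig commands work just as well):</p>
    <pre><code># Compare name resolution through the system's default resolver against
# an explicit query to Cloudflare's 1.1.1.1. If only the latter answers,
# the "outage" is likely the ISP's resolver, not the network itself.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def try_resolve(hostname, nameserver=None):
    resolver = dns.resolver.Resolver()
    if nameserver:
        resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(hostname, "A", lifetime=3)
        return [record.to_text() for record in answer]
    except Exception as exc:
        return f"failed: {exc!r}"

print("default resolver:", try_resolve("example.com"))
print("via 1.1.1.1:", try_resolve("example.com", "1.1.1.1"))
</code></pre>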
    <div>
      <h2>Cyberattack impact</h2>
      <a href="#cyberattack-impact">
        
      </a>
    </div>
    
    <div>
      <h3>ASVT (Russia)</h3>
      <a href="#asvt-russia">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/traffic/ru"><u>Russian</u></a> Internet provider <a href="https://radar.cloudflare.com/traffic/as8752?dateStart=2025-05-28&amp;dateEnd=2025-06-05#traffic-trends"><u>ASVT (AS8752)</u></a> was <a href="https://therecord.media/moscow-internet-provider-asvt-ddos-attack"><u>reportedly</u></a> targeted by a major DDoS attack that resulted in a multi-day complete Internet outage. This attack followed one <a href="https://blog.cloudflare.com/q1-2025-internet-disruption-summary/#russia"><u>targeting Russian provider Nodex</u></a> (<a href="https://radar.cloudflare.com/traffic/AS29329"><u>AS29329</u></a>) in March, which also caused a complete service outage. <a href="https://tadviser.com/index.php/Company:ASVT"><u>Reaching</u></a> 70.07 Gbps/6.92 million packets/second, the attack caused traffic to drop to near zero around 05:00 UTC on May 28 (08:00 Moscow time), with the effective outage lasting for approximately 10 hours. Although traffic began to return around 15:00 UTC (18:00 Moscow time), it remained below expected levels throughout the following week.</p><p>Interestingly, <a href="https://radar.cloudflare.com/dns/as8752?dateStart=2025-05-28&amp;dateEnd=2025-06-05#dns-query-volume">query volume to Cloudflare’s 1.1.1.1 DNS Resolver from ASVT</a> saw a rapid increase as traffic began to return after the initial outage, and remained elevated throughout the duration of the disruption. It isn’t clear whether the increase could be related to problems with ASVT’s native DNS resolver during the attack, forcing users to seek alternative resolvers, or if it could be related to ASVT subscribers seeking ways around damage from the attack.</p>
    <div>
      <h2>Unexplained disruptions</h2>
      <a href="#unexplained-disruptions">
        
      </a>
    </div>
    
    <div>
      <h3>Telia Finland (April 1)</h3>
      <a href="#telia-finland-april-1">
        
      </a>
    </div>
    <p>According to a (now unavailable) <a href="https://www.telia.fi/asiakastuki/hairiotiedote?id=sabre_858055150&amp;lang=fi"><u>“Disturbance bulletin”</u></a> and an <a href="https://x.com/teliafinland/status/1906966248790868230"><u>associated X post</u></a> from <a href="https://radar.cloudflare.com/traffic/as1759?dateStart=2025-04-01&amp;dateEnd=2025-04-01#traffic-trends"><u>Telia Finland (AS1759)</u></a>, the company acknowledged that “<i>A widespread disruption has been detected in the operation of mobile network data connections and fixed broadband.</i>” The widespread disruption resulted in a brief near-complete outage for subscribers between 06:30 - 07:15 UTC (09:30 - 10:15 local time).</p><p>Telia Finland did not disclose the cause of the disruption, but it is clear that it impacted IPv4 connectivity, as seen in the graph below showing <a href="https://radar.cloudflare.com/routing/as1759?dateStart=2025-04-01&amp;dateEnd=2025-04-01#announced-ip-address-space">announced IPv4 address space</a>. (<a href="https://radar.cloudflare.com/routing/as1759?dateStart=2025-04-01&amp;dateEnd=2025-04-01#announced-ip-address-space"><u>Announced IPv6 address space</u></a> did not see any change.) This loss of IPv4 connectivity resulted in a concurrent spike in the share of traffic from Telia Finland over IPv6 — normally below 5%, it spiked above 30% during the disruption. Request traffic <a href="https://radar.cloudflare.com/dns/as1759?dateStart=2025-04-01&amp;dateEnd=2025-04-01#dns-query-volume"><u>to Cloudflare’s 1.1.1.1 resolver from Telia Finland</u></a> also spiked at that time.</p>
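    <p>The “announced IPv4 address space” metric referenced here and throughout this post can be thought of as the total number of addresses covered by a network’s advertised BGP prefixes. The minimal Python sketch below illustrates that calculation using hypothetical documentation prefixes, and assumes the announced prefix list has already been deduplicated and aggregated.</p>
    <pre><code># Minimal sketch: count the IPv4 addresses covered by a set of announced
# prefixes. Assumes the list is already deduplicated and aggregated;
# real BGP data needs that cleanup first. The prefixes are documentation
# ranges used purely for illustration.
import ipaddress

def announced_ipv4_space(prefixes):
    return sum(ipaddress.ip_network(p).num_addresses for p in prefixes)

before = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"]
after = ["203.0.113.0/24"]  # hypothetical post-withdrawal snapshot

b, a = announced_ipv4_space(before), announced_ipv4_space(after)
print(f"before: {b} addresses, after: {a} addresses")
print(f"drop: {1 - a / b:.0%}")
</code></pre>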
    <div>
      <h3>SkyCable</h3>
      <a href="#skycable">
        
      </a>
    </div>
    <p>Around 19:15 UTC on May 7 (03:15 local time on May 8), subscribers of <a href="https://radar.cloudflare.com/traffic/as23944?dateStart=2025-05-07&amp;dateEnd=2025-05-08#traffic-trends"><u>SkyCable (AS23944)</u></a> in the <a href="https://radar.cloudflare.com/traffic/ph"><u>Philippines</u></a> experienced a complete Internet outage. Internet traffic from the network dropped to zero, as did <a href="https://radar.cloudflare.com/routing/as23944?dateStart=2025-05-07&amp;dateEnd=2025-05-08#announced-ip-address-space">announced IPv4 address space</a>. The disruption lasted until 03:00 UTC on May 8 (11:00 local time), and SkyCable did not publish any information regarding the cause of the eight-hour service outage.</p>
    <div>
      <h3>TrueMove H</h3>
      <a href="#truemove-h">
        
      </a>
    </div>
    <p>On May 22, <a href="https://radar.cloudflare.com/traffic/th"><u>Thai</u></a> mobile provider <a href="https://radar.cloudflare.com/traffic/as132061?dateStart=2025-05-22&amp;dateEnd=2025-05-22#http-traffic"><u>TrueMove H (AS132061)</u></a> <a href="https://www.kaohooninternational.com/markets/558192"><u>suffered a nationwide outage</u></a>, impacting connectivity for subscribers. The provider <a href="https://www.nationthailand.com/news/general/40050305"><u>acknowledged and apologized</u></a> for the disruption, but did not provide an official reason for the outage. (An <a href="https://www.nationthailand.com/news/general/40050309"><u>article</u></a> in the local press reported “<i>that the outage was caused by technical errors on True’s computer servers</i>” and also stated that others suggested that “<i>the problem might have been caused by an error on True’s DNS servers</i>”.)</p><p>At 03:00 UTC (10:00 local time), traffic initially dropped by over 80% as compared to the prior week. Almost immediately, traffic began to slowly recover, and returned to expected levels around 08:00 UTC (15:00 local time). A brief <a href="https://radar.cloudflare.com/routing/as132061?dateStart=2025-05-22&amp;dateEnd=2025-05-22#announced-ip-address-space"><u>partial drop in announced IPv4 address space was also observed</u></a> during the first hour of the disruption.</p>
    <div>
      <h3>Digicel Haiti</h3>
      <a href="#digicel-haiti">
        
      </a>
    </div>
    <p>Two days after experiencing <a href="#fiber-optic-cable-damage"><u>an outage due to cable damage</u></a>, <a href="https://radar.cloudflare.com/traffic/as27653?dateStart=2025-05-30&amp;dateEnd=2025-05-30#traffic-trends"><u>Digicel Haiti (AS27653)</u></a> experienced another complete outage on May 30. In contrast to the previous outage, no additional information about this one was published on social media by <a href="https://x.com/DigicelHT"><u>Digicel Haiti</u></a> or its <a href="https://x.com/jpbrun30"><u>Director General</u></a>. The network effectively disappeared from the Internet at 14:15 UTC (10:15 local time), with both traffic and <a href="https://radar.cloudflare.com/routing/as27653?dateStart=2025-05-30&amp;dateEnd=2025-05-30#announced-ip-address-space">announced IP address space</a> (IPv4 &amp; IPv6) dropping to zero. The outage lasted nearly three hours, with both traffic and announced IP space returning around 17:00 UTC (13:00 local time).</p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>On June 10, an Internet outage in <a href="https://radar.cloudflare.com/traffic/sy?dateStart=2025-06-10&amp;dateEnd=2025-06-10#traffic-trends"><u>Syria</u></a> <a href="https://www.profilenews.com/en/breaking-internet-outage-in-syria/"><u>reportedly</u></a> affected the ADSL landline network across multiple provinces. At 08:15 UTC (11:15 local time), traffic dropped by as much as two-thirds as compared to the same time the previous week, with the disruption lasting two hours. <a href="https://radar.cloudflare.com/routing/sy?dateStart=2025-06-10&amp;dateEnd=2025-06-10#announced-ip-address-space">Announced IPv4 address space</a> also fell during the course of the outage, indicating a potential infrastructure issue. However, as seen below, <a href="https://radar.cloudflare.com/dns/sy?dateStart=2025-06-10&amp;dateEnd=2025-06-10#dns-query-volume">request volume from Syria to Cloudflare’s 1.1.1.1 DNS resolver</a> was also elevated during the outage. This behavior has been observed in the past during government-directed shutdowns of Internet connectivity in Syria, when traffic can leave the country, but not return. There was no other indication that this outage was due to an intentional shutdown, but no official explanation for the disruption was available.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Government-directed Internet shutdowns returned with a vengeance in the second quarter, and that trend has continued into the third quarter, though the latest shutdowns have been exam-related rather than protest-driven. And while power outage-related Internet disruptions have frequently been observed in the past, often in smaller countries with less stable infrastructure, the massive outage in Spain and Portugal on April 28 reminds us that, much like the Internet, electrical infrastructure is often interconnected across countries, meaning that problems in one can cause significant problems in others.</p><p>The Cloudflare Radar team is constantly monitoring for Internet disruptions, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via email.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">37sa5eHdRj16s4vvvhEDGY</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[New year, no shutdowns: the Q1 2025 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q1-2025-internet-disruption-summary/</link>
            <pubDate>Tue, 22 Apr 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ In Q1 2025, we observed Internet disruptions around the world caused by cable damage, power outages, natural disasters, fire, a cyberattack, and technical problems. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s <a href="https://www.cloudflare.com/network/"><u>network</u></a> spans more than 330 cities in over 125 countries, where we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions at both a local and national level, as well as at a network level.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>. Note that both bytes-based and request-based traffic graphs are used within the post to illustrate the impact of the observed disruptions — the choice of metric was generally made based on which better illustrated the impact of the disruption.</p><p>In the first quarter of 2025, we observed a significant number of Internet disruptions due to <a href="#submarine-and-terrestrial-cable-damage"><u>cable damage</u></a> and <a href="#widespread-power-outages"><u>power outages</u></a>. <a href="#severe-weather"><u>Severe storms</u></a> caused outages in Ireland and Réunion, and an <a href="#earthquake"><u>earthquake</u></a> caused ongoing connectivity issues in Myanmar. Russian networks were taken offline by a reported <a href="#cyberattack"><u>cyberattack</u></a> and purported <a href="#technical-problems"><u>technical problems</u></a>, while a <a href="#fire-damage"><u>fire</u></a> took a telecom provider in Haiti offline briefly. In Q4 2024, we observed only a <a href="https://blog.cloudflare.com/q4-2024-internet-disruption-summary/#government-directed"><u>single government-directed Internet shutdown</u></a>, and this quarter, <b>no such shutdowns were observed</b>. Unfortunately, this is an unusual occurrence, and in the three-year history of this blog post series, has only occurred previously in <a href="https://blog.cloudflare.com/q4-2023-internet-disruption-summary/#governmentdirected"><u>Q4 2023</u></a> and <a href="https://blog.cloudflare.com/q1-2022-internet-disruption-summary/"><u>Q1 2022</u></a>. </p>
    <div>
      <h2>Submarine and terrestrial cable damage</h2>
      <a href="#submarine-and-terrestrial-cable-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>Just after the new year, Internet connectivity in <a href="https://radar.cloudflare.com/pk"><u>Pakistan</u></a> was disrupted by a fault in the <a href="https://www.submarinecablemap.com/submarine-cable/asia-africa-europe-1-aae-1"><u>AAE-1 submarine cable</u></a>. According to a January 2 <a href="https://www.facebook.com/PTAOfficialPK/posts/pfbid02M2zM4MyFVAKWcHeg7WDKKTdwxF7PahWsYRW2W7WPLywp4dpLgS8m8ACUp5wtJJp6l"><u>alert</u></a> published on social media by the Pakistan Telecommunications Authority, the cable fault occurred near <a href="https://radar.cloudflare.com/qa"><u>Qatar</u></a>, and would likely impact user experience across the country. Because there are seven submarine cables carrying international Internet traffic to/from Pakistan, the loss of AAE-1 did not cause an observable outage. However, the impact of the disruption was visible in the <a href="https://radar.cloudflare.com/quality/pk?dateStart=2025-01-02&amp;dateEnd=2025-01-04#bandwidth"><u>bandwidth</u></a> and <a href="https://radar.cloudflare.com/quality/pk?dateStart=2025-01-02&amp;dateEnd=2025-01-04#latency"><u>latency</u></a> graphs for Pakistan. On January 2 and 3, median latency peaked at around 125 ms, up from a pre-disruption median of approximately 80 ms. Concurrent drops in bandwidth were observed, with median download speeds dropping to around 6 Mbps from a pre-disruption median of around 9 Mbps. In an <a href="https://www.instagram.com/p/DEVff2gTxw9/"><u>“Important Update”</u></a> posted to their Instagram account, <a href="https://radar.cloudflare.com/AS17557"><u>Pakistan Telecom (PTCL, AS17557)</u></a> also highlighted the potential for “slow browsing”; the <a href="https://radar.cloudflare.com/quality/as17557?dateStart=2025-01-02&amp;dateEnd=2025-01-04"><u>Internet Quality graphs for that network</u></a> show similarly timed shifts in median bandwidth and latency.</p><p><a href="https://www.submarinecablemap.com/country/pakistan"><u>Pakistan</u></a> is currently connected to seven submarine cables, with two additional connections on the way in 2026. This connection diversity means that damage to or an issue with one cable will likely have minimal impact on Internet availability within the country, as traffic can be re-routed across other paths.</p>
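    <p>Shifts like these can also be explored programmatically via the Cloudflare Radar API. The Python sketch below is a hedged example of fetching a traffic timeseries for PTCL; the endpoint path, parameter names, and response fields shown are assumptions to verify against the current API documentation.</p>
    <pre><code># Hedged sketch: fetch a traffic timeseries for a network from the
# Cloudflare Radar API. The endpoint path, parameters, and response
# fields below are assumptions based on the public API docs; verify
# them (and supply a real API token) before relying on this.
import requests

URL = "https://api.cloudflare.com/client/v4/radar/netflows/timeseries"
TOKEN = "YOUR_API_TOKEN"  # placeholder

params = {
    "asn": "17557",  # PTCL, the network discussed above
    "dateStart": "2025-01-02T00:00:00Z",
    "dateEnd": "2025-01-04T00:00:00Z",
    "aggInterval": "1h",
    "format": "json",
}
resp = requests.get(URL, params=params, timeout=30,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
series = resp.json()["result"]["serie_0"]
print(len(series["timestamps"]), "hourly points retrieved")
</code></pre>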

    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>According to an <a href="https://www.moct.gov.sy/news-0251"><u>announcement</u></a> from the Syrian Ministry of Communications, a widespread Internet outage spanning January 23-24 was caused by sabotage that damaged two fiber optic cables that run along the highway between Damascus and Homs. The graphs below show that both <a href="https://radar.cloudflare.com/traffic/sy?dateStart=2025-01-23&amp;dateEnd=2025-01-24#http-traffic"><u>HTTP</u></a> and <a href="https://radar.cloudflare.com/dns/sy?dateStart=2025-01-23&amp;dateEnd=2025-01-24#dns-query-volume"><u>DNS</u></a> request traffic from <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> dropped to near zero between 00:30 and 03:30 local time on January 24 (21:30 on January 23 - 00:30 on January 24 UTC). Traffic began recovering shortly thereafter, and returned to expected levels by 09:00 local time (06:00 UTC). <a href="https://radar.cloudflare.com/routing/sy?dateStart=2025-01-23&amp;dateEnd=2025-01-24"><u>Announced IPv4 address space for the country</u></a>, almost exclusively from <a href="https://radar.cloudflare.com/routing/as29256?dateStart=2025-01-23&amp;dateEnd=2025-01-24"><u>Syria Telecom (AS29256)</u></a>, also saw an approximately 90% drop during this period, which suggests that these fiber cuts caused a significant amount of Syria Telecom’s network to become unreachable during the incident.</p><p>Echoing the disruption above, <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> experienced another Internet outage on March 25, again caused by sabotage that damaged fiber optic cables. According to an <a href="https://moct.gov.sy/news-0279"><u>announcement</u></a> from the Syrian Ministry of Communications, the damage occurred in the Maaloula and Hasiya regions, resulting in a near complete outage between 03:00 - 13:15 local time (00:00 - 10:15 UTC). Similar to the January outage, the graphs below show a near complete loss of <a href="https://radar.cloudflare.com/traffic/sy?dateStart=2025-03-24&amp;dateEnd=2025-03-25#http-traffic"><u>HTTP</u></a> request traffic and a significant loss of <a href="https://radar.cloudflare.com/routing/sy?dateStart=2025-03-24&amp;dateEnd=2025-03-25#announced-ip-address-space"><u>announced IPv4 address space</u></a>.</p><p>Somewhat paradoxically, <a href="https://radar.cloudflare.com/dns/sy?dateStart=2025-03-24&amp;dateEnd=2025-03-25#dns-query-volume"><u>DNS request volume from Syria</u></a> was elevated during this outage, in contrast to the behavior observed during the January event. It isn’t clear what drove the additional traffic to Cloudflare’s <a href="https://one.one.one.one/dns/"><u>1.1.1.1 DNS resolver</u></a> in this case.</p>
    <div>
      <h3>Nepal</h3>
      <a href="#nepal">
        
      </a>
    </div>
    <p>In early February, several Internet providers in <a href="https://radar.cloudflare.com/np"><u>Nepal</u></a> saw services disrupted when Indian provider <a href="https://radar.cloudflare.com/as9498"><u>Bharti Airtel (AS9498)</u></a> went offline. <a href="https://radar.cloudflare.com/as23752"><u>AS23752 (Nepal Telecom)</u></a>, <a href="https://radar.cloudflare.com/as17501"><u>AS17501 (Worldlink Communications)</u></a>, <a href="https://radar.cloudflare.com/as139922"><u>AS139922 (Dishhome Fibernet)</u></a>, <a href="https://radar.cloudflare.com/as45650"><u>AS45650 (Vianet Communications)</u></a>, and <a href="https://radar.cloudflare.com/as38565"><u>AS38565 (Ncell)</u></a>, all of which have Airtel as an upstream provider or peer, saw traffic disrupted between 21:00 - 22:30 local time (15:15 - 16:45 UTC) on February 2.</p><p>Published reports disagree on the underlying cause of the Airtel issue, with one source claiming that it was related to an <a href="https://english.onlinekhabar.com/nationwide-internet-outage-in-nepal-after-payment-dispute-with-indian-bandwidth-provider.html"><u>ongoing payment dispute</u></a>, while another claims that it was due to reported <a href="https://kathmandupost.com/money/2025/02/04/nationwide-internet-outage-raises-concerns-over-outstanding-dues"><u>fiber cuts</u></a> in Airtel’s network.</p>
    <div>
      <h2>Widespread power outages</h2>
      <a href="#widespread-power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Angola</h3>
      <a href="#angola">
        
      </a>
    </div>
    <p><a href="https://angop.ao/en/noticias/economia/restabelecida-energia-electrica-nas-11-provincias-afectadas-pelo-apagao/"><u>Eleven provinces</u></a> in <a href="https://radar.cloudflare.com/ao"><u>Angola</u></a> lost electrical power on January 6 <a href="https://x.com/portalangop/status/1876328913070432710"><u>due to</u></a> an interruption in the North and Center Interconnected System, according to the <a href="http://www.rnt.co.ao/"><u>National Electricity Transmission Network (RNT)</u></a>. The widespread power outage disrupted Internet connectivity across the country, leading to a drop in traffic between 14:45 - 22:00 local time (13:45 - 21:00 UTC). <a href="https://angop.ao/en/noticias/economia/restabelecida-energia-electrica-nas-11-provincias-afectadas-pelo-apagao/"><u>Published reports</u></a> said that RNT was investigating the cause of the power outage, but no subsequent information was available confirming a specific cause.</p>
    <div>
      <h3>Sri Lanka</h3>
      <a href="#sri-lanka">
        
      </a>
    </div>
    <p>Monkey business at the Panadura electrical substation caused an island-wide power outage in <a href="https://radar.cloudflare.com/lk"><u>Sri Lanka</u></a> on February 9. More seriously, <a href="https://www.barrons.com/news/sri-lankan-monkey-causes-nationwide-blackout-2c3cc8ac"><u>a monkey coming into contact with a grid transformer</u></a> caused the power outage, which resulted in a multi-hour disruption to Internet traffic from the country. Traffic initially dropped around 11:30 local time (06:00 UTC), and recovered by around 21:00 local time (15:30 UTC). The graph below for <a href="https://radar.cloudflare.com/as18001"><u>AS18001 (Dialog)</u></a>, a major Sri Lankan network services provider, illustrates the impact on traffic.</p>
    <div>
      <h3>Chile</h3>
      <a href="#chile">
        
      </a>
    </div>
    <p>On February 25, a <a href="https://www.clarin.com/mundo/masivo-corte-luz-chile-afecta-80-ciento-pais_0_JxZR7Py3kb.html"><u>massive power outage</u></a> in <a href="https://radar.cloudflare.com/cl"><u>Chile</u></a> <a href="https://www.cnnchile.com/pais/chile-a-oscuras-mas-de-19-millones-de-clientes-sin-electricidad-equivalente-al-985-del-pais_20250225/"><u>reportedly impacted</u></a> 98.5% of the country. A <a href="https://www.cnnchile.com/pais/autoridades-informan-corte-luz-nacional-debe-desconexion-sistema-transmision-norte-chico_20250225/"><u>published report</u></a> noted that there was an interruption in the power supply from Arica to the Los Lagos region, caused by a disconnection of the 500 kV transmission system in the Norte Chico. The power outage resulted in an immediate and significant drop in Internet traffic, as seen at both a country level and a network level, as shown in the graphs below. Traffic initially fell at around 14:15 local time (18:15 UTC) and recovered to expected levels approximately 12 hours later, around 02:00 local time (06:00 UTC). It was <a href="https://www.cnnchile.com/pais/gobierno-informa-mas-94-clientes-pais-ya-tienen-suministro-electrico_20250226/"><u>reported</u></a> that, an hour after traffic had recovered, approximately 94% of customers had power restored.</p>
    <div>
      <h3>Honduras</h3>
      <a href="#honduras">
        
      </a>
    </div>
    <p>A ground fault at the 15 de Septiembre electrical substation in <a href="https://radar.cloudflare.com/sv"><u>El Salvador</u></a> was <a href="https://www.elheraldo.hn/honduras/descartan-honduras-origen-apagon-afecto-region-erick-tejada-KD24633485"><u>reportedly</u></a> the cause of a power outage that resulted in a multi-hour Internet disruption in <a href="https://radar.cloudflare.com/hn"><u>Honduras</u></a> on March 1. The Regional Operator Entity (OER) <a href="https://www.instagram.com/protocinetico/p/DGrLkEqzWEP/"><u>stated</u></a> that the failure occurred at 09:22 local time (15:22 UTC), which resulted in traffic from the country dropping by about half. The disruption to Internet connectivity was relatively short-lived, as traffic returned to expected levels approximately two hours later.</p>
    <div>
      <h3>Cuba</h3>
      <a href="#cuba">
        
      </a>
    </div>
    <p>According to an <a href="https://x.com/EnergiaMinasCub/status/1900714003623506167"><u>X post from @EnergiaMinasCub</u></a> (the Cuban state agency responsible for promoting the sustainable development of the country's energy, geological, and mining sectors), at around 20:15 local time on March 14 (00:15 UTC on March 15) “<i>a failure at the Diezmero substation caused a significant loss of generation in the west of #Cuba and with it the failure of the National Electric System, SEN</i>”. This widespread power outage resulted in an immediate drop in request traffic from <a href="https://radar.cloudflare.com/cu"><u>Cuba</u></a>. Over the following two days, X posts from @EnergiaMinasCub, <a href="https://x.com/OSDE_UNE"><u>@OSDE_UNE</u></a> (the Cuban Electric Union), and <a href="https://x.com/ETECSA_Cuba"><u>@ETECSA_Cuba</u></a> (the Cuban Telecommunications Company) kept impacted subscribers apprised of the status of ongoing repairs. Traffic returned to expected levels around 20:00 local time on March 16 (00:00 on March 17 UTC), two full days after the incident began.</p>
    <div>
      <h3>Panama</h3>
      <a href="#panama">
        
      </a>
    </div>
    <p>An explosion and fire at the La Chorrera Thermoelectric Power Plant in <a href="https://radar.cloudflare.com/pa"><u>Panama</u></a> caused a <a href="https://www.msn.com/en-in/news/world/blackout-in-panama-after-massive-fire-at-power-plant-water-supply-hit-too/ar-AA1B0vOv"><u>massive power outage</u></a> across the country, starting at 23:40 local time on March 15 (04:40 on March 16 UTC). As expected, traffic dropped immediately, as seen in the HTTP and DNS request graphs below. However, recovery was fairly swift, as the <a href="https://x.com/PanamaAmerica/status/1901288375778246675"><u>electric system saw 75% recovery</u></a> by 03:00 local time (08:00 UTC), with full restoration <a href="https://x.com/Etesatransmite/status/1901234515999174723"><u>completed</u></a> at 06:08 local time (11:08 UTC). Traffic volumes began to increase after power was restored.</p>
    <div>
      <h2>Severe weather</h2>
      <a href="#severe-weather">
        
      </a>
    </div>
    
    <div>
      <h3>Ireland</h3>
      <a href="#ireland">
        
      </a>
    </div>
    <p><a href="https://en.wikipedia.org/wiki/Storm_%C3%89owyn"><u>Storm Éowyn</u></a> <a href="https://www.theguardian.com/uk-news/live/2025/jan/24/storm-eowyn-uk-weather-scotland-ireland-warning-winds-live-updates"><u>wreaked havoc</u></a> on <a href="https://radar.cloudflare.com/ie"><u>Ireland</u></a> in late January, knocking out <a href="https://transformers-magazine.com/tm-news/180000-without-power-in-ireland/"><u>power and water</u></a>, causing property damage, and limiting air and train travel. The storm’s impacts also disrupted Internet connectivity, as <a href="https://x.com/CloudflareRadar/status/1882851611749626219"><u>we observed</u></a> traffic from Connacht and Ulster fall by 75% as compared to the previous week at 06:30 local time (06:30 UTC) on January 24. As recovery from the storm progressed over the next several days, Internet traffic gradually recovered as well, with traffic in the two provinces reaching levels near those seen the prior week by mid-day on January 28.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sXx1CEVuZIP0pDyKXH0xY/2f5d256f5967eca3debcfbe59dc8b42f/GiY1qXQXcAADgKl.jpeg" />
          </figure>
    <div>
      <h3>Réunion</h3>
      <a href="#reunion">
        
      </a>
    </div>
    <p><a href="https://en.wikipedia.org/wiki/Cyclone_Garance"><u>Cyclone Garance</u></a> made landfall over the French territory of <a href="https://radar.cloudflare.com/re"><u>Réunion</u></a> at ~10:00 local time (06:00 UTC) on February 28. Damage from the storm's 100+ miles/hour (160+ km/hour) winds caused power outages and infrastructure damage, resulting in disruptions to Internet connectivity. The most significant impacts to traffic were observed in the hours after the storm made landfall, but it took <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=re&amp;dt=2025-02-27_2025-03-05&amp;timeCompare=1"><u>several days before traffic returned to expected levels</u></a>, reaching that point around 08:00 local time (04:00 UTC) on March 4.</p>
    <div>
      <h2>Earthquake</h2>
      <a href="#earthquake">
        
      </a>
    </div>
    
    <div>
      <h3>Myanmar</h3>
      <a href="#myanmar">
        
      </a>
    </div>
    <p>On March 28 at 12:50 local time (06:20 UTC), a <a href="https://earthquake.usgs.gov/earthquakes/eventpage/us7000pn9s/executive"><u>magnitude 7.7 earthquake</u></a> shook <a href="https://radar.cloudflare.com/mm"><u>Myanmar</u></a>, resulting in <a href="https://reccessary.com/en/news/asean-environment/massive-earthquake-in-myanmar-triggers-power-outages-fuel-shortages"><u>power outages and fuel shortages</u></a>. Almost immediately, <a href="https://x.com/CloudflareRadar/status/1905566630760902897"><u>traffic dropped by around 40%</u></a> at a country level. <a href="https://x.com/CloudflareRadar/status/1905603328144261396"><u>Regionally</u></a>, traffic from Nay Pyi Taw dropped 97% as compared to the previous week, Mandalay fell 90%, Ayeyarwady lost 88%, Bago 50%, and Shan State was down 38%.</p><p>While recovery efforts stretched into April, regular traffic patterns and volumes bounced back within days, as seen in the HTTP and DNS request graphs below.</p><p>However, at a network level, recovery has been mixed. Both <a href="https://radar.cloudflare.com/as134840"><u>AS134840 (MCCL)</u></a> and <a href="https://radar.cloudflare.com/as136442"><u>AS136442 (Oceanwave)</u></a> saw significant drops in traffic after the earthquake occurred, and traffic remained disrupted on both networks through the end of the first quarter. Peak traffic on MCCL has increased slightly, but nearly two weeks on, remains significantly lower than pre-earthquake levels. Traffic on Oceanwave saw steady growth after the initial disruption, and as of this writing is approaching pre-earthquake peaks. (It is unclear what caused the significant spike in request traffic seen from Oceanwave on April 3-4.) In contrast to these two providers, traffic from <a href="https://radar.cloudflare.com/as163255"><u>AS163255 (Mytel)</u></a> saw a significantly smaller disruption, and a <a href="https://developingtelecoms.com/telecom-business/humanitarian-communications/18289-mytel-says-connectivity-mostly-restored-after-myanmar-quake-amid-shutdown-fears.html"><u>significantly faster recovery</u></a>, as did traffic from <a href="https://radar.cloudflare.com/as135300?dateStart=2025-03-27&amp;dateEnd=2025-04-09#traffic-trends"><u>AS135300 (Myanmar Broadband Telecom)</u></a>.</p>
    <div>
      <h2>Cyberattack</h2>
      <a href="#cyberattack">
        
      </a>
    </div>
    
    <div>
      <h3>Russia</h3>
      <a href="#russia">
        
      </a>
    </div>
    <p>On January 7, <a href="https://radar.cloudflare.com/ru"><u>Russian</u></a> Internet provider <a href="https://radar.cloudflare.com/as29329"><u>Nodex (AS29329)</u></a> said in <a href="https://vk.com/wall-7622_825"><u>a post on Russian social media platform VKontakte</u></a> (translated) “<i>Dear Subscribers, our technical staff is still working on restoring the network. The process is painstaking and long. We express our deep gratitude to those who support us in this difficult moment! This is really important for us. Let me remind you that our network was attacked by Ukrainian hackers, which resulted in its complete failure. At the moment, its functioning is being restored. There will be communication. When, is still unknown.</i>” The <a href="https://en.wikipedia.org/wiki/Ukrainian_Cyber_Alliance"><u>Ukrainian Cyber Alliance</u></a>, a community of pro-Ukraine cyber activists formed in 2016, <a href="https://t.me/UCAgroup/38"><u>claimed responsibility for the attack</u></a> in a Telegram post.</p><p>The “complete failure” of the Nodex network is visible in the traffic graph below, where Internet traffic from the network began to drop after 03:00 local time (00:00 UTC) on January 7, reaching zero around 05:30 local time (02:30 UTC). Traffic from the network remained essentially non-existent until around 14:00 local time (11:00 UTC) the next day, recovering fairly quickly after that. Announced IPv4 address space fell by two-thirds at the same time that traffic volume dropped to zero, but recovered at 21:20 local time (18:20 UTC).</p>
    <div>
      <h2>Fire damage</h2>
      <a href="#fire-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Los Angeles, California</h3>
      <a href="#los-angeles-california">
        
      </a>
    </div>
    <p>Between January 7-9, during the early days of the <a href="https://en.wikipedia.org/wiki/January_2025_Southern_California_wildfires"><u>2025 Southern California wildfires</u></a> (which affected the <a href="https://www.fire.ca.gov/incidents/2025/1/7/palisades-fire"><u>Palisades</u></a> and <a href="https://www.fire.ca.gov/incidents/2025/1/7/eaton-fire"><u>Eaton</u></a> areas in Los Angeles), there were clear Internet disruptions in at least 13 Los Angeles neighborhoods. According to Cloudflare’s data, traffic drops of over 50% compared to the previous week were especially noticeable in areas such as Pacific Palisades, Altadena, Malibu, Temple City, and Monrovia. In the weeks that followed, traffic remained significantly lower than before the fires, particularly in Pacific Palisades and Altadena, reflecting the devastation in those areas. However, traffic recovered significantly sooner in Malibu, Temple City, and Monrovia, although peak traffic levels remain somewhat below pre-fire levels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36NDkw4HzvLjEb1B2kMtLs/1792de620a5c2fbc5f83947f34b14f2f/BLOG-2800_LA1.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xXx4H91hZ4sqwfuBpWde7/0cb0b4d03e6a5f2313dc0d6b7f0412e8/BLOG-2800_LA2.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6RHkUjU4OreZ6pQ8zBy3rQ/13c5a0f94466e4faa6c9673872111621/BLOG-2800_LA3.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/L9WOAV7BMywf9Q1jquEmh/46d5c2df525cf44384bc9f0ddd93a1c3/BLOG-2800_LA4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7pOWHX6sf42uUFxUaWTFUD/b569959b669329aa528d6603f9db7827/BLOG-2800_LA5.png" />
          </figure>
    <div>
      <h3>Haiti</h3>
      <a href="#haiti">
        
      </a>
    </div>
    <p>On January 15, an <a href="https://x.com/jpbrun30/status/1879554628469362857"><u>X post</u></a> from the Director General of <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> stated (translated) “<i>Dear customers, last night at 8:30 pm we suffered damage to 2 of our international fiber optic cables caused by a fire in the metropolitan area. At 10:30 am a 3rd outage affected all international services, Internet and Moncash. Our teams are mobilized to resolve the problem as quickly as possible.</i>” These fires ultimately caused two complete Internet outages for Digicel Haiti’s customers, as seen in the graphs below.</p><p>Both traffic and announced IP address space (IPv4 &amp; IPv6) dropped to zero between 20:30 - 21:45 local time on January 14 (01:30 - 02:45 on January 15 UTC) and again between 10:15 - 11:00 local time on January 15 (15:15 - 16:00 UTC).</p>
    <div>
      <h2>Technical problems</h2>
      <a href="#technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Russia</h3>
      <a href="#russia">
        
      </a>
    </div>
    <p>On January 14, multiple <a href="https://radar.cloudflare.com/ru"><u>Russian</u></a> networks, including <a href="https://radar.cloudflare.com/as8359"><u>AS8359 (MTS)</u></a>, <a href="https://radar.cloudflare.com/as12389"><u>AS12389 (Rostelecom)</u></a>, <a href="https://radar.cloudflare.com/as16345"><u>AS16345 (Beeline)</u></a>, <a href="https://radar.cloudflare.com/as31133"><u>AS31133 (MegaFon)</u></a>, and <a href="https://radar.cloudflare.com/as203451"><u>AS203451 (K-telecom)</u></a> all experienced a brief disruption in connectivity. As the traffic graphs below show, Internet traffic from these networks <a href="https://x.com/CloudflareRadar/status/1879196319874740312"><u>dropped by around 80%</u></a> between 14:00 - 14:30 UTC. According to a <a href="https://meduza.io/en/news/2025/01/14/internet-users-in-russia-report-widespread-service-outages"><u>statement</u></a> from <a href="https://en.wikipedia.org/wiki/Roskomnadzor"><u>Roskomnadzor</u></a>,  “<i>A brief connectivity issue was identified. Network operations were promptly restored.</i>” However, <a href="https://x.com/Liveuamap/status/1879195091874746543"><u>industry observers suggested</u></a> that the cause may have been due to an update to the so-called “Russian firewall” that failed and was quickly rolled back.</p>

    <div>
      <h3>Georgia</h3>
      <a href="#georgia">
        
      </a>
    </div>
    <p>Subscribers to <a href="https://radar.cloudflare.com/as16010"><u>Magticom (AS16010)</u></a>, one of the largest Internet providers in <a href="https://radar.cloudflare.com/ge"><u>Georgia</u></a>, experienced a complete outage on January 27. Request traffic and announced IP address space disappeared at 21:25 local time (17:25 UTC), recovering at 01:55 local time on January 28 (21:55 UTC). A (translated) <a href="https://www.facebook.com/permalink.php?story_fbid=pfbid03i7LBfz1Rn3YUJAUgzgv8oKK2xTPoozMAzeJnEWxbqPvYkMSUukwYnCGSqJDRcpgl&amp;id=100064447729415"><u>Facebook post from Magticom</u></a> explained that the company’s Internet connectivity comes through “<i>channels from Europe</i>” and that “<i>damage was reported in Turkey, where heavy snowfall and avalanche risks have prevented the partner company’s technical teams from reaching the affected area</i>”. Further, it noted that on the backup channel, “<i>suspicious damage was reported at three points on the Georgian side, in the territory of Adjara…</i>” Magticom’s published start and end times for the outage align with the loss and recovery of traffic and announced IP address space observed in Cloudflare data. </p>

    <div>
      <h3>France</h3>
      <a href="#france">
        
      </a>
    </div>
    <p>Subscribers of <a href="https://radar.cloudflare.com/as5410"><u>Bouygues Telecom (AS5410)</u></a> in <a href="https://radar.cloudflare.com/fr"><u>France</u></a> experienced a brief disruption to their Internet connectivity on March 11. According to a (translated) <a href="https://x.com/bouyguestelecom/status/1899352941942681834"><u>X post from the provider</u></a>, “<i>Following a technical incident between 5 a.m. and 7 a.m. you may have encountered difficulties using your services.</i>” As seen in the request traffic graphs below, a drop in traffic is visible between 05:00 - 06:45 local time (04:00 - 05:45 UTC), aligning with the provider’s stated timeframe. Bouygues Telecom did not provide any subsequent details around the cause of the “technical incident”.</p>
    <div>
      <h2>Unknown cause</h2>
      <a href="#unknown-cause">
        
      </a>
    </div>
    
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>Major Internet outages and disruptions in <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> are generally well documented, such as the cable cuts discussed above. However, on February 3, a multi-hour disruption was observed in the country, and no underlying cause was ever publicly disclosed. Starting at approximately 14:00 local time (11:00 UTC), traffic from the country dropped by roughly 80%, along with a ~60% drop in announced IPv4 address space. Both traffic and announced IP address space returned to expected levels around 23:00 local time (20:00 UTC). The outage was confirmed in an <a href="https://x.com/syr_television/status/1886403431583092826"><u>X post from Syrian Television</u></a>.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>While the single government-directed shutdown last quarter and the absence of such shutdowns this quarter are an encouraging trend, we expect it to be short-lived if countries like Iraq and Syria once again take such measures to prevent cheating on nationwide exams. As always, we encourage governments to recognize the collateral damage of such actions, and suggest that they explore alternative solutions to this problem.</p><p>The Cloudflare Radar team is constantly monitoring for Internet disruptions, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via email.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">4BkxBjZIqq5UIU9wkfkJwr</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[A diversity of downtime: the Q4 2024 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q4-2024-internet-disruption-summary/</link>
            <pubDate>Tue, 28 Jan 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ After a rather busy third quarter, the fourth quarter of 2024 saw significantly fewer Internet disruptions. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s <a href="https://www.cloudflare.com/network/">network</a> spans more than 330 cities in over 120 countries, where we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions at both a local and national level, as well as at a network level.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>.</p><p>In the third quarter we covered quite a few <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#government-directed"><u>government-directed Internet shutdowns</u></a>, including many intended to prevent cheating on exams. In the fourth quarter, however, we only observed a single <a href="#government-directed"><u>government-directed shutdown</u></a>, this one related to protests. Terrestrial <a href="#cable-cuts"><u>cable cuts</u></a> impacted connectivity in two African countries. As we have seen multiple times before, both unexpected <a href="#power-outages"><u>power outages</u></a> and rolling power outages following <a href="#military-action"><u>military action</u></a> resulted in Internet disruptions. <a href="#natural-disasters"><u>Violent storms and an earthquake</u></a> predictably caused Internet outages in the affected countries. And unexpected issues with <a href="#maintenance"><u>maintenance</u></a> efforts caused outages at two European providers, while Verizon customers in several US states experienced a brief but <a href="#unknown"><u>unexplained</u></a> outage.</p>
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Rwanda</h3>
      <a href="#rwanda">
        
      </a>
    </div>
    <p>On October 1, local mobile provider <a href="https://radar.cloudflare.com/as36890"><u>MTN Rwanda (AS36890)</u></a> <a href="https://x.com/MTNRwanda/status/1841092339625865329"><u>published a post on X</u></a> alerting subscribers to a double fiber cut in <a href="https://radar.cloudflare.com/tz"><u>Tanzania</u></a> and <a href="https://radar.cloudflare.com/ug"><u>Uganda</u></a> that might impact connection quality. As a result of these fiber cuts, Internet traffic began to drop sharply after 12:45 local time (10:45 UTC), with a full outage visible between 13:15 - 13:30 local time (11:15 - 11:30 UTC). Traffic then began to recover rapidly, returning to expected levels around 19:00 local time (17:00 UTC). Several hours later, MTN Rwanda <a href="https://x.com/MTNRwanda/status/1841201775493464174"><u>published a followup post</u></a> confirming that all services had been restored.</p><p>The <a href="https://afterfibre.nsrc.org/"><u>African Undersea and Terrestrial Fibre Optic Cables (AfTerFibre) map</u></a> shows that, in addition to connections through networks in Tanzania and Uganda, connections appear to be available through networks to the west in the <a href="https://radar.cloudflare.com/cd"><u>Democratic Republic of the Congo (DRC)</u></a>. However, <a href="https://radar.cloudflare.com/routing/as36890#connectivity"><u>MTN Rwanda’s upstream providers and/or peers</u></a> may not be routing traffic through DRC-based networks, meaning that they could not be used as a backup path when the apparently simultaneous fiber cuts occurred.</p>
    <div>
      <h3>Niger</h3>
      <a href="#niger">
        
      </a>
    </div>
    <p>On November 30, local mobile provider <a href="https://radar.cloudflare.com/as37531"><u>Airtel Niger (AS37531)</u></a> <a href="https://x.com/airtelniger/status/1862936319824888015"><u>posted a thread of messages</u></a> on X apologizing for Internet service disruptions, explaining that (translated) “<i>Indeed, due to a simultaneous interruption on the national optical fiber on the Niamey-Dosso, Niamey-Balleyara exits, our internet services are completely interrupted throughout the territory, beyond our control.</i>” These simultaneous fiber cuts resulted in a near-complete outage from 17:30 local time (16:30 UTC) on November 29 until 19:45 local time (18:45 UTC) on November 30.</p><p>It seems unusual that the message thread was not posted until after the outage was resolved. It is possible that Airtel Niger themselves had no backup connectivity, and could not post an update until connectivity was restored. Alternately, given that the first post of the thread starts with “<i>[COMMUNIQUÉ IMPORTANT📢]</i>” (“<i>[IMPORTANT PRESS RELEASE 📢 ]</i>”), it is possible that the alert and apology were communicated through more official channels, such as Airtel’s website, in a timely manner, with the thread on X simply a follow-up once Internet services were again available.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Cuba </h3>
      <a href="#cuba">
        
      </a>
    </div>
    <p>Instability in a country’s electrical infrastructure often causes widespread power outages, which, in turn, disrupt Internet connectivity. This happened on October 18 in <a href="https://radar.cloudflare.com/cu"><u>Cuba</u></a>, where a <a href="https://x.com/EnergiaMinasCub/status/1847315612738978114"><u>post on X from the Ministry of Energy and Mines of Cuba</u></a> noted (translated) “<i>Following the unexpected disconnection of the Antonio Guiteras CTE, the National Electricity System was completely disconnected at 11 a.m. today. The Unión Eléctrica is working on its restoration.</i>” The power outage caused Internet traffic within the country to drop by more than half within minutes (15:15 UTC). Connectivity was disrupted for approximately three-and-a-half days, as it returned to expected levels around 23:00 local time on October 21 (03:00 UTC on October 22).</p><p>The Ministry posted several status updates on October 19 and 20, covering the work being done to restore power across the country. A <a href="https://x.com/OSDE_UNE/status/1848812241979916598"><u>final X post on October 22</u></a> signaled the end of the power outage, proclaiming (translated) “<i>At 02:44 pm the National Electric System was synchronized.</i>”</p><p>Several weeks later, power issues again impacted Internet connectivity in Cuba. On November 6, <a href="https://x.com/OSDE_UNE/status/1854252013212844461"><u>the Electrical Union of Cuba (Unión Eléctrica) posted on X</u></a> that (translated) “<i>14:48 hours. Strong winds caused by the intense Hurricane Rafael, cause the disconnection of the National Electric System. Contingency protocols are applied.</i>” The timing of this post aligns with a sharp decline in traffic observed from Cuba, which fell sharply around 14:30 local time (19:30 UTC). Over the following days, after Hurricane Rafael passed the island, the Unión Eléctrica posted numerous updates on the restoration of electrical service. Internet traffic appeared to return to expected levels around 13:00 local time (18:00 UTC) on November 9, although full restoration of electrical services took several days longer.</p><p>On December 4, Cuba suffered its third nationwide power outage in as many months. Early that morning, the <a href="https://x.com/EnergiaMinasCub/status/1864220455139168538"><u>Ministry of Energy and Mines posted on X</u></a> that (translated) “<i>At 2:08 this morning, the Electrical System, SEN, was disconnected when the Antonio Guiteras thermoelectric plant went out due to the automatic tripping.</i>” The loss of electrical power due to the failure of this generation plant caused a significant drop in Internet traffic from Cuba, falling approximately 60% as compared to the previous week just before 02:15 local time (07:15 UTC). Traffic recovered to expected levels almost a day later at around 00:30 local time (05:30 UTC). This timing aligns with a <a href="https://x.com/EnergiaMinasCub/status/1864542875255541780"><u>follow-on X post from the Ministry</u></a> that announced that all units had been synchronized, signaling a restoration of electrical service.</p>
    <div>
      <h3>Guadeloupe</h3>
      <a href="#guadeloupe">
        
      </a>
    </div>
    <p>An article <a href="https://www.theguardian.com/world/2024/oct/25/guadeloupe-power-outage-strike"><u>published in The Guardian</u></a> on October 25 noted that “<i>The French Caribbean island of </i><a href="https://radar.cloudflare.com/gp"><i><u>Guadeloupe</u></i></a><i> has been left entirely without power after striking workers seized control of the territory’s power station.</i>” Workers entered the power station’s command room “<i>and caused an emergency shutdown of all the engines</i>”, according to the article. The power outage caused by this “emergency shutdown” resulted in traffic dropping nearly 70% as compared to the previous week at 08:30 local time (12:30 UTC). Although “<a href="https://www.lemonde.fr/en/international/article/2024/10/25/guadeloupe-suffers-power-outage-blamed-on-striking-staff_6730504_4.html"><u>restored electricity supply for the 230,000 affected households was expected at 3 pm local time (19:00 UTC) at best</u></a>”, it appears that recovery took significantly longer than expected, as Internet traffic did not return to expected levels until around 22:00 local time on October 26 (02:00 UTC on October 27). A <a href="https://www.guadeloupe.gouv.fr/Actualites/Communiques-et-dossiers-de-presse/Conflit-EDF-PEI-Point-de-situation-a-11h"><u>press release from the government</u></a> at 11:00 local time (15:00 UTC) on October 26 gave an update on the recovery efforts, noting (translated) “<i>160,000 users have had their electricity restored. The restoration of service for the 70,000 customers still cut off is continuing, with a return to normal expected over the weekend.</i>” It also noted that “<i>76% of Orange subscribers have been able to regain their network connection. 1,800 homes are still without internet.</i>”</p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>Power outages in <a href="https://radar.cloudflare.com/ke"><u>Kenya</u></a> resulted in multiple Internet disruptions during both the <a href="https://blog.cloudflare.com/q2-2024-internet-disruption-summary/#kenya"><u>second</u></a> and <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#kenya"><u>third</u></a> quarters of 2024. A similar event occurred during the fourth quarter as well. An <a href="https://x.com/KenyaPower_Care/status/1869153351436468632"><u>X post from Kenya Power</u></a> contained a “Customer Alert” issued at 01:28 local time on December 18 (22:28 UTC on December 17) that informed customers that “<i>We are experiencing a widespread power outage affecting most of the country, except parts of Western and North Rift regions.</i>” This outage caused Internet traffic from the country to drop by over 70% starting just after midnight local time on December 18 (21:00 UTC on December 17). On December 18 at 07:35 local time (04:35 UTC), an <a href="https://x.com/KenyaPower_Care/status/1869243286667628702"><u>update from Kenya Power posted to X</u></a> reported that power had been restored to all affected areas. Internet traffic from the country had recovered to near expected levels by that time as well.</p>
    <div>
      <h2>Natural disasters</h2>
      <a href="#natural-disasters">
        
      </a>
    </div>
    
    <div>
      <h3>United States, Florida</h3>
      <a href="#united-states-florida">
        
      </a>
    </div>
    <p>At 20:30 local time on October 9 (00:30 UTC on October 10), <a href="https://www.weather.gov/mlb/HurricaneMilton_Impacts"><u>Hurricane Milton made landfall in Florida</u></a> as a Category 3 storm. Damage from Milton was extensive, including flooding, downed trees and power lines, and damage to homes and businesses. The power outages and other infrastructure damage caused by the storm, coupled with evacuation from impacted areas, resulted in a notable Internet disruption at a state level. As seen in the graph below, peak traffic levels on October 10, after Milton’s arrival, were approximately 40% lower than the preceding days. As recovery and restoration efforts began over the following days, and as evacuees returned to home, school, and work, the state’s Internet traffic began to gradually increase.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KVUA87OvUlomBVrbylU8w/793b65193655ffef980c9341c1476668/Oct_9_-_United_States_-_Florida.png" />
          </figure><p>This gradual recovery is also visible in the series of maps below, which illustrate cities where <a href="https://x.com/CloudflareRadar/status/1845903842355101827"><u>Internet traffic was over 50% lower</u></a> than the same time the prior week, with snapshots taken at 09:00 local time (13:00 UTC) on October 10, 11, and 14. On October 10, <a href="https://x.com/CloudflareRadar/status/1844444286911381980"><u>over 70 cities</u></a> had significantly lower traffic, while on October 14, it was just over 10 cities.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4rqveKTtJ60LW8rLBizxeD/ea62aec5653bc46622fbabb922e61c88/Florida_-_three_maps.jpeg" />
          </figure>
    <div>
      <h3>Mayotte</h3>
      <a href="#mayotte">
        
      </a>
    </div>
    <p>On December 14, Cyclone Chido caused significant destruction on the French territory of <a href="https://radar.cloudflare.com/yt"><u>Mayotte</u></a> in the Indian Ocean. Power, water, and communications infrastructure <a href="https://reliefweb.int/report/mozambique/cyclone-chido-has-devastated-mayotte-and-mozambique"><u>were all damaged</u></a>, as well as homes and public facilities. Over three dozen people <a href="https://www.reuters.com/business/environment/french-officials-raise-mayotte-death-toll-39-after-storm-chido-2024-12-24/"><u>were killed</u></a>, with thousands more injured. With such widespread devastation, Internet traffic from the territory was also impacted, as would be expected. Chido <a href="https://reliefweb.int/attachments/f852498a-d8a7-42c2-a447-ceff1d24ddeb/20241219_ACAPS_Briefing_note_Mayotte_Impact_of_Tropical_Cyclone_Chido.pdf"><u>made landfall</u></a> in Mayotte early in the morning on December 14, and traffic dropped sharply around 09:00 local time (06:00 UTC), causing a near-complete Internet outage. After extremely slow growth over the following week, a diurnal pattern is once again visible, with peak traffic levels continuing to gradually increase through the end of the month. As of the third week of January 2025, Mayotte’s Internet traffic continues to slowly increase, but remains well below pre-Chido levels.</p>
    <div>
      <h3>Vanuatu</h3>
      <a href="#vanuatu">
        
      </a>
    </div>
    <p>A <a href="https://earthquake.usgs.gov/earthquakes/eventpage/us7000nzf3/executive"><u>magnitude 7.3 earthquake</u></a> struck 24 km WNW of Port-Vila, <a href="https://radar.cloudflare.com/vu"><u>Vanuatu</u></a> at 17:46 local time (01:47 UTC) on December 17. Internet traffic from the country dropped sharply almost immediately, falling nearly 90% compared to the previous week. A significant drop in announced IPv4 address space was also observed, suggesting that damage from the earthquake took core network provider infrastructure offline as well. Recovery was slow, with Internet traffic not returning to expected levels until around 23:00 local time (12:00 UTC) on December 26.</p><p>An <a href="https://maritime-executive.com/editorials/vanuatu-illustrates-risks-of-thin-subsea-cable-infrastructure"><u>editorial published on The Maritime Executive</u></a> website highlights that Vanuatu is currently reliant on the <a href="https://www.submarinecablemap.com/submarine-cable/interchange-cable-network-1-icn1"><u>Interchange Cable Network 1 (ICN1) submarine cable</u></a> connection to <a href="https://radar.cloudflare.com/fj"><u>Fiji</u></a> for international Internet connectivity. The editorial states that “<i>A fire at the cable landing station temporarily interrupted the power supply, disabling internet traffic. The connection was restored 10 days later…</i>” The resolution of the power outage at the cable landing station roughly aligns with traffic returning to expected levels, suggesting that this was a significant driver of the drop in traffic seen from Vanuatu after the earthquake. Starlink’s satellite Internet service provides some nominal redundancy, as <a href="https://x.com/Starlink/status/1843395755547209765"><u>the company announced service availability</u></a> on October 7. The <a href="https://www.submarinecablemap.com/submarine-cable/tamtam"><u>TAMTAM submarine cable</u></a>, connecting Vanuatu to <a href="https://radar.cloudflare.com/nc"><u>New Caledonia</u></a>, is expected to be ready for service in 2026 — once available, it will provide additional redundancy for Internet connectivity. </p>
    <div>
      <h2>Government directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    
    <div>
      <h3>Mozambique</h3>
      <a href="#mozambique">
        
      </a>
    </div>
    <p>On October 25 in <a href="https://radar.cloudflare.com/mz"><u>Mozambique</u></a>, <a href="https://www.enca.com/news-top-stories/internet-blackout-hits-mozambique-capital-after-election-protests"><u>mobile Internet connectivity across multiple providers was shut down</u></a> after protests against the re-election of the ruling Frelimo party became violent. Starting around 13:00 local time (11:00 UTC), significant drops in traffic were observed across <a href="https://radar.cloudflare.com/as30619"><u>AS30619 (Telecomunicações de Moçambique)</u></a>, <a href="https://radar.cloudflare.com/as37342"><u>AS37342 (Movitel)</u></a>, and <a href="https://radar.cloudflare.com/as37223"><u>AS37223 (Vodacom)</u></a>. Both Vodacom and Movitel experienced near-complete outages almost immediately, while some traffic remained on Telecomunicações de Moçambique until just before 02:00 local time (00:00 UTC) on October 26. Connectivity was restored the morning of October 26, as traffic returned around 08:00 local time (06:00 UTC). However, after connectivity returned, some social media platforms and messaging applications <a href="https://www.hrw.org/news/2024/11/06/mozambique-post-election-internet-restrictions-hinder-rights"><u>remained unavailable</u></a>.</p><p>Just over a week later, on November 3, subscribers on these mobile networks experienced another Internet shutdown. At around 20:30 local time (18:30 UTC), traffic dropped significantly on each of these networks, with connectivity disrupted for nearly 12 hours before recovering around 08:00 (06:00 UTC) the morning of November 4. Similar shutdowns (“Internet curfews”) were observed November 4-5 and November 6-7 on all three networks, and November 7-8 on Movitel and Vodacom. According to a <a href="https://aimnews.org/2024/11/11/internet-shutdown-to-prevent-destruction-of-country/"><u>published report</u></a>, the country’s Minister of Transport and Communications “admitted that Internet access was restricted in order ‘to avoid the destruction of the country’”, but shifted blame to the impacted service providers, claiming that when they note misuse of their services, they can take the initiative of interrupting the services, as part of their “civil responsibility” to safeguard “the stability and welfare of the population”.</p>
    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>An Internet disruption observed in <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> on November 9 may have been caused by damage from an <a href="https://www.reuters.com/world/middle-east/israeli-aggression-injures-syrian-soldiers-near-aleppo-state-media-says-2024-11-08/"><u>Israeli airstrike near Aleppo and Idlib</u></a> reported to have taken place earlier that morning. Internet traffic from the country dropped by about 80% at around 04:00 local time (01:00 UTC), with announced IP address space from the country falling significantly at that time as well. The disruption lasted approximately four hours, with traffic and announced IP address space returning to expected levels around 08:00 local time (05:00 UTC). </p><p>Internal analysis of city-level Internet traffic shows a similar disruption in Aleppo, consistent with the disruption having been caused by the reported airstrike.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4mINSvpkYHgWfw2fmmUQ73/655b6509cff44bcdb627fc319b301198/Nov_9_-_Syria_-_Aleppo.png" />
          </figure>
    <div>
      <h3>Ukraine</h3>
      <a href="#ukraine">
        
      </a>
    </div>
    <p>Russian missile strikes on November 17 <a href="https://www.reuters.com/world/europe/ukraine-brings-back-long-rolling-power-cuts-after-major-russian-strike-2024-11-18/"><u>targeting electrical power infrastructure</u></a> in <a href="https://radar.cloudflare.com/ua"><u>Ukraine</u></a> resulted in rolling power outages in multiple regions across the country. As we have seen multiple times throughout the nearly three-year-old conflict, these power outages result in disruptions to Internet traffic, impacting both service provider infrastructure and subscriber connectivity.</p><p>During the period between 07:30 local time (05:30 UTC) on November 17 and 02:00 local time (00:00 UTC) on November 23, <a href="https://x.com/CloudflareRadar/status/1858512061816275381"><u>we observed lower Internet traffic as compared to the previous week</u></a> in Odessa, Zaporizhzhia, Mykolaiv, and Sumy. Traffic in Odessa initially dropped on November 17 by around 50% as compared to the prior week, while on November 18, traffic dropped by over 20% in the other regions. Traffic largely recovered in Odessa by November 21, while <a href="https://x.com/CloudflareRadar/status/1859693864539558225"><u>the other regions took several additional days</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4df2zIvFEYuEzlxCOsheVq/929deb1bb59fa7424499489910df1660/Nov_17_-_Ukraine_-_Odessa_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ifIBz0P1A5CJJE5jzXCEb/9bed04debd6d2b2bb477a09bf33be572/Nov_17_-_Ukraine_-_Mykolaiv_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3OP5J1JrocK1gEM8RV8Ejk/55cbe2ef4d5f58665e5a3e8a45930d28/Nov_17_-_Ukraine_-_Sumy_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Ul0rncfaLHonLFaO1PAza/171a02e635370bf4ef83d83272309351/Nov_17_-_Ukraine_-_Zaporizhzhia_-_compare.png" />
          </figure><p>Similar attacks took place just a few days later, with <a href="https://www.politico.eu/article/ukraine-power-shutdowns-russia-massive-attack-energy-grid-volodymyr-zelenskyy-vladimir-putin-border-eu/"><u>additional Russian airstrikes targeting electrical infrastructure in Ukraine</u></a>. Once again, Ukrainian officials implemented emergency power outages, which impacted Internet traffic in multiple areas across the country. Starting around 07:00 local time (05:00 UTC) on November 28, <a href="https://x.com/CloudflareRadar/status/1862210312625152034"><u>we observed traffic drop by as much as 65%</u></a> as compared to the previous week in Kherson Oblast, Mykolaiv, Ternopil Oblast, Rivne, and Lviv. Traffic remained lower over the next several days, but appears to have generally recovered by December 1. (A sketch of how such week-over-week comparisons can be computed follows the graphs below.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MdWZ5jx2HNvMb98N1OSUv/b2241380e2c8f41bad84c1bfc9d97e26/Nov_28_-_Ukraine_-_Kherson_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BT5MUbigIKPn23Qrro81i/aaaaca0a3ec7281efb726b3fd2ee47b6/Nov_28_-_Ukraine_-_Mykolaiv_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3SFzcNP5FgPxQ20T5HH1UD/9bc3da6e9be2465e9df62bb47dc8e1be/Nov_28_-_Ukraine_-_Ternopil_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6i7si8ch823gPYhcvrANJe/08cf403779b09f7d451da06c61d137d3/Nov_28_-_Ukraine_-_Rivne_-_compare.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jcZqZeL0hDEgb7jTVNaMW/5de359cf61ba96763b6416b835790722/Nov_28_-_Ukraine_-_LVIV_-_compare.png" />
          </figure>
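    <p>The week-over-week comparisons used throughout this post can be approximated from any regularly sampled traffic time series. Below is a minimal sketch using pandas, assuming a CSV of hourly request counts (the file name and column names are hypothetical, and this is not our actual analysis pipeline):</p>
    <pre><code>import pandas as pd

# hourly request counts indexed by timestamp (illustrative data shape)
traffic = pd.read_csv("region_traffic.csv", index_col="ts",
                      parse_dates=True)["requests"]

# compare each hour to the same hour one week (168 hours) earlier;
# assumes a gap-free hourly series
wow_change = traffic / traffic.shift(168) - 1

# flag hours where traffic fell more than 50% week-over-week
drops = wow_change[wow_change.lt(-0.50)]
print(drops.head())</code></pre>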
    <div>
      <h2>Maintenance</h2>
      <a href="#maintenance">
        
      </a>
    </div>
    
    <div>
      <h3>Switzerland, Salt Mobile</h3>
      <a href="#switzerland-salt-mobile">
        
      </a>
    </div>
    <p>According to the notice shown below, which temporarily replaced the homepage of Swiss provider <a href="https://radar.cloudflare.com/as15796"><u>Salt Mobile (AS15796)</u></a>, maintenance took the network completely offline early in the morning of December 3.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UcFvHDUtaPnmtKKVi0SaM/f02396634bef3dee5579428956189ebc/Dec_2_-_Switzerland_-_Salt_Mobile_-splash_-_border.jpg" />
          </figure><p>The outage lasted nearly three hours, with observed traffic at or near zero, between 01:25 and 04:20 local time (00:25 - 03:20 UTC). </p>
    <div>
      <h3>Greenland, Tusass A/S</h3>
      <a href="#greenland-tusass-a-s">
        
      </a>
    </div>
    <p>A December 10 <a href="https://www.tusass.gl/en/press/article/?id=139"><u>update from Tusass A/S</u></a> <a href="https://radar.cloudflare.com/as8818"><u>(AS8818, formerly TeleGreenland)</u></a> explained why the provider experienced a complete Internet outage between 02:30 and 05:15 local time (04:30 - 07:15 UTC) that morning. The post noted “<i>This happened because preventive maintenance was to be done on the connections in Canada between 02:00 and 06:00 last night, but with a combined fault on our connection to Denmark we lost nationwide connectivity. Fortunately, the fault on the connection to Denmark occurred on land, and therefore easy to repair.</i>” The graphs below show that for the duration of the outage, traffic from the network dropped to zero, no IPv6 address space was announced, and the volume of announced IPv4 address space fell by 94%.</p><p>According to <a href="https://www.submarinecablemap.com/"><u>TeleGeography’s Submarine Cable Map</u></a>, the <a href="https://www.submarinecablemap.com/submarine-cable/greenland-connect"><u>Greenland Connect</u></a> cable system connects <a href="https://radar.cloudflare.com/gl"><u>Greenland</u></a> to Newfoundland, <a href="https://radar.cloudflare.com/ca"><u>Canada</u></a>. The fault on the connection to <a href="https://radar.cloudflare.com/dk"><u>Denmark</u></a> may have occurred on the Greenland-to-<a href="https://radar.cloudflare.com/is"><u>Iceland</u></a> segment of the Greenland Connect cable system; the Iceland-to-Denmark connection is made over the <a href="https://www.submarinecablemap.com/submarine-cable/danice"><u>DANICE</u></a> submarine cable.</p>
    <div>
      <h2>Unknown</h2>
      <a href="#unknown">
        
      </a>
    </div>
    
    <div>
      <h3>United States, Verizon</h3>
      <a href="#united-states-verizon">
        
      </a>
    </div>
    <p>Very early in the morning of November 12, some subscribers of Verizon’s Fios Internet service experienced a disruption to their Internet connectivity. A <a href="https://puck.nether.net/pipermail/outages/2024-November/015342.html"><u>post to the Outages mailing list</u></a> noted that a major multi-state Verizon Fios outage began at 12:28am EST, impacting Virginia, Washington DC, Maryland, and New Jersey, as well as parts of eastern Pennsylvania. Traffic from <a href="https://radar.cloudflare.com/as701"><u>AS701</u></a>, the autonomous system used by Verizon for their Fios service, dropped by approximately 30% around 00:30 Eastern time (05:30 UTC). At a state level, traffic from AS701 dropped by 50% to 70% in Pennsylvania, Delaware, Maryland, and Washington DC.</p><p>A subsequent post on the Outages mailing list stated that the outage was resolved everywhere at 3:23am EST (08:23 UTC). Nearly six hours after the outage ended, <a href="https://x.com/VerizonSupport/status/1856334754888438158"><u>Verizon Support published a post on X</u></a> acknowledging the issue, stating “<i>A network issue early this morning disrupted service for some Verizon Fios customers in the Northeast for a short period of time. As soon as the issue was identified, our engineering teams quickly restored the service.</i>” However, they did not provide any information on what ultimately caused the service disruption.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>In addition to the outages and disruptions covered above, <a href="https://blog.cloudflare.com/resilient-internet-connectivity-baltic-cable-cuts/"><u>resilient Internet connectivity</u></a> meant that two Baltic Sea cable cuts that occurred on November 17 and 18 had minimal impact. Whether <a href="https://www.datacenterdynamics.com/en/news/baltic-subsea-cable-damage-was-accidental-not-sabotage-us-and-european-officials/"><u>accidental or sabotage</u></a>, the security and resiliency of submarine cable infrastructure continues to be <a href="https://www.datacenterdynamics.com/en/news/nato-launches-baltic-sentry-for-subsea-cable-security/"><u>an important topic</u></a>. The security and resilience of terrestrial cable infrastructure, as well as other critical Internet infrastructure, must also remain top of mind to help speed recovery from storms, earthquakes, military action, and power outages.</p><p>The Cloudflare Radar team is constantly monitoring for Internet disruptions, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>email</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">1dimpxaWgcG7zwbJJYhdfX</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s perspective of the October 30, 2024, OVHcloud outage]]></title>
            <link>https://blog.cloudflare.com/cloudflare-perspective-of-the-october-30-2024-ovhcloud-outage/</link>
            <pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[ On October 30, 2024, cloud hosting provider OVHcloud (AS16276) suffered a brief but significant outage. Within this post, we review Cloudflare’s perspective on this outage. ]]></description>
            <content:encoded><![CDATA[ <p>On October 30, 2024, cloud hosting provider <a href="https://radar.cloudflare.com/as16276"><u>OVHcloud (AS16276)</u></a> suffered a brief but significant outage. According to their <a href="https://network.status-ovhcloud.com/incidents/qgb1ynp8x0c4"><u>incident report</u></a>, the problem started at 13:23 UTC, and was described simply as “<i>An incident is in progress on our backbone infrastructure.</i>” OVHcloud noted that the incident ended 17 minutes later, at 13:40 UTC. Because OVHcloud is a major global cloud hosting provider, some customers use it as an origin for sites delivered by Cloudflare; if a given content asset is not in our cache for a customer’s site, we retrieve the asset from OVHcloud.</p><p>We observed traffic starting to drop at 13:21 UTC, just ahead of the reported start time. By 13:28 UTC, it was approximately 95% lower than pre-incident levels. Recovery appeared to start at 13:31 UTC, and by 13:40 UTC, the reported end time of the incident, it had reached approximately 50% of pre-incident levels. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62w8PcLJ3Q05F1BtA12zUb/6d8ce87f85eb585a7fe0ac02f8cd93d5/image4.jpg" />
          </figure><p><sup><i>Traffic from OVHcloud (AS16276) to Cloudflare</i></sup></p><p>Cloudflare generally exchanges most of our traffic with OVHcloud over peering links. However, as shown below, peered traffic volume fell significantly during the incident. It appears that a small amount of traffic briefly began to flow over transit links from Cloudflare to OVHcloud, due to sudden changes in which Cloudflare data centers were receiving OVHcloud requests. (Peering is a direct connection between two network providers for the purpose of exchanging traffic. Transit is when one network pays an intermediary network to carry traffic to the destination network.) </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2L0IaXd7B5C6RX23iTG5Pf/3fd2489f159e2281d191f157f5695f94/image3.jpg" />
          </figure><p>Because we peer directly, we normally exchange most traffic with OVHcloud over private peering sessions. During the incident, however, we found that OVHcloud’s routing to Cloudflare dropped entirely for a few minutes, then switched to just a single Internet Exchange port in Amsterdam, and finally normalized globally minutes later.</p><p>As the graphs below illustrate, we normally see the largest amount of traffic from OVHcloud in our Frankfurt and Paris data centers, as <a href="https://www.ovhcloud.com/en/about-us/global-infrastructure/regions/"><u>OVHcloud has large data center presences in these regions</u></a>. However, during the shift to transit and to the Amsterdam Internet Exchange peering point, we saw a spike in traffic routed to our Amsterdam data center. We suspect these routing shifts were the earliest signs of either internal BGP reconvergence or general network recovery within AS16276, starting with their presence nearest our Amsterdam peering point.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yCDGCplEsmqXU7uRifjTU/12176147c10ab6e9a766ee5d788b133a/image2.jpg" />
          </figure><p>The <a href="https://network.status-ovhcloud.com/incidents/qgb1ynp8x0c4"><u>postmortem</u></a> published by OVHcloud noted that the incident was caused by “<i>an issue in a network configuration mistakenly pushed by one of our peering partner[s]</i>” and that “<i>We immediately reconfigured our network routes to restore traffic.</i>” One possible explanation for the backbone incident may be a BGP route leak by the mentioned peering partner, where OVHcloud could have accepted a full Internet table from the peer and therefore overwhelmed their network or the peering partner’s network with traffic, or caused unexpected internal BGP route updates within AS16276.</p><p>Upon investigating what route leak may have caused this incident impacting OVHcloud, we found evidence of a maximum prefix-limit threshold being breached on our peering with <a href="https://radar.cloudflare.com/as49981"><u>Worldstream (AS49981)</u></a> in Amsterdam. </p>
            <pre><code>Oct 30 13:16:53  edge02.ams01 rpd[9669]: RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 141.101.65.53 (External AS 49981) changed state from Established to Idle (event PrefixLimitExceeded) (instance master)</code></pre>
            <p>As the number of received prefixes exceeded the limits configured for our peering session with Worldstream, the BGP session automatically entered an idle state. This prevented the route leak from impacting Cloudflare’s network. In analyzing <a href="https://datatracker.ietf.org/doc/html/rfc7854"><u>BGP Monitoring Protocol (BMP)</u></a> data from AS49981 prior to the automatic session shutdown, we were able to confirm that Worldstream was sending advertisements with AS paths that contained their upstream Tier 1 transit provider.</p><p>During this time, we also detected over 500,000 BGP announcements from AS49981, as Worldstream announced routes to many of its peers, visible on <a href="https://radar.cloudflare.com/routing/as49981?dateStart=2024-10-30&amp;dateEnd=2024-10-30#bgp-announcements"><u>Cloudflare Radar</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YmTSJfXomzeb3mh93JyRH/15c764790576468a47d3760bc7f48153/Screenshot_2024-10-30_at_12.49.25_PM.png" />
          </figure><p>Worldstream later <a href="https://noc.worldstream.nl"><u>posted a notice</u></a> on their status page, indicating that their network experienced a route leak, causing routes to be unintentionally advertised to all peers:</p><blockquote><p><i>“Due to a configuration error on one of the core routers, all routes were briefly announced to all our peers. As a result, we pulled in more traffic than expected, leading to congestion on some paths. To address this, we temporarily shut down these BGP sessions to locate the issue and stabilize the network. We are sorry for the inconvenience.”</i></p></blockquote><p>We believe Worldstream also leaked routes on an OVHcloud peering session in Amsterdam, which caused today’s impact.</p>
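    <p>Operators can watch for this failure mode by monitoring router logs for BGP neighbor state changes. Below is a minimal sketch that pattern-matches the Junos-style message shown earlier in this post (the parsing is specific to that message format, and the surrounding alerting pipeline is omitted):</p>
    <pre><code>import re

# matches Junos-style BGP neighbor state-change messages like the
# log line shown earlier in this post
PATTERN = re.compile(
    r"BGP peer (\S+) \(External AS (\d+)\) "
    r"changed state from (\w+) to (\w+) \(event (\w+)\)"
)

def check_line(line):
    """Flag sessions torn down because a peer exceeded its prefix limit."""
    m = PATTERN.search(line)
    if m:
        peer, asn, old_state, new_state, event = m.groups()
        if event == "PrefixLimitExceeded":
            print(f"prefix limit breached: AS{asn} ({peer}) "
                  f"went from {old_state} to {new_state}")

check_line("Oct 30 13:16:53  edge02.ams01 rpd[9669]: "
           "RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 141.101.65.53 "
           "(External AS 49981) changed state from Established to Idle "
           "(event PrefixLimitExceeded) (instance master)")</code></pre>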
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare has written about<a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024"> <u>impactful route leaks</u></a> before, and there are multiple methods available to prevent BGP route leaks from impacting your network. One is setting <a href="https://www.rfc-editor.org/rfc/rfc7454.html#section-8"><u>max prefix-limits</u></a> for a peer, so that the BGP session is automatically torn down when the peer sends more prefixes than expected. Other forward-looking measures include<a href="https://manrs.org/2023/02/unpacking-the-first-route-leak-prevented-by-aspa/"> <u>Autonomous System Provider Authorization (ASPA) for BGP</u></a>, in which Resource Public Key Infrastructure (RPKI) helps protect a network from accepting BGP routes with an invalid AS path, and<a href="https://rfc.hashnode.dev/rfc9234-observed-in-the-wild"> <u>RFC9234,</u></a> which prevents leaks by tying strict customer, peer, and provider relationships to BGP updates. For improved Internet resilience, we recommend that network operators follow the recommendations defined within<a href="https://manrs.org/netops/"> <u>MANRS for Network Operators</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">Vn5VV2dLkJbOn1YNqSSBv</guid>
            <dc:creator>Bryton Herdes</dc:creator>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Tanner Ryan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Forced offline: the Q3 2024 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q3-2024-internet-disruption-summary/</link>
            <pubDate>Tue, 29 Oct 2024 13:05:00 GMT</pubDate>
            <description><![CDATA[ The third quarter of 2024 was particularly active, with quite a few significant Internet disruptions.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s network spans more than 330 cities in over 120 countries, where we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions. Thanks to <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> functionality released earlier this year, we can explore the impact from a <a href="https://developers.cloudflare.com/radar/glossary/#bgp-announcements"><u>routing</u></a> perspective, as well as a traffic perspective, at both a <a href="https://x.com/CloudflareRadar/status/1768654743742579059"><u>network</u></a> and <a href="https://x.com/CloudflareRadar/status/1773704264650543416"><u>location</u></a> level.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. </p><p>A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>.</p><p>Having said that, the third quarter of 2024 was particularly active, with quite a few significant Internet disruptions. Unfortunately, <a href="#government-directed"><u>governments continued to impose nationwide Internet shutdowns</u></a> intended to prevent cheating on exams. <a href="#cable-cuts"><u>Damage to both terrestrial and submarine cables</u></a> impacted Internet connectivity across Africa and in other parts of the world. <a href="#severe-weather"><u>Damage from an active hurricane season</u></a> caused Internet outages across the Caribbean and in multiple parts of the United States. Because Internet connectivity is dependent on reliable electrical power, both <a href="#power-outages"><u>planned and unplanned power outages</u></a> in South America and Africa resulted in multi-hour Internet disruptions. <a href="#military-action"><u>Military action</u></a> continued to cause Internet outages in affected countries, as did <a href="#maintenance"><u>infrastructure maintenance</u></a>, <a href="#fire"><u>fire</u></a>, and a purported <a href="#cyberattack"><u>cyberattack</u></a>. The quarter also saw several noteworthy Internet disruptions that <a href="#unknown"><u>did not have verified causes</u></a>.</p>
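    <p>The anomalies behind summaries like this one are also exposed programmatically. As a minimal sketch, recent outage annotations can be fetched from the Cloudflare Radar API with an API token (the endpoint path and parameter names follow Cloudflare’s public API documentation at the time of writing; annotations are printed raw here rather than assuming a particular response schema):</p>
    <pre><code>import requests

# fetch recent Internet outage annotations from the Cloudflare Radar API;
# requires an API token -- replace the placeholder below with your own
API_URL = "https://api.cloudflare.com/client/v4/radar/annotations/outages"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

resp = requests.get(API_URL, headers=HEADERS,
                    params={"dateRange": "28d", "limit": 10})
resp.raise_for_status()

# print each annotation as returned, without assuming a fixed schema
for annotation in resp.json().get("result", {}).get("annotations", []):
    print(annotation)</code></pre>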
    <div>
      <h2>Government Directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    <p>Over the past several years, we have seen multiple governments around the world implement Internet shutdowns in response to protests within their countries. Some shutdowns are more targeted, affecting only (a subset of) mobile Internet providers, while others are more aggressive, effectively cutting off Internet connectivity at a national level. In addition, we all too frequently see governments implement nationwide multi-hour Internet shutdowns in an effort to prevent students from cheating on national exams. Unfortunately, governments were active in both respects during the third quarter, as we observed multiple government-directed Internet shutdowns. Several were covered in our August 1 blog post, <a href="https://blog.cloudflare.com/a-recent-spate-of-internet-disruptions-july-2024/"><i><u>A recent spate of Internet disruptions</u></i></a><i>.</i></p>
    <div>
      <h3>Bangladesh</h3>
      <a href="#bangladesh">
        
      </a>
    </div>
    <p><a href="https://timesofindia.indiatimes.com/world/south-asia/internet-shut-nationwide-bandh-announced-why-is-bangladesh-experiencing-deadly-protests/articleshow/111829956.cms"><u>Violent student protests</u></a> in <a href="https://radar.cloudflare.com/bd"><u>Bangladesh</u></a> against quotas in government jobs and rising unemployment rates led the government to order the nationwide shutdown of mobile Internet connectivity on July 18, <a href="https://therecord.media/bangladesh-mobile-internet-social-media-outages-student-protests"><u>reportedly</u></a> to “<i>ensure the security of citizens.</i>” This government-directed shutdown ultimately became a near-complete Internet outage for the country, as broadband networks were taken offline as well. At a country level, <a href="https://radar.cloudflare.com/traffic/bd?dateStart=2024-07-14&amp;dateEnd=2024-07-28"><u>Internet traffic in Bangladesh dropped to near zero</u></a> just before 21:00 local time (15:00 UTC). <a href="https://radar.cloudflare.com/routing/bd?dateStart=2024-07-14&amp;dateEnd=2024-07-28"><u>Announced IP address space from the country dropped to near zero</u></a> at that time as well, meaning that nearly every network in the country was disconnected from the Internet.</p><p>Traffic and announced IP address space at a national level began to recover around 18:00 local time (12:00 UTC) on July 23, and continued over the next several days, as <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/17059-mobile-internet-in-bangladesh-to-stay-dark-until-at-least-sunday.html"><u>fixed broadband connectivity was restored</u></a>, with <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/17067-mobile-internet-returns-to-bangladesh-but-not-social-media-apps.html"><u>mobile connectivity returning on July 28</u></a>. The initial restoration was characterized as a “trial run”, prioritizing banking, commercial sectors, technology firms, exporters, outsourcing providers and media outlets, <a href="https://www.dhakatribune.com/bangladesh/352554/broadband-internet-restored-in-limited-areas-after"><u>according to</u></a> the state minister for post, telecommunication and information technology.</p><p>Ahead of this nationwide shutdown, we observed outages across several Bangladeshi network providers, perhaps foreshadowing what was to come. 
At <a href="https://radar.cloudflare.com/as24389"><u>AS24389 (Grameenphone)</u></a>, a complete Internet outage started at 01:30 local time on July 18 (19:30 UTC on July 17), with a total loss of both <a href="https://radar.cloudflare.com/traffic/as24389?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> and <a href="https://radar.cloudflare.com/routing/as24389?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a>.</p><p>The outage at <a href="https://radar.cloudflare.com/as45245"><u>AS25245 (Banglalink)</u></a> started at 02:15 local time on July 18 (20:15 UTC on July 17) as both <a href="https://radar.cloudflare.com/traffic/as45245?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> and <a href="https://radar.cloudflare.com/routing/as45245?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a> dropped to zero.</p><p>At <a href="https://radar.cloudflare.com/as24432"><u>AS24432 (Robi Axiata)</u></a>, an Internet outage was observed starting around 06:30 local time on July 18 (00:30 UTC), with both <a href="https://radar.cloudflare.com/traffic/as24432?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> and <a href="https://radar.cloudflare.com/routing/as24432?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a> disappearing at that time.</p><p><a href="https://radar.cloudflare.com/traffic/as58715?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> at <a href="https://radar.cloudflare.com/as58715"><u>AS58715 (Earth Telecommunication)</u></a> began to fall at 18:00 local time on July 18 (12:00 UTC), reaching zero four hours later. <a href="https://radar.cloudflare.com/routing/as58715?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Announced IP address space</u></a> began to fall at 21:00 local time (15:00 UTC), and was completely gone by 21:25 local time (15:25 UTC).</p><p><a href="https://radar.cloudflare.com/as63526"><u>AS63526 (Carnival Internet)</u></a> was one of the last to fall before the complete shutdown, <a href="https://radar.cloudflare.com/traffic/as63526?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>losing traffic</u></a> at 20:45 local time (14:45 UTC), and seeing all of its <a href="https://radar.cloudflare.com/routing/as63526?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a> withdrawn over the following hour.</p><p>These mobile connectivity outages lasted from July 18 through July 28. Just a few days after connectivity was restored, <a href="https://www.business-standard.com/world-news/bangladesh-protests-internet-shutdown-curfew-imposed-97-dead-in-clashes-124080500205_1.html"><u>additional clashes between police and protestors</u></a> drove the government to <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/17105-bangladesh-switches-off-mobile-internet-again-as-protests-escalate-2.html"><u>order mobile Internet connectivity to be shut down</u></a> again. As shown in the graphs below, traffic on these mobile network providers dropped between 13:30 and 14:15 local time (07:30 to 08:15 UTC) on Sunday, August 4.</p><p>These protests ultimately led the government to order a full Internet shutdown in the country, with both traffic and announced IP address space dropping precipitously around 10:30 local time (04:30 UTC) on Monday, August 5. 
However, the shutdown appeared to be short-lived, as <a href="https://en.prothomalo.com/bangladesh/gm0o97gu3x"><u>broadband connectivity</u></a> began to recover around 13:20 local time (07:20 UTC), with <a href="https://en.prothomalo.com/bangladesh/aoczyp8xg8"><u>mobile connectivity</u></a> being restored around 14:00 local time (08:00 UTC).</p>
    <div>
      <h3>Iraqi Kurdistan</h3>
      <a href="#iraqi-kurdistan">
        
      </a>
    </div>
    <p>Both <a href="https://radar.cloudflare.com/iq"><u>Iraq</u></a> and Iraqi Kurdistan (the autonomous Kurdistan region in the northern part of the country) regularly implement government directed Internet shutdowns to prevent cheating on secondary and baccalaureate exams. Within Iraqi Kurdistan, we observed two sets of exam-related Internet shutdowns during the third quarter. The impacts of the shutdowns are visible on traffic from networks that operate within the region, as well as on the country-level graphs for Iraq.</p><p>The first round of shutdowns occurred in July, impacting <a href="https://radar.cloudflare.com/as59625"><u>AS59625 (KorekTel)</u></a>, <a href="https://radar.cloudflare.com/as21277"><u>AS21277 (Newroz Telecom)</u></a>, <a href="https://radar.cloudflare.com/as48492"><u>AS48492 (IQ Online)</u></a>, and <a href="https://radar.cloudflare.com/as206206"><u>AS206206 (KNET)</u></a> between 06:00 - 08:00 local time (03:00 - 05:00 UTC) on July 3, 7, 10, and 14. This is consistent with shutdowns observed in the <a href="https://blog.cloudflare.com/q2-2024-internet-disruption-summary/"><u>second quarter</u></a>, as well as in <a href="https://blog.cloudflare.com/exam-internet-shutdowns-iraq-algeria/"><u>June 2023</u></a>. None of the impacted networks experienced a drop in announced IP address space during these shutdowns.</p><p>The second set of shutdowns in Iraqi Kurdistan took place across multiple days during the back half of August. On August 17, 19, 21, 24, 26, 28, and 31, all four network providers were again impacted, as seen in the graphs below, with traffic dropping between 06:00 - 08:00 local time (03:00 - 05:00 UTC).</p>
    <div>
      <h3>Iraq</h3>
      <a href="#iraq">
        
      </a>
    </div>
    <p>In <a href="https://radar.cloudflare.com/iq"><u>Iraq</u></a>, a second round of exams for 12th graders resulted in over two weeks of regular Internet shutdowns across the country occurring between 06:00 - 08:00 local time (03:00 - 05:00 UTC) on multiple days between August 29 and September 16, intended to prevent cheating on <a href="https://www.facebook.com/Iraq.Ministry.of.Education/posts/pfbid08kbeG2VEaFPweRiH1ofDdRazpVFKnHA2tRXM6pjQgCsQUXmuCar3oDSVsaCnwUZil"><u>second ministerial exams for secondary education</u></a>. Both HTTP traffic and announced IP address space from Iraq dropped during these shutdowns, as seen in the graphs below.</p><p>(Note that the red annotation bar visible on September 11 &amp; 12 on both the country and network-level graphs below highlights an internal data pipeline issue, and is not associated with an Internet shutdown in Iraq.)</p><p>This round of government-directed shutdowns impacted multiple local network providers, including <a href="https://radar.cloudflare.com/as58322"><u>AS58322 (Halasat)</u></a>, <a href="https://radar.cloudflare.com/as51684"><u>AS51684 (AsiaCell)</u></a>, <a href="https://radar.cloudflare.com/as203214"><u>AS203214 (HulumTele)</u></a>, <a href="https://radar.cloudflare.com/as199739"><u>AS199739 (Earthlink)</u></a>, and <a href="https://radar.cloudflare.com/as59588"><u>AS59588 (ZAINAS)</u></a>. In reviewing the distribution of mobile device and desktop traffic at a network level, gaps were observed during the shutdowns on <a href="https://radar.cloudflare.com/traffic/as58322?dateStart=2024-08-28&amp;dateEnd=2024-09-17#mobile-vs-desktop"><u>AS58322</u></a> and <a href="https://radar.cloudflare.com/traffic/as199739?dateStart=2024-08-28&amp;dateEnd=2024-09-17#mobile-vs-desktop"><u>AS199739</u></a>, and to a lesser extent, <a href="https://radar.cloudflare.com/traffic/as203214?dateStart=2024-08-28&amp;dateEnd=2024-09-17#mobile-vs-desktop"><u>AS203214</u></a>, suggesting that these networks were completely offline, while AS56184 and AS59588 remained at least partially online. (This is also corroborated by complete or partial loss of announced IP address space across these networks during the shutdowns.)</p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>A first round of exam-related Internet shutdowns took place in <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> earlier this year, between May 26 and June 13, and was discussed in our <a href="https://blog.cloudflare.com/syria-iraq-algeria-exam-internet-shutdown"><u>Exam-ining recent Internet shutdowns in Syria, Iraq, and Algeria</u></a> blog post. A second set of exams, and the associated Internet shutdowns requested by the Ministry of Education, began on July 25 and ran through August 8, as specified in the schedule <a href="https://www.facebook.com/photo/?fbid=862569062570288&amp;set=a.449047400589125"><u>published by Syrian Telecom on its Facebook page</u></a>.</p><p>The length of the shutdowns varied by day: they all began at 07:00 local time (04:00 UTC), but the end times ranged between 09:45 - 10:30 local time (06:45 - 07:30 UTC). The graphs below show the impact at a country level, as well as to <a href="https://radar.cloudflare.com/as29256"><u>AS29256 (Syrian Telecom)</u></a>, the <a href="https://radar.cloudflare.com/routing/sy"><u>primary telecommunications provider within the country</u></a>.</p><p>These shutdowns were also covered in our August 1 blog post, <a href="https://blog.cloudflare.com/a-recent-spate-of-internet-disruptions-july-2024/"><i><u>A recent spate of Internet disruptions</u></i></a><i>.</i></p>
    <div>
      <h3>Mauritania</h3>
      <a href="#mauritania">
        
      </a>
    </div>
    <p>On August 12, a round of <a href="https://ami.mr/fr/archives/251895"><u>baccalaureate exams began</u></a> in <a href="https://radar.cloudflare.com/mr"><u>Mauritania</u></a>, and in an effort to <a href="https://akhbarwatan.net/%D9%85%D9%88%D8%B1%D9%8A%D8%AA%D8%A7%D9%86%D9%8A%D8%A7-%D9%82%D8%B7%D8%B9-%D8%A7%D9%84%D8%A5%D9%86%D8%AA%D8%B1%D9%86%D8%AA-%D8%A8%D8%B3%D8%A8%D8%A8-%D8%A7%D9%84%D9%85%D8%B3%D8%A7%D8%A8%D9%82%D8%A7/"><u>prevent student cheating on the exams</u></a>, the government instituted multiple Internet shutdowns that impacted several major mobile providers. Two shutdowns were observed on August 12, between 08:00 - 12:00 local time (08:00 - 12:00 UTC) and between 15:00 - 19:00 local time (15:00 - 19:00 UTC), and an additional one was observed on August 13, between 08:00 - 12:30 local time (08:00 - 12:30 UTC). Impacted network providers included <a href="https://radar.cloudflare.com/as37508"><u>AS37508 (Mattel)</u></a>, <a href="https://radar.cloudflare.com/as37541"><u>AS37541 (Chinguitel)</u></a>, and <a href="https://radar.cloudflare.com/as29544"><u>AS29544 (Mauritel)</u></a>. Announced IP address space for these networks remained unchanged during the shutdown periods, suggesting that mobile subscriber connectivity was disabled, as opposed to the networks effectively being disconnected from the Internet, as we have seen in other countries.</p><p>Exam-related Internet shutdowns are, unfortunately, not new to Mauritania, as authorities in the country also implemented them <a href="https://smex.org/mauritania-the-drawbacks-of-disrupting-mobile-internet-after-prisoners-escape/"><u>between 2017 and 2020</u></a>.</p>
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Eswatini (Swaziland)</h3>
      <a href="#eswatini-swaziland">
        
      </a>
    </div>
    <p>On July 14, MTN Eswatini (AS327765) informed customers via <a href="https://x.com/MTNEswatini/status/1812558000009163027"><u>a post on X</u></a> that “<i>connection to the internet and data services is currently intermittent, because of fiber cable breaks resulting from wildfires.</i>” This apparent connection disruption was visible in Cloudflare Radar between 19:30 and 20:15 local time (17:30 and 18:15 UTC).</p>
    <div>
      <h3>Cameroon</h3>
      <a href="#cameroon">
        
      </a>
    </div>
    <p>In <a href="https://radar.cloudflare.com/cm"><u>Cameroon</u></a>, a fiber cut that occurred on August 4 during sanitation work disrupted mobile connectivity for Cameroon Telecommunications (<a href="https://radar.cloudflare.com/as15964"><u>AS15964 (Camtel)</u></a>) customers for over half a day. According to a (translated) <a href="https://x.com/Camtelonline/status/1820133286058062079"><u>post on X from Camtel</u></a>, “<i>We inform you that due to the sanitation work carried out in the city of Yaoundé, at the place called Cradat, our Voice and Data services have been temporarily interrupted on the entire mobile network.</i>” The observed disruption occurred between 03:00 - 16:30 local time (02:00 - 15:30 UTC). Although it initially started during a time when traffic was lower overnight anyway, both <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as15964&amp;dt=2024-08-04_2024-08-04&amp;timeCompare=2024-07-28"><u>request</u></a> and <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=as15964&amp;dt=2024-08-04_2024-08-04&amp;timeCompare=2024-07-28"><u>bytes</u></a> traffic remained lower than the same time a week prior during the duration of the disruption.</p>
    <div>
      <h3>Liberia</h3>
      <a href="#liberia">
        
      </a>
    </div>
    <p>The <a href="https://radar.cloudflare.com/lr"><u>Liberia</u></a> Telecommunications Authority <a href="https://www.facebook.com/TelecommunicationsAuthorityLIBERA/posts/pfbid0Ryktd7oPg1c8UYc1kAiDWo8aQPK3uUADDkuUYgSdeZtC2tYn4JiCYr66oZQoRBc2l"><u>posted an announcement to their Facebook page</u></a> on August 21 noting that “<i>We have been informed by the CCL that the ACE Cable is experiencing interruptions.</i>” (The <a href="https://ace-submarinecable.com/en/submarine-cable/"><u>Africa Coast to Europe (ACE) submarine cable</u></a> connects multiple countries along the West Coast of Africa to Portugal and Europe.) The announcement further noted that the first signs of interruption occurred at 01:00 local time (and UTC), and that <a href="https://radar.cloudflare.com/as37410"><u>Lonestar Cell MTN (AS37410)</u></a> was among the providers that had been “gravely affected” by the cut.</p><p>We observed traffic on Lonestar Cell MTN dropping just after 01:00, in line with the announcement. The network experienced a complete outage lasting over a day and a half, before traffic started to recover at 14:00 local time (and UTC) on August 22. In a <a href="https://www.facebook.com/LonestarCellMTN/posts/pfbid02xE2qxVEt1XnCHgqftjkj34KQssez13PoGTjSGoBAH688g6m4G7XCLHM58SLBCW8Ll"><u>Facebook post</u></a> on August 22, Lonestar Cell MTN confirmed that Internet service had been restored, and that customer accounts would be credited with 500 MB of data for free.</p>
    <div>
      <h3>Niger</h3>
      <a href="#niger">
        
      </a>
    </div>
    <p>A September 7 <a href="https://x.com/airtelniger/status/1832430266222096571"><u>post on X from Airtel Niger</u></a> alerted customers to Internet service disruptions caused by cuts on international fiber optic cables. As a land-locked country, <a href="https://radar.cloudflare.com/ne"><u>Niger</u></a> is dependent on terrestrial connections to networks in neighboring countries, but it isn’t clear which connection or country Airtel Niger’s post was referencing.</p><p>Two significant Internet disruptions were observed around the time of Airtel Niger’s post that we believe are related to the referenced fiber cuts. The first occurred between 18:00 - 21:00 local time (17:00 - 20:00 UTC) on September 6, visible at a country level and at a network level as well on <a href="https://radar.cloudflare.com/as37531"><u>AS37531 (Airtel Niger)</u></a> and <a href="https://radar.cloudflare.com/as37233"><u>AS37233 (Orange Niger / Zamani Telecom)</u></a>. The second disruption occurred between 10:45 - 12:00 local time (09:45 - 11:00 UTC) on September 7, visible at a country level as well as on those two networks. </p>
    <div>
      <h3>Haiti</h3>
      <a href="#haiti">
        
      </a>
    </div>
    <p>Internet disruptions related to submarine cable failures often take a significant amount of time to resolve because of the challenges repair crews face in getting to, and accessing, the damaged portion of the cable, as it is frequently located deep underwater in the middle of an ocean. A September 14 submarine cable failure that impacted <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> also lasted for over a week, but due to an access challenge of a rather different nature.</p><p>A significant loss of traffic on Digicel Haiti was first observed at 08:00 local time (12:00 UTC) on September 14. On September 16, Digicel Haiti <a href="https://x.com/DigicelHT/status/1835774732743876713/photo/1"><u>posted a press release</u></a> confirming that since September 14, a failure had been detected on an international submarine cable belonging to Cable and Wireless, and that the cable damage occurred at Kaliko Beach Club (the property is <a href="https://www.haitilibre.com/en/news-43221-haiti-digicel-failure-detected-on-an-international-submarine-cable-against-a-backdrop-of-litigation.html"><u>reportedly</u></a> used as a cable entry point). Digicel noted that their technicians went to the scene of the damage immediately, but were denied access, apparently because of a business dispute dating back to 2021. The release also explained that technical teams had taken temporary steps to ensure the continuity of essential services, which prevented the incident from resulting in a complete loss of connectivity. On September 22, a subsequent <a href="https://x.com/DigicelHT/status/1837875515148898513/photo/1"><u>press release</u></a> posted by Digicel Haiti announced the restoration of Internet services as of 02:00 local time (06:00 UTC), and referenced vandalism as the cause of the cable damage.</p>
    <div>
      <h3>Kyrgyzstan</h3>
      <a href="#kyrgyzstan">
        
      </a>
    </div>
    <p>Reported damage to the “<a href="https://akipress.com/news:797695:Internet_disruptions_in_Kyrgyzstan_caused_by_damage_of_main_communication_channel/"><u>backbone wire</u></a>” or “<a href="https://economist-kg.translate.goog/novosti/2024/09/25/akniet-obiasnil-prichinu-probliem-s-dostupom-k-intiernietu-v-bishkiekie-i-chuiskoi-oblasti/?_x_tr_sl=auto&amp;_x_tr_tl=en&amp;_x_tr_hl=en&amp;_x_tr_pto=wapp"><u>main cable</u></a>” of an <a href="https://kaktus-media.translate.goog/doc/510016_propal_internet_y_nekotoryh_sotovyh_operatorov_i_provayderov._pochemy.html?_x_tr_sl=auto&amp;_x_tr_tl=en&amp;_x_tr_hl=en&amp;_x_tr_pto=wapp"><u>upstream provider</u></a> resulted in a brief Internet outage on September 25 for <a href="https://radar.cloudflare.com/kg"><u>Kyrgyzstan</u></a> Internet provider <a href="https://radar.cloudflare.com/as50223"><u>Megacom (AS50223)</u></a>. <a href="https://radar.cloudflare.com/as12389"><u>AS12389 (Rostelecom)</u></a> is <a href="https://radar.cloudflare.com/routing/as50223"><u>listed</u></a> as Megacom’s only upstream provider.</p><p>The outage lasted for only an hour, between 15:45 and 16:45 local time (09:45 - 10:45 UTC), dropping both traffic and announced IP address space to zero. At a country level, traffic dropped as much as 72% as compared to the previous week. Given the complete loss of both traffic and IP address space, the damage likely occurred on the connection between Megacom and Rostelecom.</p>
    <div>
      <h2>Severe weather</h2>
      <a href="#severe-weather">
        
      </a>
    </div>
    <p>An active hurricane season during July, August, and September brought multiple hurricanes whose infrastructure damage disrupted Internet connectivity in multiple places across the Caribbean and the Southeastern United States.</p>
    <div>
      <h3>Grenada &amp; Saint Vincent and the Grenadines</h3>
      <a href="#grenada-saint-vincent-and-the-grenadines">
        
      </a>
    </div>
    <p>At the start of the third quarter, <a href="https://radar.cloudflare.com/gd"><u>Grenada</u></a> and <a href="https://radar.cloudflare.com/vc"><u>Saint Vincent and the Grenadines</u></a> both suffered significant damage from Hurricane Beryl, <a href="https://www.usatoday.com/story/news/nation/2024/07/03/hurricane-beryl-destruction-islands/74296817007/"><u>reportedly</u></a> causing destruction of infrastructure, buildings, agriculture, and the natural environment.</p><p>On July 1, traffic from Grenada dropped significantly at 10:00 local time (14:00 UTC), just ahead of <a href="https://www.cnn.com/2024/07/01/weather/hurricane-beryl-caribbean-landfall-monday/index.html"><u>landfall</u></a> on Grenada’s Carriacou Island. The most significant impacts to traffic were seen for approximately the first 24 hours, though traffic did not return to expected pre-storm levels until around 10:00 local time (14:00 UTC) on July 5.</p><p>Internet traffic in Saint Vincent and the Grenadines was also disrupted by Hurricane Beryl, also falling at 10:00 local time (14:00 UTC). Similar to Grenada, the most significant impact was seen in the first 24 hours, with consistent gradual recovery seen after that time. However, traffic did not return to expected pre-storm levels until July 11.</p>
    <div>
      <h3>Jamaica</h3>
      <a href="#jamaica">
        
      </a>
    </div>
    <p>As Hurricane Beryl continued across the Caribbean, it <a href="https://x.com/weatherchannel/status/1808576720234008765"><u>passed Jamaica on July 3</u></a>. The damage it caused impacted Internet connectivity on the island, with traffic dropping significantly around 14:00 local time (19:00 UTC). As the graph below shows, the disruption was preceded by higher than normal traffic volumes, presumably due to residents looking for information about Beryl. The disruption lasted nearly a week, with traffic returning to expected levels on July 10.</p>
    <div>
      <h3>U.S. Virgin Islands</h3>
      <a href="#u-s-virgin-islands">
        
      </a>
    </div>
    <p>The following month, damage from Tropical Storm Ernesto caused <a href="https://x.com/VIWAPA/status/1824110275710091527"><u>power outages across the U.S. Virgin Islands</u></a>, resulting in disruptions to Internet connectivity. Traffic from the islands dropped precipitously at 22:00 local time on August 13 (02:00 UTC on August 14) and remained lower for over two days, before returning to expected pre-storm levels around 11:00 local time (15:00 UTC) on August 16.</p>
    <div>
      <h3>Bermuda</h3>
      <a href="#bermuda">
        
      </a>
    </div>
    <p>Over the course of the following few days, Ernesto strengthened from a tropical storm into a hurricane, but had weakened by the time it hit <a href="https://radar.cloudflare.com/bm"><u>Bermuda</u></a> on August 16/17. In this case, damage was <a href="https://www.reuters.com/business/environment/hurricane-ernesto-weakens-still-dangerous-it-closes-bermuda-2024-08-17/"><u>reportedly</u></a> limited to power outages, downed trees, and flooding, but even this limited damage disrupted Internet connectivity on the island. As the storm made landfall on the island, traffic levels dropped over 80% at 22:00 local time on August 16 (01:00 UTC on August 17). Traffic levels remained depressed for about two and a half days, recovering to expected levels around 09:00 local time (12:00 UTC) on August 19.</p>
    <div>
      <h3>Nepal</h3>
      <a href="#nepal">
        
      </a>
    </div>
    <p><a href="https://www.dw.com/en/nepal-floods-landslides-leave-at-least-151-dead/a-70354640"><u>Heavy rains in Nepal</u></a> at the end of September resulted in flooding and landslides across much of the country, which in turn resulted in power outages and Internet disruptions. One such disruption believed to be associated with the impacts of the storm was observed on September 28, when <a href="https://radar.cloudflare.com/as23752"><u>AS23752 (Nepal Telecom)</u></a>, <a href="https://radar.cloudflare.com/as45650"><u>AS45650 (Vianet)</u></a>, <a href="https://radar.cloudflare.com/as139922"><u>AS139922 (Dishhome)</u></a>, and <a href="https://radar.cloudflare.com/as17501"><u>AS17501 (Worldlink)</u></a> all saw traffic drop 50 - 70% between 14:15 - 16:00 local time (08:30 - 10:15 UTC).</p>
    <div>
      <h3>United States</h3>
      <a href="#united-states">
        
      </a>
    </div>
    <p>A disruption to traffic from <a href="https://radar.cloudflare.com/as11427"><u>AS11427 (Charter Communications/Spectrum)</u></a> in Texas that occurred between 12:30 and 19:30 local time on July 9 (17:30 - 00:30 UTC) was caused by “<i>a third-party infrastructure issue caused by the impact of Hurricane Beryl</i>”, according to a July 9 <a href="https://x.com/Ask_Spectrum/status/1810804196112806016"><u>post on X</u></a> from the provider. Spectrum <a href="https://x.com/Ask_Spectrum/status/1810735748410396680"><u>acknowledged the issue</u></a> shortly after it began, and <a href="https://x.com/Ask_Spectrum/status/1810851153053118568"><u>followed up again</u></a> after service had been restored.</p><p>Hurricane Helene <a href="https://www.wistv.com/2024/10/03/reviewing-hurricane-helenes-destructive-path-through-southeast/"><u>made landfall in northern Florida</u></a> as a Category 4 storm late in the evening (local time) on September 26, and over the following hours and days, <a href="https://www.usatoday.com/story/graphics/2024/09/29/hurricane-helene-damage-maps/75440587007/"><u>continued north</u></a> through Georgia, South Carolina, and North Carolina, and into Tennessee. Even as it weakened, it caused historic flooding and damage to roads, homes, power lines, and telecommunications infrastructure. Below, we review the traffic impacts observed at a state level in three of the most impacted states, as well as exploring the impact at a network level for selected providers. (<a href="https://www.kentik.com/blog/author/doug-madory/">Doug Madory at Kentik</a> published an excellent <a href="https://www.kentik.com/blog/hurricane-helene-devastates-network-connectivity-in-parts-of-the-south/"><u>blog post exploring the impact of Helene</u></a> from the perspective of their data, and the networks referenced below were informed by that post.)</p>
    <div>
      <h4>Georgia</h4>
      <a href="#georgia">
        
      </a>
    </div>
    <p>Helene entered Georgia early morning on Friday, September 27, and by midday (local time), peak traffic was approximately 20% lower than peak levels seen in the days ahead of the storm. (The lower peaks on September 28 &amp; 29 are likely due to it being a weekend.) At a state level, peak traffic remained lower over the following week, with more recovery seen heading into the week of October 6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aYsyZNt5qCWqJhgK8yt1g/0f8be9f2ed8c2ab5121caef9b8e079ff/SEVERE_WEATHER_-_UNITED_STATES_-_Helene_-_Georgia.png" />
          </figure><p>One of the most significantly impacted network providers in Georgia was <a href="https://radar.cloudflare.com/as11240?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS11240 (ATC Broadband)</u></a>, which saw traffic start to drop around 22:00 local time on September 26 (02:00 UTC on September 27). Subscribers and customers experienced a near-complete outage until around 08:00 local time on September 30 (12:00 UTC), when traffic volumes slowly started to recover. The normal diurnal traffic pattern became clearer in the following days, with peak traffic levels continuing to increase over the next week as well.</p><p>Other network providers in Georgia that experienced significant impacts include <a href="https://radar.cloudflare.com/as400511?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS400511 (Clearwave Fiber)</u></a>, <a href="https://radar.cloudflare.com/as394473?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS394473 (Brantley Telephone Company)</u></a>, <a href="https://radar.cloudflare.com/as40285?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS40285 (Northland Cable Television)</u></a>, <a href="https://radar.cloudflare.com/as15313?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS15313 (Pembroke Telephone Company)</u></a>, and <a href="https://radar.cloudflare.com/as397118?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS397118 (Glenwood Telephone Company)</u></a>.</p>
    <div>
      <h4>South Carolina</h4>
      <a href="#south-carolina">
        
      </a>
    </div>
    <p>The midday traffic peak on September 27 in South Carolina reached just 65% of the peak levels seen on the preceding days, and the peaks remained lower over the following two weekend days. Traffic remained somewhat lower during the week following Helene, with peak increases becoming more evident the week of October 6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40euoNqEw8bwaAqVgmsmaQ/ccf5c7114e26a85f6445ce9eaf21b00c/SEVERE_WEATHER_-_UNITED_STATES_-_Helene_-_South_Carolina.png" />
          </figure><p>At <a href="https://radar.cloudflare.com/as19212?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS19212 (Piedmont Rural Telephone)</u></a> in South Carolina, traffic began to fall rapidly around midnight local time on September 27 (04:00 UTC), reaching a state of near complete outage over the next eight hours. A gradual recovery is visible over the following several days, with a more regular pattern becoming evident on October 1, with rapid growth over the following week, accelerating towards the end of the week.</p><p>Other network providers in South Carolina, including <a href="https://radar.cloudflare.com/as397068?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS397068 (Carolina Connect)</u></a>, <a href="https://radar.cloudflare.com/as10279?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS10279 (West Carolina Communications)</u></a>, <a href="https://radar.cloudflare.com/as20222?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS20222</u></a> &amp; <a href="https://radar.cloudflare.com/as21898?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS21898 (TruVista)</u></a>, and <a href="https://radar.cloudflare.com/as14615?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS14615 (Rock Hill Telephone)</u></a>, also experienced significant disruptions to connectivity in the wake of Helene.</p>
    <div>
      <h4>North Carolina</h4>
      <a href="#north-carolina">
        
      </a>
    </div>
    <p>Although a drop in traffic is visible in the graph for North Carolina on September 27, it occurs after a midday peak in line with previous days, and the magnitude is not as significant as that seen in South Carolina and Georgia. Traffic peaks over the following week are in line with the week preceding Helene’s arrival, with higher peaks seen the week of October 6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ggc01nO3m5J85jNwm5rSF/13af760fe7a839472ae5c14116042f9c/SEVERE_WEATHER_-_UNITED_STATES_-_Helene_-_North_Carolina.png" />
          </figure><p>North Carolina providers <a href="https://radar.cloudflare.com/as53488?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS53488 (Morris Broadband)</u></a> and <a href="https://radar.cloudflare.com/as53274?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS53274 (Skyrunner)</u></a> both experienced multi-day disruptions, likely related to damage from Helene. However, these disruptions took Morris Broadband completely offline several times over the course of a week — the announced IP address space graph below shows three distinct drops to zero, aligning with outages visible in the traffic graph, when the network was effectively disconnected from the Internet. A similar but less severe pattern was seen at Skyrunner, which lost 75-80% of announced IP address space for a two-day period covering September 27-29, aligning with an outage visible in the associated traffic graph.</p><p>Other impacted network providers in North Carolina included <a href="https://radar.cloudflare.com/as22191?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS22191 (Wilkes Communications)</u></a> and <a href="https://radar.cloudflare.com/as23118?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS23118 (Skyline Telephone)</u></a>.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Venezuela</h3>
      <a href="#venezuela">
        
      </a>
    </div>
    <p>A nationwide power outage in <a href="https://radar.cloudflare.com/ve"><u>Venezuela</u></a> on August 30 was, <a href="https://www.reuters.com/business/energy/venezuelas-capital-caracas-other-regions-face-power-outage-2024-08-30/"><u>according to President Nicolás Maduro</u></a>, the result of an attack on the Guri Reservoir, Venezuela's largest hydroelectric project. A <a href="https://www.reuters.com/business/energy/venezuelas-capital-caracas-other-regions-face-power-outage-2024-08-30/"><u>published report</u></a> indicated that all 24 of the country's states reported a total or partial loss of electricity supply. The loss of power unsurprisingly caused an Internet disruption, with country-level traffic dropping 82%, starting around 04:45 local time (08:45 UTC). Traffic began to increase as electricity returned to various parts of the country throughout the day, and returned to expected levels just after midnight local time on August 31 (04:00 UTC). </p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>On August 30, Kenya Power Care <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid0krBvZqWT7AfF8HjTPdm9Y84QmmkgfUzPjhtgxZjzEpyVqRLFS6VBt5vR43s5dxiHl"><u>posted a Customer Alert on its Facebook page</u></a>, issued at 21:57 local time (18:57 UTC), stating that “<i>We have lost power supply to various parts of the country except North Rift region and sections of Western region.</i>” Approximately a half hour before that alert, Kenya’s Internet traffic began to drop, falling as much as 61%. Just two hours later, Kenya Power Care <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid0m4kP2NwdiDPnH4UpWH39QkpLANTWc6SR3bpiHxnwCUdBvwwou7p1skfaWbghRFWml"><u>posted a follow up</u></a>, stating “<i>Following the partial outage affecting several parts of the country this evening, we are pleased to report that power supply has now been restored to the entire Western region, as well as parts of Central Rift, South Nyanza, and Nairobi regions.</i>” However, traffic did not return to expected levels for several more hours, taking until 06:00 local time (03:00 UTC).</p><p>A week later, on September 6, Kenya Power Care <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid02BcJVt9uu1N3mmGzf9mivyXev4FSJVpPZ5ni1VkZC9WSdYyYyk7MCMtignBPzcVnyl"><u>posted another similar Customer Alert</u></a>, noting that “<i>We are experiencing a power outage affecting several parts of the country, except sections of North Rift and Western regions.</i>” This alert was issued at 09:20 local time (06:20 UTC), and follows a drop in Internet traffic that started around 09:00 local time (06:00 UTC). Traffic dropped approximately 45% during this power outage, and returned to expected levels around 16:00 local time (13:00 UTC). Traffic recovery aligns with a <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid02VzrAMQeuTrmfyywXeB7qXFyAmeM1eEQCBX6dvY3DHbyfUoTjgTJATcg9cToBk7zal"><u>subsequent Customer Alert posted on Facebook</u></a>, where Kenya Power Care stated “<i>We are glad to report that normal electricity supply was restored across the country as at 3:49pm”.</i></p><p>A statement from Energy and Petroleum Cabinet Secretary Opiyo Wandayi, <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid02Nck9kx6NFmvFRdLEpzPxk1UPW3HtNw41PHNhHd3PMR2Y73BpkMALZmNU3mkar8DPl"><u>shared on Facebook by Kenya Power Care</u></a>, explained the cause of the power outage: “<i>Today, Friday 6th September 2024 at 8.56 am, the 220kV High Voltage Loiyangalani transmission line tripped at Suswa substation while evacuating 288MW from Lake Turkana Wind Power (LTWP) plant. This was followed by a trip on the Ethiopia – Kenya 500kV DC interconnector that was then carrying 200MW, resulting to a total loss of 488MW…</i>” </p>
    <div>
      <h3>Ecuador</h3>
      <a href="#ecuador">
        
      </a>
    </div>
    <p>According to a (translated) September 7 <a href="https://x.com/OperadorCenace/status/1832431918563872871"><u>post on X from CENACE</u></a>, the national electricity operator in <a href="https://radar.cloudflare.com/ec"><u>Ecuador</u></a>, “<i>We inform the public that due to a fault in the Molino substation bar, which is connected to the Paute generation, there has been a power outage in some provinces of the country. Cenace's technical team, in coordination with the distribution companies, is working to gradually restore electrical service. It is estimated that it will take 3 to 4 hours maximum for the supply to return to normal.</i>” The post was published at 09:53 local time (14:53 UTC), approximately an hour after Internet traffic from the country began to drop. Traffic returned to expected levels just under four hours later, at around 12:30 local time (17:30 UTC), in line with CENACE’s predicted time for power to be fully restored.</p><p>On September 18/19, the first of several planned nightly power outages to enable needed grid maintenance in Ecuador disrupted Internet connectivity. Traffic dropped by over 60% as compared to the same time the prior week starting around 21:30 local (02:30 UTC), with the power outages <a href="https://www.americaeconomia.com/en/node/288653"><u>reportedly</u></a> scheduled for 22:00 - 06:00 local time. Internet traffic recovered to expected levels around 06:00 local time (11:00 UTC) as power was restored. Similar power cuts were <a href="https://ec.usembassy.gov/alert-series-of-nationwide-overnight-power-outages-and-curfews/"><u>reportedly planned from September 23 to September 27</u></a>, but these power outages did not appear to impact <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=ec&amp;dt=2024-09-22_2024-09-28&amp;timeCompare=1"><u>traffic levels in Ecuador as compared to the previous week</u></a>. </p>
    <div>
      <h3>Senegal</h3>
      <a href="#senegal">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/sn"><u>Senegal’s</u></a> power company, Senelec, <a href="https://x.com/Senelecofficiel/status/1834245424787394629"><u>posted a communiqué on X</u></a> on September 12 that stated (translated) “<i>Senelec informs its valued customers that an incident that occurred this morning at the Hann substation resulted in the loss of the OMVS interconnected network and disruptions to electricity distribution.</i>” This disruption to electricity distribution also resulted in a disruption to Internet traffic, which dropped sharply at 13:00 local time (13:00 UTC), falling as much as 80%. Traffic recovered to expected levels by 20:00 local time (20:00 UTC) around the same time that Senelec <a href="https://x.com/Senelecofficiel/status/1834320225954922533"><u>posted a followup about the incident</u></a> that stated (translated) “<i>Effective restoration of electricity supply in all localities.</i>”</p>
    <div>
      <h2>Maintenance</h2>
      <a href="#maintenance">
        
      </a>
    </div>
    
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>As we discussed above, Internet users in <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> were impacted by an exam-related Internet shutdown from 07:00 - 10:15 local time (04:00 - 07:15 UTC) on July 30. However, just an hour after connectivity was restored, another disruption occurred, as seen in both the traffic and announced IP address space graphs below. According to a (translated) <a href="https://www.facebook.com/photo?fbid=868145108679350&amp;set=a.449047403922458"><u>Facebook post from Syrian Telecom</u></a>, “...<i>during the periodic maintenance of one of the air conditioners in one of the technical halls, an explosion occurred, which caused the internet circuits to be temporarily out of service.</i>” Traffic remained depressed for approximately eight hours, recovering to expected levels around 19:00 local time (16:00 UTC).</p>
    <div>
      <h2>Cyberattack</h2>
      <a href="#cyberattack">
        
      </a>
    </div>
    
    <div>
      <h3>Russia</h3>
      <a href="#russia">
        
      </a>
    </div>
    <p>Roskomnadzor, Russia’s Internet regulator, <a href="https://t.me/roskomnadzorro/1897"><u>blamed</u></a> a brief disruption in traffic observed in <a href="https://radar.cloudflare.com/ru"><u>Russia</u></a> and on <a href="https://radar.cloudflare.com/as12389"><u>AS12389 (Rostelecom)</u></a> on August 21 on a distributed denial-of-service (DDoS) attack that targeted Russian telecommunications operators. The disruption was short-lived, lasting from around 13:45 until 14:30 Moscow time (10:45 - 11:30 UTC). Roskomnadzor <a href="https://www.uawire.org/massive-internet-outage-in-russia-kremlin-s-attempt-to-block-messaging-apps-causes-nationwide-disruption"><u>subsequently stated</u></a> "<i>As of 3 PM Moscow time, the attack has been repelled, and services are operating normally.</i>" The disruption <a href="https://www.barrons.com/news/large-scale-outages-hit-telegram-whatsapp-in-russia-3a08695c"><u>reportedly</u></a> impacted messaging services Telegram and WhatsApp, <a href="https://www.uawire.org/massive-internet-outage-in-russia-kremlin-s-attempt-to-block-messaging-apps-causes-nationwide-disruption"><u>as well as</u></a> Wikipedia, Yandex, VKontakte, telecom support services, and mobile banking apps. Some experts <a href="https://www.uawire.org/massive-internet-outage-in-russia-kremlin-s-attempt-to-block-messaging-apps-causes-nationwide-disruption"><u>questioned the official explanation</u></a>, suggesting instead that the disruption was due to <a href="https://therecord.media/russia-blames-websites-apps-outages-on-ddos"><u>centralized interference from Roskomnadzor</u></a>.</p>
    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Palestine</h3>
      <a href="#palestine">
        
      </a>
    </div>
    <p>We have covered Internet disruptions related to the ongoing conflict in Gaza multiple times since October 2023, both on <a href="https://x.com/search?q=gaza%20internet%20(from%3Acloudflareradar)&amp;src=typed_query&amp;f=live"><u>Cloudflare Radar’s presence on X</u></a>, and on the Cloudflare blog (<a href="https://blog.cloudflare.com/internet-traffic-patterns-in-israel-and-palestine-following-the-october-2023-attacks/"><u>1</u></a>, <a href="https://blog.cloudflare.com/q4-2023-internet-disruption-summary/"><u>2</u></a>, <a href="https://blog.cloudflare.com/q1-2024-internet-disruption-summary/"><u>3</u></a>). In many of these cases, Paltel (AS12975) has posted notices on social media regarding service disruptions and outages. On September 8, <a href="https://www.facebook.com/paltel.970/posts/pfbid036YptxzF77Rk5U7tVGT5Xh4Yx4897BVoeb4qsZNhGkLh1XxLCTLMzDjp1RLAkBfJHl"><u>Paltel posted a message on its Facebook page</u></a>, stating (translated) “<i>We regret to announce the suspension of home internet services in the central and southern areas of the Gaza Strip, due to the ongoing aggression.</i>”</p><p>Within the Gaza, Rafah, and Deir al-Balah Governorates, we observed a sharp drop in traffic at 18:00 local time (16:00 UTC). The impact appeared to be most significant in Rafah and Deir al-Balah. Traffic returned to expected levels around 23:00 local time (21:00 UTC), and Paltel <a href="https://www.facebook.com/paltel.970/posts/pfbid0hJxQReZimYRnNxbMNeyscVCtwhS2wnA4Us6fucJ4WntFuQeS3BAKqhMWxJJqFzaVl"><u>confirmed the service restoration in a subsequent Facebook post</u></a>, stating (translated) “<i>We would like to announce the return of home Internet services in central and southern Gaza Strip to the way it was before it was interrupted hours ago.</i>”</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2QELKmNYaZC5NmvkTDreST/f913dde97df36d81772756d528745980/MILITARY_ACTION_-_PALESTINE_-_Gaza_-_Gaza_Governorate.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zALCKZTWs6E62cptuxPjq/cd71ff38103f4574b7d2f6e3c3b66ab6/MILITARY_ACTION_-_PALESTINE_-_Gaza_-_Rafah_Governorate.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7yvARj41pkM60UhcfNtEEC/4963c76fe2ef45802211ae2b6ebbe5ff/MILITARY_ACTION_-_PALESTINE_-_Gaza_-_Deir_al-Balah_Governorate.png" />
          </figure>
    <div>
      <h3>Lebanon</h3>
      <a href="#lebanon">
        
      </a>
    </div>
    <p><a href="https://www.cnn.com/world/live-news/israel-lebanon-war-hezbollah-09-27-24#cm1lbrhcd001k3b6mxt9ij3lb"><u>Israeli airstrikes targeting the Lebanese capital of Beirut</u></a> on September 28 likely knocked local network provider <a href="https://radar.cloudflare.com/as42852"><u>Solidere (AS42852)</u></a> offline for several hours. The graph below shows a loss of traffic starting around 12:15 local time (10:15 UTC), at the same time a complete loss of announced IP address space occurred. Most of Solidere’s IP address space started to get announced again at 14:45 local time (12:45 UTC), and a slight increase in traffic was seen at that time as well. Traffic levels fully recovered just after 18:00 local time (16:00 UTC), and announced IP address space had stabilized by that time as well. </p>
    <div>
      <h2>Fire</h2>
      <a href="#fire">
        
      </a>
    </div>
    
    <div>
      <h3>Algeria</h3>
      <a href="#algeria">
        
      </a>
    </div>
    <p>A fire near a data center in Blida Province, <a href="https://radar.cloudflare.com/dz"><u>Algeria</u></a>, disrupted connectivity on AS327931 (Djezzy) at 13:00 local time (12:00 UTC) on July 24. According to a (translated) <a href="https://x.com/djezzy/status/1816272546284855678"><u>X post from Djezzy</u></a>, “<i>Djezzy announced fluctuations in its services in some areas of the country, as it was a victim of a fire that broke out on Wednesday, July 24, 2024, in a warehouse of one of the companies located near its technical center in the state of Blida.</i>” The post from Djezzy predicted that “<i>97% of the sites will be restored by around 3 am [July 25]</i>”, but traffic did not return to expected levels until the end of the day on July 25.</p>
    <div>
      <h2>Unknown</h2>
      <a href="#unknown">
        
      </a>
    </div>
    
    <div>
      <h3>United States</h3>
      <a href="#united-states">
        
      </a>
    </div>
    <p>On Monday, September 30, customers on Verizon’s mobile network in multiple cities across the United States <a href="https://apnews.com/article/verizon-outage-sos-mode-phone-service-b03c9b8615e0650669339daa2eaa1713"><u>reported</u></a> experiencing a loss of connectivity. Impacted phones showed “SOS” instead of the usual bar-based signal strength indicator, and customers complained of an inability to make or receive calls on their mobile devices. Although initial reports of connectivity problems started around 09:00 ET (13:00 UTC), we didn’t see a noticeable change in request volume at an ASN level until about two hours later. <a href="https://radar.cloudflare.com/as6167"><u>AS6167 (CELLCO)</u></a> is the <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> used by Verizon for its mobile network.</p><p>Just before 12:00 ET (16:00 UTC), Verizon <a href="https://x.com/VerizonNews/status/1840780785084985777"><u>published a social media post acknowledging the problem</u></a>, stating “We are aware of an issue impacting service for some customers. Our engineers are engaged, and we are working quickly to identify and solve the issue.” As the <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as6167&amp;dt=2024-09-30_2024-09-30&amp;timeCompare=2024-09-23"><u>graph</u></a> below shows, a slight decline (-5%) in HTTP traffic as compared to traffic at the same time a week prior is first visible around 11:00 ET (15:00 UTC), and request volume fell as much as 9% below expected levels at 13:45 ET (17:45 UTC).</p><p>Media reports listed cities including Chicago, Indianapolis, New York City, Atlanta, Cincinnati, Omaha, Phoenix, Denver, Minneapolis, Seattle, Los Angeles, and Las Vegas as being most impacted. Traffic graphs illustrating the impacts seen in these cities can be found in our <a href="https://blog.cloudflare.com/impact-of-verizons-september-30-outage-on-internet-traffic/"><i><u>Impact of Verizon’s September 30 outage on Internet traffic</u></i></a> blog post.</p><p>Traffic appeared to return to expected levels around 17:15 ET (21:15 UTC). At 19:18 ET (23:18 UTC), a <a href="https://x.com/VerizonNews/status/1840893978411221191"><u>social media post</u></a> from Verizon noted “<i>Verizon engineers have fully restored today's network disruption that impacted some customers. Service has returned to normal levels.</i>”</p>
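    <p>The week-over-week comparisons used throughout this analysis are straightforward to reproduce against any traffic time series. The Python sketch below illustrates the arithmetic; the timestamps, values, and simulated dip are hypothetical placeholders rather than Verizon measurements, standing in for whatever traffic data source is available.</p>
<pre><code class="language-python">import pandas as pd

# Hypothetical normalized request volumes at 15-minute resolution,
# spanning two weeks; real data would come from an exported series.
idx = pd.date_range("2024-09-23 00:00", "2024-09-30 23:45",
                    freq="15min", tz="UTC")
ts = pd.Series(1.0, index=idx)
ts.loc["2024-09-30 15:00":"2024-09-30 21:00"] *= 0.95  # simulated dip

# Relabel each observation one week later, so that at timestamp t,
# `prior` holds the value observed at t minus one week.
prior = ts.copy()
prior.index = prior.index + pd.Timedelta(weeks=1)

# Week-over-week change as a percentage; negative values indicate
# traffic running below the prior week's level (here, -5.0).
wow = (ts / prior.reindex(ts.index) - 1) * 100
print(wow.loc["2024-09-30 15:00":"2024-09-30 16:00"].round(1))
</code></pre>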
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>On July 31, <a href="https://radar.cloudflare.com/pk"><u>Pakistan</u></a> experienced a wide-scale Internet disruption that lasted approximately two hours, between 13:30 - 15:30 local time (08:30 - 10:30 UTC). Traffic only dropped ~45% at a country level, but <a href="https://radar.cloudflare.com/as17557"><u>AS17557 (PTCL)</u></a> experienced a near complete loss of traffic, while traffic at <a href="https://radar.cloudflare.com/as24499"><u>AS24499 (Telenor Pakistan)</u></a> dropped nearly 90%. Together, the two network providers serve an estimated nine million users, and are among the top five Internet service providers in the country.</p><p>The actual cause of the disruption is disputed. It was <a href="https://www.globalvillagespace.com/internet-outage-in-pakistan/"><u>reported</u></a> that the Pakistan Telecommunication Authority (PTA) attributed the disruptions to a technical glitch in the international submarine cable affecting the Pakistan Telecommunication Company Limited (PTCL) network. However, another <a href="https://incpak.com/national/internet-services-outtage-across-pakistan/"><u>published report</u></a> noted “According to our sources, the government’s latest firewall edition to block the content was misconfigured, resulting in Internet connectivity disruption.” Additional details can be found in our August 1 blog post, <a href="https://blog.cloudflare.com/a-recent-spate-of-internet-disruptions-july-2024/"><i><u>A recent spate of Internet disruptions</u></i></a><i>.</i></p>
    <div>
      <h3>United Kingdom</h3>
      <a href="#united-kingdom">
        
      </a>
    </div>
    <p>On August 14, subscribers of <a href="https://radar.cloudflare.com/gb"><u>UK</u></a> service provider <a href="https://radar.cloudflare.com/as25135"><u>Vodafone (AS25135)</u></a> <a href="https://www.dailymail.co.uk/sciencetech/article-13742755/Vodafone-network-crashes-internet.html"><u>reported problems</u></a> accessing both mobile and landline Internet connections. Starting around 11:00 local time (10:00 UTC), we observed traffic beginning to drop, ultimately falling 43% below levels seen at the same time the prior week. The disruption was fairly short-lived, as traffic returned to expected levels by 13:30 local time (12:30 UTC). Vodafone did not acknowledge the issue on social media, nor did it provide a public explanation for what caused the disruption.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Although Internet disruptions observed during the third quarter had a variety of underlying causes, those caused by power outages due to aging or insufficiently maintained electrical infrastructure are worth highlighting. Of course, widespread power outages always create a massive inconvenience for impacted populations, but over the last several years, as communication, entertainment, commerce, and more have become increasingly reliant on the Internet, the impact of these outages has become even more significant, because losing electrical power largely means losing Internet connectivity. Although mobile connectivity may still be available in some cases, it is decidedly not a complete replacement, not to mention that mobile devices will eventually need to be recharged. While addressing the underlying infrastructure issues requires non-trivial amounts of time, resources, and money, governments appear to be taking steps towards doing so.</p><p>Visit <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> for additional insights around Internet disruptions, routing issues, Internet traffic trends, security and attacks, and Internet quality. Follow us on social media at <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via e-mail.</p>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">3xoUhxvPcDFTiYCT9CjHhs</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Impact of Verizon’s September 30 outage on Internet traffic]]></title>
            <link>https://blog.cloudflare.com/impact-of-verizons-september-30-outage-on-internet-traffic/</link>
            <pubDate>Tue, 01 Oct 2024 00:32:00 GMT</pubDate>
            <description><![CDATA[ On Monday, September 30, customers on Verizon’s mobile network in multiple cities across the United States reported experiencing a loss of connectivity. HTTP request traffic data from Verizon’s mobile ASN (AS6167) showed nominal declines across impacted cities.
 ]]></description>
            <content:encoded><![CDATA[ <p>On Monday, September 30, 2024, customers on Verizon’s mobile network in multiple cities across the United States <a href="https://apnews.com/article/verizon-outage-sos-mode-phone-service-b03c9b8615e0650669339daa2eaa1713"><u>reported</u></a> experiencing a loss of connectivity. Impacted phones showed “SOS” instead of the usual bar-based signal strength indicator, and customers complained of an inability to make or receive calls on their mobile devices.</p><p><a href="https://radar.cloudflare.com/as6167"><u>AS6167 (CELLCO)</u></a> is the <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> used by Verizon for its mobile network. To better understand how the outage impacted Internet traffic on Verizon’s network, we took a look at HTTP request volume from AS6167 independent of geography, as well as traffic from AS6167 in various cities that were <a href="https://mashable.com/live/verizon-outage-live-updates"><u>reported</u></a> to be the most significantly impacted.</p><p>Although initial reports of connectivity problems started around 09:00 ET (13:00 UTC), we didn’t see a noticeable change in request volume at an ASN level until about two hours later. Just before 12:00 ET (16:00 UTC), Verizon <a href="https://x.com/VerizonNews/status/1840780785084985777"><u>published a social media post acknowledging the problem</u></a>, stating “<i>We are aware of an issue impacting service for some customers. Our engineers are engaged and we are working quickly to identify and solve the issue.</i>”</p><p>As the <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as6167&amp;dt=2024-09-30_2024-09-30&amp;timeCompare=2024-09-23"><u>Cloudflare Radar graph</u></a> below shows, a slight decline (-5%) in HTTP traffic as compared to traffic at the same time a week prior is first visible around 11:00 ET (15:00 UTC). Request volume fell as much as 9% below expected levels at 13:45 ET (17:45 UTC).</p><p>Just after 17:00 ET (21:00 UTC), Verizon <a href="https://x.com/VerizonNews/status/1840860310997254609"><u>published a second social media post</u></a> noting, in part, “<i>Verizon engineers are making progress on our network issue and service has started to be restored.</i>” Request volumes returned to expected levels around the same time, surpassing the previous week’s levels at 17:15 ET (21:15 UTC). At 19:18 ET (23:18 UTC), a <a href="https://x.com/VerizonNews/status/1840893978411221191"><u>social media post</u></a> from Verizon noted “Verizon engineers have fully restored today's network disruption that impacted some customers. Service has returned to normal levels.”</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fwaspr4MBTWT0zf36YEmf/86dfccdab0df85ea90edaa369520c23b/BLOG-2587_2.png" />
          </figure><p>Media reports listed cities including Chicago, Indianapolis, New York City, Atlanta, Cincinnati, Omaha, Phoenix, Denver, Minneapolis, Seattle, Los Angeles, and Las Vegas as being most impacted. In addition to looking at comparative traffic trends across the whole Verizon Wireless network, we also compared request volumes in the listed cities to the same time a week prior (September 23).</p><p>Declines in request traffic starting around 11:00 ET (15:00 UTC) are clearly visible in cities including Los Angeles, Seattle, Omaha, Denver, Phoenix, Minneapolis, Indianapolis, and Chicago. In contrast to other cities, Omaha’s request volume was already trending lower than last week heading into today’s outage, but its graph clearly shows the impact of today’s disruption as well. Omaha’s difference in traffic was the most significant, down approximately 30%, while other cities saw declines in the 10-20% range. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wg0aCH9hUSpECGnzs4I9s/c7eeef574e5ba07cf26c7c667a0a9239/BLOG-2587_3.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5kE1kBRS4tTyB5TPY8ain7/5a153b7c7052352e9e44c6d88b1fab1c/BLOG-2587_4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gg28UcLpcp92UYvLpykuO/715fa25efcfcdfc6dd18b8dd5b81b0ca/BLOG-2587_5.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rcGVaVuBsK7oMhPaAfOx3/3b785c2b4d296b6d459e5f50a5d69da7/BLOG-2587_6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WuYIbLm9MdkjN5C20JgG2/6784dca790e32ad199f2fb2122e21fcb/BLOG-2587_7.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YTBl2XO8YDDgFSzAoIAxo/0a9481add616709fa8d95762cfbfc71e/BLOG-2587_8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dt2aD35Ipq0eZKuQcF8PB/2f51bd06ee1ced5fab68a76cc71f70ce/BLOG-2587_9.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NS4Wq7KB0U60IkhyzhwTs/f69159fe84dd373c6905ebe56e570b6d/BLOG-2587_10.png" />
          </figure><p>Request traffic from Las Vegas initially appeared to exhibit a bit of volatility around 11:00 ET (15:00 UTC), but continued to track fairly closely to last week’s levels before exceeding them starting at 16:00 ET (20:00 UTC). Cincinnati was tracking slightly above last week’s request volume before the outage began, and tracked closely to the prior week during the outage period.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NatjBWUvwgaU2mRRHAEUV/9a21e0c3ce61eb4fc11a0fb1d239c411/BLOG-2587_11.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jqFrqdvO203QDKMsFUWrM/1d3e538cd3f00b7e522be922eb6b3664/BLOG-2587_12.png" />
          </figure><p>We observed week-over-week traffic increases during the outage period in New York and Atlanta. However, in both cities, traffic was already slightly above last week’s levels, and that trend continued throughout the day. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UnTO0qSD24rreW9LHRk0H/043ebdf603b7604c4018944ad5192f2c/BLOG-2587_13.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gDfClWDv7hqlgZE30HRjF/b427490198d2ba4fc74c6e84753de502/BLOG-2587_14.png" />
          </figure><p>Based on our observations, it appears that voice services on Verizon’s network may have been more significantly impacted than data services, as we saw some declines in request traffic across impacted cities, but none experienced full outages.</p><p>As of this writing (19:15 ET, 23:15 UTC), no specific information has been made available by Verizon regarding the root cause of the network problems. </p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">2dMuW3phhA9ClF8ROOAYPH</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[A global assessment of third-party connection tampering]]></title>
            <link>https://blog.cloudflare.com/connection-tampering/</link>
            <pubDate>Thu, 05 Sep 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare brings visibility to the practice of connection tampering as observed from our global network. ]]></description>
            <content:encoded><![CDATA[ <p>Have you ever made a phone call, only to have the call cut as soon as it is answered, with no obvious reason or explanation? This analogy is the starting point for understanding connection tampering on the Internet and its impact. </p><p>We have <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>found</u></a> that 20 percent of all Internet connections are abruptly closed before any useful data can be exchanged. Essentially, every fifth call is cut before being used. As with a phone call, it can be challenging for one or both parties to know what happened. Was it a faulty connection? Did the person on the other end of the line hang up? Did a third party intervene to stop the call?  </p><p>On the Internet, Cloudflare is in a unique position to help figure out when a third party may have played a role. Our global network allows us to identify patterns that suggest that an external party may have intentionally tampered with a connection to prevent content from being accessed. Although they are often hard to decipher, the ways connections are abruptly closed give clues to what might have happened. Sources of tampering generally do not try to hide their actions, which leaves hints of their existence that we can use to identify detectable ‘signatures’ in the connection protocol. As we explain below, there are other protocol features that are less likely to be spoofed and that point to third party actions. We can use these hints to build signature patterns of connection tampering that can be recognized.</p><p>To be clear, there are many reasons a third party might tamper with a connection. Enterprises may tamper with outbound connections from their networks to prevent users from interacting with spam or phishing sites. ISPs may use connection tampering to enforce court or regulatory orders that demand website blocking to address copyright infringement or for other legal purposes. Governments may mandate large-scale censorship and information control. </p><p>Despite the fact that everyone knows it happens, no other large operation has previously looked at the use of connection tampering at scale and across jurisdictions. We think that creates a notable gap in understanding what is happening in the Internet ecosystem, and that shedding light on these practices is important for transparency and the long-term health of the Internet. So today, we’re proud to share a view of global connection tampering practices.</p><p>The full technical details were recently peer-reviewed and published in “<a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>Global, Passive Detection of Connection Tampering</u></a>” at ACM SIGCOMM, with its <a href="https://youtu.be/RD73IgzQMFo?si=OWvNnlNNLalbhygV&amp;t=2984"><u>public presentation</u></a>. We’re also announcing a new <a href="https://radar.cloudflare.com/security-and-attacks#tcp-resets-and-timeouts"><u>dashboard</u></a> and <a href="https://developers.cloudflare.com/api/operations/radar-get-connection-tampering-summary"><u>API</u></a> on Cloudflare Radar that shows a near real-time view of specific connection timeout and reset events – the two mechanisms dominant in tampering experienced by users<b> </b>connecting to Cloudflare’s network globally.</p><p>To better understand our perspective, it helps to understand the nature of connection tampering and reasons we’re talking about it.</p>
    <div>
      <h2>Global insights for a global audience</h2>
      <a href="#global-insights-for-a-global-audience">
        
      </a>
    </div>
    <p>Evidence of connection tampering is visible in networks all around the world. We were initially shocked that, globally, about 20% of all connections to Cloudflare close unexpectedly before any useful data exchange occurs — consistent with connection tampering. Here is a snapshot of these anomalous connections seen by Cloudflare that, as of today, <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>we’re sharing on Radar</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nPz5Ulu2YS7eV6Hpmniwg/fa2537c949602d057dfa83a6a599f553/2544-2.png" />
          </figure><p><i><sub>via </sub></i><a href="https://radar.cloudflare.com/security-and-attacks?dateStart=2024-07-28&amp;dateEnd=2024-08-26#tcp-resets-and-timeouts"><i><sub>Cloudflare Radar</sub></i></a></p><p>It’s not all tampering, but some of it clearly is, as we describe in more detail below. The challenge is filtering through the noise to determine which anomalous connections can confidently be attributed to tampering.</p>
    <div>
      <h2>Macro-level analysis and validation</h2>
      <a href="#macro-level-analysis-and-validation">
        
      </a>
    </div>
    <p>In <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>our work</u></a> we identified 19 patterns of anomalous connections as being candidate signatures for connection tampering. From those, we found that 14 had been previously reported by active “on the ground” measurement efforts, which presented an opportunity for validation at the macro level: If we observe our tampering signatures from Cloudflare’s network in the same places others observe them from the ground, we could have greater confidence that the signatures capture true cases of connection tampering when observed elsewhere, where there has been no prior reporting. To mitigate the risk of confirmation bias from looking where tampering is known to exist, we decided to look everywhere at the same time.</p><p>Taking that approach, the figure below, taken from our peer-reviewed <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>study</u></a>, is a visual side-by-side comparison of each of the 19 signatures. The data is taken from a two-week interval starting January 26, 2023. Within each signature column is the proportion of matching connections broken down by the country where the connection originated. For example, the column third from the right labeled with ⟨PSH → RST;RST<sub>0</sub>⟩ indicates that we almost exclusively observed that signature on connections from China. Overall, what we find is a mirror of known cases from public and prior reports, which is an indication that our methodology works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4w1iJCVyRwZdgblT7uk2tZ/53a3201f13cf1b4994db8f8a43b9d64b/2544-3.png" />
            
            </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><i><sub><u>Figure 1</u></sub></i></a><i><sub>: Signature matching across countries: Each column is the total global number of connections matching a specific signature. Within each column is the proportion of connection initiations from individual countries matching that signature.</sub></i></p><p>By homing in on prevalence, and setting aside the raw number of signature matches, interesting patterns and unexpected macro-insights emerge from this data-driven perspective. If we focus on the three most populous countries in the world ranked by <a href="https://worldpopulationreview.com/country-rankings/internet-users-by-country"><u>number of Internet users</u></a>, connections from China contribute a substantial portion of matches across no fewer than nine of the signatures. This is perhaps unsurprising, but reinforces prior studies that find evidence of the Great Firewall (GFW) being made of many <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>different deployments and implementations</u></a> of blocking mandates. Next, matches on connections from India also contribute substantially to nine different signatures, five of which are in common with signatures where China matches feature highly. Looking at the third most populous, the United States, a visible if not substantial proportion of matches appear on all but two of the signatures.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Dip4I995kVVjXAhQxMrH/da5414d088777c000551a66589b80c3f/2544-4.png" />
            
            </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><sub><i><u>Figure 4</u></i></sub></a><sub><i>: Signature distribution per country: The percentage of connections originating from select countries (and globally) that match a particular signature, or are not tampered with.</i></sub></p><p>From this perspective we again observe patterns that match prior studies. We focus first on rates above the global average, and ignore the noisiest signature ⟨SYN → ∅⟩ in medium-gray; there are simply too many other explanations for a signature match at this earliest possible stage. Among all connections from Turkmenistan (TM), Russia (RU), Iran (IR), and China (CN), roughly 80%, 30%, 40%, and 30%, respectively, of those connections match a tampering signature. The data also reveals high signature match rates where no prior reports exist. For example, connections from Peru (PE) and Mexico (MX) match roughly 50% and 25%, respectively; <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>analysis of individual networks</u></a> in these countries suggests a likely explanation is zero-rating in mobile and cellular networks, where an ISP allows access to certain resources (but not others) at no cost. If we look below the global average, Great Britain (GB), the United States (US), and Germany (DE) each match a signature on about 10% of connections.</p>
    <div>
      <h2>Explaining tampering with telephone calls</h2>
      <a href="#explaining-tampering-with-telephone-calls">
        
      </a>
    </div>
    <p>Connection tampering is a way for a third party to block access to particular content. However, it’s not enough for the third party to know the <i>type</i> of content it wants to block; it can only block an identity, by name. </p><p>Ultimately, connection tampering is possible only by accident – an unintended side effect of protocol design. On the Internet, the most common identity is the domain name. In a communication on the Internet, the domain name is most often transmitted in the “<a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>server name indication (SNI)</u></a>” field in TLS – exposed in cleartext for all to see.</p><p>To understand why this matters, it helps to understand what connection tampering looks like in human-to-human communications without the Internet. The Internet itself looks and operates much like the postal system, which relies only on addresses and never on names. However, the way most people use the Internet is much more like the “<a href="https://en.wikipedia.org/wiki/Plain_old_telephone_service"><u>plain old telephone system</u></a>,” which <i>requires</i> names to succeed.</p><p>In the telephone system, a person first dials a phone number, <i>not</i> a name. The call is <code>connected</code> and usable only after the other side answers, and the caller hears a voice. The caller asks for a name only <i>after</i> the call is connected. The call manifests in the system as energy signals that do not identify the communicating parties. Finally, after the call ends, a new call is required to communicate again.</p><p>On the Internet, a client such as a browser “establishes a connection.” Much like a telephone caller, it initiates a connection request to a server’s <code>number</code>, which is an IP address. The longest-standing “connection-oriented” protocol to connect two devices is called the <a href="https://cloudflare.tv/shows/this-week-in-net/this-week-in-net-50th-anniversary-of-the-tcp-paper/oZKEA4v4"><u>Transmission Control Protocol</u></a>, or TCP. The domain name is transmitted in isolation from the connection establishment, much like asking for a name once the phone is answered. The connections are “logical,” identified by metadata that does not identify the communicating parties. Finally, a new connection is established with each new visit to a website.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BzA3XvqopuaP1WSg0rK6X/fe39d77acdc14c1d984512dfdb01279c/2544-5.png" />
          </figure><p><sub><i>Comparison between a TCP connection and a telephone call</i></sub></p><p>What happens if a telephone company is required to prevent a call to some party? One option is to modify or manipulate phone directories so a caller can’t look up the phone number they need to dial; this is the essence of <a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-dns-filtering/"><u>DNS filtering</u></a>. A second option is to block all calls to the phone number, but this inadvertently affects others, just like <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>IP blocking</u></a> does.</p><p>Once a phone call is initiated, the only way for the telephone company to know <i>who</i> is being called is to listen in on the call and wait for the caller to say, “is so-and-so there?” or “can I speak with so-and-so?” Mobile and cellular calls are no exception. The idea that the number we call <i>is</i> the person who will answer is just an expectation – it has never been the reality. For example, a parent could get a number to give to their child, or a taxi company could leave the mobile phone with whomever is on-shift at the time. As a result, the telephone company <i>must listen in</i>. Once it hears a certain name, it can cut the call; neither side would have any idea what has happened – this is the very definition of connection tampering on the Internet. </p><p>For the purpose of establishing a communication channel, phone calls and TCP connections are at least comparable, and arguably exactly the same – not least because the domain name is transmitted separately from establishing a connection.</p><p>Similarly, on the Internet, the only way for a third party to know the intended recipient of a connection is to “look inside” packets as they are transmitted. Where a telephone company would have to listen for a name, a third party on the Internet waits to see something it does not like, most often a forbidden name. Recall from above the unintended side-effect of the protocol: the name is visible in the SNI, which is required to help encrypt the data communication. When that happens, the third party causes one or both devices to close the connection by either dropping messages or injecting specially-crafted messages that cause the communicating parties to abort the connection.</p><p>The mechanisms to trigger tampering begin with <a href="https://www.cloudflare.com/learning/security/what-is-next-generation-firewall-ngfw/"><u>deep packet inspection (DPI)</u></a>, which means looking into the data portions that lie beyond the address and other metadata belonging to the connection. It’s safe to say that this functionality does not come for free; whether it’s an ISP’s router or a parental proxy, DPI is an expensive operation that gets more expensive at large scale or high speed. </p><p>One last point worth mentioning is that weaknesses in telephone tampering similarly appear in connection tampering. For example, the sounds of Jean and Gene are indistinguishable to any ear, despite being different names. Similarly, tampering with connections to Twitter’s short-form name “t.co” would also affect “microsoft.com” – and <a href="https://en.wikipedia.org/wiki/Internet_censorship_in_Russia#Deep_packet_inspection"><u>has</u></a>.</p>
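    <p>To make the two-step nature of the exchange concrete, here is a minimal Python sketch using only the standard library. It first “dials the number” by opening a TCP connection to an IP address, then “asks for a name” by starting a TLS handshake; the <code>server_hostname</code> value is what gets written into the cleartext SNI field that an on-path observer can read. The choice of example.com is, of course, only an illustration.</p>
<pre><code class="language-python">import socket
import ssl

# Step 1: the "dial". A TCP connection is established using only an
# IP address and a port; like a phone number, nothing here names the
# party we actually want to talk to.
ip = socket.gethostbyname("example.com")   # the "phone directory" lookup
sock = socket.create_connection((ip, 443), timeout=5)

# Step 2: "asking for a name". The TLS handshake begins, and the
# server_hostname argument is sent in the ClientHello's SNI field, in
# cleartext, where an on-path middlebox can read it and choose to
# drop packets or inject an RST to kill the connection.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname="example.com")
print("TLS version negotiated:", tls.version())
tls.close()
</code></pre>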
    <div>
      <h2>A live view of tampering during Mahsa Amini protests</h2>
      <a href="#a-live-view-of-tampering-during-mahsa-amini-protests">
        
      </a>
    </div>
    <p>Before we delve deeply into the technical details, there is one more motivation that is personal to many at Cloudflare. Transparency is important and is the reason we started this work, but it was after seeing the data <i>during</i> the Mahsa Amini protests in Iran in 2022 that we committed internally to share the data on Radar. </p><p>The figure below is for connections from Iran during 17 days overlapping the protests. The plot-lines track individual signals of anomalous connections, including signatures of <a href="https://blog.cloudflare.com/passive-detection-of-connection-tampering"><u>different types</u></a> of connection tampering. This data pre-dates the Radar service, so we have elected to share this representation from the <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>peer-reviewed paper</u></a>. It was also the first visual example of the value of the data if it could be shared via Radar. 
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IKNhC2gLo7CeUQclssiuM/58300eccf981f132689ee75b57db8cb2/2544-6.png" />
          </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><sub><i><u>Figure 8</u></i></sub></a><sub><i>: Signature match rates longitudinally in Iran during a period of nation-wide protests. (𝑥-axis is local time.)</i></sub></p><p>From the data there are two observations that stick out. First is the way that the lines appear stable before the protests, then increase after the protests began. Second is the variation between the lines over time, in particular the lines in light gray, dark purple, and dark green. Recall that each line is a different tampering signature, so the variation between lines suggests changes in the underlying causes – either the mechanisms at work, or the traffic that invokes them.</p><p>We emphasize that a signature match alone does not in itself mean there is tampering. However, in the case of Iran in 2022 there were public reports of blocking of various forms. The methods in use at the time, specifically <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>Server Name Indication (SNI)</u></a>-based blocking of access to content, had also previously been <a href="https://ooni.org/post/2020-iran-sni-blocking/"><u>well-documented</u></a>, and matched our observations represented by the figure above.</p><p>What about today? Below we see the Radar view of the twelve months from August 2023 to August 2024. Each color represents a different stage of the connection where tampering might happen. Over these twelve months, TCP connection anomalies in Iran are lower than the <a href="https://radar.cloudflare.com/security-and-attacks?dateStart=2024-08-01&amp;dateEnd=2024-08-08"><u>worldwide averages</u></a>, overall, but appear significantly higher in the portion of anomalies represented by the light-blue region. This “Post ACK” phase of communication is often associated with SNI-based blocking. (In the graph above, the relevant signatures are represented by the dark purple and dark green lines.) Alongside this, the changing proportions of the different plot-lines since mid-December 2023 suggest that techniques have been changing over time.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qhesraWkFdNPbWgx2x0xX/1a72a1707e768270c29d570b5bd35545/2544-7.png" />
          </figure><p><i><sub>via </sub></i><a href="https://radar.cloudflare.com/security-and-attacks/ir?dateStart=2023-08-26&amp;dateEnd=2024-08-26#tcp-resets-and-timeouts"><i><sub>Cloudflare Radar</sub></i></a></p>
    <div>
      <h2>The importance of an open network measurement community</h2>
      <a href="#the-importance-of-an-open-network-measurement-community">
        
      </a>
    </div>
    <p>As a testament to the importance of open measurement and research communities, this work very literally “<a href="https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants">builds on the shoulders of giants</a>.” It was produced in collaboration with researchers at the <a href="https://www.cs.umd.edu/">University of Maryland</a>, <a href="https://www.epfl.ch/">École Polytechnique Fédérale de Lausanne</a>, and the <a href="https://cse.engin.umich.edu/">University of Michigan</a>, but does not exist in isolation. There have been extensive efforts to measure connection tampering, most of which come from the censorship measurement community. The bulk of that work consists of <i>active</i> measurements, in which researchers craft and transmit probes in or along networks and regions to identify blocking behavior. Unsurprisingly, active measurement has both strengths and weaknesses, as described in <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>Section 2</u></a> of the paper. </p><p>The counterpart to active measurement, and the focus of our project, is <i>passive</i> measurement, which takes an “observe and do nothing” approach. Passive measurement comes with its own strengths and weaknesses but, crucially, it relies on having a good vantage point such as a large network operator. Active and passive measurements are most effective when used in conjunction, in this case helping to paint a more complete picture of the impact of connection tampering on users.</p><p>Most importantly, when embarking upon any type of measurement, great care must be taken to understand and <a href="https://cacm.acm.org/magazines/2016/10/207765-ethical-considerations-in-network-measurement-papers/fulltext"><u>evaluate the safety of the measurement</u></a>, since the risks imposed on people and networks are often indirect, or hidden from view.</p>
    <div>
      <h2>Limitations of our data</h2>
      <a href="#limitations-of-our-data">
        
      </a>
    </div>
    <p>We have no doubt about the importance of being transparent about connection tampering, but we also need to be explicit about the limits on the insights that can be gleaned from the data. As passive observers of connections to the Cloudflare network – and only the Cloudflare network – we are only able to see or infer the following:</p><ol><li><p><b>Signs of connection tampering, but not where it happened.</b> Any software or device between the client’s application and the server systems can tamper with a connection. The list ranges from purpose-built systems, to firewalls in the enterprise or the home broadband router, to protection software installed on home or school computers. <i>All we can infer is where the connection started</i> (albeit at the limits of geolocation inaccuracies inherent in the Internet’s design)<i>.</i></p></li><li><p><b>(Often, but not always) What triggered the tampering, but not why.</b> Typically, tampering systems are triggered by domain names, keywords, or regular expressions. With enough repetition and manual inspection, it may be possible to identify the <i>likely</i> cause of tampering, but not the reasons. Many tampering system designs are prone to unintended consequences, among them the <a href="https://en.wikipedia.org/wiki/Internet_censorship_in_Russia#Deep_packet_inspection"><u>t.co</u></a> example mentioned above.</p></li><li><p><b>Who and what </b><b><i>is</i></b><b> affected, but not who or what </b><b><i>could</i></b><b> be affected.</b> As passive observers, there are limits on the kinds of inferences we can make. For example, observable tampering on 1000 out of 1001 connections to <code>example.com</code> suggests that tampering is likely on the next connection attempt. However, that says nothing about connections to <code>another-example.com</code>. </p></li></ol>
    <div>
      <h2>Data, data, data: Extracting signals from the noise</h2>
      <a href="#data-data-data-extracting-signals-from-the-noise">
        
      </a>
    </div>
    <p>If you just want to get and use the data on Radar, see our “<a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>how to</u></a>” guide. Otherwise, let’s understand the data itself.</p><p>The focus of this work is <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/"><u>TCP</u></a>. In our data there are two mechanisms available to a third party to force a connection to close: <a href="https://en.wikipedia.org/wiki/Packet_drop_attack"><u>dropping packets</u></a> to induce timeouts, or <a href="https://en.wikipedia.org/wiki/TCP_reset_attack#TCP_resets"><u>injecting forged TCP RST packets</u></a>, each with various deployment choices. Individual tampering signatures may be reflections of those choices. For comparison, a graceful TCP close is initiated with a FIN packet. </p>
    <div>
      <h3>Connection tampering signatures</h3>
      <a href="#connection-tampering-signatures">
        
      </a>
    </div>
    <p>Our detection mechanism evaluates the packets in a connection against a set of <i>signatures</i> for connection tampering. The signatures are hand-crafted, drawing on patterns identified in prior work and on analysis of samples of connections to Cloudflare’s network that we classify as <i>anomalous</i> – connections that close early and ungracefully, by way of an RST packet or a timeout, within the first 10 packets from the client. We analyzed the samples and found that 19 patterns accounted for 86.9% of all possibly tampered connections in the samples, shown in the table below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LssSxT9SkDpXzMXtPmtZ1/a18e564cf39613bb7fe7569337d91b65/2544-8.png" />
          </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><sub><u>Table 1</u></sub></a><sub>: The comprehensive set of tampering signatures we identify through global passive measurements.</sub></p><p></p><p>To help reason about tampering, we also classified the 19 signatures above according to the stage of the connection lifetime in which they appear. Each stage implies something about the middlebox, as described below alongside corresponding sequence diagrams:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49GKWrmtfdg9Xpk0K2RiGJ/f0e755a2ba1fcac763a44185bf566f61/Screenshot_2024-09-04_at_2.57.52_PM.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6xZbuENcYfkucNmVUJmwQv/b1f69b105c2e19eb9f5ef583d50416b8/Screenshot_2024-09-04_at_2.58.00_PM.png" />
          </figure><p></p><ul><li><p><b>(a) Post-SYN (mid-handshake)</b>: Tampering is likely triggered by the destination IP address, because the middlebox has probably not seen application data, which is typically transmitted after the handshake completes.</p></li><li><p><b>(b) Post-ACK (immediately after handshake)</b>: The connection is established and immediately forced to close before the server sees any data. It is possible, even likely, that the middlebox has seen a data packet; for example, the host header in HTTP or the SNI field in TLS. </p></li><li><p><b>(c) Post-PSH (after first data packet)</b>: The middlebox has definitely seen the first data packet, because the server has received it. The middlebox may have been waiting for a packet with a PSH flag, typically set to indicate that data in the packet should be delivered to the application on receipt, without delay. The middlebox is likely a <a href="https://en.wikipedia.org/wiki/Man-on-the-side_attack"><u>monster-on-the-side</u></a>, because it permits the offending packet to reach the destination.</p></li><li><p><b>(d) Later-in-flow (after multiple data packets)</b>: Tampering at later stages in the connection (not immediately after the first data packet, but still within the first 10 packets). The prevalence of encrypted data in TLS makes this the least likely stage for tampering to occur. The likely triggers are keywords appearing in cleartext later in (HTTP) connections, or enterprise proxies and parental protection software that have visibility into encrypted traffic and can reset connections when certain keywords are encountered. </p></li></ul>
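    <p>As a rough illustration of the staging logic above, here is a minimal sketch (our own simplification, not Cloudflare’s detection pipeline; the packet representation is hypothetical) that buckets a connection’s ungraceful close into the four stages, using only the TCP flags of the packets received from the client:</p>
    <pre><code># Hypothetical sketch: classify the stage at which a connection was
# ungracefully closed, given the flag sets of the client packets the
# server observed before the close (RST or timeout), in arrival order.

def classify_close_stage(client_flags: list[set]) -> str:
    """client_flags example: [{"SYN"}, {"ACK"}, {"PSH", "ACK"}]"""
    data_packets = sum(1 for f in client_flags if "PSH" in f)
    handshake_acked = any(f == {"ACK"} or "PSH" in f for f in client_flags)

    if not handshake_acked:
        return "Post-SYN"       # closed mid-handshake: IP-based trigger likely
    if data_packets == 0:
        return "Post-ACK"       # established, but closed before any data
    if data_packets == 1:
        return "Post-PSH"       # closed right after the first data packet
    return "Later-in-flow"      # closed after multiple data packets
</code></pre>
    <p>For example, a connection whose only client packets were a SYN and a bare ACK before an injected RST would be bucketed as Post-ACK.</p>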
    <div>
      <h3>Accounting for alternative explanations</h3>
      <a href="#accounting-for-alternative-explanations">
        
      </a>
    </div>
    <p>How can we be confident that the signatures above detect middlebox tampering, and not just atypical client behavior? One of the challenges of passive measurement is that we do not have full visibility into the clients connecting to our network, so absolute certainty is hard, if not impossible, to achieve. Instead, we look for strong positive evidence of tampering, which must begin by identifying <b>false positives</b>. </p><p>We are aware of the following sources of false positives that can be hard to disambiguate from true sources of tampering. <i>All but the last occur in the first two stages</i> of the connection, before data packets are received. </p><ul><li><p><b>Scanners</b> are client-side applications that probe servers to elicit responses. Some scanner software uses fixed bits in the header to self-identify, which helps us filter. For example, we found that <a href="https://zmap.io/"><u>ZMap</u></a> accounts for approximately 1% of all <code>⟨SYN → RST⟩</code> signature matches.</p></li><li><p><b>SYN flood attacks</b> are another likely source of false positives, especially for signatures in the Post-SYN connection stage like the <code>⟨SYN → ∅⟩</code> and <code>⟨SYN → RST⟩</code> signatures. These are less likely to appear in our dataset, which is collected <a href="https://www.cloudflare.com/learning/ddos/syn-flood-ddos-attack/"><u>after the DDoS protection</u></a> systems.</p></li><li><p><b>Happy Eyeballs</b> is a <a href="https://datatracker.ietf.org/doc/html/rfc8305"><u>common technique</u></a> used by dual-stack clients in which the client initiates an IPv6 connection to the server and, with some delay to favor IPv6, also makes an IPv4 connection. The client keeps the connection that succeeds first and drops the other (see the sketch below). Clients that cease transmission or close the connection with an RST instead of a FIN would show up in the data, matching the <code>⟨SYN → RST⟩</code> signature. </p></li><li><p><b>Browser-triggered RSTs</b> may appear at any stage of the connection, but especially match signatures later in a connection (after multiple data packets). These might be triggered, for example, by a user closing a browser tab. Unlike targeted tampering, however, RSTs originating from browsers are unlikely to be biased towards specific services or websites. </p></li></ul><p>How can we separate legitimate client-initiated false positives from third-party tampering? We seek an evidence-based approach to distinguish tampering signatures from other signals within the dataset. For this we turn to individual bits in the packet headers.</p>
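    <p>As a brief aside before we dig into those header bits, here is a minimal sketch of the Happy Eyeballs race described above (our own illustration of RFC 8305’s general outline, not a complete implementation; address selection, retries, and error handling are omitted):</p>
    <pre><code># Minimal Happy Eyeballs-style race: start IPv6 first, give it a short
# head start, then race IPv4. The losing attempt is abandoned and,
# depending on the OS, its teardown may emit an RST -- which is how such
# clients can match the ⟨SYN → RST⟩ signature on the server side.

import asyncio
import socket

async def attempt(host: str, port: int, family: int):
    return await asyncio.wait_for(
        asyncio.open_connection(host, port, family=family), timeout=5)

async def happy_eyeballs(host: str, port: int = 443):
    v6 = asyncio.create_task(attempt(host, port, socket.AF_INET6))
    await asyncio.sleep(0.25)  # RFC 8305 recommends a ~250 ms head start
    v4 = asyncio.create_task(attempt(host, port, socket.AF_INET))
    done, pending = await asyncio.wait(
        {v6, v4}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # the abandoned attempt may close ungracefully
    return done.pop().result()  # (reader, writer); errors not handled here

# reader, writer = asyncio.run(happy_eyeballs("example.com"))
</code></pre>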
    <div>
      <h3>Signature validation – letting the data speak</h3>
      <a href="#signature-validation-letting-the-data-speak">
        
      </a>
    </div>
    <p>Signature matches in isolation are insufficient to make good determinations. Alongside them, we can find further supporting evidence of their accuracy by examining connections in aggregate – if the cause is tampering, and the tampering is targeted, then there must be other patterns or markers in common. For example, we expect browser behavior to appear worldwide; however, as we showed above, signatures that match on connections in only some places or during some time intervals stick out. </p><p>Similarly, we expect certain characteristics in contiguous packets within a connection to also stick out, and indeed they do, namely in the <a href="https://datatracker.ietf.org/doc/html/rfc6864"><u>IP-ID</u></a> and <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>TTL</u></a> fields in the IP header.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CVNobND5JzSYY4rszgem9/6b533f5a3681cf8c4f05d6a6179c0630/Screenshot_2024-09-04_at_2.57.36_PM.png" />
          </figure><p><b>The IP-ID (IP identification) field</b> in the IPv4 packet header is typically initialized per connection and incremented by the client for each subsequent packet it sends. In other words, we expect the change in IP-ID value between subsequent packets sent from the same client to be small. Large changes in IP-ID value between subsequent packets are therefore unexpected in normal connections, and can be used as an indicator of packet injection. This is exactly what we see in the figure above, marked (a), for a select set of signatures.</p><p><b>The Time-to-Live (TTL) field </b>offers another clue for detecting injected packets. Here, too, most client implementations use the same <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>TTL</u></a> for each packet sent on a connection, usually set initially to either 64 or 128 and decremented by every router along the packet’s route. If an RST packet does not have the same TTL as other packets in a connection, it’s a strong signal that it was injected. Looking at the figure above, marked (b), we can see marked differences in TTLs, indicating the presence of a third party. </p><p>We strongly encourage readers to explore the <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>underlying details</u></a> of how and why these signals make sense. Connections with high maximum IP-ID and TTL differences give positive evidence of traffic tampering, but the <i>absence</i> of these signals does not necessarily mean that tampering did not occur, as some middleboxes are known to <a href="https://censoredplanet.org/assets/censorship-devices.pdf"><u>copy IP header values</u></a>, including the IP-ID and TTL, from the original packets in the connection. Our interest is in responsibly ensuring our dataset has indicative value.</p><p><b>There is one last caveat: </b>While our tampering signatures capture many forms of tampering, there is still potential for <b>false negatives</b> – connections that <i>were</i> tampered with but escaped our detection. Some examples are connections terminated after the first 10 packets (since we don’t sample that far), FIN injection (a less common alternative to RST injection), and connections where all packets are dropped before reaching Cloudflare’s servers. Our signatures also do not apply to <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/"><u>UDP-based protocols</u></a> such as QUIC. We hope to expand the scope of our connection tampering signatures in the future.</p>
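    <p>To make the idea concrete, the sketch below (again our own simplification, with made-up thresholds and a hypothetical packet record, not the paper’s implementation) flags an RST whose IP-ID or TTL departs sharply from the other packets the client sent on the connection:</p>
    <pre><code># Hedged sketch of the IP-ID/TTL heuristic described above. Thresholds
# are illustrative assumptions, not values from the paper.

from dataclasses import dataclass

@dataclass
class Packet:
    ip_id: int   # IPv4 Identification field
    ttl: int     # Time-to-Live as observed at the server
    flags: set   # e.g. {"SYN"}, {"PSH", "ACK"}, {"RST"}

def rst_looks_injected(connection: list[Packet],
                       ip_id_threshold: int = 1000,
                       ttl_threshold: int = 3) -> bool:
    rsts = [p for p in connection if "RST" in p.flags]
    others = [p for p in connection if "RST" not in p.flags]
    if not rsts or not others:
        return False
    for rst in rsts:
        # Large IP-ID jumps are unexpected if the client increments per packet.
        ip_id_gap = min(abs(rst.ip_id - p.ip_id) for p in others)
        # A different TTL implies a different path length, hence a different sender.
        ttl_gap = min(abs(rst.ttl - p.ttl) for p in others)
        if ip_id_gap > ip_id_threshold or ttl_gap >= ttl_threshold:
            return True
    return False
</code></pre>
    <p>As the text above cautions, a “False” here is not evidence of a clean connection: middleboxes that copy IP header values from the original packets would pass this check.</p>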
    <div>
      <h2>Case studies</h2>
      <a href="#case-studies">
        
      </a>
    </div>
    <p>To get a sense of how this looks on the Cloudflare network, below we provide further examples of TCP connection anomalies that are consistent with <a href="https://ooni.org/reports/"><u>OONI reports</u></a> of connection tampering.</p><p>For additional insights from this specific study, see the full technical <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>paper</u></a> and <a href="https://www.youtube.com/watch?v=DyDv3MHICto&amp;list=PLU4C2_kotFP2JAkoL6pcgbb52f6GIJJd7&amp;ab_channel=ACMSIGCOMM"><u>presentation</u></a>. For other regions and networks not listed below, please see the <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>new data on Radar</u></a>.</p>
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>Reporting from <a href="https://tribune.com.pk/story/2491142/pakistan-should-be-transparent-about-internet-disruptions-surveillance-amnesty-international"><u>inside</u></a> Pakistan suggests changes in users’ Internet experience throughout August 2024. Looking at a two-week interval in early August, we see a significant shift in Post-ACK connection anomalies starting on August 9, 2024.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38SPF52mjzVCDvEnO63hhu/c1a5e7b57d3f63bcf12349dbe7e1b377/2544-12.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/pk?dateStart=2024-08-03&amp;dateEnd=2024-08-17#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p><p>The August 9 Post-ACK spike can be almost entirely attributed to <a href="https://radar.cloudflare.com/as56167"><u>AS56167 (Pak Telecom Mobile Limited)</u></a>, shown below in the first image, where Post-ACK anomalies jumped from under 5% to upwards of 70% of all connections, and have remained high since. Correspondingly, we see a significant reduction in the number of successful HTTP requests reaching Cloudflare’s network from clients in AS56167, below in the second image, which provides evidence that connections are being disrupted. This Pakistan example reinforces the importance of corroborating reports and observations, discussed in more detail in the <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>Radar dataset release</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6y0WGCYjlvOhOurny2rN7p/6afbbd35e037beac32728f0e93b1fe17/2544-13.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/AS56167?dateStart=2024-08-03&amp;dateEnd=2024-08-17#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2aFE80hJQ5f6uJprUYFmEt/b79ffa0e1c8e30fcb57d9a8bb97c41b3/2544-14.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/traffic/AS56167?dateStart=2024-08-03&amp;dateEnd=2024-08-17#http-traffic"><sub><i>Cloudflare Radar</i></sub></a></p>
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p>An <a href="https://ooni.org/post/2024-tanzania-lgbtiq-censorship-and-other-targeted-blocks/"><u>OONI report</u></a> from April 2024 discusses targeted connection tampering in <a href="https://radar.cloudflare.com/tz"><u>Tanzania</u></a>. The report states that this blocking is observed on the client side as connection timeouts after the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>Client Hello</u></a> message during the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshake</u></a>, indicating that a middlebox is dropping the packet containing the Client Hello message. On the server side, connections tampered with in this way would appear as Post-ACK timeouts, as the PSH packet containing the Client Hello message never reaches the server.</p><p>Looking at the Post-ACK data represented in the light-blue portion, below, we find matching evidence: close to 30% of all new TCP connections from Tanzania appear as Post-ACK anomalies. Breaking this down further (not shown in the plots below), approximately one third is due to timeouts, consistent with the OONI report above. The remainder is due to RSTs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IVvmuECBkPhL8xS7fdOcV/203ca4916a474d2c06f02d6a3f04d006/2544-15.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/tz?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p><p></p>
    <div>
      <h3>Ethiopia</h3>
      <a href="#ethiopia">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/et"><u>Ethiopia</u></a> is another location with <a href="https://ooni.org/post/2023-ethiopia-blocks-social-media/"><u>previously-reported</u></a> connection tampering. Consistent with this, we see elevated rates of Post-PSH TCP anomalies across networks in Ethiopia. Our internal data shows that the majority of Post-PSH anomalies in this case are due to RSTs, although timeouts are also prevalent.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YTj7Kypiu0nSjmvZ00Jjo/b3306284a04a67d91351da645b6332f7/2544-16.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/et?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p><p>The majority of traffic arriving to Cloudflare’s servers from IP addresses geolocated in Ethiopia is from <a href="https://radar.cloudflare.com/as24757"><u>AS24757 (Ethio Telecom)</u></a>, shown below in the first image, so it is perhaps unsurprising that its data closely matches the country-wide distribution of connection anomalies. Post-PSH anomalies originating from <a href="https://radar.cloudflare.com/as328988"><u>AS328988 (SAFARICOM TELECOMMUNICATIONS ETHIOPIA PLC)</u></a>, shown below in the second image, are proportionally higher, accounting for over 33% of all connections from that network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/43sDPBfmz2u2JIRIPN0htd/6e0825edaef48ba73df9180a30411231/2544-17.png" />
          </figure><p><sub>via </sub><a href="https://radar.cloudflare.com/security-and-attacks/AS24757?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub>Cloudflare Radar</sub></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fyguCauAPqdXYXkCvhjAl/7e86f97f2b71a73cbf3d8c95d27a92b1/2544-18.png" />
          </figure><p><sub>via </sub><a href="https://radar.cloudflare.com/security-and-attacks/AS328988?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p>
    <div>
      <h2>Reflecting on the present to promote a resilient future</h2>
      <a href="#reflecting-on-the-present-to-promote-a-resilient-future">
        
      </a>
    </div>
    <p>Connection tampering is a blocking mechanism that is deployed in various forms throughout the Internet. Although we have developed ways to help detect and understand it globally, the experience is just as individual as an interrupted phone call.</p><p>Connection tampering is also made possible largely <i>by accident</i>: it works because domain names are visible in cleartext. But it may not always be this way. For example, <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/"><u>Encrypted Client Hello (ECH)</u></a> is an emerging building block that encrypts the SNI field. </p><p>We’ll continue to look for ways to talk about network activity and disruption, all to foster wider conversations. Check out the newest additions about connection anomalies on <a href="https://radar.cloudflare.com/security-and-attacks#tcp-resets-and-timeouts"><u>Cloudflare Radar</u></a> and the <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>corresponding blog post</u></a>, as well as the <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>peer-reviewed technical paper</u></a> and its <a href="https://youtu.be/RD73IgzQMFo?si=OWvNnlNNLalbhygV&amp;t=2984"><u>15-minute summary talk</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">2PQ5yUYNh250JZfC8YuElJ</guid>
            <dc:creator>Ram Sundara Raman</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing HTTP request traffic insights on Cloudflare Radar ]]></title>
            <link>https://blog.cloudflare.com/http-requests-on-cloudflare-radar/</link>
            <pubDate>Tue, 13 Aug 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ The traffic graphs on Cloudflare Radar have been enhanced to include HTTP request traffic. This new metric complements the existing bytes-based “HTTP traffic” view, and the new graphs can be found on Radar’s Overview and Traffic pages. ]]></description>
            <content:encoded><![CDATA[ <p>Historically, <a href="https://radar.cloudflare.com/traffic"><u>traffic graphs on Cloudflare Radar</u></a> have displayed two metrics: total traffic and <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/"><u>HTTP</u></a> traffic. These graphs show normalized traffic volumes measured in bytes, derived from aggregated<a href="https://www.kentik.com/kentipedia/what-is-netflow-overview/"><u> NetFlow</u></a> data. (NetFlow is a protocol used to collect metadata about IP traffic flows traversing network devices.) Today, we’re adding an additional metric that reflects the number of HTTP requests, normalized over the same time period. By comparing bytes with requests, readers can gain additional insights into traffic patterns and user behavior. Below, we review how this new data has been incorporated into Radar, and explore HTTP request traffic in more detail.</p><p>Note that while we refer to “HTTP request traffic” in this post and on Radar, the term encompasses requests made in the clear over HTTP <b>and</b> over encrypted connections using HTTPS – the latter accounts for <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2024-07-01&amp;dateEnd=2024-07-31"><u>~95% of all requests to Cloudflare during July 2024</u></a>.</p>
    <div>
      <h2>New and updated graphs</h2>
      <a href="#new-and-updated-graphs">
        
      </a>
    </div>
    <p>Graphs including HTTP request-based traffic data have been added to the Overview and Traffic sections on Cloudflare Radar. On the <a href="https://radar.cloudflare.com/"><u>Overview</u></a> page, the “Traffic trends” graph now includes a drop-down selector at the upper right, where you can choose between “Total &amp; HTTP bytes” and “HTTP requests &amp; bytes”. We explore the distinction between these further in the following sections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JcsgZtjziThbeKMxSUWrT/2abd7b9aee9920c6f2b58e675254f1b7/Screenshot_2024-08-09_at_11.04.05_AM.png" />
          </figure><p></p><p>The default “Total &amp; HTTP bytes” selection displays a time series graph, showing total bytes and HTTP bytes traffic over time, as Radar has done for several years now.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4scdHiTWAUgBgF0mfibZ8c/2f82024f4e8b5a96f9795839b5e7e492/2493-3.png" />
          </figure><p></p><p>Selecting “HTTP requests &amp; bytes” from the dropdown switches the view to a time series graph that shows HTTP request traffic and HTTP bytes traffic over time. In both graphs, users can click on a metric in the legend to deselect it and remove it from the graph. These (de)selections are maintained when a user chooses to download or save a graph.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oQim0jPqAIzfmPZC1ASOB/12fc22dfc52a12b0c362df98f44451d6/2493-4.png" />
          </figure><p></p><p>In addition, we’ve added a “Protocols” summary next to the graph that shows the share of bytes over the selected time period that HTTP accounts for, and the remaining aggregate share associated with the protocols used by other non-HTTP Cloudflare services (such as DNS, WARP, etc.). For most locations or ASNs, HTTP traffic will comprise the majority share of bytes-based traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4k0EgZW5fnUpOuEdIt0BE/4cfd8bd682a11ada069008c541aca07e/Screenshot_2024-08-09_at_11.03.48_AM.png" />
          </figure><p></p><p>On Radar’s <a href="https://radar.cloudflare.com/traffic"><u>Traffic</u></a> page, we have added the HTTP requests metric to the “Traffic volume” graph at the top of the page, allowing you to see how request volume has changed during the selected time period as compared to the previous period, in addition to the changes in the bytes-based metrics.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5AMP7mv3K6Zk1DooGateDx/af663823183a0d06c676a873cdfbf59e/2493-6.png" />
          </figure><p></p><p>A new standalone request-based “HTTP traffic” graph was also added to the Traffic page, just below the bytes-based “Traffic trends” graph. This new graph shows normalized HTTP request traffic volume across the selected time period, and by default, also compares it with the previous time period.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xVJOTfmnBtYJPzFuvi3iY/0a179dc7b6a8924efc5f2ece94690535/2493-7.png" />
          </figure><p></p><p>Similar to other Radar graphs, these new HTTP request-based graphs can also be downloaded, copied to the clipboard, or embedded in other websites – just click on the share icon.</p><p>As always, the underlying data is also available through the Radar API. The <a href="https://developers.cloudflare.com/api/operations/radar-get-http-timeseries"><u>“HTTP requests Time Series” API endpoint</u></a> returns normalized HTTP request time series data across the specified time period for the requested location or <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>.</p>
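    <p>For example, a query for a week of normalized request volume might look like the sketch below. The path and parameter names follow the conventions of Radar time series endpoints and should be verified against the linked documentation; the response field names shown are assumptions, and the token is a placeholder:</p>
    <pre><code># Hedged sketch: query the Radar "HTTP requests Time Series" endpoint
# linked above for a location over the last 7 days. Verify the path,
# parameters, and response shape against the API documentation.

import json
import urllib.parse
import urllib.request

API_TOKEN = "YOUR_API_TOKEN"  # placeholder; needs Radar read access
query = urllib.parse.urlencode({"location": "PT", "dateRange": "7d"})
url = "https://api.cloudflare.com/client/v4/radar/http/timeseries?" + query

req = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {API_TOKEN}"})
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Assumed response shape: parallel arrays of timestamps and normalized values.
serie = body["result"]["serie_0"]
for ts, value in zip(serie["timestamps"], serie["values"]):
    print(ts, value)
</code></pre>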
    <div>
      <h2>What is HTTP request traffic?</h2>
      <a href="#what-is-http-request-traffic">
        
      </a>
    </div>
    <p>An HTTP <a href="https://httpwg.org/specs/rfc9110.html#GET"><u>GET</u></a> request is a message sent from a client (such as your web browser) to a web server (such as one operated by Cloudflare), asking for a particular resource (file). In addition to returning the requested resource, which could range from a single-pixel GIF accounting for just a few bytes, to an API call that returns a few kilobytes of data, to a multi-gigabyte software package, the Web server also returns a set of <a href="https://developer.mozilla.org/en-US/docs/Glossary/Response_header"><u>headers</u></a>, which can include information about the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type"><u>content type</u></a>, the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><u>last time the resource was modified</u></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cookie"><u>cookie</u></a> information, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control"><u>cacheability</u></a>, and more. While GET requests account for the overwhelming majority of HTTP request traffic, such traffic also includes other <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods"><u>HTTP request methods</u></a> including HEAD, POST, PUT, and more.</p><p>Cloudflare temporarily logs HTTP requests received by our network, including associated <a href="https://developer.mozilla.org/en-US/docs/Glossary/Request_header"><u>header</u></a> information and “metadata” about the request, such as the <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>bot score</u></a> computed for the request and the associated <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/#cachecachestatus"><u>cache status</u></a>. Request logs for a customer’s web properties are <a href="https://developers.cloudflare.com/logs/"><u>available for them to download</u></a>, and after processing and analysis, this data is also presented in the <a href="https://developers.cloudflare.com/analytics/account-and-zone-analytics/"><u>Analytics</u></a> section of the Cloudflare dashboard. The HTTP request data now available on Radar is based on a sample of this log data, aggregated across Cloudflare’s global customer base.</p>
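    <p>To see this request/response exchange in practice, the short sketch below issues a GET request and prints the status, response headers, and body size (example.com is used purely for illustration; any HTTP client would do):</p>
    <pre><code># Issue a simple HTTP GET and inspect the response headers that
# accompany the returned resource (content type, caching, and so on).

import urllib.request

req = urllib.request.Request("https://example.com/", method="GET")
with urllib.request.urlopen(req) as resp:
    print(resp.status)                     # e.g. 200
    for name, value in resp.getheaders():  # Content-Type, Cache-Control, ...
        print(f"{name}: {value}")
    body = resp.read()
    print(f"{len(body)} body bytes")       # response sizes vary widely
</code></pre>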
    <div>
      <h2>The value of request-based traffic insights</h2>
      <a href="#the-value-of-request-based-traffic-insights">
        
      </a>
    </div>
    <p>Cloudflare Radar already has HTTP data, so why add more? One key reason for analyzing and including HTTP request traffic is resilience. Having multiple sources of truth with respect to HTTP traffic allows us to better and more quickly distinguish between real events (such as an Internet disruption in a given country or network) and data pipeline issues.</p><p>While bytes-based metrics provide a reasonable proxy for human (user) behavior, especially with respect to activity surrounding Internet disruptions, request-based metrics provide an even better perspective. A lot of HTTP traffic involves relatively small responses – especially API traffic, which now <a href="https://blog.cloudflare.com/application-security-report-2024-update"><u>accounts for 60%</u></a> of all traffic. Furthermore, response sizes can vary widely, ranging from a single-pixel GIF accounting for just a few bytes, to an API call that returns a few kilobytes of data, to a multi-gigabyte software package.</p><p>To that end, the scope of user activity may be insufficiently reflected by a bytes-based metric, or buried in the noise, whereas request activity provides a cleaner signal and a more direct proxy for user activity. This is especially important as we examine the restoration of connectivity after an Internet disruption, attempting to ascertain when activity has returned to “expected” pre-disruption levels.</p><p>Finally, incorporating request-based traffic insights into Radar simply extends the way that the data is already being used on the site. All of the graphs, maps, and tables presented on Radar’s <a href="https://radar.cloudflare.com/adoption-and-usage"><u>Adoption &amp; Usage</u></a> page are based on analysis of HTTP request traffic, making use of information contained within request headers (such as HTTP version or user agent) or characteristics of the underlying connection (such as IP version).</p>
    <div>
      <h2>Bytes vs requests – what’s the difference?</h2>
      <a href="#bytes-vs-requests-whats-the-difference">
        
      </a>
    </div>
    <p>The current “HTTP traffic” view aggregates the bytes associated with HTTP requests to Cloudflare’s <a href="https://www.cloudflare.com/en-gb/learning/cdn/what-is-a-cdn/"><u>content delivery (CDN)</u></a> services from the selected location or <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>. “Total traffic” aggregates this HTTP traffic along with the traffic associated with other Cloudflare services, including our <a href="https://one.one.one.one/dns/"><u>1.1.1.1 DNS resolver</u></a>, <a href="https://www.cloudflare.com/application-services/products/dns/"><u>authoritative DNS</u></a>, <a href="https://one.one.one.one/"><u>WARP</u></a>, and <a href="https://developers.cloudflare.com/spectrum/"><u>Spectrum</u></a>, among others. (While Spectrum, WARP, and 1.1.1.1 also carry HTTP traffic, the share of HTTP traffic carried by these services is opaque to Radar, and isn't accounted for as part of the HTTP traffic calculations.)</p><p>The bytes associated with a given request include the size of the request, the size of the headers associated with the response, and the size of the response itself. As noted above, the size of a file returned in response to a request can vary widely, depending on what was requested. The shape of the HTTP requests and HTTP bytes lines may be quite similar, but the potential variability in response sizes (in aggregate) can cause the lines to diverge, sometimes significantly so. For example, if an application regularly makes background requests to check for updates, the availability and subsequent download of a large file containing a software update would cause a spike in the HTTP bytes line, while the HTTP requests pattern remains consistent. </p><p>As another example, consider the graph below, capturing HTTP requests and bytes traffic trends for Portugal during the first week of August. HTTP bytes traffic initially grows each day between 06:00 and 09:00 UTC (07:00 - 10:00 local summer time), increases much more slowly until around 19:00 UTC (20:00 local summer time), and then increases rapidly before peaking around 21:00 UTC (22:00 local summer time). This suggests that content consumed during the workday is lighter in terms of bytes (such as API traffic, as discussed above), while evening traffic is more byte-heavy (possibly due to increased consumption of media content). In contrast, after starting to increase around 06:00 UTC (07:00 local summer time), request traffic generally sees three successively higher peaks each day – occurring around 10:00, 14:00, and 21:00 UTC respectively (11:00, 15:00, and 22:00 local summer time). These peaks are most pronounced on weekdays, but are still apparent on weekend days as well, suggesting regular patterns of user activity at those times.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3LSadrlwBTmm091qayB6sS/2dc5afb1ce0470f50cfc81325e729100/2493-8.png" />
          </figure><p></p><p>It is important to remember, when looking at the “HTTP requests &amp; bytes” graphs on Radar, that they show two different metrics, and as such, only their shape over time is comparable, not their relative sizes. (As both metrics are normalized on a 0 to 1 (Max) scale, the lines on the graph are scaled relative to the maximum normalized value of each metric, including the previous period.)</p>
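    <p>For clarity, the parenthetical above amounts to simple max-normalization, sketched here (our restatement of the scaling, not Radar’s internal code):</p>
    <pre><code># Normalize each metric to a 0..1 scale by dividing by its own maximum
# over the selected period plus the comparison period, so metrics with
# very different magnitudes can share one graph. Only their shapes are
# then comparable, not their absolute sizes.

def max_normalize(values: list[float]) -> list[float]:
    peak = max(values)
    return [v / peak for v in values]

requests = [120.0, 90.0, 150.0, 300.0]  # hypothetical request counts
byte_counts = [5e9, 4e9, 9e9, 6e9]      # hypothetical byte totals
print(max_normalize(requests))          # peaks at 1.0 on the last point
print(max_normalize(byte_counts))       # peaks at 1.0 on the third point
</code></pre>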
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The addition of HTTP request metrics to Cloudflare Radar brings additional visibility to traffic trends at a global, location, and network level, complementing the existing bytes-based HTTP traffic metrics. Derived from traffic to customer web properties, these new metrics can be found on Radar’s Overview and Traffic pages.</p><p>In addition to HTTP traffic trends, visit <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> for additional insights around Internet disruptions, routing issues, attacks, domain popularity, and Internet quality. Follow us on social media at <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via email.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">6fI16wZ1kKoXv4VV5pIJ9O</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[How the Paris 2024 Summer Olympics has impacted Internet traffic]]></title>
            <link>https://blog.cloudflare.com/paris-2024-summer-olympics-impacted-internet-traffic/</link>
            <pubDate>Tue, 30 Jul 2024 20:46:00 GMT</pubDate>
            <description><![CDATA[ This blog post explores the impact of the Paris 2024 Summer Olympics on Internet traffic in France and beyond, concentrating on web activity during the opening ceremony and the initial days of competition. Let the games continue. ]]></description>
            <content:encoded><![CDATA[ 
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CiBP3qp9ZohZ6IkVrRQlr/22ef0981735e700746d23fffb5f7c915/1.png" />
          </figure><p>The <a href="https://en.wikipedia.org/wiki/2024_Summer_Olympics"><u>Paris 2024 Summer Olympics</u></a>, themed “Games Wide Open” (<i>“Ouvrons grand les Jeux”</i>), kicked off on Friday, July 26, 2024, and will run until August 11. A total of 10,714 athletes from 204 nations, including individual and refugee teams, will compete in 329 events across 32 sports. This blog post focuses on the opening ceremony and the initial days of the event, examining associated impact on Internet traffic, especially in France, the popularity of Olympic websites by country, and the rise in Olympics-related spam and malicious emails.</p><p>Cloudflare has a global presence with data centers in over 320 cities, supporting millions of customers, which provides a global view of what’s happening on the Internet. This is helpful for improving security, privacy, efficiency, and speed, but also for observing Internet disruptions and traffic trends.</p><p>We are closely monitoring the event through our <a href="https://radar.cloudflare.com/reports/paris-2024-olympics"><u>2024 Olympics report on Cloudflare Radar</u></a> and will provide updates on significant Internet trends as they develop. </p>
    <div>
      <h3>An opening ceremony to remember</h3>
      <a href="#an-opening-ceremony-to-remember">
        
      </a>
    </div>
    <p>For the first time in modern Olympic history, the opening ceremony was held outside a stadium, lasting nearly four hours and clearly impacting Internet traffic in France. The nation’s engagement was evident during the TV broadcast, leading to noticeable traffic drops similar to those observed <a href="https://blog.cloudflare.com/euro-2024s-impact-on-internet-traffic-a-closer-look-at-finalists-spain-and-england"><u>during Euro 2024</u></a> – we’ve seen that national TV broadcast events usually come with drops in Internet traffic.</p><p>The Olympics are more than just sporting events – they are filled with inspiring moments and stories that capture global attention in real time, and create stories that live on. Significant traffic dips during the ceremony coincided with performances by Celine Dion and Lady Gaga, the lighting of the Olympic cauldron, and John Lennon’s “Imagine” performed by Juliette Armanet. Here is a breakdown of the top five traffic drops compared to the previous week that occurred during the ceremony, detailing the events occurring at those times. Our data provides insights with 15-minute granularity.</p>
    <div>
      <h3>Moments of the ceremony by traffic drop</h3>
      <a href="#moments-of-the-ceremony-by-traffic-drop">
        
      </a>
    </div>
    <table><tr><td><p>
</p></td><td><p><b>Time of drop (UTC)</b></p></td><td><p><b>Drop %</b></p></td><td><p><b>Events at the time</b></p></td></tr><tr><td><p>#1</p></td><td><p>~21:15</p></td><td><p>-20%</p></td><td><p>The Olympic cauldron is lit and floats into the Paris sky via air balloon; Celine Dion serenades Paris from the Eiffel Tower.</p></td></tr><tr><td><p>#2</p></td><td><p>~17:45</p></td><td><p>-17%</p></td><td><p>Lady Gaga sings the French classic “Mon truc en plumes” by Zizi Jeanmaire.</p></td></tr><tr><td><p>#3</p></td><td><p>~19:45</p></td><td><p>-16.9%</p></td><td><p>Team USA boat takes to the river, followed by Team France – the last boat en route to the Eiffel Tower.</p></td></tr><tr><td><p>#4</p></td><td><p>~20:15</p></td><td><p>-16.9%</p></td><td><p>Dionysus performs the song “Naked” (Philippe Katerine); John Lennon’s “Imagine” is sung from the middle of the Seine by Juliette Armanet; a metal horse rides down the river.</p></td></tr><tr><td><p>#5</p></td><td><p>~18:00</p></td><td><p>-16.7%</p></td><td><p>As the boats continue along the Seine, around 80 artists from the Moulin Rouge perform the famous French cabaret dance, the can-can.</p></td></tr></table><p>During the opening ceremony on July 26, between 17:30 and 21:20 UTC, traffic in France was noticeably lower than the previous week, with losses between 15% and 20%. However, there were moments with smaller drops. For example, at 19:30 UTC, traffic fell by only 4% during the middle of the boat parade of athletes on the Seine River. Right after the event, at 21:45 UTC, traffic increased by as much as 8% compared to the previous week.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jPW2UvCb66a7e2aeSnXxR/80a5f6e11522787f16ab8fbf7e4bcac0/2.png" />
          </figure><p>The opening ceremony also resulted in a higher mobile share of traffic than usual in France. At 20:45 UTC, close to the end of the ceremony, the mobile share of Internet traffic was 61%, up from 57% the previous week.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6113ZXiQpMELJGiCfyQ37K/ca508cf3864514426d92c9734ebb79c1/3.png" />
          </figure>
    <div>
      <h3>Parisians leaving town before the Olympics</h3>
      <a href="#parisians-leaving-town-before-the-olympics">
        
      </a>
    </div>
    <p>With the Olympics in Paris, many locals <a href="https://www.barrons.com/articles/where-are-parisians-going-during-the-olympics-281b7676"><u>left the city</u></a>, either for vacations or quieter places, while tourists arrived for the games. Our data shows that two French regions, Île-de-France, where Paris is located, and Grand Est, east of Paris, experienced the most significant traffic drops. The chart below illustrates daily traffic to these regions, with a noticeable decline visible during the weekend before the Olympics in Île-de-France.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vRCUb6WDR3mgPMu7nOC8i/56202de6f064c4230a6e383584f81ad7/4.png" />
          </figure><p>Analyzing the percentage change in request traffic from the previous week, Île-de-France saw its largest drops in the first week of July (July 1-7), with a 15% decrease, and the week before the Olympics started, with an 8% decrease. Interestingly, there was no percentage change in traffic during the week of the Olympics (July 22-28) – that was also the week when most visitors for the Olympics started to arrive.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4q4PHY473B331F8VmsiLEF/50295705360d40980d7b3a3831de2afd/5.png" />
          </figure><p>The daily share of mobile device traffic from France also reveals shifts in typical patterns, with increases noted especially after the June 30 weekend, indicative of vacation periods and leisure Internet use. Mobile device traffic peaked during the first Olympic weekend, reaching 53% on July 26, the day of the opening ceremony – higher than any previous Friday since June. On Sunday, July 28, mobile device traffic peaked at 58%, the highest since June.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/28XkXwMdDy7AAWPwV86b8J/dfbe55f4165a866600a2fdfa776d538f/6.png" />
          </figure>
    <div>
      <h3>Impact to Internet traffic outside of France </h3>
      <a href="#impact-to-internet-traffic-outside-of-france">
        
      </a>
    </div>
    <p>Globally, Internet traffic variations were less pronounced than in France. However, on July 26, the day of the opening ceremony, a noticeable global drop occurred during the event. This was particularly evident during two key moments previously highlighted: during song performances at 20:15 UTC, traffic dropped 3% compared to the previous week, and around the end of the ceremony, at 21:15 UTC, it dropped 2%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rYB8UfWJmCc2ueB8kcODl/ee9aa6ed78b016fd9cebf1f031099682/7.png" />
          </figure><p>Expanding our view to other countries, moments of significant drops in traffic during the opening ceremony were clearly visible. Below is a summary list of 30 countries selected based on their tally of Summer Olympic medals.</p><table><tr><td><p><b>Country</b></p></td><td><p><b>Drop in traffic (%)</b></p></td><td><p><b>Time of drop (UTC)</b></p></td></tr><tr><td><p>United States</p></td><td><p>-4%</p></td><td><p>20:15</p></td></tr><tr><td><p>Great Britain</p></td><td><p>-8%</p></td><td><p>20:15</p></td></tr><tr><td><p>France</p></td><td><p>-20%</p></td><td><p>21:15</p></td></tr><tr><td><p>Germany</p></td><td><p>-4%</p></td><td><p>20:15</p></td></tr><tr><td><p>China</p></td><td><p>-4%</p></td><td><p>21:00</p></td></tr><tr><td><p>Italy</p></td><td><p>-11%</p></td><td><p>18:15</p></td></tr><tr><td><p>Australia</p></td><td><p>-2%</p></td><td><p>20:00</p></td></tr><tr><td><p>Hungary</p></td><td><p>-5%</p></td><td><p>21:15</p></td></tr><tr><td><p>Sweden</p></td><td><p>-4%</p></td><td><p>21:15</p></td></tr><tr><td><p>Japan</p></td><td><p>-12%</p></td><td><p>21:15</p></td></tr><tr><td><p>Russia</p></td><td><p>-7%</p></td><td><p>19:45</p></td></tr><tr><td><p>Canada</p></td><td><p>-3%</p></td><td><p>20:15</p></td></tr><tr><td><p>Netherlands</p></td><td><p>-6%</p></td><td><p>21:15</p></td></tr><tr><td><p>Romania</p></td><td><p>-12%</p></td><td><p>20:00</p></td></tr><tr><td><p>Finland</p></td><td><p>-12%</p></td><td><p>17:30</p></td></tr><tr><td><p>Poland</p></td><td><p>-5%</p></td><td><p>21:15</p></td></tr><tr><td><p>South Korea</p></td><td><p>-4%</p></td><td><p>20:15</p></td></tr><tr><td><p>Cuba</p></td><td><p>-3%</p></td><td><p>19:00</p></td></tr><tr><td><p>Bulgaria</p></td><td><p>-6%</p></td><td><p>21:15</p></td></tr><tr><td><p>Switzerland</p></td><td><p>-10%</p></td><td><p>18:15</p></td></tr><tr><td><p>Denmark</p></td><td><p>-2%</p></td><td><p>21:15</p></td></tr><tr><td><p>Spain</p></td><td><p>-8%</p></td><td><p>18:15</p></td></tr><tr><td><p>Norway</p></td><td><p>-2%</p></td><td><p>21:15</p></td></tr><tr><td><p>Belgium</p></td><td><p>-5%</p></td><td><p>21:15</p></td></tr><tr><td><p>Brazil</p></td><td><p>-3%</p></td><td><p>18:15</p></td></tr><tr><td><p>Czech Republic</p></td><td><p>-10%</p></td><td><p>18:00</p></td></tr><tr><td><p>Slovakia</p></td><td><p>-11%</p></td><td><p>20:15</p></td></tr><tr><td><p>Ukraine</p></td><td><p>-2%</p></td><td><p>20:45</p></td></tr><tr><td><p>New Zealand</p></td><td><p>-9%</p></td><td><p>21:15</p></td></tr><tr><td><p>Greece</p></td><td><p>-11%</p></td><td><p>18:00</p></td></tr></table><p>Additionally, the world map below highlights the countries that experienced notable Internet traffic impacts during the opening ceremony. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xuE0PslAlP4mLSxcs0vc6/602ef3123ea83972d7271d3758f36648/8.png" />
          </figure><p><i>(Source: Cloudflare; created with Datawrapper)</i></p><p>Outside Europe, the countries with the most substantial drops were New Zealand (-9%), Uzbekistan (-12%), Argentina (-13%), and Mongolia (-20%), all experiencing declines comparable to or greater than those in Europe.</p>
    <div>
      <h3>Significant moments at the games: from Simone Biles to Olympic records</h3>
      <a href="#significant-moments-at-the-games-from-simone-biles-to-olympic-records">
        
      </a>
    </div>
    <p>Below, we highlight specific Olympic events affecting Internet traffic, starting from the first full competition day on Saturday, July 27, 2024.</p><p><b>United States</b>: The artistic gymnastics competition featuring four-time Olympic gold medalist Simone Biles notably impacted US Internet traffic more than the opening ceremony. On July 26-28, traffic dipped most significantly during Biles’ events. At 10:00 UTC, concurrent with her beam routine, traffic was already 4% lower than the previous week. It dropped by 6% at 10:45 UTC during her floor and vault routines.</p><p><b>France</b>: French swimmer Léon Marchand’s gold medal and <a href="https://x.com/nytimes/status/1817641073994256735"><u>Olympic record-setting performance</u></a> in the men’s 400-meter individual medley on July 28 had the most significant impact in the host nation. Traffic fell by 17% at 18:30 UTC during his event. However, as we noted above, the opening ceremony drove a bigger drop in traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46kATNlDmsf1XvlkVGPdM2/efd7a58db634cb7f64b462d8b178d3f4/9.png" />
          </figure><p><b>Australia</b>: During Mollie O’Callaghan’s victory in the women’s 200m freestyle on July 29, at around 20:00 UTC, Australian traffic was 5% lower than the previous week. This was larger than during the opening ceremony, which saw a 2% drop.</p><p><b>South Korea</b>: The Korean women’s archery team’s gold medal win on July 28 at 15:30 UTC led to an 8% drop in traffic, the most significant decrease noted in the country from July 26 to July 29.</p><p><b>Brazil</b>: Traffic in Brazil was 15% lower than the previous week on July 27 at around 19:30 UTC, surpassing the opening ceremony’s impact. This occurred as Brazilian swimmers Guilherme Costa and Maria Fernanda Costa competed in the men’s and women’s 400m freestyle events.</p>
    <div>
      <h3>DNS trends to official Olympic websites by country</h3>
      <a href="#dns-trends-to-official-olympic-websites-by-country">
        
      </a>
    </div>
    <p>On July 22, before the Olympics started, we <a href="https://blog.cloudflare.com/countdown-to-paris-2024-france-leads-in-olympic-web-interest"><u>reported</u></a> on the heightened interest in official Olympic websites based on request data from our <a href="http://1.1.1.1/"><u>1.1.1.1</u></a> DNS resolver. We noted France’s dominance with 24% of DNS traffic to official Olympic websites, followed by the UK (20%) and the US (17%). However, the start of the Olympics marked a shift, with the US taking the lead.</p><p>On the first full day of competitions, July 27, the US led with 16% of all DNS request traffic to official Olympic sites. This change indicates a broader spread of interest across countries during the Olympics. A dynamic version of the map below is available in our <a href="https://radar.cloudflare.com/reports/paris-2024-olympics"><u>Paris 2024 Olympics report</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59uVt1kbMAZTOagybdXbgY/1d340a17a52dba49d1c458c2607d10e4/10.png" />
          </figure><p>Here are the top 10 countries with the highest shares of DNS request traffic for the first full day of competitions, July 27, to Olympic sites (percentages rounded):</p><ol><li><p>United States: 16%</p></li><li><p>Germany: 12%</p></li><li><p>France: 9%</p></li><li><p>Vietnam: 9%</p></li><li><p>Brazil: 5%</p></li><li><p>Australia: 5%</p></li><li><p>United Kingdom: 4%</p></li><li><p>Netherlands: 4%</p></li><li><p>Canada: 3%</p></li><li><p> South Africa: 2%</p></li></ol>
    <div>
      <h3>Growth in interest as the Olympics drew closer</h3>
      <a href="#growth-in-interest-as-the-olympics-drew-closer">
        
      </a>
    </div>
    <p>Global daily DNS request traffic to official Olympic websites began climbing steadily on July 23, reaching the highest levels seen year to date. It peaked on July 28, the second full day of events, with a 509% increase over the previous week. On the opening ceremony day, traffic was already 110% higher than the previous week.</p><p>Country-specific peaks included the US, where traffic to Olympic sites surged 719% on July 28, coinciding with Simone Biles’ first competition day. In France, traffic peaked on the same day with a 391% increase, and in Germany, it skyrocketed by 2300% on July 27.</p><p>The evolving DNS ranking of Olympic site traffic by country reveals that the US overtook France on July 19. Germany ascended to the #2 spot on July 27, the first full day of competitions, while Australia climbed to #4 on July 28, and Canada’s peak day was also July 28.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hvJt3mpx2yIDMpCK9ITnL/f9ec940e0a6d7f3dfa7596e2b37eba07/11.png" />
          </figure>
    <div>
      <h3>Railway attacks on opening ceremony day cause surge in traffic</h3>
      <a href="#railway-attacks-on-opening-ceremony-day-cause-surge-in-traffic">
        
      </a>
    </div>
    <p>The opening ceremony day, July 26, was also disrupted by <a href="https://en.wikipedia.org/wiki/2024_France_railway_arson_attacks"><u>railway arson attacks</u></a> in France, <a href="https://www.theguardian.com/sport/article/2024/jul/26/vandals-target-french-rail-network-olympics-opening-ceremony"><u>affecting</u></a> some 800,000 passengers on the high-speed railway system. At 10:00 UTC, there was a significant surge in DNS traffic to public transportation websites, including high-speed railway services. Traffic spiked by 2000% compared to the previous week as users checked those websites for updates.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VQoauax2hXAnBhvYNtAgt/36f069c878b0ae76e205a9050738e463/12.png" />
          </figure>
    <div>
      <h3>DDoS attacks: always around</h3>
      <a href="#ddos-attacks-always-around">
        
      </a>
    </div>
    <p>As we’ve observed with <a href="https://blog.cloudflare.com/tag/election-security"><u>elections</u></a> in 2024, including the <a href="https://blog.cloudflare.com/2024-french-elections-political-cyber-attacks-and-internet-traffic-shifts"><u>French elections</u></a>, political parties are not the only targets of DDoS (<a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/"><u>Distributed Denial of Service</u></a>) attacks during significant events. While we haven’t seen any coordinated wave of major DDoS attacks targeting services potentially used during the Olympics in France, we have observed a few incidents.</p><p>A widely used French government website was targeted by a DDoS attack on July 29, 2024, lasting nine minutes and peaking at 207,000 requests per second at 20:34 UTC.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3vuNjigeYMPmT9yPIoZv8r/b937f2c6162c51b903cb387cfb7a069a/13.png" />
          </figure><p>Before the Olympics began, a national transportation website was also targeted by a smaller DDoS attack, lasting only a couple of minutes and peaking at 10,000 requests per second on July 21 at 10:20 UTC.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BTiio6bHujSTa2wF3PRHu/7adafe614213a89dcc272cafb33af74e/14.png" />
          </figure><p>As highlighted in our <a href="https://blog.cloudflare.com/ddos-threat-report-for-2024-q2"><u>Q2 DDoS report</u></a>, most DDoS attacks are short-lived, as exemplified by the two mentioned attacks. Also, 81% of HTTP DDoS attacks peak at under 50,000 requests per second (rps), and only 7% reach between 100,000 and 250,000 rps. While a 10,000 rps attack might seem minor to Cloudflare, it can be devastating for websites not equipped to handle such high levels of traffic.</p>
    <div>
      <h3>“Olympics” and “Paris 2024” emails on the rise</h3>
      <a href="#olympics-and-paris-2024-emails-on-the-rise">
        
      </a>
    </div>
    <p>From another cybersecurity perspective, major events often attract phishing and spam, and the Olympics are no exception. From January 2024 through late July, <a href="https://www.cloudflare.com/zero-trust/products/email-security/"><u>Cloudflare’s Cloud Email Security</u></a> service processed over a million emails containing “Olympics” or “Paris 2024” in the subject. During the week of July 22-28, coinciding with the first few days of the Olympics, there was a 304% increase in such emails compared to the previous week and a staggering 3111% increase compared to the busiest week in January.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TGRfpcQU8GKDWqooC5yR9/26d64c8e47ab73b15f5c7ae05d98bf20/15.png" />
          </figure><p>Regarding unwanted messages, spam accounted for 1.5% of all emails with “Olympics” or “Paris 2024” in the subject, while malicious emails made up 0.1% since January 2024. This means that in a sample of 1000 emails, roughly 15 would be spam and 1 would be malicious. The peak for malicious Olympic-related emails occurred the week of May 6, with 0.6% classified as malicious. Although there was a decline after this peak, rates increased slightly in July, reaching 0.4% on July 8. Despite the surge in volume during the week of July 22, only 0.05% of emails were malicious. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jMHbikIFNolrJXaLa9YcT/734418210b3c2a86291374d493cf62d4/16.png" />
          </figure><p>That same week, when the Olympics started, spam also rose to over 2% of these emails, the highest level since the 7% peak during the week of June 24.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3OTQnTrsaO0BEqpApAgdfA/454e4d38afad2d32449bd2cc94f5a9cb/17.png" />
          </figure>
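    <p>As a quick sanity check on the rates above, a few lines of Python (an illustrative sketch, not part of any Cloudflare pipeline) reproduce the sample-of-1,000 arithmetic:</p>
<pre><code># Convert the observed classification rates into expected counts
# per 1,000 "Olympics" / "Paris 2024" emails.
total = 1000
spam_rate = 0.015        # 1.5% classified as spam since January 2024
malicious_rate = 0.001   # 0.1% classified as malicious

print(round(total * spam_rate))       # 15 spam emails
print(round(total * malicious_rate))  # 1 malicious email
</code></pre>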
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The Paris 2024 Olympics started on July 26, with a clear impact on Internet traffic in different countries, most notably in France, the host nation. The significant traffic drops during key moments of the opening ceremony, and the reactive spikes following major events, highlight the ever-present interplay between physical events and the way humans interact with the online world. Few events pull attention away from the Internet and, in this case, toward TV broadcasts.</p><p>We’ve also observed how the interest in official Olympic websites surged, with clear increases in DNS traffic after the event started, in different countries, with the US ultimately taking the gold.</p><p>Regarding the July 29, 2024 <a href="https://www.theregister.com/2024/07/29/french_fiber_cables_cut/"><u>sabotage of French fiber optic cables</u></a>, we did not observe any notable disruptions of Internet traffic in France or its cities during the day.</p><p>As the games continue, we will maintain a <a href="https://radar.cloudflare.com/reports/paris-2024-olympics"><u>Paris 2024 Olympics report</u></a> on Cloudflare Radar, updating it as significant Internet trends related to the event emerge.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Olympics]]></category>
            <category><![CDATA[Sports]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">25YqXpqaqgkt7nzhZ6ccAz</guid>
            <dc:creator>João Tomé</dc:creator>
            <dc:creator>Jorge Pacheco</dc:creator>
        </item>
        <item>
            <title><![CDATA[Q2 2024 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q2-2024-internet-disruption-summary/</link>
            <pubDate>Tue, 16 Jul 2024 13:00:01 GMT</pubDate>
            <description><![CDATA[ Government directed shutdowns and cable cuts were both significant sources of Internet outages in Q2 2024. This post explores these disruptions, as well as others caused by power outages, maintenance, ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare’s network spans more than 320 cities in over 120 countries, where we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions. Thanks to <a href="https://radar.cloudflare.com/">Cloudflare Radar</a> functionality released earlier this year, we can explore the impact from a <a href="https://developers.cloudflare.com/radar/glossary/#bgp-announcements">routing</a> perspective, as well as a traffic perspective, at both a <a href="https://twitter.com/CloudflareRadar/status/1768654743742579059">network</a> and <a href="https://twitter.com/CloudflareRadar/status/1773704264650543416">location</a> level.</p><p>As we have seen in previous years, nationwide exams take place across several MENA countries in the second quarter, and with them come <a href="#governmentdirected">government directed Internet shutdowns</a>. <a href="#cablecuts">Cable cuts</a>, both terrestrial and submarine, caused Internet outages across a number of countries, with the ACE submarine cable being a particular source of problems. <a href="#maintenance">Maintenance</a>, <a href="#poweroutages">power outages</a>, and <a href="#technicalproblems">technical problems</a> also disrupted Internet connectivity, as did <a href="#unknown">unknown</a> issues. And as we have frequently seen in the two-plus years since the conflict began, Internet connectivity in Ukraine suffers as a result of Russian <a href="#attacks">attacks</a>.</p><p>As we have noted in the past, this post is intended as a summary overview of observed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter.</p><h2>Government directed</h2>
    <div>
      <h3>Syria, Algeria, Iraq</h3>
      <a href="#syria-algeria-iraq">
        
      </a>
    </div>
    <p>Each spring, governments in several countries in the Middle East and North Africa (MENA) region order local telecommunications providers to shut down or disrupt Internet connectivity across the country in an effort to prevent students from cheating on national secondary and high school exams. These shutdowns/disruptions generally occur for several hours per day over a multi-week period. We covered such events in <a href="/exam-internet-shutdowns-iraq-algeria">2023</a>, <a href="/syria-sudan-algeria-exam-internet-shutdown">2022</a>, and <a href="/syria-exam-related-internet-shutdowns/">2021</a>, as they occurred in locations including Syria, Sudan, Algeria, and Iraq.</p><p>In June, we published <a href="/syria-iraq-algeria-exam-internet-shutdown"><i>Exam-ining recent Internet shutdowns in Syria, Iraq, and Algeria</i></a>, which examined the daily Internet shutdowns that took place in Iraq and Syria, as well as the two multi-hour daily disruptions in Algeria, which appeared to be pursuing a content blocking strategy, rather than a full nationwide shutdown. The post examined the impact that these shutdowns have on Internet traffic, and also analyzed routing information and traffic from other Cloudflare services in an effort to better understand how these shutdowns are being implemented.</p><p>In addition to the shutdowns covered in the previously referenced blog post, Iraq implemented a second round of shutdowns that started on June 23, and ran through at least July 14. Some of these shutdowns impacted the same set of networks seen in the first round, and some impacted networks in the autonomous Kurdistan region in the north.</p><p>Among the latter set, <a href="https://radar.cloudflare.com/as206206">AS206206 (Kurdistan Net)</a>, <a href="https://radar.cloudflare.com/as59625">AS59625 (Korek Telecom)</a>, <a href="https://radar.cloudflare.com/as48492">AS48492 (IQ-Online)</a>, and <a href="https://radar.cloudflare.com/as21277">AS21277 (Newroz Telecom)</a> all implemented shutdowns on June 23, June 26, June 30, July 3, July 7, and July 10, between 06:00 - 08:00 local time (03:00 - 05:00 UTC).</p>
<p>Outside the autonomous Kurdistan region, networks including <a href="https://radar.cloudflare.com/as59588">AS59588 (Zainas)</a>, <a href="https://radar.cloudflare.com/as199739">AS199739 (Earthlink)</a>, <a href="https://radar.cloudflare.com/as203214">AS203214 (HulumTele)</a>, <a href="https://radar.cloudflare.com/as51684">AS51684 (Asiacell)</a>, and <a href="https://radar.cloudflare.com/as58322">AS58322 (Halasat)</a> implemented Internet shutdowns between 06:00 - 08:00 local time (03:00 - 05:00 UTC) on June 23, June 24, June 26, June 27, June 29, June 30, July 1, and July 2.</p>
<p>Both sets of shutdowns reviewed above appeared to have followed the same approach as the first round covered in the earlier blog post.</p>
    <div>
      <h3>Kenya, Burundi, Uganda, Rwanda, Tanzania</h3>
      <a href="#kenya-burundi-uganda-rwanda-tanzania">
        
      </a>
    </div>
    <p>Concerns over a potential Internet shutdown during planned protests against tax increases proposed in “<a href="https://en.wikipedia.org/wiki/Kenya_Finance_Bill_2024">Finance Bill 2024</a>” by the Kenyan government led to the publication of a joint statement signed by multiple organizations. The statement strongly urged the Kenyan government to refrain from enforcing any Internet shutdowns or information controls, and highlighted the “disastrous economic effects” such a move could have. In response, the Communications Authority of Kenya <a href="https://x.com/CA_Kenya/status/1805311316719993274">issued a press release</a> stating that “<i>For the avoidance of doubt, the Authority has no intention whatsoever to shut down Internet traffic or interfere with the quality of connectivity. Such actions would be a betrayal of the Constitution as a whole, the freedom of expression in particular and our own ethos.</i>”</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/18VUH92c3DCiuL6Tlb9Ex1/d19d8a28302c53690fdd7ee8904d85d1/10.png" />
            
            </figure><p>As <a href="https://en.wikipedia.org/wiki/Kenya_Finance_Bill_protests">protests escalated</a> on June 25, Internet traffic in <a href="https://radar.cloudflare.com/ke">Kenya</a> dropped at 16:30 local time (13:30 UTC). Initially, this outage was thought to be due to issues with <a href="https://www.submarinecablemap.com/country/kenya">one or more undersea cables</a> that provide international connectivity to the country, with the potential cause supported by social media posts from <a href="https://x.com/SafaricomPLC/status/1805615681951375595">Safaricom</a> and <a href="https://x.com/AIRTEL_KE/status/1805635373680193836">Airtel</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4nseWjxCypENox62FqSzFt/ca5f0eb0e634fca9597c1482b0834459/Screenshot-2024-07-14-at-10.56.25-PM.png" />
            
            </figure><p>Similar concurrent drops in Internet traffic were observed in <a href="https://radar.cloudflare.com/bi">Burundi</a>, <a href="https://radar.cloudflare.com/ug">Uganda</a>, <a href="https://radar.cloudflare.com/rw">Rwanda</a>, and <a href="https://radar.cloudflare.com/tz">Tanzania</a>, as shown below. Issues with submarine cables connected to one country can impact Internet connectivity in other countries if there is a dependency on that country/cable for upstream Internet connectivity. As such, the observed disruptions in those four countries were not that unusual. To that end, a (subsequently deleted) <a href="https://twitter.com/mtnug/status/1805707549385044057">post on X from MTN Uganda</a> noted: "<i>Our esteemed customers, We are experiencing a degraded service on all our internet services due to an outage caused by our connectivity supply through Kenya. Our technical teams and partners are working jointly to resolve the issue in the shortest time possible. In the interim, we kindly advise our customers to use *165# to access Mobile Money and other app based services. Thank you.</i>"</p>
<p>However, other participants in the Internet infrastructure community in Africa called the undersea cable outage explanation into question. Kyle Spencer, Executive Director of the <a href="https://www.uixp.co.ug/">Uganda Internet eXchange Point</a>, <a href="https://x.com/kyleville/status/1805614461190906295">posted on X</a> that “<i>I am told the Kenyan government ordered sea cable landing stations to disconnect circuits.</i>” Ben Roberts, Group CTIO at <a href="https://liquid.tech/">Liquid Intelligent Technologies</a> (a pan-African network infrastructure provider), <a href="https://x.com/benliquidkenya/status/1805851264082751756">posted</a> “<i>No cables are damaged this week.</i>” In addition, outages on undersea cables are rarely, if ever, resolved in a matter of hours, as this disruption was – they frequently last for days or weeks.</p><p>On June 26, Safaricom’s CEO <a href="https://www.standardmedia.co.ke/sports/business/article/2001497896/why-there-was-network-outage-during-protests-safaricom-ceo-explains">claimed</a> “This outage was occasioned by reduced bandwidth on some cables that carry Internet traffic”, contradicting the company’s original claim. No additional information was forthcoming from Airtel or the Communications Authority of Kenya, but as noted above, some within the industry believe that the disruption that impacted connectivity in Kenya, Burundi, Uganda, Rwanda, and Tanzania was directed by the government of Kenya, and was not caused by submarine cable outages.</p><h2>Cable cuts</h2>
    <div>
      <h3>Haiti</h3>
      <a href="#haiti">
        
      </a>
    </div>
    <p>At 17:36 local time (21:36 UTC) on April 28, <a href="https://x.com/DigicelHT/status/1784698298290376936">Digicel Haiti posted an “important note” on X</a> that stated in part (translated) “<i>On April 27, 2024, the company suffered several attacks on its international optical infrastructure in the Drouya area on National Road #1. The optical fiber was damaged by the impact of cartridges after the armed clashes in the area for a few days. It affected several services such as internet (data), SMS, MonCash and international calling. For now, we are happy to inform the population that all services are restored to 100%.</i>” The graph below shows the impact of the fiber damage, <a href="https://radar.cloudflare.com/as27653">with AS27653 (Digicel Haiti)</a> suffering an Internet outage lasting nearly 24 hours, from around 17:30 local time (21:30 UTC) on April 27 through approximately 16:00 local time (20:00 UTC) on April 28, after which traffic quickly recovered.</p><p>Then on May 3, the Director General of Digicel Haiti <a href="https://x.com/jpbrun30/status/1786368179440079102">posted on X</a> that (translated) “<i>Digicel is informing the general public that it suffered two more damages to its international fiber infrastructure at 2am this morning. We have restored Moncash services, SMS, and Fiber Optic connections. Our crews are already on their way to address the apparent landslide in the Canaan area.</i>” The disruption caused by this fiber damage lasted for approximately eight hours, between 02:15 - 10:30 local time (06:15 - 14:30 UTC), and as seen in the graph below, appeared to have a nominal impact on traffic.</p>
    <div>
      <h3>Kenya, Madagascar, Malawi, Mozambique, Rwanda, Tanzania, Uganda</h3>
      <a href="#kenya-madagascar-malawi-mozambique-rwanda-tanzania-uganda">
        
      </a>
    </div>
    <p>On Sunday, May 12, issues with the <a href="https://www.submarinecablemap.com/submarine-cable/eastern-africa-submarine-system-eassy">EASSy</a> and <a href="https://www.submarinecablemap.com/submarine-cable/seacomtata-tgn-eurasia">Seacom</a> submarine cables again disrupted connectivity to East Africa, impacting a number of countries previously affected by a set of cable cuts that occurred nearly three months earlier. Insight into these earlier cable cuts and the initial impact of May’s cable damage was covered in our <a href="/east-african-internet-connectivity-again-impacted-by-submarine-cable-cuts"><i>East African Internet connectivity again impacted by submarine cable cuts</i></a> blog post.</p><p>Traffic levels across a number of the impacted countries dropped just before 11:00 local time (08:00 UTC). The magnitude of the initial impact varied by country, with traffic initially dropping by 10-25% in <a href="https://radar.cloudflare.com/traffic/ke?dateStart=2024-05-12">Kenya</a>, <a href="https://radar.cloudflare.com/traffic/ug?dateStart=2024-05-12">Uganda</a>, <a href="https://radar.cloudflare.com/traffic/mg?dateStart=2024-05-12">Madagascar</a>, and <a href="https://radar.cloudflare.com/traffic/mz?dateStart=2024-05-12">Mozambique</a>, while traffic in <a href="https://radar.cloudflare.com/traffic/rw?dateStart=2024-05-12">Rwanda</a>, <a href="https://radar.cloudflare.com/traffic/mw?dateStart=2024-05-12">Malawi</a>, and <a href="https://radar.cloudflare.com/traffic/tz?dateStart=2024-05-12">Tanzania</a> dropped by one-third or more compared to the previous week. The overall impact was most significant in Tanzania, Madagascar, and Rwanda, as seen in the graphs below. Traffic returned to expected levels at various times over the following week, ranging from a day and a half later (May 13) in Kenya to a week later (May 19) in Rwanda.</p><p>Repairs to the EASSy and Seacom cables <a href="https://www.linkedin.com/posts/philippe-devaux-218423199_31may24-east-africa-eassy-seacom-subsea-activity-7202342753345650688-ll0q?utm_source=share&amp;utm_medium=member_desktop">were completed on May 31</a>. Repairs to the cables damaged in February were <a href="https://www.linkedin.com/posts/philippe-devaux-218423199_09jul24-red-sea-subsea-cables-tentative-activity-7216513121945841664-towG?utm_source=share&amp;utm_medium=member_desktop">ongoing as of July 9</a>, as their location in a war zone complicates repair efforts.</p>
    <div>
      <h3>Chad</h3>
      <a href="#chad">
        
      </a>
    </div>
    <p>A <a href="https://x.com/LeNdjam_Post/status/1794475979567505735">reported</a> fiber optic cable cut in Cameroon disrupted Internet connectivity for customers of <a href="https://radar.cloudflare.com/as327802">Moov Africa TChad</a> on May 25. The outage lasted three hours, between 15:15 -18:15 local time (14:15 - 17:15 UTC), with the impact visible at a country level as well. Routing was disrupted too, as the number of IPv4 /24 prefixes (256 IPv4 addresses) announced by Moov Africa Tchad fell from eight to three during the disruption.</p><p>The event was similar to one that <a href="https://www.facebook.com/moovafrica.td/posts/pfbid0kB9W5CkhVJqBq34agPWqG81yeCfLBijKYc6WiLDKLE79nPmhie4T9idZVStc8f6Xl">occurred on January 10</a>, when Moov Africa Tchad and country-level traffic was disrupted for over 12 hours “due to a cut in the optical fiber coming from Cameroon through which Chad has access to the Internet”. During that event, significant volatility was also observed from a routing perspective, as the volume of announced IPv4 address space shifted frequently at a <a href="https://radar.cloudflare.com/routing/as327802?dateStart=2024-01-10&amp;dateEnd=2024-01-11">network</a> and <a href="https://radar.cloudflare.com/routing/td?dateStart=2024-01-10&amp;dateEnd=2024-01-11">country</a> level during the disruption. As we noted last quarter, as a landlocked country, Chad is dependent on terrestrial Internet connections to/through neighboring countries, and the <a href="https://afterfibre.nsrc.org/">AfTerFibre cable map</a> illustrates Chad’s reliance on limited cable paths through Cameroon and Sudan.</p>
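    <p>Prefix withdrawals like this are visible in public BGP data. As a rough illustration, the sketch below counts the IPv4 prefixes an AS currently announces using RIPEstat’s public “announced-prefixes” endpoint; treat the exact response fields as assumptions to verify against the RIPEstat documentation rather than a definitive client.</p>
<pre><code>import json
import urllib.request

def announced_ipv4_prefixes(asn: str) -> int:
    """Count IPv4 prefixes currently announced by an AS, per RIPEstat."""
    url = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource={asn}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # IPv6 prefixes contain ':'; everything else is IPv4.
    return sum(1 for p in data["data"]["prefixes"] if ":" not in p["prefix"])

# During the May 25 outage, a count like this for Moov Africa Tchad
# (AS327802) would have fallen from eight /24s to three.
print(announced_ipv4_prefixes("AS327802"))
</code></pre>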
    <div>
      <h3>Gambia, Mauritania, Senegal</h3>
      <a href="#gambia-mauritania-senegal">
        
      </a>
    </div>
    <p>A <a href="https://x.com/CloudflareRadar/status/1798546000258417082">reported</a> “network interruption” on the <a href="https://www.submarinecablemap.com/submarine-cable/africa-coast-to-europe-ace">Africa Coast to Europe (ACE) submarine cable</a> disrupted traffic across networks in the Gambia, Mauritania, and Senegal on June 5. <a href="https://radar.cloudflare.com/as25250">AS25250 (Gamtel)</a>, <a href="https://radar.cloudflare.com/as29544">AS29544 (Mauritel)</a>, and <a href="https://radar.cloudflare.com/as37649">AS37649 (Free/Tigo)</a> all saw traffic drop around 23:00 local time (23:00 UTC). As seen in the graphs below, the outage lasted for nearly 11 hours, with traffic recovering just 10:00 local time on June 6 (10:00 UTC). Mauritel saw a near complete outage, while Gamtel and Free/Tigo saw less severe impacts, possibly because they were able to <a href="https://x.com/Gamtel/status/1798513818873831562">shift traffic to back up links</a>.</p><h2>Maintenance</h2>
    <div>
      <h3>Guinea, Gambia, Sierra Leone, Liberia</h3>
      <a href="#guinea-gambia-sierra-leone-liberia">
        
      </a>
    </div>
    <p>Above, we discussed an unexpected network interruption on the ACE submarine cable that caused outages across multiple countries on June 5. However, two months earlier, a planned outage for repair work on the cable also disrupted connectivity across multiple African countries. A <a href="https://twitter.com/jeanfrancis/status/1777231780002615593">communiqué</a> issued by the Ministry of Posts, Telecommunications and the Digital Economy in Guinea noted in part (translated) “<i>...the ACE (Africa Coast to Europe) network will undergo a planned outage on April 8, 2024, between midnight and 2:00 a.m. morning in the following countries: Guinea, Senegal, Gambia, Sierra Leone and Liberia. This total outage of approximately 2 hours will affect Internet traffic and international calls.</i>”</p><p>The graphs below show the impact to traffic in the listed countries for the planned two-hour repair window, though it appears that traffic did not return fully to expected levels after the repair window concluded – it is unclear why it remained slightly depressed. In addition, despite being listed as one of the impacted countries, no impact to traffic was observed in <a href="https://radar.cloudflare.com/sn?dateStart=2024-04-07&amp;dateEnd=2024-04-08">Senegal</a>.</p>
    <div>
      <h3>Guinea</h3>
      <a href="#guinea">
        
      </a>
    </div>
    <p>Rounding out a trifecta of entries about the ACE submarine cable, planned maintenance work on the cable by <a href="https://guilab.com.gn/">GUILAB</a> reportedly caused a multi-hour outage at <a href="https://radar.cloudflare.com/as37461">AS37461 (Orange Guinea)</a> and at a country level as well, lasting from 12:15 - 15:45 local time (12:15 - 15:45 UTC). (GUILAB is the company in charge of managing the capacity allocated to Guinea on the ACE submarine cable.) The maintenance work was reported by Orange Guinea in two X posts (<a href="https://web.archive.org/web/20240601134921/https://twitter.com/orangeguinee_gn/status/1796901855705907676">1</a>, <a href="https://web.archive.org/web/20240601134922/https://twitter.com/orangeguinee_gn/status/1796901858444755127">2</a>), although these posts were subsequently deleted.</p><h2>Power outage</h2>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>At 18:30 local time (15:30 UTC) on May 2, <a href="https://x.com/KenyaPower_Care/status/1786058653058961589">Kenya Power posted a “Power Outage Alert” on X</a> that stated “<i>At 5:40 PM (EAT) today, Thursday, 2nd May 2024, we experienced a system disturbance on the grid, resulting in power supply disruption in most parts of the country.</i>” The graph below shows the resultant impact on Internet connectivity in the country, with traffic dropping sharply between 17:30 - 17:45 local time (14:30 - 14:45 UTC). The drop in traffic lasted until approximately 21:30 local time (18:30 UTC), the same time that <a href="https://x.com/KenyaPower_Care/status/1786104840990437749">Kenya Power posted a “Power Supply Restoration” notice on X</a>, highlighting the restoration of power to parts of the country. Although the post-outage spike seen in the graph would suggest pent-up demand for online content, a <a href="https://radar.cloudflare.com/ke?dateStart=2024-04-28&amp;dateEnd=2024-05-04">longer-term view</a> of Kenya's Internet traffic shows traffic peaks at the same time (22:00 local time, 19:00 UTC) during the preceding two days as well.</p>
    <div>
      <h3>Ecuador</h3>
      <a href="#ecuador">
        
      </a>
    </div>
    <p>A nationwide <a href="https://www.cnn.com/2024/06/19/americas/ecuador-nationwide-blackout-intl-latam/index.html">power outage in Ecuador on June 19</a> impacted hospitals, homes, and the subway, in addition to causing a major disruption to Internet connectivity. The graph below shows Ecuador’s Internet traffic dropping sharply just after 15:00 local time (20:00 UTC). A <a href="https://x.com/RobertoLuqueN/status/1803531032978661816">post on X from Public Works Minister Roberto Luque</a> explained (translated) “<i>The immediate report that we received from CENACE is that there is a failure in the transmission line that caused a cascade disconnection, so there is no energy service on a national scale.</i>” A subsequent post pointed at a lack of investment in the underlying systems, and noted that as of 18:41 local time (23:41 UTC), “<i>95% of the energy has already been restored</i>”. After the initial sharp drop, traffic began to recover fairly quickly, and was effectively back to expected levels by the stated time.</p>
    <div>
      <h3>Albania, Bosnia, Montenegro</h3>
      <a href="#albania-bosnia-montenegro">
        
      </a>
    </div>
    <p>A sudden increase in power consumption driven by high temperatures, as well as electrical systems being impacted by the heat, caused a <a href="https://www.reuters.com/world/europe/power-blackout-hits-montenegro-bosnia-albania-croatias-adriatic-coast-2024-06-21/">widespread power outage</a> across Albania, Bosnia, and Montenegro on June 21. The outage <a href="https://www.msn.com/en-gb/news/world/several-countries-across-europe-have-been-hit-by-a-massive-power-cut/ar-BB1oDYDy">reportedly</a> originated in Montenegro after a 400-kilovolt transmission line exploded. While power outages are generally more localized to a single country, or region within a country, power distribution systems are linked across Balkan countries as part of the <a href="https://international-partnerships.ec.europa.eu/policies/global-gateway/electricity-corridor-western-balkans_en">Trans-Balkan Electricity Corridor</a>.</p><p>Published reports (<a href="https://www.msn.com/en-gb/news/world/several-countries-across-europe-have-been-hit-by-a-massive-power-cut/ar-BB1oDYDy">MSN</a>, <a href="https://www.reuters.com/world/europe/power-blackout-hits-montenegro-bosnia-albania-croatias-adriatic-coast-2024-06-21/">Reuters</a>) noted that electrical networks went down between 12:00 - 13:00 local time (10:00 - 11:00 UTC), and that electricity suppliers in the impacted countries started restoring power by mid-afternoon, and had it largely restored by the evening. The graphs below show traffic from Albania, Bosnia, and Montenegro starting to drop around 12:00 local time (10:00 UTC), reaching its nadir in Albania and Bosnia at 12:30 local time (10:30 UTC) and at 13:00 local time (11:00 UTC) in Montenegro. Traffic recovered gradually over the next several hours as power was restored, returning to expected levels by 15:30 local time (13:30 UTC).</p><p>Croatia was reportedly impacted by the power outage as well, but <a href="https://radar.cloudflare.com/hr?dateStart=2024-06-21&amp;dateEnd=2024-06-21">no adverse impact to traffic</a> at a country level is visible during the timeframe that connectivity in the other countries was disrupted.</p><h2>Military action</h2>
    <div>
      <h3>Ukraine</h3>
      <a href="#ukraine">
        
      </a>
    </div>
    <p>During the two-plus years of the Russia-Ukraine conflict, Ukraine’s power grid has been a frequent target for Russian air attacks. When damage to Ukraine’s electrical power infrastructure occurs as a result of these attacks, Internet connectivity is also disrupted. <a href="https://apnews.com/article/ukraine-power-grid-russian-attacks-c763050237bcc1388747283bf336f8ad">Attacks on May 21</a> caused power outages across a number of areas in Ukraine. The most significant impact was in Sumy, where traffic dropped as low as 82% below the previous week at 00:00 on May 22 local time (21:00 UTC). As the graphs below illustrate, traffic was also lower than the previous week for several hours in Kyiv, Kharkiv, and Vinnytsia, with traffic returning to expected levels by around 08:00 local time (05:00 UTC) on May 22.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6glDuBLv30DAkA6PoUrx3S/0bd46f992616c555f1a339f17f4202f3/11.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pmAMdj3lv6eOeXHyB9D9Z/39888ca8908236c61a9e012bd265f4f2/12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640mnWgGRtUsIHSeoILNWV/63ead0b3dfc430f651eb8dcc6cd02c3f/13.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6afAZfCn0vxaQbr81eAFGv/6bb391de75e320ac0a5f851df9733a49/14.png" />
            
            </figure><h2>Technical problems</h2>
    <div>
      <h3>Malaysia</h3>
      <a href="#malaysia">
        
      </a>
    </div>
    <p>As we’ve covered in previous quarterly posts, Internet outages and disruptions aren’t always due to significant wide-scale events like severe weather, power outages, or cable cuts. Sometimes more mundane technical issues can cause problems when users try to access the Internet. One example of this occurred on April 15 in Malaysia, when customers of <a href="https://radar.cloudflare.com/as9930">Time Internet</a> experienced a network outage for nearly two hours. The company explained the reason for the outage in a <a href="https://www.facebook.com/TimeInternet/posts/pfbid0RbJE44cgJvxA9FQSKFTVoe4NbJBdGuwkyXjuB5URiAW78zmgS5x1V8YPUt91ym9Al">contrite post on their Facebook page</a>, stating in part “<i>This Internet service outage was by far the worst in our history - affecting approximately 40% of our customers. … At 5.38pm today, both our primary and secondary Secure DNS servers became unreachable. This means that any browser or service requiring a DNS address resolution was not able to reach its intended site.</i>” Because subscribers could not reach Time Internet’s DNS resolvers, they were unable to resolve hostnames for Internet services, sites, and applications, including those delivered by Cloudflare. This resulted in the drop in traffic seen in the graph below, which started just after 17:00 local time (09:00 UTC), and began to recover approximately an hour later. The company did not provide any additional information on what caused the DNS servers to fail.</p>
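    <p>For clients, one partial mitigation for this failure mode is a fallback resolver. The sketch below, written with the third-party dnspython package purely as an illustration, shows the general idea: query a primary resolver first, then fall back to a public resolver such as 1.1.1.1 if it is unreachable. The primary address used here is a documentation-range placeholder, not Time Internet’s actual resolver.</p>
<pre><code>import dns.exception
import dns.resolver  # third-party package: dnspython

def resolve_with_fallback(hostname: str) -> str:
    """Try the primary (ISP) resolver, falling back to 1.1.1.1 on failure."""
    # "192.0.2.53" is a documentation-range placeholder for an ISP resolver.
    for nameserver in ["192.0.2.53", "1.1.1.1"]:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            answer = resolver.resolve(hostname, "A", lifetime=2)
            return answer[0].to_text()
        except dns.exception.DNSException:
            continue  # this resolver is unreachable; try the next one
    raise RuntimeError(f"could not resolve {hostname}")

print(resolve_with_fallback("www.cloudflare.com"))
</code></pre>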
    <div>
      <h3>Nepal</h3>
      <a href="#nepal">
        
      </a>
    </div>
    <p>In Nepal, a number of local Internet service providers including <a href="https://radar.cloudflare.com/as45650">AS45650 (Vianet)</a> and <a href="https://radar.cloudflare.com/as139922">AS139922 (Dishhome)</a> rely on Indian provider <a href="https://radar.cloudflare.com/as9498">Bharti Airtel</a> for upstream connectivity, enabling them to reach the rest of the Internet. A <a href="https://kathmandupost.com/money/2024/05/01/isps-warn-of-possible-internet-disruption">published report</a> underscores the reliance, noting “<i>Nepali ISPs buy 70 percent of their internet from Airtel.</i>”</p><p>On April 25, these ISPs <a href="https://www.nepalitelecom.com/latest-internet-shutdown-saga-in-nepal">warned</a> that their services could be interrupted because the Nepali government had not provided them with foreign exchange services that would enable them to pay bandwidth vendors such as Airtel, to whom they reportedly owed USD 30 million. On May 1, Airtel informed the delinquent Nepali providers that Internet services may be interrupted at any time due to the overdue payment, and on May 2, Airtel took that step. The graphs below show Vianet’s traffic dropping to near zero at 16:15 local time (10:30 UTC), recovering to expected levels six hours later. An hour after Vianet’s drop, at 17:15 local time (11:30 UTC), Dishhome’s traffic dropped significantly, though not as severely as Vianet’s. Dishhome’s traffic also recovered approximately six hours later.</p><p>Dishhome may not have experienced a near-complete outage like Vianet did because Bharti Airtel is one of <a href="https://radar.cloudflare.com/routing/as132799?dateStart=2024-05-02&amp;dateEnd=2024-05-02">four upstream providers used by its parent company</a>, whereas Bharti Airtel is <a href="https://radar.cloudflare.com/routing/as45650?dateStart=2024-05-02&amp;dateEnd=2024-05-02">one of Vianet's two upstream providers</a>.</p><p>A month later, on June 3, <a href="https://radar.cloudflare.com/as45650">AS45650 (Vianet)</a> and <a href="https://radar.cloudflare.com/as17501">AS17501 (Worldlink)</a> in Nepal experienced Internet disruptions that were <a href="https://myrepublica.nagariknetwork.com/news/internet-slowdown-across-nepal-due-to-airtel-network-issues/">reportedly</a> caused by routing issues on Bharti Airtel’s network. On Worldlink, a drop in traffic occurred between 12:15 - 14:00 local time (06:30 - 08:15 UTC), while on Vianet, the loss of traffic took place between 12:15 - 13:15 local time (06:30 - 07:30 UTC).</p><h2>Unknown</h2><p>Most of the Internet disruptions covered in this blog post series have a known root cause, whether admitted/stated by the impacted provider(s) or closely associated with a real-world event (severe weather, power outage, etc.). However, other disruptions are observed and even publicized by the impacted provider, but no underlying reason for the outage is ever made public.</p>
    <div>
      <h3>Malaysia</h3>
      <a href="#malaysia">
        
      </a>
    </div>
    <p>On May 21, <a href="https://radar.cloudflare.com/as10030">CelcomDigi (AS10030)</a> <a href="https://x.com/CelcomDigi/status/1792912132406657448">posted on X</a> that it was experiencing an outage on its network, and that it was working to resolve the issue as soon as possible. However, just 12 minutes later, it <a href="https://x.com/CelcomDigi/status/1792915174468301241">published a second post</a> stating that it had fully restored Celcom Internet service. These posts were made at 21:35 and 21:47 local time (13:35 and 13:47 UTC), respectively. However, as the graph below shows, traffic volumes had returned to expected levels over an hour earlier, as the observed Internet disruption on Celcom’s network lasted between 18:00 - 20:15 local time (10:00 - 12:15 UTC). (Note that the second disruption shown in the graph below was due to an internal Cloudflare data pipeline issue, and not any sort of problem with Celcom’s network.)</p>
    <div>
      <h3>Starlink</h3>
      <a href="#starlink">
        
      </a>
    </div>
    <p>SpaceX Starlink’s satellite Internet service is unique in that it has an international subscriber base, so outages on its network have a more wide-reaching impact than issues with an ISP that covers a single country. At 01:59 UTC on May 29, <a href="https://radar.cloudflare.com/as14593">Starlink</a> <a href="https://x.com/Starlink/status/1795636172972314730">shared on X</a> that it was currently experiencing a network outage, and that it was actively implementing a solution. Twenty-eight minutes later, it <a href="https://x.com/Starlink/status/1795643144094285883">posted</a> “<i>The network issue has been fully resolved.</i>” This brief outage is visible in the graph below as a slight dip in traffic. However, what is particularly interesting is the spike in traffic to Cloudflare from Starlink’s network following the resolution of the outage. The sharp increase and rapid decline of the traffic curve after service was restored suggests that it may be related to an automated connectivity check of some kind, rather than pent-up user demand for content.</p>
    <div>
      <h3>Chad</h3>
      <a href="#chad">
        
      </a>
    </div>
    <p>A near-complete Internet outage was observed in <a href="https://radar.cloudflare.com/td">Chad</a> on June 5 between 08:15 - 12:00 local time (07:15 - 11:00 UTC), as seen in the graph below. Routing was also impacted, as the number of IPv4 /24 address blocks (256 IPv4 addresses) announced by network providers in the country dropped by as much as 75% during the outage.</p><p>A <a href="https://fr.apanews.net/news/tchad-linternet-coupe-pendant-des-heures/">news item covering the outage</a> noted that only Starlink subscribers retained Internet access during the outage. It also noted that Chad has faced recurring Internet disruptions since 2016, either because of problems with fiber-optic cables, or due to government directed shutdowns in the name of national security. It is unclear what ultimately caused this particular outage.</p>
    <div>
      <h3>India</h3>
      <a href="#india">
        
      </a>
    </div>
    <p>With an <a href="https://www.businessinsider.in/business/telecom/news/reliance-jio-and-airtel-add-nearly-5-million-subscribers-in-january-2024/articleshow/109040220.cms">estimated subscriber base in excess of 460 million</a>, any Internet disruption affecting <a href="https://radar.cloudflare.com/as55836">Reliance Jio’s network (AS55836)</a> is going to have a widespread impact across India. On June 18, Reliance Jio experienced two disruptions that occurred between 13:15 - 17:15 local time (07:45 - 11:45 UTC). Each disruption lasted less than an hour, and dropped traffic levels to approximately half of those seen at the same time a week prior. <a href="https://www.editorji.com/tech-news/reliance-jio-down-big-outage-disrupts-services-1718706733637">Both mobile and fiber connectivity were affected</a>, and no additional information has been provided by Reliance Jio regarding the root cause of the connectivity issues.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As we become increasingly dependent on reliable Internet connectivity, we must recognize that this connectivity is itself reliant on a complex and interconnected foundation of physical, technical, and political factors. A failure in any one of these foundational components, whether due to a cable cut, power outage, misconfiguration, or government action, can have a significant impact, disrupting Internet connectivity for millions of users, potentially across multiple countries. While the resilience and reliability of the physical and technical components can be improved through redundancy and best practices, political factors have arguably proven to be the hardest to address. However, organizations like <a href="https://www.accessnow.org/">AccessNow</a>, through their <a href="https://www.accessnow.org/campaign/keepiton/">#KeepItOn</a> campaign, mobilize people, communities, and civil society actors globally to fight against government-directed Internet shutdowns, which can have <a href="https://pulse.internetsociety.org/en/netloss/">significant financial consequences</a>.</p><p>Visit <a href="https://radar.cloudflare.com/">Cloudflare Radar</a> for additional insights around Internet disruptions, routing issues, Internet traffic trends, security and attacks, and Internet quality. Follow us on social media at <a href="https://x.com/CloudflareRadar">@CloudflareRadar</a> (X), <a href="https://noc.social/@cloudflareradar">noc.social/@cloudflareradar</a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com">radar.cloudflare.com</a> (Bluesky), or <a>contact us via e-mail</a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">5ka42ShR5MS2QH9vV2Dpn</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Exam-ining recent Internet shutdowns in Syria, Iraq, and Algeria]]></title>
            <link>https://blog.cloudflare.com/syria-iraq-algeria-exam-internet-shutdown/</link>
            <pubDate>Fri, 21 Jun 2024 13:00:02 GMT</pubDate>
            <description><![CDATA[ Similar to actions taken over the last several years, governments in Syria, Iraq, and Algeria have again disrupted Internet connectivity nationwide in an attempt to prevent cheating on exams. We investigate how these disruptions were implemented, and their impact ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The practice of cheating on exams (or at least attempting to) is presumably as old as the concept of exams itself, especially when the results of the exam can have significant consequences for one’s academic future or career. As access to the Internet became more ubiquitous with the growth of mobile connectivity, and communication easier with an assortment of social media and messaging apps, a new avenue for cheating on exams emerged, potentially facilitating the sharing of test materials or answers. <a href="https://www.theguardian.com/technology/2016/may/18/iraq-shuts-down-internet-to-stop-pupils-cheating-in-exams">Over the last decade</a>, some governments have reacted to this perceived risk by taking aggressive action to prevent cheating, ranging from targeted DNS-based blocking/filtering to multi-hour nationwide shutdowns across multi-week exam periods.</p><p>Syria and Iraq are well-known practitioners of the latter approach, and we have covered past exam-related Internet shutdowns in Syria (<a href="/syria-exam-related-internet-shutdowns">2021</a>, <a href="/syria-sudan-algeria-exam-internet-shutdown">2022</a>, <a href="/q2-2023-internet-disruption-summary">2023</a>) and Iraq (<a href="/syria-sudan-algeria-exam-internet-shutdown">2022</a>, <a href="/exam-internet-shutdowns-iraq-algeria">2023</a>) here on the Cloudflare blog. It is now mid-June 2024; exams in both countries took place over the last several weeks, and with those exams came regular nationwide Internet shutdowns. In addition, Baccalaureate exams also took place in Algeria, and we have written about related Internet disruptions there in the past (<a href="/syria-sudan-algeria-exam-internet-shutdown">2022</a>, <a href="/exam-internet-shutdowns-iraq-algeria">2023</a>). However, in contrast to the single daily shutdowns in Syria and Iraq, the Algerian government opted instead for two multi-hour disruptions each day – one in the morning, one in the afternoon – and appears to be pursuing a content blocking strategy, rather than a full nationwide shutdown.</p><p>As we have done in past years’ posts, we will not only examine the impact that these shutdowns have on Internet traffic, but also analyze routing information and traffic from other Cloudflare services in an effort to better understand how these shutdowns are being implemented.</p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>The Syrian Telecom Company, to their credit, publishes an exam schedule on social media, with the image below <a href="https://www.facebook.com/photo/?fbid=827972736029921&amp;set=a.449047400589125">published to their Facebook page</a>. The English version was created by applying Google Translate to the image. The schedule shows the date &amp; time of each Internet shutdown (“disconnection”), in addition to the subject(s) of that day’s exam(s). In 2024, exams started on May 26, and went through June 13.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9lgp9NIpBSo2RWII1nUmh/1331a833433eda16be39e4fda41d3413/Screenshot-2024-06-20-at-1.00.58-PM.png" />
            
            </figure><p>In Syria, <a href="https://radar.cloudflare.com/as29256">AS29256 (Syrian Telecom)</a> is effectively the Internet, as shown <a href="https://radar.cloudflare.com/routing/as29256">in the table below</a>. While there are a few other <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">autonomous systems</a> (ASNs/ASes) registered in Syria, there are only two that currently announce IP address space to the public Internet. As such, the trends seen at a country level for Syria reflect those seen for AS29256, and this is clearly evident in the traffic graphs below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2XEcIEFQPHIKevCJZkVbaM/3be900a49f17d26e90505a2c77704bc0/unnamed--1--2.png" />
            
            </figure><p>Nationwide Internet shutdowns in Syria began on May 26, taking place for varying multi-hour periods from Sunday to Thursday for three consecutive weeks. The graphs below show Internet traffic from the country, as well as AS29256, dropping to zero during the scheduled shutdowns.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uW4E8cmeiDolFehpOFXaE/113ee0007138f1eccebd2cec87ae2891/image42.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/FjHhDRQjGou5kNsIZNzQp/3cb60e3a92c1142f1c483b942db5afa2/image5-3.png" />
            
            </figure><p>In addition, graphs from the Cloudflare Radar <a href="https://radar.cloudflare.com/routing/">Routing</a> pages for <a href="https://radar.cloudflare.com/routing/sy">Syria</a> and <a href="https://radar.cloudflare.com/routing/as29256">AS29256</a> show the number of IPv4 and IPv6 prefixes being announced country-wide and by AS29256 dropping to at or near zero during each shutdown. This ultimately means that there is no Internet path back to systems (IP addresses) connected to Syrian Telecom. Below, we explore why this is important and problematic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xFTZkPlwMrvEhmn5Tnz42/b7eaffa70e91993663d39a1b5fff9682/image4-4.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NhFb2khiuAhEJ9Fo8QN19/95f146770a68bf63b87726099eff0143/image47.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3QesstGGtNnaKfBMpRO6GL/f1ea77bb39c751aae45de68096426e00/image15-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7yXJYfsxpfK1mwQs5a2ZLw/b7432284cdd28841c22041eb1d4e323a/image30.png" />
            
            </figure><p>As has been <a href="/syria-sudan-algeria-exam-internet-shutdown">observed in the past</a>, the shutdowns in Syria are <a href="https://x.com/DougMadory/status/1138064496008806400">asymmetrical</a>. That is, traffic can exit the country (via AS29256), but there are no paths for responses to return. The impact of this approach is clearly evident in traffic to <a href="https://one.one.one.one/dns/">Cloudflare’s 1.1.1.1 DNS Resolver</a>. We continue to see traffic to the resolver when the shutdowns take place, and in fact, we see the traffic spike during the shutdowns, as the graph below shows.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LSn47S6TsjvoLuDx5fKPU/6cc2d335fcbdf706e26887b84d873824/image49.png" />
            
            </figure><p>If we dig into traffic to 1.1.1.1 by protocol, we can see that it is driven by requests over <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">UDP</a> port 53, the <a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?&amp;page=2">standard port</a> used for DNS requests over UDP and TCP. (Given the request pattern, UDP also appears to be the primary protocol over which we see traffic to the resolver from Syria.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4CDPlKnjVUF6ViD75S2Zwa/e27da9a43911fd9808bbcadbd097477a/image12-1.png" />
            
            </figure><p>If we remove the UDP line from the graph, we see that request volume for DNS over TCP port 53, as well as <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/">DNS over HTTPS (DoH)</a> and <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-tls/">DNS over TLS (DoT)</a>, all drops to zero during the shutdowns.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vWtoXnCHPogP4HfSjm2G5/ea6f8b6fb58f58cff01c70d9ee592f75/image1-18.png" />
            
            </figure><p>Similarly, we can clearly see the shutdowns in HTTP(S) request-based traffic graphs as well, since HTTP(S) is also a TCP-based protocol.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7phbRNoQ9Kg4L1n5F7chjg/572136f844df3879c4486374f5bc5092/image35.png" />
            
            </figure><p>Why do we see this impact? With DNS over UDP, the client simply makes a request to the resolver – no multi-step handshake is involved, as with TCP. So in this case, 1.1.1.1 is receiving these requests, but as shown above, there’s no path for the response to reach the client. Because it hasn’t received a response, the client retries the request, and this flood of retries is manifested as the spike seen in the graphs above.</p><p>However, as we see above, request volume for DNS over TCP, as well as DoH, DoT, and HTTP(S) (which all use TCP), falls to zero during the shutdowns. The lack of a path back to the client means that the <a href="https://www.geeksforgeeks.org/tcp-3-way-handshake-process/">TCP 3-way handshake</a> can’t complete, and thus we don’t see DNS requests over these protocols.</p><p>In looking at 1.1.1.1 Resolver request volume from Syria for popular social media and messaging applications, we can see traffic for facebook.com most closely matches the spikes shown above. Removing facebook.com from the graph, we can also see similar, though more limited, increases for domains used by popular messaging applications WhatsApp, Signal, and Telegram. Facebook and WhatsApp are <a href="https://medialandscapes.org/country/syria/media/social-networks">reportedly</a> the most popular social media and messaging applications in Syria.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nmo0Zbae88h2VZhbRpd8f/85eaf702c9f638ce544d2b485114aa65/image18.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gKoz2uKoz2kVIchd0C2vV/7a2a6107c35d3e1433d7d956e0af9fb6/image33.png" />
            
            </figure><p>Although we have focused on the analysis of traffic to Cloudflare’s DNS resolver, and the patterns seen within that traffic, it is also worth highlighting an interesting pattern observed in traffic to Cloudflare’s <a href="https://www.cloudflare.com/application-services/products/dns/">Authoritative DNS</a> platform. (<a href="https://www.cloudflare.com/en-gb/learning/dns/dns-server-types/">DNS resolvers</a> act as a middleman between clients, such as a laptop or phone, and an authoritative DNS server. <a href="https://www.cloudflare.com/en-gb/learning/dns/dns-server-types/">Authoritative DNS servers</a> contain information specific to the domain names they serve, including IP addresses and other types of records.)</p><p>The graph below shows bits/second traffic from Syria for Cloudflare’s <a href="https://www.cloudflare.com/application-services/products/dns/">authoritative DNS service</a> on June 13. (Similar patterns were observed during the other days when shutdowns occurred, but data volume limits the ability to create a graph showing an extended period of time.) In this graph, we can see that at the start of the shutdown (03:00 UTC), traffic rises sharply, effectively plateaus for the duration of the shutdown, and then returns to normal levels. We believe that the traffic pattern illustrated here could be the result of some local resolvers in Syria having the IP addresses for our authoritative DNS servers cached, and are making requests to them. The increased traffic level could be because they are retrying their queries after not receiving responses, but in a less aggressive fashion than the client applications driving the resolver traffic spikes shown above.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2fOqZ4ORPIY4znUD1HRb70/06d1cc0e24f13bbb23886eedea3a89a0/unnamed-2.png" />
            
            </figure><p>In summary, Syria appears to implement its Internet shutdowns not through filtering, but rather by simply not announcing its IP address space for the duration of the shutdown, thereby preventing any responses from returning to the originating requestor, whether client application, web browser, or local DNS resolver.</p>
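    <p>This asymmetry can be probed directly from a vantage point inside such a shutdown. The minimal sketch below, again using dnspython purely as an illustration (this is not how Radar measures any of this), sends the same DNS query to 1.1.1.1 over UDP and then over TCP: the UDP request would leave the network but time out waiting for a reply that has no route back, while the TCP attempt would fail during the handshake itself.</p>
<pre><code>import dns.exception
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")

# Over UDP, the request is sent, but with no return path the client
# simply times out and retries; those retries are the spikes above.
try:
    dns.query.udp(query, "1.1.1.1", timeout=2)
    print("UDP response received")
except dns.exception.Timeout:
    print("UDP: request sent, but no reply came back")

# Over TCP, the 3-way handshake needs a reply before any DNS data
# flows, so the connection fails and request volume drops to zero.
try:
    dns.query.tcp(query, "1.1.1.1", timeout=2)
    print("TCP response received")
except (dns.exception.Timeout, OSError):
    print("TCP: handshake could not complete")
</code></pre>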
    <div>
      <h3>Iraq</h3>
      <a href="#iraq">
        
      </a>
    </div>
    <p>On May 19, the Iraqi Ministry of Communication <a href="https://moc.gov.iq/?article=767">posted an update</a> that stated (translated) <i>“The Ministry of Communications would like to note that the Internet service will be cut off for two hours during the general exams for intermediate studies, from six in the morning until eight in the morning, based on higher directives and at the request of the Ministry of Education.”</i> The post came nearly a year after the Iraqi Ministry of Communication <a href="https://www.kurdistan24.net/en/story/31453-Iraq%E2%80%99s-communication-ministry-refuses-to-enforce-internet-blackout-for-final-exams">refused a request from the Ministry of Education to shut down the Internet</a> during the baccalaureate exams as part of efforts to prevent cheating. On May 20, the Iraqi Ministry of Education <a href="https://www.facebook.com/Iraq.Ministry.of.Education/posts/pfbid07ny6LazyvGJED37iCmRkk9h9rNPWeEPtANVu8vaL8gknoaBmwgmVZX9a7LkSbhy2l">posted the schedule</a> for the upcoming set of exams to its Facebook page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Wb00a9bDq0VYe1zhqFHAR/22aecee54a71bfd430789e976315b040/Screenshot-2024-06-21-at-11.07.18.png" />
            
            </figure><p>Iraq has a much richer network service provider environment than Syria does, with <a href="https://radar.cloudflare.com/routing/iq#a-ses-registered-in-iraq">over 150</a> <a href="https://www.cloudflare.com/en-gb/learning/network-layer/what-is-an-autonomous-system/">autonomous systems (ASNs)</a> registered in the country and announcing IP address space, compared to just <a href="https://radar.cloudflare.com/routing/sy#a-ses-registered-in-syria">two</a> ASNs (both Syrian Telecom) in Syria announcing IP address space. Although traffic in Iraq is generally concentrated among the larger providers, shutdowns are rarely “complete” at a country level because not every autonomous system (network provider) in the country implements a shutdown. (This is due in part to the autonomous Kurdistan region in the north, which often implements similar shutdowns on their own schedule. Network providers in this region are included in Iraq’s country-level graphs.)</p><p>We can see this in a Cloudflare Radar traffic graph that shows the shutdowns at a country level, where traffic is dropping by around 87% during each multi-hour shutdown. In addition to the five networks also shown here (<a href="https://radar.cloudflare.com/as203214">AS203214 (HulumTele)</a>, <a href="https://radar.cloudflare.com/as199739">AS199739 (Earthlink)</a>, <a href="https://radar.cloudflare.com/as58322">AS58322 (Halasat)</a>, <a href="https://radar.cloudflare.com/as51684">AS51684 (Asiacell)</a>, and <a href="https://radar.cloudflare.com/as59588">AS59588 (Zainas)</a>), further analysis finds more than 30 where we observed a complete loss of traffic during the shutdowns, with a number of them downstream of these providers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YkIihynKuDtI63g32YJ0u/3c6f135f0e890145f3b0be6ba6659553/image45.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6W7vWmG8ivneqIJavrTLCU/1b0248818877d395987fca076af52ce6/image28.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ezNpiALAh32m1LyOng14j/ecb5d055692266f0f9c758976204baae/image38.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bRSpefwBNbcdvMQ2kOkO1/1b072bfd9ffc362923d9a2db06908068/image8-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WqSWExs5iBdfDlGTqjXlK/63c853a1b688e0bd4a328881a2b3c280/image22.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iUNjwfTlp0hz8lqgEnaAV/caf0caa06f14d277ce9a874b0fb9738d/image44.png" />
            
            </figure><p>In contrast to Syria, the changes to announced IP address space during the shutdowns are much less severe in Iraq. Several of the shutdowns are correlated with a drop of ~20-25% in announced IPv4 address space, while a few others saw a drop closer to just 2%.</p>
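<p>For readers curious how an “announced IPv4 address space” metric is derived, the sketch below sums the sizes of a set of announced prefixes taken from a routing table snapshot. It is a simplification (real pipelines also de-overlap covering prefixes), and the prefixes shown are purely illustrative, not actual Iraqi announcements.</p><pre><code>import ipaddress

def announced_ipv4_space(prefixes):
    """Total IPv4 addresses covered by a set of announced prefixes."""
    nets = {ipaddress.ip_network(p) for p in prefixes}
    return sum(net.num_addresses for net in nets)

# Illustrative prefixes only: one /17 withdrawn during a shutdown.
before = announced_ipv4_space(["5.1.0.0/16", "5.2.0.0/17", "37.236.0.0/15"])
after = announced_ipv4_space(["5.1.0.0/16", "37.236.0.0/15"])
print(f"announced space drop: {1 - after / before:.1%}")  # -> 14.3%
</code></pre>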
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/kAYLZYHK3wnfSh5KNfc33/309f01763d1815a38bf55e1aa42e9725/image51.png" />
            
            </figure><p>At an ASN level, the changes in announced address space were mixed – <a href="https://radar.cloudflare.com/routing/as59588">AS59588 (Zainas)</a>, <a href="https://radar.cloudflare.com/routing/as199739">AS199739 (Earthlink)</a>, and <a href="https://radar.cloudflare.com/routing/as51684">AS51684 (Asiacell)</a> experienced a significant loss, while <a href="https://radar.cloudflare.com/routing/as203214">AS203214 (HulumTele)</a> and <a href="https://radar.cloudflare.com/routing/as58322">AS58322 (Halasat)</a> experienced little to no change.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3HoWb2U5b8uBf9aZX5PJAb/e4ae72f055f2f9fd627251769b16d6bd/image13-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1jJpJYqXgDGeb8So4bMWee/34ba179a6e9fff7b311e057134c34d97/image50.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sVOgW0xr4mwXUU3id092u/6c198aeb32b840f1142e84ec06822f79/image39.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5H64bnOria4mzCtW7VEs40/178e8f4e060b3be172c97b762c334207/image20.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6k1oSwvj9pwTyHD9OMesEV/a28168e19e0f21ce771166872153c9a4/image24.png" />
            
</figure><p>Similar to Syria, we can also look at 1.1.1.1 resolver traffic data to better understand how the shutdowns are being implemented. The country-level graphs below show UDP traffic patterns that are not visibly changing, which would suggest that responses from the resolver are, in fact, getting back to the clients. However, this likely isn’t the case: the apparent stability is at least in part an artifact of the graph’s time frame and hourly granularity, as well as the inclusion of resolver traffic from Kurdish network providers (ASNs). The shutdowns are more clearly evident in the DNS-over-TCP and DNS-over-HTTPS graphs below, as well as in the graph for HTTP(S) request traffic (both mobile &amp; desktop), which is also TCP-based. In these graphs, the troughs on days that shutdowns occurred generally dip lower than those on the days that the Internet remained available.</p>
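<p>To see these per-transport differences from inside an affected network, a client-side probe along the following lines (our sketch, using the dnspython library; the DNS-over-HTTPS query additionally requires httpx or requests to be installed) can compare whether queries to the 1.1.1.1 resolver succeed over UDP, TCP, and HTTPS:</p><pre><code>import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")

probes = [
    ("DNS over UDP", lambda: dns.query.udp(query, "1.1.1.1", timeout=5)),
    ("DNS over TCP", lambda: dns.query.tcp(query, "1.1.1.1", timeout=5)),
    ("DNS over HTTPS",
     lambda: dns.query.https(query, "https://cloudflare-dns.com/dns-query", timeout=5)),
]

for label, probe in probes:
    try:
        probe()
        print(f"{label}: OK")
    except Exception as exc:
        # Timeouts or resets on some transports but not others hint at
        # transport-specific blocking rather than a full shutdown.
        print(f"{label}: failed ({exc!r})")
</code></pre>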
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/o1NrRFgAMr5FLKa2j4Ktb/b85d5ccf7a9cd7e560229b58130fc91e/image41.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pD02P65uRmaiL73oLzAm9/44ab4e57a7e90ce367e2fc3c3f9e467a/image3-8.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2AscDvWNoxmwWeTGcKyDds/a36e4278a86f651bb75f3a789bb4840b/image27.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/51pfAyJ519aXX0w6KYCm5I/d055dafda12380a3b3b46e748e198ded/image32.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XlAjHrSgzngJulJRIP36J/b5e8b3058c5441e8dcdff18b347c64e8/image16-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zO7k4idsNJRtohBPYgXhD/676ddfe345adfcab86ea380bdcfa7e54/image43.png" />
            
</figure><p>Looking at authoritative DNS traffic from Iraq during a shutdown (using June 13 as an example day, as above), we see a decline in traffic while the shutdown is in effect.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LQtyB4CbJBlUjw2PrIBjI/76cdc488ae2d8a0cc46c4441f9ac08ee/image34.png" />
            
</figure><p>The decline in authoritative DNS traffic is more evident at an ASN level, as in the graph below for AS203214 (HulumTele), effectively confirming that UDP traffic is not getting through here either.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14fZ29LuL3wsJpTnTVbeku/466dd9542998c77af044647e5e3d49ac/image48.png" />
            
</figure><p>Taken together, the traffic, 1.1.1.1 resolver, and authoritative DNS observations reviewed here suggest that the Internet shutdowns taking place in Iraq are more complex than Syria’s, as both UDP and TCP traffic appear unable to egress from impacted network providers. Because not all impacted network providers show a complete loss of announced IP address space during the shutdowns, Iraq is clearly taking a different approach to disrupting Internet connectivity. Although analysis of our data doesn’t provide a definitive conclusion, there are several likely options, and network providers in the country may be combining more than one. These options revolve around:</p><ol><li><p><b>IP:</b> Block packets from reaching IP addresses. This may be done by withdrawing prefix announcements from the routing table (a brute-force approach) or by blocking access to specific IP addresses, such as those associated with a specific application or service (a more surgical approach).</p></li><li><p><b>Connection:</b> Block connections based on <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a>/HTTP headers, or other application data. If a network or on-path device is able to observe the server name (or other relevant headers/data), then the connection can be terminated. (A sketch of how such SNI-based interference can be probed appears below.)</p></li><li><p><b>DNS:</b> Operators of private or ‘internal’ DNS resolvers, offered by ISPs and enterprise environments for use by their own users, can apply content restrictions, blocking the resolution of hostnames associated with websites and other applications.</p></li></ol><p>The consequences of these options are covered in more detail <a href="/consequences-of-ip-blocking">in a blog post</a>. In addition, applying them at common network chokepoints, such as <a href="https://iraqixp.com/">AS212330 (IRAQIXP)</a> or <a href="https://radar.cloudflare.com/routing/as208293">AS208293</a> (<a href="https://alsalam.gov.iq/">AlSalam State Company</a>, associated with the Iraqi Ministry of Communications), can disrupt connectivity at multiple downstream ISPs, without those providers necessarily having to take action themselves.</p>
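<p>As a loose sketch of how SNI-based interference (option 2 above) can be probed from a client, the following Python snippet attempts a TLS handshake against a reachable server while presenting a chosen server name. The target IP and server names below are placeholders, and a reset during the handshake is only a hint of on-path tampering, not proof.</p><pre><code>import socket
import ssl

def probe_sni(host, sni, port=443, timeout=5):
    """Attempt a TLS handshake to `host`, presenting `sni` as the server name."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # we only care about reachability,
    ctx.verify_mode = ssl.CERT_NONE     # not certificate validity
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni) as tls:
                return f"handshake OK ({tls.version()})"
    except ConnectionResetError:
        return "reset during handshake (possible middlebox RST)"
    except socket.timeout:
        return "timeout (possible packet drop)"
    except ssl.SSLError as exc:
        return f"TLS error ({exc})"

# Placeholder target: compare a benign name with a suspected-blocked one.
print(probe_sni("203.0.113.10", "example.com"))
print(probe_sni("203.0.113.10", "blocked.example"))
</code></pre>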
    <div>
      <h3>Algeria</h3>
      <a href="#algeria">
        
      </a>
    </div>
<p>As we noted in blog posts in <a href="/syria-sudan-algeria-exam-internet-shutdown">2022</a> and <a href="/exam-internet-shutdowns-iraq-algeria">2023</a>, Algeria has a history of disrupting Internet connectivity during Baccalaureate exams. This has been taking place since <a href="https://www.bbc.com/news/world-africa-44557028">2018</a>, following widespread cheating in 2016 that saw questions leaked online both before and during tests. On March 13, the Algerian Ministry of Education <a href="https://www.aps.dz/en/health-science-technology/51394-ministry-of-education-announces-dates-for-middle-school-and-high-school-final-exams">announced</a> that the Baccalaureate exams would be held June 9-13. As expected, Internet disruptions were observed both country-wide and at a network level. Similar to previous years, two disruptions were observed each day. The first began at 08:00 local time (07:00 UTC) and, except for June 9, lasted three hours, ending at 11:00 local time (10:00 UTC). (On June 9, it lasted until 13:00 local time (12:00 UTC).) The second began between 14:00-14:30 local time (13:00-13:30 UTC) and lasted until 16:00-17:00 local time (15:00-16:00 UTC) – the end time varied by day.</p><p>As seen in the graphs below, the impact on traffic was fairly modest, suggesting that wide-scale Internet shutdowns similar to those seen in Syria were not being implemented. While this is in line with the Minister of Education’s 2023 <a href="https://x.com/TheAlgiersPost/status/1535917324485656576">pronouncement</a> that there would be no Internet shutdown on exam days, <a href="https://x.com/search?f=live&amp;q=algeria%20exam%20until%3A2024-06-13%20since%3A2024-06-09&amp;src=typed_query">a number of posts on X</a> complained of broader cuts to Internet connectivity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5b4bFM6GwqaslpXJNGxCSu/063e3729ef3197b88dfe367cc91fc0f4/image14-2.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5I0HpVYB36DNHhycNbRUjp/ecc17427d720b5ca90ed554946924dfc/image17.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1pOO1TV09EyRfv5hRlV3sX/950600eac0cd068804f2ce990e93e112/image25.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1brvTsdkG8vlUDB7CNQ6vL/2c4ba312f499adc4893fb8afa5378ca6/image2-6.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4mOPVdahSJQ0n6gzs7cGSL/86fa9d9e06718da3fa7a291c055773b3/image37.png" />
            
            </figure><p>Similar to the analysis above of the shutdowns in Syria and Iraq, we can also review changes to announced IP address space to better understand how connectivity was being disrupted. In this case, as the graphs below show, no meaningful changes to announced IPv4 address space were observed during the days the Baccalaureate exams were given. As such, the observed drops in traffic were not caused by routing changes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7l6qUEMRcrdvShbzdd8y5I/5696832adbe82ab92cef0265df11bdc9/image52.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gEPeuP0l9czg2nipM8Jjb/8f49d22da1c031d1db5ed2577ec8462f/image21.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6m0XeCwN3lYLot6AutTNaS/7b0034f1106d4e99eaaec28b1cd8e9a5/image6-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3oiRlajqzZGBY5Io0dbXkj/234f0120c0002f7ffcf69b6a8bbe0038/image40.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Kv6LndNtCYpsJUhUPF8UT/ae8359550b84298abd5d0994b475cd76/AD_4nXdmsCY4R4OwP5lh6E6PgQdXYDxwUTWl8o5A-sRdNCSBmRNe0Zq7-OlWczYH8tr8q75P8WLqOsd3Po-03gykFfJDJNgqXcOkX4i3KuVp73q1GW7aLXeTNAzkK7yU" />
            
            </figure><p>In the HTTP(S) request traffic graph below, the twice-daily disruptions are highlighted, with the morning one appearing as a nominal drop in traffic, and the afternoon one causing a more severe decline. (The graph shows request traffic aggregated at a country level, but the graphs for the ASNs listed above also show similar patterns.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7afFEaI4r2fREfV0o54f2/68f9fe97d63624a99e0868d0de5df096/image19.png" />
            
</figure><p>In addition, similar patterns are observed in 1.1.1.1 resolver traffic at a country and ASN level, but only for DNS over TCP, DNS over TLS, and DNS over HTTPS, all of which leverage TCP. In the graph below showing only resolver traffic over UDP, there’s no clear evidence of disruptions. However, in the graph that shows resolver traffic over HTTPS, TCP, and TLS, a slight perturbation is visible in the morning, as traffic begins to rise for the day, and a sharper decrease is visible in the afternoon, with both disruptions aligning with the twice-daily drops in traffic discussed above.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ydmjTs8hCqfZ4W969jSSy/4120e9a30b2c0c2015c6f097c1f8ee8b/image31.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qSu9orpg29sE5T2cawABq/1ec368088682d0f3d34f1287b3ed7d03/image7-2.png" />
            
</figure><p>These observations support the conjecture that the Algerian government is likely taking a more nuanced approach to restricting access to content, interfering in some fashion with TCP-based traffic. The conjecture is also supported by an internal tool, based on research co-designed and developed by members of the <a href="https://research.cloudflare.com/">Cloudflare Research</a> team, that helps us understand connection tampering. We will be launching insights into TCP connection tampering on Cloudflare Radar later in 2024; in the meantime, technical details can be found in the peer-reviewed paper titled <a href="https://research.cloudflare.com/publications/SundaraRaman2023/">Global, Passive Detection of Connection Tampering</a>.</p><p>The graph below, taken from the internal tool, highlights observed TCP connection tampering in connections from Algeria during the week that the Baccalaureate exams took place. While some baseline level of post-ACK and post-PSH tampering is consistently visible, we see significant increases in post-ACK tampering twice a day during the exam period, at times that align with the shifts in traffic discussed above. Technical descriptions of post-ACK and post-PSH tampering can be found in the <a href="https://developers.cloudflare.com/radar/glossary/#tcp-resets-and-timeouts">Cloudflare Radar glossary</a>, but in short, post-ACK tampering means that an established TCP connection to Cloudflare’s server has been abruptly ended by one or more RST packets <i>before</i> the server sees any data packets. Although clients do use RSTs, clients are more likely to close connections with a FIN (as specified by the <a href="https://datatracker.ietf.org/doc/html/rfc9293">RFC</a>). The same RST pattern can also be produced by a middlebox that (i) sees a data packet, (ii) drops it, and then (iii) sends an RST to the server to force it to close the connection (and very likely another RST to the client for the same reason). Post-PSH tampering means that something on the path, like a middlebox, (i) saw something it didn’t like on an established connection, (ii) permitted the data to pass, but then (iii) sent RSTs to force the endpoints to close the connection.</p>
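<p>To make the two signatures concrete, here is a toy classifier (our illustration, not the detection logic of the internal tool or the paper) over the sequence of TCP flags a server observes on a single connection:</p><pre><code>def classify_reset(flags):
    """Classify a server-side TCP flag sequence, oldest first,
    e.g. ["SYN", "ACK", "RST"]. Toy labels for illustration only."""
    if "RST" not in flags:
        return "no reset observed"
    before_rst = flags[:flags.index("RST")]
    if "PSH" in before_rst:
        # Data reached the server before the reset.
        return "post-PSH: reset after data was seen"
    if "ACK" in before_rst:
        # Handshake completed, but no data arrived before the reset.
        return "post-ACK: reset before any data packets"
    return "reset before the handshake completed"

print(classify_reset(["SYN", "ACK", "RST"]))         # post-ACK
print(classify_reset(["SYN", "ACK", "PSH", "RST"]))  # post-PSH
</code></pre>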
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/75i4amnWWtCkUaMhftCllH/a612735891aef71331339b9164171d48/image11-1.png" />
            
</figure><p>Looking beyond Cloudflare-sourced data, aggregated test results from the <a href="https://ooni.org/">Open Observatory of Network Interference (OONI)</a> also show evidence of anomalous behavior. <a href="https://ooni.org/install/">OONI Probe</a>, a mobile and desktop app, can test for potential blocking of websites, instant messaging apps, and censorship circumvention tools. Examining test results from users in Algeria for the popular messaging platforms <a href="https://explorer.ooni.org/chart/mat?probe_cc=DZ&amp;since=2024-06-01&amp;until=2024-06-15&amp;time_grain=day&amp;axis_x=measurement_start_day&amp;test_name=whatsapp">WhatsApp</a>, <a href="https://explorer.ooni.org/chart/mat?probe_cc=DZ&amp;since=2024-06-01&amp;until=2024-06-15&amp;time_grain=day&amp;axis_x=measurement_start_day&amp;test_name=telegram">Telegram</a>, <a href="https://explorer.ooni.org/chart/mat?probe_cc=DZ&amp;since=2024-06-01&amp;until=2024-06-15&amp;time_grain=day&amp;axis_x=measurement_start_day&amp;test_name=signal">Signal</a>, and <a href="https://explorer.ooni.org/chart/mat?probe_cc=DZ&amp;since=2024-06-01&amp;until=2024-06-15&amp;time_grain=day&amp;axis_x=measurement_start_day&amp;test_name=facebook_messenger">Facebook Messenger</a> over the first two weeks of June, we clearly see test results marked as “Anomaly” starting on June 9. (OONI defines “Anomaly” results as “<i>Measurements that provided signs of potential blocking</i>”.) OONI <a href="https://ooni.org/nettest/tor/">Tor test</a> <a href="https://explorer.ooni.org/chart/mat?probe_cc=DZ&amp;since=2024-06-01&amp;until=2024-06-20&amp;time_grain=day&amp;axis_x=measurement_start_day&amp;test_name=tor">results</a> show a similar “Anomaly” pattern. Anomalous traffic patterns are also visible for <a href="https://transparencyreport.google.com/traffic/overview?hl=en&amp;fraction_traffic=start:1717200000000;product:19;region:DZ;end:1718495999999&amp;lu=fraction_traffic">Google Web Search</a>, <a href="https://transparencyreport.google.com/traffic/overview?hl=en&amp;fraction_traffic=start:1717200000000;product:21;region:DZ;end:1718495999999&amp;lu=fraction_traffic">YouTube</a>, and <a href="https://transparencyreport.google.com/traffic/overview?hl=en&amp;fraction_traffic=start:1717200000000;product:6;region:DZ;end:1718495999999&amp;lu=fraction_traffic">Gmail</a>.</p><p>Although the analysis of these observations and data sets doesn’t provide specific details about exactly how the observed Internet disruptions are being implemented, it strongly supports the supposition that network providers in Algeria are, in some fashion, interfering with TCP connections, rather than blocking them outright or shutting down their networks completely. Given that popular messaging platforms, Google properties, Cloudflare’s 1.1.1.1 DNS resolver, and some number of Cloudflare customer sites all appear to be impacted, it suggests that a list of hostnames is being targeted for disruption/interference, <a href="/consequences-of-ip-blocking">either by the SNI or the destination IP address</a>.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
<p>Perhaps recognizing the broad negative impact of brute-force nationwide Internet shutdowns as a response to cheating on exams, some governments appear to be turning to more nuanced techniques, such as content blocking or connection tampering. However, because these techniques are also applied broadly, they can be nearly as disruptive as a full nationwide Internet shutdown. The causes of full shutdowns, such as those seen in Syria, are also easier to diagnose than the disruptions to connectivity seen in Iraq and Algeria, which rely on approaches that are hard to identify definitively from the outside.</p><p>Visit <a href="https://radar.cloudflare.com/">Cloudflare Radar</a> for additional insights around these, and other, Internet disruptions. Follow us on social media at <a href="https://x.com/CloudflareRadar">@CloudflareRadar</a> (X), <a href="https://noc.social/@cloudflareradar">noc.social/@cloudflareradar</a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com">radar.cloudflare.com</a> (Bluesky), or contact us via email.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">7I3aMukuPURTotjQ1Njiei</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Measuring the Internet's pulse: trending domains now on Cloudflare Radar]]></title>
            <link>https://blog.cloudflare.com/radar-trending-domains/</link>
            <pubDate>Mon, 24 Jul 2023 13:00:27 GMT</pubDate>
            <description><![CDATA[ Today, we are improving our Domain Rankings page and adding Trending Domains lists ]]></description>
<content:encoded><![CDATA[ <p></p><p>In 2022, we <a href="/radar-domain-rankings/">launched</a> the Radar Domain Rankings, with top lists of the most popular domains based on how people use the Internet globally. The lists are calculated using a machine learning model over aggregated <a href="https://1.1.1.1/">1.1.1.1</a> resolver data that is anonymized in accordance with our <a href="https://developers.cloudflare.com/1.1.1.1/privacy/public-dns-resolver/">privacy commitments</a>. While the <a href="https://radar.cloudflare.com/domains">top 100</a> list is updated daily for each location, the top entries are typically stable over time, with big names such as Google, Facebook, Apple, Microsoft, and TikTok leading, and these global names appear in the lists for the majority of locations.</p><p>Today, we are improving our <a href="https://radar.cloudflare.com/domains">Domain Rankings</a> page and adding Trending Domains lists. The new data shows which domains are currently experiencing an increase in popularity. While the top popular domains aim to capture domains of broad appeal that are of interest to many Internet users, the trending domains highlight those generating a surge in interest.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/DGYSP79QBrRnCHw7xsLFz/2a7c068c52febff98427402a0bb854dc/image4-4.png" />
            
            </figure>
    <div>
      <h3>How we generate the Trending Domains</h3>
      <a href="#how-we-generate-the-trending-domains">
        
      </a>
    </div>
<p>When we started looking at the best way to generate a list of trending domains, we needed to answer the following questions:</p><ul><li><p>What type of popularity changes do we want to capture?</p></li><li><p>What should we use as a baseline to calculate the change?</p></li><li><p>And how do we quantify it?</p></li></ul><p>We soon realized that we needed two lists: one reflecting sudden increased interest related to a particular event or topic, showing spikes in popularity for domains that jump in the ranking from one day to the next, and another reflecting steady growth in popularity, showing domains that are increasing their user base over a longer period.</p><p>For this reason, we are launching both the Trending Today and Trending This Week top 10 lists, to capture these two different types of popularity increase.</p><p>To select the baseline for calculating the increase in popularity, we analyzed the volatility of the <a href="/radar-domain-rankings/">Radar Domain Ranking</a> list for different top list sizes. The advantage of starting with the Radar Ranking lists is that they <b>already incorporate a good popularity metric that quantifies the estimated relative size of the user population that accesses a domain over some period of time</b>. You can read more about how we define popularity in our “Goodbye, Alexa. Hello, Cloudflare Radar Domain Rankings” <a href="/radar-domain-rankings/">blog</a>.</p><p>As expected, smaller list sizes were more stable: the percentage of domains in the top 100 that changed rank from one day to the next was much lower than the percentage that changed in the top 10,000. Hence, to have a dynamic daily list of trending domains, we had to look beyond the top 100 most popular domains.</p><p>However, we did not want to go all the way to the long tail of the list, as we already know that the ranks there are based on “significantly smaller and hence less reliable numbers” (see the paper "<a href="https://arxiv.org/abs/1805.11506">A Long Way to the Top: Significance, Structure, and Stability of Internet Top Lists</a>"). Instead, we selected an appropriate list size for each location, based on the distribution of the number of DNS queries per domain. For example, for the Worldwide trending list we analyzed the top 20,000 most popular domains, for Brazil we looked at the top 10,000, for Angola the top 5,000, and for the Faroe Islands the top 500.</p>
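<p>The volatility measure behind this choice can be as simple as the day-over-day overlap of two top-N lists. A minimal sketch, with our own naming:</p><pre><code>def daily_overlap(top_today, top_yesterday):
    """Fraction of today's top-N domains also present yesterday;
    lower overlap means a more volatile (and more dynamic) list."""
    return len(set(top_today).intersection(top_yesterday)) / len(top_today)

# Hypothetical lists: 3 of today's top 4 domains were also there yesterday.
print(daily_overlap(
    ["google.com", "facebook.com", "apple.com", "nba.com"],
    ["google.com", "facebook.com", "apple.com", "microsoft.com"],
))  # -> 0.75
</code></pre>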
    <div>
      <h3>Trending Today</h3>
      <a href="#trending-today">
        
      </a>
    </div>
    <p>We then evaluated how much the domains change rank from one day to the next.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5JbwD8sP5PhNY6dLkxPXIA/8374ca5a59f39232ffdb4bc96ba1fcde/image2-13.png" />
            
</figure><p>We saw that, on average, the biggest day-to-day changes in the top lists happen from Friday to Saturday and from Sunday to Monday, so on Saturdays and Mondays the lists have the least overlap with those of the previous day. We also compared rankings with those of the same weekday one week earlier (say, one Monday to the next) and saw that, on average, a Monday's rankings overlap more with the previous Monday's than with Sunday's. From this we decided that, to capture domains trending due to the weekend effect, we needed to compare a domain's daily rank to the ranks of the previous day(s), not to those of the corresponding weekday.</p><p>However, we also did not want to label as trending those domains that oscillate sharply in the rankings, jumping up and down from one day to the next and showing up as trending every few days. So we could not simply compare the daily rank with the rank from the day before. Instead, as a compromise between capturing the most recent trends, including weekend trends, and filtering out domains whose rankings oscillate over short periods, we decided to compare a domain's daily rank with its best rank of the previous four days.</p><p>To calculate the increase in popularity, we then simply compute the percentage change of the current rank relative to the best rank of the previous four days.</p>
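<p>In code, the Trending Today score reduces to a one-liner. The sketch below is our reading of the description above, not Cloudflare's production implementation:</p><pre><code>def trending_today_score(today_rank, prior_ranks):
    """Percentage improvement of today's rank over the best (lowest)
    rank of the previous four days; higher scores trend harder."""
    best_prior = min(prior_ranks[-4:])
    return (best_prior - today_rank) / best_prior

# A domain whose best recent rank was 5,000 jumping to 600 today:
print(trending_today_score(600, [9200, 7400, 5000, 6100]))  # -> 0.88
</code></pre>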
    <div>
      <h3>Trending This Week</h3>
      <a href="#trending-this-week">
        
      </a>
    </div>
<p>To identify the domains steadily growing over the week, we used a slightly different approach.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7arHPbJLnmUh0gwrLY3L6v/94e7f5174b5cc362b4d5cee687025138/image3-3.png" />
            
</figure><p>We want to highlight domains that keep improving their rank day by day, especially those that have been trending most strongly in the most recent days. Therefore, we decided not to directly compare the current rank with the best rank during the previous week. Instead, we looked at the weighted average per-day rank improvement over the best rank of the previous six days, with more recent days given more weight.</p>
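<p>One plausible reading of that approach, sketched in Python (the weights are illustrative and this is not Cloudflare's production code):</p><pre><code>def trending_week_score(ranks, weights=(1, 2, 3, 4, 5, 6)):
    """`ranks` holds a domain's daily ranks over the last seven days,
    oldest first. Average the day-over-day rank improvements, weighting
    recent days more heavily, so steady recent climbers score highest."""
    improvements = [(prev - cur) / prev for prev, cur in zip(ranks, ranks[1:])]
    return sum(w * i for w, i in zip(weights, improvements)) / sum(weights)

# A domain climbing steadily from rank 900 to 320 over a week:
print(trending_week_score([900, 780, 640, 540, 450, 380, 320]))
</code></pre>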
    <div>
      <h3>Example trending domains</h3>
      <a href="#example-trending-domains">
        
      </a>
    </div>
<p>What do these lists look like in practice? We compiled the lists for the eventful days of June 21 to 24.</p><p>On June 22, nba.com was trending in the 28 locations shown in the table below: the United States, as expected, but also Austria, Australia, and Japan, to name a few, reflecting the interest in the <a href="https://en.wikipedia.org/wiki/2023_NBA_draft">NBA Draft 2023</a>.</p><p><b>Trending Today</b> data from Friday, June 23, 2023:</p>
<table>
<thead>
  <tr>
    <th><span>Location</span></th>
    <th><span>Trending rank</span></th>
    <th><span>Domain</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Albania</span></td>
    <td><span>5</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Argentina</span></td>
    <td><span>9</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Australia</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Austria</span></td>
    <td><span>9</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Belgium</span></td>
    <td><span>5</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Canada</span></td>
    <td><span>5</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Chile</span></td>
    <td><span>6</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Colombia</span></td>
    <td><span>3</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Dominican Republic</span></td>
    <td><span>5</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Greece</span></td>
    <td><span>2</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Honduras</span></td>
    <td><span>6</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Hong Kong</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>India</span></td>
    <td><span>7</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Indonesia</span></td>
    <td><span>4</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Ireland</span></td>
    <td><span>3</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Japan</span></td>
    <td><span>9</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Mexico</span></td>
    <td><span>2</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>New Zealand</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Norway</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Philippines</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Poland</span></td>
    <td><span>9</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Serbia</span></td>
    <td><span>2</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>South Korea</span></td>
    <td><span>3</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Taiwan</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Thailand</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Ukraine</span></td>
    <td><span>1</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>United States</span></td>
    <td><span>6</span></td>
    <td><span>nba.com</span></td>
  </tr>
  <tr>
    <td><span>Venezuela</span></td>
    <td><span>4</span></td>
    <td><span>nba.com</span></td>
  </tr>
</tbody>
</table><p>Two domains trending in multiple locations on Saturday, June 24, were rt.com, a Russian news site in English, and liveuamap.com, a site with an interactive map of <a href="https://radar.cloudflare.com/ua">Ukraine</a>. These were probably effects of the events related to the Wagner group on June 23 and 24. Related to the same events, the domain jetphotos.com was trending that day in Russia, Norway, and Albania.</p><p><b>Trending Today</b> data from Saturday, June 24, 2023:</p>
<table>
<thead>
  <tr>
    <th><span>Location</span></th>
    <th><span>Trending rank</span></th>
    <th><span>Domain</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Armenia</span></td>
    <td><span>4</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Australia</span></td>
    <td><span>5</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Belgium</span></td>
    <td><span>2</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Bulgaria</span></td>
    <td><span>9</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Canada</span></td>
    <td><span>6</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Denmark</span></td>
    <td><span>6</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Greece</span></td>
    <td><span>6</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Italy</span></td>
    <td><span>2</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Kazakhstan</span></td>
    <td><span>8</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Lebanon</span></td>
    <td><span>4</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Netherlands</span></td>
    <td><span>8</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Papua New Guinea</span></td>
    <td><span>9</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Singapore</span></td>
    <td><span>2</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Spain</span></td>
    <td><span>6</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Turkey</span></td>
    <td><span>4</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>United Kingdom</span></td>
    <td><span>5</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>United States</span></td>
    <td><span>3</span></td>
    <td><span>rt.com</span></td>
  </tr>
  <tr>
    <td><span>Uzbekistan</span></td>
    <td><span>2</span></td>
    <td><span>rt.com</span></td>
  </tr>
</tbody>
</table><p>Other domains trending in various locations on Friday and Saturday were Gaming and Video Streaming domains such as roblox.com, twitch.tv, and callofduty.com, showing increased interest in gaming as the weekend approached.</p><p>Yet another interesting weekend effect was the presence of five weather forecast sites among the top 10 trending sites on Friday in <a href="https://www.thedubrovniktimes.com/lifestyle/opinion/item/15049-from-guaranteed-sunshine-to-unexpected-showers-dubrovnik-s-changing-summer-weather-raises-questions-and-points-to-global-warming">Croatia</a>, reflecting preoccupation with summer weekend plans.</p><p><b>Trending Today in Croatia</b> (data from Friday, June 23, 2023)</p>
<table>
<thead>
  <tr>
    <th><span>Trending rank</span></th>
    <th><span>Domain</span></th>
    <th><span>Category</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>1</span></td>
    <td><span>lightningmaps.org</span></td>
    <td><span>Weather; Education</span></td>
  </tr>
  <tr>
    <td><span>2</span></td>
    <td><span>freemeteo.com.hr</span></td>
    <td><span>Weather</span></td>
  </tr>
  <tr>
    <td><span>3</span></td>
    <td><span>Vrijeme.hr</span><br /><span>(Croatian Meteorological and Hydrological Service)</span></td>
    <td><span>Politics, Advocacy, and Government-Related</span></td>
  </tr>
  <tr>
    <td><span>4</span></td>
    <td><span>arso.gov.si</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>5</span></td>
    <td><span>rain-alarm.com</span></td>
    <td><span>Weather; News &amp; Media</span></td>
  </tr>
  <tr>
    <td><span>6</span></td>
    <td><span>sorbs.net</span></td>
    <td><span>Information Security</span></td>
  </tr>
  <tr>
    <td><span>7</span></td>
    <td><span>neverin.hr</span></td>
    <td><span>Information Technology</span></td>
  </tr>
  <tr>
    <td><span>8</span></td>
    <td><span>meteo.hr</span><br /><span>(Croatian Meteorological and Hydrological Service)</span></td>
    <td><span>Business</span></td>
  </tr>
  <tr>
    <td><span>9</span></td>
    <td><span>gamespot.com</span></td>
    <td><span>Gaming; Video Streaming</span></td>
  </tr>
  <tr>
    <td><span>10</span></td>
    <td><span>grad.hr</span></td>
    <td><span>Business</span></td>
  </tr>
</tbody>
</table><p>These were all examples of <b>daily trending domains</b>, but which domains steadily grew in popularity that week?</p><p>In multiple countries, travel sites such as booking.com, rentcars.com, and amadeus.com were trending that week, as many people were making their summer vacation plans. Weather forecasting, specifically the windy.com domain, was also trending the whole week in locations such as the Dominican Republic, Saint Lucia, and Reunion, which was not surprising as the hurricane season had begun.</p><p><b>Trending This Week</b> (week of June 17-23, 2023)</p>
<table>
<thead>
  <tr>
    <th><span>Dominican Republic</span></th>
    <th><span>Reunion</span></th>
    <th><span>Saint Lucia</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>cecomsa.com</span></td>
    <td><span>atera.com </span></td>
    <td><span>adition.com</span></td>
  </tr>
  <tr>
    <td><span>blur.io</span></td>
    <td><span>sharethis.com</span></td>
    <td><span>windy.com</span></td>
  </tr>
  <tr>
    <td><span>pxfuel.com</span></td>
    <td><span>windy.com</span></td>
    <td><span>bbc.co.uk</span></td>
  </tr>
  <tr>
    <td><span>windy.com</span></td>
    <td><span>baidu.com</span></td>
    <td><span>ampproject.org</span></td>
  </tr>
  <tr>
    <td><span>mihoyo.com</span></td>
    <td><span>inmobi.com</span></td>
    <td><span>aniview.com</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Final words</h3>
      <a href="#final-words">
        
      </a>
    </div>
<p>Both the <b>Trending Today</b> and <b>Trending This Week</b> top 10 lists are available on Radar starting today, as well as via the <a href="https://developers.cloudflare.com/api/operations/radar-get-ranking-top-domains">Radar API</a>. Feel free to explore them and see what is trending on the Internet.</p>
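<p>For example, here is a quick sketch of pulling a Trending Today list via the Radar API. The endpoint follows the linked API documentation, but the rankingType value and the response field names are assumptions to verify against those docs:</p><pre><code>import requests

resp = requests.get(
    "https://api.cloudflare.com/client/v4/radar/ranking/top",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},  # placeholder token
    params={"rankingType": "TRENDING_RISE", "location": "HR", "limit": 10},
)
resp.raise_for_status()

# "top_0" is the default list name in ranking responses (verify in docs).
for entry in resp.json()["result"]["top_0"]:
    print(entry["rank"], entry["domain"])
</code></pre>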
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Le8JQl3OIy2ut6ynfnl4P/c0af2b1c7b13b7af990621f017e1560b/image1-10.png" />
            
</figure><p>Visit <a href="https://radar.cloudflare.com/">Cloudflare Radar</a> for additional insights around Internet disruptions, routing issues, Internet traffic trends, attacks, Internet quality, and more. Follow us on social media at <a href="https://twitter.com/CloudflareRadar">@CloudflareRadar</a> (Twitter), <a href="https://noc.social/@cloudflareradar">noc.social/@cloudflareradar</a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com">radar.cloudflare.com</a> (Bluesky), or contact us via e-mail.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Domain Rankings]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">24xPOTIXpDpcyt6C9L0cy6</guid>
            <dc:creator>Sabina Zejnilovic</dc:creator>
        </item>
    </channel>
</rss>