
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 12 Apr 2026 18:12:50 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare outage on February 20, 2026]]></title>
            <link>https://blog.cloudflare.com/cloudflare-outage-february-20-2026/</link>
            <pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare suffered a service outage on February 20, 2026. A subset of customers who use Cloudflare’s Bring Your Own IP (BYOIP) service saw their routes to the Internet withdrawn via Border Gateway Protocol (BGP). ]]></description>
            <content:encoded><![CDATA[ <p>On February 20, 2026, at 17:48 UTC, Cloudflare experienced a service outage when a subset of customers who use Cloudflare’s Bring Your Own IP (BYOIP) service saw their routes to the Internet withdrawn via Border Gateway Protocol (BGP).</p><p>The issue was not caused, directly or indirectly, by a cyberattack or malicious activity of any kind. This issue was caused by a change that Cloudflare made to how our network manages IP addresses onboarded through the BYOIP pipeline. This change caused Cloudflare to unintentionally withdraw customer prefixes.</p><p>For some BYOIP customers, this resulted in their services and applications being unreachable from the Internet, causing timeouts and failures to connect across their Cloudflare deployments that used BYOIP. The website for Cloudflare’s recursive DNS resolver (1.1.1.1) saw 403 errors as well. The total duration of the incident was 6 hours and 7 minutes with most of that time spent restoring prefix configurations to their state prior to the change.</p><p>When we began to observe failures, Cloudflare engineers reverted the change and prefixes stopped being withdrawn. However, before engineers were able to revert the change, ~1,100 BYOIP prefixes were withdrawn from the Cloudflare network. Some customers were able to restore their own service by using the Cloudflare dashboard to re-advertise their IP addresses. We resolved the incident when we restored all prefix configurations.</p><p>We are sorry for the impact to our customers. We let you down today. This post is an in-depth recounting of exactly what happened and which systems and processes failed. We will also outline the steps we are taking to prevent outages like this from happening again.</p>
    <div>
      <h2>How did the outage impact customers?</h2>
      <a href="#how-did-the-outage-impact-customers">
        
      </a>
    </div>
    <p>This graph shows the number of prefixes advertised by Cloudflare to a BGP neighbor during the incident. It correlates with impact, as prefixes that were not advertised were unreachable on the Internet:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QnazHN20Gcf3vLH5r95Cd/c8f42e90f266dd3daeaa308945507024/BLOG-3193_2.png" />
          </figure><p>Of the 6,500 total prefixes advertised to this peer, 4,306 were BYOIP prefixes. These BYOIP prefixes are advertised to every peer and represent all the BYOIP prefixes we advertise globally.</p><p>During the incident, 1,100 of the 6,500 total prefixes were withdrawn between 17:56 and 18:46 UTC, meaning roughly 25% of the 4,306 BYOIP prefixes were unintentionally withdrawn. We detected the impact on one.one.one.one and reverted the offending change before more prefixes were affected. At 19:19 UTC, we published guidance to customers that they would be able to self-remediate this incident by going to the Cloudflare dashboard and re-advertising their prefixes.</p><p>Cloudflare was able to revert many of the advertisement changes around 20:20 UTC, which restored 800 prefixes. There were still ~300 prefixes that could not be remediated through the dashboard because the service configurations for those prefixes had been removed from the edge due to a software bug. These prefixes were manually restored by Cloudflare engineers at 23:03 UTC.</p><p>This incident did not impact all BYOIP customers because the configuration change was applied iteratively rather than instantaneously across all BYOIP customers. Once the configuration change was revealed to be causing impact, it was reverted before all customers were affected.</p><p>The impacted BYOIP customers first experienced a behavior called <a href="https://blog.cloudflare.com/going-bgp-zombie-hunting/"><u>BGP Path Hunting</u></a>. In this state, end user connections traverse networks trying to find a route to the destination IP. This behavior persists until the connection that was opened times out and fails. Until the prefix is advertised somewhere, customers will continue to see this failure mode. This loop-until-failure scenario affected any product that uses BYOIP for advertisement to the Internet. Additionally, visitors to one.one.one.one, the website for Cloudflare’s recursive DNS resolver, were met with HTTP 403 errors and an “Edge IP Restricted” error message. DNS resolution over the 1.1.1.1 Public Resolver, including DNS over HTTPS, was not affected. A full breakdown of the services impacted is below.</p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Service/Product</span></th>
    <th><span>Impact Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Core CDN and Security Services</span></td>
    <td><span>Traffic was not attracted to Cloudflare, and users connecting to websites advertised on those ranges would have seen failures to connect</span></td>
  </tr>
  <tr>
    <td><span>Spectrum</span></td>
    <td><span>Spectrum apps on BYOIP failed to proxy traffic due to traffic not being attracted to Cloudflare</span></td>
  </tr>
  <tr>
    <td><span>Dedicated Egress</span></td>
    <td><span>Customers using Gateway Dedicated Egress or Dedicated IPs for CDN egress on BYOIP ranges would not have been able to send traffic out to their destinations</span></td>
  </tr>
  <tr>
    <td><span>Magic Transit</span></td>
    <td><span>Prefixes protected by Magic Transit would not have been advertised on the Internet, and end users connecting to applications behind them would have seen connection timeouts and failures</span></td>
  </tr>
</tbody></table></div><p>There was also a set of customers who were unable to restore service by toggling the prefixes on the Cloudflare dashboard. As engineers began reannouncing prefixes to restore service, these customers may have seen increased latency and failures despite their IP addresses being advertised. This was because the addressing settings for some users were removed from edge servers due to an issue in our own software, and the state had to be propagated back to the edge.</p><p>We’re going to get into what exactly broke in our addressing system, but to do that we need to cover a quick primer on the Addressing API, which is the underlying source of truth for customer IP addresses at Cloudflare.</p>
    <div>
      <h2>Cloudflare’s Addressing API</h2>
      <a href="#cloudflares-addressing-api">
        
      </a>
    </div>
    <p>The Addressing API is an authoritative dataset of the addresses present on the Cloudflare network. Any change to that dataset is immediately reflected in Cloudflare's global network. While we are in the process of improving how these systems roll out changes as a part of <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>Code Orange: Fail Small</u></a>, today customers can configure their IP addresses by interacting with public-facing APIs which configure a set of databases that trigger operational workflows propagating the changes to Cloudflare’s edge. This means that changes to the Addressing API are immediately propagated to the Cloudflare edge.</p><p>Advertising and configuring IP addresses on Cloudflare involves several steps:</p><ul><li><p>Customers signal to Cloudflare about advertisement/withdrawal of IP addresses via the Addressing API or BGP Control</p></li><li><p>The Addressing API instructs the machines to change the prefix advertisements</p></li><li><p>BGP will be updated on the routers once enough machines have received the notification to update the prefix</p></li><li><p>Finally, customers can configure Cloudflare products to use BYOIP addresses via <a href="https://developers.cloudflare.com/byoip/service-bindings/"><u>service bindings</u></a> which will assign products to these ranges</p></li></ul><p>The Addressing API allows us to automate most of the processes surrounding how we advertise or withdraw addresses, but some processes still require manual actions. These manual processes are risky because of their close proximity to Production. As a part of <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>Code Orange: Fail Small</u></a>, one of the goals of remediation was to remove manual actions taken in the Addressing API and replace them with safe workflows.</p>
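            <p>To make the advertisement flow above concrete, here is a minimal sketch of asking the Addressing API to advertise an onboarded prefix, the same self-service path customers later used to re-advertise prefixes during this incident. This is our illustration, not production code: the endpoint path and the advertised field are based on our reading of the public BYOIP dynamic advertisement documentation and should be verified against the current API reference, and the account ID, prefix ID, and API token are placeholders.</p>
            <pre><code>// Sketch: toggle advertisement for a BYOIP prefix via the Addressing API.
// ACCOUNT_ID, PREFIX_ID, and CF_API_TOKEN are placeholders supplied by the
// caller; the endpoint shape is an assumption based on the public docs.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	url := fmt.Sprintf(
		"https://api.cloudflare.com/client/v4/accounts/%s/addressing/prefixes/%s/bgp/status",
		os.Getenv("ACCOUNT_ID"), os.Getenv("PREFIX_ID"),
	)

	// Request that the prefix be advertised again; a withdrawal would send false.
	req, err := http.NewRequest(http.MethodPatch, url, bytes.NewBufferString(`{"advertised": true}`))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("CF_API_TOKEN"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// BGP convergence happens asynchronously after the API accepts the change.
	fmt.Println("status:", resp.Status)
}
</code></pre>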
    <div>
      <h2>How did the incident occur?</h2>
      <a href="#how-did-the-incident-occur">
        
      </a>
    </div>
    <p>The specific change that broke was a modification attempting to automate the customer action of removing prefixes from Cloudflare’s BYOIP service, a regular customer request that is handled manually today. Removing this manual process was part of our Code Orange: Fail Small work to push all changes toward safe, automated, health-mediated deployment. Since the list of objects related to BYOIP prefixes can be large, this was implemented as a regularly running sub-task that checks for BYOIP prefixes that should be removed, and then removes them. Unfortunately, this cleanup sub-task contained a bug in how it queried the API.</p><p>Here is the API query from the cleanup sub-task:</p>
            <pre><code> resp, err := d.doRequest(ctx, http.MethodGet, `/v1/prefixes?pending_delete`, nil)
</code></pre>
            <p>And here is the relevant part of the API implementation:</p>
            <pre><code>	if v := req.URL.Query().Get("pending_delete"); v != "" {
		// ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
		prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
		if err != nil {
			api.RenderError(ctx, w, ErrInternalError)
			return
		}

		api.Render(ctx, w, http.StatusOK, renderIPPrefixAPIResponse(prefixes, nil))
		return
	}
</code></pre>
            <p>Because the client passes pending_delete with no value, the result of Query().Get("pending_delete") here is an empty string (""), so the guard shown above is never entered and the handler falls through to its default behavior: returning all BYOIP prefixes instead of just those prefixes that were supposed to be removed. The sub-task interpreted every returned prefix as being queued for deletion. It then began systematically deleting all BYOIP prefixes and all of their related dependent objects, including <a href="https://developers.cloudflare.com/byoip/service-bindings/"><u>service bindings</u></a>, until the impact was noticed and an engineer identified the sub-task and shut it down.</p>
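            <p>To make the failure mode concrete, here is a minimal, self-contained sketch (our illustration, not code from the Addressing API) showing how Go’s standard net/url package treats a bare ?pending_delete flag. Get() cannot distinguish “flag present with no value” from “flag absent”, while Has(), available since Go 1.17, can:</p>
            <pre><code>// Minimal reproduction of the ambiguity: a bare "?pending_delete" query
// yields an empty string from Get(), so a guard like `if v != ""` silently
// falls through to the default listing behavior.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, err := url.Parse("/v1/prefixes?pending_delete")
	if err != nil {
		panic(err)
	}
	q := u.Query()

	// Get() returns "" both when the key is absent and when it has no value.
	fmt.Printf("Get(%q) = %q\n", "pending_delete", q.Get("pending_delete")) // ""

	// Has() (Go 1.17+) reports presence regardless of value.
	fmt.Printf("Has(%q) = %v\n", "pending_delete", q.Has("pending_delete")) // true
}
</code></pre>
            <p>Either side of the contract could have avoided the fall-through: a client that sent an explicit pending_delete=true would have satisfied the existing check, and a server guard based on presence rather than value would have honored the bare flag.</p>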
    <div>
      <h3>Why did Cloudflare not catch the bug in our staging environment or testing?</h3>
      <a href="#why-did-cloudflare-not-catch-the-bug-in-our-staging-environment-or-testing">
        
      </a>
    </div>
    <p>Our staging environment contains data that matches Production as closely as possible, but it was not sufficient in this case, and the mock data we relied on to simulate what would occur did not capture this scenario.</p><p>In addition, while we have tests for this functionality, coverage for this scenario in our testing process and environment was incomplete. Initial testing and code review focused on the BYOIP self-service API journey and were completed successfully. While our engineers successfully tested the exact process a customer would have followed, testing did not cover a scenario where the task-runner service would independently execute changes to user data without explicit input.</p>
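            <p>As an example of the kind of coverage that was missing, the sketch below shows a regression test, written against Go’s standard httptest package, that drives a stand-in for the cleanup sub-task against a stub API server and asserts that the query it sends is unambiguous. The function name and route are illustrative, not our internal code; a test like this fails against the bare-flag query shown above and only passes once the client sends an explicit value.</p>
            <pre><code>// Illustrative regression test: assert the cleanup sub-task asks for only the
// prefixes pending deletion, using an explicit value the server is known to honor.
package cleanup

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// fetchPendingDeletion stands in for the sub-task's API call (hypothetical helper).
func fetchPendingDeletion(baseURL string) error {
	resp, err := http.Get(baseURL + "/v1/prefixes?pending_delete")
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func TestCleanupQueryIsWellFormed(t *testing.T) {
	var gotQuery string
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		gotQuery = r.URL.RawQuery
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()

	if err := fetchPendingDeletion(srv.URL); err != nil {
		t.Fatal(err)
	}

	// The buggy client sends "pending_delete" with no value, so this assertion
	// fails, which is exactly the signal that was missing before deployment.
	if gotQuery != "pending_delete=true" {
		t.Fatalf("cleanup query sent %q, want %q", gotQuery, "pending_delete=true")
	}
}
</code></pre>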
    <div>
      <h3>Why was recovery not immediate?</h3>
      <a href="#why-was-recovery-not-immediate">
        
      </a>
    </div>
    <p>Affected BYOIP prefixes were not all impacted in the same way, necessitating more intensive data recovery steps. As a part of Code Orange: Fail Small, we are building a system where operational state snapshots can be safely rolled out through health-mediated deployments. In the event something does roll out that causes unexpected behavior, it can be very quickly rolled back to a known-good state. However, that system is not in Production today.</p><p>BYOIP prefixes were in different states of impact during this incident, and each of these different states required different actions:</p><ul><li><p>Most impacted customers only had their prefixes withdrawn. Customers in this configuration could go into the dashboard and toggle their advertisements, which would restore service. </p></li><li><p>Some customers had their prefixes withdrawn and some bindings removed. These customers were in a partial state of recovery where they could toggle some prefixes but not others.</p></li><li><p>Some customers had their prefixes withdrawn and all service bindings removed. They could not toggle their prefixes in the dashboard because there was no <a href="https://developers.cloudflare.com/byoip/service-bindings/"><u>service</u></a> (Magic Transit, Spectrum, CDN) bound to them. These customers took the longest to mitigate, as a global configuration update had to be initiated to reapply the service bindings for all these customers to every single machine on Cloudflare’s edge.</p></li></ul>
    <div>
      <h3>How does this incident relate to Code Orange: Fail Small?</h3>
      <a href="#how-does-this-incident-relate-to-code-orange-fail-small">
        
      </a>
    </div>
    <p>The change we were making when this incident occurred is part of the Code Orange: Fail Small initiative, which is aimed at improving the resiliency of code and configuration at Cloudflare. As a brief primer on the <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>Code Orange: Fail Small</u></a> initiative, the work can be divided into three buckets:</p><ul><li><p>Require controlled rollouts for any configuration change that is propagated to the network, just like we do today for software binary releases.</p></li><li><p>Change our internal “break glass” procedures and remove any circular dependencies so that we, and our customers, can act fast and access all systems without issue during an incident.</p></li><li><p>Review, improve, and test failure modes of all systems handling network traffic to ensure they exhibit well-defined behavior under all conditions, including unexpected error states.</p></li></ul><p>The change that we attempted to deploy falls under the first bucket. By moving risky, manual changes to safe, automated configuration updates that are deployed in a health-mediated manner, we aim to improve the reliability of the service.</p><p>Critical work was already underway to enhance the Addressing API's configuration change support through staged test mediation and better correctness checks. This work was proceeding in parallel with the deployed change. Although preventative measures weren't fully deployed before the outage, teams were actively working on these systems when the incident occurred. Following our Code Orange: Fail Small promise to require controlled rollouts of any change into Production, our engineering teams have been reaching deep into all layers of our stack to identify and fix all problematic findings. While this outage wasn't itself global, the blast radius and impact were unacceptably large, further reinforcing Code Orange: Fail Small as a priority until we have re-established confidence in all changes to our network being as gradual as possible. Now let’s talk more specifically about improvements to these systems.</p>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    
    <div>
      <h3>API schema standardization</h3>
      <a href="#api-schema-standardization">
        
      </a>
    </div>
    <p>One of the issues in this incident was that the pending_delete flag was interpreted as a string, making it difficult for both client and server to reason about the value of the flag. We will improve the API schema to ensure better standardization, which will make it much easier for testing and systems to validate whether an API call is properly formed or not. This work is part of the third Code Orange workstream, which aims to create well-defined behavior under all conditions.</p>
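            <p>A minimal sketch of what stricter, schema-driven parameter handling could look like follows. The handler and helper names are illustrative, not the Addressing API’s actual implementation: boolean query parameters are parsed with strconv.ParseBool, and malformed or valueless flags are rejected with a 400 instead of being silently coerced into a string comparison.</p>
            <pre><code>// Sketch: strict boolean query parameters. A bare "?pending_delete" or a
// non-boolean value is rejected at the API boundary rather than falling
// through to "list everything".
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// parseBoolParam returns the value, whether the parameter was present at all,
// and an error for values that are not valid booleans.
func parseBoolParam(r *http.Request, name string) (value bool, present bool, err error) {
	q := r.URL.Query()
	if !q.Has(name) {
		return false, false, nil
	}
	v, err := strconv.ParseBool(q.Get(name))
	if err != nil {
		return false, true, fmt.Errorf("query parameter %q must be a boolean, got %q", name, q.Get(name))
	}
	return v, true, nil
}

func listPrefixes(w http.ResponseWriter, r *http.Request) {
	pendingDelete, present, err := parseBoolParam(r, "pending_delete")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest) // reject ambiguous requests
		return
	}
	if !present || !pendingDelete {
		fmt.Fprintln(w, "would list all prefixes")
		return
	}
	fmt.Fprintln(w, "would list only prefixes pending deletion")
}

func main() {
	http.HandleFunc("/v1/prefixes", listPrefixes)
	_ = http.ListenAndServe(":8080", nil)
}
</code></pre>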
    <div>
      <h3>Better separation between operational and configured state</h3>
      <a href="#better-separation-between-operational-and-configured-state">
        
      </a>
    </div>
    <p>Today, customers make changes to the addressing schema that are persisted in an authoritative database, and that database is the same one used for operational actions. This makes manual rollback more challenging because engineers need to work from database snapshots instead of reconciling desired and actual states. We will redesign the rollback mechanism and database configuration to ensure that we have an easy way to roll back changes quickly, and also to introduce layers between customer configuration and Production.</p><p>We will snapshot the data that we read from the database and are applying to Production, and apply those snapshots in the same way that we deploy all our other Production changes, mediated by health metrics that can automatically stop the deployment if things are going wrong. This means that the next time we have a problem where the database gets changed into a bad state, we can near-instantly revert individual customers (or all customers) to a version that was working.</p><p>While this will temporarily block our customers from being able to make direct updates via our API in the event of an outage, it will mean that we can continue serving their traffic while we work to fix the database, instead of being down for that time. This work aligns with the first and second Code Orange workstreams, which involve fast rollback and safe, health-mediated deployment of configuration.</p>
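            <p>The sketch below illustrates the snapshot-and-rollback model described above. The types and functions are ours for illustration, not Cloudflare internals: configuration read from the database is frozen into an immutable snapshot, a candidate snapshot is applied under a health probe, and a failing probe points the edge back at the last known-good snapshot.</p>
            <pre><code>// Sketch: health-mediated snapshot deployment with near-instant rollback.
// All names here are illustrative.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Snapshot is an immutable copy of configured state at a point in time.
type Snapshot struct {
	ID       string
	TakenAt  time.Time
	Prefixes map[string]bool // prefix -> advertised
}

// Deployer serves traffic from the last known-good snapshot and gates new
// snapshots behind a health probe.
type Deployer struct {
	current  Snapshot
	healthOK func() bool
}

// Apply rolls out a candidate snapshot and reverts if health degrades.
// A real rollout would be staged per data center rather than all at once.
func (d *Deployer) Apply(candidate Snapshot) error {
	previous := d.current
	d.current = candidate
	if !d.healthOK() {
		d.current = previous // roll back to the known-good state
		return errors.New("health check failed, rolled back to " + previous.ID)
	}
	return nil
}

func main() {
	good := Snapshot{ID: "snap-001", TakenAt: time.Now(), Prefixes: map[string]bool{"192.0.2.0/24": true}}
	bad := Snapshot{ID: "snap-002", TakenAt: time.Now(), Prefixes: map[string]bool{}} // would withdraw everything

	d := Deployer{current: good, healthOK: func() bool { return false }} // simulate failing health
	if err := d.Apply(bad); err != nil {
		fmt.Println(err)
	}
	fmt.Println("serving from snapshot:", d.current.ID) // still snap-001
}
</code></pre>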
    <div>
      <h3>Better arbitrate large withdrawal actions</h3>
      <a href="#better-arbitrate-large-withdrawal-actions">
        
      </a>
    </div>
    <p>We will improve our monitoring to detect when changes are happening too fast or too broadly, such as withdrawing or deleting BGP prefixes quickly, and disable the deployment of snapshots when this happens. This will form a type of circuit breaker to stop any out-of-control process that is manipulating the database from having a large blast radius, like we saw in this incident.</p><p>We also have some ongoing work to directly monitor that the services run by our customers are behaving correctly, and those signals can also be used to trip the circuit breaker and stop potentially dangerous changes from being applied until we have had time to investigate. This work aligns with the first Code Orange workstream, which involves safe deployment of changes.</p><p>Below is the timeline of events inclusive of deployment of the change and remediation steps: </p>
<div><table><colgroup>
<col></col>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Time (UTC)</span></th>
    <th><span>Status</span></th>
    <th><span>Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>2026-02-05 21:53</span></td>
    <td><span>Code merged into system</span></td>
    <td><span>Broken sub-process merged into code base</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 17:46</span></td>
    <td><span>Code deployed into system</span></td>
    <td><span>Address API release with broken sub-process completes</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 17:56</span></td>
    <td><span>Impact Start</span></td>
    <td><span>Broken sub-process begins executing. Prefix advertisement updates begin propagating and prefixes begin to be withdrawn </span><span>– IMPACT STARTS – </span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:13</span></td>
    <td><span>Cloudflare engaged</span></td>
    <td><span>Cloudflare engaged for failures on one.one.one.one</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:18</span></td>
    <td><span>Internal incident declared</span></td>
    <td><span>Cloudflare engineers continue investigating impact</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:21</span></td>
    <td><span>Addressing API team paged</span></td>
    <td><span>Engineering team responsible for Addressing API engaged and debugging begins</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:46</span></td>
    <td><span>Issue identified</span></td>
    <td><span>Broken sub-process terminated by an engineer and regular execution disabled; remediation begins</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 19:11</span></td>
    <td><span>Mitigation begins</span></td>
    <td><span>Cloudflare engineers begin to restore serviceability for prefixes that were withdrawn, while others focus on prefixes that were removed</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 19:19</span></td>
    <td><span>Some prefixes mitigated</span></td>
    <td><span>Customers begin to re-advertise their prefixes via the dashboard to restore service. </span><span>– IMPACT DOWNGRADE –</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 19:44</span></td>
    <td><span>Additional mitigation continues</span></td>
    <td><span>Engineers begin database recovery methods for removed prefixes</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 20:30</span></td>
    <td><span>Final mitigation process begins</span></td>
    <td><span>Engineers complete a release to restore withdrawn prefixes that still have existing service bindings. Others are still working on removed prefixes </span><span>– IMPACT DOWNGRADE –</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 21:08</span></td>
    <td><span>Configuration update deploys</span></td>
    <td><span>Engineering begins global machine configuration rollout to restore prefixes that were not self-mitigated or mitigated via previous efforts </span><span>– IMPACT DOWNGRADE –</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 23:03</span></td>
    <td><span>Configuration update completed</span></td>
    <td><span>Global machine configuration deployment to restore remaining prefixes is completed. </span><span>– IMPACT ENDS –</span></td>
  </tr>
</tbody></table></div><p>We deeply apologize for this incident today and how it affected the service we provide our customers, and also the Internet at large. We aim to provide a network that is resilient to change, and we did not deliver on our promise to you. We are actively making these improvements to ensure greater stability moving forward and to prevent this problem from happening again.</p> ]]></content:encoded>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Incident Response]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">6apSdbZfHEgeIzBwCqn5ob</guid>
            <dc:creator>David Tuber</dc:creator>
            <dc:creator>Dzevad Trumic</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cable cuts, storms, and DNS: a look at Internet disruptions in Q4 2025]]></title>
            <link>https://blog.cloudflare.com/q4-2025-internet-disruption-summary/</link>
            <pubDate>Mon, 26 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ The last quarter of 2025 brought several notable disruptions to Internet connectivity. Cloudflare Radar data reveals the impact of cable cuts, power outages, extreme weather, technical problems, and more. ]]></description>
            <content:encoded><![CDATA[ <p>In 2025, we <a href="https://radar.cloudflare.com/outage-center?dateStart=2025-01-01&amp;dateEnd=2025-12-31"><u>observed over 180 Internet disruptions</u></a> spurred by a variety of causes – some were brief and partial, while others were complete outages lasting for days. In the fourth quarter, we tracked only a single <a href="#government-directed"><u>government-directed</u></a> Internet shutdown, but multiple <a href="#cable-cuts"><u>cable cuts</u></a> wreaked havoc on connectivity in several countries. <a href="#power-outages"><u>Power outages</u></a> and <a href="#weather"><u>extreme weather</u></a> disrupted Internet services in multiple places, and the ongoing <a href="#military-action"><u>conflict</u></a> in Ukraine impacted connectivity there as well. As always, a number of the disruptions we observed were due to <a href="#known-or-unspecified-technical-problems"><u>technical problems</u></a> – with some acknowledged by the relevant providers, while others had unknown causes. In addition, incidents at several hyperscaler <a href="#cloud-platforms"><u>cloud platforms</u></a> and <a href="#cloudflare"><u>Cloudflare</u></a> impacted the availability of websites and applications.  </p><p>This post is intended as a summary overview of observed and confirmed disruptions and is not an exhaustive or complete list of issues that have occurred during the quarter. These anomalies are detected through significant deviations from expected traffic patterns observed across our network. Check out the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a> for a full list of verified anomalies and confirmed outages. </p>
    <div>
      <h2>Government-directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p><a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4df6i7hjk25"><u>The Internet was shut down in Tanzania</u></a> on October 29 as <a href="https://www.theguardian.com/world/2025/oct/29/tanzania-election-president-samia-suluhu-hassan-poised-to-retain-power"><u>violent protests</u></a> took place during the country’s presidential election. Traffic initially fell around 12:30 local time (09:30 UTC), dropping more than 90% lower than the previous week. The disruption lasted approximately 26 hours, with <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4qec7zdnt2u"><u>traffic beginning to return</u></a> around 14:30 local time (11:30 UTC) on October 30. However, that restoration <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4gjngzck72u"><u>proved to be quite brief</u></a>, with a significant decrease in traffic occurring around 16:15 local time (13:15 UTC), approximately two hours after it returned. This second near-complete outage lasted until November 3, <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4g47vasfm2u"><u>when traffic aggressively returned</u></a> after 17:00 local time (14:00 UTC). Nominal drops in <a href="https://radar.cloudflare.com/routing/tz?dateStart=2025-10-29&amp;dateEnd=2025-11-04#announced-ip-address-space"><u>announced IPv4 and IPv6 address space</u></a> were also observed during the shutdown, but there was never a complete loss of announcements, which would have signified a total disconnection of the country from the Internet. (<a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous systems</u></a> announce IP address space to other Internet providers, letting them know what blocks of IP addresses they are responsible for.)</p><p>Tanzania’s president later <a href="https://apnews.com/article/tanzania-samia-suluhu-hassan-internet-shutdown-october-election-1ec66b897e7809865d8971699a7284e0"><u>expressed sympathy</u></a> for the members of the diplomatic community and foreigners residing in the country regarding the impact of the Internet shutdown. Internet and social media services were also <a href="https://www.dw.com/en/tanzania-internet-slowdown-comes-at-a-high-cost/a-55512732"><u>restricted in 2020</u></a> ahead of the country’s general elections.</p>
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Digicel Haiti</h3>
      <a href="#digicel-haiti">
        
      </a>
    </div>
    <p>Digicel Haiti is unfortunately no stranger to Internet disruptions caused by cable cuts, and the network experienced two more such incidents during the fourth quarter. On October 16, traffic from <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> began to fall at 14:30 local time (18:30 UTC), reaching near zero at 16:00 local time (20:00 UTC). A translated <a href="https://x.com/jpbrun30/status/1978920959089230003"><u>X post from the company’s Director General</u></a> noted: “<i>We advise our clientele that @DigicelHT is experiencing 2 cuts on its international fiber optic infrastructure.</i>” Traffic began to recover after 17:00 local time (21:00 UTC), and reached expected levels within the following hour. At 17:33 local time (21:34 UTC), the Director General <a href="https://x.com/jpbrun30/status/1978937426841063504"><u>posted</u></a> that “<i>the first fiber on the international infrastructure has been repaired” </i>and service had been restored. </p><p>On November 25, another translated <a href="https://x.com/jpbrun30/status/1993283730467963345"><u>X post from the provider’s Director General</u></a> stated that its “<i>international optical fiber infrastructure on National Road 1</i>” had been cut. We observed traffic dropping on Digicel’s network approximately an hour earlier, with a complete outage observed between 02:00 - 08:00 local time (07:00 - 13:00 UTC). A <a href="https://x.com/jpbrun30/status/1993309357438910484"><u>follow-on X post</u></a> at 08:22 local time (13:22 UTC) stated that all services had been restored.</p>
    <div>
      <h3>Cybernet/StormFiber (Pakistan)</h3>
      <a href="#cybernet-stormfiber-pakistan">
        
      </a>
    </div>
    <p>At 17:30 local time (12:30 UTC) on October 20, Internet traffic for <a href="https://radar.cloudflare.com/as9541"><u>Cybernet/StormFiber (AS9541)</u></a> dropped sharply, falling to a level approximately 50% lower than the week before. At the same time, the network’s announced IPv4 address space dropped by over a third. The cause of these shifts was damage to the <a href="https://www.submarinecablemap.com/submarine-cable/peace-cable"><u>PEACE</u></a> submarine cable, which suffered a cut in the Red Sea near Sudan.</p><p>PEACE is one of several submarine cable systems (including <a href="https://www.submarinecablemap.com/submarine-cable/imewe"><u>IMEWE</u></a> and <a href="https://www.submarinecablemap.com/submarine-cable/seamewe-4"><u>SEA-ME-WE-4</u></a>) that carry international Internet traffic for Pakistani providers. The provider <a href="https://profit.pakistantoday.com.pk/2025/10/24/stormfiber-pledges-full-restoration-by-monday-after-weeklong-internet-disruptions/"><u>pledged to fully restore service</u></a> by October 27, but traffic and announced IPv4 address space had recovered to near expected levels by around 02:00 local time on October 21 (21:00 UTC on October 20).</p>
    <div>
      <h3>Camtel, MTN Cameroon, Orange Cameroun</h3>
      <a href="#camtel-mtn-cameroon-orange-cameroun">
        
      </a>
    </div>
    <p>Unusual traffic patterns observed across multiple Internet providers in Cameroon on October 23 were reportedly caused by problems on the <a href="https://www.submarinecablemap.com/submarine-cable/west-africa-cable-system-wacs"><u>WACS (West Africa Cable System)</u></a> submarine cable, which connects countries along the west coast of Africa to Portugal. </p><p>A (translated) <a href="https://teleasu.tv/internet-graves-perturbations-observees-ce-jeudi-23-octobre-2025/"><u>published report</u></a> stated that MTN informed subscribers that “<i>following an incident on the WACS fiber optic cable, Internet service is temporarily disrupted</i>” and Orange Cameroun informed subscribers that “<i>due to an incident on the international access fiber, Internet service is disrupted.</i>” An <a href="https://x.com/Camtelonline/status/1981424170316464390"><u>X post from Camtel</u></a> stated “<i>Cameroon Telecommunications (CAMTEL) wishes to inform the public that a technical incident involving WACS cable equipment in Batoke (LIMBE) occurred in the early hours of 23 October 2025, causing Internet connectivity disruptions throughout the country.</i>” </p><p>Traffic across the impacted providers initially fell at around 05:00 local time (04:00 UTC) before recovering to expected levels around 22:00 local time (21:00 UTC). Traffic across these networks was quite volatile during the day, dropping 90-99% at times. It isn’t clear what caused the visible spikiness in the traffic pattern—possibly attempts to shift Internet traffic to <a href="https://www.submarinecablemap.com/country/cameroon"><u>other submarine cable systems that connect to Cameroon</u></a>. Announced IP address space from <a href="https://radar.cloudflare.com/routing/as30992?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>MTN Cameroon</u></a> and <a href="https://radar.cloudflare.com/routing/as36912?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Orange Cameroon</u></a> dropped during this period as well, although <a href="https://radar.cloudflare.com/routing/as15964?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Camtel’s</u></a> announced IP address space did not change.</p><p>Connectivity in the <a href="https://radar.cloudflare.com/cf"><u>Central African Republic</u></a> and <a href="https://radar.cloudflare.com/cg"><u>Republic of Congo</u></a> was also reportedly impacted by the WACS issues.</p>



    <div>
      <h3>Claro Dominicana</h3>
      <a href="#claro-dominicana">
        
      </a>
    </div>
    <p>On December 9, we saw traffic from <a href="https://radar.cloudflare.com/as6400"><u>Claro Dominicana (AS6400)</u></a>, an Internet provider in the Dominican Republic, drop sharply around 12:15 local time (16:15 UTC). Traffic levels fell again around 14:15 local time (18:15 UTC), bottoming out 77% lower than the previous week before quickly returning to expected levels. The connectivity disruption was likely caused by two fiber optic outages, as an <a href="https://x.com/ClaroRD/status/1998468046311002183"><u>X post from the provider</u></a> during the outage noted that they were “causing intermittency and slowness in some services.” A <a href="https://x.com/ClaroRD/status/1998496113838764343"><u>subsequent post on X</u></a> from Claro stated that technicians had restored Internet services nationwide by repairing the severed fiber optic cables.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Dominican Republic</h3>
      <a href="#dominican-republic">
        
      </a>
    </div>
    <p>According to a (translated) <a href="https://x.com/ETED_RD/status/1988326178219061450"><u>X post from the Empresa de Transmisión Eléctrica Dominicana</u></a> (ETED), a transmission line outage caused an interruption in electrical service in the <a href="https://radar.cloudflare.com/do"><u>Dominican Republic</u></a> on November 11. This power outage impacted Internet traffic from the country, resulting in a <a href="https://noc.social/@cloudflareradar/115533081511310085"><u>nearly 50% drop in traffic</u></a> compared to the prior week, starting at 13:15 local time (17:15 UTC). Traffic levels remained lower until approximately 02:00 local time (06:00 UTC) on November 12, with a later <a href="https://x.com/ETED_RD/status/1988575130990330153"><u>(translated) X post from ETED</u></a> noting “<i>At 2:20 a.m. we have completed the recovery of the national electrical system, supplying 96% of the demand…</i>”</p><p>A subsequent <a href="https://dominicantoday.com/dr/local/2025/11/27/manual-line-disconnection-triggered-nationwide-blackout-report-says/"><u>technical report found</u></a> that “<i>the blackout began at the 138 kV San Pedro de Macorís I substation, where a live line was manually disconnected, triggering a high-intensity short circuit. Protection systems responded immediately, but the fault caused several nearby lines to disconnect, separating 575 MW of generation in the eastern region from the rest of the grid. The imbalance caused major power plants to trip automatically as part of their built-in safety mechanisms.</i>”</p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>On December 9, a <a href="https://www.tuko.co.ke/kenya/612181-kenya-power-reveals-7-pm-nationwide-blackout-multiple-regions/"><u>major power outage</u></a> impacted multiple regions across <a href="https://radar.cloudflare.com/ke"><u>Kenya</u></a>. Kenya Power explained that the outage “<i>was triggered by an incident on the regional Kenya-Uganda interconnected power network, which caused a disturbance on the Kenyan side of the system</i>” and claimed that “<i>[p]ower was restored to most of the affected areas within approximately 30 minutes.</i>” However, impacts to Internet connectivity lasted for nearly four hours, between 19:15 - 23:00 local time (16:15 - 20:00 UTC). The power outage caused traffic to drop as much as 18% at a national level, with the traffic shifts most visible in <a href="https://radar.cloudflare.com/traffic/7668902"><u>Nakuru County</u></a> and <a href="https://radar.cloudflare.com/traffic/192709"><u>Kiambu County</u></a>.</p>


    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Odesa, Ukraine</h3>
      <a href="#odesa-ukraine">
        
      </a>
    </div>
    <p><a href="https://odessa-journal.com/russia-carried-out-a-massive-drone-attack-on-the-odessa-region"><u>Russian drone strikes</u></a> on the <a href="https://radar.cloudflare.com/traffic/698738"><u>Odesa region</u></a> in <a href="https://radar.cloudflare.com/ua"><u>Ukraine</u></a> on December 12 damaged warehouses and energy infrastructure, with the latter causing power outages in parts of the region. Those outages disrupted Internet connectivity, resulting in <a href="https://x.com/CloudflareRadar/status/2000993223406211327?s=20"><u>traffic dropping by as much as 57%</u></a> as compared to the prior week. After the initial drop at midnight on December 13 (22:00 UTC on December 12), traffic gradually recovered over the following several days, returning to expected levels around 14:30 local time (12:30 UTC) on December 16.</p>
    <div>
      <h2>Weather</h2>
      <a href="#weather">
        
      </a>
    </div>
    
    <div>
      <h3>Jamaica</h3>
      <a href="#jamaica">
        
      </a>
    </div>
    <p><a href="https://www.nytimes.com/live/2025/10/28/weather/hurricane-melissa-jamaica-landfall?smid=url-share#df989e67-a90e-50fb-92d0-8d5d52f76e84"><u>Hurricane Melissa</u></a> made landfall on <a href="https://radar.cloudflare.com/jm"><u>Jamaica</u></a> on October 28 and left a trail of damage and destruction in its path. Associated <a href="https://www.jamaicaobserver.com/2025/10/28/eyeonmelissa-35-jps-customers-without-power/"><u>power outages</u></a> and infrastructure damage impacted Internet connectivity, causing traffic to initially <a href="https://x.com/CloudflareRadar/status/1983266694715084866"><u>drop by approximately half</u></a>, <a href="https://x.com/CloudflareRadar/status/1983217966347866383"><u>starting</u></a> around 06:15 local time (11:15 UTC), ultimately reaching as much as <a href="https://x.com/CloudflareRadar/status/1983357587707048103"><u>70% lower</u></a> than the previous week. Internet traffic from Jamaica remained well below pre-hurricane levels for several days, and ultimately started to make greater progress towards expected levels <a href="https://x.com/CloudflareRadar/status/1985708253872107713?s=20"><u>during the morning of November 4</u></a>. It can often take weeks or months for Internet traffic from a country to return to “normal” levels following storms that cause massive and widespread damage – while power may be largely restored within several days, damage to physical infrastructure takes significantly longer to address.</p>
    <div>
      <h3>Sri Lanka &amp; Indonesia</h3>
      <a href="#sri-lanka-indonesia">
        
      </a>
    </div>
    <p>On November 26, <a href="https://apnews.com/article/indonesia-sri-lanka-thailand-malaysia-floods-landsides-aa9947df1f6192a3c6c72ef58659d4d2"><u>Cyclone Senyar</u></a> caused catastrophic floods and landslides in <a href="https://radar.cloudflare.com/lk"><u>Sri Lanka</u></a> and <a href="https://radar.cloudflare.com/id"><u>Indonesia</u></a>, killing over 1,000 people and damaging telecommunications and power infrastructure across these countries. The infrastructure damage resulted in <a href="https://x.com/CloudflareRadar/status/1996233525989720083"><u>disruptions to Internet connectivity</u></a>, and resultant lower traffic levels, across multiple regions.</p><p>In Sri Lanka, regions outside the main Western Province were the most affected, and several provinces saw traffic drop <a href="https://x.com/CloudflareRadar/status/1996233528032301513"><u>between 80% and 95%</u></a> as compared to the prior week, including <a href="https://radar.cloudflare.com/traffic/1232860?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Western</u></a>, <a href="https://radar.cloudflare.com/traffic/1227618?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Southern</u></a>, <a href="https://radar.cloudflare.com/traffic/1225265?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Uva</u></a>, <a href="https://radar.cloudflare.com/traffic/8133521?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Eastern</u></a>, <a href="https://radar.cloudflare.com/traffic/7671049?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Northern</u></a>, <a href="https://radar.cloudflare.com/traffic/1232870?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Central</u></a>, and <a href="https://radar.cloudflare.com/traffic/1228435?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Sabaragamuwa</u></a>.</p>

<p>In <a href="https://x.com/CloudflareRadar/status/1996233530267885938"><u>Indonesia</u></a>, <a href="https://radar.cloudflare.com/traffic/1215638?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Aceh</u></a> and the Sumatra regions saw the biggest Internet disruptions. In Aceh, traffic initially dropped over 75% as compared to the previous week. In Sumatra, <a href="https://radar.cloudflare.com/traffic/1213642?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Sumatra</u></a> was the most affected, with an early 30% drop as compared to the previous week, before starting to recover more actively the following week.</p>


    <div>
      <h2>Known or unspecified technical problems</h2>
      <a href="#known-or-unspecified-technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Smartfren (Indonesia)</h3>
      <a href="#smartfren-indonesia">
        
      </a>
    </div>
    <p>On October 3, subscribers to Indonesian Internet provider <a href="https://radar.cloudflare.com/as18004"><u>Smartfren (AS18004</u></a>) experienced a service disruption. The issues were <a href="https://x.com/smartfrenworld/status/1973957300466643203"><u>acknowledged by the provider in an X post</u></a>, which stated (in translation), “<i>Currently, telephone, SMS and data services are experiencing problems in several areas.</i>” Traffic from the provider fell as much as 84%, starting around 09:00 local time (02:00 UTC). The disruption lasted for approximately eight hours, as traffic returned to expected levels around 17:00 local time (10:00 UTC). Smartfren did not provide any additional information on what caused the service problems.</p>
    <div>
      <h3>Vodafone UK</h3>
      <a href="#vodafone-uk">
        
      </a>
    </div>
    <p>Major British Internet provider Vodafone UK (<a href="https://radar.cloudflare.com/as5378"><u>AS5378</u></a> &amp; <a href="https://radar.cloudflare.com/as25135"><u>AS25135</u></a>) experienced a brief service outage on October 23. At 15:00 local time (14:00 UTC), traffic on both Vodafone <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> dropped to zero. Announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as5378?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS5378</u></a> fell by 75%, while announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as25135?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS25135</u></a> disappeared entirely. Both Internet traffic and address space recovered two hours later, returning to expected levels around 17:00 local time (16:00 UTC). Vodafone did not provide any information on their social media channels about the cause of the outage, and their <a href="https://www.vodafone.co.uk/network/status-checker"><u>network status checker page</u></a> was also unavailable during the outage.</p>






    <div>
      <h3>Fastweb (Italy)</h3>
      <a href="#fastweb-italy">
        
      </a>
    </div>
    <p>According to a <a href="https://tg24.sky.it/tecnologia/2025/10/22/fastweb-down-problemi-internet-oggi"><u>published report</u></a>, a DNS resolution issue disrupted Internet services for customers of Italian provider <a href="https://radar.cloudflare.com/as12874"><u>Fastweb (AS12874)</u></a> on October 22, causing observed traffic volumes to drop by over 75%. Fastweb <a href="https://www.firstonline.info/en/fastweb-down-oggi-internet-bloccato-in-tutta-italia-migliaia-di-segnalazioni/"><u>acknowledged the issue</u></a>, which impacted wired Internet customers between 09:30 - 13:00 local time (08:30 - 12:00 UTC).</p><p>Although not an Internet outage caused by connectivity failure, the impact of DNS resolution issues on Internet traffic is very similar. When a provider’s <a href="https://www.cloudflare.com/learning/dns/dns-server-types/"><u>DNS resolver</u></a> is experiencing problems, switching to a service like Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS resolver</u></a> will often restore connectivity.</p>
    <div>
      <h3>SBIN, MTN Benin, Etisalat Benin</h3>
      <a href="#sbin-mtn-benin-etisalat-benin">
        
      </a>
    </div>
    <p>On December 7, a concurrent drop in traffic was observed across <a href="https://radar.cloudflare.com/as28683"><u>SBIN (AS28683)</u></a>, <a href="https://radar.cloudflare.com/as37424"><u>MTN Benin (AS37424)</u></a>, and <a href="https://radar.cloudflare.com/as37136"><u>Etisalat Benin (AS37136)</u></a>. Between 18:30 - 19:30 local time (17:30 - 18:30 UTC), traffic dropped as much as 80% as compared to the prior week at a country level, nearly 100% at Etisalat and MTN, and over 80% at SBIN.</p><p>While an <a href="https://www.reuters.com/world/africa/soldiers-benins-national-television-claim-have-seized-power-2025-12-07/"><u>attempted coup</u></a> had taken place earlier in the day, it is unclear whether the observed Internet disruption was related in any way. From a routing perspective, all three impacted networks share <a href="https://radar.cloudflare.com/as174"><u>Cogent (AS174)</u></a> as an upstream provider, so a localized issue at Cogent may have contributed to the brief outage.  </p>



    <div>
      <h3>Cellcom Israel</h3>
      <a href="#cellcom-israel">
        
      </a>
    </div>
    <p>According to a <a href="https://www.ynetnews.com/article/2gpt1kt35"><u>reported announcement</u></a> from Israeli provider <a href="https://radar.cloudflare.com/as1680"><u>Cellcom (AS1680)</u></a>, on December 18, there was “<i>a malfunction affecting Internet connectivity that is impacting some of our customers.</i>” This malfunction caused traffic to drop nearly 70% as compared to the prior week, and occurred between 09:30 - 11:00 local time (07:30 - 09:00 UTC). The “malfunction” may have been a DNS failure, according to a <a href="https://www.israelnationalnews.com/news/419552"><u>published report</u></a>.</p>
    <div>
      <h3>Partner Communications (Israel)</h3>
      <a href="#partner-communications-israel">
        
      </a>
    </div>
    <p>Closing out 2025, on December 30, a major technical failure at Israeli provider <a href="https://radar.cloudflare.com/as12400"><u>Partner Communications (AS12400)</u></a> <a href="https://www.ynetnews.com/tech-and-digital/article/hjewkibnwe"><u>disrupted</u></a> mobile, TV, and Internet services across the country. Internet traffic from Partner fell by two-thirds as compared to the previous week between 14:00 - 15:00 local time (12:00 - 13:00 UTC). During the outage, queries to Cloudflare’s 1.1.1.1 public DNS resolver spiked, suggesting that the problem may have been related to Partner’s DNS infrastructure. However, the provider did not publicly confirm what caused the outage.</p>




    <div>
      <h2>Cloud Platforms</h2>
      <a href="#cloud-platforms">
        
      </a>
    </div>
    <p>During the fourth quarter, we launched a new <a href="https://radar.cloudflare.com/cloud-observatory"><u>Cloud Observatory</u></a> page on Radar that tracks availability and performance issues at a region level across hyperscaler cloud platforms, including <a href="https://radar.cloudflare.com/cloud-observatory/amazon"><u>Amazon Web Services</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/microsoft"><u>Microsoft Azure</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/google"><u>Google Cloud Platform</u></a>, and <a href="https://radar.cloudflare.com/cloud-observatory/oracle"><u>Oracle Cloud Infrastructure</u></a>.</p>
    <div>
      <h3>Amazon Web Services</h3>
      <a href="#amazon-web-services">
        
      </a>
    </div>
    <p>On October 20, the Amazon Web Services us-east-1 region in Northern Virginia experienced “<a href="https://health.aws.amazon.com/health/status?eventID=arn:aws:health:us-east-1::event/MULTIPLE_SERVICES/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE_BA540_514A652BE1A"><u>increased error rates and latencies</u></a>” that affected multiple services within the region. The issues impacted not only customers with public-facing Web sites and applications that rely on infrastructure within the region, but also Cloudflare customers that have origin resources hosted in us-east-1.</p><p>We began to see the impact of the problems around 06:30 UTC, as the share of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#success-rate"><u>error</u></a> (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status#server_error_responses"><u>5xx-class</u></a>) responses began to climb, reaching as high as 17% around 08:00 UTC. The number of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#connection-failures"><u>failures encountered when attempting to connect to origins</u></a> in us-east-1 climbed as well, peaking around 12:00 UTC.</p>

<p>The impact could also be clearly seen in key network performance metrics, which remained elevated throughout the incident, returning to normal levels just before the end of the incident, around 23:00 UTC. Both <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tcp-handshake-duration"><u>TCP</u></a> and <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tls-handshake-duration"><u>TLS</u></a> handshake durations got progressively worse throughout the incident—these metrics measure the amount of time needed for Cloudflare to establish TCP and TLS connections respectively with customer origin servers in us-east-1. In addition, the amount of time elapsed before Cloudflare <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1/#response-header-receive-duration"><u>received response headers</u></a> from the origin increased significantly during the first several hours of the incident, before gradually returning to expected levels.  </p>





    <div>
      <h3>Microsoft Azure</h3>
      <a href="#microsoft-azure">
        
      </a>
    </div>
    <p>On October 29, Microsoft Azure experienced an <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>incident</u></a> impacting <a href="https://azure.microsoft.com/en-us/products/frontdoor"><u>Azure Front Door</u></a>, its content delivery network service. According to <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>Azure's report on the incident</u></a>, “<i>A specific sequence of customer configuration changes, performed across two different control plane build versions, resulted in incompatible customer configuration metadata being generated. These customer configuration changes themselves were valid and non-malicious – however they produced metadata that, when deployed to edge site servers, exposed a latent bug in the data plane. This incompatibility triggered a crash during asynchronous processing within the data plane service.</i>”</p><p>The incident report marked the start time at 15:41 UTC, although we observed the volume of <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#connection-failures"><u>failed connection attempts</u></a> to Azure-hosted origins begin to climb about 45 minutes prior. The TCP and TLS handshake metrics also became more volatile during the incident period, with <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tcp-handshake-duration"><u>TCP handshakes</u></a> taking over 50% longer at times, and <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tls-handshake-duration"><u>TLS handshakes</u></a> taking nearly 200% longer at peak. The impacted metrics began to improve after 20:00 UTC, and according to Microsoft, the incident ended at 00:05 UTC on October 30.</p>



    <div>
      <h2>Cloudflare</h2>
      <a href="#cloudflare">
        
      </a>
    </div>
    <p>In addition to the outages discussed above, Cloudflare also experienced two disruptions during the fourth quarter. While these were not Internet outages in the classic sense, they did prevent users from accessing Web sites and applications delivered and protected by Cloudflare when they occurred.</p><p>The first incident took place on November 18, and was caused by a software failure triggered by a change to one of our database systems' permissions, which caused the database to output duplicate entries into a “feature file” used by our Bot Management system. Additional details, including a root cause analysis and timeline, can be found in the associated <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>blog post</u></a>.</p><p>The second incident occurred on December 5, and impacted a subset of customers, accounting for approximately 28% of all HTTP traffic served by Cloudflare. It was triggered by changes being made to our request body parsing logic while attempting to detect and mitigate a newly disclosed industry-wide React Server Components vulnerability. A post-mortem <a href="https://blog.cloudflare.com/5-december-2025-outage/"><u>blog post</u></a> contains additional details, including a root cause analysis and timeline.</p><p>For more information about the work underway at Cloudflare to prevent outages like these from happening again, check out our <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>blog post</u></a> detailing “Code Orange: Fail Small.”</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The disruptions observed in the fourth quarter underscore the importance of real-time data in maintaining global connectivity. Whether it’s a government-ordered shutdown or a minor technical issue, transparency allows the technical community to respond faster and more effectively. We will continue to track these shifts on Cloudflare Radar, providing the insights needed to navigate the complexities of modern networking. We share our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>email</u></a>.</p><p>As a reminder, while these blog posts feature graphs from <a href="https://radar.cloudflare.com/"><u>Radar</u></a> and the <a href="https://radar.cloudflare.com/explorer"><u>Radar Data Explorer</u></a>, the underlying data is available from our <a href="https://developers.cloudflare.com/api/resources/radar/"><u>API</u></a>. You can use the API to retrieve data to do your own local monitoring or analysis, or you can use the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/radar#cloudflare-radar-mcp-server-"><u>Radar MCP server</u></a> to incorporate Radar data into your AI tools.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[Microsoft Azure]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">6dRT0oOSVcyQzjnZCkzH7S</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Code Orange: Fail Small — our resilience plan following recent incidents]]></title>
            <link>https://blog.cloudflare.com/fail-small-resilience-plan/</link>
            <pubDate>Fri, 19 Dec 2025 22:35:30 GMT</pubDate>
            <description><![CDATA[ We have declared “Code Orange: Fail Small” to focus everyone at Cloudflare on a set of high-priority workstreams with one simple goal: ensure that the cause of our last two global outages never happens again. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>November 18, 2025</u></a>, Cloudflare’s network experienced significant failures to deliver network traffic for approximately two hours and ten minutes. Nearly three weeks later, on <a href="https://blog.cloudflare.com/5-december-2025-outage/"><u>December 5, 2025</u></a>, our network again failed to serve traffic for 28% of applications behind our network for about 25 minutes.</p><p>We published detailed post-mortem blog posts following both incidents, but we know that we have more to do to earn back your trust. Today we are sharing details about the work underway at Cloudflare to prevent outages like these from happening again.</p><p>We are calling the plan “<b>Code Orange: Fail Small</b>”, which reflects our goal of making our network more resilient to errors or mistakes that could lead to a major outage. A “Code Orange” means the work on this project is prioritized above all else. For context, we declared a “Code Orange” at Cloudflare <a href="https://blog.cloudflare.com/major-data-center-power-failure-again-cloudflare-code-orange-tested/"><u>once before</u></a>, following another major incident that required top priority from everyone across the company. We feel the recent events require the same focus.  Code Orange is our way to enable that to happen, allowing teams to work cross-functionally as necessary to get the job done while pausing any other work.</p><p>The Code Orange work is organized into three main areas:</p><ul><li><p>Require controlled rollouts for any configuration change that is propagated to the network, just like we do today for software binary releases.</p></li><li><p>Review, improve, and test failure modes of all systems handling network traffic to ensure they exhibit well-defined behavior under all conditions, including unexpected error states.</p></li><li><p>Change our internal “break glass”* procedures, and remove any circular dependencies so that we, and our customers, can act fast and access all systems without issue during an incident.</p></li></ul><p>These projects will deliver iterative improvements as they proceed, rather than one “big bang” change at their conclusion. Every individual update will contribute to more resiliency at Cloudflare. By the end, we expect Cloudflare’s network to be much more resilient, including for issues such as those that triggered the global incidents we experienced in the last two months.</p><p>We understand that these incidents are painful for our customers and the Internet as a whole. We’re deeply embarrassed by them, which is why this work is the first priority for everyone here at Cloudflare.</p><p><sup><b><i>*</i></b></sup><sup><i> Break glass procedures at Cloudflare allow certain individuals to elevate their privilege under certain circumstances to perform urgent actions to resolve high severity scenarios.</i></sup></p>
    <div>
      <h2>What went wrong?</h2>
      <a href="#what-went-wrong">
        
      </a>
    </div>
    <p>In the first incident, users visiting a customer site on Cloudflare saw error pages that indicated Cloudflare could not deliver a response to their request. In the second, they saw blank pages.</p><p>Both outages followed a similar pattern. In the moments leading up to each incident we instantaneously deployed a configuration change in our data centers in hundreds of cities around the world.</p><p>The November change was an automatic update to our Bot Management classifier. We run various artificial intelligence models that learn from the traffic flowing through our network to build detections that identify bots. We constantly update those systems to stay ahead of bad actors trying to evade our security protection to reach customer sites.</p><p>During the December incident, while trying to protect our customers from a vulnerability in the popular open source framework React, we deployed a change to a security tool used by our security analysts to improve our signatures. Similar to the urgency of new bot management updates, we needed to get ahead of the attackers who wanted to exploit the vulnerability. That change triggered the start of the incident.</p><p>This pattern exposed a serious gap in how we deploy configuration changes at Cloudflare, versus how we release software updates. When we release software version updates, we do so in a controlled and monitored fashion. For each new binary release, the deployment must successfully complete multiple gates before it can serve worldwide traffic. We deploy first to employee traffic, before carefully rolling out the change to increasing percentages of customers worldwide, starting with free users. If we detect an anomaly at any stage, we can revert the release without any human intervention.</p><p>We have not applied that methodology to configuration changes. Unlike releasing the core software that powers our network, when we make configuration changes, we are modifying the values of how that software behaves and we can do so instantly. We give this power to our customers too: If you make a change to a setting in Cloudflare, it will propagate globally in seconds.</p><p>While that speed has advantages, it also comes with risks that we need to address. The past two incidents have demonstrated that we need to treat any change that is applied to how we serve traffic in our network with the same level of tested caution that we apply to changes to the software itself.</p>
    <div>
      <h2>We will change how we deploy configuration updates at Cloudflare</h2>
      <a href="#we-will-change-how-we-deploy-configuration-updates-at-cloudflare">
        
      </a>
    </div>
    <p>Our ability to deploy configuration changes globally within seconds was the core commonality across the two incidents. In both events, a wrong configuration took down our network in seconds.</p><p>Introducing controlled rollouts of our configuration, just as we <b><i>already do</i></b> for software releases, is the most important workstream of our Code Orange plan.</p><p>Configuration changes at Cloudflare propagate to the network very quickly. When a user creates a new DNS record, or creates a new security rule, it reaches 90% of servers on the network within seconds. This is powered by a software component that we internally call Quicksilver.</p><p>Quicksilver is also used for any configuration change required by our own teams. The speed is a feature: we can react and globally update our network behavior very quickly. However, in both incidents this caused a breaking change to propagate to the entire network in seconds rather than passing through gates to test it.</p><p>While the ability to deploy changes to our network on a near-instant basis is useful in many cases, it is rarely necessary. Work is underway to treat configuration the same way that we treat code by introducing controlled deployments within Quicksilver to any configuration change.</p><p>We release software updates to our network multiple times per day through what we call our Health Mediated Deployment (HMD) system. In this framework, every team at Cloudflare that owns a service (a piece of software deployed into our network) must define the metrics that indicate a deployment has succeeded or failed, the rollout plan, and the steps to take if it does not succeed.</p><p>Different services will have slightly different variables. Some might need longer wait times before proceeding to more data centers, while others might have lower tolerances for error rates even if it causes false positive signals.</p><p>Once deployed, our HMD toolkit begins to carefully progress against that plan while monitoring each step before proceeding. If any step fails, the rollback will automatically begin and the team can be paged if needed.</p><p>By the end of Code Orange, configuration updates will follow this same process. We expect this to allow us to quickly catch the kinds of issues that occurred in these past two incidents long before they become widespread problems.</p>
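    <p>To make the gating model concrete, here is a simplified, hypothetical sketch of what a health-mediated rollout loop for a configuration change could look like. The stage names, soak times, thresholds, and helper functions are invented for illustration; they are not Cloudflare's actual HMD or Quicksilver interfaces.</p>
    <pre><code># A simplified, hypothetical sketch of a health-mediated rollout for a configuration
# change. Stage names, soak times, thresholds, and helpers are invented for
# illustration; they are not Cloudflare's HMD or Quicksilver interfaces.
import time

STAGES = [
    {"name": "employee-traffic",   "soak_s": 300,  "max_error_rate": 0.001},
    {"name": "canary-datacenters", "soak_s": 600,  "max_error_rate": 0.001},
    {"name": "free-plan-traffic",  "soak_s": 900,  "max_error_rate": 0.005},
    {"name": "global",             "soak_s": 1800, "max_error_rate": 0.005},
]

def apply_config(change_id, stage):
    """Push the configuration change to one slice of the network (stub)."""
    print(f"applying {change_id} to {stage}")

def observed_error_rate(stage):
    """Read the service-defined health metric for this stage (stub)."""
    return 0.0

def roll_back(change_id):
    """Revert the change everywhere it has been applied so far (stub)."""
    print(f"rolling back {change_id}")

def rollout(change_id):
    for stage in STAGES:
        apply_config(change_id, stage["name"])
        time.sleep(stage["soak_s"])                        # soak before widening the blast radius
        if observed_error_rate(stage["name"]) > stage["max_error_rate"]:
            roll_back(change_id)                           # any failed gate aborts the rollout
            return False                                   # and the owning team can be paged
    return True
</code></pre>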
    <div>
      <h2>How will we address failure modes between services?</h2>
      <a href="#how-will-we-address-failure-modes-between-services">
        
      </a>
    </div>
    <p>While we are optimistic that better control over configuration changes will catch more problems before they become incidents, we know that mistakes can and will occur. During both incidents, errors in one part of our network became problems in most of our technology stack, including the control plane that customers rely on to configure how they use Cloudflare.</p><p>We need to think about careful, graduated rollouts not just in terms of geographic progression (spreading to more of our data centers) or in terms of population progression (spreading to employees and customer types). We also need to plan for safer deployments that contain failures from service progression (spreading from one product like our Bot Management service to an unrelated one like our dashboard).</p><p>To that end, we are in the process of reviewing the interface contracts between every critical product and service that comprise our network to ensure that we a) <b>assume failure will occur</b> between each interface and b) handle that failure in the absolute <b>most reasonable way possible</b>. </p><p>To go back to our Bot Management service failure, there were at least two key interfaces where, if we had assumed failure was going to happen, we could have handled it gracefully to the point that it was unlikely any customer would have been impacted. The first was in the interface that read the corrupted config file. Instead of panicking, there should have been a sane set of validated defaults which would have allowed traffic to pass through our network, while we would have, at worst, lost the realtime fine-tuning that feeds into our bot detection machine-learning models.

</p><p>The second interface was between the core software that runs our network and the Bot Management module itself. In the event that our Bot Management module failed (as it did), we should not have dropped traffic by default. Instead, we could have, yet again, fallen back to a more sensible default of allowing the traffic to pass with a neutral classification.</p>
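    <p>The sketch below illustrates the first of those patterns, falling back to validated defaults instead of panicking when a configuration file is unusable. The file format, field names, and default values are hypothetical and do not represent the actual Bot Management feature file.</p>
    <pre><code># A minimal sketch of failing back to validated defaults instead of panicking when a
# configuration file is unusable. The file format, field names, and defaults here are
# hypothetical; this is not the actual Bot Management feature file.
import json
import logging

# Known-good defaults: enough to keep classifying and passing traffic, at the cost of
# losing the real-time fine-tuning the feature file normally provides.
DEFAULT_FEATURES = {"model_version": "baseline", "thresholds": {"bot_score": 30}}

def load_bot_features(path):
    try:
        with open(path) as f:
            features = json.load(f)
        if "thresholds" not in features:                  # basic validation of the payload
            raise ValueError("missing thresholds")
        return features
    except (OSError, ValueError) as exc:                  # json.JSONDecodeError is a ValueError
        logging.error("feature file unusable (%s); using validated defaults", exc)
        return DEFAULT_FEATURES
</code></pre>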
    <div>
      <h2>How will we solve emergencies faster?</h2>
      <a href="#how-will-we-solve-emergencies-faster">
        
      </a>
    </div>
    <p>During the incidents, it took us too long to resolve the problem. In both cases, this was worsened by our security systems preventing team members from accessing the tools they needed to fix the problem, and in some cases, circular dependencies slowed us down as some internal systems also became unavailable.</p><p>As a security company, all our tools are behind authentication layers with fine-grained access controls to ensure customer data is safe and to prevent unauthorized access. This is the right thing to do, but at the same time, our current processes and systems slowed us down when speed was a top priority.</p><p>Circular dependencies also affected our customer experience. For example, during the November 18 incident, Turnstile, our no-CAPTCHA bot solution, became unavailable. As we use Turnstile on the login flow to the Cloudflare dashboard, customers who did not have active sessions, or API service tokens, were not able to log in to Cloudflare to make critical changes when they needed to most.</p><p>Our team will be reviewing and improving all of the break glass procedures and technology to ensure that, when necessary, we can access the right tools as fast as possible while maintaining our security requirements. This includes reviewing and removing circular dependencies, or being able to “bypass” them quickly in the event there is an incident. We will also increase the frequency of our training exercises, so that processes are well understood by all teams prior to any potential disaster scenario in the future. </p>
    <div>
      <h2>When will we be done?</h2>
      <a href="#when-will-we-be-done">
        
      </a>
    </div>
    <p>While we haven’t captured in this post all the work being undertaken internally, the workstreams detailed above describe the top priorities the teams are being asked to focus on. Each of these workstreams maps to a detailed plan touching nearly every product and engineering team at Cloudflare. We have a lot of work to do.</p><p>By the end of Q1, and largely before then, we will:</p><ul><li><p>Ensure all production systems are covered by Health Mediated Deployments (HMD) for configuration management.</p></li><li><p>Update our systems to adhere to proper failure modes as appropriate for each product set.</p></li><li><p>Ensure we have processes in place so the right people have the right access to provide proper remediation during an emergency.</p></li></ul><p>Some of these goals will be evergreen. We will always need to better handle circular dependencies as we launch new software, and our break glass procedures will need to be updated to reflect how our security technology changes over time.</p><p>We failed our users and the Internet as a whole in these past two incidents. We have work to do to make it right. We plan to share updates as this work proceeds and appreciate the questions and feedback we have received from our customers and partners.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Code Orange]]></category>
            <guid isPermaLink="false">DMVZ2E5NT13VbQvP1hUNj</guid>
            <dc:creator>Dane Knecht</dc:creator>
        </item>
        <item>
            <title><![CDATA[The 2025 Cloudflare Radar Year in Review: The rise of AI, post-quantum, and record-breaking DDoS attacks]]></title>
            <link>https://blog.cloudflare.com/radar-2025-year-in-review/</link>
            <pubDate>Mon, 15 Dec 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We present our 6th annual review of Internet trends and patterns observed across the globe, revealing the disruptions, advances and metrics that defined 2025.  ]]></description>
            <content:encoded><![CDATA[ <p>The <a href="https://radar.cloudflare.com/year-in-review/2025/"><u>2025 Cloudflare Radar Year in Review</u></a> is here: our sixth annual review of the Internet trends and patterns we observed throughout the year, based on Cloudflare’s expansive network view.</p><p>Our view is unique, due to Cloudflare’s global <a href="https://cloudflare.com/network"><u>network</u></a>, which has a presence in 330 cities in over 125 countries/regions, handling over 81 million HTTP requests per second on average, with more than 129 million HTTP requests per second at peak on behalf of millions of customer Web properties, in addition to responding to approximately 67 million (<a href="https://www.cloudflare.com/learning/dns/dns-server-types/"><u>authoritative + resolver</u></a>) DNS queries per second. <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> uses the data generated by these Web and DNS services, combined with other complementary data sets, to provide near-real time insights into <a href="https://radar.cloudflare.com/traffic"><u>traffic</u></a>, <a href="https://radar.cloudflare.com/bots"><u>bots</u></a>, <a href="https://radar.cloudflare.com/security/"><u>security</u></a>, <a href="https://radar.cloudflare.com/quality"><u>connectivity</u></a>, and <a href="https://radar.cloudflare.com/dns"><u>DNS</u></a> patterns and trends that we observe across the Internet. </p><p>Our <a href="https://radar.cloudflare.com/year-in-review/2025/"><u>Radar Year in Review</u></a> takes that observability and, instead of a real-time view, offers a look back at 2025: incorporating interactive charts, graphs, and maps that allow you to explore and compare selected trends and measurements year-over-year and across geographies, as well as share and embed Year in Review graphs. </p><p>The 2025 Year In Review is organized into six sections: <a href="https://radar.cloudflare.com/year-in-review/2025#internet-traffic-growth"><u>Traffic</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025#robots-txt"><u>AI</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025#ios-vs-android"><u>Adoption &amp; Usage</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025#internet-outages"><u>Connectivity</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025#mitigated-traffic"><u>Security</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025#malicious-emails"><u>Email Security</u></a>, with data spanning the period from January 1 to December 2, 2025. To ensure consistency, we kept underlying methodologies unchanged from previous years’ calculations. We also incorporated several new data sets this year, including multiple AI-related metrics, <a href="https://radar.cloudflare.com/year-in-review/2025#speed-tests"><u>global speed test activity</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025#ddos-attacks"><u>hyper-volumetric DDOS size progression</u></a>. Trends for over 200 countries/regions are available on the microsite; smaller or less-populated locations are excluded due to insufficient data. Some metrics are only shown worldwide and are not displayed if a country/region is selected. 
</p><p>In this post, we highlight key findings and interesting observations from the major Year In Review microsite sections, and we have again published a companion <i>Most Popular Internet Services </i><a href="https://blog.cloudflare.com/radar-2025-year-in-review-internet-services/"><u>blog post</u></a> that specifically explores trends seen across <a href="https://radar.cloudflare.com/year-in-review/2025#internet-services"><u>top Internet Services</u></a>.</p><p>We encourage you to visit the <a href="https://radar.cloudflare.com/year-in-review/2025/"><u>2025 Year in Review microsite</u></a> to explore the datasets and metrics in more detail, including those for your country/region to see how they have changed since 2024, and how they compare to other areas of interest. </p><p>We hope you’ll find the Year in Review to be an insightful and powerful tool — to explore the disruptions, advances, and metrics that defined the Internet in 2025. </p><p>Let’s dig in.</p>
    <div>
      <h2>Key Findings</h2>
      <a href="#key-findings">
        
      </a>
    </div>
    
    <div>
      <h3>Traffic</h3>
      <a href="#traffic">
        
      </a>
    </div>
    <ul><li><p>Global Internet traffic grew 19% in 2025, with significant growth starting in August. <a href="#global-internet-traffic-grew-19-in-2025-with-significant-growth-starting-in-august"><u>➜</u></a></p></li><li><p>The top 10 most popular Internet services saw a few year-over-year shifts, while a number of new entrants landed on category lists. <a href="#the-top-10-most-popular-internet-services-saw-some-year-over-year-shifts-while-the-category-lists-saw-a-number-of-new-entrants"><u>➜</u></a></p></li><li><p>Starlink traffic doubled in 2025, including traffic from over 20 new countries/regions. <a href="#starlink-traffic-doubled-in-2025-including-traffic-from-over-20-new-countries-regions"><u>➜</u></a></p></li><li><p>Googlebot was again responsible for the highest volume of request traffic to Cloudflare in 2025 as it crawled millions of Cloudflare customer sites for search indexing and AI training. <a href="#googlebot-was-again-responsible-for-the-highest-volume-of-request-traffic-to-cloudflare-in-2025-as-it-crawled-millions-of-cloudflare-customer-sites-for-search-indexing-and-ai-training"><u>➜</u></a></p></li><li><p>The share of human-generated Web traffic that is post-quantum encrypted has grown to 52%. <a href="#the-share-of-human-generated-web-traffic-that-is-post-quantum-encrypted-has-grown-to-52"><u>➜</u></a></p></li><li><p>Googlebot was responsible for more than a quarter of Verified Bot traffic. <a href="#googlebot-was-responsible-for-more-than-a-quarter-of-verified-bot-traffic"><u>➜</u></a></p></li></ul>
    <div>
      <h3>AI</h3>
      <a href="#ai">
        
      </a>
    </div>
    <ul><li><p>Crawl volume from dual-purpose Googlebot dwarfed other AI bots and crawlers. <a href="#crawl-volume-from-dual-purpose-googlebot-dwarfed-other-ai-bots-and-crawlers"><u>➜</u></a></p></li><li><p>AI “user action” crawling increased by over 15x in 2025. <a href="#ai-user-action-crawling-increased-by-over-15x-in-2025"><u>➜</u></a></p></li><li><p>While other AI bots accounted for 4.2% of HTML request traffic, Googlebot alone accounted for 4.5%. <a href="#while-other-ai-bots-accounted-for-4-2-of-html-request-traffic-googlebot-alone-accounted-for-4-5"><u>➜</u></a></p></li><li><p>Anthropic had the highest crawl-to-refer ratio among the leading AI and search platforms. <a href="#anthropic-had-the-highest-crawl-to-refer-ratio-among-the-leading-ai-and-search-platforms"><u>➜</u></a></p></li><li><p>AI crawlers were the most frequently fully disallowed user agents found in robots.txt files. <a href="#ai-crawlers-were-the-most-frequently-fully-disallowed-user-agents-found-in-robots-txt-files"><u>➜</u></a></p></li><li><p>On Workers AI, Meta’s llama-3-8b-instruct model was the most popular model, and text generation was the most popular task type. <a href="#on-workers-ai-metas-llama-3-8b-instruct-model-was-the-most-popular-model-and-text-generation-was-the-most-popular-task-type"><u>➜</u></a></p></li></ul>
    <div>
      <h3>Adoption &amp; Usage</h3>
      <a href="#adoption-usage">
        
      </a>
    </div>
    <ul><li><p>iOS devices generated 35% of mobile device traffic globally — and more than half of device traffic in many countries. <a href="#ios-devices-generated-35-of-mobile-device-traffic-globally-and-more-than-half-of-device-traffic-in-many-countries"><u>➜</u></a></p></li><li><p>The shares of global Web requests using HTTP/3 and HTTP/2 both increased slightly in 2025. <a href="#the-shares-of-global-web-requests-using-http-3-and-http-2-both-increased-slightly-in-2025"><u>➜</u></a></p></li><li><p>JavaScript-based libraries and frameworks remained integral tools for building Web sites. <a href="#javascript-based-libraries-and-frameworks-remained-integral-tools-for-building-web-sites"><u>➜</u></a></p></li><li><p>One-fifth of automated API requests were made by Go-based clients. <a href="#one-fifth-of-automated-api-requests-were-made-by-go-based-clients"><u>➜</u></a></p></li><li><p>Google remains the top search engine, with Yandex, Bing, and DuckDuckGo distant followers. <a href="#google-remains-the-top-search-engine-with-yandex-bing-and-duckduckgo-distant-followers"><u>➜</u></a></p></li><li><p>Chrome remains the top browser across platforms and operating systems – except on iOS, where Safari has the largest share. <a href="#chrome-remains-the-top-browser-across-platforms-and-operating-systems-except-on-ios-where-safari-has-the-largest-share"><u>➜</u></a></p></li></ul>
    <div>
      <h3>Connectivity</h3>
      <a href="#connectivity">
        
      </a>
    </div>
    <ul><li><p>Almost half of the 174 major Internet outages observed around the world in 2025 were due to government-directed regional and national shutdowns of Internet connectivity. <a href="#almost-half-of-the-174-major-internet-outages-observed-around-the-world-in-2025-were-due-to-government-directed-regional-and-national-shutdowns-of-internet-connectivity"><u>➜</u></a></p></li><li><p>Globally, less than a third of dual-stack requests were made over IPv6, while in India, over two-thirds were. <a href="#globally-less-than-a-third-of-dual-stack-requests-were-made-over-ipv6-while-in-india-over-two-thirds-were"><u>➜</u></a></p></li><li><p>European countries had some of the highest download speeds, all above 200 Mbps. Spain remained consistently among the top locations across measured Internet quality metrics. <a href="#european-countries-had-some-of-the-highest-download-speeds-all-above-200-mbps-spain-remained-consistently-among-the-top-locations-across-measured-internet-quality-metrics"><u>➜</u></a></p></li><li><p>London and Los Angeles were hotspots for Cloudflare speed test activity in 2025. <a href="#london-and-los-angeles-were-hotspots-for-cloudflare-speed-test-activity-in-2025"><u>➜</u></a></p></li><li><p>More than half of request traffic comes from mobile devices in 117 countries/regions. <a href="#more-than-half-of-request-traffic-comes-from-mobile-devices-in-117-countries-regions"><u>➜</u></a></p></li></ul>
    <div>
      <h3>Security</h3>
      <a href="#security">
        
      </a>
    </div>
    <ul><li><p>6% of global traffic over Cloudflare’s network was mitigated by our systems — either as potentially malicious or for customer-defined reasons. <a href="#6-of-global-traffic-over-cloudflares-network-was-mitigated-by-our-systems-either-as-potentially-malicious-or-for-customer-defined-reasons"><u>➜</u></a></p></li><li><p>40% of global bot traffic came from the United States, with Amazon Web Services and Google Cloud originating a quarter of global bot traffic. <a href="#40-of-global-bot-traffic-came-from-the-united-states-with-amazon-web-services-and-google-cloud-originating-a-quarter-of-global-bot-traffic"><u>➜</u></a></p></li><li><p>Organizations in the "People and Society” sector were the most targeted during 2025. <a href="#organizations-in-the-people-and-society-vertical-were-the-most-targeted-during-2025"><u>➜</u></a></p></li><li><p>Routing security, measured as the shares of RPKI valid routes and covered IP address space, saw continued improvement throughout 2025. <a href="#routing-security-measured-as-the-shares-of-rpki-valid-routes-and-covered-ip-address-space-saw-continued-improvement-throughout-2025"><u>➜</u></a></p></li><li><p>Hyper-volumetric DDoS attack sizes grew significantly throughout the year. <a href="#hyper-volumetric-ddos-attack-sizes-grew-significantly-throughout-the-year"><u>➜</u></a></p></li><li><p>More than 5% of email messages analyzed by Cloudflare were found to be malicious. <a href="#more-than-5-of-email-messages-analyzed-by-cloudflare-were-found-to-be-malicious"><u>➜</u></a></p></li><li><p>Deceptive links, identity deception, and brand impersonation were the most common types of threats found in malicious email messages. <a href="#deceptive-links-identity-deception-and-brand-impersonation-were-the-most-common-types-of-threats-found-in-malicious-email-messages"><u>➜</u></a></p></li><li><p>Nearly all of the email messages from the .christmas and .lol Top Level Domains were found to be either spam or malicious. <a href="#nearly-all-of-the-email-messages-from-the-christmas-and-lol-top-level-domains-were-found-to-be-either-spam-or-malicious"><u>➜</u></a></p></li></ul>
    <div>
      <h2>Traffic trends</h2>
      <a href="#traffic-trends">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3EqqyX4A0PI27tBdVijUq2/9102522d8661d7d5911ece00c1b1e678/BLOG-3077_2.png" />
          </figure>
    <div>
      <h3>Global Internet traffic grew 19% in 2025, with significant growth starting in August</h3>
      <a href="#global-internet-traffic-grew-19-in-2025-with-significant-growth-starting-in-august">
        
      </a>
    </div>
    <p>To determine the traffic trends over time for the Year in Review, we use the average daily traffic volume (excluding bot traffic) over the second full calendar week (January 12-18) of 2025 as our baseline. (The second calendar week is used to allow time for people to get back into their “normal” school and work routines after the winter holidays and New Year’s Day.) The percent change shown in the traffic trends chart is calculated relative to the baseline value — it does not represent absolute traffic volume for a country/region. The trend line represents a seven-day trailing average, which is used to smooth the sharp changes seen with data at a daily granularity. </p><p>Traffic growth in 2025 appeared to occur in several phases. Traffic was, on average, somewhat flat through mid-April, generally within a couple of percent of the baseline value. However, it then saw growth through May to approximately 5% above baseline, staying in the +4-7% range through mid-August. It was at that time that growth accelerated, climbing steadily through September, October, and November, <a href="https://radar.cloudflare.com/year-in-review/2025#internet-traffic-growth"><u>peaking at 19% growth</u></a> for the year. Aided by a late-November increase, 2025’s rate of growth is about 10% higher than the 17% growth observed in 2024. In <a href="https://blog.cloudflare.com/radar-2024-year-in-review/#global-internet-traffic-grew-17-2-in-2024"><u>past years</u></a>, we have also observed traffic growth accelerating in the back half of the year, although in 2022-2024, that acceleration started in July. It’s not clear why this year’s growth was seemingly delayed by several weeks.</p>
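    <p>For readers who want to reproduce this kind of normalization on their own data, the short sketch below indexes a hypothetical series of daily totals against a baseline week and applies a seven-day trailing average; the variable names and inputs are illustrative only.</p>
    <pre><code># A short sketch of the normalization described above: index daily totals to the
# average of a baseline week, then smooth with a seven-day trailing average.
# daily_requests is a hypothetical list of daily totals, ordered from January 1.
def percent_change_vs_baseline(daily_requests, baseline_start, baseline_days=7):
    week = daily_requests[baseline_start:baseline_start + baseline_days]
    baseline = sum(week) / len(week)
    return [(day / baseline - 1.0) * 100.0 for day in daily_requests]

def trailing_average(series, window=7):
    smoothed = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]      # up to `window` most recent points
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# January 12 is index 11 when January 1 is index 0:
# change = percent_change_vs_baseline(daily_requests, baseline_start=11)
# trend = trailing_average(change)
</code></pre>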
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3I9BSisZlIKlCrANpDTBtx/deb202dba9ca9aa7e23379bab6d81412/BLOG-3077_3_-_traffic-internet_traffic_growth_-_worldwide.png" />
          </figure><p><sup><i>Internet traffic trends in 2025, worldwide</i></sup></p><p><a href="https://radar.cloudflare.com/year-in-review/2025/bw#internet-traffic-growth"><u>Botswana</u></a> saw the highest peak growth, reaching 298% above baseline on November 8, and ending the period 295% over baseline. (More on what accounts for that growth in the Starlink section below.) Botswana and <a href="https://radar.cloudflare.com/year-in-review/2025/sd#internet-traffic-growth"><u>Sudan</u></a> were the only countries/regions to see traffic more than double over the course of the year, although some others experienced peak increases over 100% at some point during the year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1z4fQNQvLZM5li5h7JWeIq/ed3afd5c7d2412a7426f3e7c4985be33/BLOG-3077_4_-_traffic-internet_traffic_growth_-_Botswana.png" />
          </figure><p><sup><i>Internet traffic trends in 2025, Botswana</i></sup></p><p>The impact of extended Internet disruptions is clearly visible within the graphs as well. For example, on October 29, the <a href="https://radar.cloudflare.com/year-in-review/2025/tz#internet-traffic-growth"><u>Tanzanian</u></a> government imposed an Internet shutdown in response to election day protests. That shutdown lasted just a day, but another one followed from October 30 until November 3. Although traffic in the country had increased more than 40% above baseline ahead of the shutdowns, the disruption ultimately dropped traffic more than 70% below baseline — a rapid reversal. Traffic recovered quickly after connectivity was restored. A similar pattern was observed in <a href="https://radar.cloudflare.com/year-in-review/2025/jm#internet-traffic-growth"><u>Jamaica</u></a>, where Internet traffic spiked ahead of the arrival of <a href="https://x.com/CloudflareRadar/status/1983188999461319102?s=20"><u>Hurricane Melissa</u></a> on October 28, and then dropped significantly after the storm caused power outages and infrastructure damage on the island. Traffic began to rebound after the storm’s passing, returning to a level just above baseline by early December.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dVMnD0mQvl4sB1bbn6kka/a7c433aaf2df3319328b27156bf70618/BLOG-3077_5_-_traffic-internet_traffic_growth_-_Tanzania.png" />
          </figure><p><sup><i>Internet traffic trends in 2025, Tanzania</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dovYDK7vTfjsL9FBNAvjE/a80a0c8fe69cce81ecc03605ae874859/BLOG-3077_6_-_traffic-internet_traffic_growth_-_Jamaica.png" />
          </figure><p><sup><i>Internet traffic trends in 2025, Jamaica</i></sup></p>
    <div>
      <h3>The top 10 most popular Internet services saw some year-over-year shifts, while the category lists saw a number of new entrants</h3>
      <a href="#the-top-10-most-popular-internet-services-saw-some-year-over-year-shifts-while-the-category-lists-saw-a-number-of-new-entrants">
        
      </a>
    </div>
    <p>For the Year in Review, we look at the 11-month year-to-date period. In addition to an “overall” ranked list, we also rank services across nine categories, based on analysis of anonymized query data of traffic to our <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS resolver</u></a> from millions of users around the world. For the purposes of these rankings, domains that belong to a single Internet service are grouped together.</p><p>Google and Facebook once again held the top two spots among the <a href="https://radar.cloudflare.com/year-in-review/2025/#internet-services"><u>top 10</u></a>. Although the other members of the top 10 list remained consistent with 2024’s rankings, there was some movement in the middle. Microsoft, Instagram, and YouTube all moved higher; Amazon Web Services (AWS) dropped one spot lower, while TikTok fell four spots.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vMi7DU13dkmLCkhEvvzVO/bdc5b0baa3b140c6112abf3b7414da83/BLOG-3077_7_-_traffic-topinternetservices.png" />
          </figure><p><sup><i>Top Internet services in 2025, worldwide</i></sup></p><p>Among Generative AI services, ChatGPT/OpenAI remained at the top of the list. But there was movement elsewhere, highlighting the dynamic nature of the industry. Services that moved up the rankings include Perplexity, Claude/Anthropic, and GitHub Copilot. New entries in the top 10 for 2025 include Google Gemini, Windsurf AI, Grok/xAI, and DeepSeek.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vUiNheIzMym9Mr3TPK3yN/c4684bb93696e31dcd689b1a150d35cd/BLOG-3077_8_-_Generative_AI.png" />
          </figure><p><sup><i>Top Generative AI services in 2025, worldwide</i></sup></p><p>Other categories saw movement within their lists as well – Shopee (“the leading e-commerce online shopping platform in Southeast Asia and Taiwan”) is a new entrant to the E-Commerce list, and HBO Max joined the Video Streaming ranking. These categorical rankings, as well as trends seen by specific services, are explored in more detail in <a href="https://blog.cloudflare.com/radar-2025-year-in-review-internet-services/"><u>a separate blog post</u></a>.</p><p>In addition, this year we are also providing top Internet services insights at a country/region level for the Overall, Generative AI, Social Media, and Messaging categories. (In 2024, we only shared Overall insights.)</p>
    <div>
      <h3>Starlink traffic doubled in 2025, including traffic from over 20 new countries/regions</h3>
      <a href="#starlink-traffic-doubled-in-2025-including-traffic-from-over-20-new-countries-regions">
        
      </a>
    </div>
    <p>SpaceX Starlink’s satellite-based Internet service continues to be a popular option for bringing connectivity to unserved or underserved areas, as well as to users on <a href="https://starlink.com/business/aviation"><u>planes</u></a> and <a href="https://starlink.com/business/maritime"><u>boats</u></a>. We analyzed aggregate request traffic volumes associated with Starlink's primary <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> (<a href="https://radar.cloudflare.com/as14593"><u>AS14593</u></a>) to track the growth in usage of the service throughout 2025. The request volume shown on the trend line in the chart represents a seven-day trailing average. </p><p>Globally, <a href="https://radar.cloudflare.com/year-in-review/2025/#starlink-traffic-trends"><u>traffic from Starlink</u></a> continued to see consistent growth throughout 2025, with total request volume up 2.3x across the year. We tend to see rapid traffic growth when Starlink service becomes available in a country/region, and that trend continues in 2025. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4d7DF8FT1RuK8rbrFfUu1E/c05645dc7640e11794b35770bc0bcd70/BLOG-3077_9_-_traffic-starlink-worldwide.png" />
          </figure><p><sup><i>Starlink traffic growth in 2025, worldwide</i></sup></p><p>That’s exactly what we saw in the more than 20 new countries/regions where <a href="https://x.com/starlink"><u>@Starlink</u></a> announced availability: within days, Starlink traffic in those places increased rapidly. These included <a href="https://radar.cloudflare.com/year-in-review/2025/am#starlink-traffic-trends"><u>Armenia</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/ne#starlink-traffic-trends"><u>Niger</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/lk#starlink-traffic-trends"><u>Sri Lanka</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/sx#starlink-traffic-trends"><u>Sint Maarten</u></a>.</p><p>We also saw Starlink traffic from a number of locations that are not currently <a href="https://starlink.com/map"><u>marked for service availability</u></a>. However, there are IPv4 and/or IPv6 prefixes associated with these countries in Starlink’s <a href="https://geoip.starlinkisp.net/feed.csv"><u>published geofeed</u></a>. Given the ability for Starlink users to <a href="https://starlink.com/roam"><u>roam</u></a> with their service (and equipment), this traffic likely comes from roaming users in those areas.</p>
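    <p>As an example of working with that geofeed, the sketch below lists the prefixes it associates with a given country code, assuming the standard RFC 8805 CSV layout (prefix, country, region, city). It is illustrative only and is not how Radar attributes Starlink traffic.</p>
    <pre><code># An illustrative sketch of listing the prefixes Starlink's published geofeed
# associates with a country code, assuming the standard RFC 8805 CSV layout
# (prefix,country,region,city). This is not how Radar attributes traffic.
import csv
import urllib.request

GEOFEED_URL = "https://geoip.starlinkisp.net/feed.csv"

def prefixes_for_country(alpha2):
    with urllib.request.urlopen(GEOFEED_URL) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    rows = csv.reader(line for line in lines if line and not line.startswith("#"))
    return [row[0] for row in rows if len(row) > 1 and row[1].upper() == alpha2.upper()]

if __name__ == "__main__":
    print(prefixes_for_country("NE"))   # e.g., prefixes geolocated to Niger
</code></pre>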
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4knmSgVn4FFyMm3ZRNRvuq/887455ee737217a7f9bad2cedbbff009/BLOG-3077_10_-_traffic-starlink-niger.png" />
          </figure><p><sup><i>Starlink traffic growth in 2025, Niger</i></sup></p><p>Of countries/regions where service was active before 2025, <a href="https://radar.cloudflare.com/year-in-review/2025/bj#starlink-traffic-trends"><u>Benin</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/tl#starlink-traffic-trends"><u>Timor-Leste</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/bw#starlink-traffic-trends"><u>Botswana</u></a> had some of the largest traffic growth, at 51x, 19x, and 16x respectively. Starlink service availability in <a href="https://x.com/Starlink/status/1720438167944499638"><u>Benin</u></a> was first announced in November 2023, <a href="https://x.com/Starlink/status/1866631930902622360"><u>Timor-Leste</u></a> in December 2024, and <a href="https://x.com/Starlink/status/1828840132688130322"><u>Botswana</u></a> in August 2024.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PlOuYo67dUghmsSVtzd5k/d8ff2816e5703cc425c403c52bd56be1/BLOG-3077_11_-_traffic-starlink-botswana.png" />
          </figure><p><sup><i>Starlink traffic growth in 2025, Botswana</i></sup></p><p>Similar services, such as <a href="https://leo.amazon.com/"><u>Amazon Leo</u></a>, <a href="https://www.eutelsat.com/satellite-services/tv-internet-home/satellite-internet-home-business-konnect"><u>Eutelsat Konnect</u></a>, and China’s <a href="https://en.wikipedia.org/wiki/Qianfan"><u>Qianfan</u></a>, continue to grow their satellite constellations and move towards commercial availability. We hope to review traffic growth across these services in the future as well.</p>
    <div>
      <h3>Googlebot was again responsible for the highest volume of request traffic to Cloudflare in 2025 as it crawled millions of Cloudflare customer sites for search indexing and AI training</h3>
      <a href="#googlebot-was-again-responsible-for-the-highest-volume-of-request-traffic-to-cloudflare-in-2025-as-it-crawled-millions-of-cloudflare-customer-sites-for-search-indexing-and-ai-training">
        
      </a>
    </div>
    <p>To look at the aggregate request traffic Cloudflare saw in 2025 from the entire IPv4 Internet, we can use a <a href="https://en.wikipedia.org/wiki/Hilbert_curve"><u>Hilbert curve</u></a>, which allows us to visualize a sequence of IPv4 addresses in a two-dimensional pattern that keeps nearby IP addresses close to each other, making them <a href="https://xkcd.com/195/"><u>useful</u></a> for surveying the Internet's IPv4 address space. Within the <a href="https://radar.cloudflare.com/year-in-review/2025/#ipv4-traffic-distribution"><u>visualization</u></a>, we aggregate IPv4 addresses into <a href="https://www.ripe.net/about-us/press-centre/IPv4CIDRChart_2015.pdf"><u>/20</u></a> prefixes, meaning that at the highest zoom level, each square represents traffic from 4,096 IPv4 addresses. This level of aggregation keeps the amount of data used for the visualization manageable. See the <a href="https://blog.cloudflare.com/radar-2024-year-in-review/#googlebot-was-responsible-for-the-highest-volume-of-request-traffic-to-cloudflare-in-2024-as-it-retrieved-content-from-millions-of-cloudflare-customer-sites-for-search-indexing"><u>2024 Year in Review blog post</u></a> for additional details about the visualization.</p><p>For the third year in a row, the IP address block that had the maximum request volume to Cloudflare during 2025 was Google’s <a href="https://radar.cloudflare.com/routing/prefix/66.249.64.0/20"><u>66.249.64.0/20</u></a> –  <a href="https://developers.google.com/static/search/apis/ipranges/googlebot.json"><u>one of several</u></a> used by the <a href="https://developers.google.com/search/docs/crawling-indexing/googlebot"><u>Googlebot</u></a> web crawler to retrieve content for search indexing and AI training. That a Googlebot IP address block ranked again as the top request traffic source is unsurprising, given the number of web properties on Cloudflare’s network and <a href="#googlebot-was-responsible-for-more-than-a-quarter-of-verified-bot-traffic"><u>Googlebot’s aggressive crawling activity</u></a>. The Googlebot prefix accounted for nearly 4x as much IPv4 request traffic as the next largest traffic source, 146.20.240.0/20, which is part of a <a href="https://radar.cloudflare.com/routing/prefix/146.20.0.0/16"><u>larger block of IPv4 address space announced by Rackspace Hosting</u></a>. As a cloud and hosting provider, Rackspace supports many different types of customers and applications, so the driver of the observed traffic to Cloudflare isn’t known.</p>
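    <p>For readers who want to experiment with the same kind of layout, the sketch below uses the standard Hilbert curve index-to-coordinate mapping to place a /20 prefix onto the 1024x1024 grid that this level of aggregation implies (2^20 cells, one per /20). It is a simplified illustration, not the code behind the Radar visualization.</p>
    <pre><code># A simplified sketch of the standard Hilbert curve mapping used for this kind of
# layout: each /20 prefix (4,096 IPv4 addresses) becomes one cell on a 1024x1024
# grid (2^20 cells). Illustrative only; not the code behind the Radar visualization.
import ipaddress

def d2xy(order, d):
    """Map Hilbert curve index d to (x, y) on a 2**order by 2**order grid."""
    x = y = 0
    t = d
    for i in range(order):
        s = 2 ** i
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant so locality is preserved
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
    return x, y

def prefix_to_cell(prefix, order=10):
    """Place a /20 prefix on the grid; its index is the prefix's position in 0..2**20-1."""
    network = ipaddress.ip_network(prefix)
    index = int(network.network_address) >> 12   # drop the 12 host bits of a /20
    return d2xy(order, index)

print(prefix_to_cell("66.249.64.0/20"))          # the Googlebot block discussed above
</code></pre>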
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NpjYc7D7ykOlLh837jarL/59c2bd9927a2fb16bb39973f4d8d1db8/BLOG-3077_12_-_traffic-ipv4distribution-googlebot.png" />
          </figure><p><i><sup>Zoomed Hilbert curve view showing the address block that generated the highest volume of requests in 2025</sup></i></p><p>This year, we’ve added the ability to search for an autonomous system (ASN) to the visualization, allowing you to see how broadly a network provider’s IP address holdings are distributed across the IPv4 universe. </p><p>One example is AS16509 (AMAZON-02, used with AWS), which shows the results of Amazon’s acquisitions of <a href="https://toonk.io/aws-and-their-billions-in-ipv4-addresses/index.html"><u>large amounts of IPv4 address space</u></a> over the years. Another example is AS7018 (ATT-INTERNET4, AT&amp;T), which is one of the largest <a href="https://radar.cloudflare.com/routing/us#ases-registered-in-united-states"><u>announcers of IPv4 address space in the United States</u></a>. Much of the traffic we see from this ASN comes from <a href="https://radar.cloudflare.com/routing/prefix/12.0.0.0/8"><u>12.0.0.0/8</u></a>, a block of over 16 million IPv4 addresses that has been <a href="https://wq.apnic.net/apnic-bin/whois.pl?searchtext=12.147.5.178"><u>owned by AT&amp;T since 1983</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/42mehcaIRV4Kp9h6P86z6d/436e033e353710419fcc49865d765258/BLOG-3077_13_-_traffic-ipv4distribution-as7018.png" />
          </figure><p><sup><i>Hilbert curve showing the IPv4 address blocks from AS7018 that sent traffic to Cloudflare in 2025</i></sup></p>
    <div>
      <h3>The share of human-generated Web traffic that is post-quantum encrypted has grown to 52%</h3>
      <a href="#the-share-of-human-generated-web-traffic-that-is-post-quantum-encrypted-has-grown-to-52">
        
      </a>
    </div>
    <p>“<a href="https://en.wikipedia.org/wiki/Post-quantum_cryptography"><u>Post-quantum</u></a>” refers to a set of cryptographic techniques designed to protect encrypted data from “<a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a>” attacks by adversaries that have the ability to capture and store current data for future decryption by sufficiently advanced quantum computers. The Cloudflare Research team has been <a href="https://blog.cloudflare.com/sidh-go/"><u>working on post-quantum cryptography since 2017</u></a>, and regularly publishes <a href="https://blog.cloudflare.com/pq-2025/"><u>updates</u></a> on the state of the post-quantum Internet.</p><p>After seeing <a href="https://radar.cloudflare.com/year-in-review/2024#post-quantum-encryption"><u>significant growth in 2024</u></a>, the global share of <a href="https://radar.cloudflare.com/year-in-review/2025/#post-quantum-encryption"><u>post-quantum encrypted traffic</u></a> nearly doubled throughout 2025, from 29% at the start of the year to 52% in early December. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qqehh1EqKIMi7xNcSr8SN/c24962ce446e153fbd37c9abe7254f78/BLOG-3077_14_-_traffic-postquantum-worldwide.png" />
          </figure><p><sup><i>Post-quantum encrypted TLS 1.3 traffic growth in 2025, worldwide</i></sup></p><p>Twenty-eight countries/regions saw their share of post-quantum encrypted traffic more than double throughout the year, including significant growth in <a href="https://radar.cloudflare.com/year-in-review/2025/pr#post-quantum-encryption"><u>Puerto Rico</u></a> and <a href="https://radar.cloudflare.com/year-in-review/2025/kw#post-quantum-encryption"><u>Kuwait</u></a>. Kuwait’s share nearly tripled, from 13% to 37%, and Puerto Rico’s share grew from 20% to 49%. </p><p>These two were among the locations that saw significant share growth in mid-September, <a href="https://9to5mac.com/2025/09/09/apple-announces-ios-26-release-date-september-15/"><u>concurrent with</u></a> Apple releasing operating system updates, in which “<i>TLS-protected connections will </i><a href="https://support.apple.com/en-us/122756"><i><u>automatically advertise support for hybrid, quantum-secure key exchange</u></i></a><i> in TLS 1.3</i>”. In Kuwait and Puerto Rico, over half of request traffic is from mobile devices, and approximately half comes from iOS devices in both locations as well, so it is not surprising that this software update resulted in a significant increase in post-quantum traffic share.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Y65KuTezdGnAfilj9Xosr/a74b60f9f24322827ea89f9ad1eef035/BLOG-3077_15_-_traffic-postquantum-puertorico.png" />
          </figure><p><sup><i>Post-quantum encrypted TLS 1.3 traffic growth in 2025, Puerto Rico</i></sup></p><p>To that end, the share of post-quantum encrypted traffic from Apple iOS devices <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=post_quantum&amp;filters=botClass%253DLIKELY_HUMAN%252Cos%253DiOS&amp;dt=2025-09-01_2025-09-28"><u>grew significantly in September</u></a> after iOS 26 was officially released. Just <a href="https://x.com/CloudflareRadar/status/1969159602999640535?s=20"><u>four days after release</u></a>, the global share of requests with post-quantum support from iOS devices grew from just under 2% to 11%. By <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=post_quantum&amp;filters=deviceType%253DMobile%252Cos%253DiOS%252CbotClass%253DLikely_Human&amp;dt=2025-12-01_2025-12-07"><u>early December</u></a>, more than 25% of requests from iOS devices used post-quantum encryption.</p>
    <div>
      <h3>Googlebot was responsible for more than a quarter of Verified Bot traffic</h3>
      <a href="#googlebot-was-responsible-for-more-than-a-quarter-of-verified-bot-traffic">
        
      </a>
    </div>
    <p>The new <a href="https://radar.cloudflare.com/bots/directory?kind=all"><u>Bots Directory</u></a> on Cloudflare Radar provides a wealth of information about <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/"><u>Verified Bots</u></a> and <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/"><u>Signed Agents</u></a>, including their operators, categories, and associated user agents, links to documentation, and traffic trends. Verified Bots must conform to a <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>set of requirements</u></a> as well as being verified through either <a href="https://developers.cloudflare.com/bots/reference/bot-verification/web-bot-auth/"><u>Web Bot Auth</u></a> or <a href="https://developers.cloudflare.com/bots/reference/bot-verification/ip-validation/"><u>IP validation</u></a>. A signed agent is controlled by an end user and a verified signature-agent from their <a href="https://developers.cloudflare.com/bots/reference/bot-verification/web-bot-auth/"><u>Web Bot Auth</u></a> implementation, and must conform to a separate <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/policy/"><u>set of requirements</u></a>.</p><p><a href="https://radar.cloudflare.com/bots/directory/google"><u>Googlebot</u></a> is used to crawl Web site content for search indexing and AI training, and it was far and away the <a href="https://radar.cloudflare.com/year-in-review/2025/#per-bot-traffic"><u>most active bot seen by Cloudflare</u></a> throughout 2025. It was most active between mid-February and mid-July, peaking in mid-April, and was responsible for over 28% of traffic from Verified Bots. Other Google-operated bots that were responsible for notable amounts of traffic included <a href="https://radar.cloudflare.com/bots/directory/googleads"><u>Google AdsBot</u></a> (used to monitor Web sites where Google ads are served), <a href="https://radar.cloudflare.com/bots/directory/googleimageproxy"><u>Google Image Proxy</u></a> (used to retrieve and cache images embedded in email messages), and <a href="https://radar.cloudflare.com/bots/directory/google-other"><u>GoogleOther</u></a> (used by various product teams for fetching publicly accessible content from sites).</p><p>OpenAI’s <a href="https://radar.cloudflare.com/bots/directory/gptbot"><u>GPTBot</u></a>, which crawls content for AI training, was the next most active bot, originating about 7.5% of Verified Bot traffic, with fairly volatile crawling activity during the first half of the year. Microsoft’s <a href="https://radar.cloudflare.com/bots/directory/bing"><u>Bingbot</u></a> crawls Web site content for search indexing and AI training and generated 6% of Verified Bot traffic throughout the year, showing relatively stable activity. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/01CNwrALbfJ1DBJpX3hHvw/58f278f76b4e57d095e5e61b879f3728/BLOG-3077_16_-_traffic-verifiedbot-bots.png" />
          </figure><p><sup><i>Verified Bot traffic trends in 2025, worldwide</i></sup></p><p>Search engine crawlers and AI crawlers are the two most active Verified Bot categories, with traffic patterns mapping closely to the leading bots in those categories, including GoogleBot and OpenAI’s GPTBot. <a href="https://radar.cloudflare.com/bots/directory?category=SEARCH_ENGINE_CRAWLER&amp;kind=all"><u>Search engine crawlers</u></a> were responsible for 40% of Verified Bot traffic, with <a href="https://radar.cloudflare.com/bots/directory?category=AI_CRAWLER&amp;kind=all"><u>AI crawlers</u></a> generating half as much (20%). <a href="https://radar.cloudflare.com/bots/directory?category=SEARCH_ENGINE_OPTIMIZATION&amp;kind=all"><u>Search engine optimization</u></a> bots were also quite active, driving over 13% of requests from Verified Bots.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6IFOI7astEqMk1fqLPvhMK/860c1b28fe6d2987b7bcd8510d1495b5/BLOG-3077_17_-_traffic-verifiedbots-category.png" />
          </figure><p><sup><i>Verified Bot traffic trends by category in 2025, worldwide</i></sup></p>
    <div>
      <h2>AI insights</h2>
      <a href="#ai-insights">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7IY2MCHqrWK7wPO5XSrHwc/2d4622db6417472e2702c31a95d31cef/BLOG-3077_18_-_.png" />
          </figure>
    <div>
      <h3>Crawl volume from dual-purpose Googlebot dwarfed other AI bots and crawlers</h3>
      <a href="#crawl-volume-from-dual-purpose-googlebot-dwarfed-other-ai-bots-and-crawlers">
        
      </a>
    </div>
    <p>In September, a Cloudflare <a href="https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/"><u>blog post</u></a> laid out a proposal for responsible AI bot principles, one of which was “AI bots should have one distinct purpose and declare it.” In the <a href="https://radar.cloudflare.com/ai-insights#ai-bot-best-practices"><u>AI bots best practices overview</u></a> on Radar, we note that several bot operators have dual-purpose crawlers, including Google and Microsoft.</p><p>Because <a href="https://radar.cloudflare.com/bots/directory/google"><u>Googlebot</u></a> crawls for both search engine indexing and AI training, we have included it in this year’s <a href="https://radar.cloudflare.com/year-in-review/2025/#ai-bot-and-crawler-traffic"><u>AI crawler overview</u></a>. In 2025, its crawl volume dwarfed that of other leading AI bots. Request traffic began to increase in mid-February, peaking in late April, and then slowly declined through late July. After that, it grew gradually into the end of the year. <a href="https://radar.cloudflare.com/bots/directory/bing"><u>Bingbot</u></a> also has a similar dual purpose, although its crawl volume is a fraction of Googlebot’s. Bingbot’s crawl activity trended generally upwards across the year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14AYO1s8q9J0zN9gcTaz0h/d60ad6cdd7af04938d98eda081bea834/BLOG-3077_19_-_ai-botandcrawlertraffic.png" />
          </figure><p><sup><i>AI crawler traffic trends in 2025, worldwide</i></sup></p><p>OpenAI’s <a href="https://radar.cloudflare.com/bots/directory/gptbot"><u>GPTBot</u></a> is used to crawl content that may be used in training OpenAI's generative AI foundation models. Its crawling activity was quite volatile across the year, reaching its highest levels in June, but it ended November slightly above the crawl levels seen at the beginning of the year. </p><p>Crawl volume for OpenAI’s <a href="https://radar.cloudflare.com/bots/directory/chatgpt-user"><u>ChatGPT-User</u></a>, which visits Web pages when users ask ChatGPT or a CustomGPT questions, saw sustained growth over the course of the year, with a weekly usage pattern becoming more evident starting in mid-February, suggesting increasing usage at schools and in the workplace. Peak request volumes were as much as 16x higher than at the beginning of the year. A drop in activity was also evident in the June to August timeframe, when many students were out of school and many professionals took vacation time. </p><p><a href="https://radar.cloudflare.com/bots/directory/oai-searchbot"><u>OAI-SearchBot</u></a>, which is used to link to and surface websites in search results in ChatGPT's search features, saw crawling activity grow gradually through August, then several traffic spikes in August and September, before starting to grow more aggressively heading into October, with peak request volume during a late October spike approximately 5x higher than the beginning of the year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Y39lUtvOLcaxwSwop4Egs/b9790ef1314a35ff811e4ed09d875271/BLOG-3077_20_-_image59.png" />
          </figure><p><sup><i>OpenAI crawler traffic trends in 2025, worldwide</i></sup></p><p>Crawling by Anthropic’s ClaudeBot effectively doubled through the first half of the year, but gradually declined during the second half, returning to a level approximately 10% higher than the start of the year. Perplexity’s PerplexityBot crawling traffic grew slowly through January and February, but saw a big jump in activity from mid-March into April. After that, growth was more gradual through October, before seeing a significant increase again in November, winding up about 3.5x higher than where it started the year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PgjYaCVUzZgmt23SdKj6q/142ebab34ffbea6dd6770bcebdf2f1d2/BLOG-3077_21_-_image42.png" />
          </figure><p><sup><i>ClaudeBot traffic trends in 2025, worldwide</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hkDU4jX6T7GibKUxDqycO/c0eab7d698916d05ef7314973974ef5d/BLOG-3077_22_-_.png" />
          </figure><p><sup><i>PerplexityBot traffic trends in 2025, worldwide</i></sup></p><p>ByteDance’s Bytespider, one of 2024’s top AI crawlers, saw crawling volume below several other training bots, and its activity dropped across the year, continuing the decline observed last year.</p>
    <div>
      <h3>AI “user action” crawling increased by over 15x in 2025</h3>
      <a href="#ai-user-action-crawling-increased-by-over-15x-in-2025">
        
      </a>
    </div>
    <p>Most AI bot crawling is done for one of three <a href="https://radar.cloudflare.com/year-in-review/2025/#ai-crawler-traffic-by-purpose"><u>purposes</u></a>: training, which gathers Web site content for AI model training; search, which indexes Web site content for search functionality available on AI platforms; and user action, which visits Web sites in response to user questions posed to a chatbot. Note that search crawling may also include crawling for <a href="https://developers.cloudflare.com/ai-search/concepts/what-is-rag/"><u>Retrieval-Augmented Generation (RAG)</u></a>, which enables a content owner to bring their own data into LLM generation without retraining or fine-tuning a model. (A fourth “undeclared” purpose captures traffic from AI bots whose crawling purpose is unclear or unknown.)</p><p>Crawling for model training is responsible for the overwhelming majority of AI crawler traffic, reaching as much as 7-8x search crawling and 32x user action crawling at peak. The training traffic figure is heavily influenced by OpenAI’s GPTBot, and as such, it followed a very similar pattern through the year.</p><p>Crawling for search was strongest through mid-March, when it dropped by approximately 40%. It returned to more gradual growth after that, though it ended the surveyed time period just under 10% lower than the start of the year.</p><p>User action crawling started 2025 with the lowest crawl volume of the three defined purposes, but more than doubled through January and February. It again doubled in early March, and from there, it continued to grow throughout the year, up over 21x from January through early December. This growth maps very closely to the traffic trends seen for OpenAI’s ChatGPT-User bot.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Cs9yjb8rpfwiOgfGmYGxx/7e11b9014a69b84af3b7b25cde4e73ac/BLOG-3077_23_-_ai-crawlpurpose-useraction.png" />
          </figure><p><sup><i>User action crawler traffic trends in 2025, worldwide</i></sup></p>
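    <p>The purpose groupings described above boil down to a mapping from bot to declared purpose. The sketch below shows one way to aggregate request counts by purpose; the bot-to-purpose assignments reflect only the bots named in this post, and the request counts are hypothetical.</p>
    <pre><code>from collections import Counter

# Purpose assignments for bots discussed in this post; the real mapping covers many more bots.
BOT_PURPOSE = {
    "GPTBot": "training",
    "ClaudeBot": "training",
    "Bytespider": "training",
    "OAI-SearchBot": "search",
    "ChatGPT-User": "user_action",
}

def purpose_shares(bot_request_counts):
    """bot_request_counts: iterable of (bot_name, request_count) pairs."""
    totals = Counter()
    for bot, count in bot_request_counts:
        totals[BOT_PURPOSE.get(bot, "undeclared")] += count
    grand_total = sum(totals.values())
    return {purpose: round(100 * n / grand_total, 1) for purpose, n in totals.items()}

# Hypothetical daily request counts, purely for illustration:
print(purpose_shares([("GPTBot", 3_200_000), ("OAI-SearchBot", 400_000), ("ChatGPT-User", 100_000)]))
</code></pre>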
    <div>
      <h3>While other AI bots accounted for 4.2% of HTML request traffic, Googlebot alone accounted for 4.5%</h3>
      <a href="#while-other-ai-bots-accounted-for-4-2-of-html-request-traffic-googlebot-alone-accounted-for-4-5">
        
      </a>
    </div>
    <p>AI bots have frequently been in the news during 2025 as content owners raise concerns about the amount of traffic that they are generating, especially as much of it <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>does not translate into</u></a> end users being referred back to the source Web sites. To better understand the impact of AI bot crawling activity, as compared to non-AI bots and human Web usage, we analyzed request traffic for HTML content across Cloudflare’s customer base and <a href="https://radar.cloudflare.com/year-in-review/2025/#ai-traffic-share"><u>classified it</u></a> as coming from a human, an AI bot, or another “non-AI” type of bot. (Note that because we are focusing on just HTML content here, the bot and human shares of traffic will differ from that shown on Radar, which analyzes request traffic for all content types.) Because Googlebot crawls so actively, and is dual-purpose, we have broken its share out separately in this analysis.</p><p>Throughout 2025, we found that traffic from AI bots accounted for an average of 4.2% of HTML requests. The share varied widely throughout the year, dropping as low as 2.4% in early April, and reaching as high as 6.4% in late June.</p><p>By comparison, non-AI bots started 2025 responsible for half of requests to HTML pages, seven percentage points above human-generated traffic. This gap grew as wide as 25 percentage points during the first few days of June. However, these traffic shares began to draw closer together in mid-June, and on September 11 entered a period where the human-generated share of HTML traffic sometimes exceeded that of non-AI bots. As of December 2, human traffic generated 47% of HTML requests, and non-AI bots generated 44%.</p><p>Googlebot is a particularly voracious crawler, and this year it originated 4.5% of HTML requests, a share slightly larger than AI bots in aggregate. Starting the year at just under 2.5%, its share ramped quickly over the next four months, peaking at 11% in late April. It subsequently fell back towards its starting point over the next several months, and then grew again during the second half of the year, ending with a 5% share. This share shift largely mirrors Googlebot’s crawling activity as discussed above.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69Kmxq3C29UO0AM7yWOJmY/411e1fe6e4799ae08cfdfec8783a8a71/BLOG-3077_24_-_ai-aibottrafficshare.png" />
          </figure><p><sup><i>HTML traffic shares by bot type in 2025, worldwide</i></sup></p>
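    <p>The sketch below illustrates the four-way split used in this analysis (Googlebot broken out on its own, other AI bots, non-AI bots, and humans) applied to individual HTML requests. The AI bot token list and the is_known_bot input are illustrative stand-ins, not the classification logic we actually run.</p>
    <pre><code># User-Agent substrings for AI bots named in this post (illustrative, not exhaustive).
AI_BOT_TOKENS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Bytespider")

def classify_html_request(user_agent, content_type, is_known_bot):
    """Four-way split: Googlebot broken out separately, other AI bots, non-AI bots,
    and humans. is_known_bot stands in for whatever bot-detection signal is
    available (a hypothetical input here)."""
    if not content_type.startswith("text/html"):
        return None  # non-HTML responses are out of scope for this analysis
    if "Googlebot" in user_agent:
        return "googlebot"
    if any(token in user_agent for token in AI_BOT_TOKENS):
        return "ai_bot"
    if is_known_bot:
        return "non_ai_bot"
    return "human"

# Illustrative user agent string:
print(classify_html_request("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)", "text/html", True))
</code></pre>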
    <div>
      <h3>Anthropic had the highest crawl-to-refer ratio among the leading AI and search platforms</h3>
      <a href="#anthropic-had-the-highest-crawl-to-refer-ratio-among-the-leading-ai-and-search-platforms">
        
      </a>
    </div>
    <p>We <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/"><u>launched the crawl-to-refer ratio metric on Radar</u></a> on July 1 to track how often a given AI or search platform sends traffic to a site relative to how often it crawls that site. A high ratio means a platform is doing a great deal of crawling while referring few actual human visitors back to a Web site.</p><p>It can be a volatile metric, with the values shifting day by day as crawl activity and referral traffic change. This <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#how-does-this-measurement-work"><u>metric is calculated</u></a> by dividing the total number of requests from relevant user agents associated with a given search or AI platform, where the response was of Content-Type: text/html, by the total number of requests for HTML content where the Referer header contained a hostname associated with that platform. </p><p>Anthropic had the highest <a href="https://radar.cloudflare.com/year-in-review/2025/#crawl-refer-ratio"><u>crawl-to-refer ratios this year</u></a>, reaching as much as 500,000:1, although they were quite erratic from January through May. Both the magnitude and erratic nature of the metric were likely due to sparse referral traffic over that time period. After that, the ratios became more consistent, but remained higher than others, ranging from ~25,000:1 to ~100,000:1.</p><p>OpenAI’s ratios over time were quite spiky, and reached as much as 3,700:1 in March. These shifts may be due to the stabilization of GPTBot crawling activity, coupled with increased usage of ChatGPT search functionality, which includes links back to source Web sites within its responses. Users following those links would increase Referer counts, potentially lowering the ratio. (Assuming that crawl traffic wasn’t increasing at a similar or greater rate.)</p><p>Perplexity had the lowest crawl-to-refer ratios of the major AI platforms, starting the year below 100:1 before spiking in late March above 700:1, concurrent with a spike of crawl traffic seen from PerplexityBot. After that spike settled back down, peak ratio values generally remained below 400:1, and below 200:1 from September onwards.</p><p>Among search platforms, Microsoft’s ratio unexpectedly exhibited a cyclical weekly pattern, reaching its lowest levels on Thursdays, and peaking on Sundays. Peak ratio values were generally in the 50:1 to 70:1 range across the year. Starting the year just over 3:1, Google’s crawl-to-refer ratio increased steadily through April, reaching as high as 30:1. After peaking, it fell somewhat erratically through mid-July, dropping back to 3:1, although it has been slowly increasing through the latter half of 2025. DuckDuckGo’s ratio remained below 1:1 for the first three calendar quarters of 2025, but experienced a sudden jump to 1.5:1 in mid-October and stayed elevated for the remainder of the period.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Z0LM4kJGevPxirhokT85o/401363b41b9f5987fe06976197967d9a/BLOG-3077_25_-_ai-crawltoreferratios.png" />
          </figure><p><sup><i>AI &amp; search platform crawl-to-refer ratios in 2025, worldwide</i></sup></p>
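    <p>As a rough sketch of the calculation described above, the function below divides a platform’s crawl requests for HTML content by the HTML requests referred from that platform, assuming a hypothetical log schema with user_agent, referer, and content_type fields; the crawler tokens and hostnames in the example are illustrative.</p>
    <pre><code>from urllib.parse import urlparse

def crawl_to_refer_ratio(log_rows, crawler_tokens, platform_hosts):
    """log_rows: dicts with 'user_agent', 'referer', and 'content_type' fields
    (hypothetical log schema). Divides HTML requests made by the platform's
    crawlers by HTML requests whose Referer hostname belongs to the platform."""
    crawls = 0
    refers = 0
    for row in log_rows:
        if not row["content_type"].startswith("text/html"):
            continue
        if any(token in row["user_agent"] for token in crawler_tokens):
            crawls += 1
            continue
        host = urlparse(row.get("referer") or "").hostname or ""
        if any(host == h or host.endswith("." + h) for h in platform_hosts):
            refers += 1
    return crawls / refers if refers else None

# Illustrative invocation for one AI platform:
# crawl_to_refer_ratio(rows, crawler_tokens=("GPTBot", "OAI-SearchBot"), platform_hosts=("openai.com", "chatgpt.com"))
</code></pre>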
    <div>
      <h3>AI crawlers were the most frequently fully disallowed user agents found in robots.txt files</h3>
      <a href="#ai-crawlers-were-the-most-frequently-fully-disallowed-user-agents-found-in-robots-txt-files">
        
      </a>
    </div>
    <p>The robots.txt file, formally defined in <a href="https://www.rfc-editor.org/rfc/rfc9309.html"><u>RFC 9309</u></a> as the Robots Exclusion Protocol, is a text file that content owners can use to signal to Web crawlers which parts of a Web site the crawlers are allowed to access, using directives to explicitly allow or disallow search and AI crawlers from their whole site, or just parts of it. The directives within the file are effectively a “keep out” sign and don’t provide any formal access control. Having said that, Cloudflare’s <a href="https://blog.cloudflare.com/control-content-use-for-ai-training/#putting-up-a-guardrail-with-cloudflares-managed-robots-txt"><u>managed robots.txt</u></a> feature automatically updates a site’s existing robots.txt or creates a robots.txt file on the site that includes directives asking popular AI bot operators to not use the content for AI model training. In addition, our <a href="https://blog.cloudflare.com/ai-audit-enforcing-robots-txt/"><u>AI Crawl Control</u></a> capabilities can track violations of a site’s robots.txt directives, and give the site owner the ability to block requests from the offending user agent.</p><p>On Cloudflare Radar, we provide <a href="https://radar.cloudflare.com/ai-insights#ai-user-agents-found-in-robotstxt"><u>insight</u></a> into the number of robots.txt files found among our top 10,000 <a href="https://radar.cloudflare.com/domains"><u>domains</u></a> and the full/partial disposition of the allow and disallow directives found within the files for selected crawler user agents. (In this context, “full” refers to directives that apply to the whole site, and “partial” refers to directives that apply to specified paths or file types.) <a href="https://radar.cloudflare.com/year-in-review/2025/#robots-txt"><u>Within the Year in Review microsite</u></a>, we show how the disposition of these directives changed over the course of 2025.</p><p>The user agents with the highest number of fully disallowed directives are those associated with <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">AI crawlers</a>, including GPTBot, ClaudeBot, and <a href="https://www.theatlantic.com/technology/2025/11/common-crawl-ai-training-data/684567/"><u>CCBot</u></a>. The directives for Googlebot and Bingbot crawlers, used for both search indexing and AI training, leaned heavily towards partial disallow, likely focused on cordoning off login endpoints and other non-content areas of a site. For these two bots, directives applying to the whole site remained a small fraction of the total number of disallow directives observed through the year. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hCZ4jExApvVaK2CrEulZO/5eb528b8851868d0c90b56e638ffae86/BLOG-3077_26_-_ai-robotstxt-disallow.png" />
          </figure><p><sup><i>Robots.txt disallow directives by user agent</i></sup></p><p>The number of explicit allow directives found across the discovered robots.txt files was a fraction of the observed disallow directives, likely because allow is the default policy, absent any specific directive. Googlebot had the largest number of explicit allow directives, although over half of them were partial allows. Allow directives targeting AI crawlers were found across fewer domains, with directives targeting OpenAI’s crawlers leaning more towards explicit full allows. </p><p><a href="https://developers.google.com/crawling/docs/crawlers-fetchers/google-common-crawlers#google-extended"><u>Google-Extended</u></a> is a user agent token that web publishers can use to manage whether content that Google crawls from their sites may be used for training <a href="https://deepmind.google/models/gemini/"><u>Gemini models</u></a> or providing site content from the Google Search index to Gemini, and the number of allow directives targeting it tripled during the year — most partially allowed access at the start of the year, while the end of the year saw a larger number of directives that explicitly allowed full site access than those that allowed access to just some of the site’s content. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hCZ4jExApvVaK2CrEulZO/5eb528b8851868d0c90b56e638ffae86/BLOG-3077_26_-_ai-robotstxt-disallow.png" />
          </figure><p><sup><i>Robots.txt allow directives by user agent</i></sup></p>
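    <p>For a sense of how the full/partial disposition can be derived, the simplified parser below walks a robots.txt file, groups rules by user agent, and counts Allow/Disallow directives whose path is exactly “/” as full-site and everything else as partial. It is a minimal sketch that ignores wildcards and other extensions, not the parser behind the Radar data.</p>
    <pre><code>def classify_directives(robots_txt):
    """Tally full-site vs. partial Allow/Disallow directives per user agent token."""
    results = {}
    current_agents = []
    in_rule_block = False
    for raw_line in robots_txt.splitlines():
        line = raw_line.split("#", 1)[0].strip()   # drop comments
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if in_rule_block:
                current_agents = []                # a new group of user-agent lines starts here
                in_rule_block = False
            current_agents.append(value)
        elif field in ("allow", "disallow"):
            in_rule_block = True
            if not value:
                continue                           # an empty path matches nothing
            scope = "full" if value == "/" else "partial"
            for agent in current_agents:
                counts = results.setdefault(agent, {"allow_full": 0, "allow_partial": 0,
                                                    "disallow_full": 0, "disallow_partial": 0})
                counts[field + "_" + scope] += 1
    return results

sample = """User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Disallow: /login/
Allow: /
"""
print(classify_directives(sample))
</code></pre>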
    <div>
      <h3>On Workers AI, Meta’s llama-3-8b-instruct model was the most popular model, and text generation was the most popular task type</h3>
      <a href="#on-workers-ai-metas-llama-3-8b-instruct-model-was-the-most-popular-model-and-text-generation-was-the-most-popular-task-type">
        
      </a>
    </div>
    <p>The AI model landscape is rapidly evolving, with providers regularly releasing more powerful models, capable of tasks like text and image generation, speech recognition, and image classification. Cloudflare collaborates with AI model providers to ensure that <a href="https://developers.cloudflare.com/workers-ai/models/"><u>Workers AI supports these models</u></a> as soon as possible following their release, and we <a href="https://blog.cloudflare.com/replicate-joins-cloudflare/"><u>recently acquired Replicate</u></a> to greatly expand our catalog of supported models. In <a href="https://blog.cloudflare.com/expanded-ai-insights-on-cloudflare-radar/#popularity-of-models-and-tasks-on-workers-ai"><u>February 2025</u></a>, we introduced visibility on Radar into the popularity of publicly available supported <a href="https://radar.cloudflare.com/ai-insights/#workers-ai-model-popularity"><u>models</u></a> as well as the types of <a href="https://radar.cloudflare.com/ai-insights/#workers-ai-task-popularity"><u>tasks</u></a> that these models perform, based on customer account share. </p><p><a href="https://radar.cloudflare.com/year-in-review/2025/#workers-ai-model-and-task-popularity"><u>Throughout the year</u></a>, Meta’s <a href="https://developers.cloudflare.com/workers-ai/models/llama-3-8b-instruct/"><u>llama-3-8b-instruct</u></a> model was dominant, with an account share (36.3%) more than three times larger than the next most popular models, OpenAI’s <a href="https://developers.cloudflare.com/workers-ai/models/whisper/"><u>whisper</u></a> (10.1%) and Stability AI’s <a href="https://developers.cloudflare.com/workers-ai/models/stable-diffusion-xl-base-1.0/"><u>stable-diffusion-xl-base-1.0</u></a> (9.8%). Both Meta and BAAI (Beijing Academy of Artificial Intelligence) had multiple models among the top 10, and the top 10 models had an account share of 89%, with the balance spread across a long tail of other models.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1a3GPm3cqrr0KcK6nCeLRZ/fd5ba576f02518c50fd6efbe312cacae/BLOG-3077_28_-_ai-workersaimostpopularmodels.png" />
          </figure><p><sup><i>Most popular models on Workers AI in 2025, worldwide</i></sup></p><p>Task popularity was driven in large part by the top models, with text generation, text-to-image, and automatic speech recognition topping the list. Text generation was used by 48.2% of Workers AI customer accounts, nearly four times more than the text-to-image share of 12.3% and automatic speech recognition’s 11.0% share. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3JxZW6bB7q0kxnzPrh454m/b057fd945ce521aceaf0e8cd27b14f3d/BLOG-3077_29_-_ai-workersaimostpopulartasks.png" />
          </figure><p><sup><i>Most popular tasks on Workers AI in 2025, worldwide</i></sup></p>
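    <p>For readers curious what running the year’s most popular model involves, the sketch below calls llama-3-8b-instruct through the Workers AI REST endpoint (the same model is also available via the AI binding inside a Worker). The account ID and API token are placeholders, and the result parsing assumes the response shape currently returned by text generation models; treat it as an illustration rather than a complete integration.</p>
    <pre><code>import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]   # placeholder: your Cloudflare account ID
API_TOKEN = os.environ["CF_API_TOKEN"]     # placeholder: an API token with Workers AI access
MODEL = "@cf/meta/llama-3-8b-instruct"     # the year's most popular Workers AI model

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
payload = {"messages": [{"role": "user", "content": "Summarize HTTP/3 in one sentence."}]}

resp = requests.post(url, json=payload, timeout=60,
                     headers={"Authorization": f"Bearer {API_TOKEN}"})
resp.raise_for_status()
print(resp.json()["result"]["response"])   # generated text for text generation models
</code></pre>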
    <div>
      <h2>What’s being crawled</h2>
      <a href="#whats-being-crawled">
        
      </a>
    </div>
    <p>In addition to the year-to-date analysis presented above, below we present point-in-time analyses of what is being crawled. Note that these insights are not included in the Year in Review microsite.</p>
    <div>
      <h3>Crawling by geographic region</h3>
      <a href="#crawling-by-geographic-region">
        
      </a>
    </div>
    <p>Within the AI section of Year in Review, we are looking at traffic from AI bots and crawlers globally, without regard for the geography associated with the account that owns the content being crawled. If we drill down a level geographically, using data from October 2025, and look at which bots generate the most crawling traffic for sites owned by customers with a billing address in a given geographic region, we find that Googlebot accounts for between 35% and 55% of crawler traffic in each region.</p><p>Either OpenAI’s GPTBot or Microsoft’s Bingbot is the second most active crawler in each region, with crawling shares of 13-14%. In the developed economies across North America, Europe, and Oceania, Bingbot maintains a solid lead over AI crawlers. But for sites based in fast-growing markets across South America and Asia, GPTBot holds a slimmer lead over Bingbot.</p><table><tr><th><p><b>Geographic region</b></p></th><th><p><b>Top crawlers</b></p></th></tr><tr><td><p>North America</p></td><td><p>Googlebot (45.5%)</p><p>Bingbot (14.0%)</p><p>Meta-ExternalAgent (7.7%)</p></td></tr><tr><td><p>South America</p></td><td><p>Googlebot (44.2%)</p><p>GPTBot (13.8%)</p><p>Bingbot (13.5%)</p></td></tr><tr><td><p>Europe</p></td><td><p>Googlebot (48.6%)</p><p>Bingbot (13.2%)</p><p>GPTBot (10.8%)</p></td></tr><tr><td><p>Asia</p></td><td><p>Googlebot (39.0%)</p><p>GPTBot (14.0%)</p><p>Bingbot (12.6%)</p></td></tr><tr><td><p>Africa</p></td><td><p>Googlebot (35.8%)</p><p>Bingbot (13.7%)</p><p>GPTBot (13.1%)</p></td></tr><tr><td><p>Oceania</p></td><td><p>Googlebot (54.2%)</p><p>Bingbot (13.8%)</p><p>GPTBot (6.6%)</p></td></tr></table>
    <div>
      <h3>Crawling by industry</h3>
      <a href="#crawling-by-industry">
        
      </a>
    </div>
    <p>In analyzing AI crawler activity by customer industry during October 2025, we found that Retail and Computer Software consistently attracted the most AI crawler traffic, together representing just over 40% of all activity.</p><p>Others in the top 10 accounted for much smaller shares of crawling activity. These top 10 industries accounted for just under 70% of crawling, with the balance spread across a long tail of other industries.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2N55U6SrN7zKkCp66hmhFz/304b038e492e4eda249f3b1fdb664b4a/BLOG-3077_30_-_AI-crawlbyindustry.png" />
          </figure><p><sup><i>Industry share of AI crawling activity, October 2025</i></sup></p>
    <div>
      <h2>Adoption &amp; usage</h2>
      <a href="#adoption-usage">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73LdMVjBBlMOnQGi8LF4oy/f659eaf5d95219e5b54d62b9e16db809/BLOG-3077_31_-_image35.png" />
          </figure>
    <div>
      <h3>iOS devices generated 35% of mobile device traffic globally – and more than half of device traffic in many countries</h3>
      <a href="#ios-devices-generated-35-of-mobile-device-traffic-globally-and-more-than-half-of-device-traffic-in-many-countries">
        
      </a>
    </div>
    <p>The two leading mobile device operating systems globally are <a href="https://en.wikipedia.org/wiki/IOS"><u>Apple’s iOS</u></a> and <a href="https://en.wikipedia.org/wiki/Android_(operating_system)"><u>Google’s Android</u></a>. By analyzing information in the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>User-Agent</u></a> header included with each Web request, we can calculate the distribution of traffic by client operating system throughout the year. Android devices generate the majority of mobile device traffic globally, due to the wide distribution of price points, form factors, and capabilities of such devices.</p><p>Globally, the <a href="https://radar.cloudflare.com/year-in-review/2025/#ios-vs-android"><u>share of traffic from iOS</u></a> grew slightly <a href="https://blog.cloudflare.com/radar-2024-year-in-review/#globally-nearly-one-third-of-mobile-device-traffic-was-from-apple-ios-devices-android-had-a-90-share-of-mobile-device-traffic-in-29-countries-regions-peak-ios-mobile-device-traffic-share-was-over-60-in-eight-countries-regions"><u>year-over-year</u></a>, up two percentage points to 35% in 2025. Looking at the top countries for iOS traffic share, <a href="https://radar.cloudflare.com/year-in-review/2025/mc#ios-vs-android"><u>Monaco</u></a> had the highest share, at 70%, and iOS drove 50% or more of mobile device traffic in a total of 30 countries/regions, including <a href="https://radar.cloudflare.com/year-in-review/2025/dk#ios-vs-android"><u>Denmark</u></a> (65%), <a href="https://radar.cloudflare.com/year-in-review/2025/jp#ios-vs-android"><u>Japan</u></a> (57%), and <a href="https://radar.cloudflare.com/year-in-review/2025/pr#ios-vs-android"><u>Puerto Rico</u></a> (52%).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/btCnb93d23FUPVfkupEGb/79574bfd6f045f88d6331caf488f37a5/BLOG-3077_32_-_adoption-iosvsandroid.png" />
          </figure><p><sup><i>Distribution of mobile device traffic by operating system in 2025, worldwide</i></sup></p><p>For countries/regions with higher Android usage, the shares were significantly larger. Twenty-seven had Android adoption above 90% in 2025, with <a href="https://radar.cloudflare.com/year-in-review/2025/pg#ios-vs-android"><u>Papua New Guinea</u></a> the highest at 97%. <a href="https://radar.cloudflare.com/year-in-review/2025/sd#ios-vs-android"><u>Sudan</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/mw#ios-vs-android"><u>Malawi</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/bd#ios-vs-android"><u>Bangladesh</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/et#ios-vs-android"><u>Ethiopia</u></a> also registered an Android share of 95% or more. Android was responsible for 50% or more of mobile device traffic in 175 countries/regions, with the <a href="https://radar.cloudflare.com/year-in-review/2025/bs#ios-vs-android"><u>Bahamas</u></a>’ 51% share placing it at the bottom of that list. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SAm11BSUjgT2uBOfMT4dU/67d85c4786bb8bfe924f92f28956e5b6/BLOG-3077_33_-_adoption-iosvsandroid-map.png" />
          </figure><p><sup><i>Distribution of iOS and Android usage in 2025</i></sup></p>
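    <p>The sketch below shows, in very simplified form, the kind of User-Agent inspection that underlies this breakdown: look for Android or iPhone/iPad/iPod markers in the header and tally the shares. Radar’s actual parsing handles far more device and OS variants; the example strings are illustrative.</p>
    <pre><code>from collections import Counter

def mobile_os(user_agent):
    """Very simplified mobile OS classification from a User-Agent string."""
    ua = user_agent.lower()
    if "android" in ua:
        return "Android"
    if "iphone" in ua or "ipad" in ua or "ipod" in ua:
        return "iOS"
    return None  # not identifiable as a mobile device here

def mobile_os_shares(user_agents):
    counts = Counter(filter(None, map(mobile_os, user_agents)))
    total = sum(counts.values())
    return {name: round(100 * n / total, 1) for name, n in counts.items()}

print(mobile_os_shares([
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 like Mac OS X) AppleWebKit/605.1.15",
    "Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36",
    "Mozilla/5.0 (Linux; Android 13; SM-G991B) AppleWebKit/537.36",
]))
</code></pre>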
    <div>
      <h3>The shares of global Web requests using HTTP/3 and HTTP/2 both increased slightly in 2025</h3>
      <a href="#the-shares-of-global-web-requests-using-http-3-and-http-2-both-increased-slightly-in-2025">
        
      </a>
    </div>
    <p>HTTP (HyperText Transfer Protocol) is the protocol that makes the Web work. Over the last 30+ years, it has gone through several major revisions. The first standardized version, <a href="https://datatracker.ietf.org/doc/html/rfc1945"><u>HTTP/1.0</u></a>, was adopted in 1996, <a href="https://www.rfc-editor.org/rfc/rfc2616.html"><u>HTTP/1.1</u></a> in 1999, and <a href="https://www.rfc-editor.org/rfc/rfc7540.html"><u>HTTP/2</u></a> in 2015. <a href="https://www.rfc-editor.org/rfc/rfc9114.html"><u>HTTP/3</u></a>, standardized in 2022, marked a significant update, running on top of a new transport protocol known as <a href="https://blog.cloudflare.com/the-road-to-quic/"><u>QUIC</u></a>. Using QUIC as its underlying transport allows <a href="https://www.cloudflare.com/learning/performance/what-is-http3/"><u>HTTP/3</u></a> to establish connections more quickly, as well as deliver improved performance by mitigating the effects of packet loss and network changes. Because QUIC also encrypts connections by default, using HTTP/3 mitigates the risk of interception and tampering attacks. </p><p><a href="https://radar.cloudflare.com/year-in-review/2025/#http-versions"><u>Globally in 2025</u></a>, 50% of requests to Cloudflare were made over HTTP/2, HTTP/1.x accounted for 29%, and the remaining 21% were made via HTTP/3. These shares are largely unchanged <a href="https://radar.cloudflare.com/year-in-review/2024#http-versions"><u>from 2024</u></a> — HTTP/2 and HTTP/3 gained just fractions of a percentage point this year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GdxQoS6Zgx6IPgHapkS8N/07d2d023e2e91f58793e7b4359faa263/BLOG-3077_34_-_adoption-httpversions.png" />
          </figure><p><sup><i>Distribution of traffic by HTTP version in 2025, worldwide</i></sup></p><p>Geographically, usage of HTTP/3 appears to be both increasing and spreading. Last year, we noted that we had found eight countries/regions sending more than a third of their requests over HTTP/3. In 2025, 15 countries/regions sent more than a third of requests over HTTP/3, with Georgia’s 38% adoption just exceeding 2024’s top adoption rate of 37% in Réunion. (Looking at <a href="https://radar.cloudflare.com/adoption-and-usage/ge?dateStart=2025-01-01&amp;dateEnd=2025-12-02"><u>historical data</u></a>, Georgia <a href="https://radar.cloudflare.com/adoption-and-usage/ge?dateStart=2025-01-01&amp;dateEnd=2025-01-07"><u>started the year</u></a> around 46% HTTP/3 adoption, but dropped through the first half of the year before leveling off.) Armenia had the largest increase in HTTP/3 adoption year-over-year, jumping from 25% to 37%. </p><p>Seven countries/regions saw overall HTTP/3 usage levels below 10% due to high levels of bot-originated HTTP/1.x traffic. These are Hong Kong, Dominica, Singapore, Ireland, Iran, Seychelles, and Gibraltar. </p>
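    <p>One practical way to see whether a given site offers HTTP/3 is to look for an h3 entry in its Alt-Svc response header, which is how servers advertise QUIC support to clients. The sketch below does exactly that; note that the probe request itself is made over HTTP/1.1, and the example URL is just an illustration.</p>
    <pre><code>from urllib.request import urlopen

def advertises_http3(url):
    """Check whether a server advertises HTTP/3 in its Alt-Svc response header.
    The probe itself uses HTTP/1.1; it only reads the advertisement."""
    with urlopen(url) as resp:
        alt_svc = resp.headers.get("Alt-Svc", "")
    return "h3=" in alt_svc

print(advertises_http3("https://blog.cloudflare.com/"))
</code></pre>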
    <div>
      <h3>JavaScript-based libraries and frameworks remained integral tools for building Web sites</h3>
      <a href="#javascript-based-libraries-and-frameworks-remained-integral-tools-for-building-web-sites">
        
      </a>
    </div>
    <p>To deliver a modern Web site, developers must capably integrate a growing collection of libraries and frameworks with third-party tools and platforms. All of these components must work together to ensure a performant, feature-rich, problem-free user experience. As in past years, we used <a href="https://radar.cloudflare.com/scan"><u>Cloudflare Radar’s URL Scanner</u></a> to scan Web sites associated with the <a href="https://radar.cloudflare.com/domains"><u>top 5,000 domains</u></a> to identify the <a href="https://radar.cloudflare.com/year-in-review/2025/#website-technologies"><u>most popular technologies and services</u></a> used across eleven categories. </p><p><a href="https://jquery.com/"><u>jQuery</u></a> is self-described as a fast, small, and feature-rich JavaScript library, and our scan found it on 8x as many sites as <a href="https://kenwheeler.github.io/slick/"><u>Slick</u></a>, a JavaScript library used to display image carousels. <a href="https://react.dev/"><u>React</u></a> remained the top JavaScript framework used for building Web interfaces, found on twice as many scanned sites as <a href="https://vuejs.org/"><u>Vue.js</u></a>. PHP, node.js, and Java remained the most popular programming languages/technologies, holding a commanding lead over other languages, including Ruby, Python, Perl, and C.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QBZ6xnDPw9i3y7EBhTqsd/f232925caf1cf3caa91e80a4e16d5ba8/BLOG-3077_35_-_adoption-websitetechnologies.png" />
          </figure><p><sup><i>Top Web site technologies, JavaScript libraries category in 2025</i></sup></p><p><a href="https://wordpress.org/"><u>WordPress</u></a> remained the most popular content management system (CMS), though its share of scanned sites dropped to 47%, with the difference distributed across gains seen by multiple challengers. <a href="https://www.hubspot.com/"><u>HubSpot</u></a> and <a href="https://business.adobe.com/products/marketo.html"><u>Marketo</u></a> remained the top marketing automation platforms, with a combined share 10% higher year-over-year. Among A/B testing tools, <a href="https://vwo.com/"><u>VWO</u></a>’s share grew by eight percentage points year-over-year, extending its lead over <a href="https://www.optimizely.com/"><u>Optimizely</u></a>, while <a href="https://support.google.com/analytics/answer/12979939?hl=en"><u>Google Optimize</u></a>, which was sunsetted in September 2023, saw its share fall from 14% to 4%.</p>
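    <p>Technology detection of this kind generally comes down to fingerprinting: fetching a page and looking for markers left behind by a library, framework, or CMS. The sketch below uses a handful of illustrative patterns (for example, wp-content paths for WordPress or a jQuery script reference); the URL Scanner’s real detection logic is far more extensive.</p>
    <pre><code>import re
from urllib.request import urlopen

# Tiny, illustrative fingerprint set; real technology detection covers far more signals.
FINGERPRINTS = {
    "WordPress": re.compile(r"wp-content|wp-includes", re.I),
    "jQuery": re.compile(r"jquery[.-]", re.I),
    "React": re.compile(r"data-reactroot|__NEXT_DATA__|react-dom", re.I),  # __NEXT_DATA__ implies Next.js, which uses React
    "Vue.js": re.compile(r"data-v-app|vue(?:\.runtime)?(?:\.min)?\.js", re.I),
}

def detect_technologies(url):
    with urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return sorted(name for name, pattern in FINGERPRINTS.items() if pattern.search(html))

print(detect_technologies("https://example.com/"))
</code></pre>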
    <div>
      <h3>One-fifth of automated API requests were made by Go-based clients</h3>
      <a href="#one-fifth-of-automated-api-requests-were-made-by-go-based-clients">
        
      </a>
    </div>
    <p>Application programming interfaces (APIs) are the foundation of modern dynamic Web sites and both Web-based and native applications. These sites and applications rely heavily on automated API calls to provide customized information. Analyzing the Web traffic protected and delivered by Cloudflare, we can identify requests being made to API endpoints. By applying heuristics to the API-related requests that were determined not to be coming from a person using a browser or native mobile application, we can identify the <a href="https://radar.cloudflare.com/year-in-review/2025/#api-client-language-popularity"><u>top languages used to build API clients</u></a>.</p><p>In 2025, 20% of automated API requests were made by Go-based clients, representing significant growth from Go’s 12% share in 2024. Python’s share also increased year-over-year, growing from 9.6% to 17%. Java jumped to third place, reaching an 11.2% share, up from 7.4% in 2024. <a href="https://nodejs.org/"><u>Node.js</u></a>, last year’s second-most popular language, saw its share fall to just 8.3% in 2025, pushing it down to fourth place, while .NET remained at the bottom of the top five, dropping to just 2.3%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tntP1mMqqsH5Bjj0r6xyc/0b03ad6b7257b7b935e102d78ec6bdb4/BLOG-3077_36_-_image56.png" />
          </figure><p><sup><i>Most popular automated API client languages in 2025</i></sup></p>
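    <p>Attribution of automated API traffic to a client language typically leans on the default User-Agent strings that HTTP client libraries send. The sketch below shows that idea with a small, illustrative mapping; it is not the set of heuristics used to produce the chart above.</p>
    <pre><code>from collections import Counter

# Default User-Agent fragments sent by common HTTP client libraries (illustrative,
# not Cloudflare's production heuristics).
LANGUAGE_HINTS = [
    ("go-http-client", "Go"),
    ("python-requests", "Python"),
    ("python-urllib", "Python"),
    ("java/", "Java"),
    ("okhttp", "Java"),        # also used from Kotlin/Android apps
    ("node-fetch", "Node.js"),
    ("axios/", "Node.js"),
]

def client_language(user_agent):
    ua = user_agent.lower()
    for hint, language in LANGUAGE_HINTS:
        if hint in ua:
            return language
    return "Other"

def language_shares(user_agents):
    counts = Counter(client_language(ua) for ua in user_agents)
    total = sum(counts.values())
    return {lang: round(100 * n / total, 1) for lang, n in counts.items()}

print(language_shares(["Go-http-client/2.0", "python-requests/2.32.3", "axios/1.7.2"]))
</code></pre>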
    <div>
      <h3>Google remains the top search engine, with Yandex, Bing, and DuckDuckGo distant followers</h3>
      <a href="#google-remains-the-top-search-engine-with-yandex-bing-and-duckduckgo-distant-followers">
        
      </a>
    </div>
    <p>Cloudflare is in a unique position to measure <a href="https://radar.cloudflare.com/year-in-review/2025/#search-engine-market-share"><u>search engine market share</u></a> because we protect websites and applications for millions of customers. To that end, since the fourth quarter of 2021, we have been publishing quarterly <a href="https://radar.cloudflare.com/reports"><u>reports</u></a> on this data. We use the HTTP <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer"><u>referer header</u></a> to identify the search engine sending traffic to customer sites and applications, and present the market share data as an overall aggregate, as well as broken out by device type and operating system. (Device type and operating system insights are based on the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>User-Agent</u></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Client_hints"><u>Client Hints</u></a> HTTP request headers.)</p><p>Globally, Google referred the most traffic to sites protected and delivered by Cloudflare, with a nearly 90% share in 2025. The other search engines in the top 5 include Bing (3.1%), Yandex (2.0%), Baidu (1.4%), and DuckDuckGo (1.2%). Looking at trends across the year, Yandex dropped from a 2.5% share in May to a 1.5% share in July, while Baidu grew from 0.9% in April to 1.6% in June.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7As9GnMsW9ru3h0RaH0zoX/55e396801f33af890b24aa871f989be5/BLOG-3077_37_-_adoption-searchenginemarketshare.png" />
          </figure><p><sup><i>Overall search engine market share in 2025, worldwide</i></sup></p><p>Yandex users are primarily based in <a href="https://radar.cloudflare.com/year-in-review/2025/ru#search-engine-market-share"><u>Russia</u></a>, where the domestic platform holds a 65% market share, almost double that of Google at 34%. In the <a href="https://radar.cloudflare.com/year-in-review/2025/cz#search-engine-market-share"><u>Czech Republic</u></a>, users prefer Google (84%), but local search engine Seznam’s 7.7% share is a strong showing compared to the second place search engines in other countries. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fUk9r7hXP0SaMiFiFa3UK/ea4e213f4ac2fb55273e731eacdc10a4/BLOG-3077_38_-_adoption-searchenginemarketshare-czechrepublic.png" />
          </figure><p><sup><i>Overall search engine market share in 2025, Czech Republic</i></sup></p><p>For traffic from “desktop” systems aggregated globally, Google’s market share drops to about 80%, while Bing’s jumps to nearly 11%. This is likely driven by the continued market dominance of Windows-based systems: On Windows, Google refers just 76% of traffic, while Bing refers about 14%. For traffic from mobile devices, Google holds almost 93% of market share, with the same share seen for traffic from both Android and iOS devices.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ATWm3D3Jp8v0Pob2qibkw/71869e620f0ec7fb42e636d8da6840d7/BLOG-3077_39_-_adoption-searchenginemarketshare-windows.png" />
          </figure><p><sup><i>Overall search engine market share in 2025, Windows-based systems</i></sup></p><p>For additional details, including search engines aggregated under “Other”, please refer to the quarterly <a href="https://radar.cloudflare.com/reports/search-engines"><u>Search Engine Referral Reports</u></a> on Cloudflare Radar.</p>
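    <p>The referral attribution described above reduces to mapping the hostname in the Referer header to a search engine. The sketch below shows that mapping with a small, illustrative hostname list; the real classification covers many more engines and country-specific domains.</p>
    <pre><code>from urllib.parse import urlparse

# Illustrative hostname fragments; real search engines use many country-specific domains.
SEARCH_ENGINES = {
    "google.": "Google",
    "bing.com": "Bing",
    "yandex.": "Yandex",
    "baidu.com": "Baidu",
    "duckduckgo.com": "DuckDuckGo",
    "seznam.cz": "Seznam",
}

def referring_search_engine(referer_header):
    """Map the hostname in a request's Referer header to a search engine, if any."""
    host = urlparse(referer_header or "").hostname or ""
    for fragment, engine in SEARCH_ENGINES.items():
        if fragment in host:
            return engine
    return None

print(referring_search_engine("https://www.google.de/search?q=cloudflare+radar"))  # Google
</code></pre>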
    <div>
      <h3>Chrome remains the top browser across platforms and operating systems – except on iOS, where Safari has the largest share</h3>
      <a href="#chrome-remains-the-top-browser-across-platforms-and-operating-systems-except-on-ios-where-safari-has-the-largest-share">
        
      </a>
    </div>
    <p>Cloudflare is also in a unique position to measure <a href="https://radar.cloudflare.com/year-in-review/2025/#browser-market-share"><u>browser market share</u></a>, and we have been publishing quarterly <a href="https://radar.cloudflare.com/reports"><u>reports</u></a> on the topic for several years. To identify the browser and associated operating system making content requests, we use information from the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>User-Agent</u></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Client_hints"><u>Client Hints</u></a> HTTP headers. We present browser market share data as an overall aggregate, as well as broken out by device type and operating system. Note that the shares of browsers available on both desktop and mobile devices, such as Google Chrome or Apple Safari, are presented in aggregate.</p><p>Globally, two-thirds of request traffic to Cloudflare came from Chrome in 2025, similar to its share last year. Safari, available exclusively on Apple devices, was the second most-popular browser, with a 15.4% market share. They were followed by Microsoft Edge (7.4%), Mozilla Firefox (3.7%) and Samsung Internet (2.3%). </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NH8hVOr8lxytXTdrCARAk/ac7173e80db1b39da11c2564a3ae4980/BLOG-3077_40_-_adoption-browsermarketshare.png" />
          </figure><p><sup><i>Overall browser market share in 2025, worldwide</i></sup></p><p>In <a href="https://radar.cloudflare.com/year-in-review/2025/ru#browser-market-share"><u>Russia</u></a>, Chrome remains the most popular with a 44% share, but the domestic Yandex Browser comes in a strong second with a 33% market share, as compared to the sub-10% shares for Safari, Edge, and Opera. Interestingly, the Yandex Browser actually beat Chrome by a percentage point (39% to 38%) in June before giving up significant market share to Chrome as the year progressed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PGmYbREZR4xvALWdrRqzF/737b9550291d3d5cacfc85cbe72e3551/BLOG-3077_41_-_adoption-browsermarketshare-Russia.png" />
          </figure><p><sup><i>Overall browser market share in 2025, Russia</i></sup></p><p>As the default browser on iOS, Safari is far and away the most popular on such devices, with a 79% market share, four times Chrome’s 19% share. Less than 1% of requests come from DuckDuckGo, Firefox, and QQ Browser (developed in China by Tencent). In contrast, on Android, 85% of requests are from Chrome, while vendor-provided Samsung Internet is a distant second with a 6.6% share. Huawei Browser, another vendor-provided browser, is third at just 1%. And despite being the default browser on Windows, Edge’s 19% share pales in comparison to Chrome, which leads with a 69% share on that operating system.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zXj6HWrNSNdAWnDXIrLc5/79b47c9671a1c7691b1fde68749d5812/BLOG-3077_42_-_adoption-browsermarketshare-ios.png" />
          </figure><p><sup><i>Overall browser market share in 2025, iOS devices</i></sup></p><p>For additional details, including browsers aggregated under “Other”, please refer to the quarterly <a href="https://radar.cloudflare.com/reports/browser"><u>Browser Market Share Reports</u></a> on Cloudflare Radar.</p>
    <div>
      <h2>Connectivity</h2>
      <a href="#connectivity">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ZkJ7IDSXBHzKnK9RSNHsY/f042e40576b2380a77282831fe194398/BLOG-3077_43_-_image13.png" />
          </figure>
    <div>
      <h3>Almost half of the 174 major Internet outages observed around the world in 2025 were due to government-directed regional and national shutdowns of Internet connectivity</h3>
      <a href="#almost-half-of-the-174-major-internet-outages-observed-around-the-world-in-2025-were-due-to-government-directed-regional-and-national-shutdowns-of-internet-connectivity">
        
      </a>
    </div>
    <p>Internet outages continue to be an ever-present threat, and the potential impact of these outages continues to grow, as they can lead to economic losses, disrupted educational and government services, and limited communications. During 2025, we covered significant Internet disruptions and their associated causes in our quarterly summary posts (<a href="https://blog.cloudflare.com/q1-2025-internet-disruption-summary/"><u>Q1</u></a>, <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/"><u>Q2</u></a>, <a href="https://blog.cloudflare.com/q3-2025-internet-disruption-summary/"><u>Q3</u></a>) as well as standalone posts covering major outages in <a href="https://blog.cloudflare.com/how-power-outage-in-portugal-spain-impacted-internet/"><u>Portugal &amp; Spain</u></a> and <a href="https://blog.cloudflare.com/nationwide-internet-shutdown-in-afghanistan/"><u>Afghanistan</u></a>. The <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a> tracks these Internet outages, and uses Cloudflare traffic data for insights into their scope and duration.</p><p>Nearly half of the <a href="https://radar.cloudflare.com/year-in-review/2025/#internet-outages"><u>observed outages</u></a> this year were related to Internet shutdowns intended to prevent cheating on academic exams. Countries including <a href="https://x.com/CloudflareRadar/status/1930310203083210760"><u>Iraq</u></a>, <a href="https://x.com/CloudflareRadar/status/1952002641896288532"><u>Syria</u></a>, and <a href="https://blog.cloudflare.com/q3-2025-internet-disruption-summary/#sudan"><u>Sudan</u></a> again implemented regular multi-hour shutdowns over the course of several weeks during exam periods. Other government-directed shutdowns in <a href="https://x.com/CloudflareRadar/status/1924531952993841639"><u>Libya</u></a> and <a href="https://x.com/CloudflareRadar/status/1983502557868666900"><u>Tanzania</u></a> were implemented in response to protests and civil unrest, while in <a href="https://blog.cloudflare.com/nationwide-internet-shutdown-in-afghanistan/"><u>Afghanistan</u></a>, the Taliban ordered the shutdown of fiber optic Internet connectivity in multiple provinces as part of a drive to “prevent immorality.”</p><p>Cable cuts, affecting both submarine and domestic fiber optic infrastructure, were also a leading cause of Internet disruptions in 2025. These cuts resulted in network providers in countries/regions including the <a href="https://blog.cloudflare.com/q3-2025-internet-disruption-summary/#texas-united-states"><u>United States</u></a>, <a href="https://blog.cloudflare.com/q3-2025-internet-disruption-summary/#south-africa"><u>South Africa</u></a>, <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#digicel-haiti"><u>Haiti</u></a>, <a href="https://blog.cloudflare.com/q3-2025-internet-disruption-summary/#pakistan-united-arab-emirates"><u>Pakistan</u></a>, and <a href="https://x.com/CloudflareRadar/status/1910709632756019219"><u>Hong Kong</u></a> experiencing service disruptions lasting from several hours to several days. 
Other notable outages include one caused by a <a href="https://bsky.app/profile/radar.cloudflare.com/post/3ltf6jtxd5s2p"><u>fire</u></a> in a telecom building in Cairo, Egypt, which disrupted Internet connectivity across multiple service providers for several days, and another in <a href="https://x.com/CloudflareRadar/status/1983188999461319102"><u>Jamaica</u></a>, where damage caused by Hurricane Melissa resulted in lower Internet traffic from the island for over a week.</p><p>Within the <a href="https://radar.cloudflare.com/year-in-review/2025#internet-outages"><u>timeline</u></a> on the Year in Review microsite, hovering over a dot will display information about that outage, and clicking on it will link to additional insights.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gC9MsV4mObyNllxyQPzDy/cfe5dcee5e751e00309f7b4f6902a03e/BLOG-3077_44_-_connectivity-internetoutages.png" />
          </figure><p><sup><i>Over 170 major Internet outages were observed around the world during 2025</i></sup></p>
    <div>
      <h3>Globally, less than a third of dual-stack requests were made over IPv6, while in India, over two-thirds were</h3>
      <a href="#globally-less-than-a-third-of-dual-stack-requests-were-made-over-ipv6-while-in-india-over-two-thirds-were">
        
      </a>
    </div>
    <p>Available IPv4 address space has been largely exhausted <a href="https://ipv4.potaroo.net/"><u>for a decade or more</u></a>, though solutions like <a href="https://en.wikipedia.org/wiki/Network_address_translation"><u>Network Address Translation</u></a> have enabled network providers to stretch limited IPv4 resources. This has served in part to slow the adoption of <a href="https://www.rfc-editor.org/rfc/rfc1883"><u>IPv6</u></a>, which was designed in the mid-1990s as a successor protocol to IPv4 and offers an expanded address space intended to better support the expected growth in the number of Internet-connected devices.</p><p>For nearly 15 years, Cloudflare has been a vocal and active advocate for IPv6 as well, launching solutions including <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>Automatic IPv6 Gateway</u></a> in 2011, which enabled free IPv6 support for all of our customers, and turning on <a href="https://blog.cloudflare.com/i-joined-cloudflare-on-monday-along-with-5-000-others"><u>IPv6 support by default for all of our customers</u></a> in 2014. Server-side support, however, is only half of what is needed to drive IPv6 adoption, because end user connections need to support it as well. By aggregating and analyzing the IP version used for requests made to Cloudflare across the year, we can get insight into the distribution of traffic across IPv6 and IPv4.</p><p><a href="https://radar.cloudflare.com/year-in-review/2025/#ipv6-adoption"><u>Globally</u></a>, 29% of IPv6-capable (“<a href="https://www.techopedia.com/definition/19025/dual-stack-network"><u>dual-stack</u></a>”) requests for content were made over IPv6, up a percentage point from <a href="https://radar.cloudflare.com/year-in-review/2024#ipv6-adoption"><u>28% in 2024</u></a>. India again topped the list with an IPv6 adoption rate of 67%, followed by just three other countries/regions (<a href="https://radar.cloudflare.com/year-in-review/2025/my#ipv6-adoption"><u>Malaysia</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/sa#ipv6-adoption"><u>Saudi Arabia</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/uy#ipv6-adoption"><u>Uruguay</u></a>) that also made more than half of such requests over IPv6, the same as last year. Some of the largest gains were seen in <a href="https://radar.cloudflare.com/year-in-review/2025/bz#ipv6-adoption"><u>Belize</u></a>, which grew from 4.3% to 24% year-over-year, and <a href="https://radar.cloudflare.com/year-in-review/2025/qa#ipv6-adoption"><u>Qatar</u></a>, which saw its adoption nearly double to 33% in 2025. Unfortunately, some countries/regions still lag the leaders, with 94 seeing adoption rates below 10%, including <a href="https://radar.cloudflare.com/year-in-review/2025/ru#ipv6-adoption"><u>Russia</u></a> (8.6%), <a href="https://radar.cloudflare.com/year-in-review/2025/ie#ipv6-adoption"><u>Ireland</u></a> (6.5%), and <a href="https://radar.cloudflare.com/year-in-review/2025/hk#ipv6-adoption"><u>Hong Kong</u></a> (3.0%). Even further behind are the 20 countries/regions with adoption rates below 1%, including <a href="https://radar.cloudflare.com/year-in-review/2025/tz#ipv6-adoption"><u>Tanzania</u></a> (0.9%), <a href="https://radar.cloudflare.com/year-in-review/2025/sy#ipv6-adoption"><u>Syria</u></a> (0.3%), and <a href="https://radar.cloudflare.com/year-in-review/2025/gi#ipv6-adoption"><u>Gibraltar</u></a> (0.1%).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NkFC1eLbAPdpJv6WPkvHT/26a260f8068656f8ed4aa0a28009a5d9/BLOG-3077_45_-_connectivity-ipv6.png" />
          </figure><p><sup><i>Distribution of traffic by IP version in 2025, worldwide</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Mzu2k3Xs1YZVNhpZpx9xH/23d19f5057b52690e2def65bc2c9c64a/BLOG-3077_46_-_connectivity-ipv6-top5.png" />
          </figure><p><sup><i>Top five countries for IPv6 adoption in 2025</i></sup></p>
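    <p>The underlying measurement is straightforward: classify each request’s client address as IPv4 or IPv6 and compute the IPv6 share. The sketch below assumes the dual-stack qualification (counting only clients that could have used IPv6) has already been applied to the input, and uses documentation addresses for the example.</p>
    <pre><code>import ipaddress

def ipv6_share(client_ips):
    """Percentage of requests arriving over IPv6. The dual-stack qualification
    used on Radar is assumed to have been applied when selecting client_ips."""
    total = 0
    v6 = 0
    for ip in client_ips:
        total += 1
        if isinstance(ipaddress.ip_address(ip), ipaddress.IPv6Address):
            v6 += 1
    return round(100 * v6 / total, 1) if total else 0.0

# Documentation/example addresses only:
print(ipv6_share(["2001:db8::1", "198.51.100.7", "2001:db8::2"]))  # 66.7
</code></pre>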
    <div>
      <h3>European countries had some of the highest download speeds, all above 200 Mbps. Spain remained consistently among the top locations across measured Internet quality metrics</h3>
      <a href="#european-countries-had-some-of-the-highest-download-speeds-all-above-200-mbps-spain-remained-consistently-among-the-top-locations-across-measured-internet-quality-metrics">
        
      </a>
    </div>
    <p>Over the past decade or so, we have turned to Internet speed tests for many purposes: keeping our service providers honest, troubleshooting a problematic connection, or showing off a particularly high download speed on social media. In fact, we’ve become conditioned to focus on download speeds as the primary measure of a connection’s quality. While it is absolutely an important metric, for increasingly popular use cases — like videoconferencing, live-streaming, and online gaming — strong upload speeds and low latency are also critical. However, even when Internet providers offer service tiers that include high symmetric speeds and lower latency, consumer adoption is often mixed due to cost, availability, or other issues.</p><p>Tests on <a href="https://speed.cloudflare.com/"><u>speed.cloudflare.com</u></a> measure both download and upload speeds, as well as loaded and unloaded latency. By aggregating the results of <a href="https://radar.cloudflare.com/year-in-review/2025/#internet-quality"><u>tests taken around the world during 2025</u></a>, we can get a country/region perspective on average values for these <a href="https://developers.cloudflare.com/radar/glossary/#connection-quality"><u>connection quality</u></a> metrics, as well as insight into the distribution of the measurements.</p><p>Europe was well-represented among those with the highest average download speeds in 2025. <a href="https://radar.cloudflare.com/year-in-review/2025/es#internet-quality"><u>Spain</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/hu#internet-quality"><u>Hungary</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/pt#internet-quality"><u>Portugal</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/dk#internet-quality"><u>Denmark</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/ro#internet-quality"><u>Romania</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/fr#internet-quality"><u>France</u></a> were all in the top 10, with both Spain and Hungary averaging download speeds above 300 Mbps. Spain’s average grew by 25 Mbps from 2024, while Hungary’s jumped 46 Mbps. Meanwhile, Asian countries had many of the highest average upload speeds, with <a href="https://radar.cloudflare.com/year-in-review/2025/kr#internet-quality"><u>South Korea</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/mo#internet-quality"><u>Macau</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/sg#internet-quality"><u>Singapore</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/jp#internet-quality"><u>Japan</u></a> reaching the top 10, all seeing averages in excess of 130 Mbps.</p><p>But it was Spain that topped the list for the upload metric as well, at 206 Mbps, up 13 Mbps from 2024. The country’s strong showing across both speed metrics is potentially attributable to <a href="https://commission.europa.eu/projects/unico-broadband_en"><u>“UNICO-Broadband,”</u></a> a “<i>call for projects by telecommunications operators aiming at the deployment of high-speed broadband infrastructure capable of providing services at symmetric speeds of at least 300 Mbps, scalable at 1 Gbps,</i>” which aimed to cover 100% of the population in 2025.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pZCAQEMEmbUjXkIUzAwUP/8aec93e96debe19d496396a6e6cd1db7/BLOG-3077_47_-_connectivity-downloadspeeds.png" />
          </figure><p><sup><i>Countries/regions with the highest download speeds in 2025, worldwide</i></sup></p><p>As noted above, low latency connections are needed to provide users with good <a href="https://www.screenbeam.com/wifihelp/wifibooster/how-to-reduce-latency-or-lag-in-gaming-2/#:~:text=Latency%20is%20measured%20in%20milliseconds,%2C%2020%2D40ms%20is%20optimal."><u>gaming</u></a> and <a href="https://www.haivision.com/glossary/video-latency/#:~:text=Low%20latency%20is%20typically%20defined,and%20streaming%20previously%20recorded%20events."><u>videoconferencing/streaming</u></a> experiences. The <a href="https://blog.cloudflare.com/introducing-radar-internet-quality-page/#connection-speed-quality-data-is-important"><u>latency metric</u></a> can be broken down into loaded and idle latency. The former measures latency on a loaded connection, where bandwidth is actively being consumed, while the latter measures latency on an “idle” connection, when there is no other network traffic present. (These definitions are from the speed test application’s perspective.) </p><p>In 2025, a number of European countries were among those with both the lowest idle and loaded latencies. For average idle latency, <a href="https://radar.cloudflare.com/year-in-review/2025/is#internet-quality"><u>Iceland</u></a> measured the lowest at 13 ms, just 2 ms better than <a href="https://radar.cloudflare.com/year-in-review/2025/md#internet-quality"><u>Moldova</u></a>. In addition to these two, <a href="https://radar.cloudflare.com/year-in-review/2025/pt#internet-quality"><u>Portugal</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/es#internet-quality"><u>Spain</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/hu#internet-quality"><u>Hungary</u></a> also ranked among the top 10, all with average idle latencies below 20 ms. Moldova topped the list of countries/regions with the lowest average loaded latency, at 73 ms. Hungary, Spain, <a href="https://radar.cloudflare.com/year-in-review/2025/be#internet-quality"><u>Belgium</u></a>, Portugal, <a href="https://radar.cloudflare.com/year-in-review/2025/sk#internet-quality"><u>Slovakia</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/si#internet-quality"><u>Slovenia</u></a> were also part of the top 10, all with average loaded latencies below 100 ms.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yFdtVsghuBNrCe0sqdEuS/1ed59c6a972f2c511ed567ef69863f39/BLOG-3077_48_-_connectivity-latency-moldova.png" />
          </figure><p><sup><i>Measured idle/loaded latency, Moldova</i></sup></p>
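    <p>To make these two metrics concrete, the short Lua sketch below (illustrative only, with hypothetical sample values) summarizes idle and loaded latency from a set of round-trip-time measurements and reports the increase under load:</p>
            <pre><code>-- Illustrative only: summarize idle vs. loaded latency from hypothetical
-- round-trip-time samples (in milliseconds). "Loaded" samples are taken while
-- transfers are saturating the connection; "idle" samples are taken with no
-- competing traffic.
local function median(samples)
  local sorted = {}
  for i, v in ipairs(samples) do sorted[i] = v end
  table.sort(sorted)
  local n = #sorted
  if n % 2 == 1 then
    return sorted[(n + 1) / 2]
  end
  return (sorted[n / 2] + sorted[n / 2 + 1]) / 2
end

local idle_rtts   = { 14, 13, 15, 13, 14 }
local loaded_rtts = { 70, 85, 78, 90, 74 }

print("idle latency (ms): " .. median(idle_rtts))
print("loaded latency (ms): " .. median(loaded_rtts))
print("increase under load (ms): " .. (median(loaded_rtts) - median(idle_rtts)))</code></pre>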
    <div>
      <h3>London and Los Angeles were hotspots for Cloudflare speed test activity in 2025</h3>
      <a href="#london-and-los-angeles-were-hotspots-for-cloudflare-speed-test-activity-in-2025">
        
      </a>
    </div>
    <p>As we discussed above, the speed test at <a href="http://speed.cloudflare.com"><u>speed.cloudflare.com</u></a> measures a user’s connection speeds and latency. We reviewed the aggregate findings from those tests, highlighting the countries/regions with the best results. However, we also wondered about test activity around the world — where are users most concerned about their connection quality, and how frequently do they perform tests? <a href="https://radar.cloudflare.com/year-in-review/2025/#speed-tests"><u>A new animated Year in Review visualization illustrates speed test activity</u></a>, aggregated weekly.</p><p>Data is aggregated at a regional level and the associated activity is plotted on the map, with circles sized based on the number of tests taken each week. Note that locations with fewer than 100 speed tests per week are not plotted. Looking at test volume across the year, the greater London and Los Angeles areas were most active, as were Tokyo, Hong Kong, and several U.S. cities.</p><p>Animating the graph to see changes across the year, a number of week-over-week surges in test volume are visible. These include surges in the Nairobi, Kenya, area during the seven-day period ending June 10; in the Tehran, Iran, area during the period ending July 29; across multiple areas in Russia during the period ending August 5; and in the Karnataka, India, area during the period ending October 28. It isn’t clear what drove these increases in test volume — the <a href="https://radar.cloudflare.com/outage-center?dateStart=2025-01-01&amp;dateEnd=2025-12-02"><u>Cloudflare Radar Outage Center</u></a> does not show any observed Internet outages impacting those areas around those times, so it is unlikely to be subscribers testing the restoration of connectivity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73PtVEdvkENBbF5O8qD8ij/482d15f05359cbf6ae24fb606ed61793/BLOG-3077_49_-_connectivity-globalspeedtestactivity.png" />
          </figure><p><sup><i>Cloudflare speed test activity by location in 2025</i></sup></p>
    <div>
      <h3>More than half of request traffic comes from mobile devices in 117 countries/regions</h3>
      <a href="#more-than-half-of-request-traffic-comes-from-mobile-devices-in-117-countries-regions">
        
      </a>
    </div>
    <p>For better or worse, over the last quarter-century, mobile devices have become an indispensable part of everyday life. Adoption varies around the world — statistics from <a href="https://blogs.worldbank.org/en/voices/Mobile-phone-ownership-is-widespread-Why-is-digital-inclusion-still-lagging"><u>the World Bank</u></a> show multiple countries/regions with mobile phone ownership above 90%, while in several others, ownership rates are below 10%, as of October 2025. In some countries/regions, mobile devices primarily connect to the Internet via Wi-Fi, while other countries/regions are “mobile first,” where 4G/5G services are the primary means of Internet access.</p><p>Information contained within the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>User-Agent</u></a> header included with each request to Cloudflare enables us to categorize it as coming from a mobile, desktop, or other type of device. <a href="https://radar.cloudflare.com/year-in-review/2025/#mobile-vs-desktop"><u>Aggregating this categorization globally across 2025</u></a> found that 43% of requests were from mobile devices, up from <a href="https://radar.cloudflare.com/year-in-review/2024#mobile-vs-desktop"><u>41% in 2024</u></a>. The balance came from “classic” laptop and desktop type devices. Similar to an observation <a href="https://blog.cloudflare.com/radar-2024-year-in-review/#41-3-of-global-traffic-comes-from-mobile-devices-in-nearly-100-countries-regions-the-majority-of-traffic-comes-from-mobile-devices"><u>made last year</u></a>, these traffic shares were in line with those measured in Year in Review reports dating back to 2022, suggesting that mobile device usage has achieved a “steady state.”</p><p>In 117 countries/regions, more than half of requests came from mobile devices, led by <a href="https://radar.cloudflare.com/year-in-review/2025/sd#mobile-vs-desktop"><u>Sudan</u></a> and <a href="https://radar.cloudflare.com/year-in-review/2025/mw#mobile-vs-desktop"><u>Malawi</u></a> at 75% and 74% respectively. Five other countries/regions — <a href="https://radar.cloudflare.com/year-in-review/2025/sz#mobile-vs-desktop"><u>Eswatini (Swaziland)</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/ye#mobile-vs-desktop"><u>Yemen</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/bw#mobile-vs-desktop"><u>Botswana</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2025/mz#mobile-vs-desktop"><u>Mozambique</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2025/so#mobile-vs-desktop"><u>Somalia</u></a> — also had mobile request shares above 70% in 2025, in line with <a href="https://voxdev.org/topic/understanding-mobile-phone-and-internet-use-across-world"><u>strong mobile phone ownership</u></a> in those countries/regions. Among countries/regions with low mobile device traffic share, <a href="https://radar.cloudflare.com/year-in-review/2025/gi#mobile-vs-desktop"><u>Gibraltar</u></a> was the only one below 10% (at 5.1%), with just six others originating less than a quarter of requests from mobile devices. This is fewer than in <a href="https://radar.cloudflare.com/year-in-review/2024#mobile-vs-desktop"><u>2024</u></a>, when a dozen countries/regions had a mobile share below 25%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fcUaDzUxKouChLsJzfQf5/13e3eb93633c6d5ed017378022218505/BLOG-3077_50_-_connectivity-mobiledesktop.png" />
          </figure><p><sup><i>Distribution of traffic by device type in 2025, worldwide</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6X1wD6uZUA4eB5vyf3vwl6/72a9445980b21e2917424eca151c77b4/BLOG-3077_51_-_connectivity-mobiledesktop-map.png" />
          </figure><p><sup><i>Global distribution of traffic by device type in 2025</i></sup></p>
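    <p>As a rough illustration of the User-Agent based categorization described above, the Lua sketch below applies a simplistic substring check. It is illustrative only; real device classification relies on far richer parsing of the User-Agent string and other signals, and the marker list here is an arbitrary assumption.</p>
            <pre><code>-- Illustrative only: a simplistic mobile/desktop User-Agent check.
-- The marker list and the two-way split are deliberate oversimplifications.
local MOBILE_MARKERS = { "Mobile", "Android", "iPhone", "iPad" }

local function device_type(user_agent)
  for _, marker in ipairs(MOBILE_MARKERS) do
    if user_agent:find(marker, 1, true) then
      return "mobile"
    end
  end
  return "desktop"
end

print(device_type("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"))  -- mobile
print(device_type("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))               -- desktop</code></pre>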
    <div>
      <h2>Security</h2>
      <a href="#security">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1X1yOLxEicpVw5U4ukcAQF/f7d0b02841a8220151a66cd6f0226302/BLOG-3077_52_-_image18.png" />
          </figure>
    <div>
      <h3>6% of global traffic over Cloudflare’s network was mitigated by our systems — either as potentially malicious or for customer-defined reasons</h3>
      <a href="#6-of-global-traffic-over-cloudflares-network-was-mitigated-by-our-systems-either-as-potentially-malicious-or-for-customer-defined-reasons">
        
      </a>
    </div>
    <p>Cloudflare automatically mitigates attack traffic targeting customer websites and applications using <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/"><u>DDoS</u></a> mitigation techniques or <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>Web Application Firewall (WAF) Managed Rules</u></a>, protecting them from a variety of threats posed by malicious actors. We also enable customers to mitigate traffic, even if it isn’t malicious, using techniques like <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/"><u>rate-limiting</u></a> requests or <a href="https://developers.cloudflare.com/waf/tools/ip-access-rules/"><u>blocking all traffic from a given location</u></a>. The need to do so may be driven by regulatory or business requirements. We looked at the overall share of traffic to Cloudflare’s network throughout 2025 that was mitigated for any reason, as well as the share that was blocked as a DDoS attack or by WAF Managed Rules.</p><p>This year, <a href="https://radar.cloudflare.com/year-in-review/2025/#mitigated-traffic"><u>6.2% of global traffic was mitigated</u></a>, down a quarter of a percentage point <a href="https://radar.cloudflare.com/year-in-review/2024#mitigated-traffic"><u>from 2024</u></a>. 3.3% of traffic was mitigated as a DDoS attack, or by managed rules, up one-tenth of a percentage point year over year. General mitigations were applied to more than 10% of the traffic coming from over 30 countries/regions, while 14 countries/regions had DDoS/WAF mitigations applied to more than 10% of originated traffic. Both counts were down in comparison to 2024. </p><p>Equatorial Guinea had the largest shares of mitigated traffic with 40% generally mitigated and 29% with DDoS/WAF mitigations applied. These shares grew over the last year, from 26% (general) and 19% (DDoS/WAF). In contrast, Dominica had the smallest shares of mitigated traffic, with just 0.7% of traffic mitigated, with DDoS/WAF mitigations applied to just 0.1%.</p><p>The large increase in mitigated traffic seen during July in the graph below is due to a very large DDoS attack campaign that primarily targeted a single Cloudflare customer domain.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xzs0onu96x2qCwGRNHrPW/a730564c03b600f793ae92df8ad38ee8/BLOG-3077_53_-_security-mitigatedtraffic.png" />
          </figure><p><sup><i>Mitigated traffic trends in 2025, worldwide</i></sup></p>
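    <p>As an illustration of one of the customer-defined mitigation techniques mentioned above, the following Lua sketch implements a fixed-window rate limiter. It is a minimal sketch with hypothetical limits, not Cloudflare’s actual rate-limiting implementation.</p>
            <pre><code>-- Illustrative only: fixed-window rate limiting keyed by client identifier.
-- WINDOW_SECONDS and MAX_REQUESTS are hypothetical values.
local WINDOW_SECONDS = 60
local MAX_REQUESTS   = 100
local counters = {}   -- key -&gt; { window_start, count }

local function allow(key, now)
  local c = counters[key]
  if c == nil or now - c.window_start &gt;= WINDOW_SECONDS then
    c = { window_start = now, count = 0 }
    counters[key] = c
  end
  c.count = c.count + 1
  return c.count &lt;= MAX_REQUESTS
end

-- e.g. keyed by client IP: allowed until the per-minute allowance is used up
print(allow("203.0.113.7", os.time()))</code></pre>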
    <div>
      <h3>40% of global bot traffic came from the United States, with Amazon Web Services and Google Cloud originating a quarter of global bot traffic</h3>
      <a href="#40-of-global-bot-traffic-came-from-the-united-states-with-amazon-web-services-and-google-cloud-originating-a-quarter-of-global-bot-traffic">
        
      </a>
    </div>
    <p>A <a href="https://developers.cloudflare.com/bots/concepts/bot/"><u>bot</u></a> is a software application programmed to do certain tasks, and Cloudflare uses advanced <a href="https://blog.cloudflare.com/bots-heuristics/"><u>heuristics</u></a> to differentiate between bot traffic and human traffic, <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>scoring</u></a> each request on the likelihood that it originates from a bot or a human user. By monitoring traffic suspected to be from bots, site and application owners can spot and, if necessary, block potentially malicious activity. However, not all bots are malicious — bots can also be helpful, and Cloudflare maintains a <a href="https://radar.cloudflare.com/bots/directory?kind=all"><u>directory of verified bots</u></a> that includes those used for things like <a href="https://radar.cloudflare.com/bots/directory?category=SEARCH_ENGINE_CRAWLER&amp;kind=all"><u>search engine indexing</u></a>, <a href="https://radar.cloudflare.com/bots/directory?category=SECURITY&amp;kind=all"><u>security scanning</u></a>, and <a href="https://radar.cloudflare.com/bots/directory?category=MONITORING_AND_ANALYTICS&amp;kind=all"><u>site/application monitoring</u></a>. Regardless of intent, we analyzed <a href="https://radar.cloudflare.com/year-in-review/2025/#bot-traffic-sources"><u>where bot traffic was originating from in 2025</u></a>, using the IP address of a request to identify the network (<a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a>) and country/region associated with the bot making the request. </p><p>Globally, the top 10 countries/regions accounted for 71% of observed bot traffic. Forty percent originated from the United States, far ahead of Germany’s 6.5% share. The US share was up over five percentage points <a href="https://radar.cloudflare.com/year-in-review/2024#bot-traffic-sources"><u>from 2024</u></a>, while Germany’s share was down a fraction of a percentage point. The remaining countries in the top 10 all contributed bot traffic shares below 5% in 2025.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29tI5aXT8HeRwmzHMyFaTt/0081d745e48499966611a4d2f3a14f2e/BLOG-3077_54_-_security-bottraffic-countries.png" />
          </figure><p><sup><i>Global bot traffic distribution by source country/region in 2025</i></sup></p><p>Looking at bot traffic by network, we found that cloud platforms remained among the leading sources. This is due to a number of factors, including the ease of using automated tools to quickly provision compute resources, their relatively low cost, their broadly distributed geographic footprints, and the platforms’ high-bandwidth Internet connectivity. </p><p>Two autonomous systems associated with Amazon Web Services accounted for a total of 14.4% of observed bot traffic, and two associated with Google Cloud were responsible for a combined 9.7% of bot traffic. They were followed by Microsoft Azure, which originated 5.5% of bot traffic. The shares from all three platforms were up as compared to 2024. These cloud platforms have a strong regional data center presence in many of the countries/regions in the top 10. Elsewhere, around the world, local telecommunications providers frequently accounted for the largest shares of automated bot traffic observed in those countries/regions.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NCt3TgkYWbl9cQmZH2QZW/3ed0e512bdff74025dd34744b989dc41/BLOG-3077_55_-_security-bottraffic-asns.png" />
          </figure><p><sup><i>Global bot traffic distribution by source network in 2025</i></sup></p>
    <div>
      <h3>Organizations in the “People and Society” vertical were the most targeted during 2025</h3>
      <a href="#organizations-in-the-people-and-society-vertical-were-the-most-targeted-during-2025">
        
      </a>
    </div>
    <p>Attackers are constantly shifting their tactics and targets, whether to evade detection or to suit the damage they intend to cause. They may try to cause financial harm to businesses by targeting ecommerce sites during a busy shopping period, make a political statement by attacking government-related or civil society sites, or attempt to knock opponents offline by attacking a game server. To identify vertical-targeted attack activity during 2025, we analyzed mitigated traffic for customers that had an associated industry and vertical within their customer record. Mitigated traffic was aggregated weekly by source country/region across 17 target verticals.</p><p>Organizations in the “People and Society” vertical were the <a href="https://radar.cloudflare.com/year-in-review/2025/#most-attacked-industries"><u>most targeted across the year</u></a>, with 4.4% of global mitigated traffic targeting the vertical. Customers classified as “People and Society” include religious institutions, nonprofit organizations, civic &amp; social organizations, and libraries. The vertical started out the year with under 2% of mitigated traffic, but saw the share jump to 10% the week of March 5, and increase to over 17% by the end of the month. Other attack surges targeting these sites occurred in late April (to 19.1%) and early July (to 23.2%). Many of these types of organizations are protected by Cloudflare’s Project Galileo, and <a href="https://blog.cloudflare.com/celebrating-11-years-of-project-galileo-global-impact/"><u>this blog post</u></a> details the attacks and threats they experienced in 2024 and 2025.</p><p>Gambling/Games, the <a href="https://radar.cloudflare.com/year-in-review/2024#most-attacked-industries"><u>most-targeted vertical last year</u></a>, saw its share of mitigated attacks drop by more than half year-over-year, to just 2.6%. While one might expect to see attacks targeting gambling sites peak around major sporting events like the Super Bowl and March Madness, such a trend was not evident, as attack share peaked at 6.5% the week of March 5 — a month after the Super Bowl, and a couple of weeks before the start of March Madness.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HqH4NQhC77KEgh1Z3tJDw/a9787f0913ad8160607a1cb21de6347a/BLOG-3077_56_-_security-mostattackedverticals.png" />
          </figure><p><sup><i>Global mitigated traffic share by vertical in 2025, summary view</i></sup></p>
    <div>
      <h3>Routing security, measured as the shares of RPKI valid routes and covered IP address space, saw continued improvement throughout 2025</h3>
      <a href="#routing-security-measured-as-the-shares-of-rpki-valid-routes-and-covered-ip-address-space-saw-continued-improvement-throughout-2025">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol (BGP)</u></a> is the Internet’s core routing protocol, enabling traffic to flow between source and destination by communicating routes between networks. However, because it relies on trust between connected networks, incorrect information shared between peers (intentionally or not) can send traffic to the wrong place — potentially to <a href="https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/"><u>systems under control of an attacker</u></a>. To address this, <a href="https://blog.cloudflare.com/rpki/"><u>Resource Public Key Infrastructure (RPKI)</u></a> was developed as a cryptographic method of signing records that associate a BGP route announcement with the correct originating autonomous system (AS) number to ensure that the information being shared originally came from a network that is allowed to do so. Cloudflare has been a vocal advocate for routing security, including as a founding participant in the <a href="https://www.internetsociety.org/news/press-releases/2020/leading-cdn-and-cloud-providers-join-manrs-to-improve-routing-security/"><u>MANRS CDN and Cloud Programme</u></a> and by providing a <a href="https://isbgpsafeyet.com/"><u>public tool</u></a> that enables users to test whether their Internet provider has implemented BGP safely. </p><p>We analyzed data available on Cloudflare Radar’s <a href="https://radar.cloudflare.com/routing"><u>Routing page</u></a> to determine the share of <a href="https://rpki.readthedocs.io/en/latest/about/help.html"><u>RPKI valid routes</u></a> and how that share changed throughout 2025, as well as determining the <a href="https://radar.cloudflare.com/year-in-review/2025/#routing-security"><u>share of IP address space covered by valid routes</u></a>. The latter metric is noteworthy because a route announcement covering a large amount of IP address space (millions of IPv4 addresses) has a greater potential impact than an announcement covering a small block of IP address space (hundreds of IPv4 addresses).</p><p>We started 2025 with 50% valid IPv4 routes, growing to 53.9% by December 2. The share of valid IPv6 routes increased to 60.1%, up 4.7 percentage points. Looking at the global share of IP address space covered by valid routes, IPv4 increased to 48.5%, a three percentage point increase. The share of IPv6 address space covered by valid routes fell slightly to 61.6%. Although the year-over-year changes for these metrics are slowing, we have made significant progress over the last five years. Since the start of 2020, the share of RPKI valid IPv4 routes and IPv4 address space have both grown by approximately 3x.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EtRqY7MgRKLxjsLIlNuis/013b3bf92c6d3b173cd8086b1ff370c4/BLOG-3077_57_-_security-routingsecurity-routes.png" />
          </figure><p><sup><i>Shares of global RPKI valid routing entries by IP version in 2025</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3JEv5ViM6qYdYxSzE6sbYD/4f89f5acbd2aeef55562fbee63dd2f07/BLOG-3077_58_-_security-routingsecurity-addressspace.png" />
          </figure><p><sup><i>Shares of globally announced IP address space covered by RPKI valid routes in 2025</i></sup></p><p><a href="https://radar.cloudflare.com/year-in-review/2025/bb#routing-security"><u>Barbados</u></a> saw the biggest growth in the share of valid IPv4 routes, growing from 2.2% to 20.8%. Looking at valid IPv6 routes, <a href="https://radar.cloudflare.com/year-in-review/2025/ml#routing-security"><u>Mali</u></a> saw the most significant share growth in 2025, from 10.0% to 58.3%. </p><p>Barbados also experienced the biggest increase in the share of IPv4 space covered by valid routes, jumping from just 2.0% to 18.6%. For IPv6 address space, both <a href="https://radar.cloudflare.com/year-in-review/2025/tj#routing-security"><u>Tajikistan</u></a> and <a href="https://radar.cloudflare.com/year-in-review/2025/dm#routing-security"><u>Dominica</u></a> went from having effectively no space covered by valid routes at the start of the year, to 5.5% and 3.5% respectively. </p>
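    <p>To illustrate the route origin validation that RPKI enables, the Lua sketch below validates an announced IPv4 prefix and origin AS against a simplified, in-memory list of ROAs. It is illustrative only: real validators work from cryptographically signed objects fetched from RPKI repositories, handle IPv6, and do much more, and the ROA used here is hypothetical.</p>
            <pre><code>-- Illustrative only: simplified route origin validation against in-memory ROAs.
local function ipv4_to_int(ip)
  local a, b, c, d = ip:match("^(%d+)%.(%d+)%.(%d+)%.(%d+)$")
  return ((tonumber(a) * 256 + tonumber(b)) * 256 + tonumber(c)) * 256 + tonumber(d)
end

-- does the ROA's prefix contain the announced prefix?
local function covers(roa, prefix, len)
  if len &lt; roa.len then return false end
  local block = 2 ^ (32 - roa.len)
  return math.floor(ipv4_to_int(prefix) / block) == math.floor(ipv4_to_int(roa.prefix) / block)
end

-- "valid" if a covering ROA authorizes this origin AS and prefix length,
-- "invalid" if covered but not authorized, "not-found" if no ROA covers it
local function validate(roas, prefix, len, origin_asn)
  local covered = false
  for _, roa in ipairs(roas) do
    if covers(roa, prefix, len) then
      covered = true
      if roa.asn == origin_asn and len &lt;= roa.max_len then
        return "valid"
      end
    end
  end
  return covered and "invalid" or "not-found"
end

-- hypothetical ROA: AS64500 may announce 192.0.2.0/24
local roas = { { prefix = "192.0.2.0", len = 24, max_len = 24, asn = 64500 } }
print(validate(roas, "192.0.2.0", 24, 64500))    -- valid
print(validate(roas, "192.0.2.0", 24, 64501))    -- invalid
print(validate(roas, "198.51.100.0", 24, 64500)) -- not-found</code></pre>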
    <div>
      <h3>Hyper-volumetric DDoS attack sizes grew significantly throughout the year </h3>
      <a href="#hyper-volumetric-ddos-attack-sizes-grew-significantly-throughout-the-year">
        
      </a>
    </div>
    <p>In our quarterly DDoS Report series (<a href="https://blog.cloudflare.com/ddos-threat-report-for-2025-q1/"><u>Q1</u></a>, <a href="https://blog.cloudflare.com/ddos-threat-report-for-2025-q2/"><u>Q2</u></a>, <a href="https://blog.cloudflare.com/ddos-threat-report-2025-q3/"><u>Q3</u></a>), we have highlighted the increasing frequency and size of hyper-volumetric network layer attacks targeting Cloudflare customers and Cloudflare’s infrastructure. We define a “hyper-volumetric network layer attack” as one that operates at Layer 3/4 and that peaks at more than one terabit per second (1 Tbps) or more than one billion packets per second (1 Bpps). These reports provide a quarterly perspective, but we also wanted to <a href="https://radar.cloudflare.com/year-in-review/2025/#ddos-attacks"><u>show a view of activity across the year</u></a> to understand when attackers are most active, and how attack sizes have grown over time. </p><p>Looking at hyper-volumetric attack activity in 2025 from a Tbps perspective, July saw the largest number of such attacks, at over 500, while February saw the fewest, at just over 150. Attack intensity remained generally below 5 Tbps, although a 10 Tbps attack blocked at the end of August was a harbinger of things to come. This attack was the first of a campaign of &gt;10 Tbps attacks that took place during the first week of September, ahead of a series of &gt;20 Tbps attacks during the last week of the month. In early October, multiple increasingly larger hyper-volumetric attacks were observed, with the largest for the month <a href="https://blog.cloudflare.com/ddos-threat-report-2025-q3/#aisuru-breaking-records-with-ultrasophisticated-hyper-volumetric-ddos-attacks"><u>peaking at 29.7 Tbps</u></a>. However, that record was soon eclipsed, as an early November attack reached 31.4 Tbps.</p><p>From a Bpps perspective, hyper-volumetric attack activity was much lower, with November experiencing the most (over 140), while just three were seen in February and June. Attack intensity across the year generally remained below 4 Bpps through late August, though a succession of increasingly larger attacks were seen over the next several months, peaking in October. Although the intensity of most of the 110+ attacks blocked in October was below 5 Bpps, a 14 Bpps attack seen during the month was the largest hyper-volumetric attack by packets per second blocked during the year, besting five other successive record-setting attacks that occurred in September.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5q4Ruw6z07JUGXF6FsZMTv/414a388b7f10eff0940a460e1356e938/BLOG-3077_59_-_security-hypervolumetricddos.png" />
          </figure><p><sup><i>Peak DDoS attack sizes in 2025</i></sup></p>
    <div>
      <h2>Email security</h2>
      <a href="#email-security">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1mchtw8EWCzTpDs3K4jQ1A/3b740b7facca7869a4a191808e94ef45/BLOG-3077_60_-_image12.png" />
          </figure>
    <div>
      <h3>More than 5% of email messages analyzed by Cloudflare were found to be malicious</h3>
      <a href="#more-than-5-of-email-messages-analyzed-by-cloudflare-were-found-to-be-malicious">
        
      </a>
    </div>
    <p><a href="https://www.signite.io/emails-are-still-king"><u>Recent statistics</u></a> suggest that email remains the top communication channel for external business contact, despite the growing enterprise use of collaboration/messaging apps. Given its broad enterprise usage, attackers still find it to be an attractive entry point into corporate networks. Generative AI tools <a href="https://blog.cloudflare.com/dispelling-the-generative-ai-fear-how-cloudflare-secures-inboxes-against-ai-enhanced-phishing/"><u>make it easier</u></a> to craft highly targeted malicious emails that convincingly impersonate trusted brands or legitimate senders (like corporate executives) but contain deceptive links, dangerous attachments, or other types of threats. <a href="https://www.cloudflare.com/zero-trust/products/email-security/"><u>Cloudflare Email Security</u></a> protects customers from email-based attacks, including those carried out through targeted malicious email messages. </p><p>In 2025, an <a href="https://radar.cloudflare.com/year-in-review/2025/#malicious-emails"><u>average of 5.6% of emails analyzed by Cloudflare were found to be malicious</u></a>. The share of messages processed by Cloudflare Email Security that were found to be malicious generally ranged between 4% and 6% throughout most of the year. Our data shows a jump in malicious email share starting in October, likely due to an improved classification system implemented by Cloudflare Email Security.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/422qqM5R83j6IkdbWdasYR/696a68ded36a67dba1b73e045ab5bb28/BLOG-3077_61_-_emailsecurity-maliciousemailpercentage.png" />
          </figure><p><sup><i>Global malicious email share trends in 2025</i></sup></p>
    <div>
      <h3>Deceptive links, identity deception, and brand impersonation were the most common types of threats found in malicious email messages</h3>
      <a href="#deceptive-links-identity-deception-and-brand-impersonation-were-the-most-common-types-of-threats-found-in-malicious-email-messages">
        
      </a>
    </div>
    <p>Deceptive links were the <a href="https://radar.cloudflare.com/year-in-review/2025/#top-email-threats"><u>top malicious email threat category in 2025</u></a>, found in 52% of messages, up from <a href="https://radar.cloudflare.com/year-in-review/2024#top-email-threats"><u>43% in 2024</u></a>. Since the display text for a hyperlink in HTML can be arbitrarily set, attackers can make a URL appear as if it links to a benign site when, in fact, it is actually linking to a malicious resource that can be used to steal login credentials or download malware. The share of processed emails containing deceptive links was as high as 70% in late April, and again in mid-November.</p><p>Identity deception occurs when an attacker sends an email claiming to be someone else. They may do this using domains that look similar, are spoofed, or use display name tricks to appear to be coming from a trusted domain. Brand impersonation is a form of identity deception where an attacker sends a phishing message that impersonates a recognizable company or brand. Brand impersonation may also use display name spoofing or domain impersonation. Identity deception (38%) and brand impersonation (32%) were growing threats in 2025, up from 35% and 23% respectively in 2024. Both saw an increase in mid-November.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1sq7v5IqOTPZZ5DwCnr8Mv/762e5bd4dda4c34475ffb5507898a08a/BLOG-3077_62_-_emailsecurity-maliciousemail-threatcategory.png" />
          </figure><p><sup><i>Email threat category trends in 2025, worldwide</i></sup></p>
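    <p>For illustration, one simple heuristic in the spirit of the deceptive-link threat described above compares the host a link claims to point to (in its visible text) against the host in the actual href target. The Lua sketch below is illustrative only and assumes the (href, display text) pairs have already been extracted from the message body.</p>
            <pre><code>-- Illustrative only: flag links whose visible text names a different host
-- than the href actually points to.
local function host_of(s)
  s = s:lower()
  return s:match("^%a+://([^/%s]+)") or s:match("^([%w%-]+%.[%w%.%-]+)$")
end

local function is_deceptive(href, text)
  local shown, actual = host_of(text), host_of(href)
  return shown ~= nil and actual ~= nil and shown ~= actual
end

-- the visible text claims example-bank.com, but the link goes elsewhere
print(is_deceptive("https://evil.example.net/login", "https://example-bank.com"))  -- true
print(is_deceptive("https://example-bank.com/login", "example-bank.com"))          -- false</code></pre>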
    <div>
      <h3>Nearly all of the email messages from the .christmas and .lol Top Level Domains were found to be either spam or malicious</h3>
      <a href="#nearly-all-of-the-email-messages-from-the-christmas-and-lol-top-level-domains-were-found-to-be-either-spam-or-malicious">
        
      </a>
    </div>
    <p>In addition to providing traffic, geographic distribution, and digital certificate insights for Top Level Domains (TLDs) like <a href="https://radar.cloudflare.com/tlds/com"><u>.com</u></a> or <a href="https://radar.cloudflare.com/tlds/us"><u>.us</u></a>, Cloudflare Radar also provides insights into the <a href="https://radar.cloudflare.com/security/email#most-abused-tlds"><u>“most abused” TLDs</u></a> – those with domains that we have found are originating the largest shares of malicious and spam email among messages analyzed by Cloudflare Email Security. The analysis is based on the sending domain’s TLD, found in the From: header of an email message. For example, if a message came from sender@example.com, then example.com is the sending domain, and .com is the associated TLD. For the Year in Review analysis, we only included TLDs from which we saw an average minimum of 30 messages per hour.</p><p>Based on <a href="https://radar.cloudflare.com/year-in-review/2025/#most-abused-tlds"><u>messages analyzed throughout 2025</u></a>, we found that <a href="https://radar.cloudflare.com/tlds/christmas"><u>.christmas</u></a> and <a href="https://radar.cloudflare.com/tlds/lol"><u>.lol</u></a> were the most abused TLDs, with 99.8% and 99.6% of messages from these TLDs respectively characterized as either spam or malicious. Sorting the list of TLDs by malicious email share, <a href="https://radar.cloudflare.com/tlds/cfd"><u>.cfd</u></a> and <a href="https://radar.cloudflare.com/tlds/sbs"><u>.sbs</u></a> both had more than 90% of analyzed emails categorized as malicious. The <a href="https://radar.cloudflare.com/tlds/best"><u>.best</u></a> TLD was the worst in terms of spam email share, with 69% of email messages characterized as spam.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/tTPjf9VkDFDnzaKCUXE9y/93e88ce8e7f65ef6373308f805b0219f/BLOG-3077_63_-_emailsecurity-maliciousemail-mostabusedtlds.png" />
          </figure><p><sup><i>TLDs originating the largest total shares of malicious and spam email in 2025</i></sup></p>
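    <p>As a minimal illustration of the From: header analysis described above, the Lua sketch below pulls the sending domain out of a header value and returns its final label as the TLD. It is illustrative only; production parsing must handle display names, comments, internationalized domains, and multi-label public suffixes such as .co.uk, and the addresses shown are made up.</p>
            <pre><code>-- Illustrative only: derive the sending domain's TLD from a From: header value.
local function sender_tld(from_header)
  local domain = from_header:match("@([%w%.%-]+)")
  if not domain then return nil end
  return domain:lower():match("%.([%w%-]+)$")
end

print(sender_tld("deals@win.example.christmas"))  -- christmas
print(sender_tld("sender@example.com"))           -- com</code></pre>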
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Although the Internet and the Web continue to evolve and change over time, it appears that some of the key metrics have become fairly stable. However, we expect that others, such as those metrics tracking AI trends, will shift over the coming years as that space evolves at a rapid pace. </p><p>We encourage you to visit the <a href="https://radar.cloudflare.com/year-in-review/2025"><u>Cloudflare Radar 2025 Year In Review microsite</u></a> and explore the trends for your country/region, and consider how they impact your organization as you plan for 2026. You can also get near real-time insight into many of these metrics and trends on <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a>. And as noted above, for insights into the top Internet services across multiple industry categories and countries/regions, we encourage you to read the <a href="https://blog.cloudflare.com/radar-2025-year-in-review-internet-services/"><u>companion Year in Review blog post</u></a>.</p><p>If you have any questions, you can contact the Cloudflare Radar team at <a><u>radar@cloudflare.com</u></a> or on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>https://noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky).</p>
    <div>
      <h2>Acknowledgements</h2>
      <a href="#acknowledgements">
        
      </a>
    </div>
    <p>As the saying goes, it takes a village to make our annual Year in Review happen, from aggregating and analyzing the data, to creating the microsite, to developing associated content. I’d like to acknowledge those team members that contributed to this year’s effort, with thanks going out to: Jorge Pacheco, Sabina Zejnilovic, Carlos Azevedo, Mingwei Zhang, Sofia Cardita (data analysis); André Páscoa, Nuno Pereira (frontend development); João Tomé (Most Popular Internet Services); David Fidalgo, Janet Villarreal, and the internationalization team (translations); Jackie Dutton, Kari Linder, Guille Lasarte (Communications); Laurel Wamsley (blog editing); and Paula Tavares (Engineering Management), as well as other colleagues across Cloudflare for their support and assistance.</p> ]]></content:encoded>
            <category><![CDATA[Year in Review]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">2Mp06VKep73rBpdUmywpQ2</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare outage on December 5, 2025]]></title>
            <link>https://blog.cloudflare.com/5-december-2025-outage/</link>
            <pubDate>Fri, 05 Dec 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare experienced a significant traffic outage on  December 5, 2025, starting approximately at 8:47 UTC. The incident lasted approximately 25 minutes before resolution. We are sorry for the impact that it caused to our customers and the Internet. The incident was not caused by an attack and was due to configuration changes being applied to attempt to mitigate a recent industry-wide vulnerability impacting React Server Components. ]]></description>
            <content:encoded><![CDATA[ <p><i><sup>Note: This post was updated to clarify the relationship of the internal WAF tool with the incident on Dec. 5.</sup></i></p><p>On December 5, 2025, at 08:47 UTC (all times in this blog are UTC), a portion of Cloudflare’s network began experiencing significant failures. The incident was resolved at 09:12 (~25 minutes total impact), when all services were fully restored.</p><p>A subset of customers was impacted, accounting for approximately 28% of all HTTP traffic served by Cloudflare. Several factors needed to combine for an individual customer to be affected, as described below.</p><p>The issue was not caused, directly or indirectly, by a cyber attack on Cloudflare’s systems or malicious activity of any kind. Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability <a href="https://blog.cloudflare.com/waf-rules-react-vulnerability/"><u>disclosed this week</u></a> in React Server Components.</p><p>Any outage of our systems is unacceptable, and we know we have let the Internet down again following the incident on November 18. We will be publishing details next week about the work we are doing to stop these types of incidents from occurring.</p>
    <div>
      <h3>What happened</h3>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>The graph below shows HTTP 500 errors served by our network during the incident timeframe (red line at the bottom), compared to unaffected total Cloudflare traffic (green line at the top).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/43yFyHQhKjhPoLh4yB7pQ8/c1eb08a3e056530311e6056ecac522ed/image1.png" />
          </figure><p>Cloudflare's Web Application Firewall (WAF) provides customers with protection against malicious payloads, allowing them to be detected and blocked. To do this, Cloudflare’s proxy buffers HTTP request body content in memory for analysis. Before today, the buffer size was set to 128KB.</p><p>As part of our ongoing work to protect customers who use React against a critical vulnerability, <a href="https://nvd.nist.gov/vuln/detail/CVE-2025-55182"><u>CVE-2025-55182</u></a>, we started rolling out an increase in our buffer size to 1MB, the default limit allowed by Next.js applications, to make sure as many customers as possible were protected.</p><p>This first change was being rolled out using our gradual deployment system. During rollout, we noticed that our internal WAF testing tool did not support the increased buffer size. As this internal test tool was not needed at that time and had no effect on customer traffic, we made a second change to turn it off.</p><p>This second change of turning off our WAF testing tool was implemented using our global configuration system. This system does not perform gradual rollouts, but rather propagates changes within seconds to the entire fleet of servers in our network and is under review <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>following the outage we experienced on November 18</u></a>. </p><p>Unfortunately, in the FL1 version of our proxy, under certain circumstances, this second change (turning off our WAF rule testing tool) caused an error state that resulted in HTTP 500 errors being served from our network.</p><p>As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module, which led to the following Lua exception: </p>
            <pre><code>[lua] Failed to run module rulesets callback late_routing: /usr/local/nginx-fl/lua/modules/init.lua:314: attempt to index field 'execute' (a nil value)</code></pre>
            <p>resulting in HTTP 500 errors being issued.</p><p>The issue was identified shortly after the change was applied, and was reverted at 09:12, after which all traffic was served correctly.</p><p>Customers whose web assets were served by our older FL1 proxy <b>AND</b> who had the Cloudflare Managed Ruleset deployed were impacted. All requests for websites in this state returned an HTTP 500 error, with the small exception of some test endpoints such as <code>/cdn-cgi/trace</code>.</p><p>Customers that did not have the configuration above applied were not impacted. Customer traffic served by our China network was also not impacted.</p>
    <div>
      <h3>The runtime error</h3>
      <a href="#the-runtime-error">
        
      </a>
    </div>
    <p>Cloudflare’s rulesets system consists of sets of rules which are evaluated for each request entering our system. A rule consists of a filter, which selects some traffic, and an action which applies an effect to that traffic. Typical actions are “<code>block</code>”, “<code>log</code>”, or “<code>skip</code>”. Another type of action is “<code>execute</code>”, which is used to trigger evaluation of another ruleset.</p><p>Our internal logging system uses this feature to evaluate new rules before we make them available to the public. A top level ruleset will execute another ruleset containing test rules. It was these test rules that we were attempting to disable.</p><p>We have a killswitch subsystem as part of the rulesets system which is intended to allow a rule which is misbehaving to be disabled quickly. This killswitch system receives information from our global configuration system mentioned in the prior sections. We have used this killswitch system on a number of occasions in the past to mitigate incidents and have a well-defined Standard Operating Procedure, which was followed in this incident.</p><p>However, we have never before applied a killswitch to a rule with an action of “<code>execute</code>”. When the killswitch was applied, the code correctly skipped the evaluation of the execute action, and didn’t evaluate the sub-ruleset pointed to by it. However, an error was then encountered while processing the overall results of evaluating the ruleset:</p>
            <pre><code>if rule_result.action == "execute" then
  rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
end</code></pre>
            <p>This code expects that, if the rule’s action is “execute”, the “rule_result.execute” object will exist. However, because the rule had been skipped, the rule_result.execute object did not exist, and Lua raised an error when attempting to index a field on a nil value.</p><p>This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.</p>
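    <p>For illustration only, the Lua sketch below (not the actual fix that was deployed) shows how a defensive check on the missing object avoids the nil dereference when an “execute” rule has been skipped:</p>
            <pre><code>-- Illustrative only: rule_result and ruleset_results stand in for the proxy's
-- real data structures. A killswitched "execute" rule keeps its action, but
-- its rule_result.execute object is never populated, so it must be checked
-- before it is dereferenced.
local ruleset_results = { { matched = true } }
local rule_result = { action = "execute", execute = nil } -- skipped rule

if rule_result.action == "execute" and rule_result.execute ~= nil then
  rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
end
print("ruleset result processing continued without error")</code></pre>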
    <div>
      <h3>What about the changes being made after the incident on November 18, 2025?</h3>
      <a href="#what-about-the-changes-being-made-after-the-incident-on-november-18-2025">
        
      </a>
    </div>
    <p>We made an unrelated change that caused a similar, <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>longer availability incident</u></a> two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.</p><p>We have spoken directly with hundreds of customers following that incident and shared our plans to make changes to prevent single updates from causing widespread impact like this. We believe these changes would have helped prevent the impact of today’s incident but, unfortunately, we have not finished deploying them yet.</p><p>We know it is disappointing that this work has not been completed yet. It remains our first priority across the organization. In particular, the projects outlined below should help contain the impact of these kinds of changes:</p><ul><li><p><b>Enhanced Rollouts &amp; Versioning</b>: Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.</p></li><li><p><b>Streamlined break glass capabilities:</b> Ensure that critical operations can still be achieved in the face of additional types of failures. This applies to internal services as well as all standard methods of interaction with the Cloudflare control plane used by all Cloudflare customers.</p></li><li><p><b>"Fail-Open" Error Handling: </b>As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.</p></li></ul><p>Before the end of next week we will publish a detailed breakdown of all the resiliency projects underway, including the ones listed above. While that work is underway, we are locking down all changes to our network in order to ensure we have better mitigation and rollback systems before we begin again.</p><p>These kinds of incidents, and how closely they are clustered together, are not acceptable for a network like ours. On behalf of the team at Cloudflare we want to apologize for the impact and pain this has caused again to our customers and the Internet as a whole.</p>
    <div>
      <h3>Timeline</h3>
      <a href="#timeline">
        
      </a>
    </div>
    <table><tr><td><p>Time (UTC)</p></td><td><p>Status</p></td><td><p>Description</p></td></tr><tr><td><p>08:47</p></td><td><p>INCIDENT start</p></td><td><p>Configuration change deployed and propagated to the network</p></td></tr><tr><td><p>08:48</p></td><td><p>Full impact</p></td><td><p>Change fully propagated</p></td></tr><tr><td><p>08:50</p></td><td><p>INCIDENT declared</p></td><td><p>Automated alerts</p></td></tr><tr><td><p>09:11</p></td><td><p>Change reverted</p></td><td><p>Configuration change reverted and propagation start</p></td></tr><tr><td><p>09:12</p></td><td><p>INCIDENT end</p></td><td><p>Revert fully propagated, all traffic restored</p></td></tr></table><p></p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">7lRsBDx09Ye8w2dhpF0Yc</guid>
            <dc:creator>Dane Knecht</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare outage on November 18, 2025]]></title>
            <link>https://blog.cloudflare.com/18-november-2025-outage/</link>
            <pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare suffered a service outage on November 18, 2025. The outage was triggered by a bug in generation logic for a Bot Management feature file causing many Cloudflare services to be affected. 
 ]]></description>
            <content:encoded><![CDATA[ <p>On 18 November 2025 at 11:20 UTC (all times in this blog are UTC), Cloudflare's network began experiencing significant failures to deliver core network traffic. To Internet users trying to access our customers' sites, this appeared as an error page indicating a failure within Cloudflare's network. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ony9XsTIteX8DNEFJDddJ/7da2edd5abca755e9088002a0f5d1758/BLOG-3079_2.png" />
          </figure><p><b>The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind.</b> Instead, it was triggered by a change to one of our database systems' permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.</p><p>The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats. The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.</p><p>After initially suspecting, incorrectly, that the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file. Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06, all systems at Cloudflare were functioning as normal.</p><p>We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare's importance in the Internet ecosystem, any outage of any of our systems is unacceptable. That there was a period of time when our network was not able to route traffic is deeply painful to every member of our team. We know we let you down today.</p><p>This post is an in-depth recounting of exactly what happened and which systems and processes failed. It is also the beginning, though not the end, of what we plan to do in order to make sure an outage like this will not happen again.</p>
    <div>
      <h2>The outage</h2>
      <a href="#the-outage">
        
      </a>
    </div>
    <p>The chart below shows the volume of 5xx error HTTP status codes served by the Cloudflare network. Normally this should be very low, and it was right up until the start of the outage. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GdZcWhEqNjwOmLcsKOXT0/fca7e6970d422d04c81b2baafb988cbe/BLOG-3079_3.png" />
          </figure><p>The volume prior to 11:20 is the expected baseline of 5xx errors observed across our network. The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file. What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.</p><p>The explanation was that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management. Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.</p><p>This fluctuation made it unclear what was happening as the entire system would recover and then fail again as sometimes good, sometimes bad configuration files were distributed to our network. Initially, this led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state.</p><p>Errors continued until the underlying issue was identified and resolved starting at 14:30. We solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue. And then forcing a restart of our core proxy.</p><p>The remaining long tail in the chart above is our team restarting remaining services that had entered a bad state, with 5xx error code volume returning to normal at 17:06.</p><p>The following services were impacted:</p><table><tr><th><p><b>Service / Product</b></p></th><th><p><b>Impact description</b></p></th></tr><tr><td><p>Core CDN and security services</p></td><td><p>HTTP 5xx status codes. The screenshot at the top of this post shows a typical error page delivered to end users.</p></td></tr><tr><td><p>Turnstile</p></td><td><p>Turnstile failed to load.</p></td></tr><tr><td><p>Workers KV</p></td><td><p>Workers KV returned a significantly elevated level of HTTP 5xx errors as requests to KV’s “front end” gateway failed due to the core proxy failing.</p></td></tr><tr><td><p>Dashboard</p></td><td><p>While the dashboard was mostly operational, most users were unable to log in due to Turnstile being unavailable on the login page.</p></td></tr><tr><td><p>Email Security</p></td><td><p>While email processing and delivery were unaffected, we observed a temporary loss of access to an IP reputation source which reduced spam-detection accuracy and prevented some new-domain-age detections from triggering, with no critical customer impact observed. We also saw failures in some Auto Move actions; all affected messages have been reviewed and remediated.</p></td></tr><tr><td><p>Access</p></td><td><p>Authentication failures were widespread for most users, beginning at the start of the incident and continuing until the rollback was initiated at 13:05. Any existing Access sessions were unaffected.</p><p>
</p><p>All failed authentication attempts resulted in an error page, meaning none of these users ever reached the target application while authentication was failing. Successful logins during this period were correctly logged during this incident. </p><p>
</p><p>Any Access configuration updates attempted at that time would have either failed outright or propagated very slowly. All configuration updates are now recovered.</p></td></tr></table><p>As well as returning HTTP 5xx errors, we observed significant increases in latency of responses from our CDN during the impact period. This was due to large amounts of CPU being consumed by our debugging and observability systems, which automatically enhance uncaught errors with additional debugging information.</p>
    <div>
      <h2>How Cloudflare processes requests, and how this went wrong today</h2>
      <a href="#how-cloudflare-processes-requests-and-how-this-went-wrong-today">
        
      </a>
    </div>
    <p>Every request to Cloudflare takes a well-defined path through our network. It could be from a browser loading a webpage, a mobile app calling an API, or automated traffic from another service. These requests first terminate at our HTTP and TLS layer, then flow into our core proxy system (which we call FL for “Frontline”), and finally through Pingora, which performs cache lookups or fetches data from the origin if needed.</p><p>We previously shared more detail about how the core proxy works <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>here</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qlWXM3gh4SaYYvsGc7mFV/99294b22963bb414435044323aed7706/BLOG-3079_4.png" />
          </figure><p>As a request transits the core proxy, we run the various security and performance products available in our network. The proxy applies each customer’s unique configuration and settings, from enforcing WAF rules and DDoS protection to routing traffic to the Developer Platform and R2. It accomplishes this through a set of domain-specific modules that apply the configuration and policy rules to traffic transiting our proxy.</p><p>One of those modules, Bot Management, was the source of today’s outage. </p><p>Cloudflare’s <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> includes, among other systems, a machine learning model that we use to generate bot scores for every request traversing our network. Our customers use bot scores to control which bots are allowed to access their sites — or not.</p><p>The model takes as input a “feature” configuration file. A feature, in this context, is an individual trait used by the machine learning model to make a prediction about whether the request was automated or not. The feature configuration file is a collection of individual features.</p><p>This feature file is refreshed every few minutes and published to our entire network and allows us to react to variations in traffic flows across the Internet. It allows us to react to new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly as bad actors change their tactics quickly.</p><p>A change in our underlying ClickHouse query behaviour (explained below) that generates this file caused it to have a large number of duplicate “feature” rows. This changed the size of the previously fixed-size feature configuration file, causing the bots module to trigger an error.</p><p>As a result, HTTP 5xx error codes were returned by the core proxy system that handles traffic processing for our customers, for any traffic that depended on the bots module. This also affected Workers KV and Access, which rely on the core proxy.</p><p>Unrelated to this incident, we were and are currently migrating our customer traffic to a new version of our proxy service, internally known as <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>FL2</u></a>. Both versions were affected by the issue, although the impact observed was different.</p><p>Customers deployed on the new FL2 proxy engine, observed HTTP 5xx errors. Customers on our old proxy engine, known as FL, did not see errors, but bot scores were not generated correctly, resulting in all traffic receiving a bot score of zero. Customers that had rules deployed to block bots would have seen large numbers of false positives. Customers who were not using our bot score in their rules did not see any impact.</p><p>Throwing us off and making us believe this might have been an attack was another apparent symptom we observed: Cloudflare’s status page went down. The status page is hosted completely off Cloudflare’s infrastructure with no dependencies on Cloudflare. While it turned out to be a coincidence, it led some of the team diagnosing the issue to believe that an attacker may be targeting both our systems as well as our status page. Visitors to the status page at that time were greeted by an error message:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LwbB5fv7vdoNRWWDGN7ia/dad8cef76eee1305e0216d74a813612b/BLOG-3079_5.png" />
          </figure><p>In the internal incident chat room, we were concerned that this might be the continuation of the recent spate of high-volume <a href="https://techcommunity.microsoft.com/blog/azureinfrastructureblog/defending-the-cloud-azure-neutralized-a-record-breaking-15-tbps-ddos-attack/4470422"><u>Aisuru</u></a> <a href="https://blog.cloudflare.com/defending-the-internet-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos/"><u>DDoS attacks</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ph13HSsOGC0KYRfoeZmSy/46522e46ed0132d2ea551aef4c71a5d6/BLOG-3079_6.png" />
          </figure>
    <div>
      <h3>The query behaviour change</h3>
      <a href="#the-query-behaviour-change">
        
      </a>
    </div>
    <p>I mentioned above that a change in the underlying query behaviour resulted in the feature file containing a large number of duplicate rows. The database system in question is ClickHouse.</p><p>For context, it’s helpful to know how ClickHouse distributed queries work. A ClickHouse cluster consists of many shards. To query data from all shards, we have so-called distributed tables (powered by the table engine <code>Distributed</code>) in a database called <code>default</code>. The Distributed engine queries the underlying tables in a database called <code>r0</code>. The underlying tables are where data is stored on each shard of a ClickHouse cluster.</p><p>Queries to the distributed tables run through a shared system account. As part of an effort to improve the security and reliability of our distributed queries, work is underway to make them run under the initial users’ accounts instead.</p><p>Before today, ClickHouse users would only see the tables in the <code>default</code> database when querying table metadata from ClickHouse system tables such as <code>system.tables</code> or <code>system.columns</code>.</p><p>Since users already have implicit access to the underlying tables in <code>r0</code>, we made a change at 11:05 to make this access explicit, so that users can see the metadata of these tables as well. By making sure that all distributed subqueries can run under the initial user, query limits and access grants can be evaluated in a more fine-grained manner, preventing one user’s bad subquery from affecting others.</p><p>The change explained above resulted in all users seeing accurate metadata for the tables they have access to. Unfortunately, an assumption had been made in the past that the list of columns returned by a query like this would only include the “<code>default</code>” database:</p><p><code>SELECT
  name,
  type
FROM system.columns
WHERE
  table = 'http_requests_features'
ORDER BY name;</code></p><p>Note how the query does not filter on the database name (for example, with an additional <code>database = 'default'</code> condition). As we gradually rolled out the explicit grants to users of a given ClickHouse cluster, the query above started returning “duplicate” columns after the change at 11:05, because the same columns were now also reported for the underlying tables stored in the <code>r0</code> database.</p><p>This, unfortunately, was the type of query performed by the Bot Management feature file generation logic to construct each input “feature” for the file mentioned at the beginning of this section.</p><p>The query above would return a table of columns like the one displayed below (simplified example):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZIC5X8vMM7ifbJc0vxgLD/49dd33e7267bdb03b265ee0acccf381d/Screenshot_2025-11-18_at_2.51.24%C3%A2__PM.png" />
          </figure><p>However, as part of the additional permissions granted to the user, the response now also contained all the metadata of the <code>r0</code> schema, effectively more than doubling the rows in the response and, ultimately, the number of rows (i.e. features) in the final file output.</p>
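    <p>To make the failure mode concrete, here is a small, self-contained sketch. It is illustrative only, not the actual feature file generation code, and the row values are invented. It shows how scoping the metadata to the expected database (or simply deduplicating column names) keeps the feature count stable even when the query starts returning rows for additional schemas such as <code>r0</code>:</p><p><code>use std::collections::BTreeSet;

fn main() {
    // Illustrative rows only: (database, column_name) pairs of the shape a
    // system.columns query returns when it does not filter on the database.
    let rows = vec![
        ("default".to_string(), "example_feature_a".to_string()),
        ("default".to_string(), "example_feature_b".to_string()),
        // After the access change, the same columns are reported again for
        // the underlying tables in the r0 database:
        ("r0".to_string(), "example_feature_a".to_string()),
        ("r0".to_string(), "example_feature_b".to_string()),
    ];

    let total_rows = rows.len();

    // Keep only the expected database, collapsing any duplicate names.
    let mut unique_features = BTreeSet::new();
    for (database, column) in rows {
        if database == "default" {
            unique_features.insert(column);
        }
    }

    println!("rows returned by the metadata query: {}", total_rows);
    println!("features kept for the feature file: {}", unique_features.len());
}</code></p>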
    <div>
      <h3>Memory preallocation</h3>
      <a href="#memory-preallocation">
        
      </a>
    </div>
    <p>Each module running on our proxy service has a number of limits in place to avoid unbounded memory consumption, and to allow memory to be preallocated as a performance optimization. In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features. The limit exists because, for performance reasons, we preallocate memory for the features.</p><p>When the bad file with more than 200 features was propagated to our servers, this limit was hit, resulting in the system panicking. The FL2 Rust code that performs the check, and that was the source of the unhandled error, is shown below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640fjk9dawDk7f0wJ8Jm5S/668bcf1f574ae9e896671d9eee50da1b/BLOG-3079_7.png" />
          </figure><p>This resulted in the following panic, which in turn produced a 5xx error:</p><p><code>thread fl2_worker_thread panicked: called Result::unwrap() on an Err value</code></p>
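    <p>For readers who want to see the shape of the bug rather than the exact FL2 module, the sketch below is a minimal stand-in of our own making, with illustrative numbers: the capacity check itself correctly rejects an oversized feature file, but calling <code>unwrap()</code> on the resulting error is what turns a bad configuration into a worker panic, whereas handling the error lets the caller keep the last known-good configuration.</p><p><code>fn main() {
    let limit: usize = 200;       // preallocated feature capacity
    let configured: usize = 280;  // illustrative count from a bad, duplicated file

    // The guard: refuse any configuration larger than the preallocated capacity.
    let preallocate = |count: usize| {
        if count > limit {
            Err(format!("{} features exceeds the limit of {}", count, limit))
        } else {
            Ok(count)
        }
    };

    // What effectively happened: unwrap() on the Err value panics the
    // worker thread, turning a bad config file into 5xx responses.
    // let slots = preallocate(configured).unwrap();

    // A more forgiving caller rejects the new file and keeps serving with
    // the previous, known-good feature configuration instead:
    match preallocate(configured) {
        Ok(slots) => println!("preallocated {} feature slots", slots),
        Err(reason) => eprintln!("rejected new feature file: {}", reason),
    }
}</code></p>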
    <div>
      <h3>Other impact during the incident</h3>
      <a href="#other-impact-during-the-incident">
        
      </a>
    </div>
    <p>Other systems that rely on our core proxy were impacted during the incident. This included Workers KV and Cloudflare Access. The team was able to reduce the impact to these systems at 13:04, when a patch was made to Workers KV to bypass the core proxy. Subsequently, all downstream systems that rely on Workers KV (such as Access itself) observed a reduced error rate. </p><p>The Cloudflare Dashboard was also impacted due to both Workers KV being used internally and Cloudflare Turnstile being deployed as part of our login flow.</p><p>Turnstile was impacted by this outage, resulting in customers who did not have an active dashboard session being unable to log in. This showed up as reduced availability during two time periods: from 11:30 to 13:10, and between 14:40 and 15:30, as seen in the graph below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nB2ZlYyXiGTNngsVotyjN/479a0f9273c160c63925be87592be023/BLOG-3079_8.png" />
          </figure><p>The first period, from 11:30 to 13:10, was due to the impact on Workers KV, which some control plane and dashboard functions rely upon. This was resolved at 13:10, when Workers KV bypassed the core proxy system.</p><p>The second period of impact on the dashboard occurred after the feature configuration data was restored. A backlog of login attempts began to overwhelm the dashboard. This backlog, in combination with retry attempts, resulted in elevated latency, reducing dashboard availability. Scaling up control plane concurrency restored availability at approximately 15:30.</p>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>Now that our systems are back online and functioning normally, work has already begun on how we will harden them against failures like this in the future. In particular, we are:</p><ul><li><p>Hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input</p></li><li><p>Enabling more global kill switches for features</p></li><li><p>Eliminating the ability for core dumps or other error reports to overwhelm system resources</p></li><li><p>Reviewing failure modes for error conditions across all core proxy modules</p></li></ul><p>Today was Cloudflare’s worst outage <a href="https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/"><u>since 2019</u></a>. We’ve had outages that have made our <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>dashboard unavailable</u></a>, and some that have caused <a href="https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/"><u>newer features</u></a> to be unavailable for a period of time. But in the last 6+ years we’ve not had another outage that has caused the majority of core traffic to stop flowing through our network.</p><p>An outage like today’s is unacceptable. We’ve architected our systems to be highly resilient to failure, to ensure traffic always continues to flow. When we’ve had outages in the past, they have always led us to build new, more resilient systems.</p><p>On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today.</p><table><tr><th><p>Time (UTC)</p></th><th><p>Status</p></th><th><p>Description</p></th></tr><tr><td><p>11:05</p></td><td><p>Normal.</p></td><td><p>Database access control change deployed.</p></td></tr><tr><td><p>11:28</p></td><td><p>Impact starts.</p></td><td><p>Deployment reaches customer environments; first errors observed on customer HTTP traffic.</p></td></tr><tr><td><p>11:32-13:05</p></td><td><p>The team investigated elevated traffic levels and errors in the Workers KV service.</p></td><td><p>The initial symptom appeared to be a degraded Workers KV response rate causing downstream impact on other Cloudflare services.</p><p>Mitigations such as traffic manipulation and account limiting were attempted to bring the Workers KV service back to normal operating levels.</p><p>The first automated test detected the issue at 11:31 and manual investigation started at 11:32. The incident call was created at 11:35.</p></td></tr><tr><td><p>13:05</p></td><td><p>Workers KV and Cloudflare Access bypass implemented — impact reduced.</p></td><td><p>During investigation, we used internal system bypasses for Workers KV and Cloudflare Access so they fell back to a prior version of our core proxy. Although the issue was also present in prior versions of our proxy, the impact was smaller as described below.</p></td></tr><tr><td><p>13:37</p></td><td><p>Work focused on rollback of the Bot Management configuration file to a last-known-good version.</p></td><td><p>We were confident that the Bot Management configuration file was the trigger for the incident. Teams worked on ways to repair the service in multiple workstreams, with the fastest being a restore of a previous version of the file.</p></td></tr><tr><td><p>14:24</p></td><td><p>Stopped creation and propagation of new Bot Management configuration files.</p></td><td><p>We identified that the Bot Management module was the source of the 500 errors and that this was caused by a bad configuration file. We stopped automatic deployment of new Bot Management configuration files.</p></td></tr><tr><td><p>14:24</p></td><td><p>Test of new file complete.</p></td><td><p>We observed successful recovery using the old version of the configuration file and then focused on accelerating the fix globally.</p></td></tr><tr><td><p>14:30</p></td><td><p>Main impact resolved. Downstream impacted services started observing reduced errors.</p></td><td><p>A correct Bot Management configuration file was deployed globally and most services started operating correctly.</p></td></tr><tr><td><p>17:06</p></td><td><p>All services resolved. Impact ends.</p></td><td><p>All downstream services restarted and all operations fully restored.</p></td></tr></table> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">oVEUcpjyyDA8DSSXiE7E6</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Online outages: Q3 2025 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q3-2025-internet-disruption-summary/</link>
            <pubDate>Tue, 28 Oct 2025 12:00:00 GMT</pubDate>
            <description><![CDATA[ In Q3 2025, we observed Internet disruptions around the world resulting from government directed shutdowns, power outages, cable cuts, a cyberattack, an earthquake, a fire, and technical problems, as well as several with unexplained causes. ]]></description>
            <content:encoded><![CDATA[ <p>In the third quarter, we observed Internet disruptions with a wide variety of known causes, as well as several with <a href="#no-definitive-cause"><u>no definitive or published cause</u></a>. Once again, we unfortunately saw a number of <a href="#government-directed-shutdowns"><u>government-directed shutdowns</u></a>, including exam-related shutdowns in <a href="#sudan"><u>Sudan</u></a>, <a href="#syria"><u>Syria</u></a>, and <a href="#iraq"><u>Iraq</u></a>. <a href="#fiber-optic-cable-damage"><u>Cable cuts</u></a>, both submarine and terrestrial, caused Internet outages, including one caused by a <a href="#texas-united-states"><u>stray bullet</u></a>. <a href="#gibraltar"><u>A rogue contractor</u></a>, among other events, caused power outages that impacted Internet connectivity. Damage from an <a href="#earthquake"><u>earthquake</u></a> and a <a href="#fire-causes-infrastructure-damage"><u>fire</u></a> caused service disruptions, as did a targeted <a href="#targeted-cyberattack"><u>cyberattack</u></a>. And a myriad of <a href="#technical-problems"><u>technical issues</u></a>, including issues with <a href="#china"><u>China’s Great Firewall</u></a>, resulted in traffic losses across multiple countries.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>. These anomalies are detected through significant deviations from expected traffic patterns observed across our network. Note that both bytes-based and request-based traffic graphs are used within the post to illustrate the impact of the observed disruptions — the choice of metric to include was generally made based on which better illustrated the impact of the disruption.</p>
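    <p>As a rough intuition for what “significant deviations from expected traffic patterns” means in practice, the toy sketch below compares hourly traffic for a network against the same hours one week earlier and flags large drops. It is purely illustrative, with invented numbers and a naive prior-week baseline; Radar’s actual anomaly detection is considerably more sophisticated.</p><p><code>fn main() {
    // Invented hourly request counts for one network: this week vs. the same
    // hours a week earlier (the comparison used throughout this post, e.g.
    // "traffic dropped by X% as compared to the previous week").
    let baseline = [1000.0, 1040.0, 980.0, 1010.0_f64];
    let current = [990.0, 1025.0, 240.0, 255.0_f64];

    // Flag any hour where traffic falls more than 50% below the baseline.
    let threshold = 0.5;
    for (hour, (now, before)) in current.iter().zip(baseline.iter()).enumerate() {
        let decline = 1.0 - now / before;
        if decline > threshold {
            println!("hour {}: traffic down {:.0}% vs. previous week", hour, decline * 100.0);
        }
    }
}</code></p>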
    <div>
      <h2>Government-directed shutdowns</h2>
      <a href="#government-directed-shutdowns">
        
      </a>
    </div>
    
    <div>
      <h3>Sudan</h3>
      <a href="#sudan">
        
      </a>
    </div>
    <p>Regular drops in traffic from <a href="https://radar.cloudflare.com/sd"><u>Sudan</u></a> were observed between 12:00-15:00 UTC (14:00-17:00 local time) each day from July 7-10. Partial outages were observed at <a href="https://radar.cloudflare.com/traffic/as15706?dateStart=2025-07-06&amp;dateEnd=2025-07-12#http-traffic"><u>Sudatel (AS15706)</u></a>, and near-complete outages at <a href="https://radar.cloudflare.com/traffic/as36998?dateStart=2025-07-06&amp;dateEnd=2025-07-12#http-traffic"><u>SDN Mobitel (AS36998)</u></a> and <a href="https://radar.cloudflare.com/traffic/as36972?dateStart=2025-07-06&amp;dateEnd=2025-07-12#http-traffic"><u>MTN Sudan (AS36972)</u></a>. Similar drops were also seen in traffic to our <a href="https://1.1.1.1/dns"><u>1.1.1.1 DNS resolver</u></a> from these impacted <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a>.</p><p>We have observed Sudan implementing government-directed Internet shutdowns in the past (<a href="https://blog.cloudflare.com/sudans-exam-related-internet-shutdowns/"><u>2021</u></a>, <a href="https://blog.cloudflare.com/syria-sudan-algeria-exam-internet-shutdown/#sudan"><u>2022</u></a>), and given that the timing aligns with the last four days of <a href="https://www.suna-sd.net/posts/ministry-of-education-publishes-schedule-for-postponed-2024-secondary-school-certificate-examinations"><u>postponed 2024 secondary school certificate examinations</u></a>, in addition to fitting the pattern of short-duration disruptions repeating across multiple days, we believe that these drops in traffic were exam-related shutdowns as well. </p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>In our <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#syria"><u>second quarter post</u></a>, we covered the cellular connectivity-focused exam-related Internet shutdowns that <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> chose to implement this year in an effort to limit their impact. During the second quarter, the shutdowns associated with the “Basic Education Certificate” took place on June 21, 24, and 29 between 05:15 - 06:00 UTC (08:15 - 09:00 local time). Exams and associated shutdowns for the “Secondary Education Certificate” were scheduled to take place between July 12 and August 3, and during that period, we observed six additional Internet disruptions in Syria on July 12, 17, 21, 28, 31, and August 3, as shown in the graph below.</p><p>At the end of the exam period, the <a href="https://t.me/TrbyaGov/2352"><u>Syrian Ministry of Education posted a Telegram message</u></a> that was presumably intended to justify the shutdowns, and the focus on cellular connectivity. Translated, it said in part:</p><p>“<i>As part of its efforts to ensure the integrity of the examination process, and in coordination with relevant authorities, the Ministry of Education was able to uncover organized exam cheating networks in three examination centers in Lattakia Governorate. These networks used advanced electronic technologies and devices in their attempt to manipulate the exam process.</i></p><p><i>The network was seized in cooperation with the Lattakia Education Directorate, following close monitoring and detection of suspicious attempts. It was found that members of the network used small earphones, wireless communication devices, and mobile phones equipped with advanced transmission and reception technologies, which contradict educational values and violate the integrity of the examination process and the principle of justice.</i>”</p>
    <div>
      <h3>Venezuela </h3>
      <a href="#venezuela">
        
      </a>
    </div>
    <p>A slightly more unusual government-directed shutdown took place in <a href="https://radar.cloudflare.com/ve"><u>Venezuela</u></a> on August 18 when Venezuelan provider <a href="https://radar.cloudflare.com/as22313"><u>SuperCable (AS22313)</u></a> ceased service. An <a href="https://x.com/vesinfiltro/status/1957601745321783746"><u>X post</u></a> from Venezuelan industry watcher <a href="https://vesinfiltro.org/"><u>VE sin Filtro</u></a> shared a notification from <a href="https://conatel.gob.ve/"><u>CONATEL, the National Commission of Telecommunications in Venezuela</u></a>, informing SuperCable that as of March 14, 2025, its authority to operate in the country had been revoked, and establishing a 60-day transition period so that users could find another provider. Another <a href="https://x.com/vesinfiltro/status/1957595268221632929"><u>X post from VE sin Filtro</u></a> shared an email that SuperCable subscribers received from the company announcing the end of the service, and noted that half an hour after the email was sent, subscribers were left without Internet connectivity. Traffic began to fall at 15:00 UTC (11:00 local time), and was gone after 15:30 UTC (11:30 local time). Connectivity remained shut down through the end of the quarter.</p><p>Interestingly, we did not see a corresponding full loss of announced IP address space when traffic disappeared. However, such full losses did occur between <a href="https://radar.cloudflare.com/routing/as22313?dateStart=2025-08-17&amp;dateEnd=2025-08-23"><u>August 19-21</u></a>, and again briefly on <a href="https://radar.cloudflare.com/routing/as22313?dateStart=2025-09-14&amp;dateEnd=2025-09-20"><u>September 16</u></a>. The number of announced /24s (blocks of 256 IPv4 addresses) fell from 95 to 63 on <a href="https://radar.cloudflare.com/routing/as22313?dateStart=2025-09-24&amp;dateEnd=2025-09-30"><u>September 25</u></a>, and remained at that level through the end of the quarter.</p>
    <div>
      <h3>Iraq</h3>
      <a href="#iraq">
        
      </a>
    </div>
    <p>Similar to Syria, we covered the latest rounds of exam-related Internet shutdowns in <a href="https://radar.cloudflare.com/iq"><u>Iraq</u></a> in our <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#iraq"><u>second quarter blog post</u></a>. In that post, we noted that the shutdowns in the main part of the country ran until July 3 for <a href="https://www.facebook.com/Iraq.Ministry.of.Education/posts/pfbid0a7VuMttRxdoGWwuaymy38LcZw9jscz3Dfxup4aUue2LeRBPuU2c7vnDsZKbgCkE2l"><u>preparatory school exams</u></a>, and through July 6 in the Kurdistan region. These can be seen in the graph below.</p><p>The <a href="https://pulse.internetsociety.org/en/shutdowns/exams-shutdown-kurdistan-iraq-25-august-2025/"><u>Kurdistan Regional Government in Iraq ordered Internet services to be suspended</u></a> on August 23 between 03:30 and 04:45 UTC (6:30-7:45 local time), and again every Saturday, Monday, and Wednesday until September 8 to prevent cheating on the <a href="https://www.kurdistan24.net/ckb/story/859388/%D9%88%DB%95%D8%B2%D8%A7%D8%B1%DB%95%D8%AA%DB%8C-%DA%AF%D9%88%D8%A7%D8%B3%D8%AA%D9%86%DB%95%D9%88%DB%95-%D9%84%DB%95-%DA%95%DB%86%DA%98%D8%A7%D9%86%DB%8C-%D8%AA%D8%A7%D9%82%DB%8C%DA%A9%D8%B1%D8%AF%D9%86%DB%95%D9%88%DB%95%DA%A9%D8%A7%D9%86%DB%8C-%D9%BE%DB%86%D9%84%DB%8C-12-%D9%87%DB%8E%DA%B5%DB%95%DA%A9%D8%A7%D9%86%DB%8C-%D8%A6%DB%8C%D9%86%D8%AA%DB%95%D8%B1%D9%86%DB%8E%D8%AA-%DA%95%D8%A7%D8%AF%DB%95%DA%AF%DB%8C%D8%B1%DB%8E%D9%86"><u>second round of grade 12 exams</u></a>. Similar to last quarter, <a href="https://radar.cloudflare.com/as206206"><u>KNET (AS206206)</u></a>, <a href="https://radar.cloudflare.com/as21277"><u>Newroz Telecom (AS21277)</u></a>, <a href="https://radar.cloudflare.com/as48492"><u>IQ Online (AS48492)</u></a>, and <a href="https://radar.cloudflare.com/as59625"><u>KorekTel (AS59625)</u></a> were impacted by the ordered shutdowns.</p><p>In the main part of the country, starting on August 26, the latest round of <a href="https://pulse.internetsociety.org/en/shutdowns/internet-shutdown-for-iraq-exam-26-august-2025/"><u>Internet shutdowns for high school exams</u></a> began, scheduled through September 13, taking place between 03:00-05:00 UTC (06:00-08:00 local time). Networks impacted by these shutdowns included <a href="https://radar.cloudflare.com/traffic/as199739"><u>Earthlink (AS199739)</u></a>, <a href="https://radar.cloudflare.com/traffic/as51684"><u>Asiacell (AS51684)</u></a>, <a href="https://radar.cloudflare.com/traffic/as59588"><u>Zainas (AS59588)</u></a>, <a href="https://radar.cloudflare.com/traffic/as58322"><u>Halasat (AS58322)</u></a>, and <a href="https://radar.cloudflare.com/traffic/as203214"><u>HulumTele (AS203214)</u></a>.</p>
    <div>
      <h3>Afghanistan</h3>
      <a href="#afghanistan">
        
      </a>
    </div>
    <p>In mid-September, the Taliban <a href="https://amu.tv/200798/"><u>ordered the shutdown of fiber optic Internet connectivity</u></a> in multiple provinces across <a href="https://radar.cloudflare.com/af"><u>Afghanistan</u></a>, as part of a drive to “prevent immorality”. It was the first such ban issued since the Taliban took full control of the country in August 2021. As many as <a href="https://amu.tv/200798/"><u>15 provinces</u></a> experienced shutdowns, and these regional shutdowns <a href="https://www.afghanstudiescenter.org/taliban-internet-shutdown-blocks-thousands-of-afghan-students-from-online-classes/"><u>blocked</u></a> Afghan students from attending online classes, <a href="https://theweek.com/world-news/afghanistan-taliban-high-speed-internet-women-education"><u>impacted</u></a> commerce and banking, and <a href="https://www.dw.com/en/afghanistan-whats-at-stake-as-taliban-cut-internet/a-74043564"><u>limited access</u></a> to government agencies and institutions such as passport and registration offices and customs offices.</p><p>Less than two weeks later, just after 11:30 UTC (16:00 local time) on Monday, September 29, 2025, subscribers of wired Internet providers in <a href="https://radar.cloudflare.com/traffic/af"><u>Afghanistan</u></a> experienced a <a href="https://x.com/CloudflareRadar/status/1972649804821057727"><u>brief service interruption</u></a>, lasting until just before 12:00 UTC (16:30 local time). Mobile providers <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=&amp;dt=1d&amp;asn=as131284&amp;compAsn=as38742&amp;timeCompare=2025-09-21"><u>Afghan Wireless (AS38472) and Etisalat (AS131284)</u></a> remained available during that period. However, just after 12:30 UTC (17:00 local time), the Internet was <a href="https://x.com/CloudflareRadar/status/1972682041759076637"><u>completely shut down</u></a>, taking the country entirely offline.</p><p>These shutdowns are reviewed in more detail in our September 30 blog post, <a href="https://blog.cloudflare.com/nationwide-internet-shutdown-in-afghanistan/"><i><u>Nationwide Internet shutdown in Afghanistan extends localized disruptions</u></i></a>. Connectivity was restored around 11:45 UTC (16:15 local time) on October 1.</p>
    <div>
      <h2>Fiber optic cable damage</h2>
      <a href="#fiber-optic-cable-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Dominican Republic</h3>
      <a href="#dominican-republic">
        
      </a>
    </div>
    <p>On July 7, a <a href="https://x.com/ClaroRD/status/1942286349006168091"><u>post on X from Claro</u></a> alerted subscribers to a service disruption caused by damage to two fiber optic cables. According to a <a href="https://x.com/ClaroRD/status/1942368212160516305"><u>subsequent post</u></a>, one was damaged by work being done by <a href="http://coraavega.gob.do"><u>CORAAVEGA</u></a> (La Vega Water and Sewerage Corporation) and the other by work being done by the Dominican Electric Transmission Company. As a result of the damage, traffic from <a href="https://radar.cloudflare.com/as6400"><u>Claro (AS6400)</u></a> began to drop just before 16:00 UTC (12:00 local time), falling by just over two-thirds as compared to the prior week. Claro’s technicians were able to quickly locate the faults and repair them, with traffic recovering around 18:00 UTC (14:00 local time).</p>
    <div>
      <h3>Angola</h3>
      <a href="#angola">
        
      </a>
    </div>
    <p>Between 12:45-15:45 UTC (13:45-16:45 local time) on July 19, users in <a href="https://radar.cloudflare.com/ao"><u>Angola</u></a> experienced an Internet disruption, with <a href="https://radar.cloudflare.com/as37119"><u>Unitel Angola (AS37119)</u></a> experiencing <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=as37119&amp;dt=2025-07-19_2025-07-19&amp;timeCompare=2025-07-12#query"><u>as much as a 95% drop in traffic</u></a> as compared to the previous week, and <a href="https://radar.cloudflare.com/traffic/as327932?dateStart=2025-07-19&amp;dateEnd=2025-07-19"><u>Connectis (AS327932)</u></a> suffering a complete outage. According to an <a href="https://x.com/unitelao/status/1946644209370358120"><u>X post from Unitel Angola</u></a>, it “<i>was caused by a disruption at our partner Angola Cables, resulting from public road works that affected the national fiber optic interconnections.</i>”</p><p>However, the timing of the disruption coincided with protests over the rise in diesel fuel prices, and local non-governmental organizations <a href="https://www.verangola.net/va/en/072025/Society/45242/Angolan-NGOs-consider-internet-shutdown-during-Saturday%27s-protests-a-dictatorial-measure.htm"><u>disputed</u></a> Unitel Angola’s explanation, <a href="https://myemail.constantcontact.com/STATEMENT-OF-REPUDIATION--ON-THE-INTERNET-SHUTDOWN-DURING-THE-DEMONSTRATIONS-OF-JULY-19-.html"><u>claiming</u></a> that it was actually due to a government-directed Internet shutdown. Multiple Angolan network providers experienced a drop in announced IP address space during the period the Internet disruption occurred, and analysis of routing information for these networks finds that they share <a href="https://radar.cloudflare.com/as37468"><u>Angola Cables (AS37468)</u></a> as an upstream provider, lending some credence to the explanation from Unitel Angola.</p>
    <div>
      <h3>Haiti</h3>
      <a href="#haiti">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> is no stranger to Internet disruptions caused by damage to both terrestrial and submarine cables, experiencing such problems during the <a href="https://blog.cloudflare.com/q1-2025-internet-disruption-summary/#haiti"><u>first</u></a> and <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#digicel-haiti"><u>second</u></a> quarters of 2025, as well as <a href="https://blog.cloudflare.com/q1-2024-internet-disruption-summary/#digicel-haiti"><u>first</u></a>, <a href="https://blog.cloudflare.com/q2-2024-internet-disruption-summary/#haiti"><u>second</u></a>, and <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#haiti"><u>third</u></a> quarters of 2024. The most recent such disruption occurred on August 26, when they experienced two different cuts on their fiber optic infrastructure, <a href="https://x.com/jpbrun30/status/1960437559558869220"><u>according to an X post</u></a> from the company’s Director General. Traffic <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as27653&amp;dt=2025-08-26_2025-08-26&amp;timeCompare=2025-08-19#result"><u>dropped by approximately 80%</u></a> during the disruption, which lasted from 19:30-23:00 UTC (15:30-19:00 UTC).</p>
    <div>
      <h3>Pakistan &amp; United Arab Emirates</h3>
      <a href="#pakistan-united-arab-emirates">
        
      </a>
    </div>
    <p>Telegeography’s <a href="https://www.submarinecablemap.com/"><u>Submarine Cable Map</u></a> shows that the Red Sea has a high density of submarine cables that carry data between Europe, Africa, and Asia. Cuts to these cables <a href="https://www.wired.com/story/houthi-internet-cables-ship-anchor-path/"><u>can significantly impact connectivity</u></a>, ranging from increased latency on international connections to complete outages. The impacts may only affect a single country, or they may disrupt multiple countries connected to a damaged cable. On September 6, <a href="https://radar.cloudflare.com/as17557"><u>Pakistan Telecom (AS17557)</u></a> <a href="https://x.com/PTCLOfficial/status/1964203180876521559"><u>posted a message on X</u></a> that stated “<i>We would like to inform that submarine cable cuts have occurred in Saudi waters near Jeddah, impacting partial bandwidth capacity on </i><a href="https://www.submarinecablemap.com/submarine-cable/seamewe-4"><i><u>SMW4</u></i></a><i> and </i><a href="https://www.submarinecablemap.com/submarine-cable/imewe"><i><u>IMEWE</u></i></a><i> systems. As a result, internet users in Pakistan may experience some service degradation during peak hours.</i>” (Initial reporting that the cable cuts occurred near Jeddah were apparently incorrect, as the <a href="https://www.linkedin.com/feed/update/urn:li:activity:7379509758598406144?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7379509758598406144%2C7379684775701245952%29&amp;dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287379684775701245952%2Curn%3Ali%3Aactivity%3A7379509758598406144%29"><u>damage occurred in Yemeni waters</u></a>.)</p><p>Looking at the impact in Pakistan, we observed traffic drop by 25-30% in Sindh and Punjab between 12:00-20:00 UTC (17:00 - 01:00 local time).</p><p>In the <a href="https://radar.cloudflare.com/ae"><u>United Arab Emirates</u></a>, Etisalat alerted customers via <a href="https://x.com/eAndUAE/status/1964655864117346578"><u>a post on X</u></a> that they “<i>may experience slowness in data services due to an interruption in the international submarine cables.</i>” Between 11:00-22:00 UTC (15:00-02:00 local time) on September 6, traffic from <a href="https://radar.cloudflare.com/as8966"><u>AS8966 (Etisalat)</u></a> <a href="https://x.com/CloudflareRadar/status/1964727360764469339"><u>dropped as much as 28%</u></a>.</p><p>Also in the UAE, service provider <a href="https://radar.cloudflare.com/as15802"><u>du (AS15802)</u></a> told their customers via a post on X that “<i>You may experience some slowness in our data services due to an International submarine cable cut.</i>” This slowness is visible in Radar’s Internet quality metrics for the network between 11:00-22:00 UTC (15:00-02:00 local time) on September 6, with <a href="https://radar.cloudflare.com/quality/as15802?dateStart=2025-09-06&amp;dateEnd=2025-09-06#bandwidth"><u>median bandwidth</u></a> dropping by more than half, from 25 Mbps to as low as 9.8 Mbps, and <a href="https://radar.cloudflare.com/quality/as15802?dateStart=2025-09-06&amp;dateEnd=2025-09-06#latency"><u>median latency</u></a> doubling from 30 ms to over 60 ms.</p><p>The graphs below provide <a href="https://x.com/CloudflareRadar/status/1964817678541205758"><u>another view of the impact</u></a> of the cable cuts, based on Cloudflare network probes between New Delhi (del-c) to London (lhr-a) and Bombay (bom-c) to Frankfurt (fra-a). 
For the former pair of data centers, mean latency grew by approximately 20%, and for the latter pair, by approximately 30%, starting around 23:00 UTC on September 5. (The stable latency line at the bottom of both graphs represents probes going over the Cloudflare backbone, which was not impacted by the cable cuts.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/MqZmljASqeJlMQO4UFUDw/eb067e32492eecb151eb3d8f4db89bf4/image24.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5C9XAWuaBwASAibBbN5HV4/778c2ad24adaea37f3e0e04c59250fc3/image32.png" />
          </figure>
    <div>
      <h3>Texas, United States</h3>
      <a href="#texas-united-states">
        
      </a>
    </div>
    <p>Fiber optic cables are frequently damaged by errant ship anchors (submarine) or construction equipment (terrestrial), but on September 26, <a href="https://www.wfaa.com/article/tech/stray-bullet-caused-major-spectrum-outages-north-texas/287-e72cdefc-6a0a-4a1e-b181-6d02bc60b732"><u>a stray bullet damaged a cable</u></a> in the Dallas, Texas area, disrupting Internet connectivity for <a href="https://radar.cloudflare.com/as11427"><u>Spectrum (AS11427)</u></a> customers. Spectrum <a href="https://x.com/Ask_Spectrum/status/1971651914283851975"><u>acknowledged the service interruption</u></a> in a post on X, followed by <a href="https://x.com/Ask_Spectrum/status/1971722840279077229"><u>another post</u></a> four and a half hours later stating that the issue had been resolved. Although neither post cited the bullet as the cause of the disruption, <a href="https://www.wfaa.com/article/tech/stray-bullet-caused-major-spectrum-outages-north-texas/287-e72cdefc-6a0a-4a1e-b181-6d02bc60b732"><u>news reports</u></a> attributed the claim to a Spectrum spokesperson. Overall, the disruption was relatively minor, lasting for just two hours between 18:00-20:00 UTC (13:00-15:00 local time), with traffic dropping less than 25% as compared to the prior week.</p>
    <div>
      <h3>South Africa</h3>
      <a href="#south-africa">
        
      </a>
    </div>
    <p>“Major cable breaks” disrupted Internet connectivity for customers of <a href="https://radar.cloudflare.com/as37457"><u>Telkom (AS37457)</u></a> in <a href="https://radar.cloudflare.com/za"><u>South Africa</u></a> on September 27. Although Telkom acknowledged the <a href="https://x.com/TelkomZA/status/1971901592413913294"><u>initial service disruption</u></a> and its <a href="https://x.com/TelkomZA/status/1971921589316080109"><u>subsequent resolution</u></a> in posts on X, it didn’t provide any information about the cause in these posts. However, it apparently later <a href="https://mybroadband.co.za/news/cellular/612245-telkom-network-suffers-national-outage.html"><u>issued a statement</u></a>, stating “<i>Telkom confirms that mobile voice and data services, which were disrupted earlier on Saturday due to major cable breaks, have now been fully restored nationwide.</i>” The disruption lasted six hours, from 08:00-14:00 UTC (10:00-16:00 local time), with traffic dropping as much as 50% as compared to the previous week.</p>
    <div>
      <h2>Power outages cause Internet disruptions</h2>
      <a href="#power-outages-cause-internet-disruptions">
        
      </a>
    </div>
    
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p>A reported <a href="https://x.com/airtel_tanzania/status/1940072844446359845"><u>power outage at one of Airtel Tanzania's data centers</u></a> on July 1 resulted in a multi-hour disruption in connectivity for its mobile customers. The service interruption occurred between 11:30-18:00 UTC (14:30-21:00 local time), with traffic dropping on <a href="https://radar.cloudflare.com/as37133"><u>Airtel Tanzania (AS37133)</u></a> by as much as 40% as compared to the previous week.</p>
    <div>
      <h3>Czech Republic</h3>
      <a href="#czech-republic">
        
      </a>
    </div>
    <p>According to the Industry and Trade Ministry in the <a href="https://radar.cloudflare.com/cz"><u>Czech Republic</u></a>, <a href="https://www.reuters.com/world/europe/czech-republic-hit-by-major-power-outage-2025-07-04/"><u>a fallen power cable caused a widespread power outage</u></a> on July 4. This power outage impacted Internet connectivity within the country, with <a href="https://x.com/CloudflareRadar/status/1941237676730089797"><u>traffic dropping</u></a> by as much as 32%. Traffic fell just after the power outage began at 10:00 UTC (12:00 local time), and although it was <a href="https://www.reuters.com/world/europe/czech-republic-hit-by-major-power-outage-2025-07-04/"><u>“nearly fully resolved”</u></a> by 16:00 UTC (18:00 local time), traffic did not return to expected levels until closer to 20:00 UTC (22:00 local time). This trailing traffic recovery aligns with a <a href="https://www.expats.cz/czech-news/article/czechia-picks-up-the-pieces-after-power-outage-why-it-happened-and-what-the-future-holds"><u>published report</u></a> that noted “<i>While ČEPS, the national transmission system operator, restored full grid functionality by mid-afternoon, tens of thousands remained without electricity into the evening.</i>”</p>
    <div>
      <h3>St. Vincent and the Grenadines</h3>
      <a href="#st-vincent-and-the-grenadines">
        
      </a>
    </div>
    <p>On <a href="https://radar.cloudflare.com/vc"><u>St. Vincent and the Grenadines</u></a>, the St Vincent Electricity Services Limited (VINLEC) <a href="https://www.facebook.com/VINLECSVG/posts/st-vincent-electricity-services-limited-vinlec-experienced-a-system-failure-at-a/1308214567765820/"><u>stated in a Facebook post</u></a> that a “system failure” caused a power outage that affected customers on mainland St. Vincent. According to <a href="https://www.vinlec.com/"><u>VINLEC</u></a>, the system failed at approximately 11:30 local time on August 16 (03:30 UTC on August 17), and power was restored to all customers just after 04:00 local time on August 17 (08:00 UTC). During the four-hour power outage, which also disrupted Internet connectivity, traffic dropped by as much as 80% below expected levels.</p>
    <div>
      <h3>Curaçao</h3>
      <a href="#curacao">
        
      </a>
    </div>
    <p>In <a href="https://radar.cloudflare.com/cw"><u>Curaçao</u></a>, a series of Facebook posts from <a href="https://www.aqualectra.com/"><u>Aqualectra</u></a>, the island’s water and power company, <a href="https://www.facebook.com/AqualectraUtilityCuracao/posts/pfbid02wBV7CqovjuSTX52NCpYVqKAjzGkgoAurCUVnrVDCqKEA8hNpyRoh96SaGTUQ7C8Ll"><u>confirmed</u></a> that there was a power outage, and provided updates on the <a href="https://www.facebook.com/AqualectraUtilityCuracao/posts/pfbid017xNQW9sbLnmXEHo3y8mU22cbKtdzYXoKfVL7fFJ1pomMTHitty5wg5ZjN1YnMDgl"><u>progress</u></a> towards <a href="https://www.facebook.com/AqualectraUtilityCuracao/posts/pfbid021MAkFoaSVZiN8inieUxryV3ACVhZy1bjkSmp5MgG5PgceSWZ1X6i6SJAD7z1gM32l"><u>restoration</u></a>. The impact of the power outage to Internet connectivity was visible in traffic disruptions across several Internet service providers, including <a href="https://radar.cloudflare.com/as52233"><u>Flow (AS52233)</u></a> and <a href="https://radar.cloudflare.com/as11081"><u>UTS (AS11081)</u></a>. The observed disruptions lasted for most of the day, with traffic dropping around 06:45 UTC (02:45 local time) and recovering to expected levels around 23:45 UTC (19:45 local time). During the disruption, <a href="https://bsky.app/profile/radar.cloudflare.com/post/3lxf4cn53cv2p"><u>the country's traffic dropped by over 80%</u></a> as compared to the previous week, with Flow experiencing a near complete outage.</p>
    <div>
      <h3>Cuba</h3>
      <a href="#cuba">
        
      </a>
    </div>
    <p>Wide-scale power outages occur all too frequently in <a href="https://radar.cloudflare.com/cu"><u>Cuba</u></a>, and when power is lost, Internet connectivity follows. We have <a href="https://www.google.com/search?q=cuba+power+outage+site%3Ablog.cloudflare.com"><u>covered many such events in this series of blog posts</u></a> over the last several years, and the latest occurred on September 10. That morning, <a href="https://x.com/OSDE_UNE/status/1965770929675608214"><u>an X post</u></a> from the <a href="https://www.unionelectrica.cu/"><u>Unión Eléctrica de Cuba</u></a> reported the collapse of the national electric power system at 09:14 local time (13:14 UTC) following the unexpected shutdown of the <a href="https://www.gem.wiki/Antonio_Guiteras_Thermoelectric_Power_Plant_(CTE)"><u>Antonio Guiteras Thermoelectric Power Plant (CTE)</u></a>. The island’s Internet traffic dropped by nearly 60% (as compared to expected levels) almost immediately, and remained lower than normal for over a day, returning to expected levels around 17:15 UTC on September 11 (13:15 local time) when the Ministerio de Energía y Minas de Cuba <a href="https://x.com/EnergiaMinasCub/status/1966191043952410754"><u>posted on X</u></a> that the national electric system had been restored.</p>
    <div>
      <h3>Gibraltar</h3>
      <a href="#gibraltar">
        
      </a>
    </div>
    <p>A contractor cutting through three high-voltage cables caused a nationwide power outage in <a href="https://radar.cloudflare.com/gi"><u>Gibraltar</u></a> on September 16, according to a <a href="https://www.facebook.com/gibraltargovernment/posts/pfbid0ZDLtEtVEYwSgKGn6J3eWgvneMo1mhB6cTrhHpTgLKhguL9ZqB5qfT4ijrUDsqFhrl"><u>Facebook post from the Gibraltar government</u></a>. This power outage resulted in a disruption to Internet traffic between 11:15-18:30 UTC (13:15-20:30 local time), with traffic <a href="https://bsky.app/profile/radar.cloudflare.com/post/3lyykvuty7c2s"><u>falling as much as 80%</u></a> below the previous week's levels.</p>
    <div>
      <h2>Earthquake</h2>
      <a href="#earthquake">
        
      </a>
    </div>
    
    <div>
      <h3>Kamchatka Peninsula, Russia</h3>
      <a href="#kamchatka-peninsula-russia">
        
      </a>
    </div>
    <p>A <a href="https://earthquake.usgs.gov/earthquakes/eventpage/us6000qw60/executive"><u>magnitude 8.8 earthquake</u></a> struck the <a href="https://radar.cloudflare.com/traffic/2125072"><u>Kamchatka Peninsula</u></a> in <a href="https://radar.cloudflare.com/ru"><u>Russia</u></a> at 23:24 UTC on July 29 (11:24 local time on July 30), and was powerful enough to trigger <a href="https://www.reuters.com/business/environment/huge-quake-russia-triggers-tsunami-warnings-around-pacific-2025-07-30/"><u>tsunami warnings</u></a> for <a href="https://radar.cloudflare.com/jp"><u>Japan</u></a>, <a href="https://radar.cloudflare.com/traffic/5879092"><u>Alaska</u></a>, <a href="https://radar.cloudflare.com/traffic/5855797"><u>Hawaii</u></a>, <a href="https://radar.cloudflare.com/gu"><u>Guam</u></a>, and other Russian regions. The graphs below show that there was an immediate impact to Internet traffic across several networks in the region, including <a href="https://radar.cloudflare.com/as12389"><u>Rostelecom (AS12389)</u></a> and <a href="https://radar.cloudflare.com/as42742"><u>InterkamService (AS42742)</u></a>, where traffic dropped by 75% or more. While traffic started to recover almost immediately across both providers, traffic on Rostelecom approached expected levels much more quickly than on InterkamService.</p>
    <div>
      <h2>Targeted cyberattack</h2>
      <a href="#targeted-cyberattack">
        
      </a>
    </div>
    
    <div>
      <h3>Yemen</h3>
      <a href="#yemen">
        
      </a>
    </div>
    <p>A <a href="https://www.yemenmonitor.com/en/Details/ArtMID/908/ArticleID/147420"><u>cyberattack targeting Houthi-controlled YemenNet</u></a> <a href="https://radar.cloudflare.com/as30873"><u>(AS30873)</u></a> on August 11 briefly disrupted connectivity across the network in <a href="https://radar.cloudflare.com/ye"><u>Yemen</u></a>. A significant drop in traffic occurred at around 14:15 UTC (17:15 local time), recovering by 15:00 UTC (18:00 local time). This observed drop in traffic aligns with the reported timing and duration of the attack, which was focused on YemenNet’s ADSL infrastructure.</p><p>The attack also apparently impacted YemenNet’s routing, as announced IPv4 address space began to decline as the attack commenced. Although the attack ended within an hour after it started, announced address space remained depressed for approximately an additional hour, reaching as low as 510 /24s (blocks of 256 IPv4 addresses) being announced, down from a “steady state” of 870 /24s.</p>
    <div>
      <h2>Fire causes infrastructure damage</h2>
      <a href="#fire-causes-infrastructure-damage">
        
      </a>
    </div>
    
    <div>
      <h3>Egypt</h3>
      <a href="#egypt">
        
      </a>
    </div>
    <p>A <a href="https://english.alarabiya.net/News/north-africa/2025/07/07/a-fire-at-a-telecom-company-in-cairo-injures-14-and-temporarily-disrupts-service"><u>fire at the Ramses Central Exchange in Cairo, Egypt</u></a> on July 7 disrupted telecommunications services for a number of providers with infrastructure in the facility. The fire broke out in a Telecom Egypt equipment room, and impacted connectivity across multiple providers, including <a href="https://radar.cloudflare.com/as36992"><u>Etisalat (AS36992)</u></a>, <a href="https://radar.cloudflare.com/as37069"><u>Mobinil (AS37069)</u></a>, <a href="https://radar.cloudflare.com/as24863"><u>Orange Egypt (AS24863)</u></a>, and <a href="https://radar.cloudflare.com/as24835"><u>Vodafone Egypt (AS24835)</u></a>. Internet traffic across these providers initially dropped at 14:30 UTC (17:30 local time). Recovery to expected levels varied across the providers, with Etisalat recovering by July 9, Vodafone and Mobinil by July 10, and Orange Egypt on July 11.</p><p>On July 10, Telecom Egypt <a href="https://www.zawya.com/en/economy/north-africa/telecom-egypt-restores-services-after-ramses-central-fire-s2msr114"><u>announced</u></a> that services affected by the fire had been restored, after operations were transferred to alternative exchanges.</p>
    <div>
      <h2>Technical problems</h2>
      <a href="#technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Starlink</h3>
      <a href="#starlink">
        
      </a>
    </div>
    <p>Global satellite Internet service provider <a href="https://radar.cloudflare.com/as14593"><u>Starlink (AS14593)</u></a> acknowledged a July 24 network outage through a <a href="https://x.com/Starlink/status/1948474586699571518"><u>post on X</u></a>. The Vice President of Network Engineering at SpaceX explained, in a <a href="https://x.com/michaelnicollsx/status/1948509258024452488"><u>subsequent X post</u></a>, that “<i>The outage was due to failure of key internal software services that operate the core network.</i>”</p><p>Traffic initially dropped around 19:15 UTC, and the disruption lasted approximately 2.5 hours. The impact of the Starlink outage was particularly noticeable in countries including <a href="https://x.com/CloudflareRadar/status/1948491791574986771"><u>Yemen and Sudan</u></a>, where traffic dropped by approximately 50%, as well as in <a href="https://x.com/CloudflareRadar/status/1948497510235820236"><u>Zimbabwe, South Sudan, and Chad</u></a>.</p>
    <div>
      <h3>China</h3>
      <a href="#china">
        
      </a>
    </div>
    <p>At around 16:30 UTC on August 19 (00:30 local time on August 20), we observed an anomalous 25% drop in <a href="https://radar.cloudflare.com/cn"><u>China’s</u></a> Internet traffic. Our analysis of related metrics found that this disruption caused a drop in the share of IPv4 traffic, as well as a spike in the share of HTTP traffic (meaning that HTTPS traffic share had fallen), as shown in the graphs below.</p><p>Further analysis also found the share of <a href="https://blog.cloudflare.com/tcp-resets-timeouts/#sources-of-anomalous-connections"><u>TCP connections terminated in the Post SYN stage</u></a> doubled during the observed outage, from 39% to 78%, as shown below. The cause of these unusual observations was ultimately uncovered by a <a href="https://gfw.report/blog/gfw_unconditional_rst_20250820/en/"><u>Great Firewall Report blog post</u></a>, which stated, in part: “<i>Between approximately 00:34 and 01:48 (Beijing Time, UTC+8) on August 20, 2025, the Great Firewall of China (GFW) exhibited anomalous behavior by unconditionally injecting forged TCP RST+ACK packets to disrupt all connections on TCP port 443. This incident caused massive disruption of the Internet connections between China and the rest of the world. … The responsible device does not match the fingerprints of any known GFW devices, suggesting that </i><b><i>the incident was caused by either a new GFW device or a known device operating in a novel or misconfigured state</i></b><i>.</i>” This explanation is consistent with the anomalies visible in the Radar graphs.</p>
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>Subscribers of <a href="https://radar.cloudflare.com/as23674"><u>Nayatel (AS23674)</u></a> experienced an approximately 90 minute disruption to Internet connectivity on September 24, due to a <a href="https://x.com/nayatelpk/status/1970791157404954809"><u>reported outage at an upstream provider</u></a>. Traffic dropped as much as 57% between around 09:15-10:45 UTC (14:15-15:45 local). <a href="https://radar.cloudflare.com/as38193"><u>Transworld (AS38193)</u></a> is one of several <a href="https://radar.cloudflare.com/routing/as23674?dateStart=2025-09-24&amp;dateEnd=2025-09-24#connectivity"><u>upstream providers</u></a> to Nayatel, and a more significant drop in traffic is visible for that network, lasting from around 09:15-12:15 UTC (14:15-17:15 local time). The Nayatel disruption was likely less significant than the one seen at Transworld because Transworld is upstream of only a portion of the prefixes originated by Nayatel — traffic from other Nayatel prefixes was carried by other providers that remained available.</p>
    <div>
      <h2>No definitive cause</h2>
      <a href="#no-definitive-cause">
        
      </a>
    </div>
    
    <div>
      <h3>Iran</h3>
      <a href="#iran">
        
      </a>
    </div>
    <p>Several weeks after experiencing a <a href="https://blog.cloudflare.com/q2-2025-internet-disruption-summary/#iran"><u>full Internet shutdown</u></a>, <a href="https://radar.cloudflare.com/ir"><u>Iran</u></a> again experienced a sudden drop in Internet traffic around 21:00 UTC on July 5 (00:30 local time on July 6), with <a href="https://x.com/CloudflareRadar/status/1941640046005617038"><u>traffic falling 80%</u></a> as compared to the prior week. While most of the “unknown” disruptions covered in this series of posts are observed but have no associated acknowledgement or explanation, this disruption had multiple competing explanations.</p><p>A <a href="https://www.iranintl.com/en/202507067645"><u>published report</u></a> noted “<i>IRNA, Iran’s official news agency, cited the state-run Telecommunications Infrastructure Company, reporting a national-level disruption in international connectivity that affected most internet service providers Saturday night. Yet government officials have not publicly addressed the cause.</i>” However, posts from civil society groups that follow Internet connectivity in Iran (<a href="https://github.com/net4people/bbs/issues/497"><u>net4people</u></a>, <a href="https://x.com/filterbaan/status/1941628644125724793"><u>FilterWatch</u></a>) suggested that the disruption was again due to an intentional shutdown. And a <a href="https://x.com/filterbaan/status/1941628644125724793"><u>post thread on X</u></a> referenced, and disputed, a claim that the disruption was due to a DDoS attack. Unfortunately, no definitive root cause for this disruption could be found.</p>
    <div>
      <h3>Colombia</h3>
      <a href="#colombia">
        
      </a>
    </div>
    <p>Customers of Claro Colombia experienced an Internet disruption that lasted just over 30 minutes on August 6, with <a href="https://x.com/CloudflareRadar/status/1953168943423864954"><u>traffic falling two-thirds or more</u></a> as compared to the prior week between 16:45 - 17:20 UTC. The disruption affected multiple ASNs owned by Claro, including <a href="https://radar.cloudflare.com/as10620"><u>AS10620</u></a>, <a href="https://radar.cloudflare.com/as14080"><u>AS14080</u></a>, and <a href="https://radar.cloudflare.com/as26611"><u>AS26611</u></a>. (The Telmex Colombia and Comcel names shown in the graphs below are historical – Telmex and Comcel <a href="https://es.wikipedia.org/wiki/Claro_(Colombia)"><u>merged in 2012</u></a> and have operated under the Claro brand since then.) Claro did not acknowledge the disruption on social media, nor did it provide any explanation for it.</p>
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>A near-complete outage at <a href="https://radar.cloudflare.com/pk"><u>Pakistani</u></a> backbone provider <a href="https://radar.cloudflare.com/as17557"><u>PTCL (AS17557)</u></a> caused traffic from the network provider to drop by 90% at 16:10 UTC (21:10 local time) on August 19. PTCL acknowledged the issue in a <a href="https://x.com/PTCLOfficial/status/1957873019084255347"><u>post on X</u></a>, noting “<i>We are currently facing data connectivity challenges on our PTCL and Ufone services.</i>” Although they <a href="https://x.com/PTCLOfficial/status/1957977425377391076"><u>published a subsequent post</u></a> several hours later after service was restored, they did not provide any additional information about the cause of the outage. However, <a href="https://bloompakistan.com/nationwide-internet-disruption-hits-pakistan-ptcl-ufone-nayatel-services-severely-affected/"><u>one published report</u></a> claimed “<i>The disruption was primarily caused by a technical fault in PTCL’s fiber optic infrastructure.</i>” while <a href="https://bloompakistan.com/nationwide-internet-disruption-hits-pakistan-ptcl-ufone-nayatel-services-severely-affected/"><u>another report</u></a> claimed “<i>According to industry sources, the internet disruption in Pakistan may be connected to a technical fault in the fiber optic backbone or issues with main internet providers responsible for international online traffic.</i>”</p><p>Interestingly, <a href="https://radar.cloudflare.com/dns/as17557?dateStart=2025-08-19&amp;dateEnd=2025-08-19#dns-query-volume"><u>traffic from PTCL to Cloudflare’s 1.1.1.1 DNS resolver</u></a> spiked as the outage began, and the <a href="https://radar.cloudflare.com/dns/as17557?dateStart=2025-08-19&amp;dateEnd=2025-08-19#dns-transport-protocol"><u>share of requests made over UDP</u></a> grew from 94% to 99%. In addition, <a href="https://radar.cloudflare.com/routing/as17557?dateStart=2025-08-19&amp;dateEnd=2025-08-19"><u>routing data</u></a> shows that there was also a small drop in announced IPv4 address space coincident with the outage. However, these additional observations do not necessarily confirm a “technical fault in PTCL’s fiber optic infrastructure” as the ultimate cause of the disruption.</p>
    <div>
      <h3>South Africa</h3>
      <a href="#south-africa">
        
      </a>
    </div>
    <p>To their credit, <a href="https://radar.cloudflare.com/za"><u>South African</u></a> provider <a href="https://radar.cloudflare.com/as37053"><u>RSAWEB (AS37053)</u></a> <a href="https://netnotice.rsaweb.co.za/cmfe4mzqc0001ngqrbyfq0waj"><u>quickly acknowledged an issue</u></a> with their FTTx and Enterprise connectivity on September 10, but neither their initial post nor subsequent updates provided any information on the cause of the problem. Whatever the cause, it resulted in a near-complete loss of Internet traffic from RSAWEB between 15:00 and 16:30 UTC (17:00 - 18:30 local time).</p><p>Routing data also shows a loss of just two announced /24 address blocks concurrent with the outage, dropping from 470 to 468. Unless all of RSAWEB’s outbound traffic was flowing through this limited amount of IP address space, it seems unusual that the withdrawal of just 512 IPv4 addresses from the routing table would have such a significant impact on the network’s traffic.</p>
    <div>
      <h3>SpaceX Starlink</h3>
      <a href="#spacex-starlink">
        
      </a>
    </div>
    <p>After experiencing a <a href="#starlink"><u>brief disruption in July</u></a> due to a software failure, <a href="https://radar.cloudflare.com/as14593"><u>Starlink (AS14593)</u></a> suffered another short disruption between 04:00-05:00 UTC on September 15. Although Starlink generally acknowledges disruptions to their global network on <a href="https://x.com/Starlink"><u>their X account</u></a>, often providing a root cause, in this case they <a href="https://www.datacenterdynamics.com/en/news/starlink-suffers-brief-monday-outage-globally/"><u>apparently published an acknowledgement</u></a> on X, but deleted it after the issue was resolved. In addition to the drop in traffic, we observed a concurrent drop in announced IPv4 address space and a spike in BGP announcements (likely withdrawals), suggesting that the disruption may have been caused by a network-related issue.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The recent <a href="https://blog.cloudflare.com/new-regional-internet-traffic-and-certificate-transparency-insights-on-radar/"><u>launch of regional traffic insights</u></a> on Radar brings yet another perspective to our ability to investigate observed Internet traffic anomalies. We can now drill down at regional and network levels, and explore DNS traffic, connection bandwidth and latency, TCP connection tampering, and announced IP address space, helping us better understand the impact of such events. And while these blog posts feature graphs from <a href="https://radar.cloudflare.com/"><u>Radar</u></a> and the <a href="https://radar.cloudflare.com/explorer"><u>Radar Data Explorer</u></a>, the underlying data is available from our <a href="https://developers.cloudflare.com/api/resources/radar/"><u>rich API</u></a>. You can use the API to retrieve data to do your own local monitoring or analysis, or the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/radar#cloudflare-radar-mcp-server-"><u>Radar MCP server</u></a> to incorporate Radar data into your AI tools.</p><p>The Cloudflare Radar team is constantly monitoring for Internet disruptions, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>email</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <guid isPermaLink="false">6d4g6SeHoMoMsnUve0rdrq</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Nationwide Internet shutdown in Afghanistan extends localized disruptions]]></title>
            <link>https://blog.cloudflare.com/nationwide-internet-shutdown-in-afghanistan/</link>
            <pubDate>Tue, 30 Sep 2025 10:05:00 GMT</pubDate>
            <description><![CDATA[ On September 29, 2025, Internet connectivity was completely shut down across Afghanistan, impacting business, education, finance, and government services. ]]></description>
    <content:encoded><![CDATA[ <p>Just after 11:30 UTC (16:00 local time) on Monday, September 29, 2025, subscribers of wired Internet providers in <a href="https://radar.cloudflare.com/traffic/af"><u>Afghanistan</u></a> experienced a <a href="https://x.com/CloudflareRadar/status/1972649804821057727"><u>brief service interruption</u></a>, lasting until just before 12:00 UTC (16:30 local time). Cloudflare <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=&amp;dt=1d&amp;asn=as131284&amp;compAsn=as38742&amp;timeCompare=2025-09-21"><u>traffic data for AS38742 (Afghan Wireless) and AS131284 (Etisalat)</u></a> shows that traffic from these mobile providers remained available during that period.</p><p>However, just after 12:30 UTC (17:00 local time), the Internet was <a href="https://x.com/CloudflareRadar/status/1972682041759076637"><u>completely shut down</u></a>, with Afghan news outlet TOLOnews initially <a href="https://x.com/TOLONewsEnglish/status/1972641017745588605"><u>reporting in a post on X</u></a> that “<i>Sources have confirmed to TOLOnews that today (Monday), afternoon, fiber-optic Internet will be shut down across the country.</i>” This shutdown is likely an extension of the regional shutdowns of fiber optic connections that took place earlier in September, and it will <a href="https://www.dw.com/en/afghanistan-taliban-shuts-down-internet-indefinitely/a-74181089"><u>reportedly</u></a> remain in force “until further notice”. (The earlier regional shutdowns are discussed in more detail below.)</p><p>While Monday’s first shutdown was only partial, with mobile connectivity apparently remaining available, the graphs below show that the second event took the country completely offline, with <a href="https://radar.cloudflare.com/traffic/af?dateStart=2025-09-29&amp;dateEnd=2025-09-29#traffic-trends"><u>web</u></a> and <a href="https://radar.cloudflare.com/dns/af?dateStart=2025-09-29&amp;dateEnd=2025-09-29#dns-query-volume"><u>DNS</u></a> traffic dropping to zero at a national level.</p><p>While the shutdown will impact subscribers to fixed and mobile Internet services, it also “<a href="https://www.turkiyetoday.com/world/afghanistan-descends-into-total-communications-blackout-under-taliban-order-3207737"><u>threatens to paralyze critical services including banking, customs operations and emergency communications</u></a>” across the country. The <a href="https://x.com/TOLONewsEnglish/status/1972641017745588605"><u>X post from TOLOnews</u></a> also noted that television and radio networks would face disruptions.</p><p>HTTP request traffic is traffic coming from web browsers, applications, and automated tools, and is a clear signal of the availability of Internet connectivity. The graph below shows this request volume dropping sharply as the shutdown was implemented.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6x0Wdv7U6SzS7jXrfQuETT/135e0e512741c79e969e4e34800f02d7/image9.png" />
          </figure><p><sup><i>HTTP request traffic from Afghanistan, September 29, 2025</i></sup></p><p>Cloudflare sends bytes back in response to those HTTP requests (“HTTP bytes”), as well as sending bytes back in response to traffic associated with other services, such as our <a href="https://1.1.1.1/dns"><u>1.1.1.1 DNS resolver</u></a>, <a href="https://www.cloudflare.com/application-services/products/dns/"><u>authoritative DNS</u></a>, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP</u></a>, etc. (“total bytes”). Cloudflare stopped receiving client traffic from the services when the shutdown began, causing the bytes transferred in response to drop to zero.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qDezs9ngevvCWvDiCAG2C/1c21c568ca730fa0f5fc15964c619c2b/image6.png" />
          </figure><p><sup><i>Internet traffic from Afghanistan, September 29, 2025</i></sup></p><p><a href="https://1.1.1.1/dns"><u>1.1.1.1</u></a> is Cloudflare’s privacy-focused DNS resolver, and processes DNS lookup requests from clients. As connectivity was cut, traffic to the service disappeared.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pSshBSiJVVwJs8Ro8DAv0/eb77f8e08299c58155ffb4dccad8ac01/image10.png" />
          </figure><p><sup><i>DNS query traffic to Cloudflare’s 1.1.1.1 resolver from Afghanistan, September 29, 2025</i></sup></p><p>At a <a href="https://radar.cloudflare.com/traffic/af?dateStart=2025-09-29&amp;dateEnd=2025-09-29#traffic-volume-by-region"><u>regional</u></a> level, it appears that traffic from Kabul fell slightly later than traffic from the other regions, trailing them by approximately a half hour.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5l9bImRqxgDdWGOHXCKgYC/b4c6e0eae1b314ce750c4aa6581c3321/image12.png" />
          </figure><p><sup><i>HTTP request traffic from the top five provinces in Afghanistan, September 29, 2025</i></sup></p><p>The delay in traffic loss seen in Kabul may be associated with a more gradual loss of traffic seen at <a href="https://radar.cloudflare.com/AS38742"><u>AS38742 (Afghan Wireless)</u></a>, which saw traffic approach zero just after 13:00 UTC (17:30 local time). This conjecture is supported by a <a href="https://kabulnow.com/2025/09/taliban-order-nationwide-shutdown-of-internet-and-mobile-services-in-afghanistan/"><u>published report</u></a> that noted “Residents across Kabul and several provincial cities reported on Monday that fiber-optic services were no longer available, with only limited mobile data functioning briefly before signal towers stopped working altogether.”</p><p>Interestingly, it appears that as of 00:00 UTC (04:30 local time) on September 30, we continue to see a very small amount of traffic from this network. (This is in contrast to other networks, whose lines disappeared from the graph around 12:30 UTC (17:00 local time)).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3MMrOcXD5yg4GSDT9Y1p08/31de2f6c92043241db214e982a89556c/image7.png" />
          </figure><p><sup><i>HTTP request traffic from the top 10 ASNs in Afghanistan, September 29, 2025</i></sup></p><p>Network providers announce IP address space that they are responsible for to other networks, enabling the routing of traffic to and from those IP addresses. When these announcements are withdrawn, the resources in that address space, whether clients or servers, can no longer reach, or are no longer reachable from, the rest of the Internet.</p><p>In Afghanistan, announced IPv4 address space dropped rapidly as the shutdown was implemented, falling by two-thirds from 604 to 197 announced /24s (blocks of 256 IPv4 addresses) in the first 20 minutes, and then dropping further over the next 90 minutes. Through the end of the day, several networks continued to announce a small amount of IPv4 address space: four /24s from <a href="https://radar.cloudflare.com/AS38742"><u>AS38742 (Afghan Wireless)</u></a>, two from <a href="https://radar.cloudflare.com/AS149024"><u>AS149024 (Afghan Bawar ICT Services)</u></a>, and one each from <a href="https://radar.cloudflare.com/AS138322"><u>AS138322 (Afghan Wireless)</u></a> and <a href="https://radar.cloudflare.com/AS136479"><u>AS136479 (Cyber Telecom)</u></a>.</p><p>Afghan Wireless is a mobile connectivity provider, and <a href="http://afghanbawar.com/"><u>Afghan Bawar</u></a> and <a href="https://cts.af/about-us/"><u>Cyber Telecom</u></a> appear to offer wireless/mobile services as well. The <a href="https://radar.cloudflare.com/routing/prefix/152.36.203.0/24"><u>prefixes</u></a> still visible from Afghan Wireless appear to be routed through <a href="https://radar.cloudflare.com/as17557"><u>AS17557 (Pakistan Telecom)</u></a>, while the prefixes from the other two providers (<a href="https://radar.cloudflare.com/routing/prefix/163.223.180.0/23"><u>Afghan Bawar</u></a>, <a href="https://radar.cloudflare.com/routing/prefix/103.126.5.0/24"><u>Cyber Telecom</u></a>) appear to be routed through <a href="https://radar.cloudflare.com/as40676"><u>AS40676 (Psychz Networks)</u></a>, a US-based solutions provider.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/11yzruult8bJD7p15lEwiG/e19f71e771f8c1162717b75675ecf94e/image5.png" />
          </figure><p><sup><i>Announced IPv4 address space from Afghanistan, September 29, 2025</i></sup></p><p>Announced IPv6 address space fell as well, though not quite as catastrophically, dropping by three-fourths almost immediately, from 262,407 /48s (blocks of over 1.2 septillion IPv6 addresses) to 65,542.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1s5p2v84LAyw1igdAKuaNT/4c597dca9a55c4f4a8a9c69c60e8a022/image1.png" />
          </figure><p><sup><i>Announced IPv6 address space from Afghanistan, September 29, 2025</i></sup></p>
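    <p>For reference, the address counts behind the prefix sizes mentioned above work out as shown below (a quick illustrative calculation, not tied to any Cloudflare tooling):</p>
            <pre><code>// Addresses covered by a prefix: 2^(addressBits - prefixLength).
// IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
const addressesInPrefix = (addressBits: bigint, prefixLength: bigint) =>
  2n ** (addressBits - prefixLength);

console.log(addressesInPrefix(32n, 24n));  // 256 addresses in an IPv4 /24
console.log(addressesInPrefix(128n, 48n)); // 1208925819614629174706176 addresses in an IPv6 /48 (~1.2 septillion)</code></pre>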
    <div>
      <h3>Regional shutdowns by the Taliban to prevent “immoral activities”</h3>
      <a href="#regional-shutdowns-by-the-taliban-to-prevent-immoral-activities">
        
      </a>
    </div>
    <p>In mid-September, the Taliban <a href="https://amu.tv/200798/"><u>ordered the shutdown of fiber optic Internet connectivity</u></a> in multiple provinces across Afghanistan, as part of a drive to “prevent immorality”. It was the first such ban issued since the Taliban took full control of the country in August 2021.</p><p>These regional shutdowns <a href="https://www.afghanstudiescenter.org/taliban-internet-shutdown-blocks-thousands-of-afghan-students-from-online-classes/"><u>blocked</u></a> Afghan students from attending online classes, <a href="https://theweek.com/world-news/afghanistan-taliban-high-speed-internet-women-education"><u>impacted</u></a> commerce and banking, and <a href="https://www.dw.com/en/afghanistan-whats-at-stake-as-taliban-cut-internet/a-74043564"><u>limited access</u></a> to government agencies and institutions such as passport, registration, and customs offices. As many as <a href="https://amu.tv/200798/"><u>15 provinces</u></a> experienced shutdowns, and we review the observed impacts across several of them below, using the regional traffic data <a href="https://blog.cloudflare.com/new-regional-internet-traffic-and-certificate-transparency-insights-on-radar/"><u>recently made available</u></a> on Cloudflare Radar.</p><p><a href="https://radar.cloudflare.com/traffic/1147288?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Balkh</u></a> appeared to be one of the earliest targeted provinces, with traffic dropping midday (UTC) on September 15. While some nominal recovery occurred on September 23, traffic remained well below pre-shutdown levels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1t0mFLkPjixHM9m7NWnMha/d855d9dc8301dae33e3ec7abb4f9232c/image2.png" />
          </figure><p><sup><i>Internet traffic from Balkh, Afghanistan, September 1-28, 2025</i></sup></p><p>After several days of peak traffic levels double those seen in previous weeks, traffic in <a href="https://radar.cloudflare.com/traffic/1123230?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Takhar</u></a> fell on September 16, remaining near zero until September 21, when a small amount of connectivity was apparently restored.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3l7wfMGBMt8pOVxFb3NvvX/b99ad045b8a2eb0a55f25d3f448fe29e/image8.png" />
          </figure><p><sup><i>Internet traffic from Takhar, Afghanistan, September 1-28, 2025</i></sup></p><p>In <a href="https://radar.cloudflare.com/traffic/1138335?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Kandahar</u></a>, lower peak traffic volumes are visible between September 17 and September 21. The partial restoration of traffic is coincident with the restoration of Internet services highlighted in a <a href="https://menafn.com/1110093436/Internet-Services-Restored-in-Some-Areas-of-Afghanistans-Kandahar"><u>published report</u></a>, though it notes that “The restoration of services is limited to point-to-point connections for key government offices, including banks, customs offices, and the Directorate for National ID Cards.”</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lzuyfuIoH76wzw8wpNR7n/e3d2baf026e9ab9d3fa0ecd7926fc127/image11.png" />
          </figure><p><sup><i>Internet traffic from Kandahar, Afghanistan, September 1-28, 2025</i></sup></p><p><a href="https://radar.cloudflare.com/traffic/1147537?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Baghlan</u></a> experienced an anomalous spike in traffic on September 16, with total traffic spiking 3x higher than peaks seen during the previous weeks. However, on September 17, traffic dropped to a fraction of pre-shutdown levels. Except for a return to near-normal levels on September 21 &amp; 22, the disruption remained in place through the end of the month.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5OuHHfoiGIbcUP7qWhM6Mb/cc3da98c3127c7d27a416d02adc716fd/image14.png" />
          </figure><p><sup><i>Internet traffic from Baghlan, Afghanistan, September 1-28, 2025</i></sup></p><p>Traffic in <a href="https://radar.cloudflare.com/traffic/1132366?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Nangarhar</u></a> was disrupted between September 19-22, but quickly recovered to pre-shutdown levels once restored.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FYwcwHLwlj5VC9HIyB5tX/a6c047b3fe0f1a909b8e38452405a7eb/image13.png" />
          </figure><p><sup><i>Internet traffic from Nangarhar, Afghanistan, September 1-28, 2025</i></sup></p><p>After experiencing an apparent issue at the start of the month, Internet traffic in <a href="https://radar.cloudflare.com/traffic/1131461?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Oruzgan</u></a> again fell on September 19. After an apparently complete shutdown, a small amount of traffic was again visible beginning on September 23.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QoIbIy0hbaKhWq7Mb8WAD/44cb159f930ed2328ceff1e4197d4d12/image4.png" />
          </figure><p><sup><i>Internet traffic from Oruzgan, Afghanistan, September 1-28, 2025</i></sup></p><p>Internet connectivity was also disrupted in the province of <a href="https://radar.cloudflare.com/traffic/1140025?dateStart=2025-09-01&amp;dateEnd=2025-09-28"><u>Herat</u></a>, although in a different pattern. From September 22-25, partial Internet outages were implemented between 16:30-03:30 UTC (21:00-08:00 local time), with traffic volumes dropping to approximately half of those seen at the same times during prior weeks. The intent of these “Internet curfew” shutdowns is unclear, but Herat residents <a href="https://tolonews.com/afghanistan-195915"><u>noted</u></a> that they “severely disrupted their business and educational activities”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1C94mQTOp9CxMJAH0qxp8T/fb2a9456f09d67125e505963254a1080/image3.png" />
          </figure><p><sup><i>Internet traffic from Herat, Afghanistan, September 16-29, 2025</i></sup></p><p>While <a href="https://blog.cloudflare.com/tag/internet-shutdown/"><u>Internet shutdowns</u></a> remain all too common around the world, most (though not all) are comparatively short-lived, and are generally in response to a local event, such as exams, unrest/riots, elections, etc. Given the broad impact of this shutdown across all facets of daily personal, social, and professional life in Afghanistan, <a href="https://amu.tv/201377/"><u>analysts state</u></a> that it "could deepen Afghanistan’s digital isolation, further damage its struggling economy and drive more Afghans out of work at a time when humanitarian needs are already severe."</p>
    <div>
      <h3>Where can I learn more?</h3>
      <a href="#where-can-i-learn-more">
        
      </a>
    </div>
    <p>You can follow the latest state of <a href="https://radar.cloudflare.com/traffic/af"><u>Internet connectivity in Afghanistan</u></a> on Cloudflare Radar. The Cloudflare Radar team will continue to monitor traffic from Afghanistan as well, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via email.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">7MCAuGOYyNejN3pChXzmW7</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[A deep dive into Cloudflare’s September 12, 2025 dashboard and API outage]]></title>
            <link>https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-12-dashboard-and-api-outage/</link>
            <pubDate>Sat, 13 Sep 2025 07:19:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Dashboard and a set of related APIs were unavailable or partially available for an hour starting on Sep 12, 17:57 UTC.  The outage did not affect the serving of cached files via the  ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>What Happened</h2>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>We had an outage in our Tenant Service API which led to a broad outage of many of our APIs and the Cloudflare Dashboard. </p><p>The incident’s impact stemmed from several issues, but the immediate trigger was a bug in the dashboard. This bug caused repeated, unnecessary calls to the Tenant Service API. The API calls were managed by a React useEffect hook, but we mistakenly included a problematic object in its dependency array. Because this object was recreated on every state or prop change, React treated it as “always new,” causing the useEffect to re-run each time. As a result, the API call executed many times during a single dashboard render instead of just once. This behavior coincided with a service update to the Tenant Service API, compounding instability and ultimately overwhelming the service, which then failed to recover.</p><p>When the Tenant Service became overloaded, it had an impact on other APIs and the dashboard because Tenant Service is part of our API request authorization logic.  Without Tenant Service, API request authorization can not be evaluated.  When authorization evaluation fails, API requests return 5xx status codes.</p><p>We’re very sorry about the disruption.  The rest of this blog goes into depth on what happened, and what steps we are taking to prevent it from happening again.  </p>
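    <p>To illustrate the failure mode (a simplified sketch, not the actual dashboard code; the endpoint path and response shape here are stand-ins), a hook wired up this way will re-run on every render:</p>
            <pre><code>import { useEffect, useState } from "react";

// Simplified sketch of the bug class described above -- not the actual dashboard code.
function OrganizationList(props: { accountId: string }) {
  const [orgs, setOrgs] = useState([] as string[]);

  // Bug: this object literal is recreated on every render, so React treats the
  // dependency as "always new" and re-runs the effect each time.
  const query = { accountId: props.accountId };

  useEffect(() => {
    fetch("/organizations?account=" + query.accountId) // illustrative path
      .then((res) => res.json())
      .then((data) => setOrgs(data.result ?? []));
  }, [query]); // depending on a fresh object means one API call per render

  // Fix: depend on the primitive value instead -- [props.accountId] -- or
  // memoize the object with useMemo so its identity is stable across renders.
  return null; // rendering omitted
}</code></pre><p>Combined with retry-on-failure logic, a loop like this multiplies load precisely when the backend is already struggling to keep up.</p>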
    <div>
      <h2>Timeline</h2>
      <a href="#timeline">
        
      </a>
    </div>
    <table><tr><th><p>Time (UTC)</p></th><th><p>Description</p></th></tr><tr><td><p>2025-09-12 16:32</p></td><td><p>A new version of the Cloudflare Dashboard is released that contains a bug that will trigger many more calls to the /organizations endpoint, including retries in the event of failure.</p></td></tr><tr><td><p>2025-09-12 17:50</p></td><td><p>A new version of the Tenant API Service is deployed.</p></td></tr><tr><td><p>2025-09-12 17:57</p></td><td><p>The Tenant API Service becomes overwhelmed as new versions are deploying. Dashboard availability begins to drop. <b>IMPACT START</b></p></td></tr><tr><td><p>2025-09-12 18:17</p></td><td><p>After providing more resources to the Tenant API Service, the Cloudflare API climbs to 98% availability, but the dashboard does not recover. <b>IMPACT DECREASE</b></p></td></tr><tr><td><p>2025-09-12 18:58</p></td><td><p>In an attempt to restore dashboard availability, some erroring code paths are removed and a new version of the Tenant Service is released. This ultimately proves to be a bad change and causes API impact again. <b>IMPACT INCREASE</b></p></td></tr><tr><td><p>2025-09-12 19:01</p></td><td><p>In an effort to relieve traffic against the Tenant API Service, a temporary rate-limiting rule is published.</p></td></tr><tr><td><p>2025-09-12 19:12</p></td><td><p>The problematic changes to the Tenant API Service are reverted, and dashboard availability returns to 100%. <b>IMPACT END</b></p></td></tr></table>
    <div>
      <h3>Dashboard availability</h3>
      <a href="#dashboard-availability">
        
      </a>
    </div>
    <p>The Cloudflare dashboard was severely impacted throughout the full duration of the incident.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OI52n6YrKdSBxQIShw5al/3ecb621d4e6a56d43699d16e0e984055/BLOG-3011_1.png" />
          </figure>
    <div>
      <h3>API availability</h3>
      <a href="#api-availability">
        
      </a>
    </div>
    <p>The Cloudflare API was severely impacted for two periods during the incident when the Tenant API Service was down.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/19udRtEzYji4dSRwljaYxP/44f42a9cbc92cba22fdead28c177da21/BLOG-3011_2.png" />
          </figure>
    <div>
      <h2>How we responded</h2>
      <a href="#how-we-responded">
        
      </a>
    </div>
    <p>Our first goal in an incident is to restore service.  Often that involves fixing the underlying issue directly, but not always.  In this case we noticed increased usage across our Tenant Service, so we focused on reducing the load and increasing the available resources.  We installed a global rate limit on the Tenant Service to help regulate the load.  The Tenant Service is a GoLang process that runs on Kubernetes in a subset of our datacenters.  We increased the number of pods available as well to help improve throughput.  While we did this, we had others on the team continue to investigate why we were seeing the unusually high usage.  Ultimately, increasing the resources available to the tenant service helped with availability but was insufficient to restore normal service.</p>
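    <p>As a rough illustration of this kind of safeguard (a minimal token-bucket sketch in TypeScript; the actual Tenant Service is written in Go, and the real limit was a temporary rule rather than this code):</p>
            <pre><code>// Minimal token-bucket rate limiter sketch -- illustrative only, not the rule we deployed.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private ratePerSecond: number, private burst: number) {
    this.tokens = burst;
    this.lastRefill = Date.now();
  }

  allow(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    // Refill tokens based on elapsed time, capped at the burst size.
    this.tokens = Math.min(this.burst, this.tokens + elapsedSeconds * this.ratePerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request may proceed
    }
    return false; // over the limit: shed the request instead of overloading the service
  }
}

// Hypothetical numbers: allow up to 500 requests/second with bursts of 1,000.
const limiter = new TokenBucket(500, 1000);</code></pre>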
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UOY1fEUaSzxRE6tNrsBPu/fd02638a5d2e37e47f5c9a9888b5eac3/BLOG-3011_3.png" />
          </figure><p>After the Tenant Service began reporting healthy again and the API largely recovered, we still observed a considerable number of errors being reported from the service. We theorized that these were responsible for the ongoing Dashboard availability issues and made a patch to the service with the expectation that it would improve the API health and restore the dashboard to a healthy state. Ultimately this change degraded service further and was quickly reverted. The second outage can be seen in the graph above.</p><p>It’s painful to have an outage like this.  That said, there were a few things that helped lessen the impact.  Our automatic alerting service quickly identified the correct people to join the call and start working on remediation.  Additionally, this was a failure in the control plane which has strict separation of concerns from the data plane.  Thus, the outage did not affect services on Cloudflare’s network.  The majority of users at Cloudflare were unaffected unless they were making configuration changes or using our dashboard.</p>
    <div>
      <h2>Going forward</h2>
      <a href="#going-forward">
        
      </a>
    </div>
    <p>We believe it’s important to learn from our mistakes and this incident is an opportunity to make some improvements.  Those improvements can be categorized as either ways to reduce / eliminate the impact of a similar change or as improvements to our observability tooling to better inform the team during future events.</p>
    <div>
      <h3>Reducing impact</h3>
      <a href="#reducing-impact">
        
      </a>
    </div>
    <p>We use Argo Rollouts for releases, which monitors deployments for errors and automatically rolls back a service when an error is detected.  We’ve been migrating our services over to Argo Rollouts but have not yet updated the Tenant Service to use it.  Had it been in place, we would have automatically rolled back the second Tenant Service update, limiting the second outage.  This work had already been scheduled by the team, and we’ve increased the priority of the migration.</p><p>When we restarted the Tenant Service, everyone’s dashboard began to re-authenticate with the API.  This caused the API to become unstable again, causing further dashboard issues.  This pattern is a common one, often referred to as a Thundering Herd: once a resource or service becomes available again, every client tries to use it at once. In this case, the effect was amplified by the bug in our dashboard logic. The fix for that bug was released via a hotfix shortly after the impact ended.  We’ll also be introducing changes to the dashboard that add random delays to spread out retries and reduce contention.</p><p>Finally, the Tenant Service was not allocated sufficient capacity to handle spikes in load like this. We’ve allocated substantially more resources to this service, and are improving the monitoring so that we will be proactively alerted before this service hits capacity limits.</p>
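    <p>The retry-spreading change can be pictured roughly like this (a simplified sketch of exponential backoff with jitter, not the shipped dashboard code):</p>
            <pre><code>// Simplified sketch of retry with exponential backoff and random jitter --
// not the shipped dashboard code. Spreading retries out avoids a thundering
// herd when a recovering service comes back online.
function sleep(ms: number) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function fetchWithBackoff(url: string, maxAttempts = 5) {
  for (let attempt = 0; attempt !== maxAttempts; attempt++) {
    const res = await fetch(url);
    if (res.ok) {
      return res;
    }
    // Exponential backoff (1s, 2s, 4s, ...) capped at 30s, plus up to 1s of
    // random jitter so that clients do not all retry at the same instant.
    const base = Math.min(30000, 1000 * 2 ** attempt);
    await sleep(base + Math.random() * 1000);
  }
  throw new Error("request failed after " + maxAttempts + " attempts");
}</code></pre>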
    <div>
      <h3>Improving visibility</h3>
      <a href="#improving-visibility">
        
      </a>
    </div>
    <p>We immediately saw an increase in our API usage but found it difficult to identify which requests were retries and which were new requests.  Had we known we were seeing a sustained, large volume of new requests, it would have been easier to identify the issue as a loop in the dashboard.  We are changing how we call our APIs from the dashboard to include additional information, such as whether a request is a retry or a new request.</p><p>We’re very sorry about the disruption.  We will continue to investigate this issue and make improvements to our systems and processes.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">7xKJsK5ZM4e3RrUVd8IVPQ</guid>
            <dc:creator>Tom Lianza</dc:creator>
            <dc:creator>Joaquin Madruga</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare incident on August 21, 2025]]></title>
            <link>https://blog.cloudflare.com/cloudflare-incident-on-august-21-2025/</link>
            <pubDate>Fri, 22 Aug 2025 00:58:00 GMT</pubDate>
            <description><![CDATA[ On August 21, 2025, an influx of traffic directed toward clients hosted in AWS us-east-1 caused severe congestion on links between Cloudflare and us-east-1. In this post, we explain the details. ]]></description>
            <content:encoded><![CDATA[ <p>On August 21, 2025, an influx of traffic directed toward clients hosted in the Amazon Web Services (AWS) us-east-1 facility caused severe congestion on links between Cloudflare and AWS us-east-1. This impacted many users who were connecting to or receiving connections from Cloudflare via servers in AWS us-east-1 in the form of high latency, packet loss, and failures to origins.</p><p>Customers with origins in AWS us-east-1 began experiencing impact at 16:27 UTC. The impact was substantially reduced by 19:38 UTC, with intermittent latency increases continuing until 20:18 UTC.</p><p>This was a regional problem between Cloudflare and AWS us-east-1, and global Cloudflare services were not affected. The degradation in performance was limited to traffic between Cloudflare and AWS us-east-1. The incident was a result of a surge of traffic from a single customer that overloaded Cloudflare's links with AWS us-east-1. It was a network congestion event, not an attack or a BGP hijack.</p><p>We’re very sorry for this incident. In this post, we explain what the failure was, why it occurred, and what we’re doing to make sure this doesn’t happen again.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare helps anyone to build, connect, protect, and accelerate their websites on the Internet. Most customers host their websites on origin servers that Cloudflare does not operate. To make their sites fast and secure, they put Cloudflare in front as a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxy</u></a>. </p><p>When a visitor requests a page, Cloudflare will first inspect the request. If the content is already cached on Cloudflare’s global network, or if the customer has configured Cloudflare to serve the content directly, Cloudflare will respond immediately, delivering the content without contacting the origin. If the content cannot be served from cache, we fetch it from the origin, serve it to the visitor, and cache it along the way (if it is eligible). The next time someone requests that same content, we can serve it directly from cache instead of making another round trip to the origin server. </p><p>When Cloudflare responds to a request with the cached content, it will send the response traffic over internal Data Center Interconnect (DCI) links through a series of network equipment and eventually reach the routers that represent our network edge (our “edge routers”) as shown below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23A3EjLWZ9Z9EW6jRejDi2/3febaedc062c61031d38de91b215b363/BLOG-2938_2.png" />
          </figure><p>Our internal network capacity is designed to be larger than the available traffic demand in a location to account for failures of redundant links, failover from other locations, traffic engineering within or between networks, or even traffic surges from users. The majority of Cloudflare’s network links were operating normally, but some edge router links to an AWS peering switch had insufficient capacity to handle this particular surge. </p>
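    <p>The request flow described above can be sketched roughly as follows (an illustrative sketch with invented names, not Cloudflare's actual proxy code):</p>
            <pre><code>// Illustrative sketch of a caching reverse proxy decision flow -- not Cloudflare's code.
const cache = new Map(); // key: URL, value: cached response

// Stand-in for a network round trip to the customer's origin server.
async function fetchFromOrigin(url: string) {
  return { body: "origin response for " + url, cacheable: true };
}

async function handleRequest(url: string) {
  const cached = cache.get(url);
  if (cached) {
    // Cache hit: respond immediately from Cloudflare's network,
    // without contacting the origin.
    return cached;
  }
  // Cache miss: fetch from the origin, serve it to the visitor,
  // and store it along the way if it is eligible for caching.
  const response = await fetchFromOrigin(url);
  if (response.cacheable) {
    cache.set(url, response);
  }
  return response;
}</code></pre>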
    <div>
      <h2>What happened</h2>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>At approximately 16:27 UTC on August 21, 2025, a customer started sending many requests from AWS us-east-1 to Cloudflare for objects in Cloudflare’s cache. These requests generated a volume of response traffic that saturated all available direct peering connections between Cloudflare and AWS. This initial saturation became worse when AWS, in an effort to alleviate the congestion, withdrew some BGP advertisements to Cloudflare over some of the congested links. This action rerouted traffic to an additional set of peering links connected to Cloudflare via an offsite network interconnection switch, which subsequently also became saturated, leading to significant performance degradation. The impact was compounded for two reasons: one of the direct peering links was operating at half-capacity due to a pre-existing failure, and the Data Center Interconnect (DCI) that connected Cloudflare’s edge routers to the offsite switch was due for a capacity upgrade. The diagram below illustrates this using approximate capacity estimates:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lQgbq0PNeaeC3R9i8J5fV/d4a6df7b17d30ec33b6c4ea69bae61eb/BLOG-2938_3.png" />
          </figure><p>In response, our incident team immediately engaged with our partners at AWS to address the issue. Through close collaboration, we successfully alleviated the congestion and fully restored services for all affected customers.</p>
    <div>
      <h2>Timeline</h2>
      <a href="#timeline">
        
      </a>
    </div>
    <table><tr><th><p><b>Time</b></p></th><th><p><b>Description</b></p></th></tr><tr><td><p>2025-08-21 16:27 UTC</p></td><td><p>Traffic surge for a single customer begins, doubling total traffic from Cloudflare to AWS</p><p><b>IMPACT START</b></p></td></tr><tr><td><p>2025-08-21 16:37 UTC</p></td><td><p>AWS begins withdrawing prefixes from Cloudflare on congested PNI (Private Network Interconnect) BGP sessions</p></td></tr><tr><td><p>2025-08-21 16:44 UTC</p></td><td><p>Network team is alerted to internal congestion in Ashburn (IAD)</p></td></tr><tr><td><p>2025-08-21 16:45 UTC</p></td><td><p>Network team is evaluating options for response, but AWS prefixes are unavailable on paths that are not congested due to their withdrawals</p></td></tr><tr><td><p>2025-08-21 17:22 UTC</p></td><td><p>AWS BGP prefix withdrawals result in a higher amount of dropped traffic</p><p><b>IMPACT INCREASE</b></p></td></tr><tr><td><p>2025-08-21 17:45 UTC</p></td><td><p>Incident is raised for customer impact in Ashburn (IAD)</p></td></tr><tr><td><p>2025-08-21 19:05 UTC</p></td><td><p>Rate limiting of the single customer causing the traffic surge decreases congestion</p></td></tr><tr><td><p>2025-08-21 19:27 UTC</p></td><td><p>Additional traffic engineering actions by the network team fully resolve congestion</p><p><b>IMPACT DECREASE</b></p></td></tr><tr><td><p>2025-08-21 19:45 UTC</p></td><td><p>AWS begins reverting BGP withdrawals as requested by Cloudflare</p></td></tr><tr><td><p>2025-08-21 20:07 UTC</p></td><td><p>AWS finishes normalizing BGP prefix announcements to Cloudflare over IAD PNIs</p></td></tr><tr><td><p>2025-08-21 20:18 UTC</p></td><td><p><b>IMPACT END</b></p></td></tr></table><p>When impact started, we saw a significant amount of traffic related to one customer, resulting in congestion:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3pfUKAP3TfVgUseokIKnvf/114c19e3b12c59da980a2d89a719b7db/BLOG-2938_4.png" />
          </figure><p>This was handled by manual traffic engineering actions from both Cloudflare and AWS. You can see some of the attempts by AWS to alleviate the congestion by looking at the number of IP prefixes AWS was advertising to Cloudflare over the course of the outage. The lines in different colors correspond to the number of prefixes advertised per BGP session with us. The dips show AWS withdrawing prefixes from the BGP sessions in an attempt to steer traffic elsewhere:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WRQYJDJMUeh1ghWCLFvsa/df1e27124fc975e287c6504f0945a2ca/BLOG-2938_5.png" />
          </figure><p>The congestion in the network caused network queues on the routers to grow significantly and begin dropping packets. Our edge routers were dropping high priority packets consistently during the outage, as seen in the chart below, which shows the queue drops for our Ashburn routers during the impact period:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zTkZJ5ZwDSIPHD5Wj19Vi/fc7e144ea7cb90b9f4705342989c3669/BLOG-2938_6.png" />
          </figure><p>The primary impact to customers as a result of this congestion would have been increased latency, loss (timeouts), or low throughput. We have a set of latency Service Level Objectives (SLOs), measured by probes that imitate customer requests back to their origins and track availability and latency. We can see that during the impact period, the percentage of requests meeting the target latency SLO dips below an acceptable level, in lockstep with the packet drops during the outage:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rt0mrwBOfNPfIczonc20W/2a1b9f20cd625737cffe309ee0aae608/BLOG-2938_7.png" />
          </figure><p>After the congestion was alleviated, there was a brief period where both AWS and Cloudflare were attempting to normalize the prefix advertisements that had been adjusted to attempt to mitigate the congestion. That caused a long tail of latency that may have impacted some customers, which is why you see the packet drops resolve before the customer latencies are restored.</p>
    <div>
      <h2>Remediations and follow-up steps</h2>
      <a href="#remediations-and-follow-up-steps">
        
      </a>
    </div>
    <p>This event has underscored the need for enhanced safeguards to ensure that one customer's usage patterns cannot negatively affect the broader ecosystem. Our key takeaways are the necessity of architecting for better customer isolation to prevent any single entity from monopolizing shared resources and impacting the stability of the platform for others, and augmenting our network infrastructure to have sufficient capacity to meet demand. </p><p>To prevent a recurrence of this issue, we are implementing a multi-phased mitigation strategy. In the short and medium term: </p><ul><li><p>We are developing a mechanism to selectively deprioritize a customer’s traffic if it begins to congest the network to a degree that impacts others.</p></li><li><p>We are expediting the Data Center Interconnect (DCI) upgrades which will provide network capacity significantly above what it is today.</p></li><li><p>We are working with AWS to make sure their and our BGP traffic engineering actions do not conflict with one another in the future.</p></li></ul><p>Looking further ahead, our long-term solution involves building a new, enhanced traffic management system. This system will allot network resources on a per-customer basis, creating a budget that, once exceeded, will prevent a customer's traffic from degrading the service for anyone else on the platform. This system will also allow us to automate many of the manual actions that were taken to attempt to remediate the congestion seen during this incident.</p>
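    <p>Conceptually, the per-customer budget could work along these lines (a sketch of the idea only; the real system is still being designed, and all names and numbers here are invented):</p>
            <pre><code>// Illustrative per-customer traffic budget -- a sketch of the concept, not Cloudflare's design.
// Bytes sent per customer are tracked over a window; customers over budget are
// deprioritized so a single surge cannot crowd everyone else off a congested link.
const BUDGET_BYTES = 10n * 1024n * 1024n * 1024n; // hypothetical 10 GiB per window

const usage = new Map(); // customerId -> bytes sent in the current window

function recordBytes(customerId: string, bytes: bigint) {
  const current = usage.get(customerId) ?? 0n;
  usage.set(customerId, current + bytes);
}

function priorityFor(customerId: string) {
  const current = usage.get(customerId) ?? 0n;
  // Over-budget traffic is still delivered, but marked low priority so that
  // routers drop it first when links toward a peer (such as AWS us-east-1) congest.
  return current > BUDGET_BYTES ? "low" : "normal";
}</code></pre>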
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Customers accessing AWS us-east-1 through Cloudflare experienced an outage due to insufficient network congestion management during an unusually high-traffic event.</p><p>We are sorry for the disruption this incident caused for our customers. We are actively making the improvements described above to ensure greater stability moving forward and to prevent this problem from happening again.</p><p>
</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">4561MhtGYAXUCbb1Y5vNWa</guid>
            <dc:creator>David Tuber</dc:creator>
            <dc:creator>Emily Music</dc:creator>
            <dc:creator>Bryton Herdes</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare 1.1.1.1 incident on July 14, 2025]]></title>
            <link>https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/</link>
            <pubDate>Tue, 15 Jul 2025 15:05:00 GMT</pubDate>
            <description><![CDATA[ July 14th, 2025, Cloudflare made a change to our service topologies that caused an outage for 1.1.1.1 on the edge, causing downtime for 62 minutes for customers using the 1.1.1.1 public DNS Resolver. ]]></description>
            <content:encoded><![CDATA[ <p>On 14 July 2025, Cloudflare made a change to our service topologies that caused an outage for 1.1.1.1 on the edge, resulting in downtime for 62 minutes for customers using the 1.1.1.1 public DNS Resolver as well as intermittent degradation of service for Gateway DNS.</p><p>Cloudflare's 1.1.1.1 Resolver service became unavailable to the Internet starting at 21:52 UTC and ending at 22:54 UTC. The majority of 1.1.1.1 users globally were affected. For many users, not being able to resolve names using the 1.1.1.1 Resolver meant that basically all Internet services were unavailable. This outage can be observed on <a href="https://radar.cloudflare.com/dns?dateStart=2025-07-14&amp;dateEnd=2025-07-15"><u>Cloudflare Radar</u></a>.</p><p>The outage occurred because of a misconfiguration of legacy systems used to maintain the infrastructure that advertises Cloudflare’s IP addresses to the Internet.</p><p>This was a global outage. During the outage, Cloudflare's 1.1.1.1 Resolver was unavailable worldwide.</p><p>We’re very sorry for this outage. The root cause was an internal configuration error and <u>not</u> the result of an attack or a <a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024/"><u>BGP hijack</u></a>. In this blog, we’re going to talk about what the failure was, why it occurred, and what we’re doing to make sure this doesn’t happen again.</p>
    <div>
      <h2><b>Background</b></h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare <a href="https://blog.cloudflare.com/announcing-1111"><u>introduced</u></a> the <a href="https://one.one.one.one/"><u>1.1.1.1</u></a> public DNS Resolver service in 2018. Since the announcement, 1.1.1.1 has become one of the most popular DNS Resolver IP addresses and it is free for anyone to use.</p><p>Almost all of Cloudflare's services are made available to the Internet using a routing method known as <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>anycast</u></a>, a well-known technique intended to allow traffic for popular services to be served in many different locations across the Internet, increasing capacity and performance. This is the best way to ensure we can globally manage our traffic, but also means that problems with the advertisement of this address space can result in a global outage.   </p><p>Cloudflare announces these anycast routes to the Internet in order for traffic to those addresses to be delivered to a Cloudflare data center, providing services from many different places. Most Cloudflare services are provided globally, like the 1.1.1.1 public DNS Resolver, but a subset of services are specifically constrained to particular regions. </p><p>These services are part of our <a href="https://developers.cloudflare.com/data-localization/"><u>Data Localization Suite</u></a> (DLS), which allows customers to configure Cloudflare in a variety of ways to meet their compliance needs across different countries and regions. One of the ways in which Cloudflare manages these different requirements is to make sure the right service's IP addresses are Internet-reachable only where they need to be, so your traffic is handled correctly worldwide. A particular service has a matching "service topology" – that is, traffic for a service should be routed only to a <a href="https://blog.cloudflare.com/introducing-the-cloudflare-data-localization-suite/"><u>particular set of locations</u></a>.</p><p>On June 6, during a release to prepare a service topology for a future DLS service, a configuration error was introduced: the prefixes associated with the 1.1.1.1 Resolver service were inadvertently included alongside the prefixes that were intended for the new DLS service. This configuration error sat dormant in the production network as the new DLS service was not yet in use,  but it set the stage for the outage on July 14. Since there was no immediate change to the production network there was no end-user impact, and because there was no impact, no alerts were fired.</p>
    <div>
      <h2><b>Incident Timeline</b></h2>
      <a href="#incident-timeline">
        
      </a>
    </div>
    <table><tr><td><p>Time (UTC)</p></td><td><p>Event</p></td></tr><tr><td><p>2025-06-06 17:38</p></td><td><p><b>ISSUE INTRODUCED - NO IMPACT</b></p><p>
</p><p>A configuration change was made for a DLS service that was not yet in production. This configuration change accidentally included a reference to the 1.1.1.1 Resolver service and, by extension, the prefixes associated with the 1.1.1.1 Resolver service.</p><p>
</p><p>This change did not result in a change of network configuration, and so routing for the 1.1.1.1 Resolver was not affected.</p><p>
</p><p>Since there was no change in traffic, no alerts fired, but the misconfiguration lay dormant for a future release. </p></td></tr><tr><td><p>2025-07-14 21:48</p></td><td><p><b>IMPACT START</b></p><p>
</p><p>A configuration change was made for the same DLS service. The change attached a test location to the non-production service; this location itself was not live, but the change triggered a refresh of network configuration globally.</p><p>
</p><p>Due to the earlier configuration error linking the 1.1.1.1 Resolver's IP addresses to our non-production service, those 1.1.1.1 IPs were inadvertently included when we changed how the non-production service was set up.</p><p>
</p><p>The 1.1.1.1 Resolver prefixes started to be withdrawn from production Cloudflare data centers globally.</p></td></tr><tr><td><p>2025-07-14 21:52</p></td><td><p>DNS traffic to 1.1.1.1 Resolver service begins to drop globally</p></td></tr><tr><td><p>2025-07-14 21:54</p></td><td><p>Related, non-causal event: BGP origin hijack of 1.1.1.0/24 exposed by withdrawal of routes from Cloudflare. This <b>was not</b> a cause of the service failure, but an unrelated issue that was suddenly visible as that prefix was withdrawn by Cloudflare. </p></td></tr><tr><td><p>2025-07-14 22:01</p><p>
</p></td><td><p><b>IMPACT DETECTED</b></p><p>
</p><p>Internal service health alerts begin to fire for the 1.1.1.1 Resolver</p></td></tr><tr><td><p>2025-07-14 22:01</p></td><td><p><b>INCIDENT DECLARED</b></p></td></tr><tr><td><p>2025-07-14 22:20</p></td><td><p><b>FIX DEPLOYED</b></p><p>
</p><p>Revert was initiated to restore the previous configuration. To accelerate full restoration of service, a manually triggered action is validated in testing locations before being executed.</p></td></tr><tr><td><p>2025-07-14 22:54</p></td><td><p><b>IMPACT ENDS</b></p><p>
</p><p>Resolver alerts cleared and DNS traffic on Resolver prefixes return to normal levels</p></td></tr><tr><td><p>2025-07-14 22:55</p></td><td><p><b>INCIDENT RESOLVED</b></p></td></tr></table>
    <div>
      <h2><b>Impact</b></h2>
      <a href="#impact">
        
      </a>
    </div>
    <p>Any traffic coming to Cloudflare via the 1.1.1.1 Resolver service on these IPs was impacted. Traffic to each of these addresses was also impacted on the corresponding routes.</p>
            <pre><code>1.1.1.0/24
1.0.0.0/24 
2606:4700:4700::/48
162.159.36.0/24
162.159.46.0/24
172.64.36.0/24
172.64.37.0/24
172.64.100.0/24
172.64.101.0/24
2606:54c1:13::/48
2a06:98c1:54::/48</code></pre>
            <p>When the impact started we observed an immediate and significant drop in queries over UDP, TCP and <a href="https://www.rfc-editor.org/rfc/rfc7858"><u>DNS over TLS (DoT)</u></a>. Most users have 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, or 2606:4700:4700::1001 configured as their DNS server. Below you can see the query rate for each of the individual protocols and how they were impacted during the incident:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XATlkx1Im1QhnBTJL3ER5/6cc65fce22bd66815c348dac555a1501/image1.png" />
          </figure><p>It’s worth noting that <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/"><u>DoH (DNS-over-HTTPS)</u></a> traffic remained relatively stable as most DoH users use the domain <a href="http://cloudflare-dns.com"><u>cloudflare-dns.com</u></a>, configured manually or through their browser, to access the public DNS resolver, rather than by IP address. DoH remained available and traffic was mostly unaffected as <a href="http://cloudflare-dns.com"><u>cloudflare-dns.com</u></a> uses a different set of IP addresses. Some DNS traffic over UDP that also used different IP addresses remained mostly unaffected as well.</p><p>As the corresponding prefixes were withdrawn, no traffic sent to those addresses could reach Cloudflare. We can see this in the timeline for the BGP announcements for 1.1.1.0/24:
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/c28k2YwaBVLevqmpV4cjG/f923ecef419b71e5b70cb6a6ca616bbd/image5.png" />
          </figure><p><sup><i>Pictured above is the timeline for BGP withdrawal and re-announcement of 1.1.1.0/24 globally</i></sup></p><p>When looking at the query rate of the withdrawn IPs it can be observed that almost no traffic arrives during the impact window. When the initial fix was applied at 22:20 UTC, a large spike in traffic can be seen before it drops off again. This spike is due to clients retrying their queries. When we started announcing the withdrawn prefixes again, queries were able to reach Cloudflare once more. It took until 22:54 UTC before routing was restored in all locations and traffic returned to mostly normal levels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vfTPQ6ndKXzsgphist0Mg/610477306f1f056b4cdf98fbbe274e5b/image6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67oZjnT3jx272udhoA5hp7/8c41c972162f81d020cb5d189885882a/image3.png" />
          </figure>
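    <p>As noted above, most DoH clients reach the resolver via the cloudflare-dns.com hostname rather than a hard-coded IP, which is why DoH traffic was largely unaffected. For example, a lookup using the resolver's documented JSON format looks like this (illustrative snippet; field names follow that JSON format):</p>
            <pre><code>// Illustrative DoH lookup against cloudflare-dns.com using the JSON format.
// The client connects by hostname, not by the 1.1.1.1 IP address, so it did not
// depend on the withdrawn 1.1.1.0/24 prefix during this incident.
async function dohLookup(name: string) {
  const params = new URLSearchParams({ name, type: "A" });
  const res = await fetch("https://cloudflare-dns.com/dns-query?" + params.toString(), {
    headers: { accept: "application/dns-json" },
  });
  const answer = await res.json();
  return answer.Answer ?? []; // resolved records, if any
}

dohLookup("example.com").then((records) => console.log(records));</code></pre>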
    <div>
      <h2><b>Technical description of the error and how it happened</b></h2>
      <a href="#technical-description-of-the-error-and-how-it-happened">
        
      </a>
    </div>
    
    <div>
      <h3>Failure of 1.1.1.1 Resolver Service</h3>
      <a href="#failure-of-1-1-1-1-resolver-service">
        
      </a>
    </div>
    <p>As described above, a configuration change on June 6 introduced an error in the service topology for a pre-production DLS service. On July 14, a second change to that service was made: an offline data center location was added to the service topology for the pre-production DNS service in order to allow for some internal testing. This change triggered a refresh of the global configuration of the associated routes, and it was at this point that the impact from the earlier configuration error was felt. The service topology for the 1.1.1.1 Resolver's prefixes was reduced from all locations down to a single, offline location. The effect was to trigger the global and immediate withdrawal of all 1.1.1.1 prefixes.</p><p>As routes to 1.1.1.1 were withdrawn, the 1.1.1.1 service itself became unavailable. Alerts fired and an incident was declared.</p>
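    <p>To make the failure mode concrete, a service topology can be thought of as a mapping from a service's prefixes to the locations allowed to announce them. The sketch below is purely illustrative (invented names and shapes, not Cloudflare's configuration format), but it shows why attaching the Resolver prefixes to a topology whose only location is offline withdraws them everywhere:</p>
            <pre><code>// Purely illustrative model of a service topology -- invented names and shapes,
// not Cloudflare's configuration format.
interface ServiceTopology {
  service: string;
  prefixes: string[];   // IP prefixes announced for this service
  locations: string[];  // data centers allowed to announce them
}

// The July 14 change attached a single offline test location to the pre-production
// DLS service. Because the June 6 error had inadvertently attached the Resolver
// prefixes to this same topology, those prefixes inherited the "one offline location"
// constraint when the global configuration was refreshed.
const preProdDls: ServiceTopology = {
  service: "dls-pre-production",
  prefixes: ["1.1.1.0/24", "1.0.0.0/24" /* ...other Resolver prefixes, in error... */],
  locations: ["offline-test-location"], // the only allowed location was not live
};

// Result: each prefix is announced only from its topology's locations, so the
// Resolver prefixes were withdrawn from every live data center.</code></pre>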
    <div>
      <h3>Technical Investigation and Analysis</h3>
      <a href="#technical-investigation-and-analysis">
        
      </a>
    </div>
    <p>The way that Cloudflare manages service topologies has been refined over time and currently consists of a combination of a legacy and a strategic system that are synced. Cloudflare's IP ranges are currently bound and configured across these systems, which dictate where an IP range should be announced (in terms of data center location) on the edge network. The legacy approach of hard-coding explicit lists of data center locations and attaching them to particular prefixes has proved error-prone, since (for example) bringing a new data center online requires many different lists to be updated and synced consistently. This model also has a significant flaw in that updates to the configuration do not follow a progressive deployment methodology: even though this release was peer-reviewed by multiple engineers, the change didn’t go through a series of canary deployments before reaching every Cloudflare data center. Our newer approach is to describe service topologies without needing to hard-code IP addresses, which better accommodates expansions to new locations and customer scenarios while also allowing for a staged deployment model, so changes can propagate slowly with health monitoring. During the migration between these approaches, we need to maintain both systems and synchronize data between them, which looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ofHPUKzoes5uJY7VluA0F/b39b729457ef62361443f7c83444d8fe/image2.png" />
          </figure><p>Initial alerts were triggered for the DNS Resolver at 22:01 UTC, indicating query, proxy, and data center failures. While investigating the alerts, we noted that traffic toward the Resolver prefixes had dropped drastically and was no longer being received at our edge data centers. Internally, we use BGP to control route advertisements, and we found that the Resolver routes from our servers were completely missing.</p><p>Once our configuration error had been exposed and Cloudflare systems had withdrawn the routes from our routing table, all of the 1.1.1.1 routes should have disappeared entirely from the global Internet routing table. However, this isn’t what happened with the prefix 1.1.1.0/24. Instead, we got reports from <a href="https://radar.cloudflare.com/routing/anomalies/hijack-107469"><u>Cloudflare Radar</u></a> that Tata Communications India (AS4755) had started advertising 1.1.1.0/24: from the perspective of the routing system, this looked exactly like a prefix hijack. This was unexpected to see while we were troubleshooting the routing problem, but to be perfectly clear: <b>this BGP hijack was not the cause of the outage.</b> We are following up with Tata Communications.</p>
    <div>
      <h3>Restoring the 1.1.1.1 Service</h3>
      <a href="#restoring-the-1-1-1-1-service">
        
      </a>
    </div>
    <p>We reverted to the previous configuration at 22:20 UTC. Almost instantly, we began re-advertising the BGP prefixes that had previously been withdrawn from the routers, including 1.1.1.0/24. This restored 1.1.1.1 traffic levels to roughly 77% of what they were prior to the incident. However, during the period since withdrawal, approximately 23% of the fleet of edge servers had been automatically reconfigured to remove required IP bindings as a result of the topology change. To add the configurations back, these servers needed to be reconfigured with our change management system which, for safety, is not an instantaneous process by default.</p><p>Restoring the IP bindings normally takes some time, as the network in individual locations is designed to be updated over the course of multiple hours. We implement a progressive rollout rather than updating all nodes at once, to ensure we don’t introduce additional impact. However, given the severity of the incident, we accelerated the rollout of the fix after verifying the changes in testing locations, to restore service as quickly and safely as possible. Normal traffic levels were observed at 22:54 UTC.</p>
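    <p>For illustration only, here is a hedged TypeScript sketch of a progressive rollout of this kind (the stage sizes, health check, and wait times are assumptions, not our actual change management system): a change is applied to progressively larger groups of servers, health is verified between stages, and the wait between stages can be shortened once the change has been validated in testing locations.</p>
    <pre><code>// Hedged sketch of a progressive rollout; not Cloudflare's real tooling.
type Server = { id: string; location: string };

async function applyBinding(server: Server) {
  // Re-apply the required IP bindings to one server (placeholder).
}

async function isHealthy(servers: Server[]) {
  // Placeholder health check: in practice this would verify that restored
  // bindings are serving traffic correctly and error rates are normal.
  return true;
}

async function progressiveRollout(fleet: Server[], accelerated: boolean) {
  const stageFractions = [0.01, 0.1, 0.5, 1.0]; // assumed stage sizes
  let done = 0;
  for (const fraction of stageFractions) {
    const target = Math.ceil(fleet.length * fraction);
    const stage = fleet.slice(done, target);
    await Promise.all(stage.map(applyBinding));
    done = target;
    if (!(await isHealthy(fleet.slice(0, done)))) {
      throw new Error("health check failed; halt and roll back");
    }
    // Normal operation spaces stages over hours; under incident conditions,
    // after validating in testing locations, the wait is shortened.
    const waitMs = accelerated ? 60_000 : 60 * 60 * 1000;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}
</code></pre>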
    <div>
      <h2><b>Remediation and follow-up steps</b></h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>We take incidents like this seriously, and we recognize the impact that this incident had. Though this specific issue has been resolved, we have identified several steps we can take to mitigate the risk of a similar problem occurring in the future. We are implementing the following plan as a result of this incident:</p><p><b>Staging Addressing Deployments: </b>Legacy components do not leverage a gradual, staged deployment methodology. Cloudflare will deprecate these systems in favor of modern, progressive, health-mediated deployment processes that surface problems earlier, one stage at a time, and allow us to roll back accordingly.</p><p><b>Deprecating Legacy Systems:</b> We are currently in an intermediate state in which current and legacy components need to be updated concurrently, so we will migrate our addressing systems away from risky deployment methodologies like this one. We will accelerate our deprecation of the legacy systems in order to meet higher standards for documentation and test coverage.</p>
    <div>
      <h2><b>Conclusion</b></h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare's 1.1.1.1 DNS Resolver service fell victim to an internal configuration error.</p><p>We are sorry for the disruption this incident caused for our customers. We are actively making the improvements described above to ensure greater stability moving forward and to prevent this problem from happening again.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[WARP]]></category>
            <guid isPermaLink="false">5rRaCTCC50CW9n2PKjL7xY</guid>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Joe Abley</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare service outage June 12, 2025]]></title>
            <link>https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/</link>
            <pubDate>Thu, 12 Jun 2025 22:00:00 GMT</pubDate>
            <description><![CDATA[ Multiple Cloudflare services, including Workers KV, Access, WARP and the Cloudflare dashboard, experienced an outage for up to 2 hours and 28 minutes on June 12, 2025. ]]></description>
            <content:encoded><![CDATA[ <p>On June 12, 2025, Cloudflare suffered a significant service outage that affected a large set of our critical services, including Workers KV, WARP, Access, Gateway, Images, Stream, Workers AI, Turnstile and Challenges, AutoRAG, Zaraz, and parts of the Cloudflare Dashboard.</p><p>This outage lasted 2 hours and 28 minutes, and globally impacted all Cloudflare customers using the affected services. The cause of this outage was a failure in the underlying storage infrastructure used by our Workers KV service, which is a critical dependency for many Cloudflare products and is relied upon for configuration, authentication, and asset delivery across the affected services. Part of this infrastructure is backed by a third-party cloud provider, which experienced an outage today and directly impacted the availability of our KV service.</p><p>We’re deeply sorry for this outage: this was a failure on our part, and while the proximate cause (or trigger) for this outage was a third-party vendor failure, we are ultimately responsible for our chosen dependencies and how we choose to architect around them.</p><p>This was not the result of an attack or other security event. No data was lost as a result of this incident. Cloudflare Magic Transit and Magic WAN, DNS, Cache, proxy, WAF, and related services were not directly impacted by this incident.</p>
    <div>
      <h2>What was impacted?</h2>
      <a href="#what-was-impacted">
        
      </a>
    </div>
    <p>As a rule, Cloudflare designs and builds our services on our own platform building blocks, and as such many of Cloudflare’s products are built to rely on the Workers KV service. </p><p>The following table details the impacted services, including the user-facing impact, operation failures, and increases in error rates observed:</p><table><tr><th><p><b>Product/Service</b></p></th><th><p><b>Impact</b></p></th></tr><tr><td><p><b>Workers KV</b></p><p>

</p></td><td><p>Workers KV saw 90.22% of requests failing: any request for a key-value pair that was not cached and required retrieving the value from Workers KV's origin storage backends failed with a 503 or 500 response code.</p><p>The remaining requests were successfully served from Workers KV's cache (status code 200 and 404) or returned errors within our expected limits and/or error budget.</p><p>This did not impact data stored in Workers KV.</p></td></tr><tr><td><p><b>Access</b></p><p>

</p></td><td><p>Access uses Workers KV to store application and policy configuration along with user identity information.</p><p>During the incident, Access failed 100% of identity-based logins for all application types, including Self-Hosted, SaaS, and Infrastructure. User identity information was unavailable to other services like WARP and Gateway during this incident. Access is designed to fail closed when it cannot successfully fetch policy configuration or a user’s identity (a brief sketch of this fail-closed behavior follows this table).</p><p>Active Infrastructure Application SSH sessions with command logging enabled failed to save logs due to a Workers KV dependency.</p><p>Access’ System for Cross-domain Identity Management (SCIM) service was also impacted due to its reliance on Workers KV and Durable Objects (which depended on KV) to store user information. During this incident, user identities were not updated due to Workers KV update failures. These failures would result in a 500 returned to identity providers. Some providers may require a manual re-synchronization, but most customers would have seen immediate service restoration once Access’ SCIM service was restored, due to retry logic by the identity provider.</p><p>Service authentication-based logins (e.g. service token, Mutual TLS, and IP-based policies) and Bypass policies were unaffected. No Access policy edits or changes were lost during this time.</p></td></tr><tr><td><p><b>Gateway</b></p><p>

</p></td><td><p>This incident did not affect most Gateway DNS queries, including those over IPv4, IPv6, DNS over TLS (DoT), and DNS over HTTPS (DoH).</p><p>However, there were two exceptions:</p><p>DoH queries with identity-based rules failed. This happened because Gateway couldn't retrieve the required user’s identity information.</p><p>Authenticated DoH was disrupted for some users. Users with active sessions with valid authentication tokens were unaffected, but those needing to start new sessions or refresh authentication tokens could not.</p><p>Users of Gateway proxy, egress, and TLS decryption were unable to connect, register, proxy, or log traffic.</p><p>This was due to our reliance on Workers KV to retrieve up-to-date identity and device posture information. Each of these actions requires a call to Workers KV, and when unavailable, Gateway is designed to fail closed to prevent traffic from bypassing customer-configured rules.</p></td></tr><tr><td><p><b>WARP</b></p><p>


</p></td><td><p>The WARP client was impacted due to core dependencies on Access and Workers KV, which are required for device registration and authentication. As a result, no new clients were able to connect or sign up during the incident.</p><p>Existing WARP users’ sessions that were routed through the Gateway proxy experienced disruptions, as Gateway was unable to perform its required policy evaluations.</p><p>Additionally, the WARP emergency disconnect override was rendered unavailable because of a failure in its underlying dependency, Workers KV.</p><p>Consumer WARP saw similar sporadic impact to the Zero Trust version.</p></td></tr><tr><td><p><b>Dashboard</b></p><p>

</p></td><td><p>Dashboard user logins and most of the existing dashboard sessions were unavailable. This was due to an outage affecting Turnstile, Durable Objects, Workers KV, and Access. The specific causes for login failures were:</p><p>Standard Logins (User/Password): Failed due to Turnstile unavailability.</p><p>Sign-in with Google (OIDC) Logins: Failed due to a KV dependency issue.</p><p>SSO Logins: Failed due to a full dependency on Access.</p><p>The Cloudflare v4 API was not impacted during this incident.</p></td></tr><tr><td><p><b>Challenges and Turnstile</b></p><p>

</p></td><td><p>The Challenge platform that powers Cloudflare Challenges and Turnstile saw a high rate of failure and timeout for siteverify API requests during the incident window due to its dependencies on Workers KV and Durable Objects.</p><p>We have kill switches in place to disable these calls in case of incidents and outages such as this. We activated these kill switches as a mitigation so that eyeballs are not blocked from proceeding. Notably, while these kill switches were active, Turnstile’s siteverify API (the API that validates issued tokens) could redeem valid tokens multiple times, potentially allowing for attacks where a bad actor might try to use a previously valid token to bypass a challenge.</p><p>There was no impact to Turnstile’s ability to detect bots. A bot attempting to solve a challenge would still have failed the challenge and thus would not have received a token.</p></td></tr><tr><td><p><b>Browser Isolation</b></p><p>

</p></td><td><p>Existing Browser Isolation sessions via Link-based isolation were impacted due to a reliance on Gateway for policy evaluation.</p><p>New link-based Browser Isolation sessions could not be initiated due to a dependency on Cloudflare Access. All Gateway-initiated isolation sessions failed due to its Gateway dependency.</p></td></tr><tr><td><p><b>Images</b></p><p>

</p></td><td><p>Batch uploads to Cloudflare Images were impacted during the incident window, with a 100% failure rate at the peak of the incident. Other uploads were not impacted.</p><p>Overall image delivery dipped to around 97% success rate. Image Transformations were not significantly impacted, and Polish was not impacted.</p></td></tr><tr><td><p><b>Stream</b></p><p>

</p></td><td><p>Stream’s error rate exceeded 90% during the incident window as video playlists were unable to be served. Stream Live observed a 100% error rate.</p><p>Video uploads were not impacted.</p></td></tr><tr><td><p><b>Realtime</b></p><p>

</p></td><td><p>The Realtime TURN (Traversal Using Relays around NAT) service uses KV and was heavily impacted. Error rates were near 100% for the duration of the incident window.</p><p>The Realtime SFU service (Selective Forwarding Unit) was unable to create new sessions, although existing connections were maintained. This caused a reduction to 20% of normal traffic during the impact window. </p></td></tr><tr><td><p><b>Workers AI</b></p><p>

</p></td><td><p>All inference requests to Workers AI failed for the duration of the incident. Workers AI depends on Workers KV for distributing configuration and routing information for AI requests globally.</p></td></tr><tr><td><p><b>Pages &amp; Workers Assets</b></p><p>

</p></td><td><p>Static assets served by Cloudflare Pages and Workers Assets (such as HTML, JavaScript, CSS, images, etc) are stored in Workers KV, cached, and retrieved at request time. Workers Assets saw an average error rate increase of around 0.06% of total requests during this time. </p><p>During the incident window, Pages error rate peaked to ~100% and all Pages builds could not complete. </p></td></tr><tr><td><p><b>AutoRAG</b></p><p>

</p></td><td><p>AutoRAG relies on Workers AI models for both document conversion and generating vector embeddings during indexing, as well as LLM models for querying and search. AutoRAG was unavailable during the incident window because of the Workers AI dependency.</p></td></tr><tr><td><p><b>Durable Objects</b></p><p>

</p></td><td><p>SQLite-backed Durable Objects share the same underlying storage infrastructure as Workers KV. The average error rate during the incident window peaked at 22%, and dropped to 2% as services started to recover.</p><p>Durable Object namespaces using the legacy key-value storage were not impacted.</p></td></tr><tr><td><p><b>D1</b></p><p>
</p></td><td><p>D1 databases share the same underlying storage infrastructure as Workers KV and Durable Objects.</p><p>Similar to Durable Objects, the average error rate during the incident window peaked at 22%, and dropped to 2% as services started to recover.</p></td></tr><tr><td><p><b>Queues &amp; Event Notifications</b></p><p>
</p></td><td><p>Queues message operations, including pushing and consuming, were unavailable during the incident window.</p><p>Queues uses KV to map each Queue to underlying Durable Objects that contain queued messages.</p><p>Event Notifications use Queues as their underlying delivery mechanism.</p></td></tr><tr><td><p><b>AI Gateway</b></p><p>

</p></td><td><p>AI Gateway is built on top of Workers and relies on Workers KV for client and internal configurations. During the incident window, AI Gateway saw error rates peak at 97% of requests until dependencies recovered.</p></td></tr><tr><td><p><b>CDN</b></p><p>

</p></td><td><p>Automated traffic management infrastructure was operational but acted with reduced efficacy during the impact period. In particular, registration requests from Zero Trust clients increased substantially as a result of the outage.</p><p>The increase in requests imposed additional load in several Cloudflare locations, triggering a response from automated traffic management. In response to these conditions, systems rerouted incoming CDN traffic to nearby locations, reducing impact to customers. A portion of traffic was not rerouted as expected; this is under investigation. CDN requests impacted by this issue would experience elevated latency, HTTP 499 errors, and/or HTTP 503 errors. Impacted Cloudflare service areas included São Paulo, Philadelphia, Atlanta, and Raleigh.</p></td></tr><tr><td><p><b>Workers / Workers for Platforms</b></p><p>

</p></td><td><p>Workers and Workers for Platforms rely on a third party service for uploads. During the incident window, Workers saw an overall error rate peak to ~2% of total requests. Workers for Platforms saw an overall error rate peak to ~10% of total requests during the same time period. </p></td></tr><tr><td><p><b>Workers Builds (CI/CD)
 </b></p><p>
</p></td><td><p>Starting at 18:03 UTC Workers builds could not receive new source code management push events due to Access being down.</p><p>100% of new Workers Builds failed during the incident window.</p></td></tr><tr><td><p><b>Browser Rendering</b></p><p>

</p></td><td><p>Browser Rendering depends on Browser Isolation for browser instance infrastructure.</p><p>Requests to both the REST API and via the Workers Browser Binding were 100% impacted during the incident window.</p></td></tr><tr><td><p><b>Zaraz</b></p><p>
</p></td><td><p>100% of requests were impacted during the incident window. Zaraz relies on Workers KV configs for websites when handling eyeball traffic. Due to the same dependency, attempts to save updates to Zaraz configs were unsuccessful during this period, but our monitoring shows that only a single user was affected.</p></td></tr></table>
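    <p>As noted in the Access and Gateway rows above, several products are designed to fail closed when they cannot retrieve identity or policy data. Below is a minimal, hedged TypeScript sketch of that behavior (the names and data shapes are illustrative assumptions, not product code): when the identity lookup fails, the rule evaluates to a deny rather than letting traffic bypass customer-configured rules.</p>
    <pre><code>// Hedged illustration of fail-closed policy evaluation; not actual product code.
type Identity = { email: string; groups: string[] };

let identityStoreAvailable = false; // the Workers KV-backed store was unavailable

async function fetchIdentity(userId: string) {
  if (!identityStoreAvailable) throw new Error("identity store unavailable");
  return { email: userId + "@example.com", groups: ["engineering"] } as Identity;
}

async function evaluateIdentityRule(userId: string, allowedGroup: string) {
  try {
    const identity = await fetchIdentity(userId);
    return identity.groups.includes(allowedGroup) ? "allow" : "deny";
  } catch {
    // Fail closed: without identity data we cannot prove the request is
    // authorized, so we deny rather than let traffic bypass configured rules.
    return "deny";
  }
}
</code></pre>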
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Workers KV is built as what we call a “coreless” service, which means there should be no single point of failure, as the service runs independently in each of our locations worldwide. However, Workers KV today relies on a central data store to provide a source of truth for data. A failure of that store caused a complete outage for cold reads and writes to the KV namespaces used by services across Cloudflare.</p><p>Workers KV is in the process of being transitioned to significantly more resilient infrastructure for its central store: regrettably, we had a gap in coverage which was exposed during this incident. Workers KV removed a storage provider as we worked to re-architect KV’s backend, including migrating it to Cloudflare R2, to prevent data consistency issues (caused by the original data syncing architecture), and to improve support for data residency requirements.</p><p>One of our principles is to build Cloudflare services on our own platform as much as possible, and Workers KV is no exception. Many of our internal and external services rely heavily on Workers KV, which under normal circumstances helps us deliver the most robust services possible, instead of service teams attempting to build their own storage services. In this case, the cascading impact of the Workers KV failure exacerbated the issue and significantly broadened the blast radius.</p>
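    <p>To make that dependency concrete, here is a hedged TypeScript sketch of the read and write paths described above (the names and cache shape are illustrative assumptions, not Workers KV's real internals): warm reads can be served from a location-local cache, but cold reads and all writes must reach the central store, which is exactly the class of requests that failed when that store became unavailable.</p>
    <pre><code>// Illustrative sketch only; not Workers KV's actual implementation.
const localCache = new Map(); // per-location cache of recently read keys

let centralStoreAvailable = true; // flipped to false during the incident

async function centralStoreGet(key: string) {
  if (!centralStoreAvailable) throw new Error("central store unavailable");
  return "value-for-" + key;
}

async function centralStorePut(key: string, value: string) {
  if (!centralStoreAvailable) throw new Error("central store unavailable");
  // persist the value in the central source of truth...
}

// Warm reads are served from the local cache; cold reads must reach the
// central store, so they fail while that store is down.
async function kvGet(key: string) {
  if (localCache.has(key)) return localCache.get(key);
  const value = await centralStoreGet(key);
  localCache.set(key, value);
  return value;
}

// Writes always go to the central store, so they fail for the whole outage.
async function kvPut(key: string, value: string) {
  await centralStorePut(key, value);
  localCache.set(key, value);
}
</code></pre>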
    <div>
      <h2>Incident timeline and impact</h2>
      <a href="#incident-timeline-and-impact">
        
      </a>
    </div>
    <p>The incident timeline, including the initial impact, investigation, root cause, and remediation, is detailed below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CBPPVgr3GroJP2EcD3yvB/6073457ce6263e7e05f6eb3d796ddd48/BLOG-2847_2.png" />
          </figure><p><i><sub>Workers KV error rates to storage infrastructure. 91% of requests to KV failed during the incident window.</sub></i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5sGFmSsNHD9yXwJ9Ea5jov/78349be4318cb738cdd72643e69f7bdb/BLOG-2847_1.png" />
          </figure><p><i><sub>Cloudflare Access percentage of successful requests. Cloudflare Access relies directly on Workers KV and serves as a good proxy to measure Workers KV availability over time.</sub></i></p><p>All timestamps referenced are in Coordinated Universal Time (UTC).</p><table><tr><th><p><b>Time</b></p></th><th><p><b>Event</b></p></th></tr><tr><td><p>2025-06-12 17:52</p></td><td><p><b>INCIDENT START
</b>The Cloudflare WARP team begins to see registrations of new devices fail, begins investigating these failures, and declares an incident.</p></td></tr><tr><td><p>2025-06-12 18:05</p></td><td><p>The Cloudflare Access team received an alert due to a rapid increase in error rates.</p><p>Service Level Objectives for multiple services drop below targets and trigger alerts across those teams.</p></td></tr><tr><td><p>2025-06-12 18:06</p></td><td><p>Multiple service-specific incidents are combined into a single incident as we identify a shared cause (Workers KV unavailability). Incident priority upgraded to P1.</p></td></tr><tr><td><p>2025-06-12 18:21</p></td><td><p>Incident priority upgraded to P0 from P1 as severity of impact becomes clear.</p></td></tr><tr><td><p>2025-06-12 18:43</p></td><td><p>Cloudflare Access begins exploring options to remove the Workers KV dependency by migrating to a different backing datastore with the Workers KV engineering team. This was a proactive measure in the event the storage infrastructure continued to be down.</p></td></tr><tr><td><p>2025-06-12 19:09</p></td><td><p>Zero Trust Gateway began working to remove dependencies on Workers KV by gracefully degrading rules that referenced Identity or Device Posture state.</p></td></tr><tr><td><p>2025-06-12 19:32</p></td><td><p>Access and Device Posture force drop identity and device posture requests to shed load on Workers KV until the third-party service comes back online.</p></td></tr><tr><td><p>2025-06-12 19:45</p></td><td><p>Cloudflare teams continue to work on a path to deploying a Workers KV release against an alternative backing datastore and having critical services write configuration data to that store.</p></td></tr><tr><td><p>2025-06-12 20:23</p></td><td><p>Services begin to recover as storage infrastructure begins to recover. We continue to see a non-negligible error rate and infrastructure rate limits due to the influx of services repopulating caches.</p></td></tr><tr><td><p>2025-06-12 20:25</p></td><td><p>Access and Device Posture restore calling Workers KV as the third-party service is restored.</p></td></tr><tr><td><p>2025-06-12 20:28</p></td><td><p><b>IMPACT END 
</b>Service Level Objectives return to pre-incident level. Cloudflare teams continue to monitor systems to ensure services do not degrade as dependent systems recover.</p></td></tr><tr><td><p>
</p></td><td><p><b>INCIDENT END
</b>Cloudflare team see all affected services return to normal function. Service level objective alerts are recovered.</p></td></tr></table>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>We’re taking immediate steps to improve the resiliency of services that depend on Workers KV and our storage infrastructure. This includes existing planned work that we are accelerating as a result of this incident.</p><p>This encompasses several workstreams, including efforts to avoid singular dependencies on storage infrastructure we do not own, and improving our ability to recover critical services (including Access, Gateway, and WARP).</p><p>Specifically:</p><ul><li><p>(Actively in-flight): Bringing forward our work to improve the redundancy within Workers KV’s storage infrastructure, removing the dependency on any single provider. During the incident window we began work to cut over and backfill critical KV namespaces to our own infrastructure, in the event the incident continued.</p></li><li><p>(Actively in-flight): Short-term blast radius remediations for individual products that were impacted by this incident, so that each product becomes resilient to any loss of service caused by any single point of failure, including third-party dependencies.</p></li><li><p>(Actively in-flight): Implementing tooling that allows us to progressively re-enable namespaces during storage infrastructure incidents. This will allow us to ensure that key dependencies, including Access and WARP, are able to come up without risking a denial-of-service against our own infrastructure as caches are repopulated.</p></li></ul><p>This list is not exhaustive: our teams continue to revisit design decisions and assess the infrastructure changes we need to make in both the near (immediate) term and the long term to mitigate incidents like this going forward.</p><p>This was a serious outage, and we understand that organizations and institutions large and small depend on us to protect and/or run their websites, applications, zero trust and network infrastructure. Again, we are deeply sorry for the impact and are working diligently to improve our service resiliency.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">2dN6tdkhtvWTTgAgbfzaSX</guid>
            <dc:creator>Jeremy Hartman</dc:creator>
            <dc:creator>CJ Desai</dc:creator>
        </item>
        <item>
            <title><![CDATA[How the April 28, 2025, power outage in Portugal and Spain impacted Internet traffic and connectivity]]></title>
            <link>https://blog.cloudflare.com/how-power-outage-in-portugal-spain-impacted-internet/</link>
            <pubDate>Mon, 28 Apr 2025 21:30:00 GMT</pubDate>
            <description><![CDATA[ A massive power outage struck significant portions of Portugal and Spain at 10:34 UTC on April 28, disrupting everyday activities and services. ]]></description>
            <content:encoded><![CDATA[ <p>A massive <a href="https://www.reuters.com/world/europe/large-parts-spain-portugal-hit-by-power-outage-2025-04-28/"><u>power outage struck significant portions of Portugal and Spain</u></a> at 10:34 UTC on April 28, grinding transportation to a halt, shutting retail businesses, and otherwise disrupting everyday activities and services. Parts of France were also reportedly impacted by the power outage. Portugal’s electrical grid operator <a href="https://www.bbc.com/news/live/c9wpq8xrvd9t?post=asset%3Aa1493644-407b-44c0-aef9-c6f64d7fad0e#post"><u>blamed</u></a> the outage on a "<i>fault in the Spanish electricity grid</i>”, and <a href="https://www.bbc.com/news/live/c9wpq8xrvd9t?post=asset%3Addda9592-0346-4fe8-a17a-2261efc1ba5b#post"><u>later stated</u></a> that "<i>due to extreme temperature variations in the interior of Spain, there were anomalous oscillations in the very high voltage lines (400 kilovolts), a phenomenon known as 'induced atmospheric vibration'</i>" and that "<i>These oscillations caused synchronisation failures between the electrical systems, leading to successive disturbances across the interconnected European network</i>." However, the operator later <a href="https://sicnoticias.pt/pais/2025-04-28-e-falso-que-fenomeno-atmosferico-raro-tenha-estado-na-origem-do-apagao-1a078544"><u>denied</u></a> these claims. </p><p>The breadth of Cloudflare’s network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the Internet impact of this power outage at both a local and national level, as well as at a network level, across traffic, network quality, and routing metrics.</p>
    <div>
      <h2>Impacts in Portugal</h2>
      <a href="#impacts-in-portugal">
        
      </a>
    </div>
    
    <div>
      <h3>Country level</h3>
      <a href="#country-level">
        
      </a>
    </div>
    <p>In Portugal, Internet traffic dropped as the power grid failed, with traffic immediately dropping by half as compared to the previous week, falling to approximately 90% below the previous week within the next five hours.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21ezFb3KrFDR36Gn66twwd/99ed60e71501eccf948e7e1d2c0eb3a3/BLOG-2817_2.png" />
          </figure><p>Request traffic from users in Portugal to Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1 DNS resolver</u></a> also fell when the power went out, initially dropping by 40% as compared to the previous week, and falling further over the next several hours. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/GgzpC2ll27P1cfv0dQlFw/69dfabcfddd2dcb4db5f00885b12c67f/BLOG-2817_3.png" />
          </figure>
    <div>
      <h3>Network level</h3>
      <a href="#network-level">
        
      </a>
    </div>
    <p>At a network level, the loss of Internet traffic from local providers including NOS, Vodafone, MEO, and NOWO was swift and significant. The Cloudflare Radar graphs below show that traffic from those networks effectively evaporated over the hours after the power outage began. The <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous systems (ASNs)</u></a> shown below for these providers may carry a mix of fixed and mobile broadband traffic. However, MEO breaks out at least some of their mobile traffic onto a separate ASN, and the graph below for MEO-MOVEL (AS42863) shows that request traffic from that network more than doubled after the power went out, as subscribers turned to their mobile devices for information about what was happening. However, despite the initial spike, this mobile traffic also fell over the next several hours, dropping to approximately half of the volume seen the prior week.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fys8MdtL3ImmHS6b25x0i/b1ccbd24818e4ec93170ae9f699c9727/BLOG-2817_4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LCdCDX9XoOHMgdVRzyjD0/1e276a9bb6eae716ee22059d88517615/BLOG-2817_5.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fxFCuhYkBHXSDDizyj6AX/87d9e5f6e9e610cff365d3f5be8c75cb/BLOG-2817_6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RfJ5HwTx3AqMKJ6c5FaaI/f595795b57dfb1b54fb6013ccca532fc/BLOG-2817_7.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kYztdmWgq6n8lZgokElhh/a34f81d3dce5c259ee43b567ebbb8e9a/BLOG-2817_8.png" />
          </figure>
    <div>
      <h3>Regional level</h3>
      <a href="#regional-level">
        
      </a>
    </div>
    <p>In addition to looking at traffic at a national and network level, we can also look at traffic at a regional level. As noted above, the power outage did not impact every region of the country. The traffic graphs below show the changes in Internet traffic from the parts of Portugal where an impact was observed.</p><p>In Lisbon and Porto, a sharp, but limited drop in traffic was observed as the power outage began, with traffic recovering slightly almost as quickly. However, traffic gradually declined in the subsequent hours, in contrast to the other regions reviewed below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78BxDMSIsfvNyoPUczJ1Cv/aa4cf6cd71684726fc891c63dcb5108f/BLOG-2817_9.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23KJh2ptN7NynAACvmFH9L/d71dc49bcd20ede70847592fb0d55b72/BLOG-2817_10.png" />
          </figure><p>The most significant immediate traffic drops were observed in Aveiro, Beja, Bragança, Castelo Branco, Évora, Faro, Guarda, Portalegre, Santarém, Viana do Castelo, Vila Real, and Viseu. In these areas, traffic fell and then quickly stabilized at very low volumes. In Braga and Setúbal, traffic declined more gradually after the initial drop.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tF73OLez4RM8I7x2I9ST0/06751ebb8bfdc22bc243939881eb2d16/BLOG-2817_11.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fYFHRgvyG8jlSZtIQ55St/1b3656d6212c1dd0a1e8ee6be5863f26/BLOG-2817_12.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JKrwsr5k82cDreQluW336/61cc1d1b278a42995e71f98a02924d8c/BLOG-2817_13.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5q736FpU8TFEFZzNMl3ahB/b4f0105be84f00a9dc28d63ffb820124/BLOG-2817_14.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/717jnbhzFFBGxC8oKQzbWc/44ca9f1d5ca08104434f651750825bc9/BLOG-2817_15.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7sghSM0J7Hx793fpzBNhBa/8e105feb10216547aa6fc4c829c5c95f/BLOG-2817_16.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TRbyRB0hrjClK3S74Z6U1/71dc72b9b2bff67fd58fa43d5566aefb/BLOG-2817_17.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IEvNIigmrvmQ0qELWpLgO/63388e385162c421bb5ddf225993d300/BLOG-2817_18.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KWHQhpaMdU8EZUAjvJyZn/8de4ec173e79971e06cdbbc931e3f149/BLOG-2817_19.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NCILxkS2oymGlRpKcoiG6/89977e5bc093ce81311585f397fff97e/BLOG-2817_20.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5s16pM0WdaX2bwDZuMuxda/af239a2a28abfeb5ce7da93c1094dd8c/BLOG-2817_21.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1jkQrdlsCsGr4qyXX8EN42/7d90bbc1afcf138094a0f87d88b342e7/BLOG-2817_22.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xnoq8NQjx0nMruchenCRW/4a0a29367a0a4903a7623a9c3db72031/BLOG-2817_23.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1sN1UZ4YYy7ZuOOy98KyIw/7c837a3034144937b8b9f3d43e0d7ffa/BLOG-2817_24.png" />
          </figure>
    <div>
      <h3>Network quality</h3>
      <a href="#network-quality">
        
      </a>
    </div>
    <p>The power outage also impacted the quality of connectivity at a national level in Portugal. Prior to the loss of power, median download speeds across the country were around 40 Mbps, but within several hours after the start of the outage, they fell as low as 15 Mbps. As expected, latency at a country level saw an opposite impact. Prior to the loss of power, median latency was around 20 ms. However, it gradually grew to as much as 50 ms. The lower download speeds and higher latency are likely due to congestion on the network links that remained available.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5IZu3b640cB11ZjnqQGcFh/886c356c4d8124be7f8b2ce01c9582cc/BLOG-2817_25.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nq7wmaWEZOgusVYw9v1it/8685d5ad7a343f22b94e09c72a34a0d5/BLOG-2817_26.png" />
          </figure>
    <div>
      <h3>Routing</h3>
      <a href="#routing">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/the-net/network-infrastructure/">Network infrastructure </a>in Portugal was also impacted by the power outage, with the impact seen as a drop in announced IP address space. (This means that portions of Portuguese providers’ networks are no longer visible to the rest of the Internet.) The number of announced IPv4 /24s (blocks of 256 IPv4 addresses) dropped by ~300 (around 1.2%), and the number of announced IPv6 /48s (blocks of over 1.2 octillion IPv6 addresses) dropped from 17,928,551 to 16,355,607 (around 9%). Address space began to drop further after 16:00 UTC, possibly as a result of backup power being exhausted and associated network infrastructure falling offline.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZaC5NQzz6dzDzo5ywePst/417192abd6fc942ff85d06e247f7de66/BLOG-2817_27.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/asO2sxhD9Adue3q0FiBJv/40b8159855bd3c2156df49360adf898b/BLOG-2817_28.png" />
          </figure>
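    <p>For reference, the block sizes mentioned above follow directly from the prefix lengths: a /24 leaves 32 - 24 = 8 host bits (256 IPv4 addresses), and a /48 leaves 128 - 48 = 80 bits (roughly 1.2 septillion IPv6 addresses). A small TypeScript sketch of that arithmetic, along with the relative drop in announced /48s, is below (the figures are taken from the paragraph above).</p>
    <pre><code>// Addresses per announced block, derived from the prefix length.
function ipv4AddressesPerPrefix(prefixLength: number): number {
  return 2 ** (32 - prefixLength);
}

function ipv6AddressesPerPrefix(prefixLength: number): bigint {
  return 2n ** BigInt(128 - prefixLength);
}

console.log(ipv4AddressesPerPrefix(24)); // 256
console.log(ipv6AddressesPerPrefix(48)); // 1208925819614629174706176 (~1.2 septillion)

// Relative drop in announced IPv6 /48s during the outage.
const announcedBefore = 17_928_551;
const announcedAfter = 16_355_607;
const dropPercent = ((announcedBefore - announcedAfter) / announcedBefore) * 100;
console.log(dropPercent.toFixed(1) + "%"); // ~8.8%, i.e. around 9%
</code></pre>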
    <div>
      <h2>Impacts in Spain</h2>
      <a href="#impacts-in-spain">
        
      </a>
    </div>
    
    <div>
      <h3>Country level</h3>
      <a href="#country-level">
        
      </a>
    </div>
    <p>In Spain, Internet traffic dropped as the power grid failed, with traffic immediately dropping by around 60% as compared to the previous week, falling to approximately 80% below the previous week within the next five hours.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NWiuqfI5Z3MI5Q8fTXJM/df6438db91848c3faff11aaae43a7328/BLOG-2817_29.png" />
          </figure><p>Request traffic from users in Spain to Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1 DNS resolver</u></a> also fell when the power went out, initially dropping by 54% as compared to the previous week, but quickly stabilizing. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1F1N4JwFD2543isQ7laC6k/cf7ad7cce146ed18fd84ad9a32872e6e/BLOG-2817_30.png" />
          </figure>
    <div>
      <h3>Network level</h3>
      <a href="#network-level">
        
      </a>
    </div>
    <p>At a network level, traffic volumes from the <a href="https://radar.cloudflare.com/traffic/es?dateRange=1d#autonomous-systems"><u>top five ASNs in Spain</u></a> fell rapidly once power was lost, with most declining gradually over the next several hours. In contrast, traffic from Digi Spain Telecom (AS57269) fell quickly, but then stabilized at the lower level. In comparison to the previous week, traffic from these providers fell between 75% and 93% in the hours after the power outage began.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4oFLKubgeo2NsffE9tGx2f/89af580571c94e14703b218feadb058b/BLOG-2817_31.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YTkS67VGnW9ACB3t58kBF/672f7e3e7d990bc7f33aa9a5155e133f/BLOG-2817_32.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56sGr0XxMAifw2ZqYnvSWU/07f168a415a2f0801bb2aebb1a7b45f9/BLOG-2817_33.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CpRU7YzB4LnTJppDnN2DZ/e479899ec5d6a0034c38361abf311d2f/BLOG-2817_34.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qYPxLDJQM2bfv0LTrY547/694f327f221bbc094580dbea731527d1/BLOG-2817_35.png" />
          </figure>
    <div>
      <h3>Regional level</h3>
      <a href="#regional-level">
        
      </a>
    </div>
    <p>In most of the impacted regions in Spain, traffic dropped off quickly and stabilized, or continued to fall further. However, some recovery in traffic is also evident, and can be seen in Navarre, La Rioja, Cantabria, and Basque Country. This traffic recovery is likely associated with an initial restoration of power in those regions, as an <a href="https://www.ree.es/es/sala-de-prensa/actualidad/nota-de-prensa/2025/04/proceso-de-recuperacion-de-la-tension-en-el-sistema-electrico-peninsular"><u>update</u></a> from <a href="https://www.redeia.com/en/about-us/our-brand"><u>Red Eléctrica</u></a> (operator of Spain’s national electricity grid) noted that “<i>Electricity is now available in parts of Catalonia, Aragon, the Basque Country, Galicia, Asturias, Navarre, Castile and León, Extremadura, Andalusia, and La Rioja.</i>”</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ziGXO8PBjWNAwhiYNa8hT/9af98f7fad1b22f482445d90314a524e/BLOG-2817_36.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17WcnWkXfG1BlxBCUwp5fW/1688269ff5dd6dbad06c09278e9ccd8d/BLOG-2817_37.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ho5Cc18AYxd8G54mtmfcH/739731e105a9236a00a6531a1a61c10f/BLOG-2817_38.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K4A82RT9L0tCXUBEVjZOw/a0798877806ac2c071cf82e71101a558/BLOG-2817_39.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zFHmbct6zR2155GbfyM0F/af7f5aad5076bafc373127f7e5da82be/BLOG-2817_40.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FBbLHYIziEcd0qJJM02s1/404bcca11c826a758d174ce4ff94a638/BLOG-2817_41.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/349JW0IsbTDKRALrlQd91P/97b397344d72d3e6322a572e9c562ee2/BLOG-2817_42.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BNKpHFhRfNOlv1smU5BXr/9cda4655d68648c216f741cb200f0c51/BLOG-2817_43.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2g9WMm6XRkUvJxpW8pE0QJ/214f04aa2f8caa307e5f422db7a85dd3/BLOG-2817_44.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/TeXN8cdr56fUFyNnbBtXo/f5ec765241d0b4af2f15eab70b65aa5c/BLOG-2817_45.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2h2R1k3eabXg7qB0XvfP0t/637dd2050cbc0e25b6ae441fe79bb14c/BLOG-2817_46.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/37gqrGzlp0dIFinCinmAYk/6ed9f0b5c5d73490bbc539e577125df2/BLOG-2817_47.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Cr2Otys8KDJA0amR3PpUj/f51a2f1ffdbb50ce4f3ca7501352fc5d/BLOG-2817_48.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K415209i5YGlJmcMuoRMr/1b5285b4872b1dc904ddb0f3a0417704/BLOG-2817_49.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/AGjopp98UyZh52X5Jf1F9/df72d3f181012e07bb566aec3ab01ca6/BLOG-2817_50.png" />
          </figure>
    <div>
      <h3>Network quality</h3>
      <a href="#network-quality">
        
      </a>
    </div>
    <p>The power outage also impacted the quality of connectivity at a national level in Spain. Prior to the loss of power, median download speeds across the country were around 35 Mbps, but within several hours after the start of the outage, they fell as low as 19 Mbps. Interestingly, the median bandwidth didn’t show the same clean, gradual decline seen in Portugal, instead falling and recovering twice before gradually declining.</p><p>As expected, latency at a country level saw a significant increase. Prior to the loss of power, median latency was around 22 ms, but grew to as much as 40 ms. As in Portugal, the lower download speeds and higher latency are likely due to congestion on the network links that remained available.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SvaDFDPdleRDctpTezUIf/65834285ceb6d7dedc76c97095871859/BLOG-2817_51.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6BxQUuqVQdIHxro6XDAzUC/83e6001ffaa1492079cf54f25c36785c/BLOG-2817_52.png" />
          </figure>
    <div>
      <h3>Routing</h3>
      <a href="#routing">
        
      </a>
    </div>
    <p>Similar to Portugal, network infrastructure in Spain was also impacted by the power outage, with the impact seen as a drop in announced IP address space. By 14:30 UTC, the number of announced IPv4 /24 address blocks had fallen by around 2.4%, and continued to drop further over the following hours. The number of announced IPv6 /48 address blocks fell by over 8% during that same time span, and also continued to drop in the following hours.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6o4hGGWXhf8aoqkqUUCNnO/5ad97dafb731679c368f837f69fc7044/BLOG-2817_53.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZfoHrCrwHAMACg0oCTceD/8f816416c4cde1c562aa519099eed058/BLOG-2817_54.png" />
          </figure>
    <div>
      <h2>Impacts in other European countries</h2>
      <a href="#impacts-in-other-european-countries">
        
      </a>
    </div>
    <p>Parts of Andorra and France were also <a href="https://www.euronews.com/my-europe/2025/04/28/spain-portugal-and-parts-of-france-hit-by-massive-power-outage"><u>reportedly impacted</u></a> by the power outage, with additional outages reported as far away as Belgium. At a national level, no traffic disruptions were evident in any of the countries.</p><p>Analysis of traffic at a regional level in France shows a slight decline concurrent with the power outage in several regions, but the drops were nominal in comparison to Spain and Portugal, and traffic volumes recovered to expected levels within 90 minutes. No impact was evident at a regional level in Andorra.</p><p>It appears that Morocco may have been impacted in some fashion by the power outage, or at least Orange Maroc was. In a <a href="https://x.com/OrangeMaroc/status/1916866583047147690"><u>post on X</u></a>, the provider stated (translated) “<i>Internet traffic has been disrupted following a massive power outage in Spain and Portugal, which is affecting international connections.</i>” Cloudflare Radar shows that traffic from the network fell sharply around 12:00 UTC, 90 minutes after the power outage began, with a full outage beginning around 15:00 UTC.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62cA1IF7zleHp8RpCZu2Dq/2733086494e7d05b3feac5fd700e1c0f/BLOG-2817_55.png" />
          </figure>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Power restoration in Spain had already started as this post was being written, and full recovery will likely take hours to days. As power is restored, Internet traffic and other metrics will recover as well. The current state of Internet connectivity in <a href="https://radar.cloudflare.com/es"><u>Spain</u></a> and <a href="https://radar.cloudflare.com/pt"><u>Portugal</u></a> can be tracked on Cloudflare Radar.</p><p>The Cloudflare Radar team is constantly monitoring for Internet disruptions, sharing our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a>email</a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Traffic]]></category>
            <guid isPermaLink="false">3Zqv5LUUJauYsk7Dn0BIye</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare incident on March 21, 2025]]></title>
            <link>https://blog.cloudflare.com/cloudflare-incident-march-21-2025/</link>
            <pubDate>Tue, 25 Mar 2025 01:40:38 GMT</pubDate>
            <description><![CDATA[ On March 21, 2025, multiple Cloudflare services, including R2 object storage experienced an elevated rate of error responses. Here’s what caused the incident, the impact, and how we are making sure it ]]></description>
            <content:encoded><![CDATA[ <p>Multiple Cloudflare services, including <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2 object storage</u></a>, experienced an elevated rate of errors for 1 hour and 7 minutes on March 21, 2025 (starting at 21:38 UTC and ending 22:45 UTC). During the incident window, 100% of write operations failed and approximately 35% of read operations to R2 failed globally. Although this incident started with R2, it impacted other Cloudflare services including <a href="https://www.cloudflare.com/developer-platform/products/cache-reserve/"><u>Cache Reserve</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-images/"><u>Images</u></a>, <a href="https://developers.cloudflare.com/logs/edge-log-delivery/"><u>Log Delivery</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-stream/"><u>Stream</u></a>, and <a href="https://developers.cloudflare.com/vectorize/"><u>Vectorize</u></a>.</p><p>While rotating credentials used by the R2 Gateway service (R2's API frontend) to authenticate with our storage infrastructure, the R2 engineering team inadvertently deployed the new credentials (ID and key pair) to a development instance of the service instead of production. When the old credentials were deleted from our storage infrastructure (as part of the key rotation process), the production R2 Gateway service did not have access to the new credentials. This ultimately resulted in R2’s Gateway service not being able to authenticate with our storage backend. There was no data loss or corruption that occurred as part of this incident: any in-flight uploads or mutations that returned successful HTTP status codes were persisted.</p><p>Once the root cause was identified and we realized we hadn’t deployed the new credentials to the production R2 Gateway service, we deployed the updated credentials and service availability was restored. </p><p>This incident happened because of human error and lasted longer than it should have because we didn’t have proper visibility into which credentials were being used by the Gateway Worker to authenticate with our storage infrastructure. </p><p>We’re deeply sorry for this incident and the disruption it may have caused to you or your users. We hold ourselves to a high standard and this is not acceptable. This blog post exactly explains the impact, what happened and when, and the steps we are taking to make sure this failure (and others like it) doesn’t happen again.</p>
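    <p>To illustrate the ordering problem at the heart of this incident, here is a hedged TypeScript sketch of a credential rotation sequence (the function names and the verification step are illustrative assumptions, not our actual tooling): the old credential should only be revoked after confirming that the production deployment is actually using the new one.</p>
    <pre><code>// Illustrative sketch of a safer rotation sequence; not Cloudflare's tooling.
// The failure mode on March 21 was deploying the new credential to the wrong
// (non-production) environment and then deleting the old credential anyway.
type Credential = { id: string; key: string };

async function createCredential() {
  return { id: "new-id", key: "new-key" } as Credential;
}

async function deployToProduction(cred: Credential) {
  // Push the secret to the production deployment target (the step that was
  // accidentally pointed at a default, non-production environment).
}

async function productionIsUsing(credentialId: string) {
  // Hypothetical verification step: check which credential ID the production
  // service is actually authenticating with before revoking anything.
  return true;
}

async function deleteCredential(credentialId: string) {
  // Revoke the credential in the storage infrastructure.
}

async function rotate(oldCredentialId: string) {
  const fresh = await createCredential();
  await deployToProduction(fresh);
  // Guard: only revoke the old credential once production is confirmed to be
  // using the new one; otherwise production loses access to storage.
  if (!(await productionIsUsing(fresh.id))) {
    throw new Error("production is not using the new credential; aborting");
  }
  await deleteCredential(oldCredentialId);
}
</code></pre>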
    <div>
      <h2>What was impacted?</h2>
      <a href="#what-was-impacted">
        
      </a>
    </div>
    <p><b>The primary incident window occurred between 21:38 UTC and 22:45 UTC.</b></p><p>The following table details the specific impact to R2 and Cloudflare services that depend on, or interact with, R2:</p>
<div><table><thead>
  <tr>
    <th><span>Product/Service</span></th>
    <th><span>Impact</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>R2</span></td>
    <td><span>All customers using Cloudflare R2 would have experienced an elevated error rate during the primary incident window. Specifically:</span><br /><br /><span>* Object write operations had a 100% error rate.</span><br /><br /><span>* Object reads had an approximate error rate of 35% globally. Individual customer error rate varied during this window depending on access patterns. Customers accessing public assets through custom domains would have seen a reduced error rate as cached object reads were not impacted.</span><br /><br /><span>* Operations involving metadata only (e.g., head and list operations) were not impacted.</span><br /><br /><span>There was no data loss or risk to data integrity within R2's storage subsystem. This incident was limited to a temporary authentication issue between R2's API frontend and our storage infrastructure.</span></td>
  </tr>
  <tr>
    <td><span>Billing</span></td>
    <td><span>Billing uses R2 to store customer invoices. During the primary incident window, customers may have experienced errors when attempting to download/access past Cloudflare invoices.</span></td>
  </tr>
  <tr>
    <td><span>Cache Reserve</span></td>
    <td><span>Cache Reserve customers observed an increase in requests to their origin during the incident window as an increased percentage of reads to R2 failed. This resulted in an increase in requests to origins to fetch assets unavailable in Cache Reserve during this period.</span><br /><br /><span>User-facing requests for assets to sites with Cache Reserve did not observe failures as cache misses failed over to the origin.</span></td>
  </tr>
  <tr>
    <td><span>Email Security</span></td>
    <td><span>Email Security depends on R2 for customer-facing metrics. During the primary incident window, customer-facing metrics would not have updated.</span></td>
  </tr>
  <tr>
    <td><span>Images</span></td>
    <td><span>All (100% of) uploads failed during the primary incident window. Successful delivery of stored images dropped to approximately 25%.</span></td>
  </tr>
  <tr>
    <td><span>Key Transparency Auditor</span></td>
    <td><span>All (100% of) operations failed during the primary incident window due to dependence on R2 writes and/or reads. Once the incident was resolved, service returned to normal operation immediately.</span></td>
  </tr>
  <tr>
    <td><span>Log Delivery</span></td>
    <td><span>Log delivery (for Logpush and Logpull) was delayed during the primary incident window, resulting in significant delays (up to 70 minutes) in log processing. All logs were delivered after incident resolution.</span></td>
  </tr>
  <tr>
    <td><span>Stream</span></td>
    <td><span>All (100% of) uploads failed during the primary incident window. Successful Stream video segment delivery dropped to 94%. Viewers may have seen video stalls every minute or so, although actual impact would have varied.</span><br /><br /><span>Stream Live was down during the primary incident window as it depends on object writes.</span></td>
  </tr>
  <tr>
    <td><span>Vectorize</span></td>
    <td><span>Queries and operations against Vectorize indexes were impacted during the incident window. During this window, Vectorize customers would have seen an increased error rate for read queries to indexes, and all (100% of) insert and upsert operations failed, as Vectorize depends on R2 for persistent storage.</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h2>Incident timeline</h2>
      <a href="#incident-timeline">
        
      </a>
    </div>
    <p><b>All timestamps referenced are in Coordinated Universal Time (UTC).</b></p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Time</span></th>
    <th><span>Event</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Mar 21, 2025 - 19:49 UTC</span></td>
    <td><span>The R2 engineering team started the credential rotation process. A new set of credentials (ID and key pair) for storage infrastructure was created. Old credentials were maintained to avoid downtime during credential change over.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 20:19 UTC</span></td>
    <td><span>Set updated production secret (</span><span>wrangler secret put</span><span>) and executed </span><span>wrangler deploy</span><span> command to deploy R2 Gateway service with updated credentials. </span><br /><br /><span>Note: We later discovered the </span><span>--env</span><span> parameter was inadvertently omitted for both Wrangler commands. This resulted in credentials being deployed to the Worker assigned to the </span><span>default</span><span> environment instead of the Worker assigned to the </span><span>production</span><span> environment.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 20:20 UTC</span></td>
    <td><span>The R2 Gateway service Worker assigned to the </span><span>default</span><span> environment is now using the updated storage infrastructure credentials.</span><br /><br /><span>Note: This was the wrong Worker; the </span><span>production</span><span> environment should have been explicitly set. At this point, however, we incorrectly believed the credentials had been updated on the correct production Worker.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 20:37 UTC</span></td>
    <td><span>Old credentials were removed from our storage infrastructure to complete the credential rotation process.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 21:38 UTC</span></td>
    <td><span>– IMPACT BEGINS –</span><br /><br /><span>R2 availability metrics begin to show signs of service degradation. The impact to R2 availability metrics was gradual and not immediately obvious because there was a delay in the propagation of the previous credential deletion to storage infrastructure.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 21:45 UTC</span></td>
    <td><span>R2 global availability alerts are triggered (indicating a 2% error budget burn rate).</span><br /><br /><span>The R2 engineering team began looking at operational dashboards and logs to understand the impact.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 21:50 UTC</span></td>
    <td><span>Internal incident declared.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 21:51 UTC</span></td>
    <td><span>The R2 engineering team observed a gradual but consistent decline in R2 availability metrics for both read and write operations. Operations involving metadata only (e.g., head and list operations) were not impacted.</span><br /><br /><span>Given the gradual decline in availability metrics, the R2 engineering team suspected a potential regression in the propagation of the new credentials within storage infrastructure.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:05 UTC</span></td>
    <td><span>Public incident status page published.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:15 UTC</span></td>
    <td><span>R2 engineering team created a new set of credentials (ID and key pair) for storage infrastructure in an attempt to force re-propagation.</span><br /><br /><span>Continued monitoring operational dashboards and logs.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:20 UTC</span></td>
    <td><span>R2 engineering team saw no improvement in availability metrics. Continued investigating other potential root causes.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:30 UTC</span></td>
    <td><span>The R2 engineering team deployed a new set of credentials (ID and key pair) to the R2 Gateway service Worker to validate whether there was an issue with the credentials we had pushed to the gateway service.</span><br /><br /><span>The environment parameter was still omitted from the </span><span>deploy</span><span> and </span><span>secret put</span><span> commands, so this deployment still went to the wrong, non-production Worker.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:36 UTC</span></td>
    <td><span>– ROOT CAUSE IDENTIFIED –</span><br /><br /><span>The R2 engineering team discovered that credentials had been deployed to a non-production Worker by reviewing production Worker release history.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:45 UTC</span></td>
    <td><span>– IMPACT ENDS –</span><br /><br /><span>Deployed credentials to correct production Worker. R2 availability recovered.</span></td>
  </tr>
  <tr>
    <td><span>Mar 21, 2025 - 22:54 UTC</span></td>
    <td><span>The incident is considered resolved.</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h2>Analysis</h2>
      <a href="#analysis">
        
      </a>
    </div>
    <p>R2’s architecture is primarily composed of three parts: the R2 production Gateway Worker (which serves requests from the S3 API, REST API, and Workers API), the metadata service, and the storage infrastructure (which stores encrypted object data).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1G9gfdEE4RIuMIUeNz42RN/25d1b4cca187a4a24c600a43ba51fb71/BLOG-2793_2.png" />
          </figure><p>The R2 Gateway Worker uses credentials (ID and key pair) to securely authenticate with our distributed storage infrastructure. We rotate these credentials regularly as a best-practice security precaution.</p><p>Our key rotation process involves the following high-level steps:</p><ol><li><p>Create a new set of credentials (ID and key pair) for our storage infrastructure. At this point, the old credentials are maintained to avoid downtime during the credential changeover.</p></li><li><p>Set the new credential secret for the R2 production Gateway Worker using the <code>wrangler secret put</code> command.</p></li><li><p>Set the new credential ID as an environment variable in the R2 production Gateway Worker using the <code>wrangler deploy</code> command. At this point, the new storage credentials start being used by the Gateway Worker.</p></li><li><p>Remove the previous credentials from our storage infrastructure to complete the credential rotation process.</p></li><li><p>Monitor operational dashboards and logs to validate the changeover.</p></li></ol><p>The R2 engineering team uses <a href="https://developers.cloudflare.com/workers/wrangler/environments/"><u>Workers environments</u></a> to separate production and development environments for the R2 Gateway Worker. Each environment defines a separate, isolated Cloudflare Worker with its own environment variables and secrets.</p><p>Critically, both the <code>wrangler secret put</code> and <code>wrangler deploy</code> commands default to the <code>default</code> environment if the <code>--env</code> command-line parameter is not included. In this case, due to human error, we inadvertently omitted the <code>--env</code> parameter and deployed the new storage credentials to the wrong Worker (the <code>default</code> environment instead of <code>production</code>). To correctly deploy storage credentials to the production R2 Gateway Worker, we need to specify <code>--env production</code>.</p><p>The action we took in step 4 above to remove the old credentials from our storage infrastructure caused authentication errors, as the R2 Gateway production Worker still had the old credentials. This is ultimately what resulted in degraded availability.</p><p>The decline in R2 availability metrics was gradual and not immediately obvious because there was a delay in the propagation of the previous credential deletion to storage infrastructure. This accounted for a delay in our initial discovery of the problem. Instead of relying on availability metrics after rotating credentials, we should have explicitly validated which token was being used by the R2 Gateway service to authenticate with R2’s storage infrastructure.</p><p>Overall, the impact on read availability was significantly mitigated by our intermediate cache, which sits in front of storage and continued to serve requests.</p>
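    <p>As an illustration (with a placeholder secret name, since the real binding name is internal), the difference between the two invocations comes down to the presence of the <code>--env</code> flag. A minimal sketch of the incorrect and correct command shapes:</p>
<pre><code># What was effectively run: without --env, Wrangler targets the Worker in the default environment
wrangler secret put STORAGE_CREDENTIAL_KEY
wrangler deploy

# What the rotation required: explicitly target the Worker in the production environment
wrangler secret put STORAGE_CREDENTIAL_KEY --env production
wrangler deploy --env production</code></pre>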
    <div>
      <h2>Resolution</h2>
      <a href="#resolution">
        
      </a>
    </div>
    <p>Once we identified the root cause, we were able to resolve the incident quickly by deploying the new credentials to the production R2 Gateway Worker. This resulted in an immediate recovery of R2 availability.</p>
    <div>
      <h2>Next steps</h2>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>This incident happened because of human error and lasted longer than it should have because we didn’t have proper visibility into which credentials were being used by the R2 Gateway Worker to authenticate with our storage infrastructure.</p><p>We have taken immediate steps to prevent this failure (and others like it) from happening again:</p><ul><li><p>Added logging tags that include the suffix of the credential ID the R2 Gateway Worker uses to authenticate with our storage infrastructure. With this change, we can explicitly confirm which credential is being used.</p></li><li><p>Related to the above step, our internal processes now require explicit confirmation that the suffix of the new token ID matches logs from our storage infrastructure before deleting the previous token.</p></li><li><p>Require that key rotation takes place through our hotfix release tooling instead of relying on manual <code>wrangler</code> command entry, which introduces the risk of human error. Our hotfix release deploy tooling explicitly enforces the environment configuration and contains other safety checks.</p></li><li><p>While it’s been an implicit standard that this process involves at least two humans to validate each change before we proceed, we’ve updated our relevant SOPs (standard operating procedures) to state this explicitly.</p></li><li><p><b>In Progress</b>: Extend our existing closed-loop health check system that monitors our endpoints to test new keys, automate reporting of their status through our alerting platform, and ensure global propagation prior to releasing the gateway Worker.</p></li><li><p><b>In Progress</b>: To expedite triage of any future issues with our distributed storage endpoints, we are updating our <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> platform to include views of upstream success rates that bypass caching, giving a clearer indication of issues serving requests for any reason.</p></li></ul><p>The list above is not exhaustive: as we work through the above items, we will likely uncover other improvements to our systems, controls, and processes that we’ll be applying to improve R2’s resiliency, on top of our business-as-usual efforts. We are confident that this set of changes will prevent this failure, and related credential rotation failure modes, from occurring again. Again, we sincerely apologize for this incident and deeply regret any disruption it has caused you or your users.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">4I4XNCQlRirlf9SaA9ySTS</guid>
            <dc:creator>Phillip Jones</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare incident on February 6, 2025]]></title>
            <link>https://blog.cloudflare.com/cloudflare-incident-on-february-6-2025/</link>
            <pubDate>Fri, 07 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[ On Thursday, February 6, 2025, we experienced an outage with our object storage service (R2) and products that rely on it. Here's what happened and what we're doing to fix this going forward. ]]></description>
            <content:encoded><![CDATA[ <p>Multiple Cloudflare services, including our <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2 object storage</u></a>, were unavailable for 59 minutes on Thursday, February 6, 2025. This caused all operations against R2 to fail for the duration of the incident, and caused a number of other Cloudflare services that depend on R2 — including <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-stream/"><u>Stream</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-images/"><u>Images</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/cache-reserve/"><u>Cache Reserve</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/vectorize/"><u>Vectorize</u></a> and <a href="https://developers.cloudflare.com/logs/edge-log-delivery/"><u>Log Delivery</u></a> — to suffer significant failures.</p><p>The incident occurred due to human error and insufficient validation safeguards during a routine abuse remediation for a report about a phishing site hosted on R2. The action taken on the complaint resulted in an advanced product disablement action on the site that led to disabling the production R2 Gateway service responsible for the R2 API.  </p><p>Critically, this incident did <b>not</b> result in the loss or corruption of any data stored on R2. </p><p>We’re deeply sorry for this incident: this was a failure of a number of controls, and we are prioritizing work to implement additional system-level controls related not only to our abuse processing systems, but so that we continue to reduce the blast radius of <i>any</i> system- or human- action that could result in disabling any production service at Cloudflare.</p>
    <div>
      <h2>What was impacted?</h2>
      <a href="#what-was-impacted">
        
      </a>
    </div>
    <p>All customers using Cloudflare R2 would have observed a 100% failure rate against their R2 buckets and objects during the primary incident window. Services that depend on R2 (detailed in the table below) observed heightened error rates and failure modes depending on their usage of R2.</p><p>The primary incident window occurred between 08:14 UTC to 09:13 UTC, when operations against R2 had a 100% error rate. Dependent services (detailed below) observed increased failure rates for operations that relied on R2.</p><p>From 09:13 UTC to 09:36 UTC, as R2 recovered and clients reconnected, the backlog and resulting spike in client operations caused load issues with R2's metadata layer (built on Durable Objects). This impact was significantly more isolated: we observed a 0.09% increase in error rates in calls to Durable Objects running in North America during this window. </p><p>The following table details the impacted services, including the user-facing impact, operation failures, and increases in error rates observed:</p><table><tr><td><p><b>Product/Service</b></p></td><td><p><b>Impact</b></p></td></tr><tr><td><p><b>R2</b></p></td><td><p>100% of operations against R2 buckets and objects, including uploads, downloads, and associated metadata operations were impacted during the primary incident window. During the secondary incident window, we observed a &lt;1% increase in errors as clients reconnected and increased pressure on R2's metadata layer.</p><p>There was no data loss within the R2 storage subsystem: this incident impacted the HTTP frontend of R2. Separation of concerns and blast radius management meant that the underlying R2 infrastructure was unaffected by this.</p></td></tr><tr><td><p><b>Stream</b></p></td><td><p>100% of operations (upload &amp; streaming delivery) against assets managed by Stream were impacted during the primary incident window.</p></td></tr><tr><td><p><b>Images</b></p></td><td><p>100% of operations (uploads &amp; downloads) against assets managed by Images were impacted during the primary incident window.</p><p>Impact to Image Delivery was minor: success rate dropped to 97% as these assets are fetched from existing customer backends and do not rely on intermediate storage.</p></td></tr><tr><td><p><b>Cache Reserve</b></p></td><td><p>Cache Reserve customers observed an increase in requests to their origin during the incident window as 100% of operations failed. This resulted in an increase in requests to origins to fetch assets unavailable in Cache Reserve during this period. This impacted less than 0.049% of all cacheable requests served during the incident window.</p><p>User-facing requests for assets to sites with Cache Reserve did not observe failures as cache misses failed over to the origin.</p></td></tr><tr><td><p><b>Log Delivery</b></p></td><td><p>Log delivery was delayed during the primary incident window, resulting in significant delays (up to an hour) in log processing, as well as some dropped logs. </p><p>Specifically:</p><p>Non-R2 delivery jobs would have experienced up to 4.5% data loss during the incident. This level of data loss could have been different between jobs depending on log volume and buffer capacity in a given location.</p><p>R2 delivery jobs would have experienced up to 13.6% data loss during the incident. </p><p>R2 is a major destination for Cloudflare Logs. During the primary incident window, all available resources became saturated attempting to buffer and deliver data to R2. 
This prevented other jobs from acquiring resources to process their queues. Data loss (dropped logs) occurred when the job queues expired their data (to allow for new, incoming data). The system recovered when we enabled a kill switch to stop processing jobs sending data to R2.</p></td></tr><tr><td><p><b>Durable Objects</b></p></td><td><p>Durable Objects, and services that rely on it for coordination &amp; storage, were impacted as the stampeding horde of clients re-connecting to R2 drove an increase in load.</p><p>We observed a 0.09% actual increase in error rates in calls to Durable Objects running in North America, starting at 09:13 UTC and recovering by 09:36 UTC.</p></td></tr><tr><td><p><b>Cache Purge</b></p></td><td><p>Requests to the Cache Purge API saw a 1.8% error rate (HTTP 5xx) increase and a 10x increase in p90 latency for purge operations during the primary incident window. Error rates returned to normal immediately after this.</p></td></tr><tr><td><p><b>Vectorize</b></p></td><td><p>Queries and operations against Vectorize indexes were impacted during the primary incident window. 75% of queries to indexes failed (the remainder were served out of cache) and 100% of insert, upsert, and delete operations failed during the incident window as Vectorize depends on R2 for persistent storage. Once R2 recovered, Vectorize systems recovered in full.</p><p>We observed no continued impact during the secondary incident window, and we have not observed any index corruption as the Vectorize system has protections in place for this.</p></td></tr><tr><td><p><b>Key Transparency Auditor</b></p></td><td><p>100% of signature publish &amp; read operations to the KT auditor service failed during the primary incident window. No third party reads occurred during this window and thus were not impacted by the incident.</p></td></tr><tr><td><p><b>Workers &amp; Pages</b></p></td><td><p>A small volume (0.002%) of deployments to Workers and Pages projects failed during the primary incident window. These failures were limited to services with bindings to R2, as our control plane was unable to communicate with the R2 service during this period.</p></td></tr></table>
    <div>
      <h2>Incident timeline and impact</h2>
      <a href="#incident-timeline-and-impact">
        
      </a>
    </div>
    <p>The incident timeline, including the initial impact, investigation, root cause, and remediation, are detailed below.</p><p><b>All timestamps referenced are in Coordinated Universal Time (UTC).</b></p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Time</span></th>
    <th><span>Event</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>2025-02-06 08:12</span></td>
    <td><span>The R2 Gateway service is inadvertently disabled while responding to an abuse report.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:14</span></td>
    <td><span>-- IMPACT BEGINS --</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:15</span></td>
    <td><span>R2 service metrics begin to show signs of service degradation.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:17</span></td>
    <td><span>Critical R2 alerts begin to fire due to our service no longer responding to our health checks.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:18</span></td>
    <td><span>R2 on-call engaged and began looking at our operational dashboards and service logs to understand impact to availability.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:23</span></td>
    <td><span>Sales engineering escalated to the R2 engineering team that customers are experiencing a rapid increase in HTTP 500’s from all R2 APIs.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:25 </span></td>
    <td><span>Internal incident declared.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:33</span></td>
    <td><span>R2 on-call was unable to identify the root cause and escalated to the lead on-call for assistance.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:42</span></td>
    <td><span>Root cause identified: the R2 team reviewed the service deployment history and configuration, which surfaced the disablement action and the validation gap that allowed it to impact a production service.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:46</span></td>
    <td><span>On-call attempts to re-enable the R2 Gateway service using our internal admin tooling; however, this tooling was unavailable because it relies on R2.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:49</span></td>
    <td><span>On-call escalates to an operations team that has lower-level system access and can re-enable the R2 Gateway service.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 08:57</span></td>
    <td><span>The operations team engaged and began to re-enable the R2 Gateway service.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 09:09</span></td>
    <td><span>R2 team triggers a redeployment of the R2 Gateway service.</span></td>
  </tr>
  <tr>
    <td><span> 2025-02-06 09:10</span></td>
    <td><span>R2 began to recover as the forced re-deployment rolled out and clients were able to reconnect.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 09:13</span></td>
    <td><span>-- IMPACT ENDS --</span><br /><span>R2 availability recovers to within its service-level objective (SLO). Durable Objects begins to observe a slight increase in error rate (0.09%) for Durable Objects running in North America due to the spike in R2 clients reconnecting.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 09:36</span></td>
    <td><span>The Durable Objects error rate recovers.</span></td>
  </tr>
  <tr>
    <td><span>2025-02-06 10:29</span></td>
    <td><span>The incident is closed after monitoring error rates.</span></td>
  </tr>
</tbody></table></div><p>At the R2 service level, our internal Prometheus metrics showed R2’s SLO drop to 0% almost immediately as R2’s Gateway service stopped serving all requests and terminated in-flight requests.</p><p>The slight delay in failure was due to the product disablement action taking 1–2 minutes to take effect, as well as our configured metrics aggregation intervals:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pbONRcG99RWttIUyGqnI6/bad397f73762a706285ea143ed2418b3/BLOG-2685_2.png" />
          </figure><p>For context, R2’s architecture separates the Gateway service, which is responsible for authenticating and serving requests to R2’s S3 &amp; REST APIs and acts as the “front door” for R2, from its metadata store (built on Durable Objects), our intermediate caches, and the underlying distributed storage subsystem responsible for durably storing objects.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/E2cgDKA2zGwaQDBs31tPk/4272c94625fd788148d16a90cc7cceaa/Image_20250206_172217_707.png" />
          </figure><p>During the incident, all other components of R2 remained up: this is what allowed the service to recover so quickly once the R2 Gateway service was restored and re-deployed. The R2 Gateway acts as the coordinator for all work when operations are made against R2. During the request lifecycle, we validate authentication and authorization, write any new data to a new immutable key in our object store, then update our metadata layer to point to the new object. When the service was disabled, all running processes stopped.</p><p>While this means that all in-flight and subsequent requests failed, anything that had received an HTTP 200 response had already succeeded, with no risk of reverting to a prior version when the service recovered. This is critical to R2’s consistency guarantees and mitigates the chance of a client receiving a successful API response without the underlying metadata <i>and </i>storage infrastructure having persisted the change.</p>
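    <p>As a minimal sketch of the ordering described above (invented names, not R2’s actual implementation), the write path only acknowledges a request after both the immutable object write and the metadata pointer update have completed:</p>
<pre><code>// Illustrative sketch only: shows the write ordering described in this post, not R2's code.
interface ObjectStore { write(key: string, data: ArrayBuffer): Promise&lt;void&gt;; }
interface MetadataStore { setPointer(name: string, key: string): Promise&lt;void&gt;; }

async function putObject(
  name: string,
  data: ArrayBuffer,
  authorized: boolean,
  store: ObjectStore,
  metadata: MetadataStore,
): Promise&lt;Response&gt; {
  // 1. Validate authentication and authorization before touching storage.
  if (!authorized) {
    return new Response("Forbidden", { status: 403 });
  }

  // 2. Write the payload under a brand-new immutable key; existing data is never overwritten in place.
  const newKey = crypto.randomUUID();
  await store.write(newKey, data);

  // 3. Only after the object is durably stored, point the metadata layer at the new key.
  await metadata.setPointer(name, newKey);

  // A 200 is returned only once both steps succeed, so a successful response cannot
  // revert to a prior version if the service is disabled afterwards.
  return new Response(null, { status: 200 });
}</code></pre>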
    <div>
      <h2>Deep dive </h2>
      <a href="#deep-dive">
        
      </a>
    </div>
    <p><b>Due to human error and insufficient validation safeguards in our admin tooling, the R2 Gateway service was taken down as part of a routine remediation for a phishing URL.</b></p><p>During a routine abuse remediation, action was taken on a complaint that inadvertently disabled the R2 Gateway service instead of the specific endpoint/bucket associated with the report. This was a failure of multiple system-level controls (first and foremost) and operator training.</p><p>A key system-level control gap that led to this incident was in how we identify (or "tag") internal accounts used by our teams. Teams typically have multiple accounts (dev, staging, prod) to reduce the blast radius of any configuration changes or deployments, but our abuse processing systems were not explicitly configured to identify these accounts and block disablement actions against them. Instead of disabling the specific endpoint associated with the abuse report, the system allowed the operator to (incorrectly) disable the R2 Gateway service.</p><p>Once we identified this as the cause of the outage, remediation and recovery were inhibited by the lack of direct controls to revert the product disablement action and the need to engage an operations team with lower-level access than is routine. The R2 Gateway service then required a re-deployment in order to rebuild its routing pipeline across our edge network.</p><p>Once re-deployed, clients were able to reconnect to R2, and error rates for dependent services (including Stream, Images, Cache Reserve, and Vectorize) returned to normal levels.</p>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>We have taken immediate steps to resolve the validation gaps in our tooling to prevent this specific failure from occurring in the future.</p><p>We are prioritizing several work-streams to implement stronger, system-wide controls (defense-in-depth) to prevent this, including how we provision internal accounts so that we are not relying on our teams to correctly and reliably tag accounts. A key theme to our remediation efforts here is around removing the need to rely on training or process, and instead ensuring that our systems have the right guardrails and controls built-in to prevent operator errors.</p><p>These work-streams include (but are not limited to) the following:</p><ul><li><p><b>Actioned: </b>deployed additional guardrails implemented in the Admin API to prevent product disablement of services running in internal accounts.</p></li><li><p><b>Actioned</b>: Product disablement actions in the abuse review UI have been disabled while we add more robust safeguards. This will prevent us from inadvertently repeating similar high-risk manual actions.</p></li><li><p><b>In-flight</b>: Changing how we create all internal accounts (staging, dev, production) to ensure that all accounts are correctly provisioned into the correct organization. This must include protections against creating standalone accounts to avoid re-occurrence of this incident (or similar) in the future.</p></li><li><p><b>In-flight: </b>Further restricting access to product disablement actions beyond the remediations recommended by the system to a smaller group of senior operators.</p></li><li><p><b>In-flight</b>: Two-party approval required for ad-hoc product disablement actions. Going forward, if an investigator requires additional remediations, they must be submitted to a manager or a person on our approved remediation acceptance list to approve their additional actions on an abuse report. </p></li><li><p><b>In-flight</b>: Expand existing abuse checks that prevent accidental blocking of internal hostnames to also prevent any product disablement action of products associated with an internal Cloudflare account.  </p></li><li><p><b>In-flight</b>: Internal accounts are being moved to our new Organizations model ahead of public release of this feature. The R2 production account was a member of this organization, but our abuse remediation engine did not have the necessary protections to prevent acting against accounts within this organization.</p></li></ul><p>We’re continuing to discuss &amp; review additional steps and effort that can continue to reduce the blast radius of any system- or human- action that could result in disabling any production service at Cloudflare.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We understand this was a serious incident, and we are painfully aware of — and extremely sorry for — the impact it caused to customers and teams building and running their businesses on Cloudflare.</p><p>This is the first (and ideally, the last) incident of this kind and duration for R2, and we’re committed to improving controls across our systems and workflows to prevent this in the future.</p> ]]></content:encoded>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[undefined]]></category>
            <guid isPermaLink="false">mDiwAePfMfpVHMlYrfrFu</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Javier Castro</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare 2024 Year in Review]]></title>
            <link>https://blog.cloudflare.com/radar-2024-year-in-review/</link>
            <pubDate>Mon, 09 Dec 2024 14:05:00 GMT</pubDate>
            <description><![CDATA[ The 2024 Cloudflare Radar Year in Review is our fifth annual review of Internet trends and patterns at both a global and country/region level. ]]></description>
            <content:encoded><![CDATA[ <p>The <a href="https://radar.cloudflare.com/year-in-review/2024">2024 Cloudflare Radar Year in Review</a> is our fifth annual review of Internet trends and patterns observed throughout the year at both a global and country/region level across a variety of metrics. In this year’s review, we have added several new traffic, adoption, connectivity, and email security metrics, as well as the ability to do year-over-year and geographic comparisons for selected metrics. </p><p>Below, we present a summary of key findings, and then explore them in more detail in subsequent sections.</p>
    <div>
      <h2>Key Findings</h2>
      <a href="#key-findings">
        
      </a>
    </div>
    
    <div>
      <h3>Traffic</h3>
      <a href="#traffic">
        
      </a>
    </div>
    <ul><li><p>Global Internet traffic grew 17.2% in 2024. <a href="#global-internet-traffic-grew-17-2-in-2024"><u>🔗</u></a></p></li><li><p>Google maintained its position as the most popular Internet service overall. OpenAI remained at the top of the Generative AI category. Binance remained at the top of the Cryptocurrency category. WhatsApp remained the top Messaging platform, and Facebook remained the top Social Media site. <a href="#google-maintained-its-position-as-the-most-popular-internet-service-openai-binance-whatsapp-and-facebook-led-their-respective-categories"><u>🔗</u></a></p></li><li><p>Global traffic from Starlink grew 3.3x in 2024, in line with last year’s growth rate. After initiating service in Malawi in July 2023, Starlink traffic from that country grew 38x in 2024. As Starlink added new markets, we saw traffic grow rapidly in those locations. <a href="#global-traffic-from-starlink-grew-3-3x-in-2024-in-line-with-last-years-growth-rate-after-initiating-service-in-malawi-in-july-2023-starlink-traffic-from-that-country-grew-38x-in-2024"><u>🔗</u></a></p></li><li><p>Googlebot, Google’s web crawler, was responsible for the highest volume of request traffic to Cloudflare in 2024, as it retrieved content from millions of Cloudflare customer sites for search indexing. <a href="#google-maintained-its-position-as-the-most-popular-internet-service-openai-binance-whatsapp-and-facebook-led-their-respective-categories"><u>🔗</u></a></p></li><li><p>Traffic from ByteDance’s AI crawler (Bytespider) gradually declined over the course of 2024. Anthropic’s AI crawler (ClaudeBot) first started showing signs of ongoing crawling activity in April, then declined after an initial peak in May &amp; June. <a href="#among-ai-bots-and-crawlers-bytespider-bytedance-traffic-gradually-declined-over-the-course-of-2024-while-claudebot-anthropic-was-more-active-during-the-back-half-of-the-year"><u>🔗</u></a></p></li><li><p>13.0% of TLS 1.3 traffic is using post-quantum encryption. <a href="#13-0-of-tls-1-3-traffic-is-using-post-quantum-encryption"><u>🔗</u></a></p></li></ul>
    <div>
      <h3>Adoption &amp; Usage</h3>
      <a href="#adoption-usage">
        
      </a>
    </div>
    <ul><li><p>Globally, nearly one-third of mobile device traffic was from Apple iOS devices. Android had a &gt;90% share of mobile device traffic in 29 countries/regions; peak iOS mobile device traffic share was over 60% in eight countries/regions. <a href="#globally-nearly-one-third-of-mobile-device-traffic-was-from-apple-ios-devices-android-had-a-90-share-of-mobile-device-traffic-in-29-countries-regions-peak-ios-mobile-device-traffic-share-was-over-60-in-eight-countries-regions"><u>🔗</u></a></p></li><li><p>Globally, nearly half of web requests used HTTP/2, with 20.5% using HTTP/3. Usage of both versions was up slightly from 2023. <a href="#globally-nearly-half-of-web-requests-used-http-2-with-20-5-using-http-3"><u>🔗</u></a></p></li><li><p>React, PHP, and jQuery were among the most popular technologies used to build websites, while HubSpot, Google, and WordPress were among the most popular vendors of supporting services and platforms. <a href="#react-php-and-jquery-were-among-the-most-popular-technologies-used-to-build-websites-while-hubspot-google-and-wordpress-were-among-the-most-popular-vendors-of-supporting-services-and-platforms"><u>🔗</u></a></p></li><li><p>Go surpassed NodeJS as the most popular language used for making automated API requests. <a href="#go-surpassed-nodejs-as-the-most-popular-language-used-for-making-automated-api-requests"><u>🔗</u></a></p></li><li><p>Google is far and away the most popular search engine globally, across all platforms. On mobile devices and operating systems, Baidu is a distant second. Bing is a distant second across desktop and Windows devices, with DuckDuckGo second most popular on macOS. Shares vary by platform and country/region. <a href="#google-is-the-most-popular-search-engine-globally-across-all-platforms-on-mobile-devices-os-baidu-is-a-distant-second-bing-is-a-distant-second-across-desktop-and-windows-devices-with-duckduckgo-second-most-popular-on-macos"><u>🔗</u></a></p></li><li><p>Google Chrome is far and away the most popular browser overall. While this is also true on macOS devices, Safari usage is well ahead of Chrome on iOS devices. On Windows, Edge is the second most popular browser as it comes preinstalled and is the initial default. <a href="#google-chrome-is-the-most-popular-browser-overall-while-also-true-on-macos-devices-safari-usage-is-well-ahead-of-chrome-on-ios-devices-on-windows-edge-is-the-second-most-popular-browser"><u>🔗</u></a></p></li></ul>
    <div>
      <h3>Connectivity</h3>
      <a href="#connectivity">
        
      </a>
    </div>
    <ul><li><p>225 major Internet disruptions were observed globally in 2024, with many due to government-directed regional and national shutdowns of Internet connectivity. Cable cuts and power outages were also leading causes. <a href="#225-major-internet-outages-were-observed-around-the-world-in-2024-with-many-due-to-government-directed-regional-and-national-shutdowns-of-internet-connectivity"><u>🔗</u></a></p></li><li><p>Aggregated across 2024, 28.5% of IPv6-capable requests were made over IPv6. India and Malaysia were the strongest countries, at 68.9% and 59.6% IPv6 adoption respectively. <a href="#globally-nearly-half-of-web-requests-used-http-2-with-20-5-using-http-3"><u>🔗</u></a></p></li><li><p>The top 10 countries ranked by Internet speed all had average download speeds above 200 Mbps. Spain was consistently among the top locations across the measured Internet quality metrics. <a href="#the-top-10-countries-ranked-by-internet-speed-all-had-average-download-speeds-above-200-mbps-spain-was-consistently-among-the-top-locations-across-measured-internet-quality-metrics"><u>🔗</u></a></p></li><li><p>41.3% of global traffic comes from mobile devices. In nearly 100 countries/regions, the majority of traffic comes from mobile devices. <a href="#41-3-of-global-traffic-comes-from-mobile-devices-in-nearly-100-countries-regions-the-majority-of-traffic-comes-from-mobile-devices"><u>🔗</u></a></p></li><li><p>20.7% of TCP connections are unexpectedly terminated before any useful data can be exchanged. <a href="#20-7-of-tcp-connections-are-unexpectedly-terminated-before-any-useful-data-can-be-exchanged"><u>🔗</u></a></p></li></ul>
    <div>
      <h3>Security</h3>
      <a href="#security">
        
      </a>
    </div>
    <ul><li><p>6.5% of global traffic was mitigated by Cloudflare's systems as being potentially malicious or for customer-defined reasons. In the United States, the share of mitigated traffic grew to 5.1%, while in South Korea, it dropped slightly to 8.1%. In 44 countries/regions, over 10% of traffic was mitigated. <a href="#6-5-of-global-traffic-was-mitigated-by-cloudflares-systems-as-being-potentially-malicious-or-for-customer-defined-reasons"><u>🔗</u></a></p></li><li><p>The United States was responsible for over a third of global bot traffic. Amazon Web Services was responsible for 12.7% of global bot traffic, and 7.8% came from Google. <a href="#the-united-states-was-responsible-for-over-a-third-of-global-bot-traffic-amazon-web-services-was-responsible-for-12-7-of-global-bot-traffic-and-7-8-came-from-google"><u>🔗</u></a></p></li><li><p>Globally, Gambling/Games was the most attacked industry, slightly ahead of 2023’s most targeted industry, Finance. <a href="#globally-gambling-games-was-the-most-attacked-industry-slightly-ahead-of-2023s-most-targeted-industry-finance"><u>🔗</u></a></p></li><li><p>Log4j, a vulnerability discovered in 2021, remains a persistent threat and was actively targeted throughout 2024. <a href="#log4j-remains-a-persistent-threat-and-was-actively-targeted-throughout-2024"><u>🔗</u></a></p></li><li><p>Routing security, measured as the share of RPKI valid routes and the share of covered IP address space, continued to improve globally throughout 2024. We saw a 4.7% increase in RPKI valid IPv4 address space in 2024, and a 6.4% increase in RPKI valid routes in 2024. <a href="#routing-security-measured-as-the-share-of-rpki-valid-routes-and-the-share-of-covered-ip-address-space-continued-to-improve-globally-throughout-2024"><u>🔗</u></a></p></li></ul>
    <div>
      <h3>Email Security</h3>
      <a href="#email-security">
        
      </a>
    </div>
    <ul><li><p>An average of 4.3% of emails were determined to be malicious in 2024, although this figure was likely influenced by spikes observed in March, April, and May. Deceptive links and identity deception were the two most common types of threats found in malicious email messages. <a href="#an-average-of-4-3-of-emails-were-determined-to-be-malicious-in-2024"><u>🔗</u></a></p></li><li><p>Over 99% of the email messages processed by Cloudflare Email Security from the .bar, .rest, and .uno top level domains (TLDs) were found to be either spam or malicious in nature. <a href="#over-99-of-the-email-messages-processed-by-cloudflare-email-security-from-the-bar-rest-and-uno-top-level-domains-tlds-were-found-to-be-either-spam-or-malicious-in-nature"><u>🔗</u></a></p></li></ul>
    <div>
      <h2>Introduction</h2>
      <a href="#introduction">
        
      </a>
    </div>
    <p>Over the last four years (<a href="https://blog.cloudflare.com/cloudflare-radar-2020-year-in-review/"><u>2020</u></a>, <a href="https://blog.cloudflare.com/cloudflare-radar-2021-year-in-review/"><u>2021</u></a>, <a href="https://blog.cloudflare.com/radar-2022-year-in-review/"><u>2022</u></a>, <a href="https://blog.cloudflare.com/radar-2023-year-in-review/"><u>2023</u></a>), we have aggregated perspectives from <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> into an annual Year In Review, illustrating the Internet’s patterns across multiple areas over the course of that year. The <a href="https://radar.cloudflare.com/year-in-review/2024"><u>Cloudflare Radar 2024 Year In Review</u></a> microsite continues that tradition, featuring interactive charts, graphs, and maps you can use to explore and compare notable Internet trends observed throughout this past year.</p><p>Cloudflare’s <a href="https://www.cloudflare.com/network"><u>network</u></a> currently spans more than 330 cities in over 120 countries/regions, serving an average of over 63 million HTTP(S) requests per second for millions of Internet properties, in addition to handling over 42 million DNS requests per second on average. The resulting data generated by this usage, combined with data from other complementary Cloudflare tools, enables Radar to provide unique near-real time perspectives on the patterns and trends around security, traffic, performance, and usage that we observe across the Internet. </p><p>The 2024 Year In Review is organized into five sections: <a href="https://radar.cloudflare.com/year-in-review/2024#traffic"><u>Traffic</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024#adoption-and-usage"><u>Adoption &amp; Usage</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024#connectivity"><u>Connectivity</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024#security"><u>Security</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024#email-security"><u>Email Security</u></a> and covers the period from January 1 to December 1, 2024. We have incorporated several new metrics this year, including AI bot &amp; crawler traffic, search engine and browser market share, connection tampering, and “most dangerous” top level domains (TLDs). To ensure consistency, we have kept underlying methodologies consistent with previous years’ calculations. Trends for 200 countries/regions are available on the microsite; smaller or less populated locations are excluded due to insufficient data. Some metrics are only shown worldwide, and are not displayed if a country/region is selected. </p><p>Below, we provide an overview of the content contained within the major Year In Review sections (Traffic, Adoption &amp; Usage, Connectivity, Security, and Email Security), along with notable observations and key findings. In addition, we have also published a companion blog post that specifically explores trends seen across <a href="https://blog.cloudflare.com/radar-2024-year-in-review-internet-services/"><u>Top Internet Services</u></a>.</p><p>The key findings and associated discussion within this post only provide a high-level perspective on the unique insights that can be found in the <a href="https://radar.cloudflare.com/year-in-review/2024"><u>Year in Review microsite</u></a>. 
Visit the microsite to explore the various datasets and metrics in more detail, including trends seen in your country/region, how these trends have changed as compared to 2023, and how they compare to other countries/regions of interest. Surveying the Internet from this vantage point provides insights that can inform decisions on everything from an organization’s security posture and IT priorities to product development and strategy. </p>
    <div>
      <h2>Traffic trends</h2>
      <a href="#traffic-trends">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XlL4SnJROa2fArrtUheuo/822ede9708eb6e9aeeebce4331d62140/2627_Graph.png" />
          </figure>
    <div>
      <h3>Global Internet traffic grew 17.2% in 2024.</h3>
      <a href="#global-internet-traffic-grew-17-2-in-2024">
        
      </a>
    </div>
    <p>An inflection point for Internet traffic arguably occurred thirty years ago. The World Wide Web went mainstream in 1994, thanks to the late 1993 <a href="https://cybercultural.com/p/1993-mosaic-launches-and-the-web-is-set-free/"><u>release</u></a> of the <a href="https://www.ncsa.illinois.edu/research/project-highlights/ncsa-mosaic/"><u>NCSA Mosaic</u></a> browser for multiple popular operating systems, which included support for embedded images. In turn, “heavier” (in contrast to text-based) Internet content became the norm, and coupled with the growth in consumption through popular online services and the emerging consumer ISP industry, <a href="https://blogs.cisco.com/sp/the-history-and-future-of-internet-traffic"><u>Internet traffic began to rapidly increase</u></a>, and that trend has continued to the present.</p><p>To determine the traffic trends over time for the Year in Review, we use the average daily traffic volume (excluding bot traffic) over the second full calendar week (January 8-15) of 2024 as our baseline. (The second calendar week is used to allow time for people to get back into their “normal” school and work routines after the winter holidays and New Year’s Day.) The percent change shown in the traffic trends chart is calculated relative to the baseline value — it does not represent absolute traffic volume for a country/region. The trend line represents a seven-day trailing average, which is used to smooth the sharp changes seen with data at a daily granularity. To compare 2024’s traffic trends with 2023 data and/or other locations, click the “Compare” icon at the upper right of the graph.</p><p>Throughout the first half of 2024, <a href="https://radar.cloudflare.com/year-in-review/2024?#internet-traffic-growth"><u>worldwide Internet traffic growth</u></a> appeared to be fairly limited, within a percent or two on either side of the baseline value through mid-August. However, at that time, growth clearly began to accelerate, climbing consistently through the end of November, growing 17.2% for the year. This trend is similar to those seen in 2023 and 2022, as we discussed in the <a href="https://blog.cloudflare.com/radar-2023-year-in-review/"><u>2023 Year in Review blog post</u></a>.</p>
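    <p>For readers who want the chart math spelled out, the following is a simplified sketch of the two calculations described above (baseline-relative percent change and a seven-day trailing average); it is illustrative only, not Radar’s actual data pipeline:</p>
<pre><code>// Simplified sketch of the chart math described above; not Radar's actual pipeline.
// Percent change of each day's traffic relative to the average of a baseline week.
function percentChangeVsBaseline(daily: number[], baselineStart: number, baselineDays = 7): number[] {
  const week = daily.slice(baselineStart, baselineStart + baselineDays);
  const baseline = week.reduce((sum, v) =&gt; sum + v, 0) / week.length;
  return daily.map((v) =&gt; ((v - baseline) / baseline) * 100);
}

// Seven-day trailing average used to smooth day-to-day variation in the trend line.
function trailingAverage(series: number[], windowSize = 7): number[] {
  return series.map((_, i) =&gt; {
    const start = Math.max(0, i - windowSize + 1);
    const slice = series.slice(start, i + 1);
    return slice.reduce((sum, v) =&gt; sum + v, 0) / slice.length;
  });
}</code></pre>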
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NjOCs902pW74OQ0bx2usy/58896c0bc06b4a9c819736bde28ed3f4/traffic_-_worldwide.png" />
          </figure><p><sup><i>Internet traffic trends in 2024, worldwide</i></sup></p><p>The West African country of <a href="https://radar.cloudflare.com/year-in-review/2024/gn?previousYear=true"><u>Guinea</u></a> experienced the most significant Internet traffic growth seen in 2024, reaching as much as 350% above baseline. Traffic growth didn’t begin in earnest until late February, and reached an initial peak in early April. It remained between 100% and 200% above baseline until September, when it experienced several multi-week periods of growth. While the September-November periods of traffic growth also occurred in 2023, they peaked at under 90% above baseline.</p><p>The impact of significant Internet outages is also clearly visible when looking at data across the year. Two significant Internet outages in <a href="https://radar.cloudflare.com/year-in-review/2024/cu#internet-traffic-growth"><u>Cuba</u></a> are clearly visible as large drops in traffic in October and November. A reported “complete disconnection” of the national electricity system on the island <a href="https://x.com/CloudflareRadar/status/1847325224208891950"><u>occurred on October 18</u></a>, lasting <a href="https://x.com/CloudflareRadar/status/1848680148813406474"><u>just over three days</u></a>. Just a couple of weeks later, on November 6, <a href="https://x.com/CloudflareRadar/status/1854291286322544752"><u>damage from Hurricane Rafael caused widespread power outages in Cuba</u></a>, resulting in another large drop in Internet traffic. Traffic has remained lower as Cuba’s electrical infrastructure <a href="https://x.com/CloudflareRadar/status/1864263679442567604"><u>continues to struggle</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rvK8AFYdcJAgQhJQUTiQw/8c4790fd06af8323636878977a9d712c/traffic_-_Cuba.png" />
          </figure><p><sup><i>Internet traffic trends in 2024, Cuba</i></sup></p><p>As we frequently discuss in Cloudflare Radar blog and social media posts, government-directed Internet shutdowns occur all too frequently, and the impact of these actions are also clearly visible when looking at long-term traffic data. In <a href="https://radar.cloudflare.com/year-in-review/2024/bd#internet-traffic-growth"><u>Bangladesh</u></a>, the government ordered the <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#bangladesh"><u>shutdown of mobile Internet connectivity</u></a> on July 18, in response to student protests. Shortly after mobile networks were shut down, fixed broadband networks were taken offline as well, resulting in a near complete loss of Internet traffic from the country. Connectivity gradually returned over the course of several days, between July 23-28.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FvubyG6qMeZ9hv1wgFayl/91d356b23788a8f9cdd970cc7e65f8fc/traffic_-_Bangladesh.png" />
          </figure><p><sup><i>Internet traffic trends in 2024, Bangladesh</i></sup></p><p>As we also noted last year, the celebration of major holidays can also have a visible impact on Internet traffic at a country level. For example, in Muslim countries including <a href="https://radar.cloudflare.com/year-in-review/2024/ae?compareWith=ID#internet-traffic-growth"><u>Indonesia and the United Arab Emirates</u></a>, the celebration of Eid al-Fitr, the festival marking the end of the fast of Ramadan, is visible as a noticeable drop in traffic around April 9-10. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/aFTFP2banlfW65XkjUIZM/84bfd5db1036da1b4740843575217113/traffic_-_UAE_Indonesia.png" />
          </figure><p><sup><i>Internet traffic trends in 2024, Indonesia and United Arab Emirates</i></sup></p>
    <div>
      <h3>Google maintained its position as the most popular Internet service. OpenAI, Binance, WhatsApp, and Facebook led their respective categories. </h3>
      <a href="#google-maintained-its-position-as-the-most-popular-internet-service-openai-binance-whatsapp-and-facebook-led-their-respective-categories">
        
      </a>
    </div>
    <p>Over the last several years, the Year In Review has ranked the <a href="https://radar.cloudflare.com/year-in-review/2024#internet-services"><u>most popular Internet services</u></a>. These rankings cover an “overall” perspective, as well as a dozen more specific categories, based on analysis of anonymized query data of traffic to our <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS resolver</u></a> from millions of users around the world. For the purposes of these rankings, domains that belong to a single Internet service are grouped together.</p><p>Google once again held the top spot overall, supported by its broad portfolio of services, as well as the popularity of the Android mobile operating system (more on that <a href="#globally-nearly-one-third-of-mobile-device-traffic-was-from-apple-ios-devices-android-had-a-90-share-of-mobile-device-traffic-in-29-countries-regions-peak-ios-mobile-device-traffic-share-was-over-60-in-eight-countries-regions"><u>below</u></a>). Meta properties Facebook, Instagram, and WhatsApp also held spots in the top 10.</p><p><a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>Generative AI</u></a> continued to grow in popularity throughout 2024, and in this category, OpenAI again held the top spot, building on the continued success and popularity of ChatGPT. Within Social Media, the top five remained consistent with 2023’s and 2022’s ranking, including Facebook, TikTok, Instagram, X, and Snapchat.</p><p>These categorical rankings, as well as trends seen by specific services, are explored in more detail in a separate blog post, <a href="https://blog.cloudflare.com/radar-2024-year-in-review-internet-services/"><i><u>From ChatGPT to Temu: ranking top Internet services in 2024</u></i></a>.</p>
    <div>
      <h3>Global traffic from Starlink grew 3.3x in 2024, in line with last year’s growth rate. After initiating service in Malawi in July 2023, Starlink traffic from that country grew 38x in 2024.</h3>
      <a href="#global-traffic-from-starlink-grew-3-3x-in-2024-in-line-with-last-years-growth-rate-after-initiating-service-in-malawi-in-july-2023-starlink-traffic-from-that-country-grew-38x-in-2024">
        
      </a>
    </div>
    <p>SpaceX’s Starlink continues to be the leading satellite Internet service provider, bringing connectivity to unserved or underserved areas. In addition to opening up new markets in 2024, Starlink also announced relationships to provide in-flight connectivity to <a href="https://www.cnbc.com/2024/09/17/spacexs-starlink-has-2500-aircraft-under-contract.html"><u>multiple airlines</u></a>, as well as connectivity on <a href="https://x.com/Starlink/status/1790426484022342081"><u>cruise ships</u></a> and <a href="https://x.com/Starlink/status/1857166233969607123"><u>trains</u></a>, and enabled subscribers to roam with their subscription using the <a href="https://www.theverge.com/2024/7/11/24196294/starlink-mini-available-us-price-specs"><u>Starlink Mini</u></a>.</p><p>We analyzed aggregate Cloudflare traffic volumes associated with Starlink's primary <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> (<a href="https://radar.cloudflare.com/as14593"><u>AS14593</u></a>) to track the growth in usage of the service throughout 2024. Similar to the traffic trends discussed above, the request volume shown on the trend line in the chart represents a seven-day trailing average. Comparisons with 2023 data can be shown by clicking the “Compare” icon at the upper right of the graph. Within comparative views, the lines are scaled to the maximum value shown.</p><p>On a <a href="https://radar.cloudflare.com/year-in-review/2024#starlink-traffic.trends"><u>worldwide</u></a> basis, steady, consistent growth was seen across the year, though it accelerated through November. This acceleration may have been driven by traffic associated with large, customer-specific software updates.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Dy2qt4O5b3MCswkckhELA/aa29c7235497bed8c985aa9dd9b63477/traffic_-_Starlink_worldwide.png" />
          </figure><p><sup><i>Starlink traffic growth worldwide in 2024</i></sup></p><p>In many locations, there is pent-up demand for “alternative” connectivity providers such as Starlink, and in these countries/regions, we see rapid traffic growth when service becomes available, such as in <a href="https://radar.cloudflare.com/year-in-review/2024/zw#starlink-traffic.trends"><u>Zimbabwe</u></a>. Service availability was <a href="https://x.com/Starlink/status/1832392080481563037"><u>announced on September 7</u></a>, and traffic from the country began to grow rapidly almost immediately thereafter.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1aLywcrB5w88flsDyK1R1q/1039d989e19dc566cf5f62e60f3f1886/traffic_-_Starlink_Zimbabwe.png" />
          </figure><p><sup><i>Starlink traffic growth in Zimbabwe in 2024</i></sup></p><p>In new markets, traffic growth continues after that initial increase. For example, Starlink service became available in Malawi <a href="https://x.com/Starlink/status/1683897037639790592"><u>in July 2023</u></a>, and throughout 2024, Starlink traffic from the country grew 38x. While <a href="https://radar.cloudflare.com/year-in-review/2024/mw#starlink-traffic.trends"><u>Malawi’s 38x increase</u></a> is impressive, other countries also experienced significant growth. In the Eastern European country of <a href="https://radar.cloudflare.com/year-in-review/2024/ge#starlink-traffic.trends"><u>Georgia</u></a>, <a href="https://x.com/Starlink/status/1719581885200998485"><u>service became available on November 1, 2023</u></a>. After a slow ramp, traffic began to take off, growing over 100x through 2024. In <a href="https://radar.cloudflare.com/year-in-review/2024/py#starlink-traffic.trends"><u>Paraguay</u></a>, <a href="https://x.com/Starlink/status/1737914318522581489"><u>service availability was announced on December 21</u></a>, and traffic began to grow at the beginning of January, registering an increase of over 900x across the year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AXOON7CO7XgnWSNnezoiF/bd56192d682c574a2d242845bb0eda16/traffic_-_Starlink_Malawi.png" />
          </figure><p><sup><i>Starlink traffic growth in Malawi in 2024</i></sup></p>
    <div>
      <h3>Googlebot was responsible for the highest volume of request traffic to Cloudflare in 2024 as it retrieved content from millions of Cloudflare customer sites for search indexing. </h3>
      <a href="#googlebot-was-responsible-for-the-highest-volume-of-request-traffic-to-cloudflare-in-2024-as-it-retrieved-content-from-millions-of-cloudflare-customer-sites-for-search-indexing">
        
      </a>
    </div>
    <p>Cloudflare Radar shows users Internet traffic trends over a selected period of time, but only at a country/region or network level. However, <a href="https://blog.cloudflare.com/radar-2023-year-in-review/#googlebot-was-responsible-for-the-highest-volume-of-request-traffic-to-cloudflare-in-2023"><u>as we did in 2023</u></a>, we again wanted to look at the traffic Cloudflare saw over the course of the full year from the entire IPv4 Internet. To do so, we can use <a href="https://en.wikipedia.org/wiki/Hilbert_curve"><u>Hilbert curves</u></a>, which allow us to visualize a sequence of IPv4 addresses in a two-dimensional pattern that keeps nearby IP addresses close to each other, making them <a href="https://xkcd.com/195/"><u>useful</u></a> for surveying the Internet's IPv4 address space.</p><p>Using a Hilbert curve, we can <a href="https://radar.cloudflare.com/year-in-review/2024#ipv4-traffic-distribution"><u>visualize aggregated IPv4 request traffic to Cloudflare</u></a> from January 1 through December 1, 2024. Within the visualization, we aggregate IPv4 addresses at a <a href="https://www.ripe.net/about-us/press-centre/IPv4CIDRChart_2015.pdf"><u>/20</u></a> level, meaning that at the highest zoom level, each square represents traffic from 4,096 IPv4 addresses. This aggregation is done to keep the amount of data used for the visualization manageable. (While we would like to create a similar visualization for IPv6 traffic, the enormity of the full IPv6 address space would make associated traffic very <a href="https://observablehq.com/@vasturiano/hilbert-map-of-ipv6-address-space"><u>hard to see</u></a> in such a visualization, especially as such a small amount has been <a href="https://www.iana.org/numbers/allocations/"><u>allocated for assignment by the Regional Internet Registries</u></a>.)</p><p>Within the visualization, IP addresses are grouped by ownership, and for much of the IP address space shown there, a mouseover at the default zoom level will show the <a href="https://www.nro.net/about/rirs/"><u>Regional Internet Registry (RIR)</u></a> that the address block belongs to. However, there are also a number of blocks that were assigned prior to the existence of the RIR system, and these are labeled with the name of the organization that owns them. Progressive zooming ultimately shows the autonomous system and country/region that the IP address block is associated with, as well as its share of traffic relative to the maximum. (If a country/region is selected, only the IP address blocks associated with that location are visible.) Overall traffic shares are indicated by shading based on a color scale, and although a number of large unshaded blocks are visible, this does not necessarily mean that the associated address space is unused, but rather that it may be used in a way that does not generate traffic to Cloudflare.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gtrL1H2gUjSKH7AMSabgM/361d38f34860258449a914e26519a4b4/traffic_-_Hilbert_curve.png" />
          </figure><p><sup><i>Hilbert curve showing aggregated 2024 traffic to Cloudflare across the IPv4 Internet</i></sup></p><p>Warmer orange/red shading within the visualization represents areas of higher request volume, and buried within one of those areas is the IP address block that had the maximum request volume to Cloudflare during 2024. As it was in 2023, this address block was <a href="https://radar.cloudflare.com/routing/prefix/66.249.64.0/20"><u>66.249.64.0/20</u></a>, which belongs to Google, and is <a href="https://developers.google.com/static/search/apis/ipranges/googlebot.json"><u>one of several</u></a> used by the <a href="https://developers.google.com/search/docs/crawling-indexing/googlebot"><u>Googlebot</u></a> web crawler to retrieve content for search indexing. This use of that address space is a likely explanation for the high request volume, given the number of web properties on Cloudflare’s network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/g5rQhT7r4DsgpzYdMT3QT/0d6809d96791ee7165ada170d24156e3/traffic_-_Hilbert_curve_Googlebot.png" />
          </figure><p><sup><i>Zoomed Hilbert curve view showing the IPv4 address block that generated the highest volume of requests</i></sup></p><p>In addition to Google, owners of other prefixes in the top 20 include Alibaba, Microsoft, Amazon, and Apple. To explore the IPv4 Internet in more detail, we encourage you to go to <a href="https://radar.cloudflare.com/year-in-review/2024/#ipv4-traffic-distribution"><u>the Year in Review microsite</u></a> and explore it by dragging and zooming to move around IPv4 address space.</p>
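    <p>To make the mapping concrete, the short sketch below places a /20 IPv4 block on an order-10 (1,024 x 1,024) Hilbert grid, where each of the 2<sup>20</sup> possible /20 blocks occupies one cell. This is a minimal illustration of the technique rather than the code behind the visualization; the <code>d2xy</code> helper follows the standard distance-to-coordinate conversion for Hilbert curves.</p>
    <pre><code>import ipaddress

def d2xy(order, d):
    """Convert distance d along a Hilbert curve of the given order into
    (x, y) coordinates on a 2**order by 2**order grid."""
    x = y = 0
    t = d
    s = 1
    for _ in range(order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate/flip the quadrant to keep the curve continuous
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def block_to_cell(prefix, order=10):
    """Place a /20 IPv4 block on an order-10 (1024x1024) Hilbert grid,
    so adjacent address blocks land in nearby cells."""
    net = ipaddress.ip_network(prefix)
    d = int(net.network_address) >> 12   # top 20 bits index the /20 block
    return d2xy(order, d)

# Googlebot's 66.249.64.0/20 was the highest-volume block in 2024
print(block_to_cell("66.249.64.0/20"))
</code></pre>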
    <div>
      <h3>Among AI bots and crawlers, Bytespider (ByteDance) traffic gradually declined over the course of 2024, while ClaudeBot (Anthropic) was more active during the back half of the year.</h3>
      <a href="#among-ai-bots-and-crawlers-bytespider-bytedance-traffic-gradually-declined-over-the-course-of-2024-while-claudebot-anthropic-was-more-active-during-the-back-half-of-the-year">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">AI bots and crawlers</a> have been in the news throughout 2024 as they voraciously consume content to train ever-evolving models. Controversy has followed them, as not all bots and crawlers respect content owner directives to restrict crawling activity. In July, Cloudflare enabled customers to <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>block these bots and crawlers with a single click</u></a>, and during Birthday Week <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>we introduced AI Audit</u></a> to give website owners even more visibility into and control over how AI platforms access their content. </p><p>Tracking traffic trends for AI bots can help us better understand their activity over time — observing which are the most aggressive and have the highest volume of requests, which perform crawls on a regular basis, etc. The new <a href="https://radar.cloudflare.com/traffic#ai-bot-crawler-traffic"><u>AI bot &amp; crawler traffic graph on Radar’s Traffic page</u></a>, <a href="https://blog.cloudflare.com/bringing-ai-to-cloudflare/#ai-bot-traffic-insights-on-cloudflare-radar"><u>launched in September</u></a>, provides insight into these traffic trends gathered over the selected time period for the top known AI bots. </p><p><a href="https://radar.cloudflare.com/year-in-review/2024#ai-bot-and-crawler-traffic"><u>Looking at traffic trends</u></a> from two of those bots, we can see some interesting patterns. <a href="https://darkvisitors.com/agents/bytespider"><u>Bytespider</u></a> is a crawler operated by ByteDance, the Chinese owner of TikTok, and is reportedly used to download training data for ByteDance’s Large Language Models (LLMs). Bytespider’s crawling activity trended generally downwards over the course of 2024, with end-of-November activity approximately 80-85% lower than that seen at the start of the year. <a href="https://darkvisitors.com/agents/claudebot"><u>ClaudeBot</u></a> is Anthropic’s crawler, which downloads training data for its LLMs that power AI products like Claude. Traffic from ClaudeBot appeared to be mostly non-existent through mid-April, except for some small spikes that possibly represent test runs. Traffic became more consistently non-zero starting in late April, but after an early spike, trailed off through the remainder of the year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7cm0SG0GC36Z3dFu3A6p3J/10a6e32a469984b2083ee0c2ed743d53/traffic_-_AI_bots_--_NEW.png" />
          </figure><p><sup><i>Traffic trends for AI crawlers Bytespider and ClaudeBot in 2024</i></sup></p><p>Traffic trends for the full list of AI bots &amp; crawlers can be found in the <a href="https://radar.cloudflare.com/explorer?dataSet=ai.bots&amp;dt=2024-01-01_2024-12-31"><u>Cloudflare Radar Data Explorer</u></a>.</p>
    <div>
      <h3>13.0% of TLS 1.3 traffic is using post-quantum encryption.</h3>
      <a href="#13-0-of-tls-1-3-traffic-is-using-post-quantum-encryption">
        
      </a>
    </div>
    <p>The term “<a href="https://en.wikipedia.org/wiki/Post-quantum_cryptography"><u>post-quantum</u></a>” refers to a new set of cryptographic techniques designed to protect data from adversaries that have the ability to capture and store current data for decryption by sufficiently powerful quantum computers in the future. The Cloudflare Research team has been <a href="https://blog.cloudflare.com/sidh-go/"><u>exploring post-quantum cryptography since 2017</u></a>.</p><p>In October 2022, we enabled <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>post-quantum key agreement</u></a> on our network by default, but use of it requires that browsers and clients support it as well. In 2024, Google's <a href="https://developer.chrome.com/release-notes/124"><u>Chrome 124</u></a> enabled it by default on April 17, and <a href="https://radar.cloudflare.com/year-in-review/2024#post-quantum-encryption"><u>adoption grew rapidly following that release</u></a>, increasing from just over 2% of requests to around 12% within a month, and ending November at 13%. We expect that adoption will continue to grow into and through 2025 due to support in other Chromium-based browsers, growing default support in Mozilla Firefox, and initial testing in Apple Safari.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ipRtCowVftgad37ht9uMF/68958f72a47bbc179959c2d7ac6cdd72/traffic_-_post-quantum_worldwide.png" />
          </figure><p><sup><i>Growth trends in post-quantum encrypted TLS 1.3 traffic during 2024</i></sup></p>
    <div>
      <h2>Adoption &amp; Usage insights</h2>
      <a href="#adoption-usage-insights">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/177RCO2sEvFJJeJeCBzZim/68acfcc309c57ef2027e9291a5f76d2f/2627_Shield.png" />
          </figure>
    <div>
      <h3>Globally, nearly one-third of mobile device traffic was from Apple iOS devices. Android had a &gt;90% share of mobile device traffic in 29 countries/regions; peak iOS mobile device traffic share was over 60% in eight countries/regions.</h3>
      <a href="#globally-nearly-one-third-of-mobile-device-traffic-was-from-apple-ios-devices-android-had-a-90-share-of-mobile-device-traffic-in-29-countries-regions-peak-ios-mobile-device-traffic-share-was-over-60-in-eight-countries-regions">
        
      </a>
    </div>
    <p>The two leading mobile device operating systems globally are <a href="https://en.wikipedia.org/wiki/IOS"><u>Apple’s iOS</u></a> and <a href="https://en.wikipedia.org/wiki/Android_(operating_system)"><u>Google’s Android</u></a>, and by analyzing information in the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>user agent</u></a> reported with each request, we can get insight into the distribution of traffic by client operating system throughout the year. Again, we found that Android is responsible for the majority of mobile device traffic when aggregated globally, due to the wide distribution of price points, form factors, and capabilities.</p><p>Similar to <a href="https://radar.cloudflare.com/year-in-review/2023#ios-vs-android"><u>2023’s findings</u></a>, Android was once again <a href="https://radar.cloudflare.com/year-in-review/2024#ios-vs-android"><u>responsible for just over two-thirds of mobile device traffic</u></a>. Looking at the top countries for Android traffic, we find a greater than 95% share in <a href="https://radar.cloudflare.com/year-in-review/2024/sd#ios-vs-android"><u>Sudan</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/bd#ios-vs-android"><u>Bangladesh</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/tm#ios-vs-android"><u>Turkmenistan</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/mw#ios-vs-android"><u>Malawi</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/pg#ios-vs-android"><u>Papua New Guinea</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/sy#ios-vs-android"><u>Syria</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024/ye#ios-vs-android"><u>Yemen</u></a>, up from just two countries in 2023. Similar to last year, we again found that countries/regions with higher levels of Android usage are largely in Africa, Oceania/Asia, and South America, and that many have lower levels of <a href="https://ourworldindata.org/grapher/gross-national-income-per-capita?tab=table"><u>gross national income per capita</u></a>. In these countries/regions, the availability of lower priced “budget” Android devices supports increased adoption.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9bsuRwzYBybYpOiKqwLja/cbdafb60eab1913a91ec916899d1e807/connectivity_-_mobile_desktop.png" />
          </figure><p><sup><i>Global distribution of mobile device traffic by operating system in 2024</i></sup></p><p>In contrast, iOS adoption tops out in the 65% range in <a href="https://radar.cloudflare.com/year-in-review/2024/je#ios-vs-android"><u>Jersey</u></a>, the <a href="https://radar.cloudflare.com/year-in-review/2024/fo#ios-vs-android"><u>Faroe Islands</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/gg#ios-vs-android"><u>Guernsey</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024/dk#ios-vs-android"><u>Denmark</u></a>. Adoption rates of 50% or more were seen in a total of 26 countries/regions, including <a href="https://radar.cloudflare.com/year-in-review/2024/no#ios-vs-android"><u>Norway</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/se#ios-vs-android"><u>Sweden</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/au#ios-vs-android"><u>Australia</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/jp#ios-vs-android"><u>Japan</u></a>, the <a href="https://radar.cloudflare.com/year-in-review/2024/us#ios-vs-android"><u>United States</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024/ca#ios-vs-android"><u>Canada</u></a>. These locations likely have a greater ability to afford higher priced devices, owing to their comparatively higher gross national income per capita.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QCfjlx0TgEotwU2wPm0hE/af1359f249aec86894b681249fe7ee70/adoption_-_Android_iOS_top_5.png" />
          </figure><p><sup><i>Countries/regions with the largest share of iOS traffic in 2024</i></sup></p>
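    <p>As a rough illustration of the user agent-based approach described above, the sketch below buckets mobile requests into Android, iOS, or Other and computes a per-country Android share. The patterns and sample records are simplified assumptions for illustration only; the actual classification also draws on Client Hints and far more detailed user agent parsing.</p>
    <pre><code>from collections import Counter, defaultdict

def classify_mobile_os(user_agent):
    """Very simplified OS bucketing; real classification also uses Client Hints
    and far more detailed user-agent patterns."""
    ua = user_agent.lower()
    if "android" in ua:
        return "Android"
    if "iphone" in ua or "ipad" in ua or "ipod" in ua:
        return "iOS"
    return "Other"

def android_share_by_country(records):
    """records is an iterable of (country, user_agent) pairs for mobile requests."""
    totals = defaultdict(Counter)
    for country, ua in records:
        totals[country][classify_mobile_os(ua)] += 1
    return {
        country: counts["Android"] / sum(counts.values())
        for country, counts in totals.items()
    }

# Hypothetical sample records for illustration only
sample = [
    ("SD", "Mozilla/5.0 (Linux; Android 13; SM-A135F) AppleWebKit/537.36 Chrome/124.0 Mobile Safari/537.36"),
    ("DK", "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 like Mac OS X) AppleWebKit/605.1.15 Version/17.5 Mobile/15E148 Safari/604.1"),
]
print(android_share_by_country(sample))  # {'SD': 1.0, 'DK': 0.0}
</code></pre>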
    <div>
      <h3>Globally, nearly half of web requests used HTTP/2, with 20.5% using HTTP/3.</h3>
      <a href="#globally-nearly-half-of-web-requests-used-http-2-with-20-5-using-http-3">
        
      </a>
    </div>
    <p>HTTP (HyperText Transfer Protocol) is the core protocol that the web relies upon. <a href="https://datatracker.ietf.org/doc/html/rfc1945"><u>HTTP/1.0</u></a> was first standardized in 1996, <a href="https://www.rfc-editor.org/rfc/rfc2616.html"><u>HTTP/1.1</u></a> in 1999, and <a href="https://www.rfc-editor.org/rfc/rfc7540.html"><u>HTTP/2</u></a> in 2015. The most recent version, <a href="https://www.rfc-editor.org/rfc/rfc9114.html"><u>HTTP/3</u></a>, was completed in 2022, and runs on top of a new transport protocol known as <a href="https://blog.cloudflare.com/the-road-to-quic/"><u>QUIC</u></a>. By running on top of QUIC, <a href="https://www.cloudflare.com/learning/performance/what-is-http3/"><u>HTTP/3</u></a> can deliver improved performance by mitigating the effects of packet loss and network changes, as well as establishing connections more quickly. HTTP/3 also provides encryption by default, which mitigates the risk of attacks. </p><p>Current versions of desktop and mobile Google Chrome (and Chromium-based variants), Mozilla Firefox, and Apple Safari <a href="https://caniuse.com/?search=http%2F3"><u>all support HTTP/3 by default</u></a>. Cloudflare makes HTTP/3 <a href="https://developers.cloudflare.com/speed/optimization/protocol/http3/"><u>available for free</u></a> to all of our customers, although not every customer chooses to enable it.</p><p>Analysis of the HTTP version negotiated for each request provides insight into the distribution of traffic by the various versions of the protocol aggregated across the year. (“HTTP/1.x” aggregates requests made over HTTP/1.0 and HTTP/1.1.) At a <a href="https://radar.cloudflare.com/year-in-review/2024#http-versions"><u>global</u></a> level, 20.5% of requests in 2024 were made using HTTP/3. Another 29.9% of requests were made over the older HTTP/1.x versions, while HTTP/2 remained dominant, accounting for the remaining 49.6%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/r7KQkjdsXXEtxHoEkFQbO/efb19d1bbd58bef3d657b96555d70103/adoption_-_HTTP_versions_global.png" />
          </figure><p><sup><i>Global distribution of traffic by HTTP version in 2024</i></sup></p><p>Looking at version distribution geographically, we found eight countries/regions sending more than a third of their requests over HTTP/3, with <a href="https://radar.cloudflare.com/year-in-review/2024/re#http-versions"><u>Reunion</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/lk#http-versions"><u>Sri Lanka</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/mn#http-versions"><u>Mongolia</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/gr#http-versions"><u>Greece</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024/mk#http-versions"><u>North Macedonia</u></a> comprising the top five as shown below. Eight other countries/regions, including <a href="https://radar.cloudflare.com/year-in-review/2024/ir#http-versions"><u>Iran</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/ie#http-versions"><u>Ireland</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/hk#http-versions"><u>Hong Kong</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024/cn#http-versions"><u>China</u></a>, sent more than half of their requests over HTTP/1.x throughout 2024. More than half of requests were made over HTTP/2 in a total of 147 countries/regions.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Zq4mMgbvw6jT6pb6LLdF7/401b98731302233b6f9674e74196e819/adoption_-_HTTP_versions_top_5.png" />
          </figure><p><sup><i>Countries/regions with the largest shares of HTTP/3 traffic in 2024</i></sup></p>
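    <p>As a small illustration of the aggregation described above (and not the production pipeline), the sketch below tallies negotiated protocol versions per request, folding HTTP/1.0 and HTTP/1.1 into a single HTTP/1.x bucket; the sample records are hypothetical.</p>
    <pre><code>from collections import Counter

def http_version_shares(requests):
    """Group negotiated protocol versions the way the chart does:
    HTTP/1.0 and HTTP/1.1 are aggregated into a single HTTP/1.x bucket."""
    buckets = Counter()
    for proto in requests:
        if proto in ("HTTP/1.0", "HTTP/1.1"):
            buckets["HTTP/1.x"] += 1
        else:
            buckets[proto] += 1
    total = sum(buckets.values())
    return {proto: count / total for proto, count in buckets.items()}

# Hypothetical sample of negotiated versions
sample = ["HTTP/2", "HTTP/2", "HTTP/3", "HTTP/1.1", "HTTP/1.0"]
print(http_version_shares(sample))  # {'HTTP/2': 0.4, 'HTTP/3': 0.2, 'HTTP/1.x': 0.4}
</code></pre>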
    <div>
      <h3>React, PHP, and jQuery were among the most popular technologies used to build websites, while HubSpot, Google, and WordPress were among the most popular vendors of supporting services and platforms.</h3>
      <a href="#react-php-and-jquery-were-among-the-most-popular-technologies-used-to-build-websites-while-hubspot-google-and-wordpress-were-among-the-most-popular-vendors-of-supporting-services-and-platforms">
        
      </a>
    </div>
    <p>Modern websites and applications are extremely complex, built on and integrating a mix of frameworks, platforms, services, and tools. In order to deliver a seamless user experience, developers must ensure that all of these components happily coexist with each other. Using <a href="https://radar.cloudflare.com/scan"><u>Cloudflare Radar’s URL Scanner</u></a>, we again scanned websites associated with the <a href="https://radar.cloudflare.com/domains"><u>top 5000 domains</u></a> to identify the <a href="https://radar.cloudflare.com/year-in-review/2024#website-technologies"><u>most popular technologies and services</u></a> used across a dozen different categories. </p><p>In looking at core technologies used to build websites, <a href="https://react.dev/"><u>React</u></a> had a commanding lead over <a href="https://vuejs.org/"><u>Vue.js</u></a> and other JavaScript frameworks, <a href="https://www.php.net/"><u>PHP</u></a> was the most popular programming technology, and <a href="https://jquery.com/"><u>jQuery</u></a>’s share was 10x that of other popular JavaScript libraries.</p><p>Third-party services and platforms are also used by websites and applications to support things like analytics, content management, and marketing automation. Google Analytics remained the most widely used analytics provider, WordPress had a greater than 50% share among content management systems, and for marketing automation providers, category leader HubSpot had nearly twice the usage share of Marketo and MailChimp.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2fJS1OpqRlVCsZ9VOXdU89/c01320ff9d20da4ad2471de780a86033/adoption_-_top_website_technologies.png" />
          </figure><p><sup><i>Top website technologies, JavaScript frameworks category in 2024</i></sup></p>
    <div>
      <h3>Go surpassed NodeJS as the most popular language used for making automated API requests.</h3>
      <a href="#go-surpassed-nodejs-as-the-most-popular-language-used-for-making-automated-api-requests">
        
      </a>
    </div>
    <p>Many dynamic websites and applications are built on <a href="https://blog.cloudflare.com/2024-api-security-report/"><u>automated API calls</u></a>, and we can use our unique visibility into web traffic to identify the top languages these API clients are written in. By applying heuristics to API-related requests that do not appear to come from a person using a browser or native mobile application, we can identify the language used to build the API client.</p><p><a href="https://radar.cloudflare.com/year-in-review/2024#api-client-language-popularity"><u>Our analysis</u></a> found that almost 12% of automated API requests were made by <a href="https://go.dev/"><u>Go</u></a>-based clients, with <a href="https://nodejs.org/en/"><u>NodeJS</u></a>, <a href="https://www.python.org/"><u>Python</u></a>, <a href="https://www.java.com/"><u>Java</u></a>, and <a href="https://dotnet.microsoft.com/"><u>.NET</u></a> holding smaller shares. Compared to <a href="https://radar.cloudflare.com/year-in-review/2023#api-client-language-popularity"><u>2023</u></a>, Go’s share increased by approximately 40%, allowing it to capture the top spot, while NodeJS’s share fell by just over 30%. Python and Java also saw their shares increase, while .NET’s fell.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7oq8vCSsDq57HNYCbEV59n/9373b727f7f7da45be317ba34d23dcab/adoption_-_api_client_languages.png" />
          </figure><p><sup><i>Most popular API client languages in 2024</i></sup></p>
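    <p>While the exact heuristics are not published, many HTTP client libraries set recognizable default User-Agent strings (Go’s net/http, for example, sends “Go-http-client/1.1”), which is enough to sketch the idea. The patterns below are assumptions based on common library defaults, not Cloudflare’s actual rules.</p>
    <pre><code>import re

# Rough mapping from common HTTP client library defaults to languages.
# These patterns are illustrative assumptions, not Cloudflare's actual heuristics.
LANGUAGE_PATTERNS = [
    ("Go",      re.compile(r"^Go-http-client/")),
    ("NodeJS",  re.compile(r"^(node-fetch|axios|got|undici)\b", re.I)),
    ("Python",  re.compile(r"^python-requests/|aiohttp/|^Python-urllib", re.I)),
    ("Java",    re.compile(r"^Java/|okhttp/", re.I)),
    (".NET",    re.compile(r"RestSharp|\.NET", re.I)),
]

def api_client_language(user_agent):
    """Return the likely implementation language for an automated API client."""
    for language, pattern in LANGUAGE_PATTERNS:
        if pattern.search(user_agent):
            return language
    return "Other"

print(api_client_language("Go-http-client/2.0"))      # Go
print(api_client_language("python-requests/2.32.3"))  # Python
</code></pre>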
    <div>
      <h3>Google is the most popular search engine globally, across all platforms. On mobile devices/OS, Baidu is a distant second. Bing is a distant second across desktop and Windows devices, with DuckDuckGo second most popular on macOS. </h3>
      <a href="#google-is-the-most-popular-search-engine-globally-across-all-platforms-on-mobile-devices-os-baidu-is-a-distant-second-bing-is-a-distant-second-across-desktop-and-windows-devices-with-duckduckgo-second-most-popular-on-macos">
        
      </a>
    </div>
    <p>Protecting and accelerating websites and applications for millions of customers, Cloudflare is in a unique position to measure search engine market share data. Our methodology uses HTTP’s <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer"><u>referer header</u></a> to identify the search engine sending traffic to customer sites and applications. The market share data is presented as an overall aggregate, as well as broken out by device type and operating system. (Device type and operating system data is derived from the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>User-Agent</u></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Client_hints"><u>Client Hints</u></a> headers accompanying a content request.)</p><p>Aggregated at a <a href="https://radar.cloudflare.com/year-in-review/2024#search-engine-market-share"><u>global</u></a> level, Google referred the most traffic to Cloudflare customers, with a greater than 88% share across 2024. Yandex, Baidu, Bing, and DuckDuckGo round out the top five, all with single digit percentage shares. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bwTSu9NktZ9chotEmTQhs/fd231b3f13fe4709ca7480546276d2e0/adoption_-_search_engine_overall_worldwide.png" />
          </figure><p><sup><i>Overall worldwide search engine market share in 2024</i></sup></p><p>However, when drilling down by location or platform, differences are apparent in the top search engines and their shares. For example, in <a href="https://radar.cloudflare.com/year-in-review/2024/kr#search-engine-market-share"><u>South Korea</u></a>, Google is responsible for only two-thirds of referrals, while local platform <a href="https://www.naver.com/"><u>Naver</u></a> drives 29.2%, with local portal <a href="https://www.daum.net/"><u>Daum</u></a> also in the top five at 1.3%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rxOIPPwpJSt73X1GXH8t4/5597fd261ec7fda2cf357c70479be13f/adoption_-_search_engine_overall_South_Korea.png" />
          </figure><p><sup><i>Overall search engine market share in South Korea in 2024</i></sup></p><p>Google’s dominance is also blunted a bit on Windows devices, where it drives only 80% of referrals globally. Unsurprisingly, Bing holds the second spot for Windows users, with a 10.4% share. Yandex, Yahoo, and DuckDuckGo round out the top 5, all with shares below 5%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sKOs50fTPbchv55J7gQrM/1ce8b0c1287bbd5b35d9e987e2061207/adoption_-_search_engine_overall_worldwide_Windows.png" />
          </figure><p><sup><i>Overall worldwide search engine market share for Windows devices in 2024</i></sup></p><p>For additional details, including search engines aggregated under “Other”, please refer to the quarterly <a href="https://radar.cloudflare.com/reports/search-engines"><u>Search Engine Referral Reports</u></a> on Cloudflare Radar.</p>
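    <p>As a simplified sketch of the referer-based methodology described above (and not the production pipeline), referral shares can be tallied by mapping the referring hostname to a search engine. The hostname-to-engine mapping below is an illustrative assumption covering only a handful of engines.</p>
    <pre><code>from collections import Counter
from urllib.parse import urlparse

# Illustrative mapping of referring hostnames to search engines; the real
# analysis covers many more engines and country-specific domains.
SEARCH_ENGINES = {
    "google":     "Google",
    "bing":       "Bing",
    "yandex":     "Yandex",
    "baidu":      "Baidu",
    "duckduckgo": "DuckDuckGo",
    "naver":      "Naver",
    "daum":       "Daum",
}

def search_engine_from_referer(referer):
    """Return the search engine name for a referring URL, or None."""
    host = urlparse(referer).hostname or ""
    for key, name in SEARCH_ENGINES.items():
        if key in host:
            return name
    return None  # not a recognized search engine referral

referers = [
    "https://www.google.com/",
    "https://search.naver.com/search.naver?query=cloudflare",
    "https://www.bing.com/search?q=cloudflare",
]
shares = Counter(filter(None, map(search_engine_from_referer, referers)))
print(shares)  # Counter({'Google': 1, 'Naver': 1, 'Bing': 1})
</code></pre>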
    <div>
      <h3>Google Chrome is the most popular browser overall. While that is also true on macOS devices, Safari usage is well ahead of Chrome on iOS devices. On Windows, Edge is the second most popular browser.</h3>
      <a href="#google-chrome-is-the-most-popular-browser-overall-while-also-true-on-macos-devices-safari-usage-is-well-ahead-of-chrome-on-ios-devices-on-windows-edge-is-the-second-most-popular-browser">
        
      </a>
    </div>
    <p>Similar to our ability to measure search engine market share, Cloudflare is also in a unique position to measure browser market share. Our methodology uses information from the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"><u>User-Agent</u></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Client_hints"><u>Client Hints</u></a> headers to identify the browser making content requests, along with the associated operating system. Browser market share data is presented as an overall aggregate, as well as broken out by device type and operating system. Note that the shares of browsers available on both desktop and mobile devices, such as Chrome or Safari, are presented in aggregate.</p><p><a href="https://radar.cloudflare.com/year-in-review/2024#browser-market-share"><u>Globally</u></a>, we found that 65.8% of requests came from Google’s Chrome browser across 2024, and that just 15.5% came from Apple’s Safari browser. Microsoft Edge, Mozilla Firefox, and the <a href="https://www.samsung.com/us/support/owners/app/samsung-internet"><u>Samsung Internet browser</u></a> rounded out the top five, all with shares below 10%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bEuEbqSrAqe57gnBTeL6c/4426dc0dbc8869d05344433535e0698a/adoption_-_browser_overall_worldwide.png" />
          </figure><p><sup><i>Overall worldwide web browser market share in 2024</i></sup></p><p>Similar to the search engine statistics discussed above, differences are clearly visible when drilling down by location or platform. In some countries where iOS holds a larger market share than Android, Chrome remains the leading browser, but by a much lower margin. For example, in <a href="https://radar.cloudflare.com/year-in-review/2024/se#browser-market-share"><u>Sweden</u></a>, Chrome’s share fell to 56.2%, while Safari’s increased to 22.5%. In <a href="https://radar.cloudflare.com/year-in-review/2024/no#browser-market-share"><u>Norway</u></a>, Chrome fell to just 50%, while Safari grew to 25.6%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Dkt8A1HuUpg61G8GYkEXs/5c2649e96d2959a2606afa9d932d5b82/adoption_-_browser_overall_Norway.png" />
          </figure><p><sup><i>Overall web browser market share in Norway in 2024</i></sup></p><p>As the default browser on iOS devices, Apple Safari was the most popular browser on that platform, commanding an 81.7% market share across the year, with Chrome at just 16.1%. And despite being the preinstalled default browser on Windows devices, Edge held just a 17.3% share, in comparison to Chrome’s 68.5%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Hvnf7VPBuVTjba0P1bGRU/6d6e31c609a54c8248afe120576210aa/adoption_-_browser_overall_worldwide_iOS.png" />
          </figure><p><sup><i>Overall worldwide web browser market share for iOS devices in 2024</i></sup></p><p>For additional details, including browsers aggregated under “Other”, please refer to the quarterly <u>Browser Market Share Reports</u> on Cloudflare Radar.</p>
    <div>
      <h2>Connectivity</h2>
      <a href="#connectivity">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xC8lBdDHpahlvkJrf1nI9/7b050dc62c1628e3a5ab3a9418e572d3/2627_Rocket.png" />
          </figure>
    <div>
      <h3>225 major Internet outages were observed around the world in 2024, with many due to government-directed regional and national shutdowns of Internet connectivity.</h3>
      <a href="#225-major-internet-outages-were-observed-around-the-world-in-2024-with-many-due-to-government-directed-regional-and-national-shutdowns-of-internet-connectivity">
        
      </a>
    </div>
    <p>Throughout 2024, as we have over the last several years, we have written frequently about observed Internet outages, whether due to <a href="https://blog.cloudflare.com/east-african-internet-connectivity-again-impacted-by-submarine-cable-cuts"><u>cable cuts</u></a>, <a href="https://blog.cloudflare.com/impact-of-verizons-september-30-outage-on-internet-traffic/"><u>unspecified technical issues</u></a>, <a href="https://blog.cloudflare.com/syria-iraq-algeria-exam-internet-shutdown"><u>government-directed shutdowns</u></a>, or a number of other reasons covered in our quarterly summary posts (<a href="https://blog.cloudflare.com/q1-2024-internet-disruption-summary"><u>Q1</u></a>, <a href="https://blog.cloudflare.com/q2-2024-internet-disruption-summary"><u>Q2</u></a>, <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary"><u>Q3</u></a>). The impacts of these outages can be substantial, including significant economic losses and severely limited communications. The <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a> tracks these Internet outages, and uses Cloudflare traffic data for insights into their scope and duration.</p><p>Some of the outages seen through the year were short-lived, lasting just a few hours, while others stretched on for days or weeks. In the latter category, an Internet outage in <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#haiti"><u>Haiti</u></a> dragged on for eight days in September because repair crews were barred from accessing a damaged submarine cable due to a business dispute, while shutdowns of mobile and fixed Internet providers in <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#bangladesh"><u>Bangladesh</u></a> lasted for approximately 10 days in July. In the former category, <a href="https://blog.cloudflare.com/q3-2024-internet-disruption-summary/#iraqi-kurdistan"><u>Iraq</u></a> frequently experienced multi-hour nationwide Internet shutdowns intended to prevent cheating on academic exams — these contribute to the clustering visible in the timeline during June, July, August, and September.</p><p>Within the <a href="https://radar.cloudflare.com/year-in-review/2024#internet-outages"><u>timeline</u></a> on the Year in Review microsite, hovering over a dot will display metadata about that outage, and clicking on it will open a page with additional information. Below the map and timeline, we have added a bar graph illustrating the recorded reasons associated with the observed outages. In 2024, over half were due to government-directed shutdowns. If a country/region is selected, only outages and reasons for that country/region will be displayed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VDxaH2IkD28RXrcStCn8j/39ce7ad40f6a3d59e155ff09664f80e0/connectivity_-_Internet_outage_map.png" />
          </figure><p><sup><i>Over 200 Internet outages were observed around the world during 2024</i></sup></p>
    <div>
      <h3>Aggregated across 2024, 28.5% of IPv6-capable requests were made over IPv6. India and Malaysia were the strongest countries, at 68.9% and 59.6% IPv6 adoption respectively.</h3>
      <a href="#aggregated-across-2024-28-5-of-ipv6-capable-requests-were-made-over-ipv6-india-and-malaysia-were-the-strongest-countries-at-68-9-and-59-6-ipv6-adoption-respectively">
        
      </a>
    </div>
    <p>The IPv4 protocol still used by many Internet-connected devices was developed in the 1970s, and was never meant to handle the vast and growing scale of the modern Internet. An <a href="https://www.rfc-editor.org/rfc/rfc1883"><u>initial specification for its successor</u></a>, IPv6, was published in December 1995, evolving to a <a href="https://www.rfc-editor.org/rfc/rfc2460"><u>draft standard</u></a> three years later, offering an expanded address space intended to better support the expected growth in the number of Internet-connected devices. At this point, available IPv4 space has long since been <a href="https://ipv4.potaroo.net/"><u>exhausted</u></a>, and connectivity providers use solutions like <a href="https://en.wikipedia.org/wiki/Network_address_translation"><u>Network Address Translation</u></a> to stretch limited IPv4 resources. Hungry for IPv4 address space as their businesses and infrastructure grow, cloud and hosting providers are acquiring blocks of IPv4 address space for <a href="https://auctions.ipv4.global/"><u>as much as $30 - $50 per address</u></a>. </p><p>Cloudflare has been a vocal and active advocate for IPv6 since 2011, when we announced our <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>Automatic IPv6 Gateway</u></a>, which enabled free IPv6 support for all of our customers. In 2014, we enabled <a href="https://blog.cloudflare.com/i-joined-cloudflare-on-monday-along-with-5-000-others"><u>IPv6 support by default for all of our customers</u></a>, but not all customers choose to keep it enabled for a variety of reasons. Note that server-side support is only half of the equation for driving IPv6 adoption, as end user connections need to support it as well. (In reality, it is a bit more complex than that, but server and client side support across applications, operating systems, and network environments are the two primary requirements. From a network perspective, implementing IPv6 also brings a number of other <a href="https://www.catchpoint.com/benefits-of-ipv6"><u>benefits</u></a>.) By analyzing the IP version used for each request made to Cloudflare, aggregated throughout the year, we can get insight into the distribution of traffic by the various versions of the protocol.</p><p>At a <a href="https://radar.cloudflare.com/year-in-review/2024#ipv6-adoption"><u>global</u></a> level, 28.5% of IPv6-capable (“<a href="https://www.techopedia.com/definition/19025/dual-stack-network"><u>dual-stack</u></a>”) requests were made over IPv6, up from 26.4% in <a href="https://radar.cloudflare.com/year-in-review/2024?previousYear=true"><u>2023</u></a>. <a href="https://radar.cloudflare.com/year-in-review/2024/in#ipv6-adoption"><u>India</u></a> was again the country with the highest level of IPv6 adoption, at 68.9%, carried in large part by <a href="https://radar.cloudflare.com/adoption-and-usage/as55836?dateStart=2024-01-01&amp;dateEnd=2024-12-01"><u>94% IPv6 adoption at Reliance Jio</u></a>, one of the country’s largest Internet service providers. India was followed closely by <a href="https://radar.cloudflare.com/year-in-review/2024/my#ipv6-adoption"><u>Malaysia</u></a>, where 59.6% of dual-stacked requests were made over IPv6 during 2024, thanks to <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=ases&amp;loc=MY&amp;dt=14d&amp;metric=ip_version%2FIPv6"><u>strong IPv6 adoption rates across leading Internet providers</u></a> within the country. 
IPv6 adoption in India was up from 66% in <a href="https://radar.cloudflare.com/year-in-review/2024/in?previousYear=true#ipv6-adoption"><u>2023</u></a>, and in Malaysia, it was up from 57.3% <a href="https://radar.cloudflare.com/year-in-review/2024/my?previousYear=true#ipv6-adoption"><u>last year</u></a>. <a href="https://radar.cloudflare.com/year-in-review/2024/sa#ipv6-adoption"><u>Saudi Arabia</u></a> was the only other country with an IPv6 adoption rate above 50% this year, at 51.8%, whereas in 2023 that list also included <a href="https://radar.cloudflare.com/year-in-review/2023/vn#ipv6-adoption"><u>Vietnam</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2023/gr#ipv6-adoption"><u>Greece</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2023/fr#ipv6-adoption"><u>France</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2023/uy#ipv6-adoption"><u>Uruguay</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2023/th#ipv6-adoption"><u>Thailand</u></a>. Thirty-four countries/regions, including many in Africa, still have IPv6 adoption rates below 1%, while a total of 96 countries/regions have adoption rates below 10%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/48L0qRLujnWQRuJ8ZMa8Ed/ac5209577812dd556d275279d4740041/connectivity_-_IPv6_adoption.png" />
          </figure><p><sup><i>Global distribution of traffic by IP version in 2024</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NjeXm7lfs7ZGM3Gn5ToZM/3b401894664cb22347db1a8d8a2bfdc8/connectivity_-_IPv6_adoption_top_5.png" />
          </figure><p><sup><i>Countries/regions with the largest shares of IPv6 traffic in 2024</i></sup></p>
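    <p>One detail worth spelling out: the adoption figure is the share of IPv6-capable (dual-stack) requests that actually arrived over IPv6, so requests for content reachable only over IPv4 are excluded from the denominator. The sketch below illustrates that calculation using a simplified, hypothetical per-request record; it is not the production implementation.</p>
    <pre><code>def ipv6_adoption(requests):
    """Share of dual-stack-eligible requests that actually arrived over IPv6.
    Each record is (ip_version, content_is_dual_stacked); requests for content
    that is only reachable over IPv4 are excluded from the denominator."""
    eligible = [version for version, dual in requests if dual]
    if not eligible:
        return 0.0
    return sum(1 for version in eligible if version == 6) / len(eligible)

# Hypothetical sample: three dual-stack requests (two over IPv6), one IPv4-only
sample = [(6, True), (6, True), (4, True), (4, False)]
print(f"{ipv6_adoption(sample):.1%}")  # 66.7%
</code></pre>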
    <div>
      <h3>The top 10 countries ranked by Internet speed all had average download speeds above 200 Mbps. Spain was consistently among the top locations across measured Internet quality metrics.</h3>
      <a href="#the-top-10-countries-ranked-by-internet-speed-all-had-average-download-speeds-above-200-mbps-spain-was-consistently-among-the-top-locations-across-measured-internet-quality-metrics">
        
      </a>
    </div>
    <p>As more and more of our everyday lives move online, including entertainment, work, education, finance, shopping, and even basic social and personal interaction, the quality of our Internet connections is arguably more important than ever, necessitating higher connection speeds and lower latency. Although Internet providers continue to evolve their service portfolios to offer increased connection speeds and reduced latency in order to support growth in use cases like videoconferencing, live streaming, and online gaming, consumer adoption is often mixed due to cost, availability, or other issues. By aggregating the results of <a href="https://speed.cloudflare.com/"><u>speed.cloudflare.com</u></a> tests taken during 2024, we can get a geographic perspective on <a href="https://developers.cloudflare.com/radar/glossary/#connection-quality"><u>connection quality</u></a> metrics including average download and upload speeds, and average idle and loaded latencies, as well as the distribution of the measurements.</p><p>In <a href="https://radar.cloudflare.com/year-in-review/2024#internet-quality"><u>2024</u></a>, Spain was a leader in download speed (292.6 Mbps) and upload speed (192.6 Mbps) metrics, and placed second globally for loaded latency (78.6 ms). (Loaded latency is the round-trip time when data-heavy applications are being used on the network.) Spain’s leadership in these connection quality metrics is supported by the strong progress that the country has made <a href="https://ec.europa.eu/newsroom/dae/redirection/document/106695"><u>towards achieving the EU’s “Digital Decade” objectives</u></a>, including fixed very high capacity network (VHCN) deployment, fiber-to-the-premises (FTTP) coverage, and 5G coverage with the latter two <a href="https://www.trade.gov/country-commercial-guides/spain-digital-economy"><u>reaching</u></a> 95.2% and 92.3% respectively. High speed fiber broadband connections are also relatively affordable, with research showing major providers offering 100 Mbps, 300 Mbps, 600 Mbps, and 1 Gbps packages, with the latter priced between €30 and €46 per month. The figures below for <a href="https://radar.cloudflare.com/year-in-review/2024/es#internet-quality"><u>Spain</u></a> show the largest clusters of speed measurements around the 100 Mbps mark, with slight bumps also visible around 300 Mbps, suggesting that the former package has the highest subscription rate, followed by the latter. Further, they show these connections are also relatively low latency, with 87% of idle latency measurements below 50 ms and 65% of loaded latency measurements below 100 ms, providing users with good <a href="https://www.screenbeam.com/wifihelp/wifibooster/how-to-reduce-latency-or-lag-in-gaming-2/#:~:text=Latency%20is%20measured%20in%20milliseconds,%2C%2020%2D40ms%20is%20optimal."><u>gaming</u></a> and <a href="https://www.haivision.com/glossary/video-latency/#:~:text=Low%20latency%20is%20typically%20defined,and%20streaming%20previously%20recorded%20events."><u>videoconferencing/streaming</u></a> experiences.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/51PcbNyPpAQX79gYg0SxIU/a784aaadd65822d3384f1463570a6129/connectivity_-_Spain_bandwidth.png" />
          </figure><p><sup><i>Measured download/upload speed distribution in Spain in 2024</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Refsg6ctWdHNzscsoIDDF/75da3336fa1e31fd71a2188787944a57/connectivity_-_Spain_latency.png" />
          </figure><p><sup><i>Measured idle/loaded latency distribution in Spain in 2024</i></sup></p>
    <div>
      <h3>41.3% of global traffic comes from mobile devices. In nearly 100 countries/regions, the majority of traffic comes from mobile devices.</h3>
      <a href="#41-3-of-global-traffic-comes-from-mobile-devices-in-nearly-100-countries-regions-the-majority-of-traffic-comes-from-mobile-devices">
        
      </a>
    </div>
    <p>With approximately <a href="https://www.statista.com/topics/840/smartphones/#topicOverview"><u>70% of the world’s population using smartphones</u></a>, and <a href="https://www.pewresearch.org/internet/fact-sheet/mobile/"><u>91% of Americans owning a smartphone</u></a>, these mobile devices have become an integral part of both our personal and professional lives, providing us with Internet access from nearly any place at any time. In some countries/regions, mobile devices primarily connect to the Internet via Wi-Fi, while other countries/regions are “mobile first”, where 4G/5G services are the primary means of Internet access.</p><p>Analysis of information contained within the user agent reported with each request to Cloudflare enables us to categorize it as coming from a mobile, desktop, or other type of device. Aggregating this categorization throughout the year at a <a href="https://radar.cloudflare.com/year-in-review/2024#mobile-vs-desktop"><u>global</u></a> level, we found that 41.3% of traffic came from mobile devices, with 58.7% coming from desktop devices such as laptops and “classic” PCs. These traffic shares were in line with those measured in both <a href="https://radar.cloudflare.com/year-in-review/2023#mobile-vs-desktop"><u>2023</u></a> and 2022, suggesting that mobile device usage has achieved a “steady state”. Over 77% of traffic came from mobile devices in <a href="https://radar.cloudflare.com/year-in-review/2024/sd#mobile-vs-desktop"><u>Sudan</u></a>, <a href="https://radar.cloudflare.com/year-in-review/2024/cu#mobile-vs-desktop"><u>Cuba</u></a>, and <a href="https://radar.cloudflare.com/year-in-review/2024/sy#mobile-vs-desktop"><u>Syria</u></a>, making them the countries/regions with the largest mobile device traffic share in 2024. Other countries/regions that had more than 50% of traffic come from mobile devices were concentrated in the Middle East/Africa, the Asia Pacific region, and South/Central America. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9bsuRwzYBybYpOiKqwLja/cbdafb60eab1913a91ec916899d1e807/connectivity_-_mobile_desktop.png" />
          </figure><p><sup><i>Global distribution of traffic by device type in 2024</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KRujuREGMTBvLHVAHonuU/ad575fdd822ee3ee0bcabd41a96ef736/connectivity_-_mobile_desktop_top_5.png" />
          </figure><p><sup><i>Countries/regions with the largest shares of mobile device usage in 2024</i></sup></p>
    <div>
      <h3>20.7% of TCP connections are unexpectedly terminated before any useful data can be exchanged.</h3>
      <a href="#20-7-of-tcp-connections-are-unexpectedly-terminated-before-any-useful-data-can-be-exchanged">
        
      </a>
    </div>
    <p>Cloudflare is in a unique position to help measure the health and behaviors of Internet networks around the world. One way we do this is by passively measuring rates of connections to Cloudflare that appear <i>anomalous</i>, meaning that they are unexpectedly terminated before any useful data exchange occurs. The underlying causes of connection anomalies are varied and range from DoS attacks to quirky client behavior to third-party connection tampering (e.g., when a network monitors and selectively disrupts connections to filter content).</p><p>Connection anomalies are symptoms — visible signs that “something abnormal” is happening in a network, but the underlying root cause is not always clear from the outset. However, we can gain a better understanding by incorporating previously-reported network behaviors, active measurements and on-the-ground reports, and macro trends across networks. Additional details on such analysis can be found in the blog posts <a href="https://blog.cloudflare.com/connection-tampering/"><i><u>A global assessment of third-party connection tampering</u></i></a> and <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><i><u>Bringing insights into TCP resets and timeouts to Cloudflare Radar</u></i></a>.</p><p>Insights into TCP connection anomalies were <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>launched on Cloudflare Radar</u></a> in September, with the plot lines in the associated graph corresponding to the stage of the TCP connection in which the connection anomalously closed (using shorthand, the first three messages we typically receive from the client in a TCP connection are “SYN” and “ACK” packets to establish a connection, and then a “PSH” packet indicating the requested resource). In aggregate <a href="https://radar.cloudflare.com/year-in-review/2024#tcp-connection-anomalies"><u>globally</u></a>, over 20% of connections to Cloudflare were terminated unexpectedly, with the largest share (nearly half) being closed “Post-SYN” — that is, after our server has received a client’s SYN packet, but before we have received a subsequent acknowledgement (ACK) from the client or any useful data that would follow the acknowledgement. These terminations can often be attributed to DoS attacks or Internet scanning. Post-ACK (3.1% globally) and Post-PSH (1.4% globally) anomalies are more often associated with connection tampering, especially when they occur at high rates in specific networks.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/11XcEAXkMgOhytTbCsf21J/159fa2459ebc6b9c268bd5d8455213ba/connectivity_-_TCP_connection_anomalies.png" />
          </figure><p><sup><i>Trends in TCP connection anomalies by stage in 2024</i></sup></p>
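    <p>As a simplified illustration of the staging described above (and not the production logic), an unexpectedly terminated connection can be labeled by the last client packet observed before the close:</p>
    <pre><code>def anomaly_stage(saw_syn, saw_ack, saw_psh, clean_close):
    """Label an unexpectedly terminated connection by the last client packet
    seen before it closed. This mirrors the Post-SYN / Post-ACK / Post-PSH
    stages described above, but is a simplified sketch, not the production logic."""
    if clean_close:
        return None              # connection completed normally
    if saw_psh:
        return "Post-PSH"        # request data arrived, then the connection died
    if saw_ack:
        return "Post-ACK"        # handshake finished, but no request followed
    if saw_syn:
        return "Post-SYN"        # SYN received, handshake never completed
    return "Unknown"

# A flood of bare SYNs (e.g. a DoS attack or scanning) shows up as Post-SYN
print(anomaly_stage(saw_syn=True, saw_ack=False, saw_psh=False, clean_close=False))
</code></pre>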
    <div>
      <h2>Security</h2>
      <a href="#security">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4CkCPZHGlR8gQQQrXt0b5H/cfd3faabbe406fd348b8751825bc43e5/2627_Shield_Globe.png" />
          </figure>
    <div>
      <h3>6.5% of global traffic was mitigated by Cloudflare's systems as being potentially malicious or for customer-defined reasons.</h3>
      <a href="#6-5-of-global-traffic-was-mitigated-by-cloudflares-systems-as-being-potentially-malicious-or-for-customer-defined-reasons">
        
      </a>
    </div>
    <p>To <a href="https://www.cloudflare.com/products/zero-trust/threat-defense/"><u>protect customers from threats</u></a> posed by malicious bots used to attack websites and applications, Cloudflare mitigates this attack traffic using <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/"><u>DDoS</u></a> mitigation techniques or <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>Web Application Firewall (WAF) Managed Rules</u></a>. For a variety of other reasons, customers may also want Cloudflare to mitigate traffic using techniques like <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/"><u>rate-limiting</u></a> requests, or <a href="https://developers.cloudflare.com/waf/tools/ip-access-rules/"><u>blocking all traffic from a given location</u></a>, even if it isn’t malicious. Analyzing traffic to Cloudflare’s network throughout 2024, we looked at the overall share that was mitigated for any reason, as well as the share that was blocked as a DDoS attack or by WAF Managed Rules. </p><p>In 2024, <a href="https://radar.cloudflare.com/year-in-review/2024#mitigated-traffic"><u>6.5% of global traffic was mitigated</u></a>, up almost one percentage point from <a href="https://radar.cloudflare.com/year-in-review/2023#mitigated-traffic"><u>2023</u></a>. Just 3.2% was mitigated as a DDoS attack, or by WAF Managed Rules, a rate slightly higher than in 2023. More than 10% of the traffic originating from 44 countries/regions had mitigations generally applied, while DDoS/WAF mitigations were applied to more than 10% of the traffic originating from just seven countries/regions.</p><p>At a country/region level, <a href="https://radar.cloudflare.com/year-in-review/2024/al?#mitigated-traffic"><u>Albania</u></a> had one of the highest mitigated traffic shares throughout the year, at 42.9%, while <a href="https://radar.cloudflare.com/year-in-review/2024/ly#mitigated-traffic"><u>Libya</u></a> had one of the highest shares of traffic that was mitigated as a DDoS attack or by WAF Managed Rules, at 19.2%. In <a href="https://blog.cloudflare.com/radar-2023-year-in-review/#just-under-6-of-global-traffic-was-mitigated-by-cloudflares-systems-as-being-potentially-malicious-or-for-customer-defined-reasons-in-the-united-states-3-65-of-traffic-was-mitigated-while-in-south-korea-it-was-8-36"><u>2023’s Year in Review blog post</u></a>, we highlighted the United States and Korea. This year, the share of mitigated traffic grew to 5.0% in the <a href="https://radar.cloudflare.com/year-in-review/2024/us?#mitigated-traffic"><u>United States</u></a> (up from 3.65% in 2023), while in <a href="https://radar.cloudflare.com/year-in-review/2024/kr?#mitigated-traffic"><u>South Korea</u></a>, it dropped slightly to 8.1%, down from 8.36%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GJ5r18m6Tpor4n2scVRQ5/cc85d08dc2aa496d677d8bfc9439417d/security_-_mitigated_traffic_worldwide.png" />
          </figure><p><sup><i>Trends in mitigated traffic worldwide in 2024</i></sup></p>
    <div>
      <h3>The United States was responsible for over a third of global bot traffic. Amazon Web Services was responsible for 12.7% of global bot traffic, and 7.8% came from Google.</h3>
      <a href="#the-united-states-was-responsible-for-over-a-third-of-global-bot-traffic-amazon-web-services-was-responsible-for-12-7-of-global-bot-traffic-and-7-8-came-from-google">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/"><u>Bot</u></a> traffic describes any non-human Internet traffic, and by monitoring traffic suspected to be from bots, site and application owners can spot and, if necessary, block potentially malicious activity. However, not all bots are malicious — bots can also be helpful, and Cloudflare maintains a list of <a href="https://radar.cloudflare.com/traffic/verified-bots"><u>verified bots</u></a> that includes those used for things like search engine indexing, performance testing, and <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/"><u>availability monitoring</u></a>. Regardless of intent, we analyzed where bot traffic was originating from in 2024, using the IP address of a request to identify the network (<a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a>) and country/region associated with the bot making the request. Cloud platforms remained among the leading sources of bot traffic due to a number of factors. These include the ease of using automated tools to quickly provision compute resources, the relatively low cost of using these compute resources in an ephemeral manner, the broadly distributed geographic footprint of cloud platforms, and the platforms’ high-bandwidth Internet connectivity.</p><p><a href="https://radar.cloudflare.com/year-in-review/2024#bot-traffic-sources"><u>Globally</u></a>, we found that 68.5% of observed bot traffic came from the top 10 countries in 2024, with the United States responsible for half of that total, over 5x the share of second-place Germany. (In comparison to 2023, the US share was up slightly, while Germany’s was down slightly.) Among cloud platforms that originate bot traffic, Amazon Web Services was responsible for 12.7% of global bot traffic, and 7.8% came from Google. Microsoft, Hetzner, Digital Ocean, and OVH each also contributed more than one percent.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qlyS355w5LDtoBDtb1qXE/8354c2b07c0af46121a0c667e6d687e4/security_-_bot_distribution_by_source_country.png" />
          </figure><p><sup><i>Global bot traffic distribution by source country in 2024</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6euMUlCDcfOInLiCpssg2t/54eb345624346f24ab984bbe6b1c9f67/security_-_bot_distribution_by_source_network.png" />
          </figure><p><sup><i>Global bot traffic distribution by source network in 2024</i></sup></p>
    <div>
      <h3>Globally, Gambling/Games was the most attacked industry, slightly ahead of 2023’s most targeted industry, Finance.</h3>
      <a href="#globally-gambling-games-was-the-most-attacked-industry-slightly-ahead-of-2023s-most-targeted-industry-finance">
        
      </a>
    </div>
    <p>The industries targeted by attacks often shift over time, depending on the intent of the attackers. They may be trying to cause financial harm by attacking ecommerce sites during a busy shopping period, gain an advantage against opponents by attacking an online game, or make a political statement by attacking government-related sites. To identify industry-targeted attack activity during 2024, we analyzed mitigated traffic for customers that had an associated industry and vertical within their customer record. Mitigated traffic was aggregated weekly by source country/region across 19 target industries.</p><p>Companies in the Gambling/Games industry were, in aggregate, the <a href="https://radar.cloudflare.com/year-in-review/2024#most-attacked-industries"><u>most attacked during 2024</u></a>, with 6.6% of global mitigated traffic targeting the industry. The industry was slightly ahead of Finance, which led 2023’s aggregate list. (Both industries are shown at 6.6% in the Summary view due to rounding.)  Gambling/Games sites saw the largest shares of mitigated traffic in January and the first week of February, possibly related to National Football League playoffs in the United States, heading into the <a href="https://blog.cloudflare.com/super-bowl-lviii/"><u>Super Bowl</u></a>.</p><p>Attacks targeting Finance organizations were most active in May, reaching a peak of 15.3% of mitigated traffic the week of May 13. This is in line with the figure in our <a href="https://radar.cloudflare.com/reports/ddos-2024-q2#id-9-top-attacked-industries"><i><u>DDoS threat report for Q2 2024</u></i></a> that shows that Financial Services was the most attacked industry by request volume during the quarter in South America and the Middle East region.</p><p>As we have seen in the past, peak attack activity varied by industry on a weekly basis. The highest peaks for the year were seen in attacks targeting People &amp; Society organizations (19.6% of mitigated traffic, week of January 1), the Autos &amp; Vehicles industry (29.7% of mitigated traffic, week of January 15), and the Real Estate industry (27.5% of mitigated traffic, week of August 26).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qjMffdMn6uV7OEFhE5l0F/397672a455b62f712946e30130969657/security_-_targeted_industries.png" />
          </figure><p><sup><i>Global mitigated traffic share by industry in 2024, summary view</i></sup></p>
    <div>
      <h3>Log4j remains a persistent threat and was actively targeted throughout 2024.</h3>
      <a href="#log4j-remains-a-persistent-threat-and-was-actively-targeted-throughout-2024">
        
      </a>
    </div>
    <p>In December 2021, we published a <a href="https://blog.cloudflare.com/tag/log4j/"><u>series of blog posts about the Log4j vulnerability</u></a>, highlighting the threat that it posed, our observations of attempted exploitation, and the steps we took to protect customers. Two years on, in our <a href="https://blog.cloudflare.com/radar-2023-year-in-review/"><u>2023 Year in Review</u></a>, we <a href="https://blog.cloudflare.com/radar-2023-year-in-review/#even-as-an-older-vulnerability-log4j-remained-a-top-target-for-attacks-during-2023-however-http-2-rapid-reset-emerged-as-a-significant-new-vulnerability-beginning-with-a-flurry-of-record-breaking-attacks"><u>noted</u></a> that even as an older vulnerability, Log4j remained a top target for attacks during 2023, with related attack activity significantly higher than other commonly exploited vulnerabilities.</p><p>In 2024, three years after the initial Log4j disclosure, we found that Log4j remains an active threat. This year, we compared normalized daily attack activity for Log4j with attack activity for Atlassian Confluence Code Injection, a vulnerability we <a href="https://radar.cloudflare.com/year-in-review/2023#commonly-exploited-vulnerabilities"><u>examined in the 2023 Year in Review</u></a>, as well as aggregated daily attack activity for multiple <a href="https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures"><u>CVEs</u></a> related to <a href="https://capec.mitre.org/data/definitions/115.html"><u>Authentication Bypass</u></a> and <a href="https://www.cloudflare.com/en-gb/learning/security/what-is-remote-code-execution/"><u>Remote Code Execution</u></a> vulnerabilities published in 2024.</p><p><a href="https://radar.cloudflare.com/year-in-review/2024#commonly-exploited-vulnerabilities"><u>Log4j attack activity</u></a> appeared to trend generally upwards across the year, with several significant spikes visible during the first half of the year, and then again in October and November. In terms of the difference in activity, Log4j ranged from approximately 4x to over 20x the activity seen for Atlassian Confluence Code Injection, and as much as 100x the aggregated activity seen for Authentication Bypass or Remote Code Execution vulnerabilities.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YdyQ3qMUh10zLAHLefcdU/8a723a1970652c293a1f6c59efe51a99/security_-_vulnerabilities_Log4J.png" />
          </figure><p><sup><i>Global attack activity trends for commonly exploited vulnerabilities in 2024</i></sup></p>
    <div>
      <h3>Routing security, measured as the share of RPKI valid routes and the share of covered IP address space, continued to improve globally throughout 2024. </h3>
      <a href="#routing-security-measured-as-the-share-of-rpki-valid-routes-and-the-share-of-covered-ip-address-space-continued-to-improve-globally-throughout-2024">
        
      </a>
    </div>
    <p>As the routing protocol that underpins the Internet, <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol (BGP)</u></a> communicates routes between networks, enabling traffic to flow between source and destination. BGP, however, relies on trust between networks, and incorrect information shared between peers, whether or not it was shared intentionally, can send traffic to the wrong place, potentially with <a href="https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/"><u>malicious results</u></a>. <a href="https://blog.cloudflare.com/rpki/"><u>Resource Public Key Infrastructure (RPKI)</u></a> is a cryptographic method of signing records that associate a BGP route announcement with the correct originating autonomous system (AS) number, providing a way of ensuring that the information being shared originally came from a network that is allowed to do so. (It is important to note that this is only half of the challenge of implementing routing security, because network providers also need to validate these signatures and filter out invalid announcements to prevent sharing them further.)</p><p>Cloudflare has long been an advocate for routing security, including being a founding participant in the <a href="https://www.manrs.org/2020/03/new-category-of-cdns-and-cloud-providers-join-manrs-to-improve-routing-security/"><u>MANRS CDN and Cloud Programme</u></a> and providing a <a href="https://isbgpsafeyet.com/"><u>public tool</u></a> that enables users to test whether their Internet provider has implemented BGP safely. Building on insights available in the <a href="https://radar.cloudflare.com/routing"><u>Routing page</u></a> on Cloudflare Radar, we analyzed data from <a href="https://ftp.ripe.net/rpki/"><u>RIPE NCC's RPKI daily archive</u></a> to determine the share of RPKI valid routes (as opposed to those route announcements that are <a href="https://rpki.readthedocs.io/en/latest/about/help.html"><u>invalid or whose status is unknown</u></a>) and how that share has changed over the course of 2024, as well as determining the share of IP address space covered by valid routes. The latter metric is of interest because a route announcement covering a significant amount of IP address space (millions of IPv4 addresses, for example) has a greater potential impact than an announcement covering a small block of IP address space (hundreds of IPv4 addresses, for example).</p><p>At a <a href="https://radar.cloudflare.com/year-in-review/2024#routing-security"><u>global</u></a> level during 2024, we saw a 6.4 percentage point increase (from 43.4% to 49.8%) in valid IPv4 routes, and a 3.2 percentage point increase (from 53.7% to 56.9%) in valid IPv6 routes. Given the trajectory, it is likely that over half of IPv4 routes will be RPKI valid by the end of calendar year 2024. Looking at the global share of IP address space covered by valid routes, we saw a 4.7 percentage point increase (from 38.9% to 43.6%) for IPv4, and a 3.3 percentage point increase (from 57.6% to 60.9%) for IPv6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ojjIa2U45vvsbha8v6ITk/2c61631ded62b80481d47e1da8a5d2cc/security_-_routing_global_valid_routes.png" />
          </figure><p><sup><i>Shares of global RPKI valid routing entries by IP version in 2024</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rCCmsaqULazLsBgoXZLLC/9aa265e9658d71bd7ee113423c6945ca/security_-_routing_global_valid_ip_address_space.png" />
          </figure><p><sup><i>Shares of globally announced IP address space covered by RPKI valid routes in 2024</i></sup></p><p><a href="https://radar.cloudflare.com/year-in-review/2024/es#routing-security"><u>Spain</u></a> started 2024 with less than half of its routes (both IPv4 and IPv6) RPKI valid. However, the share of valid routes grew significantly on February 15, when <a href="https://radar.cloudflare.com/as12479"><u>AS12479 (Orange Espagne)</u></a> signed records associated with 98% of their IP address prefixes that were previously in an <a href="https://www.ripe.net/manage-ips-and-asns/resource-management/rpki/bgp-origin-validation/"><u>“unknown” (or NotFound) state of RPKI validity</u></a>, thus converting these prefixes from unknown to valid. That drove an immediate increase for IPv4 to 76%, reaching 81% validity by December 1, and an immediate increase for IPv6 to 91%, reaching 92.9% validity by December 1. A notable change in covered IP address space was observed in <a href="https://radar.cloudflare.com/year-in-review/2024/cm#routing-security"><u>Cameroon</u></a>, where covered IPv4 space more than doubled at the end of January, growing from 32% to 82%. This was due to <a href="https://radar.cloudflare.com/as36912"><u>AS36912 (Orange Cameroun)</u></a> signing records associated with all of their IPv4 address prefixes, changing the associated IP address space to RPKI valid. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5e7SUVIju8fidAEBkIOSq4/2085f3237411eca0816a3d2862e9e3df/security_-_routing_Spain_valid_routes.png" />
          </figure><p><sup><i>IPv4 and IPv6 shares of RPKI valid routes for Spain in 2024</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/G9adlberrdCmDnB3MupQa/6c866261fc478334673115d6dd01fd76/security_-_routing_Cameroon_valid_ipv4_address_space.png" />
          </figure><p><sup><i>Share of IPv4 address space covered by RPKI valid routes for Cameroon in 2024</i></sup></p>
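    <p>To make the valid, invalid, and unknown (NotFound) classifications discussed above more concrete, the sketch below implements simplified RPKI route origin validation against a couple of illustrative ROAs. The Year in Review analysis itself was based on RIPE NCC’s daily RPKI archive rather than toy data like this.</p>
            <pre><code># Simplified RPKI route origin validation, using illustrative ROA entries.
from ipaddress import ip_network

# Each ROA authorizes an origin ASN to announce a prefix at lengths up to max_length.
ROAS = [
    {"prefix": ip_network("192.0.2.0/24"), "max_length": 24, "asn": 64500},
    {"prefix": ip_network("198.51.100.0/24"), "max_length": 24, "asn": 64501},
]

def validate(prefix, origin_asn):
    """Classify a BGP announcement as 'valid', 'invalid', or 'unknown' (NotFound)."""
    route = ip_network(prefix)
    covering = [roa for roa in ROAS if route.subnet_of(roa["prefix"])]
    if not covering:
        return "unknown"  # no ROA covers this prefix
    for roa in covering:
        if roa["asn"] == origin_asn and roa["max_length"] >= route.prefixlen:
            return "valid"  # authorized origin at an allowed prefix length
    return "invalid"  # covered by a ROA, but origin or length does not match

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("198.51.100.0/25", 64501))  # invalid: /25 exceeds the ROA's max length
print(validate("203.0.113.0/24", 64502))   # unknown: no covering ROA</code></pre>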
    <div>
      <h2>Email Security</h2>
      <a href="#email-security">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vC1TSUDDHpepgs2Yv3Lpx/eb43b5c0a203d7ec0a74939c23684ae5/2627_Shield_Plane.png" />
          </figure>
    <div>
      <h3>An average of 4.3% of emails were determined to be malicious in 2024. </h3>
      <a href="#an-average-of-4-3-of-emails-were-determined-to-be-malicious-in-2024">
        
      </a>
    </div>
    <p>Despite the growing enterprise use of collaboration/messaging apps, email remains an important business application and is a very attractive entry point into enterprise networks for attackers. Attackers will send targeted malicious emails that attempt to impersonate an otherwise legitimate sender (such as a corporate executive), that try to get the user to click on a deceptive link, or that contain a dangerous attachment, among other types of threats. <a href="https://www.cloudflare.com/zero-trust/products/email-security/"><u>Cloudflare Email Security</u></a> protects customers from email-based attacks, including those carried out through targeted malicious email messages. During<a href="https://radar.cloudflare.com/year-in-review/2024#malicious-emails"><u> 2024</u></a>, an average of 4.3% of emails analyzed by Cloudflare were found to be malicious. Aggregated at a weekly level, spikes above 14% were seen in late March, early April, and mid-May. We believe that these spikes were related to targeted “backscatter” attacks, where the attacker flooded a target with undeliverable messages, which then bounced the messages to the victim, whose email had been set as the reply-to: address.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/693EaEyePShBH8CZ7cZWv1/c08e0acd6f8a6a15b730b1cd90bf6283/email_-_malicious_worldwide.png" />
          </figure><p><sup><i>Global malicious email share trends in 2024</i></sup></p>
    <div>
      <h3>Deceptive links and identity deception were the two most common types of threats found in malicious email messages. </h3>
      <a href="#deceptive-links-and-identity-deception-were-the-two-most-common-types-of-threats-found-in-malicious-email-messages">
        
      </a>
    </div>
    <p>Attackers use a variety of techniques, which we refer to as threat categories, when they use malicious email messages as an attack vector. These categories are defined and explored in detail in our <a href="https://blog.cloudflare.com/2023-phishing-report/"><u>phishing threats report</u></a>. In our analysis of malicious emails, we have found that such messages may contain multiple types of threats. In reviewing a weekly aggregation of threat activity trends for these categories, we found that, <a href="https://radar.cloudflare.com/year-in-review/2024#top-email-threats"><u>averaged across 2024</u></a>, 42.9% of malicious email messages contained deceptive links, with the share reaching 70% at times throughout the year. Activity for this threat category was spiky, with low points seen in the March to May timeframe, and a general downward trend visible from July through November.</p><p>Identity deception was a similarly active threat category, with such threats also found in up to 70% of analyzed emails during several weeks throughout the year. Averaged across 2024, 35.1% of emails contained attempted identity deception. The activity pattern for this threat category appears to be somewhat similar to deceptive links, with a number of the peaks and valleys occurring during the same weeks. At times, identity deception was a more prevalent threat in analyzed emails than deceptive links, as seen in the graph below.</p><p>Among other threat categories, extortion saw the most significant change throughout the year. After being found in 86% of malicious emails during the first week of January, its share gradually trended lower throughout the year, finishing November under 10%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47EGZSEcRbUY67bsnYTSip/81ce3c89beefe1ddbe66f68710366c87/email_-_threat_category.png" />
          </figure><p><sup><i>Global malicious email threat category trends for Deceptive Links and Identity Deception in 2024</i></sup></p>
    <div>
      <h3>Over 99% of the email messages processed by Cloudflare Email Security from the .bar, .rest, and .uno top level domains (TLDs) were found to be either spam or malicious in nature.</h3>
      <a href="#over-99-of-the-email-messages-processed-by-cloudflare-email-security-from-the-bar-rest-and-uno-top-level-domains-tlds-were-found-to-be-either-spam-or-malicious-in-nature">
        
      </a>
    </div>
    <p>In March 2024, we <a href="https://blog.cloudflare.com/email-security-insights-on-cloudflare-radar/"><u>launched a set of email security insights on Cloudflare Radar</u></a>, including visibility into so-called “dangerous domains” — those top level domains (TLDs) that were found to be the sources of the most spam or malicious email among messages analyzed by Cloudflare Email Security. The analysis is based on the sending domain’s TLD, found in the <code>From</code>: header of an email message. For example, if a message came from <code>sender@example.com</code>, then <code>example.com</code> is the sending domain, and .com is the associated TLD.</p><p><a href="https://radar.cloudflare.com/year-in-review/2024?#most-observed-tlds"><u>In aggregate across 2024</u></a>, we found that the <a href="https://icannwiki.org/.bar"><code><u>.bar</u></code></a>, <a href="https://icannwiki.org/.rest"><code><u>.rest</u></code></a>, and <a href="https://icannwiki.org/.uno"><code><u>.uno</u></code></a> TLDs were the “most dangerous”, each with over 99% of analyzed email messages characterized as either spam or malicious. (These TLDs are all at least a decade old, and each sees at least some usage, with <a href="https://research.domaintools.com/statistics/tld-counts/"><u>between 20,000 and 60,000 registered domain names</u></a>.)  Sorting by malicious email share, the <a href="https://icannwiki.org/.ws"><code><u>.ws</u></code></a> ccTLD (country code top level domain) belonging to Western Samoa came out on top, with over 90% of analyzed emails categorized as malicious. Sorting by spam email share, <a href="https://icannwiki.org/.quest"><code><u>.quest</u></code></a> is the biggest offender, with over 88% of emails originating from associated domains characterized as spam.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30rfi3V9NkY31itUpHQ9is/d1cbb9fce0ecf5a1c3a237f2694c5a13/email_-_dangerous_tlds.png" />
          </figure><p><sup><i>TLDs originating the largest total shares of malicious and spam email in 2024</i></sup></p>
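    <p>As a small illustration of the methodology described above (and not the production classification logic), the sketch below extracts the sending domain and its TLD from a message’s From: header. A naive last-label split is sufficient for single-label TLDs like .bar or .uno, while multi-label public suffixes such as .co.uk would require the Public Suffix List.</p>
            <pre><code># Minimal sketch: derive the sending domain and TLD from an email's From: header.
from email import message_from_string
from email.utils import parseaddr

raw = "From: sender@example.com\r\nSubject: hello\r\n\r\n(message body)\r\n"
msg = message_from_string(raw)

_, address = parseaddr(msg["From"])          # "sender@example.com"
domain = address.rpartition("@")[2].lower()  # sending domain: "example.com"
tld = "." + domain.rsplit(".", 1)[-1]        # associated TLD: ".com"

print(domain, tld)</code></pre>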
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The Internet is an amazingly complex and dynamic organism, constantly changing, growing, and evolving.</p><p>With the <a href="https://radar.cloudflare.com/year-in-review/2024"><u>Cloudflare Radar 2024 Year In Review</u></a>, we are providing insights into the change, growth, and evolution that we have measured and observed throughout the year. Trend graphs, maps, tables, and summary statistics provide our unique perspectives on Internet traffic, Internet quality, and Internet security, and how key metrics across these areas vary around the world and over time.</p><p>We strongly encourage you to visit the <a href="https://radar.cloudflare.com/year-in-review/2024"><u>Cloudflare Radar 2024 Year In Review microsite</u></a> and explore the trends for your country/region, and to consider how they impact your organization so that you are appropriately prepared for 2025. In addition, for insights into the top Internet services across multiple industry categories, we encourage you to read the companion Year in Review blog post, <a href="https://blog.cloudflare.com/radar-2024-year-in-review-internet-services/"><i><u>From ChatGPT to Temu: ranking top Internet services in 2024</u></i></a>.</p><p>If you have any questions, you can contact the Cloudflare Radar team at radar@cloudflare.com or on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>https://noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky).</p>
    <div>
      <h2>Acknowledgements</h2>
      <a href="#acknowledgements">
        
      </a>
    </div>
    <p>As it is every year, it truly is a team effort to produce the data, microsite, and content for our annual Year in Review, and I’d like to acknowledge those team members that contributed to this year’s effort. Thank you to: Jorge Pacheco, Sabina Zejnilovic, Carlos Azevedo, Mingwei Zhang (Data Analysis); André Jesus, Nuno Pereira (Front End Development); João Tomé (Most popular Internet services); Jackie Dutton, Kari Linder, Guille Lasarte (Communications); Eunice Giles (Brand Design); Jason Kincaid (blog editing); and Paula Tavares (Engineering Management), as well as countless other colleagues for their answers, edits, support, and ideas.</p> ]]></content:encoded>
            <category><![CDATA[Year in Review]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4oLkLHLIZ1vibq8dtPJP6F</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Resilient Internet connectivity in Europe mitigates impact from multiple cable cuts]]></title>
            <link>https://blog.cloudflare.com/resilient-internet-connectivity-baltic-cable-cuts/</link>
            <pubDate>Wed, 20 Nov 2024 21:30:00 GMT</pubDate>
            <description><![CDATA[ Two recent cable cuts that occurred in the Baltic Sea resulted in little-to-no observable impact to the affected countries, in large part because of the significant redundancy and resilience of Internet infrastructure in Europe.
 ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When cable cuts occur, whether submarine or terrestrial, they often result in observable disruptions to Internet connectivity, knocking a network, city, or country offline. This is especially true when there is insufficient resilience or alternative paths — that is, when a cable is effectively a single point of failure. Associated observations of traffic loss resulting from these disruptions are frequently covered by Cloudflare Radar in social media and blog posts. However, two recent cable cuts that occurred in the Baltic Sea resulted in little-to-no observable impact to the affected countries, as we discuss below, in large part because of the significant redundancy and resilience of Internet infrastructure in Europe.</p>
    <div>
      <h2>BCS East-West Interlink</h2>
      <a href="#bcs-east-west-interlink">
        
      </a>
    </div>
    
    <div>
      <h3>Traffic volume indicators</h3>
      <a href="#traffic-volume-indicators">
        
      </a>
    </div>
    <p>On Sunday, November 17 2024, the <a href="https://www.submarinecablemap.com/submarine-cable/bcs-east-west-interlink"><u>BCS East-West Interlink submarine cable</u></a> connecting Sventoji, Lithuania and Katthammarsvik, Sweden was <a href="https://www.datacenterdynamics.com/en/news/lithuania-sweden-subsea-cable-cut-was-10m-from-severed-finnish-german-cable/"><u>reportedly damaged</u></a> around 10:00 local (Lithuania) time (08:00 UTC). A <a href="https://www.datacenterdynamics.com/en/news/lithuania-sweden-subsea-cable-cut-was-10m-from-severed-finnish-german-cable/"><u>Data Center Dynamics article about the cable cut</u></a> quotes the CTO of Telia Lietuva, the telecommunications provider that operates the cable, and notes “<i>The Lithuanian cable carried about a third of the nation's Internet capacity, but capacity was carried via other routes.</i>”</p><p>As the Cloudflare Radar graphs below show, there was no apparent impact to traffic volumes in either country at the time that the cables were damaged. The NetFlows graphs represent the number of bytes that Cloudflare sends to users and clients in response to their requests.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xDllSeyPtet5ovpXI3GMH/6bc5680bbd8219f417e891102c4ffb0e/BLOG-2626_2.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2RE0V8M2CFPt1uxsOjhSBz/dc5c261808c021fc9ff0ab65963fce0b/BLOG-2626_3.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wjHy79iHDqcuzknxcK14o/a4526787c3fdde54a6627b16717aaec0/BLOG-2626_4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2zPi0GZ54gGDjCMyivhH0C/f6e50728ae7dc110fd15edc40f43b694/BLOG-2626_5.png" />
          </figure>
    <div>
      <h3>Internet quality</h3>
      <a href="#internet-quality">
        
      </a>
    </div>
    <p>Internet quality metrics for both countries show changes in measured bandwidth and latency throughout the day on Sunday, but with no sudden anomalous shifts visible around the time of the cable cut. (The loss of connectivity associated with a cable cut potentially manifests itself as an increase in latency and concurrent decrease in bandwidth due to loss of capacity.) The latency graph for Sweden does show an increase in latency, but it began before the cable cut occurred, is similar to a pattern visible several hours earlier, and is matched by an increase in measured bandwidth, so it is unlikely to be related to the cable cut event.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aXlBjU08WKKT0OSnWBsIP/eb32b937d1729160dec83204bba06e91/BLOG-2626_6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eeR8OlwA5CHROGkqKn1KJ/e372a25ad2a93aaa38339f360f3a7b0e/BLOG-2626_7.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wGJFP5K0LkEjw8DXbQ45z/516cf3b04ac5c5f2f82398be508fe4b0/BLOG-2626_8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZLyXG9lCsfazoMm1b6c4z/1af296913ff00da991c04d1422bd49fd/BLOG-2626_9.png" />
          </figure>
    <div>
      <h3>Visibility in BGP events, announced IP address space unaffected</h3>
      <a href="#visibility-in-bgp-events-announced-ip-address-space-unaffected">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/radar/glossary/#bgp-announcements"><u>BGP announcements</u></a> are a way for network providers to communicate routing information to other networks, and announcement activity observed on Telia Lietuva’s <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous systems</u></a> around the time of the cable cut may be related to the re-routing referenced in the article. No change in announced IP address space was visible for any of these autonomous systems, suggesting no loss of connectivity as the capacity was re-routed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7dWHQYn0cJ3PdivPgI2sDI/696207021bf5e75d061040c33505923a/BLOG-2626_10.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QPU28IyaW3QPCqaIzTZec/19b3ed7675d23441c9493c2313134a41/BLOG-2626_11.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zSx3b14HwaBIFX5qc59bq/4f8e2b4951498a2edcae846068927350/BLOG-2626_12.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1P04AQZbfZOTisBPutbLZa/5e6520bfd1782976538c98914134fe94/BLOG-2626_13.png" />
          </figure><p>Telegeography’s <a href="http://submarinecablemap.com"><u>submarinecablemap.com</u></a> illustrates, at least in part, the resilience in connectivity enjoyed by these two countries. In addition to the damaged cable, it shows that <a href="https://www.submarinecablemap.com/country/lithuania"><u>Lithuania</u></a> is <a href="https://www.submarinecablemap.com/submarine-cable/bcs-east"><u>connected to neighboring Latvia</u></a> as well as <a href="https://www.submarinecablemap.com/submarine-cable/nordbalt"><u>to the Swedish mainland</u></a>. Over 20 submarine cables land in <a href="https://www.submarinecablemap.com/country/sweden"><u>Sweden</u></a>, connecting it to multiple countries across Europe. In addition to the submarine resilience, network providers in both countries can take advantage of terrestrial fiber connections to neighboring countries, such as those illustrated in a <a href="https://www.arelion.com/our-network"><u>European network map from Arelion</u></a> (formerly Telia), which is only one of the large European backbone providers.</p>
    <div>
      <h2>C-Lion1</h2>
      <a href="#c-lion1">
        
      </a>
    </div>
    
    <div>
      <h3>Traffic volume indicators</h3>
      <a href="#traffic-volume-indicators">
        
      </a>
    </div>
    <p>Less than a day later, the <a href="https://www.submarinecablemap.com/submarine-cable/c-lion1"><u>C-Lion1 submarine cable</u></a>, which connects Helsinki, Finland, and Rostock, Germany, was <a href="https://www.datacenterdynamics.com/en/news/helsinki-rostock-subsea-cable-between-finland-and-germany-severed/"><u>reportedly damaged</u></a> during the early morning hours of Monday, November 18. Cinia, the telecommunications company that owns the cable, <a href="https://www.theguardian.com/world/2024/nov/19/baltic-sea-cables-damage-sabotage-german-minister"><u>said</u></a> that the cable stopped working at about 02:00 UTC.</p><p>In this situation as well, as the Cloudflare Radar graphs below show, there was no apparent impact to traffic volumes in either country at the time that the cable was damaged. The Finland graphs, week-on-week, show fewer bytes transferred and fewer HTTP requests, but that difference is present before the cable cut at 02:00 UTC. However, the trend of the current week’s line does not change after the cable cut, so the two events would appear unrelated.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4OQtSFBgBzdmnzWz8AdM7Z/3a66cec98698bf6d506d93fc13fe4c74/BLOG-2626_14.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4gqxHsD3ykjWGWhATVU8iw/f74916e1faf186efef94e6dc29bbca58/BLOG-2626_15.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4gh5eLQZNabsz9XnOSgtU1/fb6d0770c62ce016d73c1a3c47ae99f1/BLOG-2626_16.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FIKh8U2nxHkxMdXho6HsI/9e349d0767df5d34d3b8274710c2cb0b/BLOG-2626_17.png" />
          </figure>
    <div>
      <h3>Internet quality</h3>
      <a href="#internet-quality">
        
      </a>
    </div>
    <p>By looking at volume-related metrics alone, Internet connectivity would appear to be unaffected by the cable cut.</p><p>If, however, we change perspective and look at Internet quality, a brief yet interesting change is visible for Finland around the reported time of the cable damage, though it isn’t clear whether it is related in any way. Just after midnight, median measured bandwidth, previously consistent at around 50 Mbps, began to grow, peaking just over 200 Mbps around 03:00 UTC. Around that same time, measured median latency also began to drop, falling from around 30 ms to a low of 13 ms, also around 03:00 UTC. Median bandwidth returned to normal levels around 06:00 UTC, while latency took about two hours longer to return to normal levels. These observed improvements in bandwidth and latency could have been due to traffic being re-routed along paths with better connectivity to measurement endpoints, but because the shifts began before the cable damage occurred, and recovered shortly thereafter, that is unlikely to be the root cause.</p><p>In Germany, a brief minor increase in median bandwidth peaked around 02:45 UTC, while no notable changes were observed in latency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/94V0coi6oFBdUMX1SVyl7/44738b06af2e51b4e436c84dbe6a1a79/BLOG-2626_18.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Bqy5uQ76FwmmOX92Co4cE/96190329454e264966119a0f9a4533ff/BLOG-2626_19.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BJCVdjJMILFubi4SW8HR6/7b97343910ab70cc1a4cad3d3565a727/BLOG-2626_20.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lf5GR9ElhjpW0wYzPieNI/c02a588af54ac36521f901307d9f62f7/BLOG-2626_21.png" />
          </figure>
    <div>
      <h3>BGP business as usual</h3>
      <a href="#bgp-business-as-usual">
        
      </a>
    </div>
    <p>From a routing perspective, there was no notable BGP announcement activity observed for top autonomous systems in either Finland or Germany around 02:00 on November 18, and total announced IP address space aggregated at a country level also demonstrated no change.</p><p>Telegeography’s <a href="http://submarinecablemap.com"><u>submarinecablemap.com</u></a> shows that both Finland and Germany also have significant redundancy and resilience from a submarine cable perspective, with over 10 cables landing in <a href="https://www.submarinecablemap.com/country/finland"><u>Finland</u></a>, and nearly 10 landing in <a href="https://www.submarinecablemap.com/country/germany"><u>Germany</u></a>, including <a href="https://www.submarinecablemap.com/submarine-cable/atlantic-crossing-1-ac-1"><u>Atlantic Crossing-1 (AC-1)</u></a>, which connects to the United States over two distinct paths. Terrestrial fiber maps from <a href="https://www.arelion.com/our-network"><u>Arelion</u></a> and <a href="https://map.eunetworks.com/?_ga=2.220121625.1822578510.1543942339-1757484894.1536310774"><u>eunetworks</u></a> (as just two examples) show multiple redundant fiber routes within both countries, as well as cross-border routes to other neighboring countries, enabling more resilient Internet connectivity.</p>
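    <p>For readers curious how the “announced IP address space” metric referenced above can be approximated, the sketch below collapses a set of announced IPv4 prefixes to avoid double-counting overlaps, then sums the covered addresses. The prefixes are placeholders rather than an actual routing table snapshot, and this is only an approximation, not how Cloudflare Radar computes the metric internally.</p>
            <pre><code># Approximate a network's announced IPv4 address space from its advertised prefixes.
from ipaddress import collapse_addresses, ip_network

# Placeholder prefixes standing in for a network's announced IPv4 routes.
announced = [
    ip_network("192.0.2.0/24"),
    ip_network("198.51.100.0/24"),
    ip_network("198.51.100.0/25"),  # more-specific route already covered by the /24
    ip_network("203.0.113.0/24"),
]

# Collapse overlapping/adjacent prefixes so address space is not counted twice,
# then sum the number of covered addresses.
covered = list(collapse_addresses(announced))
total = sum(net.num_addresses for net in covered)

print(len(covered), "collapsed prefixes covering", total, "IPv4 addresses")
# A sharp drop in this total is what appears on Radar as withdrawn IP address space.</code></pre>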
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As we have discussed in multiple prior blog posts (<a href="https://blog.cloudflare.com/not-one-not-two-but-three-undersea-cables-cut-in-jersey"><u>Jersey, 2016</u></a>; <a href="https://blog.cloudflare.com/aae-1-smw5-cable-cuts"><u>AAE-1/SMW5, 2022</u></a>; <a href="https://blog.cloudflare.com/undersea-cable-failures-cause-internet-disruptions-across-africa-march-14-2024"><u>WACS/MainOne/SAT3/ACE, 2024</u></a>; <a href="https://blog.cloudflare.com/east-african-internet-connectivity-again-impacted-by-submarine-cable-cuts/"><u>EASSy/Seacom, 2024</u></a>), cable cuts often cause significant disruptions to Internet connectivity, in many cases because they represent a concentrated point of vulnerability, whether for an individual network provider, city/state, or country. These disruptions are often quite lengthy as well, due to the time needed to marshal repair resources, identify the location of the damage, etc. Although it is not always feasible due to financial or geographic constraints, building redundant and resilient network architecture, at multiple levels, is a best practice. This includes sending traffic over multiple physical cables (both submarine and terrestrial), connecting to multiple peer and upstream network providers, and even avoiding single points of failure in core Internet resources like DNS servers.</p><p>The Cloudflare Radar team continually monitors the status of Internet connectivity in countries/regions around the world, and we share our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>https://noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via email.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">5DP2F9GATeUBYyfl6pQMej</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s perspective of the October 30, 2024, OVHcloud outage]]></title>
            <link>https://blog.cloudflare.com/cloudflare-perspective-of-the-october-30-2024-ovhcloud-outage/</link>
            <pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[ On October 30, 2024, cloud hosting provider OVHcloud (AS16276) suffered a brief but significant outage. Within this post, we review Cloudflare’s perspective on this outage. ]]></description>
            <content:encoded><![CDATA[ <p>On October 30, 2024, cloud hosting provider <a href="https://radar.cloudflare.com/as16276"><u>OVHcloud (AS16276)</u></a> suffered a brief but significant outage. According to their <a href="https://network.status-ovhcloud.com/incidents/qgb1ynp8x0c4"><u>incident report</u></a>, the problem started at 13:23 UTC, and was described simply as “<i>An incident is in progress on our backbone infrastructure.</i>” OVHcloud noted that the incident ended 17 minutes later, at 13:40 UTC. As a major global cloud hosting provider, some customers use OVHcloud as an origin for sites delivered by Cloudflare — if a given content asset is not in our cache for a customer’s site, we retrieve the asset from OVHcloud.</p><p>We observed traffic starting to drop at 13:21 UTC, just ahead of the reported start time. By 13:28 UTC, it was approximately 95% lower than pre-incident levels. Recovery appeared to start at 13:31 UTC, and by 13:40 UTC, the reported end time of the incident, it had reached approximately 50% of pre-incident levels. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62w8PcLJ3Q05F1BtA12zUb/6d8ce87f85eb585a7fe0ac02f8cd93d5/image4.jpg" />
          </figure><p><sup><i>Traffic from OVHcloud (AS16276) to Cloudflare</i></sup></p><p>Cloudflare generally exchanges most of our traffic with OVHcloud over peering links. However, as shown below, peered traffic volume during the incident fell significantly. It appears that some small amount of traffic briefly began to flow over transit links from Cloudflare to OVHcloud due to sudden changes in which Cloudflare data centers were receiving OVHcloud requests. (Peering is a direct connection between two network providers for the purpose of exchanging traffic. Transit is when one network pays an intermediary network to carry traffic to the destination network.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2L0IaXd7B5C6RX23iTG5Pf/3fd2489f159e2281d191f157f5695f94/image3.jpg" />
          </figure><p>Because we peer directly, we exchange most traffic over our private peering sessions with OVHcloud. During the incident, however, we found that OVHcloud routing to Cloudflare dropped entirely for a few minutes, then switched to just a single Internet Exchange port in Amsterdam, and finally normalized globally minutes later.</p><p>As the graphs below illustrate, we normally see the largest amount of traffic from OVHcloud in our Frankfurt and Paris data centers, as <a href="https://www.ovhcloud.com/en/about-us/global-infrastructure/regions/"><u>OVHcloud has large data center presences in these regions</u></a>. However, in that shift to transit, and the shift to an Amsterdam Internet Exchange peering point, we saw a spike in traffic routed to our Amsterdam data center. We suspect the routing shifts were the earliest signs of either internal BGP reconvergence or general network recovery within AS16276, starting with their presence nearest our Amsterdam peering point.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yCDGCplEsmqXU7uRifjTU/12176147c10ab6e9a766ee5d788b133a/image2.jpg" />
          </figure><p>The <a href="https://network.status-ovhcloud.com/incidents/qgb1ynp8x0c4"><u>postmortem</u></a> published by OVHcloud noted that the incident was caused by “<i>an issue in a network configuration mistakenly pushed by one of our peering partner[s]</i>” and that “<i>We immediately reconfigured our network routes to restore traffic.</i>” One possible explanation for the backbone incident may be a BGP route leak by the mentioned peering partner, where OVHcloud could have accepted a full Internet table from the peer and therefore overwhelmed their network or the peering partner’s network with traffic, or caused unexpected internal BGP route updates within AS16276.</p><p>Upon investigating what route leak may have caused this incident impacting OVHcloud, we found evidence of a maximum prefix-limit threshold being breached on our peering with <a href="https://radar.cloudflare.com/as49981"><u>Worldstream (AS49981)</u></a> in Amsterdam. </p>
            <pre><code>Oct 30 13:16:53  edge02.ams01 rpd[9669]: RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 141.101.65.53 (External AS 49981) changed state from Established to Idle (event PrefixLimitExceeded) (instance master)</code></pre>
            <p></p><p>As the number of received prefixes exceeded the limits configured for our peering session with Worldstream, the BGP session automatically entered an idle state. This prevented the route leak from impacting Cloudflare’s network. In analyzing <a href="https://datatracker.ietf.org/doc/html/rfc7854"><u>BGP Monitoring Protocol (BMP)</u></a> data from AS49981 prior to the automatic session shutdown, we were able to confirm Worldstream was sending advertisements with AS paths that contained their upstream Tier 1 transit provider.</p><p>During this time, we also detected over 500,000 BGP announcements from AS49981, as Worldstream was announcing routes to many of their peers, visible on <a href="https://radar.cloudflare.com/routing/as49981?dateStart=2024-10-30&amp;dateEnd=2024-10-30#bgp-announcements"><u>Cloudflare Radar</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YmTSJfXomzeb3mh93JyRH/15c764790576468a47d3760bc7f48153/Screenshot_2024-10-30_at_12.49.25_PM.png" />
          </figure><p>Worldstream later <a href="https://noc.worldstream.nl"><u>posted a notice</u></a> on their status page, indicating that their network experienced a route leak, causing routes to be unintentionally advertised to all peers:</p><blockquote><p><i>“Due to a configuration error on one of the core routers, all routes were briefly announced to all our peers. As a result, we pulled in more traffic than expected, leading to congestion on some paths. To address this, we temporarily shut down these BGP sessions to locate the issue and stabilize the network. We are sorry for the inconvenience.”</i></p></blockquote><p>We believe Worldstream also leaked routes on an OVHcloud peering session in Amsterdam, which caused today’s impact.</p>
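    <p>The sketch below models, in Python, the two signals discussed above: a maximum prefix-limit that tears down a session when a peer advertises too many routes, and the presence of a transit provider’s ASN in a peer’s AS paths. The threshold and ASNs are illustrative, and in practice this behavior is configured in the router’s BGP implementation rather than in application code.</p>
            <pre><code># Simplified model of per-peer prefix limits and basic route leak detection.
# Illustrative values; real deployments set these in the router's BGP configuration.
MAX_PREFIXES = 1000                # per-peer maximum prefix-limit
TRANSIT_ASNS = {174, 1299, 3356}   # example Tier 1 ASNs not expected in a peer's paths

def process_peer_updates(updates, max_prefixes=MAX_PREFIXES):
    """updates: iterable of (prefix, as_path) tuples received from a single peer."""
    accepted, flagged = [], []
    for prefix, as_path in updates:
        if len(accepted) + len(flagged) >= max_prefixes:
            # Analogous to the PrefixLimitExceeded event in the router log above:
            # the session would be moved to Idle and no further routes accepted.
            return accepted, flagged, "session torn down (prefix limit exceeded)"
        if TRANSIT_ASNS.intersection(as_path):
            flagged.append((prefix, as_path))  # transit ASN in a peer path: likely leak
        else:
            accepted.append((prefix, as_path))
    return accepted, flagged, "session established"

updates = [
    ("192.0.2.0/24", [49981, 64500]),           # expected: the peer and its customer
    ("198.51.100.0/24", [49981, 3356, 64501]),  # contains a transit ASN: leaked route
]
accepted, flagged, state = process_peer_updates(updates)
print(state, "| accepted:", len(accepted), "| flagged as leaked:", len(flagged))</code></pre>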
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare has written about<a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024"> <u>impactful route leaks</u></a> before, and there are multiple methods available to prevent BGP route leaks from impacting your network. One is setting <a href="https://www.rfc-editor.org/rfc/rfc7454.html#section-8"><u>max prefix-limits</u></a> for a peer, so the BGP session is automatically torn down when a peer sends more prefixes than they are expected to. Other forward-looking measures are<a href="https://manrs.org/2023/02/unpacking-the-first-route-leak-prevented-by-aspa/"> <u>Autonomous System Provider Authorization (ASPA) for BGP</u></a>, where Resource Public Key Infrastructure (RPKI) helps protect a network from accepting BGP routes with an invalid AS path, or<a href="https://rfc.hashnode.dev/rfc9234-observed-in-the-wild"> <u>RFC9234,</u></a> which prevents leaks by tying strict customer, peer, and provider relationships to BGP updates. For improved Internet resilience, we recommend that network operators follow recommendations defined within<a href="https://manrs.org/netops/"> <u>MANRS for Network Operators</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">Vn5VV2dLkJbOn1YNqSSBv</guid>
            <dc:creator>Bryton Herdes</dc:creator>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Tanner Ryan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Forced offline: the Q3 2024 Internet disruption summary]]></title>
            <link>https://blog.cloudflare.com/q3-2024-internet-disruption-summary/</link>
            <pubDate>Tue, 29 Oct 2024 13:05:00 GMT</pubDate>
            <description><![CDATA[ The third quarter of 2024 was particularly active, with quite a few significant Internet disruptions.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s network spans more than 330 cities in over 120 countries, where we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions. Thanks to <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> functionality released earlier this year, we can explore the impact from a <a href="https://developers.cloudflare.com/radar/glossary/#bgp-announcements"><u>routing</u></a> perspective, as well as a traffic perspective, at both a <a href="https://x.com/CloudflareRadar/status/1768654743742579059"><u>network</u></a> and <a href="https://x.com/CloudflareRadar/status/1773704264650543416"><u>location</u></a> level.</p><p>As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. </p><p>A larger list of detected traffic anomalies is available in the <a href="https://radar.cloudflare.com/outage-center#traffic-anomalies"><u>Cloudflare Radar Outage Center</u></a>.</p><p>Having said that, the third quarter of 2024 was particularly active, with quite a few significant Internet disruptions. Unfortunately, <a href="#government-directed"><u>governments continued to impose nationwide Internet shutdowns</u></a> intended to prevent cheating on exams. <a href="#cable-cuts"><u>Damage to both terrestrial and submarine cables</u></a> impacted Internet connectivity across Africa and in other parts of the world. <a href="#severe-weather"><u>Damage caused by an active hurricane season</u></a> caused Internet outages across the Caribbean and in multiple parts of the United States. Because Internet connectivity is dependent on reliable electrical power, both <a href="#power-outages"><u>planned and unplanned power outages</u></a> in South America and Africa resulted in multi-hour Internet disruptions. <a href="#military-action"><u>Military action</u></a> continued to cause Internet outages in affected countries, as did <a href="#maintenance"><u>infrastructure maintenance</u></a>, <a href="#fire"><u>fire</u></a>, and a purported <a href="#cyberattack"><u>cyberattack</u></a>. The quarter also saw several noteworthy Internet disruptions that <a href="#unknown"><u>did not have verified causes</u></a>.</p>
    <div>
      <h2>Government Directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    <p>Over the past several years, we have seen multiple governments around the world implement Internet shutdowns in response to protests within their countries. Some shutdowns are more targeted, affecting only (a subset of) mobile Internet providers, while others are more aggressive, effectively cutting off Internet connectivity at a national level. In addition, we all too frequently see governments implement nationwide multi-hour Internet shutdowns in an effort to prevent students from cheating on national exams. Unfortunately, governments were active in both respects during the third quarter, as we observed multiple government directed Internet shutdowns. Several were covered in our August 1 blog post, <a href="https://blog.cloudflare.com/a-recent-spate-of-internet-disruptions-july-2024/"><i><u>A recent spate of Internet disruptions</u></i></a><i>.</i></p>
    <div>
      <h3>Bangladesh</h3>
      <a href="#bangladesh">
        
      </a>
    </div>
    <p><a href="https://timesofindia.indiatimes.com/world/south-asia/internet-shut-nationwide-bandh-announced-why-is-bangladesh-experiencing-deadly-protests/articleshow/111829956.cms"><u>Violent student protests</u></a> in <a href="https://radar.cloudflare.com/bd"><u>Bangladesh</u></a> against quotas in government jobs and rising unemployment rates led the government to order the nationwide shutdown of mobile Internet connectivity on July 18, <a href="https://therecord.media/bangladesh-mobile-internet-social-media-outages-student-protests"><u>reportedly</u></a> to “<i>ensure the security of citizens.</i>” This government-directed shutdown ultimately became a near-complete Internet outage for the country, as broadband networks were taken offline as well. At a country level, <a href="https://radar.cloudflare.com/traffic/bd?dateStart=2024-07-14&amp;dateEnd=2024-07-28"><u>Internet traffic in Bangladesh dropped to near zero</u></a> just before 21:00 local time (15:00 UTC). <a href="https://radar.cloudflare.com/routing/bd?dateStart=2024-07-14&amp;dateEnd=2024-07-28"><u>Announced IP address space from the country dropped to near zero</u></a> at that time as well, meaning that nearly every network in the country was disconnected from the Internet.</p><p>Traffic and announced IP address space at a national level began to recover around 18:00 local time (12:00 UTC) on July 23, and continued over the next several days, as <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/17059-mobile-internet-in-bangladesh-to-stay-dark-until-at-least-sunday.html"><u>fixed broadband connectivity was restored</u></a>, with <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/17067-mobile-internet-returns-to-bangladesh-but-not-social-media-apps.html"><u>mobile connectivity returning on July 28</u></a>. The initial restoration was characterized as a “trial run”, prioritizing banking, commercial sectors, technology firms, exporters, outsourcing providers and media outlets, <a href="https://www.dhakatribune.com/bangladesh/352554/broadband-internet-restored-in-limited-areas-after"><u>according to</u></a> the state minister for post, telecommunication and information technology.</p><p>Ahead of this nationwide shutdown, we observed outages across several Bangladeshi network providers, perhaps foreshadowing what was to come. 
At <a href="https://radar.cloudflare.com/as24389"><u>AS24389 (Grameenphone)</u></a>, a complete Internet outage started at 01:30 local time on July 18 (19:30 UTC on July 17), with a total loss of both <a href="https://radar.cloudflare.com/traffic/as24389?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> and <a href="https://radar.cloudflare.com/routing/as24389?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a>.</p><p>The outage at <a href="https://radar.cloudflare.com/as45245"><u>AS25245 (Banglalink)</u></a> started at 02:15 local time on July 18 (20:15 UTC on July 17) as both <a href="https://radar.cloudflare.com/traffic/as45245?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> and <a href="https://radar.cloudflare.com/routing/as45245?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a> dropped to zero.</p><p>At <a href="https://radar.cloudflare.com/as24432"><u>AS24432 (Robi Axiata)</u></a>, an Internet outage was observed starting around 06:30 local time on July 18 (00:30 UTC), with both <a href="https://radar.cloudflare.com/traffic/as24432?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> and <a href="https://radar.cloudflare.com/routing/as24432?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a> disappearing at that time.</p><p><a href="https://radar.cloudflare.com/traffic/as58715?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Internet traffic</u></a> at <a href="https://radar.cloudflare.com/as58715"><u>AS58715 (Earth Telecommunication)</u></a> began to fall at 18:00 local time on July 18 (12:00 UTC), reaching zero four hours later. <a href="https://radar.cloudflare.com/routing/as58715?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>Announced IP address space</u></a> began to fall at 21:00 local time (15:00 UTC), and was completely gone by 21:25 local time (15:25 UTC).</p><p><a href="https://radar.cloudflare.com/as63526"><u>AS63526 (Carnival Internet)</u></a> was one of the last to fall before the complete shutdown, <a href="https://radar.cloudflare.com/traffic/as63526?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>losing traffic</u></a> at 20:45 local time (14:45 UTC), and seeing all of its <a href="https://radar.cloudflare.com/routing/as63526?dateStart=2024-07-14&amp;dateEnd=2024-07-29"><u>announced IP address space</u></a> withdrawn over the following hour.</p><p>These mobile connectivity outages lasted from July 18 through July 28. Just a few days after connectivity was restored, <a href="https://www.business-standard.com/world-news/bangladesh-protests-internet-shutdown-curfew-imposed-97-dead-in-clashes-124080500205_1.html"><u>additional clashes between police and protestors</u></a> drove the government to <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/17105-bangladesh-switches-off-mobile-internet-again-as-protests-escalate-2.html"><u>order mobile Internet connectivity to be shut down</u></a> again. As shown in the graphs below, traffic on these mobile network providers dropped between 13:30 and 14:15 local time (07:30 to 08:15 UTC) on Sunday, August 4.</p><p>These protests ultimately led the government to order a full Internet shutdown in the country, with both traffic and announced IP address space dropping precipitously around 10:30 local time (04:30 UTC) on Monday, August 5. 
However, the shutdown appeared to be short-lived, as <a href="https://en.prothomalo.com/bangladesh/gm0o97gu3x"><u>broadband connectivity</u></a> began to recover around 13:20 local time (07:20 UTC), with <a href="https://en.prothomalo.com/bangladesh/aoczyp8xg8"><u>mobile connectivity</u></a> being restored around 14:00 local time (08:00 UTC).</p>
    <div>
      <h3>Iraqi Kurdistan</h3>
      <a href="#iraqi-kurdistan">
        
      </a>
    </div>
    <p>Both <a href="https://radar.cloudflare.com/iq"><u>Iraq</u></a> and Iraqi Kurdistan (the autonomous Kurdistan region in the northern part of the country) regularly implement government directed Internet shutdowns to prevent cheating on secondary and baccalaureate exams. Within Iraqi Kurdistan, we observed two sets of exam-related Internet shutdowns during the third quarter. The impacts of the shutdowns are visible on traffic from networks that operate within the region, as well as on the country-level graphs for Iraq.</p><p>The first round of shutdowns occurred in July, impacting <a href="https://radar.cloudflare.com/as59625"><u>AS59625 (KorekTel)</u></a>, <a href="https://radar.cloudflare.com/as21277"><u>AS21277 (Newroz Telecom)</u></a>, <a href="https://radar.cloudflare.com/as48492"><u>AS48492 (IQ Online)</u></a>, and <a href="https://radar.cloudflare.com/as206206"><u>AS206206 (KNET)</u></a> between 06:00 - 08:00 local time (03:00 - 05:00 UTC) on July 3, 7, 10, and 14. This is consistent with shutdowns observed in the <a href="https://blog.cloudflare.com/q2-2024-internet-disruption-summary/"><u>second quarter</u></a>, as well as in <a href="https://blog.cloudflare.com/exam-internet-shutdowns-iraq-algeria/"><u>June 2023</u></a>. None of the impacted networks experienced a drop in announced IP address space during these shutdowns.</p><p>The second set of shutdowns in Iraqi Kurdistan took place across multiple days during the back half of August. On August 17, 19, 21, 24, 26, 28, and 31, all four network providers were again impacted, as seen in the graphs below, with traffic dropping between 06:00 - 08:00 local time (03:00 - 05:00 UTC).</p>
    <div>
      <h3>Iraq</h3>
      <a href="#iraq">
        
      </a>
    </div>
    <p>In <a href="https://radar.cloudflare.com/iq"><u>Iraq</u></a>, a second round of exams for 12th graders resulted in over two weeks of regular Internet shutdowns across the country occurring between 06:00 - 08:00 local time (03:00 - 05:00 UTC) on multiple days between August 29 and September 16, intended to prevent cheating on <a href="https://www.facebook.com/Iraq.Ministry.of.Education/posts/pfbid08kbeG2VEaFPweRiH1ofDdRazpVFKnHA2tRXM6pjQgCsQUXmuCar3oDSVsaCnwUZil"><u>second ministerial exams for secondary education</u></a>. Both HTTP traffic and announced IP address space from Iraq dropped during these shutdowns, as seen in the graphs below.</p><p>(Note that the red annotation bar visible on September 11 &amp; 12 on both the country and network-level graphs below highlights an internal data pipeline issue, and is not associated with an Internet shutdown in Iraq.)</p><p>This round of government-directed shutdowns impacted multiple local network providers, including <a href="https://radar.cloudflare.com/as58322"><u>AS58322 (Halasat)</u></a>, <a href="https://radar.cloudflare.com/as51684"><u>AS51684 (AsiaCell)</u></a>, <a href="https://radar.cloudflare.com/as203214"><u>AS203214 (HulumTele)</u></a>, <a href="https://radar.cloudflare.com/as199739"><u>AS199739 (Earthlink)</u></a>, and <a href="https://radar.cloudflare.com/as59588"><u>AS59588 (ZAINAS)</u></a>. In reviewing the distribution of mobile device and desktop traffic at a network level, gaps were observed during the shutdowns on <a href="https://radar.cloudflare.com/traffic/as58322?dateStart=2024-08-28&amp;dateEnd=2024-09-17#mobile-vs-desktop"><u>AS58322</u></a> and <a href="https://radar.cloudflare.com/traffic/as199739?dateStart=2024-08-28&amp;dateEnd=2024-09-17#mobile-vs-desktop"><u>AS199739</u></a>, and to a lesser extent, <a href="https://radar.cloudflare.com/traffic/as203214?dateStart=2024-08-28&amp;dateEnd=2024-09-17#mobile-vs-desktop"><u>AS203214</u></a>, suggesting that these networks were completely offline, while AS51684 and AS59588 remained at least partially online. (This is also corroborated by complete or partial loss of announced IP address space across these networks during the shutdowns.)</p>
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>A first round of exam-related Internet shutdowns took place in <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> earlier this year, between May 26 and June 13, and was discussed in our <a href="https://blog.cloudflare.com/syria-iraq-algeria-exam-internet-shutdown"><u>Exam-ining recent Internet shutdowns in Syria, Iraq, and Algeria</u></a> blog post. A second set of exams, and the associated Internet shutdowns requested by the Ministry of Education, began on July 25 and ran through August 8, as specified in the schedule <a href="https://www.facebook.com/photo/?fbid=862569062570288&amp;set=a.449047400589125"><u>published by Syrian Telecom on its Facebook page</u></a>.</p><p>The length of the shutdowns varied by day — they all began at 07:00 local time (04:00 UTC), but the end times ranged between 09:45 - 10:30 local time (06:45 - 07:30 UTC). The graphs below show the impact at a country level, as well as to <a href="https://radar.cloudflare.com/as29256"><u>AS29256 (Syrian Telecom)</u></a>, the <a href="https://radar.cloudflare.com/routing/sy"><u>primary telecommunications provider within the country</u></a>.</p><p>These shutdowns were also covered in our August 1 blog post, <a href="https://blog.cloudflare.com/a-recent-spate-of-internet-disruptions-july-2024/"><i><u>A recent spate of Internet disruptions</u></i></a><i>.</i></p>
    <div>
      <h3>Mauritania</h3>
      <a href="#mauritania">
        
      </a>
    </div>
    <p>On August 12, a round of <a href="https://ami.mr/fr/archives/251895"><u>baccalaureate exams began</u></a> in <a href="https://radar.cloudflare.com/mr"><u>Mauritania</u></a>, and in an effort to <a href="https://akhbarwatan.net/%D9%85%D9%88%D8%B1%D9%8A%D8%AA%D8%A7%D9%86%D9%8A%D8%A7-%D9%82%D8%B7%D8%B9-%D8%A7%D9%84%D8%A5%D9%86%D8%AA%D8%B1%D9%86%D8%AA-%D8%A8%D8%B3%D8%A8%D8%A8-%D8%A7%D9%84%D9%85%D8%B3%D8%A7%D8%A8%D9%82%D8%A7/"><u>prevent student cheating on the exams</u></a>, the government instituted multiple Internet shutdowns that impacted several major mobile providers. Two shutdowns were observed on August 12, between 08:00 - 12:00 local time (08:00 - 12:00 UTC) and between 15:00 - 19:00 local time (15:00 - 19:00 UTC), and an additional one was observed on August 13, between 08:00 - 12:30 local time (08:00 - 12:30 UTC). Impacted network providers included <a href="https://radar.cloudflare.com/as37508"><u>AS37508 (Mattel)</u></a>, <a href="https://radar.cloudflare.com/as37541"><u>AS37541 (Chinguitel)</u></a>, and <a href="https://radar.cloudflare.com/as29544"><u>AS29544 (Mauritel)</u></a>. Announced IP address space for these networks remained unchanged during the shutdown periods, suggesting that mobile subscriber connectivity was disabled, as opposed to the networks effectively being disconnected from the Internet, as we have seen in other countries.</p><p>Exam-related Internet shutdowns are, unfortunately, not new to Mauritania, as authorities in the country also implemented them <a href="https://smex.org/mauritania-the-drawbacks-of-disrupting-mobile-internet-after-prisoners-escape/"><u>between 2017 and 2020</u></a>.</p>
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Eswatini (Swaziland)</h3>
      <a href="#eswatini-swaziland">
        
      </a>
    </div>
    <p>On July 14, MTN Eswatini (AS327765) informed customers via <a href="https://x.com/MTNEswatini/status/1812558000009163027"><u>a post on X</u></a> that “<i>connection to the internet and data services is currently intermittent, because of fiber cable breaks resulting from wildfires.</i>” This apparent connection disruption was visible in Cloudflare Radar between 19:30 and 20:15 local time (17:30 and 18:15 UTC).</p>
    <div>
      <h3>Cameroon</h3>
      <a href="#cameroon">
        
      </a>
    </div>
    <p>In <a href="https://radar.cloudflare.com/cm"><u>Cameroon</u></a>, a fiber cut that occurred on August 4 during sanitation work disrupted mobile connectivity for Cameroon Telecommunications (<a href="https://radar.cloudflare.com/as15964"><u>AS15964 (Camtel)</u></a>) customers for over half a day. According to a (translated) <a href="https://x.com/Camtelonline/status/1820133286058062079"><u>post on X from Camtel</u></a>, “<i>We inform you that due to the sanitation work carried out in the city of Yaoundé, at the place called Cradat, our Voice and Data services have been temporarily interrupted on the entire mobile network.</i>” The observed disruption occurred between 03:00 - 16:30 local time (02:00 - 15:30 UTC). Although the disruption began overnight, when traffic is naturally lower anyway, both <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as15964&amp;dt=2024-08-04_2024-08-04&amp;timeCompare=2024-07-28"><u>request</u></a> and <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=as15964&amp;dt=2024-08-04_2024-08-04&amp;timeCompare=2024-07-28"><u>bytes</u></a> traffic remained below the levels seen at the same time a week prior for the duration of the disruption.</p>
    <div>
      <h3>Liberia</h3>
      <a href="#liberia">
        
      </a>
    </div>
    <p>The <a href="https://radar.cloudflare.com/lr"><u>Liberia</u></a> Telecommunications Authority <a href="https://www.facebook.com/TelecommunicationsAuthorityLIBERA/posts/pfbid0Ryktd7oPg1c8UYc1kAiDWo8aQPK3uUADDkuUYgSdeZtC2tYn4JiCYr66oZQoRBc2l"><u>posted an announcement to their Facebook page</u></a> on August 21 noting that “<i>We have been informed by the CCL that the ACE Cable is experiencing interruptions.</i>” (The <a href="https://ace-submarinecable.com/en/submarine-cable/"><u>Africa Coast to Europe (ACE) submarine cable</u></a> connects multiple countries along the West Coast of Africa to Portugal and Europe.) The announcement further noted that the first signs of interruption occurred at 01:00 local time (and UTC), and that <a href="https://radar.cloudflare.com/as37410"><u>Lonestar Cell MTN (AS37410)</u></a> was among the providers that had been “gravely affected” by the cut.</p><p>We observed traffic on Lonestar Cell MTN dropping just after 01:00, in line with the announcement. The network experienced a complete outage lasting over a day and a half, before traffic started to recover at 14:00 local time (and UTC) on August 22. In a <a href="https://www.facebook.com/LonestarCellMTN/posts/pfbid02xE2qxVEt1XnCHgqftjkj34KQssez13PoGTjSGoBAH688g6m4G7XCLHM58SLBCW8Ll"><u>Facebook post</u></a> on August 22, Lonestar Cell MTN confirmed that Internet service had been restored, and that customer accounts would be credited with 500 MB of data for free.</p>
    <div>
      <h3>Niger</h3>
      <a href="#niger">
        
      </a>
    </div>
    <p>A September 7 <a href="https://x.com/airtelniger/status/1832430266222096571"><u>post on X from Airtel Niger</u></a> alerted customers to Internet service disruptions caused by cuts on international fiber optic cables. As a land-locked country, <a href="https://radar.cloudflare.com/ne"><u>Niger</u></a> is dependent on terrestrial connections to networks in neighboring countries, but it isn’t clear which connection or country Airtel Niger’s post was referencing.</p><p>Two significant Internet disruptions were observed around the time of Airtel Niger’s post that we believe are related to the referenced fiber cuts. The first occurred between 18:00 - 21:00 local time (17:00 - 20:00 UTC) on September 6, visible at a country level and at a network level as well on <a href="https://radar.cloudflare.com/as37531"><u>AS37531 (Airtel Niger)</u></a> and <a href="https://radar.cloudflare.com/as37233"><u>AS37233 (Orange Niger / Zamani Telecom)</u></a>. The second disruption occurred between 10:45 - 12:00 local time (09:45 - 11:00 UTC) on September 7, visible at a country level as well as on those two networks. </p>
    <div>
      <h3>Haiti</h3>
      <a href="#haiti">
        
      </a>
    </div>
    <p>Internet disruptions related to submarine cable failures often take a significant amount of time to resolve because of the challenges repair crews face in getting to, and accessing, the damaged portion of the cable, as it is frequently located deep underwater in the middle of an ocean. A September 14 submarine cable failure that impacted <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> lasted for over a week for a similar, but slightly different, reason.</p><p>A significant loss of traffic on Digicel Haiti was first observed at 08:00 local time (12:00 UTC) on September 14. On September 16, Digicel Haiti <a href="https://x.com/DigicelHT/status/1835774732743876713/photo/1"><u>posted a press release</u></a> confirming that since September 14, a failure had been detected on an international submarine cable belonging to Cable and Wireless, and that the cable damage occurred at Kaliko Beach Club (the property is <a href="https://www.haitilibre.com/en/news-43221-haiti-digicel-failure-detected-on-an-international-submarine-cable-against-a-backdrop-of-litigation.html"><u>reportedly</u></a> used as a cable entry point). Digicel noted that their technicians went to the scene of the damage immediately, but were denied access, apparently because of a business dispute dating back to 2021. The release also explained that technical teams had taken temporary steps to ensure the continuity of essential services, which prevented the incident from resulting in a complete loss of connectivity. On September 22, a subsequent <a href="https://x.com/DigicelHT/status/1837875515148898513/photo/1"><u>press release</u></a> posted by Digicel Haiti announced the restoration of Internet services as of 02:00 local time (06:00 UTC), and referenced vandalism as the cause of the cable damage.</p>
    <div>
      <h3>Kyrgyzstan</h3>
      <a href="#kyrgyzstan">
        
      </a>
    </div>
    <p>Reported damage to the “<a href="https://akipress.com/news:797695:Internet_disruptions_in_Kyrgyzstan_caused_by_damage_of_main_communication_channel/"><u>backbone wire</u></a>” or “<a href="https://economist-kg.translate.goog/novosti/2024/09/25/akniet-obiasnil-prichinu-probliem-s-dostupom-k-intiernietu-v-bishkiekie-i-chuiskoi-oblasti/?_x_tr_sl=auto&amp;_x_tr_tl=en&amp;_x_tr_hl=en&amp;_x_tr_pto=wapp"><u>main cable</u></a>” of an <a href="https://kaktus-media.translate.goog/doc/510016_propal_internet_y_nekotoryh_sotovyh_operatorov_i_provayderov._pochemy.html?_x_tr_sl=auto&amp;_x_tr_tl=en&amp;_x_tr_hl=en&amp;_x_tr_pto=wapp"><u>upstream provider</u></a> resulted in a brief Internet outage for <a href="https://radar.cloudflare.com/kg"><u>Kyrgyzstan</u></a> Internet provider <a href="https://radar.cloudflare.com/as50223"><u>Megacom (AS50223)</u></a> on September 25. <a href="https://radar.cloudflare.com/as12389"><u>AS12389 (Rostelecom)</u></a> is <a href="https://radar.cloudflare.com/routing/as50223"><u>listed</u></a> as Megacom’s only upstream provider.</p><p>The outage lasted for only an hour, between 15:45 and 16:45 local time (09:45 - 10:45 UTC), dropping both traffic and announced IP address space to zero. At a country level, traffic dropped as much as 72% as compared to the previous week. Given the complete loss of both traffic and IP address space, the damage likely occurred on the connection between Megacom and Rostelecom.</p>
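    <p>As an aside on the routing metric referenced throughout this post, announced IP address space can also be checked independently of Radar. The minimal sketch below counts the prefixes currently originated by an ASN using the public RIPEstat “announced-prefixes” data call, with AS50223 used purely as an example; this sketch is our own illustration and an assumption about that public API, not the data source or methodology behind the Radar routing graphs.</p>
    <pre><code># Minimal sketch (not Cloudflare Radar's methodology): count the prefixes an ASN
# currently originates, using the public RIPEstat "announced-prefixes" data call.
import json
import urllib.request

ASN = "AS50223"  # example ASN from the text above (Megacom); swap in any ASN of interest
url = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource={ASN}"

with urllib.request.urlopen(url) as response:
    payload = json.load(response)

# The response payload is assumed to expose the announced prefixes under data.prefixes.
prefixes = payload["data"]["prefixes"]
print(f"{ASN} currently announces {len(prefixes)} prefixes")
</code></pre>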
    <div>
      <h2>Severe weather</h2>
      <a href="#severe-weather">
        
      </a>
    </div>
    <p>An active hurricane season during July, August, and September brought multiple hurricanes whose damage to infrastructure disrupted Internet connectivity in multiple places across the Caribbean and the Southeastern United States.</p>
    <div>
      <h3>Grenada &amp; Saint Vincent and the Grenadines</h3>
      <a href="#grenada-saint-vincent-and-the-grenadines">
        
      </a>
    </div>
    <p>At the start of the third quarter, <a href="https://radar.cloudflare.com/gd"><u>Grenada</u></a> and <a href="https://radar.cloudflare.com/vc"><u>Saint Vincent and the Grenadines</u></a> both suffered significant damage from Hurricane Beryl, <a href="https://www.usatoday.com/story/news/nation/2024/07/03/hurricane-beryl-destruction-islands/74296817007/"><u>reportedly</u></a> causing destruction of infrastructure, buildings, agriculture, and the natural environment.</p><p>On July 1, traffic from Grenada dropped significantly at 10:00 local time (14:00 UTC), just ahead of <a href="https://www.cnn.com/2024/07/01/weather/hurricane-beryl-caribbean-landfall-monday/index.html"><u>landfall</u></a> on Grenada’s Carriacou Island. The most significant impacts to traffic were seen for approximately the first 24 hours, though traffic did not return to expected pre-storm levels until around 10:00 local time (14:00 UTC) on July 5.</p><p>Internet traffic in Saint Vincent and the Grenadines was also disrupted by Hurricane Beryl, also falling at 10:00 local time (14:00 UTC). Similar to Grenada, the most significant impact was seen in the first 24 hours, with consistent gradual recovery seen after that time. However, traffic did not return to expected pre-storm levels until July 11.</p>
    <div>
      <h3>Jamaica</h3>
      <a href="#jamaica">
        
      </a>
    </div>
    <p>As Hurricane Beryl continued across the Caribbean, it <a href="https://x.com/weatherchannel/status/1808576720234008765"><u>passed Jamaica on July 3</u></a>. The associated damage that it caused impacted Internet connectivity on the island, with traffic dropping significantly around 14:00 local time (19:00 UTC). As the graph below shows, the disruption was preceded by higher than normal traffic volumes, presumably due to residents looking for information about Beryl. The disruption lasted nearly a week, with traffic returning to expected levels on July 10.</p>
    <div>
      <h3>U.S. Virgin Islands</h3>
      <a href="#u-s-virgin-islands">
        
      </a>
    </div>
    <p>The following month, damage from Tropical Storm Ernesto caused <a href="https://x.com/VIWAPA/status/1824110275710091527"><u>power outages across the U.S. Virgin Islands</u></a>, resulting in disruptions to Internet connectivity. Traffic from the islands dropped precipitously at 22:00 local time on August 13 (02:00 UTC on August 14) and remained lower for over two days, before returning to expected pre-storm levels around 11:00 local time (15:00 UTC) on August 16.</p>
    <div>
      <h3>Bermuda</h3>
      <a href="#bermuda">
        
      </a>
    </div>
    <p>Over the course of the following few days, Ernesto strengthened from a tropical storm into a hurricane, but had weakened by the time it hit <a href="https://radar.cloudflare.com/bm"><u>Bermuda</u></a> on August 16/17. In this case, damage was <a href="https://www.reuters.com/business/environment/hurricane-ernesto-weakens-still-dangerous-it-closes-bermuda-2024-08-17/"><u>reportedly</u></a> limited to power outages, downed trees, and flooding, but even this limited damage disrupted Internet connectivity on the island. As the storm made landfall on the island, traffic levels dropped over 80% at 22:00 local time on August 16 (01:00 UTC on August 17). Traffic levels remained depressed for about two and a half days, recovering to expected levels around 09:00 local time (12:00 UTC) on August 19.</p>
    <div>
      <h3>Nepal</h3>
      <a href="#nepal">
        
      </a>
    </div>
    <p><a href="https://www.dw.com/en/nepal-floods-landslides-leave-at-least-151-dead/a-70354640"><u>Heavy rains in Nepal</u></a> at the end of September resulted in flooding and landslides across much of the country, which in turn resulted in power outages and Internet disruptions. One such disruption believed to be associated with the impacts of the storm was observed on September 28, when <a href="https://radar.cloudflare.com/as23752"><u>AS23752 (Nepal Telecom)</u></a>, <a href="https://radar.cloudflare.com/as45650"><u>AS45650 (Vianet)</u></a>, <a href="https://radar.cloudflare.com/as139922"><u>AS139922 (Dishhome)</u></a>, and <a href="https://radar.cloudflare.com/as17501"><u>AS17501 (Worldlink)</u></a> all saw traffic drop 50 - 70% between 14:15 - 16:00 local time (08:30 - 10:15 UTC).</p>
    <div>
      <h3>United States</h3>
      <a href="#united-states">
        
      </a>
    </div>
    <p>A disruption to traffic from <a href="https://radar.cloudflare.com/as11427"><u>AS11427 (Charter Communications/Spectrum)</u></a> in Texas that occurred between 12:30 and 19:30 local time on July 9 (17:30 - 00:30 UTC) was caused by “<i>a third-party infrastructure issue caused by the impact of Hurricane Beryl</i>”, according to a July 9 <a href="https://x.com/Ask_Spectrum/status/1810804196112806016"><u>post on X</u></a> from the provider. Spectrum <a href="https://x.com/Ask_Spectrum/status/1810735748410396680"><u>acknowledged the issue</u></a> shortly after it began, and <a href="https://x.com/Ask_Spectrum/status/1810851153053118568"><u>followed up again</u></a> after service had been restored.</p><p>Hurricane Helene <a href="https://www.wistv.com/2024/10/03/reviewing-hurricane-helenes-destructive-path-through-southeast/"><u>made landfall in northern Florida</u></a> as a Category 4 storm late in the evening (local time) on September 26, and over the following hours and days, <a href="https://www.usatoday.com/story/graphics/2024/09/29/hurricane-helene-damage-maps/75440587007/"><u>continued north</u></a> through Georgia, South Carolina, and North Carolina, and into Tennessee. Even as it weakened, it caused historic flooding and damage to roads, homes, power lines, and telecommunications infrastructure. Below, we review the traffic impacts observed at a state level in three of the most impacted states, as well as explore the impact at a network level for selected providers. (<a href="https://www.kentik.com/blog/author/doug-madory/">Doug Madory at Kentik</a> published an excellent <a href="https://www.kentik.com/blog/hurricane-helene-devastates-network-connectivity-in-parts-of-the-south/"><u>blog post exploring the impact of Helene</u></a> from the perspective of their data, and the selection of networks referenced below was informed by that post.)</p>
    <div>
      <h4>Georgia</h4>
      <a href="#georgia">
        
      </a>
    </div>
    <p>Helene entered Georgia early morning on Friday, September 27, and by midday (local time), peak traffic was approximately 20% lower than peak levels seen in the days ahead of the storm. (The lower peaks on September 28 &amp; 29 are likely due to it being a weekend.) At a state level, peak traffic remained lower over the following week, with more recovery seen heading into the week of October 6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aYsyZNt5qCWqJhgK8yt1g/0f8be9f2ed8c2ab5121caef9b8e079ff/SEVERE_WEATHER_-_UNITED_STATES_-_Helene_-_Georgia.png" />
          </figure><p>One of the most significantly impacted network providers in Georgia was <a href="https://radar.cloudflare.com/as11240?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS11240 (ATC Broadband)</u></a>, which saw traffic start to drop around 22:00 local time on September 26 (02:00 UTC on September 27). Subscribers and customers experienced a near complete outage until around 08:00 local time on September 30 (12:00 UTC), when traffic volumes slowly started to recover. The normal diurnal traffic pattern became more clear in the following days, with peak traffic levels continuing to increase over the next week as well.</p><p>Other network providers in Georgia that experienced significant impacts include <a href="https://radar.cloudflare.com/as400511?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS400511 (Clearwave Fiber)</u></a>, <a href="https://radar.cloudflare.com/as394473?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS394473 (Brantley Telephone Company)</u></a>, <a href="https://radar.cloudflare.com/as40285?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS40285 (Northland Cable Television)</u></a>, <a href="https://radar.cloudflare.com/as15313?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS15313 (Pembroke Telephone Company)</u></a>, and <a href="https://radar.cloudflare.com/as397118?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS397118 (Glenwood Telephone Company)</u></a>.</p>
    <div>
      <h4>South Carolina</h4>
      <a href="#south-carolina">
        
      </a>
    </div>
    <p>The midday traffic peak on September 27 in South Carolina was just 65% of that seen on the preceding days, with the peaks remaining lower over the following two weekend days. Traffic remained somewhat lower during the week following Helene, with peak increases becoming more evident the week of October 6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40euoNqEw8bwaAqVgmsmaQ/ccf5c7114e26a85f6445ce9eaf21b00c/SEVERE_WEATHER_-_UNITED_STATES_-_Helene_-_South_Carolina.png" />
          </figure><p>At <a href="https://radar.cloudflare.com/as19212?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS19212 (Piedmont Rural Telephone)</u></a> in South Carolina, traffic began to fall rapidly around midnight local time on September 27 (04:00 UTC), reaching a state of near complete outage over the next eight hours. A gradual recovery is visible over the following several days, with a more regular pattern becoming evident on October 1, with rapid growth over the following week, accelerating towards the end of the week.</p><p>Other network providers in South Carolina, including <a href="https://radar.cloudflare.com/as397068?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS397068 (Carolina Connect)</u></a>, <a href="https://radar.cloudflare.com/as10279?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS10279 (West Carolina Communications)</u></a>, <a href="https://radar.cloudflare.com/as20222?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS20222</u></a> &amp; <a href="https://radar.cloudflare.com/as21898?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS21898 (TruVista)</u></a>, and <a href="https://radar.cloudflare.com/as14615?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS14615 (Rock Hill Telephone)</u></a>, also experienced significant disruptions to connectivity in the wake of Helene.</p>
    <div>
      <h4>North Carolina</h4>
      <a href="#north-carolina">
        
      </a>
    </div>
    <p>Although a drop in traffic is visible in the graph for North Carolina on September 27, it occurs after a midday peak in line with previous days, and the magnitude is not as significant as that seen in South Carolina and Georgia. Traffic peaks over the following week are in line with the week preceding Helene’s arrival, with higher peaks seen the week of October 6.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ggc01nO3m5J85jNwm5rSF/13af760fe7a839472ae5c14116042f9c/SEVERE_WEATHER_-_UNITED_STATES_-_Helene_-_North_Carolina.png" />
          </figure><p>North Carolina providers <a href="https://radar.cloudflare.com/as53488?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS53488 (Morris Broadband)</u></a> and <a href="https://radar.cloudflare.com/as53274?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS53274 (Skyrunner)</u></a> both experienced multi-day disruptions, likely related to damage from Helene. However, these disruptions took Morris Broadband completely offline several times over the course of a week — the announced IP address space graph below shows three distinct drops to zero, aligning with outages visible in the traffic graph, when the network was effectively disconnected from the Internet. A similar but less severe pattern was seen at Skyrunner, which lost 75-80% of announced IP address space for a two-day period covering September 27-29, aligning with an outage visible in the associated traffic graph.</p><p>Other impacted network providers in North Carolina included <a href="https://radar.cloudflare.com/as22191?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS22191 (Wilkes Communications)</u></a> and <a href="https://radar.cloudflare.com/as23118?dateStart=2024-09-24&amp;dateEnd=2024-10-07"><u>AS23118 (Skyline Telephone)</u></a>.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Venezuela</h3>
      <a href="#venezuela">
        
      </a>
    </div>
    <p>A nationwide power outage in <a href="https://radar.cloudflare.com/ve"><u>Venezuela</u></a> on August 30 was, <a href="https://www.reuters.com/business/energy/venezuelas-capital-caracas-other-regions-face-power-outage-2024-08-30/"><u>according to President Nicolás Maduro</u></a>, the result of an attack on the Guri Reservoir, Venezuela's largest hydroelectric project. A <a href="https://www.reuters.com/business/energy/venezuelas-capital-caracas-other-regions-face-power-outage-2024-08-30/"><u>published report</u></a> indicated that all 24 of the country's states reported a total or partial loss of electricity supply. The loss of power unsurprisingly caused an Internet disruption, with country-level traffic dropping 82%, starting around 04:45 local time (08:45 UTC). Traffic began to increase as electricity returned to various parts of the country throughout the day, and returned to expected levels just after midnight local time on August 31 (04:00 UTC). </p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>On August 30, Kenya Power Care <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid0krBvZqWT7AfF8HjTPdm9Y84QmmkgfUzPjhtgxZjzEpyVqRLFS6VBt5vR43s5dxiHl"><u>posted a Customer Alert on its Facebook page</u></a>, issued at 21:57 local time (18:57 UTC), stating that “<i>We have lost power supply to various parts of the country except North Rift region and sections of Western region.</i>” Approximately a half hour before that alert, Kenya’s Internet traffic began to drop, falling as much as 61%. Just two hours later, Kenya Power Care <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid0m4kP2NwdiDPnH4UpWH39QkpLANTWc6SR3bpiHxnwCUdBvwwou7p1skfaWbghRFWml"><u>posted a follow up</u></a>, stating “<i>Following the partial outage affecting several parts of the country this evening, we are pleased to report that power supply has now been restored to the entire Western region, as well as parts of Central Rift, South Nyanza, and Nairobi regions.</i>” However, traffic did not return to expected levels for several more hours, taking until 06:00 local time (03:00 UTC).</p><p>A week later, on September 6, Kenya Power Care <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid02BcJVt9uu1N3mmGzf9mivyXev4FSJVpPZ5ni1VkZC9WSdYyYyk7MCMtignBPzcVnyl"><u>posted another similar Customer Alert</u></a>, noting that “<i>We are experiencing a power outage affecting several parts of the country, except sections of North Rift and Western regions.</i>” This alert was issued at 09:20 local time (06:20 UTC), and follows a drop in Internet traffic that started around 09:00 local time (06:00 UTC). Traffic dropped approximately 45% during this power outage, and returned to expected levels around 16:00 local time (13:00 UTC). Traffic recovery aligns with a <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid02VzrAMQeuTrmfyywXeB7qXFyAmeM1eEQCBX6dvY3DHbyfUoTjgTJATcg9cToBk7zal"><u>subsequent Customer Alert posted on Facebook</u></a>, where Kenya Power Care stated “<i>We are glad to report that normal electricity supply was restored across the country as at 3:49pm”.</i></p><p>A statement from Energy and Petroleum Cabinet Secretary Opiyo Wandayi, <a href="https://www.facebook.com/KenyaPowerLtd/posts/pfbid02Nck9kx6NFmvFRdLEpzPxk1UPW3HtNw41PHNhHd3PMR2Y73BpkMALZmNU3mkar8DPl"><u>shared on Facebook by Kenya Power Care</u></a>, explained the cause of the power outage: “<i>Today, Friday 6th September 2024 at 8.56 am, the 220kV High Voltage Loiyangalani transmission line tripped at Suswa substation while evacuating 288MW from Lake Turkana Wind Power (LTWP) plant. This was followed by a trip on the Ethiopia – Kenya 500kV DC interconnector that was then carrying 200MW, resulting to a total loss of 488MW…</i>” </p>
    <div>
      <h3>Ecuador</h3>
      <a href="#ecuador">
        
      </a>
    </div>
    <p>According to a (translated) September 7 <a href="https://x.com/OperadorCenace/status/1832431918563872871"><u>post on X from CENACE</u></a>, the national electricity operator in <a href="https://radar.cloudflare.com/ec"><u>Ecuador</u></a>, “<i>We inform the public that due to a fault in the Molino substation bar, which is connected to the Paute generation, there has been a power outage in some provinces of the country. Cenace's technical team, in coordination with the distribution companies, is working to gradually restore electrical service. It is estimated that it will take 3 to 4 hours maximum for the supply to return to normal.</i>” The post was published at 09:53 local time (14:53 UTC), approximately an hour after Internet traffic from the country began to drop. Traffic returned to expected levels just under four hours later, at around 12:30 local time (17:30 UTC), in line with CENACE’s predicted time for power to be fully restored.</p><p>On September 18/19, the first of several planned nightly power outages to enable needed grid maintenance in Ecuador disrupted Internet connectivity. Traffic dropped by over 60% as compared to the same time the prior week starting around 21:30 local (02:30 UTC), with the power outages <a href="https://www.americaeconomia.com/en/node/288653"><u>reportedly</u></a> scheduled for 22:00 - 06:00 local time. Internet traffic recovered to expected levels around 06:00 local time (11:00 UTC) as power was restored. Similar power cuts were <a href="https://ec.usembassy.gov/alert-series-of-nationwide-overnight-power-outages-and-curfews/"><u>reportedly planned from September 23 to September 27</u></a>, but these power outages did not appear to impact <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=ec&amp;dt=2024-09-22_2024-09-28&amp;timeCompare=1"><u>traffic levels in Ecuador as compared to the previous week</u></a>. </p>
    <div>
      <h3>Senegal</h3>
      <a href="#senegal">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/sn"><u>Senegal’s</u></a> power company, Senelec, <a href="https://x.com/Senelecofficiel/status/1834245424787394629"><u>posted a communiqué on X</u></a> on September 12 that stated (translated) “<i>Senelec informs its valued customers that an incident that occurred this morning at the Hann substation resulted in the loss of the OMVS interconnected network and disruptions to electricity distribution.</i>” This disruption to electricity distribution also resulted in a disruption to Internet traffic, which dropped sharply at 13:00 local time (13:00 UTC), falling as much as 80%. Traffic recovered to expected levels by 20:00 local time (20:00 UTC) around the same time that Senelec <a href="https://x.com/Senelecofficiel/status/1834320225954922533"><u>posted a followup about the incident</u></a> that stated (translated) “<i>Effective restoration of electricity supply in all localities.</i>”</p>
    <div>
      <h2>Maintenance</h2>
      <a href="#maintenance">
        
      </a>
    </div>
    
    <div>
      <h3>Syria</h3>
      <a href="#syria">
        
      </a>
    </div>
    <p>As we discussed above, Internet users in <a href="https://radar.cloudflare.com/sy"><u>Syria</u></a> were impacted by an exam-related Internet shutdown from 07:00 - 10:15 local time (04:00 - 07:15 UTC) on July 30. However, just an hour after connectivity was restored, another disruption occurred, as seen in both the traffic and announced IP address space graphs below. According to a (translated) <a href="https://www.facebook.com/photo?fbid=868145108679350&amp;set=a.449047403922458"><u>Facebook post from Syrian Telecom</u></a>, “...<i>during the periodic maintenance of one of the air conditioners in one of the technical halls, an explosion occurred, which caused the internet circuits to be temporarily out of service.</i>” Traffic remained depressed for approximately eight hours, recovering to expected levels around 19:00 local time (16:00 UTC).</p>
    <div>
      <h2>Cyberattack</h2>
      <a href="#cyberattack">
        
      </a>
    </div>
    
    <div>
      <h3>Russia</h3>
      <a href="#russia">
        
      </a>
    </div>
    <p>Roskomnadzor, Russia’s Internet regulator, <a href="https://t.me/roskomnadzorro/1897"><u>blamed</u></a> a brief disruption in traffic observed in <a href="https://radar.cloudflare.com/ru"><u>Russia</u></a> and on <a href="https://radar.cloudflare.com/as12389"><u>AS12389 (Rostelecom)</u></a> on August 21 on a distributed denial-of-service (DDoS) attack that targeted Russian telecommunications operators. The disruption was brief, lasting from around 13:45 until 14:30 Moscow time (10:45 - 11:30 UTC). Roskomnadzor <a href="https://www.uawire.org/massive-internet-outage-in-russia-kremlin-s-attempt-to-block-messaging-apps-causes-nationwide-disruption"><u>subsequently stated</u></a> "<i>As of 3 PM Moscow time, the attack has been repelled, and services are operating normally.</i>" The disruption <a href="https://www.barrons.com/news/large-scale-outages-hit-telegram-whatsapp-in-russia-3a08695c"><u>reportedly</u></a> impacted messaging services Telegram and WhatsApp, <a href="https://www.uawire.org/massive-internet-outage-in-russia-kremlin-s-attempt-to-block-messaging-apps-causes-nationwide-disruption"><u>as well as</u></a> Wikipedia, Yandex, VKontakte, telecom support services, and mobile banking apps. Some experts <a href="https://www.uawire.org/massive-internet-outage-in-russia-kremlin-s-attempt-to-block-messaging-apps-causes-nationwide-disruption"><u>questioned the official explanation</u></a>, suggesting instead that the disruption was due to <a href="https://therecord.media/russia-blames-websites-apps-outages-on-ddos"><u>centralized interference from Roskomnadzor</u></a>.</p>
    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Palestine</h3>
      <a href="#palestine">
        
      </a>
    </div>
    <p>We have covered Internet disruptions related to the ongoing conflict in Gaza multiple times since October 2023, both on <a href="https://x.com/search?q=gaza%20internet%20(from%3Acloudflareradar)&amp;src=typed_query&amp;f=live"><u>Cloudflare Radar’s presence on X</u></a>, and on the Cloudflare blog (<a href="https://blog.cloudflare.com/internet-traffic-patterns-in-israel-and-palestine-following-the-october-2023-attacks/"><u>1</u></a>, <a href="https://blog.cloudflare.com/q4-2023-internet-disruption-summary/"><u>2</u></a>, <a href="https://blog.cloudflare.com/q1-2024-internet-disruption-summary/"><u>3</u></a>). In many of these cases, Paltel (AS12975) has posted notices on social media regarding service disruptions and outages. On September 8, <a href="https://www.facebook.com/paltel.970/posts/pfbid036YptxzF77Rk5U7tVGT5Xh4Yx4897BVoeb4qsZNhGkLh1XxLCTLMzDjp1RLAkBfJHl"><u>Paltel posted a message on its Facebook page</u></a>, stating (translated) “<i>We regret to announce the suspension of home internet services in the central and southern areas of the Gaza Strip, due to the ongoing aggression.</i>”</p><p>Within the Gaza, Rafah, and Deir al-Balah Governorates, we observed a sharp drop in traffic at 18:00 local time (16:00 UTC). The impact appeared to be most significant in Rafah and Deir al-Balah. Traffic returned to expected levels around 23:00 local time (21:00 UTC), and Paltel <a href="https://www.facebook.com/paltel.970/posts/pfbid0hJxQReZimYRnNxbMNeyscVCtwhS2wnA4Us6fucJ4WntFuQeS3BAKqhMWxJJqFzaVl"><u>confirmed the service restoration in a subsequent Facebook post</u></a>, stating (translated) “<i>We would like to announce the return of home Internet services in central and southern Gaza Strip to the way it was before it was interrupted hours ago.</i>”</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2QELKmNYaZC5NmvkTDreST/f913dde97df36d81772756d528745980/MILITARY_ACTION_-_PALESTINE_-_Gaza_-_Gaza_Governorate.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zALCKZTWs6E62cptuxPjq/cd71ff38103f4574b7d2f6e3c3b66ab6/MILITARY_ACTION_-_PALESTINE_-_Gaza_-_Rafah_Governorate.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7yvARj41pkM60UhcfNtEEC/4963c76fe2ef45802211ae2b6ebbe5ff/MILITARY_ACTION_-_PALESTINE_-_Gaza_-_Deir_al-Balah_Governorate.png" />
          </figure>
    <div>
      <h3>Lebanon</h3>
      <a href="#lebanon">
        
      </a>
    </div>
    <p><a href="https://www.cnn.com/world/live-news/israel-lebanon-war-hezbollah-09-27-24#cm1lbrhcd001k3b6mxt9ij3lb"><u>Israeli airstrikes targeting the Lebanese capital of Beirut</u></a> on September 28 likely knocked local network provider <a href="https://radar.cloudflare.com/as42852"><u>Solidere (AS42852)</u></a> offline for several hours. The graph below shows a loss of traffic starting around 12:15 local time (10:15 UTC), at the same time a complete loss of announced IP address space occurred. Most of Solidere’s IP address space started to get announced again at 14:45 local time (12:45 UTC), and a slight increase in traffic was seen at that time as well. Traffic levels fully recovered just after 18:00 local time (16:00 UTC), and announced IP address space had stabilized by that time as well. </p>
    <div>
      <h2>Fire</h2>
      <a href="#fire">
        
      </a>
    </div>
    
    <div>
      <h3>Algeria</h3>
      <a href="#algeria">
        
      </a>
    </div>
    <p>A fire near a data center in Blida Province, <a href="https://radar.cloudflare.com/dz"><u>Algeria</u></a>, disrupted connectivity on AS327931 (Djezzy) at 13:00 local time (12:00 UTC) on July 24. According to a (translated) <a href="https://x.com/djezzy/status/1816272546284855678"><u>X post from Djezzy</u></a>, “<i>Djezzy announced fluctuations in its services in some areas of the country, as it was a victim of a fire that broke out on Wednesday, July 24, 2024, in a warehouse of one of the companies located near its technical center in the state of Blida.</i>” The post from Djezzy predicted that “<i>97% of the sites will be restored by around 3 am [July 25]</i>”, but traffic did not return to expected levels until the end of the day on July 25.</p>
    <div>
      <h2>Unknown</h2>
      <a href="#unknown">
        
      </a>
    </div>
    
    <div>
      <h3>United States</h3>
      <a href="#united-states">
        
      </a>
    </div>
    <p>On Monday, September 30, customers on Verizon’s mobile network in multiple cities across the United States <a href="https://apnews.com/article/verizon-outage-sos-mode-phone-service-b03c9b8615e0650669339daa2eaa1713"><u>reported</u></a> experiencing a loss of connectivity. Impacted phones showed “SOS” instead of the usual bar-based signal strength indicator, and customers complained of an inability to make or receive calls on their mobile devices. Although initial reports of connectivity problems started around 09:00 ET (13:00 UTC), we didn’t see a noticeable change in request volume at an ASN level until about two hours later. <a href="https://radar.cloudflare.com/as6167"><u>AS6167 (CELLCO)</u></a> is the <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> used by Verizon for its mobile network.</p><p>Just before 12:00 ET (16:00 UTC), Verizon <a href="https://x.com/VerizonNews/status/1840780785084985777"><u>published a social media post acknowledging the problem</u></a>, stating “We are aware of an issue impacting service for some customers. Our engineers are engaged, and we are working quickly to identify and solve the issue.” As the <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as6167&amp;dt=2024-09-30_2024-09-30&amp;timeCompare=2024-09-23"><u>graph</u></a> below shows, a slight decline (-5%) in HTTP traffic as compared to traffic at the same time a week prior is first visible around 11:00 ET (15:00 UTC), and request volume fell as much as 9% below expected levels at 13:45 ET (17:45 UTC).</p><p>Media reports listed cities including Chicago, Indianapolis, New York City, Atlanta, Cincinnati, Omaha, Phoenix, Denver, Minneapolis, Seattle, Los Angeles, and Las Vegas as being most impacted. Traffic graphs illustrating the impacts seen in these cities can be found in our <a href="https://blog.cloudflare.com/impact-of-verizons-september-30-outage-on-internet-traffic/"><i><u>Impact of Verizon’s September 30 outage on Internet traffic</u></i></a> blog post.</p><p>Traffic appeared to return to expected levels around 17:15 ET (21:15 UTC). At 19:18 ET (23:18 UTC), a <a href="https://x.com/VerizonNews/status/1840893978411221191"><u>social media post</u></a> from Verizon noted “<i>Verizon engineers have fully restored today's network disruption that impacted some customers. Service has returned to normal levels.</i>”</p>
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>On July 31, <a href="https://radar.cloudflare.com/pk"><u>Pakistan</u></a> experienced a wide-scale Internet disruption that lasted approximately two hours, between 13:30 - 15:30 local time (08:30 - 10:30 UTC). Traffic only dropped ~45% at a country level, but <a href="https://radar.cloudflare.com/as17557"><u>AS17557 (PTCL)</u></a> experienced a near complete loss of traffic, while traffic at <a href="https://radar.cloudflare.com/as24499"><u>AS24499 (Telenor Pakistan)</u></a> dropped nearly 90%. Together, the two network providers serve an estimated nine million users, and are among the top five Internet service providers in the country.</p><p>The actual cause of the disruption is disputed. It was <a href="https://www.globalvillagespace.com/internet-outage-in-pakistan/"><u>reported</u></a> that the Pakistan Telecommunication Authority (PTA) attributed the disruptions to a technical glitch in the international submarine cable affecting the Pakistan Telecommunication Company Limited (PTCL) network. However, another <a href="https://incpak.com/national/internet-services-outtage-across-pakistan/"><u>published report</u></a> noted “According to our sources, the government’s latest firewall edition to block the content was misconfigured, resulting in Internet connectivity disruption.” Additional details can be found in our August 1 blog post, <a href="https://blog.cloudflare.com/a-recent-spate-of-internet-disruptions-july-2024/"><i><u>A recent spate of Internet disruptions</u></i></a><i>.</i></p>
    <div>
      <h3>United Kingdom</h3>
      <a href="#united-kingdom">
        
      </a>
    </div>
    <p>On August 14, subscribers of <a href="https://radar.cloudflare.com/gb"><u>UK</u></a> service provider <a href="https://radar.cloudflare.com/as25135"><u>Vodafone (AS25135)</u></a> <a href="https://www.dailymail.co.uk/sciencetech/article-13742755/Vodafone-network-crashes-internet.html"><u>reported problems</u></a> accessing both mobile and landline Internet connections. Starting around 11:00 local time (10:00 UTC), we observed traffic starting to drop, ultimately falling 43% below the same time the prior week. The disruption was fairly short-lived, as traffic returned to expected levels by 13:30 local time (12:30 UTC). Vodafone did not acknowledge the issue on social media, nor did it provide a public explanation for what caused the disruption.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Although Internet disruptions observed during the third quarter had a variety of underlying causes, those caused by power outages due to aging or insufficiently maintained electrical infrastructure are worth highlighting. Of course, widespread power outages always create a massive inconvenience for impacted populations, but over the last several years, as communication, entertainment, commerce, and more have become increasingly reliant on the Internet, the impact of these outages has become even more significant, because losing electrical power largely means losing Internet connectivity. Although mobile connectivity may still be available in some cases, it is decidedly not a complete replacement, not to mention that mobile devices will eventually need to be recharged. While addressing the underlying infrastructure issues requires non-trivial amounts of time, resources, and money, governments appear to be taking steps towards doing so.</p><p>Visit <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> for additional insights around Internet disruptions, routing issues, Internet traffic trends, security and attacks, and Internet quality. Follow us on social media at <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via e-mail.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">3xoUhxvPcDFTiYCT9CjHhs</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Impact of Verizon’s September 30 outage on Internet traffic]]></title>
            <link>https://blog.cloudflare.com/impact-of-verizons-september-30-outage-on-internet-traffic/</link>
            <pubDate>Tue, 01 Oct 2024 00:32:00 GMT</pubDate>
            <description><![CDATA[ On Monday, September 30, customers on Verizon’s mobile network in multiple cities across the United States reported experiencing a loss of connectivity. HTTP request traffic data from Verizon’s mobile ASN (AS6167) showed nominal declines across impacted cities.
 ]]></description>
            <content:encoded><![CDATA[ <p>On Monday, September 30, 2024, customers on Verizon’s mobile network in multiple cities across the United States <a href="https://apnews.com/article/verizon-outage-sos-mode-phone-service-b03c9b8615e0650669339daa2eaa1713"><u>reported</u></a> experiencing a loss of connectivity. Impacted phones showed “SOS” instead of the usual bar-based signal strength indicator, and customers complained of an inability to make or receive calls on their mobile devices.</p><p><a href="https://radar.cloudflare.com/as6167"><u>AS6167 (CELLCO)</u></a> is the <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> used by Verizon for its mobile network. To better understand how the outage impacted Internet traffic on Verizon’s network, we took a look at HTTP request volume from AS6167 independent of geography, as well as traffic from AS6167 in various cities that were <a href="https://mashable.com/live/verizon-outage-live-updates"><u>reported</u></a> to be the most significantly impacted.</p><p>Although initial reports of connectivity problems started around 09:00 ET (13:00 UTC), we didn’t see a noticeable change in request volume at an ASN level until about two hours later. Just before 12:00 ET (16:00 UTC), Verizon <a href="https://x.com/VerizonNews/status/1840780785084985777"><u>published a social media post acknowledging the problem</u></a>, stating “<i>We are aware of an issue impacting service for some customers. Our engineers are engaged and we are working quickly to identify and solve the issue.</i>”</p><p>As the <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as6167&amp;dt=2024-09-30_2024-09-30&amp;timeCompare=2024-09-23"><u>Cloudflare Radar graph</u></a> below shows, a slight decline (-5%) in HTTP traffic as compared to traffic at the same time a week prior is first visible around 11:00 ET (15:00 UTC). Request volume fell as much as 9% below expected levels at 13:45 ET (17:45 UTC).</p><p>Just after 17:00 ET (21:00 UTC), Verizon <a href="https://x.com/VerizonNews/status/1840860310997254609"><u>published a second social media post</u></a> noting, in part, “<i>Verizon engineers are making progress on our network issue and service has started to be restored.</i>” Request volumes returned to expected levels around the same time, surpassing the previous week’s levels at 17:15 ET (21:15 UTC). At 19:18 ET (23:18 UTC), a <a href="https://x.com/VerizonNews/status/1840893978411221191"><u>social media post</u></a> from Verizon noted “Verizon engineers have fully restored today's network disruption that impacted some customers. Service has returned to normal levels.”</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fwaspr4MBTWT0zf36YEmf/86dfccdab0df85ea90edaa369520c23b/BLOG-2587_2.png" />
          </figure><p>Media reports listed cities including Chicago, Indianapolis, New York City, Atlanta, Cincinnati, Omaha, Phoenix, Denver, Minneapolis, Seattle, Los Angeles, and Las Vegas as being most impacted. In addition to looking at comparative traffic trends across the whole Verizon Wireless network, we also compared request volumes in the listed cities to the same time a week prior (September 23).</p><p>Declines in request traffic starting around 11:00 ET (15:00 UTC) are clearly visible in cities including Los Angeles, Seattle, Omaha, Denver, Phoenix, Minneapolis, Indianapolis, and Chicago. In contrast to other cities, Omaha’s request volume was already trending lower than last week heading into today’s outage, but its graph clearly shows the impact of today’s disruption as well. Omaha’s difference in traffic was the most significant, down approximately 30%, while other cities saw declines in the 10-20% range. </p>
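    <p>For readers curious about the mechanics of these comparisons, the short sketch below illustrates the kind of week-over-week percentage change described above, computed from two aligned series of hourly request counts. The series and values are hypothetical and included only to make the calculation concrete; they are not Cloudflare data.</p>
    <pre><code># Minimal sketch: week-over-week change for aligned hourly request counts.
# All values are hypothetical; this only illustrates the calculation behind
# statements like "request volume fell 9% below the same time a week prior".

def pct_change_vs_prior_week(current, prior_week):
    """Percentage change of each hour versus the same hour one week earlier."""
    return [
        100.0 * (now - then) / then if then else None
        for now, then in zip(current, prior_week)
    ]

current_day = [980, 1010, 930, 900, 955]     # hypothetical hourly request counts, outage day
prior_week  = [1000, 1005, 1020, 990, 1000]  # same hours, one week earlier

for hour, change in enumerate(pct_change_vs_prior_week(current_day, prior_week)):
    print(f"hour {hour}: {change:+.1f}% vs. prior week")
</code></pre>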
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wg0aCH9hUSpECGnzs4I9s/c7eeef574e5ba07cf26c7c667a0a9239/BLOG-2587_3.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5kE1kBRS4tTyB5TPY8ain7/5a153b7c7052352e9e44c6d88b1fab1c/BLOG-2587_4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gg28UcLpcp92UYvLpykuO/715fa25efcfcdfc6dd18b8dd5b81b0ca/BLOG-2587_5.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rcGVaVuBsK7oMhPaAfOx3/3b785c2b4d296b6d459e5f50a5d69da7/BLOG-2587_6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WuYIbLm9MdkjN5C20JgG2/6784dca790e32ad199f2fb2122e21fcb/BLOG-2587_7.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YTBl2XO8YDDgFSzAoIAxo/0a9481add616709fa8d95762cfbfc71e/BLOG-2587_8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dt2aD35Ipq0eZKuQcF8PB/2f51bd06ee1ced5fab68a76cc71f70ce/BLOG-2587_9.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NS4Wq7KB0U60IkhyzhwTs/f69159fe84dd373c6905ebe56e570b6d/BLOG-2587_10.png" />
          </figure><p>Request traffic from Las Vegas initially appeared to exhibit a bit of volatility around 11:00 ET (15:00 UTC), but continued to track fairly closely to last week’s levels before exceeding them starting at 16:00 ET (20:00 UTC). Cincinnati was tracking slightly above last week’s request volume before the outage began, and tracked closely to the prior week during the outage period.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NatjBWUvwgaU2mRRHAEUV/9a21e0c3ce61eb4fc11a0fb1d239c411/BLOG-2587_11.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jqFrqdvO203QDKMsFUWrM/1d3e538cd3f00b7e522be922eb6b3664/BLOG-2587_12.png" />
          </figure><p>We observed week-over-week traffic increases during the outage period in New York and Atlanta. However, in both cities, traffic was already slightly above last week’s levels, and that trend continued throughout the day. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UnTO0qSD24rreW9LHRk0H/043ebdf603b7604c4018944ad5192f2c/BLOG-2587_13.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gDfClWDv7hqlgZE30HRjF/b427490198d2ba4fc74c6e84753de502/BLOG-2587_14.png" />
          </figure><p>Based on our observations, it appears that voice services on Verizon’s network may have been more significantly impacted than data services, as we saw some declines in request traffic across impacted cities, but none experienced full outages.</p><p>As of this writing (19:15 ET, 23:15 UTC), no specific information has been made available by Verizon regarding the root cause of the network problems. </p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">2dMuW3phhA9ClF8ROOAYPH</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
    </channel>
</rss>