
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 23:47:45 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cable cuts, storms, and DNS: a look at Internet disruptions in Q4 2025]]></title>
            <link>https://blog.cloudflare.com/q4-2025-internet-disruption-summary/</link>
            <pubDate>Mon, 26 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ The last quarter of 2025 brought several notable disruptions to Internet connectivity. Cloudflare Radar data reveals the impact of cable cuts, power outages, extreme weather, technical problems, and more. ]]></description>
            <content:encoded><![CDATA[ <p>In 2025, we <a href="https://radar.cloudflare.com/outage-center?dateStart=2025-01-01&amp;dateEnd=2025-12-31"><u>observed over 180 Internet disruptions</u></a> spurred by a variety of causes – some were brief and partial, while others were complete outages lasting for days. In the fourth quarter, we tracked only a single <a href="#government-directed"><u>government-directed</u></a> Internet shutdown, but multiple <a href="#cable-cuts"><u>cable cuts</u></a> wreaked havoc on connectivity in several countries. <a href="#power-outages"><u>Power outages</u></a> and <a href="#weather"><u>extreme weather</u></a> disrupted Internet services in multiple places, and the ongoing <a href="#military-action"><u>conflict</u></a> in Ukraine impacted connectivity there as well. As always, a number of the disruptions we observed were due to <a href="#known-or-unspecified-technical-problems"><u>technical problems</u></a> – some acknowledged by the relevant providers, others with unknown causes. In addition, incidents at several hyperscaler <a href="#cloud-platforms"><u>cloud platforms</u></a> and <a href="#cloudflare"><u>Cloudflare</u></a> impacted the availability of websites and applications.</p><p>This post is intended as a summary of observed and confirmed disruptions, not an exhaustive or complete list of issues that occurred during the quarter. We detect these anomalies through significant deviations from expected traffic patterns observed across our network. Check out the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a> for a full list of verified anomalies and confirmed outages.</p>
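    <p>For readers who want to work with the same data directly, the verified anomalies shown in the Outage Center can also be pulled from the Radar API. The following is a minimal sketch in Python using the third-party <code>requests</code> library against the outage annotations endpoint; the exact response field names should be confirmed against the API reference, and <code>CLOUDFLARE_API_TOKEN</code> is assumed to hold a valid API token:</p>
    <pre><code>import os

import requests  # third-party HTTP client: pip install requests

API = "https://api.cloudflare.com/client/v4/radar/annotations/outages"
headers = {"Authorization": "Bearer " + os.environ["CLOUDFLARE_API_TOKEN"]}

# Ask for outage annotations covering the fourth quarter of 2025.
params = {
    "dateStart": "2025-10-01T00:00:00Z",
    "dateEnd": "2025-12-31T23:59:59Z",
    "limit": 25,
}
resp = requests.get(API, headers=headers, params=params)
resp.raise_for_status()

# Each annotation describes one verified anomaly or confirmed outage.
for annotation in resp.json()["result"]["annotations"]:
    print(annotation["startDate"], annotation["locations"], annotation["description"])
</code></pre>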
    <div>
      <h2>Government-directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p><a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4df6i7hjk25"><u>The Internet was shut down in Tanzania</u></a> on October 29 as <a href="https://www.theguardian.com/world/2025/oct/29/tanzania-election-president-samia-suluhu-hassan-poised-to-retain-power"><u>violent protests</u></a> took place during the country’s presidential election. Traffic initially fell around 12:30 local time (09:30 UTC), dropping more than 90% lower than the previous week. The disruption lasted approximately 26 hours, with <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4qec7zdnt2u"><u>traffic beginning to return</u></a> around 14:30 local time (11:30 UTC) on October 30. However, that restoration <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4gjngzck72u"><u>proved to be quite brief</u></a>, with a significant decrease in traffic occurring around 16:15 local time (13:15 UTC), approximately two hours after it returned. This second near-complete outage lasted until November 3, <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4g47vasfm2u"><u>when traffic aggressively returned</u></a> after 17:00 local time (14:00 UTC). Nominal drops in <a href="https://radar.cloudflare.com/routing/tz?dateStart=2025-10-29&amp;dateEnd=2025-11-04#announced-ip-address-space"><u>announced IPv4 and IPv6 address space</u></a> were also observed during the shutdown, but there was never a complete loss of announcements, which would have signified a total disconnection of the country from the Internet. (<a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous systems</u></a> announce IP address space to other Internet providers, letting them know what blocks of IP addresses they are responsible for.)</p><p>Tanzania’s president later <a href="https://apnews.com/article/tanzania-samia-suluhu-hassan-internet-shutdown-october-election-1ec66b897e7809865d8971699a7284e0"><u>expressed sympathy</u></a> for the members of the diplomatic community and foreigners residing in the country regarding the impact of the Internet shutdown. Internet and social media services were also <a href="https://www.dw.com/en/tanzania-internet-slowdown-comes-at-a-high-cost/a-55512732"><u>restricted in 2020</u></a> ahead of the country’s general elections.</p>
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Digicel Haiti</h3>
      <a href="#digicel-haiti">
        
      </a>
    </div>
    <p>Digicel Haiti is unfortunately no stranger to Internet disruptions caused by cable cuts, and the network experienced two more such incidents during the fourth quarter. On October 16, traffic from <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> began to fall at 14:30 local time (18:30 UTC), reaching near zero at 16:00 local time (20:00 UTC). A translated <a href="https://x.com/jpbrun30/status/1978920959089230003"><u>X post from the company’s Director General</u></a> noted: “<i>We advise our clientele that @DigicelHT is experiencing 2 cuts on its international fiber optic infrastructure.</i>” Traffic began to recover after 17:00 local time (21:00 UTC), and reached expected levels within the following hour. At 17:33 local time (21:34 UTC), the Director General <a href="https://x.com/jpbrun30/status/1978937426841063504"><u>posted</u></a> that “<i>the first fiber on the international infrastructure has been repaired</i>” and service had been restored. </p><p>On November 25, another translated <a href="https://x.com/jpbrun30/status/1993283730467963345"><u>X post from the provider’s Director General</u></a> stated that its “<i>international optical fiber infrastructure on National Road 1</i>” had been cut. We observed traffic dropping on Digicel’s network approximately an hour earlier, with a complete outage observed between 02:00 - 08:00 local time (07:00 - 13:00 UTC). A <a href="https://x.com/jpbrun30/status/1993309357438910484"><u>follow-on X post</u></a> at 08:22 local time (13:22 UTC) stated that all services had been restored.</p>
    <div>
      <h3>Cybernet/StormFiber (Pakistan)</h3>
      <a href="#cybernet-stormfiber-pakistan">
        
      </a>
    </div>
    <p>At 17:30 local time (12:30 UTC) on October 20, Internet traffic for <a href="https://radar.cloudflare.com/as9541"><u>Cybernet/StormFiber (AS9541)</u></a> dropped sharply, falling to a level approximately 50% lower than at the same time a week prior. At the same time, the network’s announced IPv4 address space dropped by over a third. The cause of these shifts was damage to the <a href="https://www.submarinecablemap.com/submarine-cable/peace-cable"><u>PEACE</u></a> submarine cable, which suffered a cut in the Red Sea near Sudan. </p><p>PEACE is one of several submarine cable systems (including <a href="https://www.submarinecablemap.com/submarine-cable/imewe"><u>IMEWE</u></a> and <a href="https://www.submarinecablemap.com/submarine-cable/seamewe-4"><u>SEA-ME-WE-4</u></a>) that carry international Internet traffic for Pakistani providers. The provider <a href="https://profit.pakistantoday.com.pk/2025/10/24/stormfiber-pledges-full-restoration-by-monday-after-weeklong-internet-disruptions/"><u>pledged to fully restore service</u></a> by October 27, but traffic and announced IPv4 address space had already recovered to near expected levels by around 02:00 local time on October 21 (21:00 UTC on October 20).</p>
    <div>
      <h3>Camtel, MTN Cameroon, Orange Cameroun</h3>
      <a href="#camtel-mtn-cameroon-orange-cameroun">
        
      </a>
    </div>
    <p>Unusual traffic patterns observed across multiple Internet providers in Cameroon on October 23 were reportedly caused by problems on the <a href="https://www.submarinecablemap.com/submarine-cable/west-africa-cable-system-wacs"><u>WACS (West Africa Cable System)</u></a> submarine cable, which connects countries along the west coast of Africa to Portugal. </p><p>A (translated) <a href="https://teleasu.tv/internet-graves-perturbations-observees-ce-jeudi-23-octobre-2025/"><u>published report</u></a> stated that MTN informed subscribers that “<i>following an incident on the WACS fiber optic cable, Internet service is temporarily disrupted</i>” and Orange Cameroun informed subscribers that “<i>due to an incident on the international access fiber, Internet service is disrupted.</i>” An <a href="https://x.com/Camtelonline/status/1981424170316464390"><u>X post from Camtel</u></a> stated “<i>Cameroon Telecommunications (CAMTEL) wishes to inform the public that a technical incident involving WACS cable equipment in Batoke (LIMBE) occurred in the early hours of 23 October 2025, causing Internet connectivity disruptions throughout the country.</i>” </p><p>Traffic across the impacted providers initially fell around 05:00 local time (04:00 UTC) before recovering to expected levels around 22:00 local time (21:00 UTC). Traffic across these networks was quite volatile during the day, dropping 90-99% at times. It isn’t clear what caused the visible spikiness in the traffic pattern – possibly attempts to shift Internet traffic to <a href="https://www.submarinecablemap.com/country/cameroon"><u>other submarine cable systems that connect to Cameroon</u></a>. Announced IP address space from <a href="https://radar.cloudflare.com/routing/as30992?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>MTN Cameroon</u></a> and <a href="https://radar.cloudflare.com/routing/as36912?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Orange Cameroun</u></a> dropped during this period as well, although <a href="https://radar.cloudflare.com/routing/as15964?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Camtel’s</u></a> announced IP address space did not change.</p><p>Connectivity in the <a href="https://radar.cloudflare.com/cf"><u>Central African Republic</u></a> and <a href="https://radar.cloudflare.com/cg"><u>Republic of Congo</u></a> was also reportedly impacted by the WACS issues.</p>



    <div>
      <h3>Claro Dominicana</h3>
      <a href="#claro-dominicana">
        
      </a>
    </div>
    <p>On December 9, we saw traffic from <a href="https://radar.cloudflare.com/as6400"><u>Claro Dominicana (AS6400)</u></a>, an Internet provider in the Dominican Republic, drop sharply around 12:15 local time (16:15 UTC). Traffic levels fell again around 14:15 local time (18:15 UTC), bottoming out 77% lower than the previous week before quickly returning to expected levels. The connectivity disruption was likely caused by two fiber optic outages, as an <a href="https://x.com/ClaroRD/status/1998468046311002183"><u>X post from the provider</u></a> during the outage noted that they were “causing intermittency and slowness in some services.” A <a href="https://x.com/ClaroRD/status/1998496113838764343"><u>subsequent post on X</u></a> from Claro stated that technicians had restored Internet services nationwide by repairing the severed fiber optic cables.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Dominican Republic</h3>
      <a href="#dominican-republic">
        
      </a>
    </div>
    <p>According to a (translated) <a href="https://x.com/ETED_RD/status/1988326178219061450"><u>X post from the Empresa de Transmisión Eléctrica Dominicana</u></a> (ETED), a transmission line outage caused an interruption in electrical service in the <a href="https://radar.cloudflare.com/do"><u>Dominican Republic</u></a> on November 11. This power outage impacted Internet traffic from the country, resulting in a <a href="https://noc.social/@cloudflareradar/115533081511310085"><u>nearly 50% drop in traffic</u></a> compared to the prior week, starting at 13:15 local time (17:15 UTC). Traffic levels remained lower until approximately 02:00 local time (06:00 UTC) on November 12, with a later <a href="https://x.com/ETED_RD/status/1988575130990330153"><u>(translated) X post from ETED</u></a> noting “<i>At 2:20 a.m. we have completed the recovery of the national electrical system, supplying 96% of the demand…</i>”</p><p>A subsequent <a href="https://dominicantoday.com/dr/local/2025/11/27/manual-line-disconnection-triggered-nationwide-blackout-report-says/"><u>technical report found</u></a> that “<i>the blackout began at the 138 kV San Pedro de Macorís I substation, where a live line was manually disconnected, triggering a high-intensity short circuit. Protection systems responded immediately, but the fault caused several nearby lines to disconnect, separating 575 MW of generation in the eastern region from the rest of the grid. The imbalance caused major power plants to trip automatically as part of their built-in safety mechanisms.</i>”</p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>On December 9, a <a href="https://www.tuko.co.ke/kenya/612181-kenya-power-reveals-7-pm-nationwide-blackout-multiple-regions/"><u>major power outage</u></a> impacted multiple regions across <a href="https://radar.cloudflare.com/ke"><u>Kenya</u></a>. Kenya Power explained that the outage “<i>was triggered by an incident on the regional Kenya-Uganda interconnected power network, which caused a disturbance on the Kenyan side of the system</i>” and claimed that “<i>[p]ower was restored to most of the affected areas within approximately 30 minutes.</i>” However, impacts to Internet connectivity lasted for nearly four hours, between 19:15 - 23:00 local time (16:15 - 20:00 UTC). The power outage caused traffic to drop as much as 18% at a national level, with the traffic shifts most visible in <a href="https://radar.cloudflare.com/traffic/7668902"><u>Nakuru County</u></a> and <a href="https://radar.cloudflare.com/traffic/192709"><u>Kiambu County</u></a>.</p>


    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Odesa, Ukraine</h3>
      <a href="#odesa-ukraine">
        
      </a>
    </div>
    <p><a href="https://odessa-journal.com/russia-carried-out-a-massive-drone-attack-on-the-odessa-region"><u>Russian drone strikes</u></a> on the <a href="https://radar.cloudflare.com/traffic/698738"><u>Odesa region</u></a> in <a href="https://radar.cloudflare.com/ua"><u>Ukraine</u></a> on December 12 damaged warehouses and energy infrastructure, with the latter causing power outages in parts of the region. Those outages disrupted Internet connectivity, resulting in <a href="https://x.com/CloudflareRadar/status/2000993223406211327?s=20"><u>traffic dropping by as much as 57%</u></a> as compared to the prior week. After the initial drop at midnight on December 13 (22:00 UTC on December 12), traffic gradually recovered over the following several days, returning to expected levels around 14:30 local time (12:30 UTC) on December 16.</p>
    <div>
      <h2>Weather</h2>
      <a href="#weather">
        
      </a>
    </div>
    
    <div>
      <h3>Jamaica</h3>
      <a href="#jamaica">
        
      </a>
    </div>
    <p><a href="https://www.nytimes.com/live/2025/10/28/weather/hurricane-melissa-jamaica-landfall?smid=url-share#df989e67-a90e-50fb-92d0-8d5d52f76e84"><u>Hurricane Melissa</u></a> made landfall on <a href="https://radar.cloudflare.com/jm"><u>Jamaica</u></a> on October 28 and left a trail of damage and destruction in its path. Associated <a href="https://www.jamaicaobserver.com/2025/10/28/eyeonmelissa-35-jps-customers-without-power/"><u>power outages</u></a> and infrastructure damage impacted Internet connectivity, causing traffic to initially <a href="https://x.com/CloudflareRadar/status/1983266694715084866"><u>drop by approximately half</u></a>, <a href="https://x.com/CloudflareRadar/status/1983217966347866383"><u>starting</u></a> around 06:15 local time (11:15 UTC), ultimately reaching as much as <a href="https://x.com/CloudflareRadar/status/1983357587707048103"><u>70% lower</u></a> than the previous week. Internet traffic from Jamaica remained well below pre-hurricane levels for several days, and ultimately started to make greater progress towards expected levels <a href="https://x.com/CloudflareRadar/status/1985708253872107713?s=20"><u>during the morning of November 4</u></a>. It can often take weeks or months for Internet traffic from a country to return to “normal” levels following storms that cause massive and widespread damage – while power may be largely restored within several days, damage to physical infrastructure takes significantly longer to address.</p>
    <div>
      <h3>Sri Lanka &amp; Indonesia</h3>
      <a href="#sri-lanka-indonesia">
        
      </a>
    </div>
    <p>On November 26, <a href="https://apnews.com/article/indonesia-sri-lanka-thailand-malaysia-floods-landsides-aa9947df1f6192a3c6c72ef58659d4d2"><u>Cyclone Senyar</u></a> caused catastrophic floods and landslides in <a href="https://radar.cloudflare.com/lk"><u>Sri Lanka</u></a> and <a href="https://radar.cloudflare.com/id"><u>Indonesia</u></a>, killing over 1,000 people and damaging telecommunications and power infrastructure across these countries. The infrastructure damage <a href="https://x.com/CloudflareRadar/status/1996233525989720083"><u>disrupted Internet connectivity</u></a>, lowering traffic levels across multiple regions.</p><p>In Sri Lanka, regions outside the main Western Province were the most affected, and several provinces saw traffic drop <a href="https://x.com/CloudflareRadar/status/1996233528032301513"><u>between 80% and 95%</u></a> as compared to the prior week, including <a href="https://radar.cloudflare.com/traffic/1232860?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Western</u></a>, <a href="https://radar.cloudflare.com/traffic/1227618?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Southern</u></a>, <a href="https://radar.cloudflare.com/traffic/1225265?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Uva</u></a>, <a href="https://radar.cloudflare.com/traffic/8133521?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Eastern</u></a>, <a href="https://radar.cloudflare.com/traffic/7671049?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Northern</u></a>, <a href="https://radar.cloudflare.com/traffic/1232870?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Central</u></a>, and <a href="https://radar.cloudflare.com/traffic/1228435?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Sabaragamuwa</u></a>.</p>

<p>In <a href="https://x.com/CloudflareRadar/status/1996233530267885938"><u>Indonesia</u></a>, <a href="https://radar.cloudflare.com/traffic/1215638?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Aceh</u></a> and the Sumatra regions saw the biggest Internet disruptions. In Aceh, traffic initially dropped over 75% as compared to the previous week. In Sumatra, <a href="https://radar.cloudflare.com/traffic/1213642?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Sumatra</u></a> was the most affected, with an early 30% drop as compared to the previous week, before starting to recover more actively the following week.</p>


    <div>
      <h2>Known or unspecified technical problems</h2>
      <a href="#known-or-unspecified-technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Smartfren (Indonesia)</h3>
      <a href="#smartfren-indonesia">
        
      </a>
    </div>
    <p>On October 3, subscribers to Indonesian Internet provider <a href="https://radar.cloudflare.com/as18004"><u>Smartfren (AS18004)</u></a> experienced a service disruption. The issues were <a href="https://x.com/smartfrenworld/status/1973957300466643203"><u>acknowledged by the provider in an X post</u></a>, which stated (in translation), “<i>Currently, telephone, SMS and data services are experiencing problems in several areas.</i>” Traffic from the provider fell as much as 84%, starting around 09:00 local time (02:00 UTC). The disruption lasted for approximately eight hours, as traffic returned to expected levels around 17:00 local time (10:00 UTC). Smartfren did not provide any additional information on what caused the service problems.</p>
    <div>
      <h3>Vodafone UK</h3>
      <a href="#vodafone-uk">
        
      </a>
    </div>
    <p>Major British Internet provider Vodafone UK (<a href="https://radar.cloudflare.com/as5378"><u>AS5378</u></a> &amp; <a href="https://radar.cloudflare.com/as25135"><u>AS25135</u></a>) experienced a brief service outage on October 23. At 15:00 local time (14:00 UTC), traffic on both Vodafone <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> dropped to zero. Announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as5378?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS5378</u></a> fell by 75%, while announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as25135?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS25135</u></a> disappeared entirely. Both Internet traffic and address space recovered two hours later, returning to expected levels around 17:00 local time (16:00 UTC). Vodafone did not provide any information on their social media channels about the cause of the outage, and their <a href="https://www.vodafone.co.uk/network/status-checker"><u>network status checker page</u></a> was also unavailable during the outage.</p>






    <div>
      <h3>Fastweb (Italy)</h3>
      <a href="#fastweb-italy">
        
      </a>
    </div>
    <p>According to a <a href="https://tg24.sky.it/tecnologia/2025/10/22/fastweb-down-problemi-internet-oggi"><u>published report</u></a>, a DNS resolution issue disrupted Internet services for customers of Italian provider <a href="https://radar.cloudflare.com/as12874"><u>Fastweb (AS12874)</u></a> on October 22, causing observed traffic volumes to drop by over 75%. Fastweb <a href="https://www.firstonline.info/en/fastweb-down-oggi-internet-bloccato-in-tutta-italia-migliaia-di-segnalazioni/"><u>acknowledged the issue</u></a>, which impacted wired Internet customers between 09:30 - 13:00 local time (08:30 - 12:00 UTC).</p><p>Although not an Internet outage caused by connectivity failure, the impact of DNS resolution issues on Internet traffic is very similar. When a provider’s <a href="https://www.cloudflare.com/learning/dns/dns-server-types/"><u>DNS resolver</u></a> is experiencing problems, switching to a service like Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS resolver</u></a> will often restore connectivity.</p>
    <div>
      <h3>SBIN, MTN Benin, Etisalat Benin</h3>
      <a href="#sbin-mtn-benin-etisalat-benin">
        
      </a>
    </div>
    <p>On December 7, a concurrent drop in traffic was observed across <a href="https://radar.cloudflare.com/as28683"><u>SBIN (AS28683)</u></a>, <a href="https://radar.cloudflare.com/as37424"><u>MTN Benin (AS37424)</u></a>, and <a href="https://radar.cloudflare.com/as37136"><u>Etisalat Benin (AS37136)</u></a>. Between 18:30 - 19:30 local time (17:30 - 18:30 UTC), traffic dropped as much as 80% as compared to the prior week at a country level, nearly 100% at Etisalat and MTN, and over 80% at SBIN.</p><p>While an <a href="https://www.reuters.com/world/africa/soldiers-benins-national-television-claim-have-seized-power-2025-12-07/"><u>attempted coup</u></a> had taken place earlier in the day, it is unclear whether the observed Internet disruption was related in any way. From a routing perspective, all three impacted networks share <a href="https://radar.cloudflare.com/as174"><u>Cogent (AS174)</u></a> as an upstream provider, so a localized issue at Cogent may have contributed to the brief outage.  </p>



    <div>
      <h3>Cellcom Israel</h3>
      <a href="#cellcom-israel">
        
      </a>
    </div>
    <p>According to a <a href="https://www.ynetnews.com/article/2gpt1kt35"><u>reported announcement</u></a> from Israeli provider <a href="https://radar.cloudflare.com/as1680"><u>Cellcom (AS1680)</u></a>, on December 18, there was “<i>a malfunction affecting Internet connectivity that is impacting some of our customers.</i>” This malfunction dropped traffic nearly 70% as compared to the prior week, and occurred between 09:30 - 11:00 local time (07:30 - 09:00 UTC). The “malfunction” may have been a DNS failure, according to a <a href="https://www.israelnationalnews.com/news/419552"><u>published report</u></a>.</p>
    <div>
      <h3>Partner Communications (Israel)</h3>
      <a href="#partner-communications-israel">
        
      </a>
    </div>
    <p>Closing out 2025, on December 30, a major technical failure at Israeli provider <a href="https://radar.cloudflare.com/as12400"><u>Partner Communications (AS12400)</u></a> <a href="https://www.ynetnews.com/tech-and-digital/article/hjewkibnwe"><u>disrupted</u></a> mobile, TV, and Internet services across the country. Internet traffic from Partner fell by two-thirds as compared to the previous week between 14:00 - 15:00 local time (12:00 - 13:00 UTC). During the outage, queries to Cloudflare’s 1.1.1.1 public DNS resolver spiked, suggesting that the problem may have been related to Partner’s DNS infrastructure. However, the provider did not publicly confirm what caused the outage.</p>




    <div>
      <h2>Cloud Platforms</h2>
      <a href="#cloud-platforms">
        
      </a>
    </div>
    <p>During the fourth quarter, we launched a new <a href="https://radar.cloudflare.com/cloud-observatory"><u>Cloud Observatory</u></a> page on Radar that tracks availability and performance issues at a region level across hyperscaler cloud platforms, including <a href="https://radar.cloudflare.com/cloud-observatory/amazon"><u>Amazon Web Services</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/microsoft"><u>Microsoft Azure</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/google"><u>Google Cloud Platform</u></a>, and <a href="https://radar.cloudflare.com/cloud-observatory/oracle"><u>Oracle Cloud Infrastructure</u></a>.</p>
    <div>
      <h3>Amazon Web Services</h3>
      <a href="#amazon-web-services">
        
      </a>
    </div>
    <p>On October 20, the Amazon Web Services us-east-1 region in Northern Virginia experienced “<a href="https://health.aws.amazon.com/health/status?eventID=arn:aws:health:us-east-1::event/MULTIPLE_SERVICES/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE_BA540_514A652BE1A"><u>increased error rates and latencies</u></a>” that affected multiple services within the region. The issues impacted not only customers with public-facing Web sites and applications that rely on infrastructure within the region, but also Cloudflare customers that have origin resources hosted in us-east-1.</p><p>We began to see the impact of the problems around 06:30 UTC, as the share of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#success-rate"><u>error</u></a> (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status#server_error_responses"><u>5xx-class</u></a>) responses began to climb, reaching as high as 17% around 08:00 UTC. The number of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#connection-failures"><u>failures encountered when attempting to connect to origins</u></a> in us-east-1 climbed as well, peaking around 12:00 UTC.</p>

<p>The impact could also be clearly seen in key network performance metrics, which remained elevated throughout the incident, returning to normal levels just before the end of the incident, around 23:00 UTC. Both <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tcp-handshake-duration"><u>TCP</u></a> and <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tls-handshake-duration"><u>TLS</u></a> handshake durations got progressively worse throughout the incident—these metrics measure the amount of time needed for Cloudflare to establish TCP and TLS connections respectively with customer origin servers in us-east-1. In addition, the amount of time elapsed before Cloudflare <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1/#response-header-receive-duration"><u>received response headers</u></a> from the origin increased significantly during the first several hours of the incident, before gradually returning to expected levels.  </p>
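    <p>These two handshake metrics are straightforward to approximate from any vantage point using only the Python standard library. The sketch below times the TCP and TLS handshakes separately for a single connection; <code>example.com</code> is a placeholder origin, and Radar’s production measurements are of course far more extensive than this one-shot probe:</p>
    <pre><code>import socket
import ssl
import time

host, port = "example.com", 443  # placeholder origin

t0 = time.perf_counter()
sock = socket.create_connection((host, port), timeout=10)
tcp_ms = (time.perf_counter() - t0) * 1000  # TCP three-way handshake

t1 = time.perf_counter()
ctx = ssl.create_default_context()
tls_sock = ctx.wrap_socket(sock, server_hostname=host)  # performs the TLS handshake
tls_ms = (time.perf_counter() - t1) * 1000

print("TCP handshake: %.1f ms, TLS handshake: %.1f ms" % (tcp_ms, tls_ms))
tls_sock.close()
</code></pre>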





    <div>
      <h3>Microsoft Azure</h3>
      <a href="#microsoft-azure">
        
      </a>
    </div>
    <p>On October 29, Microsoft Azure experienced an <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>incident</u></a> impacting <a href="https://azure.microsoft.com/en-us/products/frontdoor"><u>Azure Front Door</u></a>, its content delivery network service. According to <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>Azure's report on the incident</u></a>, “<i>A specific sequence of customer configuration changes, performed across two different control plane build versions, resulted in incompatible customer configuration metadata being generated. These customer configuration changes themselves were valid and non-malicious – however they produced metadata that, when deployed to edge site servers, exposed a latent bug in the data plane. This incompatibility triggered a crash during asynchronous processing within the data plane service.</i>”</p><p>The incident report marked the start time at 15:41 UTC, although we observed the volume of <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#connection-failures"><u>failed connection attempts</u></a> to Azure-hosted origins begin to climb about 45 minutes prior. The TCP and TLS handshake metrics also became more volatile during the incident period, with <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tcp-handshake-duration"><u>TCP handshakes</u></a> taking over 50% longer at times, and <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tls-handshake-duration"><u>TLS handshakes</u></a> taking nearly 200% longer at peak. The impacted metrics began to improve after 20:00 UTC, and according to Microsoft, the incident ended at 00:05 UTC on October 30.</p>



    <div>
      <h2>Cloudflare</h2>
      <a href="#cloudflare">
        
      </a>
    </div>
    <p>In addition to the outages discussed above, Cloudflare also experienced two disruptions during the fourth quarter. While these were not Internet outages in the classic sense, they did prevent users from accessing Web sites and applications delivered and protected by Cloudflare when they occurred.</p><p>The first incident took place on November 18, and was caused by a software failure triggered by a change to one of our database systems' permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. Additional details, including a root cause analysis and timeline, can be found in the associated <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>blog post</u></a>.</p><p>The second incident occurred on December 5, and impacted a subset of customers, accounting for approximately 28% of all HTTP traffic served by Cloudflare. It was triggered by changes being made to our request body parsing logic while attempting to detect and mitigate a newly disclosed industry-wide React Server Components vulnerability. A post-mortem <a href="https://blog.cloudflare.com/5-december-2025-outage/"><u>blog post</u></a> contains additional details, including a root cause analysis and timeline.</p><p>For more information about the work underway at Cloudflare to prevent outages like these from happening again, check out our <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>blog post</u></a> detailing “Code Orange: Fail Small.”</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The disruptions observed in the fourth quarter underscore the importance of real-time data in maintaining global connectivity. Whether it’s a government-ordered shutdown or a minor technical issue, transparency allows the technical community to respond faster and more effectively. We will continue to track these shifts on Cloudflare Radar, providing the insights needed to navigate the complexities of modern networking. We share our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>email</u></a>.</p><p>As a reminder, while these blog posts feature graphs from <a href="https://radar.cloudflare.com/"><u>Radar</u></a> and the <a href="https://radar.cloudflare.com/explorer"><u>Radar Data Explorer</u></a>, the underlying data is available from our <a href="https://developers.cloudflare.com/api/resources/radar/"><u>API</u></a>. You can use the API to retrieve data to do your own local monitoring or analysis, or you can use the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/radar#cloudflare-radar-mcp-server-"><u>Radar MCP server</u></a> to incorporate Radar data into your AI tools.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[Microsoft Azure]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">6dRT0oOSVcyQzjnZCkzH7S</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Magic Cloud Networking simplifies security, connectivity, and management of public clouds]]></title>
            <link>https://blog.cloudflare.com/introducing-magic-cloud-networking/</link>
            <pubDate>Wed, 06 Mar 2024 14:01:00 GMT</pubDate>
            <description><![CDATA[ Introducing Magic Cloud Networking, a new set of capabilities to visualize and automate cloud networks to give our customers secure, easy, and seamless connection to public cloud environments ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EE4QTE18JtBWAk0XucBb2/818464eba98f9928bbfa7bfe179780d8/image5-5.png" />
            
            </figure><p>Today we are excited to announce Magic Cloud Networking, supercharged by <a href="https://www.cloudflare.com/press-releases/2024/cloudflare-enters-multicloud-networking-market-unlocks-simple-secure/">Cloudflare’s recent acquisition of Nefeli Networks</a>’ innovative technology. These new capabilities to visualize and automate cloud networks will give our customers secure, easy, and seamless connection to public cloud environments.</p><p>Public clouds offer organizations a scalable and on-demand IT infrastructure without the overhead and expense of running their own datacenter. <a href="https://www.cloudflare.com/learning/cloud/what-is-cloud-networking/">Cloud networking</a> is foundational to applications that have been migrated to the cloud, but is difficult to manage without automation software, especially when operating at scale across multiple cloud accounts. Magic Cloud Networking uses familiar concepts to provide a single interface that controls and unifies multiple cloud providers’ native network capabilities to create reliable, cost-effective, and secure cloud networks.</p><p>Nefeli’s approach to multi-cloud networking solves the problem of building and operating end-to-end networks within and across public clouds, allowing organizations to <a href="https://www.cloudflare.com/application-services/solutions/">securely leverage applications</a> spanning any combination of internal and external resources. Adding Nefeli’s technology will make it easier than ever for our customers to connect and protect their users, private networks and applications.</p>
    <div>
      <h2>Why is cloud networking difficult?</h2>
      <a href="#why-is-cloud-networking-difficult">
        
      </a>
    </div>
    <p>Compared with a traditional on-premises data center network, cloud networking promises simplicity:</p><ul><li><p>Much of the complexity of physical networking is abstracted away from users because the physical and Ethernet layers are not part of the network service exposed by the cloud provider.</p></li><li><p>There are fewer control plane protocols; instead, the cloud providers deliver a simplified <a href="https://www.cloudflare.com/learning/network-layer/what-is-sdn/">software-defined network (SDN)</a> that is fully programmable via API.</p></li><li><p>There is capacity — from zero up to very large — available instantly and on-demand, only charging for what you use.</p></li></ul><p>However, that promise has not yet been fully realized. Our customers have described several reasons cloud networking is difficult:</p><ul><li><p><b>Poor end-to-end visibility</b>: Cloud network visibility tools are difficult to use, and silos that impede end-to-end monitoring and troubleshooting exist even within single cloud providers.</p></li><li><p><b>Faster pace</b>: Traditional IT management approaches clash with the promise of the cloud: instant deployment available on-demand. Familiar ClickOps and CLI-driven procedures must be replaced by automation to meet the needs of the business.</p></li><li><p><b>Different technology</b>: Established network architectures in on-premises environments do not seamlessly transition to a public cloud. The missing Ethernet layer and advanced control plane protocols were critical in many network designs.</p></li><li><p><b>New cost models</b>: The dynamic pay-as-you-go usage-based cost models of the public clouds are not compatible with established approaches built around fixed-cost circuits and 5-year depreciation. Network solutions are often architected with financial constraints, and accordingly, different architectural approaches are sensible in the cloud.</p></li><li><p><b>New security risks</b>: Securing public clouds with true <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">zero trust</a> and least-privilege demands mature operating processes and automation, and familiarity with cloud-specific policies and IAM controls.</p></li><li><p><b>Multi-vendor:</b> Oftentimes enterprise networks have used single-vendor sourcing to facilitate interoperability, operational efficiency, and targeted hiring and training. Operating a network that extends beyond a single cloud, into other clouds or on-premises environments, is a multi-vendor scenario.</p></li></ul><p>Nefeli considered all these problems and the tensions between different customer perspectives to identify where the problem should be solved.</p>
    <div>
      <h2>Trains, planes, and automation</h2>
      <a href="#trains-planes-and-automation">
        
      </a>
    </div>
    <p>Consider a train system. To operate effectively it has three key layers:</p><ul><li><p>tracks and trains</p></li><li><p>electronic signals</p></li><li><p>a company to manage the system and sell tickets.</p></li></ul><p>A train system with good tracks, trains, and signals could still be operating below its full potential because its agents are unable to keep up with passenger demand. The result is that passengers cannot plan itineraries or purchase tickets.</p><p>The train company eliminates bottlenecks in process flow by simplifying the schedules, simplifying the pricing, providing agents with better booking systems, and installing automated ticket machines. Now the same fast and reliable infrastructure of tracks, trains, and signals can be used to its full potential.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/342dyvSqIvF0hJJoCDyf0I/8e92b93f922412344fa34cbbea7a4be1/image8.png" />
            
            </figure>
    <div>
      <h3>Solve the right problem</h3>
      <a href="#solve-the-right-problem">
        
      </a>
    </div>
    <p>In networking, there are an analogous set of three layers, called the <a href="https://www.cloudflare.com/learning/network-layer/what-is-the-control-plane/">networking planes</a>:</p><ul><li><p><b>Data Plane:</b> the network paths that transport data (in the form of packets) from source to destination.</p></li><li><p><b>Control Plane:</b> protocols and logic that change how packets are steered across the data plane.</p></li><li><p><b>Management Plane:</b> the configuration and monitoring interfaces for the data plane and control plane.</p></li></ul><p>In public cloud networks, these layers map to:</p><ul><li><p><b>Cloud Data Plane:</b> The underlying cables and devices are exposed to users as the <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud/">Virtual Private Cloud (VPC)</a> or Virtual Network (VNet) service that includes subnets, routing tables, security groups/ACLs and additional services such as load-balancers and VPN gateways.</p></li><li><p><b>Cloud Control Plane:</b> In place of distributed protocols, the cloud control plane is a <a href="https://www.cloudflare.com/learning/network-layer/what-is-sdn/">software defined network (SDN)</a> that, for example, programs static route tables. (There is limited use of traditional control plane protocols, such as BGP to interface with external networks and ARP to interface with VMs.)</p></li><li><p><b>Cloud Management Plane:</b> An administrative interface with a UI and API which allows the admin to fully configure the data and control planes. It also provides a variety of monitoring and logging capabilities that can be enabled and integrated with 3rd party systems.</p></li></ul><p>Like our train example, most of the problems that our customers experience with cloud networking are in the third layer: the management plane.</p><p>Nefeli simplifies, unifies, and automates cloud network management and operations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nb9xcqGaRRaYIe0lvlbIs/83da6094ec1f7bc3e4a7d72a17fc511c/image2-6.png" />
            
            </figure>
    <div>
      <h3>Avoid cost and complexity</h3>
      <a href="#avoid-cost-and-complexity">
        
      </a>
    </div>
    <p>One common approach to tackle management problems in cloud networks is introducing Virtual Network Functions (VNFs), which are <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-machine/">virtual machines (VMs)</a> that do packet forwarding, in place of native cloud data plane constructs. Some VNFs are routers, firewalls, or load-balancers ported from a traditional network vendor’s hardware appliances, while others are software-based proxies often built on open-source projects like NGINX or Envoy. Because VNFs mimic their physical counterparts, IT teams could continue using familiar management tooling, but VNFs have downsides:</p><ul><li><p>VMs do not have custom network silicon and so instead rely on raw compute power. The VM is sized for the peak anticipated load and then typically runs 24x7x365. This drives a high cost of compute regardless of the actual utilization.</p></li><li><p>High-availability (HA) relies on fragile, costly, and complex network configuration.</p></li><li><p>Service insertion — the configuration to put a VNF into the packet flow — often forces packet paths that incur additional bandwidth charges.</p></li><li><p>VNFs are typically licensed similarly to their on-premises counterparts and are expensive.</p></li><li><p>VNFs lock in the enterprise and can exclude them from benefiting from improvements in the cloud’s native data plane offerings.</p></li></ul><p>For these reasons, enterprises are turning away from VNF-based solutions and increasingly looking to rely on the native network capabilities of their cloud service providers. The built-in public cloud networking is elastic, performant, robust, and priced on usage, with high-availability options integrated and backed by the cloud provider’s service level agreement.</p><p>In our train example, the tracks and trains are good. Likewise, the cloud network data plane is highly capable. Changing the data plane to solve management plane problems is the wrong approach. To make this work at scale, organizations need a solution that works together with the native network capabilities of cloud service providers.</p><p>Nefeli leverages native cloud data plane constructs rather than third-party VNFs.</p>
    <div>
      <h2>Introducing Magic Cloud Networking</h2>
      <a href="#introducing-magic-cloud-networking">
        
      </a>
    </div>
    <p>The Nefeli team has joined Cloudflare to integrate cloud network management functionality with Cloudflare One. This capability is called Magic Cloud Networking and with it, enterprises can use the Cloudflare dashboard and API to manage their public cloud networks and connect with Cloudflare One.</p>
    <div>
      <h3>End-to-end</h3>
      <a href="#end-to-end">
        
      </a>
    </div>
    <p>Just as train providers are focused only on completing train journeys in their own network, cloud service providers deliver network connectivity and tools within a single cloud account. Many large enterprises have hundreds of cloud accounts across multiple cloud providers. In an end-to-end network this creates disconnected networking silos which introduce operational inefficiencies and risk.</p><p>Imagine you are trying to organize a train journey across Europe, and no single train company serves both your origin and destination. You know they all offer the same basic service: a seat on a train. However, your trip is difficult to arrange because it involves multiple trains operated by different companies with their own schedules and ticketing rates, all in different languages!</p><p>Magic Cloud Networking is like an online travel agent that aggregates multiple transportation options, books multiple tickets, facilitates changes after booking, and then delivers travel status updates.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P7EhpKlEfTnU7WdPq4dt6/4de908c385b406a89c97f4dc274b3acb/image6.png" />
            
            </figure><p>Through the Cloudflare dashboard, you can discover all of your network resources across accounts and cloud providers and visualize your end-to-end network in a single interface. Once Magic Cloud Networking discovers your networks, you can build a scalable network through a fully automated and simple workflow.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2qXuFK0Q1q96NtH0FYRNg0/3c449510b24a3f206b63b01e1799dddd/image3-8.png" />
            
            </figure><p><i>Resource inventory shows all configuration in a single and responsive UI</i></p>
    <div>
      <h3>Taming per-cloud complexity</h3>
      <a href="#taming-per-cloud-complexity">
        
      </a>
    </div>
    <p>Public clouds are used to deliver applications and services. Each cloud provider offers a composable stack of modular building blocks (resources) that start with the foundation of a billing account and then add on security controls. The next foundational layer, for server-based applications, is VPC networking. Additional resources are built on the VPC network foundation until you have compute, storage, and network infrastructure to host the enterprise application and data. Even relatively simple architectures can be composed of hundreds of resources.</p><p>The trouble is, these resources expose abstractions that are different from the building blocks you would use to build a service on prem, the abstractions differ between cloud providers, and they form a web of dependencies with complex rules about how configuration changes are made (rules which differ between resource types and cloud providers). For example, say I create 100 VMs, and connect them to an IP network. Can I make changes to the IP network while the VMs are using the network? The answer: it depends.</p><p>Magic Cloud Networking handles these differences and complexities for you. It configures native cloud constructs such as VPN gateways, routes, and security groups to securely connect your cloud VPC network to Cloudflare One without having to learn each cloud’s incantations for creating VPN connections and hubs.</p>
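    <p>To make the “incantations” point concrete, here is a sketch of what wiring up just one side of an IPsec connection looks like when driving a single provider’s API by hand, using AWS’s <code>boto3</code> SDK. The resource IDs and IP address are placeholders, the calls are illustrative rather than a complete working topology, and the equivalent steps on other clouds use entirely different constructs:</p>
    <pre><code>import boto3  # AWS SDK for Python: pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a virtual private gateway and attach it to an existing VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Register the remote endpoint, then create the VPN connection itself.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="198.51.100.1",  # placeholder documentation address
    BgpAsn=65000,
)["CustomerGateway"]

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    VpnGatewayId=vgw["VpnGatewayId"],
    CustomerGatewayId=cgw["CustomerGatewayId"],
)
print(vpn["VpnConnection"]["VpnConnectionId"])
# Route propagation, tunnel options, and security groups still remain,
# and none of this carries over to Azure or GCP.
</code></pre>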
    <div>
      <h3>Continuous, coordinated automation</h3>
      <a href="#continuous-coordinated-automation">
        
      </a>
    </div>
    <p>Returning to our train system example, what if the railway maintenance staff find a dangerous fault on the railroad track? They manually set the signal to a stop light to prevent any oncoming trains from using the faulty section of track. Then what if, by unfortunate coincidence, the scheduling office changes the signal schedule and sets the signals remotely, clearing the safety measure put in place by the maintenance crew? Now there is a problem that no one knows about, and the root cause is that multiple authorities can change the signals via different interfaces without coordination.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VNaeoX2TNwYwZsweytYSZ/40806d02108119204f638ed5f111d5d0/image1-10.png" />
            
            </figure><p>The same problem exists in cloud networks: configuration changes are made by different teams using different automation and configuration interfaces across a spectrum of roles such as billing, support, security, networking, firewalls, database, and application development.</p><p>Once your network is deployed, Magic Cloud Networking monitors its configuration and health, enabling you to be confident that the security and connectivity you put in place yesterday is still in place today. It tracks the cloud resources it is responsible for, automatically reverting drift if they are changed out-of-band, while allowing you to manage other resources, like storage buckets and application servers, with other automation tools. And, as you change your network, Cloudflare takes care of route management, injecting and withdrawing routes globally across Cloudflare and all connected cloud provider networks.</p><p>Magic Cloud Networking is fully programmable via API, and can be integrated into existing automation toolchains.</p>
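    <p>The drift-reversion behavior described above follows the same reconcile-loop pattern used by most intent-based systems: read the deployed state, diff it against declared intent, and re-apply intent when they diverge. The sketch below illustrates only the pattern; the in-memory dictionary and both helper functions are hypothetical stand-ins for real provider API calls, not Cloudflare’s implementation:</p>
    <pre><code># Declared intent for the resources this automation owns.
DESIRED = {"vpn-tunnel-1": {"routes": ["10.0.0.0/8"]}}

# Fake provider state; a real system would call the cloud provider's API.
_cloud = {"vpn-tunnel-1": {"routes": ["10.0.0.0/8"]}}

def fetch_actual_state(resource_id):
    return _cloud[resource_id]  # stand-in for a provider "describe" call

def apply_state(resource_id, state):
    _cloud[resource_id] = {"routes": list(state["routes"])}  # stand-in "update" call

def reconcile_once():
    for resource_id, desired in DESIRED.items():
        if fetch_actual_state(resource_id) != desired:
            apply_state(resource_id, desired)  # revert out-of-band drift
            print(resource_id, "drift reverted")
        else:
            print(resource_id, "in sync")

# Simulate an out-of-band change, then let the reconciler catch and revert it.
_cloud["vpn-tunnel-1"]["routes"].append("192.0.2.0/24")
reconcile_once()
</code></pre>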
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3h360ewBWWCRUjhqjrm6wF/5006f8267a880b98ccbe9bfc91cb9029/image7-1.png" />
            
            </figure><p><i>The interface warns when cloud network infrastructure drifts from intent</i></p>
    <div>
      <h2>Ready to start conquering cloud networking?</h2>
      <a href="#ready-to-start-conquering-cloud-networking">
        
      </a>
    </div>
    <p>We are thrilled to introduce Magic Cloud Networking as another pivotal step to fulfilling the promise of the <a href="https://www.cloudflare.com/connectivity-cloud/">Connectivity Cloud</a>. This marks our initial stride in empowering customers to seamlessly integrate Cloudflare with their public clouds to get securely connected, stay securely connected, and gain flexibility and cost savings as they go.</p><p>Join us on this journey for early access: learn more and sign up <a href="https://cloudflare.com/lp/cloud-networking/">here</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/610Vl5u7JVsnszRAmQz0Yt/3bb2a75f47826c1c1969c1d9b0c1db8d/image4-10.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[EC2]]></category>
            <category><![CDATA[Google Cloud]]></category>
            <category><![CDATA[Microsoft Azure]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Multi-Cloud]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Magic WAN]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Acquisitions]]></category>
            <guid isPermaLink="false">2qMDjBOoY9rSrSaeNzUDzL</guid>
            <dc:creator>Steve Welham</dc:creator>
            <dc:creator>David Naylor</dc:creator>
        </item>
    </channel>
</rss>