
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 21:32:09 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cable cuts, storms, and DNS: a look at Internet disruptions in Q4 2025]]></title>
            <link>https://blog.cloudflare.com/q4-2025-internet-disruption-summary/</link>
            <pubDate>Mon, 26 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ The last quarter of 2025 brought several notable disruptions to Internet connectivity. Cloudflare Radar data reveals the impact of cable cuts, power outages, extreme weather, technical problems, and more. ]]></description>
            <content:encoded><![CDATA[ <p>In 2025, we <a href="https://radar.cloudflare.com/outage-center?dateStart=2025-01-01&amp;dateEnd=2025-12-31"><u>observed over 180 Internet disruptions</u></a> spurred by a variety of causes – some were brief and partial, while others were complete outages lasting for days. In the fourth quarter, we tracked only a single <a href="#government-directed"><u>government-directed</u></a> Internet shutdown, but multiple <a href="#cable-cuts"><u>cable cuts</u></a> wreaked havoc on connectivity in several countries. <a href="#power-outages"><u>Power outages</u></a> and <a href="#weather"><u>extreme weather</u></a> disrupted Internet services in multiple places, and the ongoing <a href="#military-action"><u>conflict</u></a> in Ukraine impacted connectivity there as well. As always, a number of the disruptions we observed were due to <a href="#known-or-unspecified-technical-problems"><u>technical problems</u></a> – with some acknowledged by the relevant providers, while others had unknown causes. In addition, incidents at several hyperscaler <a href="#cloud-platforms"><u>cloud platforms</u></a> and <a href="#cloudflare"><u>Cloudflare</u></a> impacted the availability of websites and applications.  </p><p>This post is intended as a summary overview of observed and confirmed disruptions and is not an exhaustive or complete list of issues that have occurred during the quarter. These anomalies are detected through significant deviations from expected traffic patterns observed across our network. Check out the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a> for a full list of verified anomalies and confirmed outages. </p>
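<p>As a rough illustration of that week-over-week comparison, the sketch below flags hours where traffic falls far below the same hour one week earlier. The function, threshold, and data are invented for illustration; this is not Cloudflare's actual detection pipeline.</p>

```python
# Toy week-over-week anomaly check: flag hours where traffic falls far
# below the same hour one week (168 hours) earlier. Illustrative only;
# not Cloudflare's actual detection logic.

def weekly_drop_anomalies(traffic, threshold=0.5, week=168):
    """Return indices of hours where traffic is more than `threshold`
    (e.g. 0.5 = 50%) below the same hour one week earlier."""
    anomalies = []
    for i in range(week, len(traffic)):
        baseline = traffic[i - week]
        if baseline > 0 and (baseline - traffic[i]) / baseline > threshold:
            anomalies.append(i)
    return anomalies

# A flat series with a 90% drop at hour 200 is flagged at exactly that hour.
series = [100.0] * 400
series[200] = 10.0
print(weekly_drop_anomalies(series))  # [200]
```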
    <div>
      <h2>Government-directed</h2>
      <a href="#government-directed">
        
      </a>
    </div>
    
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p><a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4df6i7hjk25"><u>The Internet was shut down in Tanzania</u></a> on October 29 as <a href="https://www.theguardian.com/world/2025/oct/29/tanzania-election-president-samia-suluhu-hassan-poised-to-retain-power"><u>violent protests</u></a> took place during the country’s presidential election. Traffic initially fell around 12:30 local time (09:30 UTC), dropping more than 90% below the previous week’s level. The disruption lasted approximately 26 hours, with <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4qec7zdnt2u"><u>traffic beginning to return</u></a> around 14:30 local time (11:30 UTC) on October 30. However, that restoration <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4gjngzck72u"><u>proved to be quite brief</u></a>, with a significant decrease in traffic occurring around 16:15 local time (13:15 UTC), approximately two hours after it returned. This second near-complete outage lasted until November 3, <a href="https://bsky.app/profile/radar.cloudflare.com/post/3m4g47vasfm2u"><u>when traffic rapidly returned</u></a> after 17:00 local time (14:00 UTC). Modest drops in <a href="https://radar.cloudflare.com/routing/tz?dateStart=2025-10-29&amp;dateEnd=2025-11-04#announced-ip-address-space"><u>announced IPv4 and IPv6 address space</u></a> were also observed during the shutdown, but there was never a complete loss of announcements, which would have signified a total disconnection of the country from the Internet. 
(<a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous systems</u></a> announce IP address space to other Internet providers, letting them know what blocks of IP addresses they are responsible for.)</p><p>Tanzania’s president later <a href="https://apnews.com/article/tanzania-samia-suluhu-hassan-internet-shutdown-october-election-1ec66b897e7809865d8971699a7284e0"><u>expressed sympathy</u></a> for the members of the diplomatic community and foreigners residing in the country regarding the impact of the Internet shutdown. Internet and social media services were also <a href="https://www.dw.com/en/tanzania-internet-slowdown-comes-at-a-high-cost/a-55512732"><u>restricted in 2020</u></a> ahead of the country’s general elections.</p>
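<p>To make the parenthetical above concrete, the announced address space metric is, conceptually, the total number of addresses covered by the prefixes an AS originates. Below is a minimal sketch using Python’s standard library; the prefixes are documentation ranges used purely for illustration, and a real pipeline would also de-duplicate overlapping prefixes.</p>

```python
# Sum the IPv4 addresses covered by a list of announced prefixes, and
# compute how much space a withdrawal removes. Prefixes are invented;
# real announcement data comes from BGP route collectors.
import ipaddress

def announced_space(prefixes):
    """Total number of addresses covered by the given prefixes
    (assumes the prefixes do not overlap)."""
    return sum(ipaddress.ip_network(p).num_addresses for p in prefixes)

before = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/25"]  # doc ranges
after_withdrawal = ["192.0.2.0/24", "198.51.100.0/24"]  # the /25 withdrawn

drop = 1 - announced_space(after_withdrawal) / announced_space(before)
print(f"{drop:.1%} of announced address space withdrawn")  # 20.0%
```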
    <div>
      <h2>Cable cuts</h2>
      <a href="#cable-cuts">
        
      </a>
    </div>
    
    <div>
      <h3>Digicel Haiti</h3>
      <a href="#digicel-haiti">
        
      </a>
    </div>
    <p>Digicel Haiti is unfortunately no stranger to Internet disruptions caused by cable cuts, and the network experienced two more such incidents during the fourth quarter. On October 16, traffic from <a href="https://radar.cloudflare.com/as27653"><u>Digicel Haiti (AS27653)</u></a> began to fall at 14:30 local time (18:30 UTC), reaching near zero at 16:00 local time (20:00 UTC). A translated <a href="https://x.com/jpbrun30/status/1978920959089230003"><u>X post from the company’s Director General</u></a> noted: “<i>We advise our clientele that @DigicelHT is experiencing 2 cuts on its international fiber optic infrastructure.</i>” Traffic began to recover after 17:00 local time (21:00 UTC), and reached expected levels within the following hour. At 17:33 local time (21:33 UTC), the Director General <a href="https://x.com/jpbrun30/status/1978937426841063504"><u>posted</u></a> that “<i>the first fiber on the international infrastructure has been repaired</i>” and service had been restored. </p><p>On November 25, another translated <a href="https://x.com/jpbrun30/status/1993283730467963345"><u>X post from the provider’s Director General</u></a> stated that its “<i>international optical fiber infrastructure on National Road 1</i>” had been cut. We observed traffic dropping on Digicel’s network approximately an hour earlier, with a complete outage observed between 02:00 - 08:00 local time (07:00 - 13:00 UTC). A <a href="https://x.com/jpbrun30/status/1993309357438910484"><u>follow-on X post</u></a> at 08:22 local time (13:22 UTC) stated that all services had been restored.</p>
    <div>
      <h3>Cybernet/StormFiber (Pakistan)</h3>
      <a href="#cybernet-stormfiber-pakistan">
        
      </a>
    </div>
    <p>At 17:30 local time (12:30 UTC) on October 20, Internet traffic for <a href="https://radar.cloudflare.com/as9541"><u>Cybernet/StormFiber (AS9541)</u></a> dropped sharply, falling to a level approximately 50% lower than at the same time a week prior. At the same time, the network’s announced IPv4 address space dropped by over a third. The cause of these shifts was damage to the <a href="https://www.submarinecablemap.com/submarine-cable/peace-cable"><u>PEACE</u></a> submarine cable, which suffered a cut in the Red Sea near Sudan. </p><p>PEACE is one of several submarine cable systems (including <a href="https://www.submarinecablemap.com/submarine-cable/imewe"><u>IMEWE</u></a> and <a href="https://www.submarinecablemap.com/submarine-cable/seamewe-4"><u>SEA-ME-WE-4</u></a>) that carry international Internet traffic for Pakistani providers. The provider <a href="https://profit.pakistantoday.com.pk/2025/10/24/stormfiber-pledges-full-restoration-by-monday-after-weeklong-internet-disruptions/"><u>pledged to fully restore service</u></a> by October 27, but traffic and announced IPv4 address space had already recovered to near expected levels by around 02:00 local time on October 21 (21:00 UTC on October 20).</p>
    <div>
      <h3>Camtel, MTN Cameroon, Orange Cameroun</h3>
      <a href="#camtel-mtn-cameroon-orange-cameroun">
        
      </a>
    </div>
    <p>Unusual traffic patterns observed across multiple Internet providers in Cameroon on October 23 were reportedly caused by problems on the <a href="https://www.submarinecablemap.com/submarine-cable/west-africa-cable-system-wacs"><u>WACS (West Africa Cable System)</u></a> submarine cable, which connects countries along the west coast of Africa to Portugal. </p><p>A (translated) <a href="https://teleasu.tv/internet-graves-perturbations-observees-ce-jeudi-23-octobre-2025/"><u>published report</u></a> stated that MTN informed subscribers that “<i>following an incident on the WACS fiber optic cable, Internet service is temporarily disrupted</i>” and Orange Cameroun informed subscribers that “<i>due to an incident on the international access fiber, Internet service is disrupted.</i>” An <a href="https://x.com/Camtelonline/status/1981424170316464390"><u>X post from Camtel</u></a> stated “<i>Cameroon Telecommunications (CAMTEL) wishes to inform the public that a technical incident involving WACS cable equipment in Batoke (LIMBE) occurred in the early hours of 23 October 2025, causing Internet connectivity disruptions throughout the country.</i>” </p><p>Traffic across the impacted providers initially fell around 05:00 local time (04:00 UTC) before recovering to expected levels around 22:00 local time (21:00 UTC). Traffic across these networks was quite volatile during the day, dropping 90-99% at times. It isn’t clear what caused the visible spikiness in the traffic pattern—possibly attempts to shift Internet traffic to <a href="https://www.submarinecablemap.com/country/cameroon"><u>other submarine cable systems that connect to Cameroon</u></a>. 
Announced IP address space from <a href="https://radar.cloudflare.com/routing/as30992?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>MTN Cameroon</u></a> and <a href="https://radar.cloudflare.com/routing/as36912?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Orange Cameroun</u></a> dropped during this period as well, although <a href="https://radar.cloudflare.com/routing/as15964?dateStart=2025-10-23&amp;dateEnd=2025-10-23#announced-ip-address-space"><u>Camtel’s</u></a> announced IP address space did not change.</p><p>Connectivity in the <a href="https://radar.cloudflare.com/cf"><u>Central African Republic</u></a> and <a href="https://radar.cloudflare.com/cg"><u>Republic of Congo</u></a> was also reportedly impacted by the WACS issues.</p>



    <div>
      <h3>Claro Dominicana</h3>
      <a href="#claro-dominicana">
        
      </a>
    </div>
    <p>On December 9, we saw traffic from <a href="https://radar.cloudflare.com/as6400"><u>Claro Dominicana (AS6400)</u></a>, an Internet provider in the Dominican Republic, drop sharply around 12:15 local time (16:15 UTC). Traffic levels fell again around 14:15 local time (18:15 UTC), bottoming out 77% lower than the previous week before quickly returning to expected levels. The connectivity disruption was likely caused by two fiber optic outages, as an <a href="https://x.com/ClaroRD/status/1998468046311002183"><u>X post from the provider</u></a> during the outage noted that they were “causing intermittency and slowness in some services.” A <a href="https://x.com/ClaroRD/status/1998496113838764343"><u>subsequent post on X</u></a> from Claro stated that technicians had restored Internet services nationwide by repairing the severed fiber optic cables.</p>
    <div>
      <h2>Power outages</h2>
      <a href="#power-outages">
        
      </a>
    </div>
    
    <div>
      <h3>Dominican Republic</h3>
      <a href="#dominican-republic">
        
      </a>
    </div>
    <p>According to a (translated) <a href="https://x.com/ETED_RD/status/1988326178219061450"><u>X post from the Empresa de Transmisión Eléctrica Dominicana</u></a> (ETED), a transmission line outage caused an interruption in electrical service in the <a href="https://radar.cloudflare.com/do"><u>Dominican Republic</u></a> on November 11. This power outage impacted Internet traffic from the country, resulting in a <a href="https://noc.social/@cloudflareradar/115533081511310085"><u>nearly 50% drop in traffic</u></a> compared to the prior week, starting at 13:15 local time (17:15 UTC). Traffic levels remained lower until approximately 02:00 local time (06:00 UTC) on November 12, with a later <a href="https://x.com/ETED_RD/status/1988575130990330153"><u>(translated) X post from ETED</u></a> noting “<i>At 2:20 a.m. we have completed the recovery of the national electrical system, supplying 96% of the demand…</i>”</p><p>A subsequent <a href="https://dominicantoday.com/dr/local/2025/11/27/manual-line-disconnection-triggered-nationwide-blackout-report-says/"><u>technical report found</u></a> that “<i>the blackout began at the 138 kV San Pedro de Macorís I substation, where a live line was manually disconnected, triggering a high-intensity short circuit. Protection systems responded immediately, but the fault caused several nearby lines to disconnect, separating 575 MW of generation in the eastern region from the rest of the grid. The imbalance caused major power plants to trip automatically as part of their built-in safety mechanisms.</i>”</p>
    <div>
      <h3>Kenya</h3>
      <a href="#kenya">
        
      </a>
    </div>
    <p>On December 9, a <a href="https://www.tuko.co.ke/kenya/612181-kenya-power-reveals-7-pm-nationwide-blackout-multiple-regions/"><u>major power outage</u></a> impacted multiple regions across <a href="https://radar.cloudflare.com/ke"><u>Kenya</u></a>. Kenya Power explained that the outage “<i>was triggered by an incident on the regional Kenya-Uganda interconnected power network, which caused a disturbance on the Kenyan side of the system</i>” and claimed that “<i>[p]ower was restored to most of the affected areas within approximately 30 minutes.</i>” However, impacts to Internet connectivity lasted for nearly four hours, between 19:15 - 23:00 local time (16:15 - 20:00 UTC). The power outage caused traffic to drop as much as 18% at a national level, with the traffic shifts most visible in <a href="https://radar.cloudflare.com/traffic/7668902"><u>Nakuru County</u></a> and <a href="https://radar.cloudflare.com/traffic/192709"><u>Kiambu County</u></a>.</p>


    <div>
      <h2>Military action</h2>
      <a href="#military-action">
        
      </a>
    </div>
    
    <div>
      <h3>Odesa, Ukraine</h3>
      <a href="#odesa-ukraine">
        
      </a>
    </div>
    <p><a href="https://odessa-journal.com/russia-carried-out-a-massive-drone-attack-on-the-odessa-region"><u>Russian drone strikes</u></a> on the <a href="https://radar.cloudflare.com/traffic/698738"><u>Odesa region</u></a> in <a href="https://radar.cloudflare.com/ua"><u>Ukraine</u></a> on December 12 damaged warehouses and energy infrastructure, with the latter causing power outages in parts of the region. Those outages disrupted Internet connectivity, resulting in <a href="https://x.com/CloudflareRadar/status/2000993223406211327?s=20"><u>traffic dropping by as much as 57%</u></a> as compared to the prior week. After the initial drop at midnight on December 13 (22:00 UTC on December 12), traffic gradually recovered over the following several days, returning to expected levels around 14:30 local time (12:30 UTC) on December 16.</p>
    <div>
      <h2>Weather</h2>
      <a href="#weather">
        
      </a>
    </div>
    
    <div>
      <h3>Jamaica</h3>
      <a href="#jamaica">
        
      </a>
    </div>
    <p><a href="https://www.nytimes.com/live/2025/10/28/weather/hurricane-melissa-jamaica-landfall?smid=url-share#df989e67-a90e-50fb-92d0-8d5d52f76e84"><u>Hurricane Melissa</u></a> made landfall on <a href="https://radar.cloudflare.com/jm"><u>Jamaica</u></a> on October 28 and left a trail of damage and destruction in its path. Associated <a href="https://www.jamaicaobserver.com/2025/10/28/eyeonmelissa-35-jps-customers-without-power/"><u>power outages</u></a> and infrastructure damage impacted Internet connectivity, causing traffic to initially <a href="https://x.com/CloudflareRadar/status/1983266694715084866"><u>drop by approximately half</u></a>, <a href="https://x.com/CloudflareRadar/status/1983217966347866383"><u>starting</u></a> around 06:15 local time (11:15 UTC), ultimately reaching as much as <a href="https://x.com/CloudflareRadar/status/1983357587707048103"><u>70% lower</u></a> than the previous week. Internet traffic from Jamaica remained well below pre-hurricane levels for several days, and ultimately started to make greater progress towards expected levels <a href="https://x.com/CloudflareRadar/status/1985708253872107713?s=20"><u>during the morning of November 4</u></a>. It can often take weeks or months for Internet traffic from a country to return to “normal” levels following storms that cause massive and widespread damage – while power may be largely restored within several days, damage to physical infrastructure takes significantly longer to address.</p>
    <div>
      <h3>Sri Lanka &amp; Indonesia</h3>
      <a href="#sri-lanka-indonesia">
        
      </a>
    </div>
    <p>On November 26, <a href="https://apnews.com/article/indonesia-sri-lanka-thailand-malaysia-floods-landsides-aa9947df1f6192a3c6c72ef58659d4d2"><u>Cyclone Senyar</u></a> caused catastrophic floods and landslides in <a href="https://radar.cloudflare.com/lk"><u>Sri Lanka</u></a> and <a href="https://radar.cloudflare.com/id"><u>Indonesia</u></a>, killing over 1,000 people and damaging telecommunications and power infrastructure across these countries. The infrastructure damage resulted in <a href="https://x.com/CloudflareRadar/status/1996233525989720083"><u>disruptions to Internet connectivity</u></a>, and resultant lower traffic levels, across multiple regions.</p><p>In Sri Lanka, regions outside the main Western Province were the most affected, and several provinces saw traffic drop <a href="https://x.com/CloudflareRadar/status/1996233528032301513"><u>between 80% and 95%</u></a> as compared to the prior week, including <a href="https://radar.cloudflare.com/traffic/1232860?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Western</u></a>, <a href="https://radar.cloudflare.com/traffic/1227618?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Southern</u></a>, <a href="https://radar.cloudflare.com/traffic/1225265?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Uva</u></a>, <a href="https://radar.cloudflare.com/traffic/8133521?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Eastern</u></a>, <a href="https://radar.cloudflare.com/traffic/7671049?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Northern</u></a>, <a href="https://radar.cloudflare.com/traffic/1232870?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Central</u></a>, and <a href="https://radar.cloudflare.com/traffic/1228435?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Sabaragamuwa</u></a>.</p>

<p>In <a href="https://x.com/CloudflareRadar/status/1996233530267885938"><u>Indonesia</u></a>, <a href="https://radar.cloudflare.com/traffic/1215638?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>Aceh</u></a> and the Sumatra regions saw the biggest Internet disruptions. In Aceh, traffic initially dropped over 75% as compared to the previous week. In Sumatra, <a href="https://radar.cloudflare.com/traffic/1213642?dateStart=2025-11-24&amp;dateEnd=2025-12-14"><u>North Sumatra</u></a> was the most affected, with an initial drop of around 30% as compared to the previous week, before traffic began a stronger recovery the following week.</p>


    <div>
      <h2>Known or unspecified technical problems</h2>
      <a href="#known-or-unspecified-technical-problems">
        
      </a>
    </div>
    
    <div>
      <h3>Smartfren (Indonesia)</h3>
      <a href="#smartfren-indonesia">
        
      </a>
    </div>
    <p>On October 3, subscribers to Indonesian Internet provider <a href="https://radar.cloudflare.com/as18004"><u>Smartfren (AS18004)</u></a> experienced a service disruption. The issues were <a href="https://x.com/smartfrenworld/status/1973957300466643203"><u>acknowledged by the provider in an X post</u></a>, which stated (in translation), “<i>Currently, telephone, SMS and data services are experiencing problems in several areas.</i>” Traffic from the provider fell as much as 84%, starting around 09:00 local time (02:00 UTC). The disruption lasted for approximately eight hours, as traffic returned to expected levels around 17:00 local time (10:00 UTC). Smartfren did not provide any additional information on what caused the service problems.</p>
    <div>
      <h3>Vodafone UK</h3>
      <a href="#vodafone-uk">
        
      </a>
    </div>
    <p>Major British Internet provider Vodafone UK (<a href="https://radar.cloudflare.com/as5378"><u>AS5378</u></a> &amp; <a href="https://radar.cloudflare.com/as25135"><u>AS25135</u></a>) experienced a brief service outage on October 13. At 15:00 local time (14:00 UTC), traffic on both Vodafone <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> dropped to zero. Announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as5378?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS5378</u></a> fell by 75%, while announced IPv4 address space from <a href="https://radar.cloudflare.com/routing/as25135?dateStart=2025-10-13&amp;dateEnd=2025-10-13#announced-ip-address-space"><u>AS25135</u></a> disappeared entirely. Both Internet traffic and address space recovered two hours later, returning to expected levels around 17:00 local time (16:00 UTC). Vodafone did not provide any information on their social media channels about the cause of the outage, and their <a href="https://www.vodafone.co.uk/network/status-checker"><u>network status checker page</u></a> was also unavailable during the outage.</p>






    <div>
      <h3>Fastweb (Italy)</h3>
      <a href="#fastweb-italy">
        
      </a>
    </div>
    <p>According to a <a href="https://tg24.sky.it/tecnologia/2025/10/22/fastweb-down-problemi-internet-oggi"><u>published report</u></a>, a DNS resolution issue disrupted Internet services for customers of Italian provider <a href="https://radar.cloudflare.com/as12874"><u>Fastweb (AS12874)</u></a> on October 22, causing observed traffic volumes to drop by over 75%. Fastweb <a href="https://www.firstonline.info/en/fastweb-down-oggi-internet-bloccato-in-tutta-italia-migliaia-di-segnalazioni/"><u>acknowledged the issue</u></a>, which impacted wired Internet customers between 09:30 - 13:00 local time (08:30 - 12:00 UTC).</p><p>Although not an Internet outage caused by connectivity failure, the impact of DNS resolution issues on Internet traffic is very similar. When a provider’s <a href="https://www.cloudflare.com/learning/dns/dns-server-types/"><u>DNS resolver</u></a> is experiencing problems, switching to a service like Cloudflare’s <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS resolver</u></a> will often restore connectivity.</p>
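<p>For the curious, switching resolvers ultimately just means sending DNS queries to a different server. The sketch below builds a minimal A-record query in the standard RFC 1035 wire format using only Python’s standard library; actually sending it to 1.1.1.1 is left commented out, as this is purely an illustration.</p>

```python
# Build a minimal DNS A-record query packet (RFC 1035 wire format).
import struct

def build_dns_query(hostname, txid=0x1234):
    header = struct.pack(">HHHHHH",
                         txid,     # transaction ID
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
print(len(packet))  # 29: 12-byte header + 13-byte QNAME + 4-byte type/class

# To actually query Cloudflare's resolver (requires network access):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("1.1.1.1", 53))
# response = sock.recv(512)
```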
    <div>
      <h3>SBIN, MTN Benin, Etisalat Benin</h3>
      <a href="#sbin-mtn-benin-etisalat-benin">
        
      </a>
    </div>
    <p>On December 7, a concurrent drop in traffic was observed across <a href="https://radar.cloudflare.com/as28683"><u>SBIN (AS28683)</u></a>, <a href="https://radar.cloudflare.com/as37424"><u>MTN Benin (AS37424)</u></a>, and <a href="https://radar.cloudflare.com/as37136"><u>Etisalat Benin (AS37136)</u></a>. Between 18:30 - 19:30 local time (17:30 - 18:30 UTC), traffic dropped as much as 80% as compared to the prior week at a country level, nearly 100% at Etisalat and MTN, and over 80% at SBIN.</p><p>While an <a href="https://www.reuters.com/world/africa/soldiers-benins-national-television-claim-have-seized-power-2025-12-07/"><u>attempted coup</u></a> had taken place earlier in the day, it is unclear whether the observed Internet disruption was related in any way. From a routing perspective, all three impacted networks share <a href="https://radar.cloudflare.com/as174"><u>Cogent (AS174)</u></a> as an upstream provider, so a localized issue at Cogent may have contributed to the brief outage.  </p>
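<p>That shared-upstream observation can be illustrated with a toy check over BGP AS paths. The paths below are invented examples; real paths would come from route collectors such as RouteViews or RIPE RIS.</p>

```python
# Find upstream ASes common to every impacted origin network, where the
# AS immediately before the origin in an AS path is its direct upstream.
# All paths here are invented for illustration.

def shared_upstreams(paths_by_origin):
    """paths_by_origin maps origin ASN -> list of AS paths, where each
    path is a list of ASNs ending at the origin."""
    upstream_sets = [
        {path[-2] for path in paths if len(path) >= 2}
        for paths in paths_by_origin.values()
    ]
    return set.intersection(*upstream_sets)

paths = {
    28683: [[3356, 174, 28683]],                # SBIN, seen via AS174
    37424: [[1299, 174, 37424]],                # MTN Benin, seen via AS174
    37136: [[174, 37136], [6939, 174, 37136]],  # Etisalat Benin, via AS174
}
print(shared_upstreams(paths))  # {174}
```

<p>With real routing table data, an empty intersection would argue against a common-upstream explanation.</p>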



    <div>
      <h3>Cellcom Israel</h3>
      <a href="#cellcom-israel">
        
      </a>
    </div>
    <p>According to a <a href="https://www.ynetnews.com/article/2gpt1kt35"><u>reported announcement</u></a> from Israeli provider <a href="https://radar.cloudflare.com/as1680"><u>Cellcom (AS1680)</u></a>, on December 18, there was “<i>a malfunction affecting Internet connectivity that is impacting some of our customers.</i>” This malfunction dropped traffic nearly 70% as compared to the prior week, and occurred between 09:30 - 11:00 local time (07:30 - 09:00 UTC). The “malfunction” may have been a DNS failure, according to a <a href="https://www.israelnationalnews.com/news/419552"><u>published report</u></a>.</p>
    <div>
      <h3>Partner Communications (Israel)</h3>
      <a href="#partner-communications-israel">
        
      </a>
    </div>
    <p>Closing out 2025, on December 30, a major technical failure at Israeli provider <a href="https://radar.cloudflare.com/as12400"><u>Partner Communications (AS12400)</u></a> <a href="https://www.ynetnews.com/tech-and-digital/article/hjewkibnwe"><u>disrupted</u></a> mobile, TV, and Internet services across the country. Internet traffic from Partner fell by two-thirds as compared to the previous week between 14:00 - 15:00 local time (12:00 - 13:00 UTC). During the outage, queries to Cloudflare’s 1.1.1.1 public DNS resolver spiked, suggesting that the problem may have been related to Partner’s DNS infrastructure. However, the provider did not publicly confirm what caused the outage.</p>




    <div>
      <h2>Cloud platforms</h2>
      <a href="#cloud-platforms">
        
      </a>
    </div>
    <p>During the fourth quarter, we launched a new <a href="https://radar.cloudflare.com/cloud-observatory"><u>Cloud Observatory</u></a> page on Radar that tracks availability and performance issues at a region level across hyperscaler cloud platforms, including <a href="https://radar.cloudflare.com/cloud-observatory/amazon"><u>Amazon Web Services</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/microsoft"><u>Microsoft Azure</u></a>, <a href="https://radar.cloudflare.com/cloud-observatory/google"><u>Google Cloud Platform</u></a>, and <a href="https://radar.cloudflare.com/cloud-observatory/oracle"><u>Oracle Cloud Infrastructure</u></a>.</p>
    <div>
      <h3>Amazon Web Services</h3>
      <a href="#amazon-web-services">
        
      </a>
    </div>
    <p>On October 20, the Amazon Web Services us-east-1 region in Northern Virginia experienced “<a href="https://health.aws.amazon.com/health/status?eventID=arn:aws:health:us-east-1::event/MULTIPLE_SERVICES/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE/AWS_MULTIPLE_SERVICES_OPERATIONAL_ISSUE_BA540_514A652BE1A"><u>increased error rates and latencies</u></a>” that affected multiple services within the region. The issues impacted not only customers with public-facing Web sites and applications that rely on infrastructure within the region, but also Cloudflare customers that have origin resources hosted in us-east-1.</p><p>We began to see the impact of the problems around 06:30 UTC, as the share of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#success-rate"><u>error</u></a> (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status#server_error_responses"><u>5xx-class</u></a>) responses began to climb, reaching as high as 17% around 08:00 UTC. The number of <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#connection-failures"><u>failures encountered when attempting to connect to origins</u></a> in us-east-1 climbed as well, peaking around 12:00 UTC.</p>

<p>The impact could also be clearly seen in key network performance metrics, which remained elevated throughout the incident, returning to normal levels just before the end of the incident, around 23:00 UTC. Both <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tcp-handshake-duration"><u>TCP</u></a> and <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1?dateStart=2025-10-20&amp;dateEnd=2025-10-21#tls-handshake-duration"><u>TLS</u></a> handshake durations got progressively worse throughout the incident—these metrics measure the amount of time needed for Cloudflare to establish TCP and TLS connections respectively with customer origin servers in us-east-1. In addition, the amount of time elapsed before Cloudflare <a href="https://radar.cloudflare.com/cloud-observatory/amazon/us-east-1/#response-header-receive-duration"><u>received response headers</u></a> from the origin increased significantly during the first several hours of the incident, before gradually returning to expected levels.  </p>
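<p>The TCP handshake duration metric can be approximated from first principles by timing a connect() call. The self-contained sketch below uses a loopback listener in place of an origin server; real measurements would target remote origins, and the numbers here are not comparable to Radar’s.</p>

```python
# Time a TCP three-way handshake by measuring how long connect() takes.
# A loopback listener stands in for an origin server so the example is
# self-contained.
import socket
import time

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
host, port = listener.getsockname()

start = time.perf_counter()
conn = socket.create_connection((host, port), timeout=5)  # 3-way handshake
tcp_handshake_ms = (time.perf_counter() - start) * 1000.0

conn.close()
listener.close()
print(f"TCP handshake took {tcp_handshake_ms:.3f} ms")
```

<p>A TLS handshake measurement follows the same pattern: wrap the connected socket with <code>ssl.SSLContext.wrap_socket</code> and time that step separately.</p>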





    <div>
      <h3>Microsoft Azure</h3>
      <a href="#microsoft-azure">
        
      </a>
    </div>
    <p>On October 29, Microsoft Azure experienced an <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>incident</u></a> impacting <a href="https://azure.microsoft.com/en-us/products/frontdoor"><u>Azure Front Door</u></a>, its content delivery network service. According to <a href="https://azure.status.microsoft/en-us/status/history/?trackingId=YKYN-BWZ"><u>Azure's report on the incident</u></a>, “<i>A specific sequence of customer configuration changes, performed across two different control plane build versions, resulted in incompatible customer configuration metadata being generated. These customer configuration changes themselves were valid and non-malicious – however they produced metadata that, when deployed to edge site servers, exposed a latent bug in the data plane. This incompatibility triggered a crash during asynchronous processing within the data plane service.</i>”</p><p>The incident report marked the start time at 15:41 UTC, although we observed the volume of <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#connection-failures"><u>failed connection attempts</u></a> to Azure-hosted origins begin to climb about 45 minutes prior. The TCP and TLS handshake metrics also became more volatile during the incident period, with <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tcp-handshake-duration"><u>TCP handshakes</u></a> taking over 50% longer at times, and <a href="https://radar.cloudflare.com/cloud-observatory/microsoft/global?dateStart=2025-10-29&amp;dateEnd=2025-10-30#tls-handshake-duration"><u>TLS handshakes</u></a> taking nearly 200% longer at peak. The impacted metrics began to improve after 20:00 UTC, and according to Microsoft, the incident ended at 00:05 UTC on October 30.</p>



    <div>
      <h2>Cloudflare</h2>
      <a href="#cloudflare">
        
      </a>
    </div>
    <p>In addition to the outages discussed above, Cloudflare also experienced two disruptions during the fourth quarter. While these were not Internet outages in the classic sense, they did prevent users from accessing websites and applications delivered and protected by Cloudflare while they were ongoing.</p><p>The first incident took place on November 18, and was caused by a software failure triggered by a permissions change in one of our database systems, which caused a query to output duplicate entries into a “feature file” used by our Bot Management system. Additional details, including a root cause analysis and timeline, can be found in the associated <a href="https://blog.cloudflare.com/18-november-2025-outage/"><u>blog post</u></a>.</p><p>The second incident occurred on December 5, and impacted a subset of customers, accounting for approximately 28% of all HTTP traffic served by Cloudflare. It was triggered by changes made to our request body parsing logic while attempting to detect and mitigate a newly disclosed industry-wide React Server Components vulnerability. A post-mortem <a href="https://blog.cloudflare.com/5-december-2025-outage/"><u>blog post</u></a> contains additional details, including a root cause analysis and timeline.</p><p>For more information about the work underway at Cloudflare to prevent outages like these from happening again, check out our <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>blog post</u></a> detailing “Code Orange: Fail Small.”</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
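The outage data summarized in this post can also be pulled programmatically. Here is a sketch of building such a request; the endpoint path and parameter names are assumptions based on the public Radar API documentation, so check them against the current docs before relying on this:

```python
# Sketch: building a request for the Radar outage annotations endpoint.
# The endpoint path and parameter names are assumptions drawn from the
# public Radar API docs -- verify against developers.cloudflare.com.
from urllib.parse import urlencode

API_BASE = "https://api.cloudflare.com/client/v4"

def outage_annotations_url(date_start: str, date_end: str, limit: int = 100) -> str:
    """URL listing outage annotations for a date range (ISO 8601 dates)."""
    params = urlencode({
        "dateStart": date_start,
        "dateEnd": date_end,
        "limit": limit,
        "format": "json",
    })
    return f"{API_BASE}/radar/annotations/outages?{params}"

url = outage_annotations_url("2025-10-01", "2025-12-31")
# A real call would add an "Authorization: Bearer <API_TOKEN>" header.
print(url)
```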
    <p>The disruptions observed in the fourth quarter underscore the importance of real-time data in maintaining global connectivity. Whether it’s a government-ordered shutdown or a minor technical issue, transparency allows the technical community to respond faster and more effectively. We will continue to track these shifts on Cloudflare Radar, providing the insights needed to navigate the complexities of modern networking. We share our observations on the <a href="https://radar.cloudflare.com/outage-center"><u>Cloudflare Radar Outage Center</u></a>, via social media, and in posts on <a href="https://blog.cloudflare.com/tag/cloudflare-radar/"><u>blog.cloudflare.com</u></a>. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>email</u></a>.</p><p>As a reminder, while these blog posts feature graphs from <a href="https://radar.cloudflare.com/"><u>Radar</u></a> and the <a href="https://radar.cloudflare.com/explorer"><u>Radar Data Explorer</u></a>, the underlying data is available from our <a href="https://developers.cloudflare.com/api/resources/radar/"><u>API</u></a>. You can use the API to retrieve data to do your own local monitoring or analysis, or you can use the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/radar#cloudflare-radar-mcp-server-"><u>Radar MCP server</u></a> to incorporate Radar data into your AI tools.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[Microsoft Azure]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">6dRT0oOSVcyQzjnZCkzH7S</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Detecting sensitive data and misconfigurations in AWS and GCP with Cloudflare One]]></title>
            <link>https://blog.cloudflare.com/scan-cloud-dlp-with-casb/</link>
            <pubDate>Fri, 21 Mar 2025 13:10:00 GMT</pubDate>
            <description><![CDATA[ Using Cloudflare’s CASB, integrate, scan, and detect sensitive data and misconfigurations in your cloud storage accounts. ]]></description>
            <content:encoded><![CDATA[ <p>Today is the final day of Security Week 2025, and after a great week of blog posts across a variety of topics, we’re excited to share the latest on Cloudflare’s data security products.</p><p>This announcement takes us to Cloudflare’s SASE platform, <a href="https://www.cloudflare.com/zero-trust/products/"><u>Cloudflare One</u></a>, used by enterprise security and IT teams to manage the security of their employees, applications, and third-party tools, all in one place.</p><p>Starting today, Cloudflare One users can use the <a href="https://www.cloudflare.com/zero-trust/products/casb/"><u>CASB</u></a> (Cloud Access Security Broker) product to integrate with and scan Amazon Web Services (AWS) S3 and Google Cloud Storage for posture- and Data Loss Prevention (DLP)-related security issues. <a href="https://dash.cloudflare.com/sign-up"><u>Create a free account</u></a> to check it out.</p><p>With both point-in-time and continuous scanning, users can identify misconfigurations in Identity and Access Management (IAM), bucket, and object settings, and detect sensitive information, like Social Security numbers, credit card numbers, or any other pattern defined via regex, in cloud storage objects.</p>
    <div>
      <h3>Cloud DLP</h3>
      <a href="#cloud-dlp">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1R1bE9TRmTdHDTg1XeLS60/8269b687ec65e70bcaee437f30d5f590/1.png" />
          </figure><p>Over the last few years, our customers — predominantly security and IT teams — have told us about their appreciation for CASB’s simplicity and effectiveness as a SaaS security product. The breadth of its <a href="https://developers.cloudflare.com/cloudflare-one/applications/casb/casb-integrations/"><u>supported integrations</u></a>, its ease of setup, and its speed in identifying critical issues across popular SaaS platforms, like files shared publicly in Microsoft 365 and exposed sensitive data in Google Workspace, have made it a go-to for many.</p><p>However, as we’ve engaged with customers, one thing became clear: the risks of unmonitored or exposed data at rest go far beyond just SaaS environments. Sensitive information – whether intellectual property, customer data, or personal identifiers – can wreak havoc on an organization’s reputation and its obligations to its customers if it falls into the wrong hands. For many of our customers, the security of data stored in cloud providers like AWS and GCP is even more critical than the security of data in their SaaS tools.</p><p>That’s why we’ve extended Cloudflare CASB to include <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/"><u>Cloud DLP (Data Loss Prevention)</u></a> functionality, enabling users to scan objects in Amazon S3 buckets and Google Cloud Storage for sensitive data matches.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TKXiAkQKxw3GFQBCuQLjX/198e4620da8239280eff669b7b62678b/2.png" />
          </figure><p>With <a href="https://www.cloudflare.com/zero-trust/products/dlp/"><u>Cloudflare DLP</u></a>, you can choose from pre-built detection profiles that look for common data types (such as Social Security Numbers or credit card numbers) or create your own custom profiles using regular expressions​. As soon as an object matching a DLP profile is detected, you can dive into the details, understanding the file’s context, seeing who owns it, and more. These capabilities provide the insight needed to quickly protect data and prevent exposure in real time.</p>
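To illustrate what a custom profile’s regular expressions might look like, here is a simplified sketch; these patterns are illustrative only and are not Cloudflare’s actual detection logic, which handles many more formats and edge cases:

```python
# Sketch: the kind of regex patterns a custom DLP profile might use.
# Simplified illustrations only -- not Cloudflare's detection logic.
import re

PROFILES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_text(text: str) -> dict[str, list[str]]:
    """Return matches per profile name for a chunk of object content."""
    return {name: pat.findall(text) for name, pat in PROFILES.items()}

sample = "ssn 078-05-1120 and card 4111 1111 1111 1111"
findings = scan_text(sample)
```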
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cyOaQJ0ZyPO8r7LyeU6ON/30b8453888d6cb13dcb15225f875a0cd/3.png" />
          </figure><p>And as with all CASB integrations, this new functionality also comes with <a href="https://www.cloudflare.com/learning/cloud/what-is-dspm/">posture management features</a>, meaning whether you’re using AWS or GCP, we’ll help you identify misconfigurations and other cloud security issues that could leave your data vulnerable, like buckets that are publicly accessible or have critical logging settings disabled, access keys needing rotation, or users without <a href="https://www.cloudflare.com/learning/access-management/what-is-multi-factor-authentication/"><u>multi-factor authentication (MFA)</u></a>. It’s all included.</p>
    <div>
      <h3>Simple by default, configurable where you want it</h3>
      <a href="#simple-by-default-configurable-where-you-want-it">
        
      </a>
    </div>
    <p>Cloudflare CASB and DLP are simple to use by default, making it easy to get started right away. But it’s also highly configurable, giving you the flexibility to fine-tune the scanning profiles to suit your specific needs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5MLh3y7SMHnX52cjuu4pmE/b7bb67497fc21bc9d3f3f740a9b3fe52/4.png" />
          </figure><p>For example, you can adjust which storage buckets or file types to scan, and even sample only a percentage of objects for analysis​. The scanning also runs within your own cloud environment, so your data never leaves your infrastructure​. This approach keeps your cloud storage secure and your costs managed while allowing you to tailor the solution to your organization’s unique compliance and security requirements.</p><p>Looking ahead, our roadmap also includes expanding support to additional cloud storage environments, such as Azure Blob Storage and Cloudflare R2, further extending our comprehensive, multi-cloud security strategy. Stay tuned for more on that!</p>
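The object-sampling option described above can be pictured as a simple percentage draw over a bucket’s keys. A minimal sketch with illustrative names (this is not the product’s internal implementation):

```python
# Sketch: sampling a configurable percentage of objects to scan per cycle.
# Names are illustrative; seeded here only for reproducibility.
import random

def sample_objects(keys: list[str], percent: float, seed: int = 0) -> list[str]:
    """Pick roughly `percent`% of object keys to scan this cycle."""
    rng = random.Random(seed)
    k = max(1, round(len(keys) * percent / 100))
    return rng.sample(keys, k)

keys = [f"logs/day-{i}.txt" for i in range(100)]
subset = sample_objects(keys, percent=10)   # ~10 of 100 objects
```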
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>From the start, we knew that to deliver DLP capabilities across cloud environments, it would require an efficient and scalable design to enable real-time detection of sensitive data exposure.</p>
    <div>
      <h4>Serverless architecture for streamlined processing</h4>
      <a href="#serverless-architecture-for-streamlined-processing">
        
      </a>
    </div>
    <p>We made an early design decision to use a serverless architecture, which keeps sensitive data discovery both efficient and scalable. Here’s how it works:</p><ul><li><p><b>Compute Account</b>: The entire process runs within a cloud account owned by your organization, known as a Compute Account. This design ensures your data remains within your boundaries, avoiding costly cloud egress fees. The Compute Account can be launched in under 15 minutes using a provided Terraform template.</p></li><li><p><b>Controller function</b>: Every minute, a lightweight, serverless controller function in your cloud environment communicates with Cloudflare’s APIs, fetching the latest DLP configurations and security profiles from your Cloudflare One account.</p></li><li><p><b>Crawler process</b>: The controller triggers an object discovery task, which is processed by a second serverless function known as the Crawler. The Crawler queries cloud storage accounts, like AWS S3 or Google Cloud Storage, via API to identify new objects. Redis is used within the Compute Account to track which objects have yet to be evaluated.</p></li><li><p><b>Scanning for sensitive data</b>: Newly discovered objects are sent through a queue to a third serverless function called the Scanner. This function downloads the objects and streams their contents to the DLP engine in the Compute Account, which scans for matches against predefined or custom DLP Profiles.</p></li><li><p><b>Finding generation and alerts</b>: If a DLP match is found, metadata about the object, such as context and ownership details, is published to a queue. This data is ingested by a Cloudflare-hosted service and presented in the Cloudflare Dashboard as findings, giving security teams the visibility needed to take swift action.</p></li></ul>
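The crawl-then-scan flow above can be sketched in miniature with plain Python stand-ins: a dict for the storage bucket, a set for the Redis-backed crawl state, and a list for the findings queue. All names here are illustrative, not the actual service interfaces:

```python
# Sketch of the crawl -> scan -> findings flow, with in-memory stand-ins
# for the bucket, the Redis crawl state, and the findings queue.
import re

DLP_PROFILE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g. a US SSN pattern

object_store = {            # stands in for an S3 / GCS bucket
    "reports/q3.txt": "quarterly numbers, nothing sensitive",
    "hr/records.txt": "employee ssn 078-05-1120 on file",
}
seen: set[str] = set()      # stands in for Redis-tracked crawl state
findings: list[dict] = []   # stands in for the findings queue

def crawl() -> list[str]:
    """Crawler: discover objects not yet evaluated."""
    new = [key for key in object_store if key not in seen]
    seen.update(new)
    return new

def scan(keys: list[str]) -> None:
    """Scanner: stream object contents past the DLP profile."""
    for key in keys:
        if DLP_PROFILE.search(object_store[key]):
            findings.append({"object": key, "profile": "ssn"})

scan(crawl())   # one controller-triggered cycle
scan(crawl())   # second cycle: nothing new, no duplicate findings
```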
    <div>
      <h4>Scalable and secure design</h4>
      <a href="#scalable-and-secure-design">
        
      </a>
    </div>
    <p>The DLP pipeline ensures that sensitive data never leaves your cloud environment — a privacy-first approach. All communication between the Compute Account and Cloudflare's APIs is initiated by the controller, which also means no extra configuration is needed to allow ingress traffic.</p>
    <div>
      <h3>How to get started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>To get started, reach out to your account team to learn more about this new data security functionality and our roadmap. If you want to try this out on your own, you can log in to the Cloudflare One dashboard (create a free account <a href="https://www.cloudflare.com/zero-trust/products/"><u>here</u></a> if you don’t have one) and navigate to the CASB page to set up your first integration.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[CASB]]></category>
            <category><![CDATA[DLP]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[Google Cloud]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <guid isPermaLink="false">2hlOlV28pXRFpnmnkqbbhw</guid>
            <dc:creator>Alex Dunbrack</dc:creator>
            <dc:creator>Michael Leslie </dc:creator>
        </item>
        <item>
            <title><![CDATA[Magic Cloud Networking simplifies security, connectivity, and management of public clouds]]></title>
            <link>https://blog.cloudflare.com/introducing-magic-cloud-networking/</link>
            <pubDate>Wed, 06 Mar 2024 14:01:00 GMT</pubDate>
            <description><![CDATA[ Introducing Magic Cloud Networking, a new set of capabilities to visualize and automate cloud networks to give our customers secure, easy, and seamless connection to public cloud environments ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EE4QTE18JtBWAk0XucBb2/818464eba98f9928bbfa7bfe179780d8/image5-5.png" />
            
            </figure><p>Today we are excited to announce Magic Cloud Networking, supercharged by <a href="https://www.cloudflare.com/press-releases/2024/cloudflare-enters-multicloud-networking-market-unlocks-simple-secure/">Cloudflare’s recent acquisition of Nefeli Networks</a>’ innovative technology. These new capabilities to visualize and automate cloud networks will give our customers secure, easy, and seamless connection to public cloud environments.</p><p>Public clouds offer organizations a scalable and on-demand IT infrastructure without the overhead and expense of running their own datacenter. <a href="https://www.cloudflare.com/learning/cloud/what-is-cloud-networking/">Cloud networking</a> is foundational to applications that have been migrated to the cloud, but is difficult to manage without automation software, especially when operating at scale across multiple cloud accounts. Magic Cloud Networking uses familiar concepts to provide a single interface that controls and unifies multiple cloud providers’ native network capabilities to create reliable, cost-effective, and secure cloud networks.</p><p>Nefeli’s approach to multi-cloud networking solves the problem of building and operating end-to-end networks within and across public clouds, allowing organizations to <a href="https://www.cloudflare.com/application-services/solutions/">securely leverage applications</a> spanning any combination of internal and external resources. Adding Nefeli’s technology will make it easier than ever for our customers to connect and protect their users, private networks and applications.</p>
    <div>
      <h2>Why is cloud networking difficult?</h2>
      <a href="#why-is-cloud-networking-difficult">
        
      </a>
    </div>
    <p>Compared with a traditional on-premises data center network, cloud networking promises simplicity:</p><ul><li><p>Much of the complexity of physical networking is abstracted away from users because the physical and ethernet layers are not part of the network service exposed by the cloud provider.</p></li><li><p>There are fewer control plane protocols; instead, the cloud providers deliver a simplified <a href="https://www.cloudflare.com/learning/network-layer/what-is-sdn/">software-defined network (SDN)</a> that is fully programmable via API.</p></li><li><p>There is capacity — from zero up to very large — available instantly and on-demand, only charging for what you use.</p></li></ul><p>However, that promise has not yet been fully realized. Our customers have described several reasons cloud networking is difficult:</p><ul><li><p><b>Poor end-to-end visibility</b>: Cloud network visibility tools are difficult to use and silos exist even within single cloud providers that impede end-to-end monitoring and troubleshooting.</p></li><li><p><b>Faster pace</b>: Traditional IT management approaches clash with the promise of the cloud: instant deployment available on-demand. Familiar ClickOps and CLI-driven procedures must be replaced by automation to meet the needs of the business.</p></li><li><p><b>Different technology</b>: Established network architectures in on-premises environments do not seamlessly transition to a public cloud. The missing ethernet layer and advanced control plane protocols were critical in many network designs.</p></li><li><p><b>New cost models</b>: The dynamic pay-as-you-go usage-based cost models of the public clouds are not compatible with established approaches built around fixed cost circuits and 5-year depreciation. 
Network solutions are often architected with financial constraints, and accordingly, different architectural approaches are sensible in the cloud.</p></li><li><p><b>New security risks</b>: Securing public clouds with true <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">zero trust</a> and least-privilege demands mature operating processes and automation, and familiarity with cloud-specific policies and IAM controls.</p></li><li><p><b>Multi-vendor:</b> Oftentimes enterprise networks have used single-vendor sourcing to facilitate interoperability, operational efficiency, and targeted hiring and training. Operating a network that extends beyond a single cloud, into other clouds or on-premises environments, is a multi-vendor scenario.</p></li></ul><p>Nefeli considered all these problems and the tensions between different customer perspectives to identify where the problem should be solved.</p>
    <div>
      <h2>Trains, planes, and automation</h2>
      <a href="#trains-planes-and-automation">
        
      </a>
    </div>
    <p>Consider a train system. To operate effectively it has three key layers:</p><ul><li><p>tracks and trains</p></li><li><p>electronic signals</p></li><li><p>a company to manage the system and sell tickets.</p></li></ul><p>A train system with good tracks, trains, and signals could still be operating below its full potential because its agents are unable to keep up with passenger demand. The result is that passengers cannot plan itineraries or purchase tickets.</p><p>The train company eliminates bottlenecks in process flow by simplifying the schedules, simplifying the pricing, providing agents with better booking systems, and installing automated ticket machines. Now the same fast and reliable infrastructure of tracks, trains, and signals can be used to its full potential.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/342dyvSqIvF0hJJoCDyf0I/8e92b93f922412344fa34cbbea7a4be1/image8.png" />
            
            </figure>
    <div>
      <h3>Solve the right problem</h3>
      <a href="#solve-the-right-problem">
        
      </a>
    </div>
    <p>In networking, there are an analogous set of three layers, called the <a href="https://www.cloudflare.com/learning/network-layer/what-is-the-control-plane/">networking planes</a>:</p><ul><li><p><b>Data Plane:</b> the network paths that transport data (in the form of packets) from source to destination.</p></li><li><p><b>Control Plane:</b> protocols and logic that change how packets are steered across the data plane.</p></li><li><p><b>Management Plane:</b> the configuration and monitoring interfaces for the data plane and control plane.</p></li></ul><p>In public cloud networks, these layers map to:</p><ul><li><p><b>Cloud Data Plane:</b> The underlying cables and devices are exposed to users as the <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud/">Virtual Private Cloud (VPC)</a> or Virtual Network (VNet) service that includes subnets, routing tables, security groups/ACLs and additional services such as load-balancers and VPN gateways.</p></li><li><p><b>Cloud Control Plane:</b> In place of distributed protocols, the cloud control plane is a <a href="https://www.cloudflare.com/learning/network-layer/what-is-sdn/">software defined network (SDN)</a> that, for example, programs static route tables. (There is limited use of traditional control plane protocols, such as BGP to interface with external networks and ARP to interface with VMs.)</p></li><li><p><b>Cloud Management Plane:</b> An administrative interface with a UI and API which allows the admin to fully configure the data and control planes. It also provides a variety of monitoring and logging capabilities that can be enabled and integrated with 3rd party systems.</p></li></ul><p>Like our train example, most of the problems that our customers experience with cloud networking are in the third layer: the management plane.</p><p>Nefeli simplifies, unifies, and automates cloud network management and operations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nb9xcqGaRRaYIe0lvlbIs/83da6094ec1f7bc3e4a7d72a17fc511c/image2-6.png" />
            
            </figure>
    <div>
      <h3>Avoid cost and complexity</h3>
      <a href="#avoid-cost-and-complexity">
        
      </a>
    </div>
    <p>One common approach to tackle management problems in cloud networks is introducing Virtual Network Functions (VNFs), which are <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-machine/">virtual machines (VMs)</a> that do packet forwarding, in place of native cloud data plane constructs. Some VNFs are routers, firewalls, or load-balancers ported from a traditional network vendor’s hardware appliances, while others are software-based proxies often built on open-source projects like NGINX or Envoy. Because VNFs mimic their physical counterparts, IT teams could continue using familiar management tooling, but VNFs have downsides:</p><ul><li><p>VMs do not have custom network silicon and so instead rely on raw compute power. The VM is sized for the peak anticipated load and then typically runs 24x7x365. This drives a high cost of compute regardless of the actual utilization.</p></li><li><p>High-availability (HA) relies on fragile, costly, and complex network configuration.</p></li><li><p>Service insertion — the configuration to put a VNF into the packet flow — often forces packet paths that incur additional bandwidth charges.</p></li><li><p>VNFs are typically licensed similarly to their on-premises counterparts and are expensive.</p></li><li><p>VNFs lock in the enterprise and potentially exclude them from benefiting from improvements in the cloud’s native data plane offerings.</p></li></ul><p>For these reasons, enterprises are turning away from VNF-based solutions and increasingly looking to rely on the native network capabilities of their cloud service providers. The built-in public cloud networking is elastic, performant, robust, and priced on usage, with high-availability options integrated and backed by the cloud provider’s service level agreement.</p><p>In our train example, the tracks and trains are good. Likewise, the cloud network data plane is highly capable. Changing the data plane to solve management plane problems is the wrong approach. 
To make this work at scale, organizations need a solution that works together with the native network capabilities of cloud service providers.</p><p>Nefeli leverages native cloud data plane constructs rather than third party VNFs.</p>
    <div>
      <h2>Introducing Magic Cloud Networking</h2>
      <a href="#introducing-magic-cloud-networking">
        
      </a>
    </div>
    <p>The Nefeli team has joined Cloudflare to integrate cloud network management functionality with Cloudflare One. This capability is called Magic Cloud Networking and with it, enterprises can use the Cloudflare dashboard and API to manage their public cloud networks and connect with Cloudflare One.</p>
    <div>
      <h3>End-to-end</h3>
      <a href="#end-to-end">
        
      </a>
    </div>
    <p>Just as train providers are focused only on completing train journeys in their own network, cloud service providers deliver network connectivity and tools within a single cloud account. Many large enterprises have hundreds of cloud accounts across multiple cloud providers. In an end-to-end network this creates disconnected networking silos which introduce operational inefficiencies and risk.</p><p>Imagine you are trying to organize a train journey across Europe, and no single train company serves both your origin and destination. You know they all offer the same basic service: a seat on a train. However, your trip is difficult to arrange because it involves multiple trains operated by different companies with their own schedules and ticketing rates, all in different languages!</p><p>Magic Cloud Networking is like an online travel agent that aggregates multiple transportation options, books multiple tickets, facilitates changes after booking, and then delivers travel status updates.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P7EhpKlEfTnU7WdPq4dt6/4de908c385b406a89c97f4dc274b3acb/image6.png" />
            
            </figure><p>Through the Cloudflare dashboard, you can discover all of your network resources across accounts and cloud providers and visualize your end-to-end network in a single interface. Once Magic Cloud Networking discovers your networks, you can build a scalable network through a fully automated and simple workflow.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2qXuFK0Q1q96NtH0FYRNg0/3c449510b24a3f206b63b01e1799dddd/image3-8.png" />
            
            </figure><p><i>Resource inventory shows all configuration in a single and responsive UI</i></p>
    <div>
      <h3>Taming per-cloud complexity</h3>
      <a href="#taming-per-cloud-complexity">
        
      </a>
    </div>
    <p>Public clouds are used to deliver applications and services. Each cloud provider offers a composable stack of modular building blocks (resources) that start with the foundation of a billing account and then add on security controls. The next foundational layer, for server-based applications, is VPC networking. Additional resources are built on the VPC network foundation until you have compute, storage, and network infrastructure to host the enterprise application and data. Even relatively simple architectures can be composed of hundreds of resources.</p><p>The trouble is, these resources expose abstractions that are different from the building blocks you would use to build a service on prem, the abstractions differ between cloud providers, and they form a web of dependencies with complex rules about how configuration changes are made (rules which differ between resource types and cloud providers). For example, say I create 100 VMs, and connect them to an IP network. Can I make changes to the IP network while the VMs are using the network? The answer: it depends.</p><p>Magic Cloud Networking handles these differences and complexities for you. It configures native cloud constructs such as VPN gateways, routes, and security groups to securely connect your cloud VPC network to Cloudflare One without having to learn each cloud’s incantations for creating VPN connections and hubs.</p>
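One way to picture how a unified management plane "handles these differences" is a normalization layer that maps each provider’s resource shape onto a common schema. This is a simplified sketch with illustrative field names, not Magic Cloud Networking’s actual data model:

```python
# Sketch: normalizing per-provider network resources into one inventory
# record, as a unified management plane must. Field names are illustrative.

def normalize(provider: str, raw: dict) -> dict:
    """Map provider-specific VPC/VNet fields onto a common schema."""
    if provider == "aws":
        return {"provider": "aws", "id": raw["VpcId"], "cidr": raw["CidrBlock"]}
    if provider == "gcp":
        return {"provider": "gcp", "id": raw["name"], "cidr": raw["ipCidrRange"]}
    raise ValueError(f"unknown provider: {provider}")

inventory = [
    normalize("aws", {"VpcId": "vpc-0a1b", "CidrBlock": "10.0.0.0/16"}),
    normalize("gcp", {"name": "prod-net", "ipCidrRange": "10.1.0.0/16"}),
]
```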
    <div>
      <h3>Continuous, coordinated automation</h3>
      <a href="#continuous-coordinated-automation">
        
      </a>
    </div>
    <p>Returning to our train system example, what if the railway maintenance staff find a dangerous fault on the railroad track? They manually set the signal to a stop light to prevent any oncoming trains using the faulty section of track. Then, what if, by unfortunate coincidence, the scheduling office is changing the signal schedule, and they set the signals remotely which clears the safety measure made by the maintenance crew? Now there is a problem that no one knows about and the root cause is that multiple authorities can change the signals via different interfaces without coordination.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VNaeoX2TNwYwZsweytYSZ/40806d02108119204f638ed5f111d5d0/image1-10.png" />
            
            </figure><p>The same problem exists in cloud networks: configuration changes are made by different teams using different automation and configuration interfaces across a spectrum of roles such as billing, support, security, networking, firewalls, database, and application development.</p><p>Once your network is deployed, Magic Cloud Networking monitors its configuration and health, enabling you to be confident that the security and connectivity you put in place yesterday is still in place today. It tracks the cloud resources it is responsible for, automatically reverting drift if they are changed out-of-band, while allowing you to manage other resources, like storage buckets and application servers, with other automation tools. And, as you change your network, Cloudflare takes care of route management, injecting and withdrawing routes globally across Cloudflare and all connected cloud provider networks.</p><p>Magic Cloud Networking is fully programmable via API, and can be integrated into existing automation toolchains.</p>
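Conceptually, the drift handling described above is a reconcile loop: compare intended configuration against observed state, and re-apply the intent where they differ. A minimal sketch (the state shapes and names are illustrative, not Magic Cloud Networking’s interfaces):

```python
# Sketch: a drift-check-and-revert loop in miniature. Desired state comes
# from the management plane; actual state is read from the cloud provider.

def detect_drift(desired: dict, actual: dict) -> dict:
    """Keys whose actual value differs from (or is missing vs.) desired."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def revert(actual: dict, drift: dict) -> dict:
    """Re-apply the desired values for drifted keys."""
    return {**actual, **drift}

desired = {"route_10/8": "vpn-gw-1", "sg_ssh": "deny"}
actual  = {"route_10/8": "vpn-gw-1", "sg_ssh": "allow"}   # out-of-band change

drift = detect_drift(desired, actual)
actual = revert(actual, drift)   # back to intended state
```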
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3h360ewBWWCRUjhqjrm6wF/5006f8267a880b98ccbe9bfc91cb9029/image7-1.png" />
            
            </figure><p><i>The interface warns when cloud network infrastructure drifts from intent</i></p>
    <div>
      <h2>Ready to start conquering cloud networking?</h2>
      <a href="#ready-to-start-conquering-cloud-networking">
        
      </a>
    </div>
    <p>We are thrilled to introduce Magic Cloud Networking as another pivotal step to fulfilling the promise of the <a href="https://www.cloudflare.com/connectivity-cloud/">Connectivity Cloud</a>. This marks our initial stride in empowering customers to seamlessly integrate Cloudflare with their public clouds to get securely connected, stay securely connected, and gain flexibility and cost savings as they go.</p><p>Join us on this journey for early access: learn more and sign up <a href="https://cloudflare.com/lp/cloud-networking/">here</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/610Vl5u7JVsnszRAmQz0Yt/3bb2a75f47826c1c1969c1d9b0c1db8d/image4-10.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[EC2]]></category>
            <category><![CDATA[Google Cloud]]></category>
            <category><![CDATA[Microsoft Azure]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Multi-Cloud]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Magic WAN]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Acquisitions]]></category>
            <guid isPermaLink="false">2qMDjBOoY9rSrSaeNzUDzL</guid>
            <dc:creator>Steve Welham</dc:creator>
            <dc:creator>David Naylor</dc:creator>
        </item>
        <item>
            <title><![CDATA[AWS’s Egregious Egress]]></title>
            <link>https://blog.cloudflare.com/aws-egregious-egress/</link>
            <pubDate>Fri, 23 Jul 2021 13:01:00 GMT</pubDate>
            <description><![CDATA[ Amazon’s mission statement says: “We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience.” When it comes to egress, their prices are far from the lowest. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When web hosting services first emerged in the mid-1990s, you paid for everything on a separate meter: bandwidth, storage, CPU, and memory. Over time, customers grew to hate the nickel-and-dime nature of these fees. The market evolved to a fixed-fee model. Then came Amazon Web Services.</p><p>AWS was a huge step forward in terms of flexibility and scalability, but a massive step backward in terms of pricing. Nowhere is that more apparent than with their data transfer (bandwidth) pricing. If you look at the (ironically named) <a href="https://calculator.s3.amazonaws.com/index.html#key=files/calc-917de608a77af5087bde42a1caaa61ea1c0123e8&amp;v=ver20210710c7">AWS Simple Monthly Calculator</a> you can calculate the price they charge for bandwidth for their typical customer. The price varies by region, which shouldn't surprise you because the cost of transit is dramatically different in different parts of the world.</p>
    <div>
      <h3>Charging for Stocks, Paying for Flows</h3>
      <a href="#charging-for-stocks-paying-for-flows">
        
      </a>
    </div>
    <p>AWS charges customers based on the amount of data delivered — 1 terabyte (TB) per month, for example. To visualize that, imagine data is water. AWS fills a bucket full of water and then charges you based on how much water is in the bucket. This is known as charging based on “stocks.”</p><p>On the other hand, AWS pays for bandwidth based on the capacity of their network. The base unit of wholesale bandwidth is priced as one Megabit per second per month (1 Mbps). Typically, a provider like AWS will pay a monthly fee for bandwidth based on the number of Mbps that their network uses at its peak capacity. So, extending the analogy, AWS doesn't pay for the amount of water that ends up in their customers' buckets, but rather for capacity based on the diameter of the “hose” used to fill them. This is known as paying for “flows.”</p>
    <div>
      <h3>Translating Flows to Stocks</h3>
      <a href="#translating-flows-to-stocks">
        
      </a>
    </div>
    <p>You can translate between flow and stock pricing by knowing that a 1 Mbps connection (think of it as the "hose") can transfer 0.3285 TB (328GB) if utilized to its fullest capacity over the course of a month (think of it as running the "hose" at full capacity to fill the "bucket" for a month).<sup>1</sup> AWS obviously has more than 1 Mbps of capacity — they can certainly transfer more than 0.3285 TB per month — but you can use this as the base unit of their bandwidth costs, and compare it against what they charge a customer to deliver 1 Terabyte (1TB), in order to figure out the AWS bandwidth markup.</p><p>One more subtlety, to be as accurate as possible: wholesale bandwidth is also billed at the 95th percentile, which effectively cuts off the peak hour or so of use every day. That means a 1 Mbps connection running at 100% can actually transfer closer to 0.3458 TB (346GB) per month.</p><p>Two more factors are important: utilization and regional costs. AWS can't run all their connections at 100% utilization 24x7 for a month. Instead, they'll have some average utilization per transit connection in any month. It’s reasonable to estimate that they run at between 20% and 40% average utilization, a typical range for the industry. The higher their utilization, the more efficient they are, the lower their costs, and the higher their effective customer markup will be.</p><p>To be conservative, we’ve assumed that AWS’s average utilization is the bottom of that range (20%), but you can <a href="https://docs.google.com/spreadsheets/d/11InPX3PDntyC1-F-tWeNwpoztYozf7JeC7OE6iZrc_0/edit?usp=sharing">download the raw data</a> and adjust the assumptions however you think makes sense.</p><p>We have a good sense of the wholesale prices of bandwidth in different regions around the world based on what Cloudflare sees in the market when we buy bandwidth ourselves. We’d imagine AWS gets pricing at least as good as ours. We’ve included a rough estimate of these prices in the calculation, rounding up on the wholesale price wherever there was a question (which makes AWS look better).</p>
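<p>The flow-to-stock arithmetic above reduces to a few lines of code. This is a minimal sketch; the function names, the 20% default utilization, and any prices passed in are illustrative placeholders, not Cloudflare's actual cost data:</p>

```javascript
// Convert a 1 Mbps "flow" into the "stock" of data it can move in a month.
const SECONDS_PER_MONTH = 730 * 3600; // 730 hours on average per month

function tbPerMonth(mbps, utilization = 1.0) {
  const bits = mbps * 1e6 * SECONDS_PER_MONTH * utilization;
  return bits / 8 / 1e12; // bits -> bytes -> terabytes
}

// Effective markup: what a provider charges per TB delivered versus an
// assumed wholesale cost per Mbps-month. Plug in real regional prices
// to reproduce the per-region table.
function markup(egressPricePerTB, wholesalePerMbpsMonth, utilization = 0.2) {
  const wholesalePerTB = wholesalePerMbpsMonth / tbPerMonth(1, utilization);
  return egressPricePerTB / wholesalePerTB;
}

console.log(tbPerMonth(1).toFixed(4)); // 0.3285 TB, matching footnote 1
```

Lower utilization raises the effective wholesale cost per TB, which is why the 20% assumption is the conservative (AWS-favoring) end of the range.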
    <div>
      <h3>Massive Markups</h3>
      <a href="#massive-markups">
        
      </a>
    </div>
    <p>Based on these assumptions, here's our best estimate of AWS’s effective markup for egress bandwidth on a per-region basis.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1qM5lTL8RfJFVfqsECMu4V/07bc22f4964beed98ce88a8fcc55ac72/image4-12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4rt8aZ0kxhYsucX3CbVUQg/3885c745e5f4b57c6964b5ba77ef5e0e/image1-19.png" />
            
            </figure><p>Don’t rest easy, South Korea, with your <i>merely</i> 357% markup. The general rule of thumb appears to be that the older a market is, the more Amazon wrings from its customers in egregious <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress markups</a> — and the Seoul availability zone is only a bit over four years old. Winter, unfortunately, inevitably seems to come to AWS customers.</p>
    <div>
      <h3>AWS Stands Alone In Not Passing On Savings to Customers</h3>
      <a href="#aws-stands-alone-in-not-passing-on-savings-to-customers">
        
      </a>
    </div>
    <p>Remember, this is for the transit bandwidth that AWS is paying for. For the bandwidth that they exchange with a network like Cloudflare, where they are directly connected (settlement-free peered) over a private network interface (PNI), there are no meaningful incremental costs and their effective margins are nearly infinite. Add in the effect of rebates Amazon collects from colocation providers who charge cross connect fees to customers, and the effective markup is likely even higher.</p><p>Some other cloud providers take into account that their costs are lower when passing over peering connections. Both Microsoft Azure and Google Cloud will substantially discount egress charges for their mutual Cloudflare customers. Members of the <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a> — Alibaba, Automattic, Backblaze, Cherry Servers, Dataspace, DNS Networks, DreamHost, HEFICED, Kingsoft Cloud, Liquid Web, Scaleway, Tencent, Vapor, Vultr, Wasabi, and Zenlayer — waive bandwidth charges for mutual Cloudflare customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Ans32VnbvStvYPES35y/9427ae5824ed7fd3de4dcf40056ca4b9/image6-9.png" />
            
            </figure><p>At this point, the majority of hosting providers in the industry either substantially discount or entirely waive <a href="https://www.cloudflare.com/the-net/cloud-egress-fees-challenge-future-ai/">egress fees</a> when sending traffic from their network to a peer like Cloudflare. AWS is the notable exception in the industry. It's worth noting that we invited AWS to be a part of the Bandwidth Alliance, and they politely declined.</p><p>It seems like a no-brainer that if we’re not paying for the bandwidth costs, and the hosting provider isn’t paying for the bandwidth costs, customers shouldn’t be charged for the bandwidth costs at the same rate as if the traffic was being sent over the public Internet. Unfortunately, Amazon’s supposed obsession over doing the right thing for customers doesn’t extend to egress charges.</p>
    <div>
      <h3>Artificially Held High</h3>
      <a href="#artificially-held-high">
        
      </a>
    </div>
    <p>Amazon’s mission statement is: “We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience.” And yet, when it comes to egress, their prices are far from the lowest possible.</p><p>During the last ten years, industry wholesale transit prices have fallen an average of 23% annually. Compounded over that time, wholesale bandwidth is 93% less expensive than 10 years ago. However, AWS’s egress fees over that same period have fallen by only 25%.</p><p>And, since 2018, the egress fees AWS charges in North America and Europe have not dropped a penny even as wholesale prices in those markets over the same time period have fallen by more than half.</p>
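<p>The compounding claim above is easy to check; a minimal sketch, using the 23% annual decline and ten-year window from the paragraph above:</p>

```javascript
// Fraction of the original price remaining after compounding an
// annual decline over a number of years.
function remainingFraction(annualDecline, years) {
  return Math.pow(1 - annualDecline, years);
}

const left = remainingFraction(0.23, 10); // ≈ 0.073
console.log(`wholesale transit now costs ${(left * 100).toFixed(1)}% of its price a decade ago`);
```

0.77 raised to the tenth power leaves about 7% of the original price, i.e. wholesale bandwidth is roughly 93% less expensive than ten years ago.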
    <div>
      <h3>AWS’s Hotel California Pricing</h3>
      <a href="#awss-hotel-california-pricing">
        
      </a>
    </div>
    <p>Another oddity of AWS’s pricing is that they charge for data transferred out of their network but not for data transferred into their network. If the only time you’ve paid for bandwidth is with your residential Internet connection, then this may make some sense. Because of some technical limitations of the cable network, download bandwidth is typically higher than upload bandwidth on cable modem connections. But that’s not how wholesale bandwidth is bought or sold.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6a6sRKh6RUWexw1melI1so/eca2ebb408ead8b465dc751d54d25727/image5-10.png" />
            
            </figure><p>Wholesale bandwidth isn’t like your home cable connection. Instead, it’s symmetrical. That means that if you purchase a 1 Mbps (1 Megabit per second) connection, then you have the capacity to send 1 Megabit out and receive another 1 Megabit in every second. If you receive 1 Mbps in <i>and simultaneously</i> send 1 Mbps out, you pay the same price as if you receive 1 Mbps in and 0 Mbps out, or 0 Mbps in and 1 Mbps out. In other words, ingress (data sent to AWS) doesn’t cost them any more or less than egress (data sent from AWS). And yet, they charge customers more to take data out than to put it in. It’s a head scratcher.</p><p>We’ve tried to be charitable in trying to understand why AWS would charge this way. Disappointingly, there just doesn't seem to be an innocent explanation. As we dug in, even things like writes versus reads and the wear they put on storage media, as well as the challenges of capacity planning for storage capacity, suggest that AWS should charge less for egress than ingress.</p><p>But they don’t.</p><p>The only rationale we can reasonably come up with for AWS’s egress pricing: locking customers into their cloud, and making it prohibitively expensive to get customer data back out. So much for being customer-first.</p>
    <div>
      <h3>But… But… But…</h3>
      <a href="#but-but-but">
        
      </a>
    </div>
    <p>AWS may object that this doesn't take into account the cost of things like metro dark fiber between data centers, amortized optical and other networking equipment, and cross connects. In our experience, those costs amount to a rounding error of less than one cent per Mbps when operating at AWS-like scale. And these prices have been falling at a similar rate to the decline in the price of bandwidth over the past 10 years. Yet AWS’s egress prices have barely budged.</p><p>All the data above is derived from what’s published on AWS’s simple pricing calculator. There’s no doubt that some large customers are able to negotiate lower prices. But these are the prices charged to small businesses and startups by default. And, when we’ve reviewed pricing even with large AWS customers, the egress fees remain egregious.</p>
    <div>
      <h3>It’s Not Too Late!</h3>
      <a href="#its-not-too-late">
        
      </a>
    </div>
    <p>We have a lot of mutual customers who use Cloudflare and AWS. They’re a great service, and we want to support our mutual customers and provide services in a way that meets their needs and is always as secure, fast, reliable, and efficient as possible. We remain hopeful that AWS will do the right thing, lower their egress fees, join the Bandwidth Alliance — following the lead of the majority of the rest of the hosting industry — and pass along savings from peering with Cloudflare and other networks to all their customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bTWj1lyCvJKiTwIQU7mqq/a9389d3240f8959ecc37002acf342259/image3.gif" />
            
            </figure><p>.......</p><p><sup>1</sup>Here’s the calculation to convert a 1 Mbps flow into TB stocks: 1 Mbps @ 100% for 1 month = (1 million bits per second) * (60 seconds / minute) * (60 minutes / hour) * (730 hours on average/month) divided by (eight bits / byte) divided by 10^12 (to convert bytes to Terabytes) = 0.3285 TB/month.</p> ]]></content:encoded>
            <category><![CDATA[Bandwidth Alliance]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[AWS]]></category>
            <guid isPermaLink="false">2eK8lAEdsHVrkxAqbqb9OH</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Nitin Rao</dc:creator>
        </item>
        <item>
            <title><![CDATA[Comparing Serverless Performance for CPU Bound Tasks]]></title>
            <link>https://blog.cloudflare.com/serverless-performance-with-cpu-bound-tasks/</link>
            <pubDate>Mon, 09 Jul 2018 06:08:00 GMT</pubDate>
            <description><![CDATA[ This post is a part of an ongoing series comparing the performance of Cloudflare Workers with other Serverless providers. In our past tests we intentionally chose a workload which imposes virtually no CPU load (returning the current time).  ]]></description>
            <content:encoded><![CDATA[ <p>This post is a part of an ongoing series comparing the performance of Cloudflare Workers with other Serverless providers. In our <a href="/serverless-performance-comparison-workers-lambda/">past tests</a> we intentionally chose a workload which imposes virtually no CPU load (returning the current time). For these tests, let's look at something which pushes hardware to the limit: cryptography.</p><p>tl;dr <b>Cloudflare Workers are seven times faster than a default Lambda function for workloads which push the CPU.</b> Workers are six times faster than Lambda@Edge, tested globally.  <a href="https://twitter.com/share?url=https%3A%2F%2Fblog.cloudflare.com%2Fserverless-performance-with-cpu-bound-tasks%2F&amp;text=%22For%20workloads%20which%20push%20the%20CPU%2C%20AWS%20Lambda%20runs%20at%201%2F7%20the%20speed%20of%20a%20%40Cloudflare%20Worker.%22&amp;hashtags=serverless"></a></p>
    <div>
      <h3>Slow Crypto</h3>
      <a href="#slow-crypto">
        
      </a>
    </div>
    <p>The <a href="https://en.wikipedia.org/wiki/PBKDF2">PBKDF2</a> algorithm is designed to be slow to compute. It's used to hash passwords; its slowness makes it harder for a password cracker to do their job. Its extreme CPU usage also makes it a good benchmark for the CPU performance of a service like Lambda or Cloudflare Workers.</p><p>We've written <a href="https://github.com/cloudflare/worker-performance-examples/tree/master/pbkdf2">a test</a> based on the Node Crypto (Lambda) and the WebCrypto (Workers) APIs. Our Lambda is deployed with the default 128MB of memory behind an API Gateway in us-east-1; our Worker is, as always, deployed around the world. I also have our function running in a Lambda@Edge deployment to compare its performance as well. Again, we're using <a href="http://www.catchpoint.com/">Catchpoint</a> to test from hundreds of locations around the world.</p><p>And again, Lambda is slow:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jNqfJtyBkD0YavRhzRknA/371f0447ad5d4496443eb29c467fb1e1/Screen-Shot-2018-07-06-at-2.01.22-PM.png" />
            
            </figure><p>This chart shows what percentage of the requests made in the past 24 hours were faster than a given number of ms:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/z3UkZmR4BWzqENJWYz8aV/d1b1e7c366e3152313cb4522b4596f86/Screen-Shot-2018-06-29-at-5.57.09-PM.png" />
            
            </figure><p><b>The 95th percentile speed of Workers is 242ms. Lambda@Edge is 842ms (2.4x slower), Lambda 1346ms (4.6x slower).</b></p><p>Lambdas are billed based on the amount of memory they allocate and the number of CPU ms they consume, rounded up to the nearest hundred ms. Running a function which consumes 800ms of compute will cost me $1.86 per million requests. Workers is $0.50/million flat. Obviously, even beyond the cost, my users would have a pretty terrible experience waiting over a second for responses.</p><p>As I said, Workers run in almost 160 locations around the world while my Lambda is only running in Northern Virginia. This is something of an unfair comparison, so let's look at just tests in Washington DC:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6bTav5tWtKA2Y3oolQpWK9/a23a68133f5700fa9b0cd2e8568cb7a9/Screen-Shot-2018-06-29-at-6.10.12-PM.png" />
            
            </figure><p>Unsurprisingly the Lambda and Lambda@Edge performance evens out to show us exactly what speed Amazon throttles our CPU to. <b>At the 50th percentile, Lambda processors appear to be 6-7x slower than the mid-range server cores in our data centers.</b>  <a href="https://twitter.com/share?url=https%3A%2F%2Fblog.cloudflare.com%2Fserverless-performance-with-cpu-bound-tasks%2F&amp;text=AWS%20Lambda%20processors%20are%206-7x%20slower%20than%20a%20mid-range%20server.&amp;hashtags=serverless"></a></p>
    <div>
      <h3>Memory</h3>
      <a href="#memory">
        
      </a>
    </div>
    <p>What you might not realize is that the power of the CPU your Lambda gets is a function of how much memory you allocate for it. As you ramp up the memory you get more performance. To test this we allocated a function with 1024MB of memory:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UGvKeq1zSLLU8jEV48KEB/ffa7768d9b2db73cc662af2198ae25a7/Screen-Shot-2018-07-02-at-3.56.08-PM.png" />
            
            </figure><p>Again, these are only tests emanating from Washington DC, and I have filtered points over 500ms.</p><p>The performance of both the Lambda and Lambda@Edge tests with 1024MB of memory is now much closer, but remains roughly half that of Workers. Even worse, running my new Lambda through the somewhat byzantine pricing formula reveals that 100ms (the minimum) of my 1024MB Lambda will cost me the exact same $1.86 I was paying before. <b>This makes Lambda 3.7x more expensive than Workers on a per-cycle basis.</b></p><p>Adding more memory than that only adds cost, not speed. Here is a 3008MB instance (the max I can allocate):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ke4fwAlTu91kv5V61tt0d/da50864768a73c57eb18834b966f95b9/Screen-Shot-2018-06-29-at-6.28.58-PM.png" />
            
            </figure><p>This Lambda would be costing me $5.09 per million requests, over 10x more than a Worker, but would still provide less CPU.</p><p>In general, our supposition is that a 128MB Lambda buys you 1/8th of a CPU core. This scales up as you elect to purchase Lambdas with more memory, until it hits the performance of one of their cores (minus hypervisor overhead), at which point it levels off.</p><p>Ultimately, Amazon is charging based on the duration of execution, but on top of a much slower execution platform than we expected.</p>
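<p>The pricing formula can be sketched as follows, assuming the public 2018-era rates of $0.20 per million requests plus roughly $0.00001667 per GB-second, billed in 100ms increments (treat these rates as assumptions, since AWS pricing changes):</p>

```javascript
// Cost per million Lambda invocations under 2018-era public rates.
const PER_MILLION_REQUESTS = 0.20;  // request fee, USD
const PER_GB_SECOND = 0.00001667;   // compute fee, USD

function lambdaCostPerMillion(memoryMB, durationMs) {
  const billedMs = Math.ceil(durationMs / 100) * 100; // rounded up to 100ms
  const gbSeconds = (memoryMB / 1024) * (billedMs / 1000);
  return PER_MILLION_REQUESTS + gbSeconds * PER_GB_SECOND * 1e6;
}

console.log(lambdaCostPerMillion(128, 800).toFixed(2));  // ≈ $1.87/million, near the article's $1.86
console.log(lambdaCostPerMillion(1024, 100).toFixed(2)); // same GB-seconds, so the same bill
console.log(lambdaCostPerMillion(3008, 100).toFixed(2)); // ≈ $5.10, near the article's $5.09
```

Note how 128MB for 800ms and 1024MB for 100ms both burn 0.1 GB-seconds: eight times the memory at one eighth the duration costs exactly the same, which is why the faster Lambda doesn't save any money.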
    <div>
      <h3>Native Code</h3>
      <a href="#native-code">
        
      </a>
    </div>
    <p>One response we saw to our <a href="/serverless-performance-comparison-workers-lambda/">last post</a> was that Lambda supports running native binaries and that's where its real performance will be exhibited.</p><p>It is of course true that our Javascript tests are likely (definitely in the case of Workers) just calling out to an underlying compiled cryptography implementation. But I don't know the details of how Lambda is implemented, perhaps the critics have a point that native binaries could be faster. So I decided to extend my tests.</p><p>After beseeching a couple of <a href="https://github.com/harrishancock">my</a> <a href="https://twitter.com/terinjokes">colleagues</a> who have more recent C++ experience than I do, I ended up with <a href="https://github.com/cloudflare/worker-performance-examples/blob/master/pbkdf2/lambda_proc/pbkdf2.cpp">a Lambda</a> which executes the BoringSSL implementation of PBKDF2 in plain C++14. The results are utterly boring:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sKgUpefCdxU9n6f837dJh/305de4bcdda08618e38e1c44fffeadb9/Screen-Shot-2018-07-05-at-4.05.30-PM.png" />
            
            </figure><p>A Lambda executing native code is just as slow as one executing Javascript.</p>
    <div>
      <h3>Java</h3>
      <a href="#java">
        
      </a>
    </div>
    <p>As I mentioned, irrespective of the language, all of this cryptography is likely calling the same highly optimized C implementations. In that way it might not be an accurate reflection of the performance of our chosen programming languages (but is an accurate reflection of CPU performance). Many people still believe that code written in languages like Java is faster than Javascript in V8. I decided to disprove that as well, and I was willing to sacrifice to the point where I installed the JDK, and resigned myself to a lifetime of update notifications.</p><p>To test application-level performance I dusted off my interview chops and wrote a <a href="https://github.com/cloudflare/worker-performance-examples/tree/master/primes">naïve prime factorization function</a> in both Javascript and Java. I won't weigh in on the wisdom of using new vs old guard languages, but I will say it doesn't buy you much with Lambda:</p>
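<p>The benchmark code is linked above; for flavor, a naïve trial-division factorization along those lines might look like this in Javascript (a sketch, not the exact function used in the tests):</p>

```javascript
// Naive prime factorization by trial division -- deliberately CPU-bound,
// in the spirit of the interview-style benchmark described above.
function primeFactors(n) {
  const factors = [];
  let remaining = n;
  for (let d = 2; d * d <= remaining; d++) {
    while (remaining % d === 0) { // divide out each prime as it is found
      factors.push(d);
      remaining /= d;
    }
  }
  if (remaining > 1) factors.push(remaining); // leftover factor is prime
  return factors;
}

console.log(primeFactors(600851475143)); // → [71, 839, 1471, 6857]
```

There is nothing clever here, which is the point: the loop is pure arithmetic, so throughput tracks the raw speed of whatever CPU slice the platform hands you.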
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78B3RY4suwdHl9EsSdWKzp/480cf98d108426b61c5423490374aa04/Screen-Shot-2018-07-06-at-11.27.28-AM.png" />
            
            </figure><p>This is charting two Lambdas, one with 128MB of memory, the other 1024MB. The tests are all from Washington, DC (near both of our Lambdas) to eliminate the advantage of our Worker's global presence. The 128MB instance has a 95th percentile wait time of 1745ms.</p><p>When we look at tests which originate all around the globe, where billions of Internet users browse every day, we get the best example of the power of a Worker's deployment yet:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1j72aT30QE6dxefeyoUURp/16225568df441f4b8ec15913dd4af345/Screen-Shot-2018-07-06-at-12.29.22-PM.png" />
            
            </figure><p>When we exclude the competitors from our analysis we are able to use the same techniques to examine the performance of the Workers platform itself and uncover more than a few opportunities for improvement and optimization. That's what we're focused on: making Workers even faster, more powerful, and easier to use.</p><p>As you might imagine given these results, the Workers team is growing. If you'd like to work on this or other audacious projects at Cloudflare, <a href="https://boards.greenhouse.io/cloudflare/jobs/1028650?gh_jid=1028650">reach out</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6sftBF39cE36rebBIdId4z</guid>
            <dc:creator>Zack Bloom</dc:creator>
        </item>
        <item>
            <title><![CDATA[Thoughts on the AWS outage: making the cloud more resilient to failure]]></title>
            <link>https://blog.cloudflare.com/thoughts-on-the-aws-outage-the-failure-charac/</link>
            <pubDate>Sat, 30 Jun 2012 19:56:00 GMT</pubDate>
            <description><![CDATA[ A huge storm rolled across the eastern United States last night, toppling trees and knocking out power. Amazon Web Services (AWS) had one of their primary data centers in Virginia lose power.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>A huge storm rolled across the eastern United States last night, toppling trees and knocking out power. Amazon Web Services (AWS) had one of their primary data centers in Virginia lose power. While data centers typically have backup generators for when they lose power from the grid, it appears something in the backup systems failed and AWS's EC2 Northern Virginia region went offline. That took down much of Netflix, Pinterest, Instagram, and other services that rely on Amazon's cloud hosting service.</p><p>My favorite comment on the incident came from Phil Kaplan (<a href="https://twitter.com/pud">@pud</a>) who <a href="https://twitter.com/pud/status/218952182182064128">tweeted</a>: "The cloud is no match for the clouds." It got me thinking about the different types of "cloud" services, their different sensitivities to failure, and how they can be made more resilient.</p>
    <div>
      <h3>Cumulus, Stratus, Cirrus, Nimbus</h3>
      <a href="#cumulus-stratus-cirrus-nimbus">
        
      </a>
    </div>
    <p>There are a lot of different products that call themselves "cloud" services. What that means, however, is very different from one service to another. For example, Salesforce.com was among the first to trumpet the benefits of the cloud. In their case, they were comparing themselves against traditional customer relationship management (CRM) systems that required you to run your own database and maintain your own hardware. In Salesforce.com's case, "cloud" means you can run a specialized application (CRM) on someone else's equipment and pay for it as a service.</p><p>For their core CRM product, Salesforce.com runs their own hardware in multiple locations around the world. However, Salesforce.com purchased another "cloud" service provider called Heroku. Heroku was originally built as a platform to run applications written for the Ruby programming language. It has expanded over time to provide support for other languages including Java, Node.js, Scala, Clojure and Python. Where Salesforce.com's original cloud service allowed you to run their CRM application as a service, Heroku lets you run any application you want from their managed platform.</p><p>Salesforce.com runs on the company's own servers, but Heroku runs atop Amazon's AWS service. In other words, Heroku provides a cloud service that makes it easier to write and deploy your own applications, but they use someone else's infrastructure to deploy it. Before everyone started calling all these "cloud" services, the analysts gave them more specific names that started with a letter and always ended with "aaS." Salesforce.com was Software as a Service (SaaS), Heroku was Platform as a Service (PaaS), and AWS was Infrastructure as a Service (IaaS).</p><p>I'd add a further distinction: the three cloud services I've mentioned so far are all what I'd call Data &amp; Application (D&amp;A) cloud services. In one way or another, they let you store and process data without having to think about the underlying hardware. They may all be cloud services, but they are very different from what we're building at CloudFlare (more on that in a bit).</p>
    <div>
      <h3>Servers All the Way Down</h3>
      <a href="#servers-all-the-way-down">
        
      </a>
    </div>
    <p>In Hindu mythology, there is a story about how the world is supported on the back of a giant turtle. Stephen Hawking's book <i>A Brief History of Time</i> included an anecdote about a scientist giving a lecture to the public on the structure of the universe:</p><blockquote><p>At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise." The scientist gave a superior smile before replying, "What is the tortoise standing on?" "You're very clever, young man, very clever," said the old lady. "But it's turtles all the way down!"</p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7M1uT1Jsdz95kM0AUmobfX/917e5f6275ed0630eae7a5c43fa85e4b/turtles_all_the_way_down.jpg.scaled500.jpg" />
            
            </figure><p>While it's easy to forget with the abstractions provided by these services, under all these clouds are servers, switches, and routers. If you're using Salesforce.com for CRM and your company adds a large number of new customers, you don't need to think about adding more drives or servers to scale up. Instead, Salesforce.com handles the process of adding capacity across its hardware. If you're developing on top of a cloud service like AWS's EC2, as your application scales you can "spin up" new instances to provide more computational power. These instances are fractions of the capacity on a physical server, which may be shared with other EC2 users. Because each EC2 customer only uses whatever is necessary for their application, the utilization rates across the servers are very high.</p>
    <div>
      <h3>When Clouds Go Boom</h3>
      <a href="#when-clouds-go-boom">
        
      </a>
    </div>
    <p>It is inevitable that the hardware that makes up these clouds will, from time to time, fail. Spinning hard drives crash, memory goes bad, CPUs overheat, routers flake out, or someone disconnects the wrong power circuit, bringing a whole rack of equipment offline. When those pieces of hardware fail, different cloud services will react in different ways.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lvOuxWUqg5Bc6J47J6750/b4cd4f1406591adb8cfa42eda702cf42/storm_clouds.jpg.scaled500.jpg" />
            
            </figure><p>Salesforce.com runs their own hardware and their own software. They have created systems that replicate the application itself across multiple hardware systems. If one system fails, a load balancer switches to a different hardware system to process the request. Customers' data stored with Salesforce.com is also replicated by the software. While I don't know the explicit details of Salesforce.com's redundancy strategy, it's a safe bet that they use RAID to replicate data between multiple disks that are part of a storage array and back up to some long-term storage in case of a major failure. They also likely replicate data between multiple storage arrays within a particular data center and, maybe, replicate the data between data centers.</p><p>Replicating data is relatively easy. Replicating data and keeping it in sync is hard. The problem becomes harder if the locations are geographically separated. The speed of light is very fast, but it still takes a photon of light traveling under perfect conditions nearly <a href="http://www.wolframalpha.com/input/?i=2+*+%28distance+from+Amsterdam+to+san+francisco+in+kilometers+%2F+%28speed+of+light+in+kilometers+per+millisecond%29%29">60ms to roundtrip from San Francisco to Amsterdam</a>. It's slower through the actual fiber and copper cables that make up the Internet, and much, much slower when you take into account the <a href="/the-bandwidth-of-a-boeing-747-and-its-impact">real world performance of the Internet</a>. If two people change the same piece of data in two locations during the latency window between updates, very unpredictable bad things can happen.</p>
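<p>As a back-of-the-envelope sketch of that bound (the ~8,800 km great-circle distance is my approximation, not a figure from the post):</p>

```python
# Lower bound on San Francisco <-> Amsterdam round-trip time, assuming
# light in a vacuum over an approximate great-circle distance of 8,800 km.
# Real fiber paths are longer and light in glass is ~30% slower, so actual
# network round trips are well above this floor.
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1_000  # kilometers per millisecond
DISTANCE_KM = 8_800  # approximate great-circle distance

round_trip_ms = 2 * DISTANCE_KM / SPEED_OF_LIGHT_KM_PER_MS
print(f"{round_trip_ms:.1f} ms")  # just under 60 ms, before any real-world overhead
```

Every synchronous replication round trip between those two cities pays at least that cost, which is why the latency window in the next section can never be engineered away entirely.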
    <div>
      <h3>The Challenge of Being in Sync</h3>
      <a href="#the-challenge-of-being-in-sync">
        
      </a>
    </div>
    <p>For certain systems, replicating data is easier than for others. Compare Google and Twitter. If you're running a search on Google you'll hit one of the company's many geographically distributed data centers and get a set of results. Someone else running a different search hitting a different data center may get slightly different results. Google doesn't promise that everyone will see the same search results. As a result, they have a relatively straightforward data replication problem. The data that makes up Google's index will be "eventually consistent" across all their facilities, but that doesn't harm the underlying application.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hihBJTyiNu2ODfGzMinRy/438f863b2bafacbf155f87372e8f728d/nsync.jpg.scaled500.jpg" />
            
            </figure><p>Twitter, on the other hand, promises that you'll see real time updates from the people you follow. This creates a much more difficult data replication problem and explains why Twitter has a much more centralized infrastructure and continues to experience many more scaling pains. Facebook provides an interesting case study as well. As Facebook has scaled, they have deemphasized real time updates to the timeline in order to make it easier to scale their infrastructure.</p><p>Twitter, Facebook, Google (with their new emphasis on products that require more data synchronization), and a lot of other smart people are working on ways to mitigate the problem of data replication and synchronization, but the speed of light is only so fast and at some level you'll always bump into the laws of physics. What is key, however, is that choosing to host in the cloud alone is not sufficient to ensure your application is fault tolerant. The data and application layer remain difficult to scale, and even with a service like AWS, creating resiliency still requires programmers to make their application servers redundant and replicate their data to the extent possible and practical.</p>
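<p>To make the hazard concrete, here is a minimal, hypothetical sketch (not how any of these companies actually replicate data) of "last write wins" replication. Two replicas accept conflicting writes during the latency window; after syncing they converge, but one update is silently lost:</p>

```python
# Hypothetical last-write-wins replication sketch, for illustration only.
# Each replica tags writes with a timestamp; on sync, the newer write wins.

class Replica:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.ts = 0  # timestamp of the last accepted write

    def write(self, value, ts):
        # Accept the write only if it is at least as new as what we have.
        if ts >= self.ts:
            self.value, self.ts = value, ts

    def sync_from(self, other):
        # Pull the peer's state; the later write (higher timestamp) wins.
        self.write(other.value, other.ts)

sf, ams = Replica("san-francisco"), Replica("amsterdam")

# Two users update the same record on different replicas inside the
# replication latency window:
sf.write("alice@example.com", ts=100)
ams.write("alice@corp.example", ts=101)

# After replication the replicas converge...
sf.sync_from(ams)
ams.sync_from(sf)
assert sf.value == ams.value == "alice@corp.example"
# ...but the ts=100 write is gone forever: "eventually consistent"
# does not mean every update survives.
```

Google's search index can live with this kind of loss; Twitter's real time promises, and most transactional applications, cannot.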
    <div>
      <h3>Front End Layer Scaling</h3>
      <a href="#front-end-layer-scaling">
        
      </a>
    </div>
    <p>While data synchronization makes geographic scaling of the Data &amp; Application layer difficult, there is a part of the web application stack that is a natural candidate for massively distributed scaling: the Front End layer. All web services have a front end. It is the part of the service that receives requests and hands them off to the application to begin churning. The Front End layer also returns the response from the application and databases back to the user who requested it.</p><p>Unlike the Data &amp; Application layer, the Front End layer doesn't need specific knowledge about the application. This means it can be distributed geographically without special application logic or a need for complex data replication strategies. The Front End layer can also tune the response from the Data &amp; Application layer depending on the characteristics of the user. For example, rather than the Data &amp; Application layer changing the presentation of a response based on whether someone is on an iPad or Internet Explorer on a desktop PC, the Front End layer can handle the adjustment.</p><p>The Front End layer can also shield the Data &amp; Application layer from potential threats and attacks. In fact, if you use a routing technique like Anycast to distribute requests geographically, you can isolate attacks or network problems so they only impact a small part of the overall system.</p>
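<p>As an illustrative sketch (the function and matching rules here are hypothetical, not CloudFlare's actual logic), a Front End layer can pick a presentation variant purely from the User-Agent header, with no application knowledge and no data to synchronize:</p>

```python
# Hypothetical Front End device detection: choose a response variant from
# the User-Agent header so the Data & Application layer can serve one
# canonical response. Stateless, so it runs identically in any data center.

def pick_variant(user_agent: str) -> str:
    ua = user_agent.lower()
    if "ipad" in ua or "iphone" in ua:
        return "mobile"
    if "msie" in ua or "trident" in ua:  # classic Internet Explorer tokens
        return "legacy-desktop"
    return "desktop"

print(pick_variant("Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X)"))    # mobile
print(pick_variant("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT)"))  # legacy-desktop
```

Because the function needs no shared state, every edge location can make the same decision independently, which is exactly what makes the Front End layer safe to distribute.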
    <div>
      <h3>Front End In the Cloud FTW</h3>
      <a href="#front-end-in-the-cloud-ftw">
        
      </a>
    </div>
    <p>While there remains hesitation in some quarters to turn over the Data &amp; Application layer to the cloud, moving the Front End layer to the cloud is a no-brainer. That, of course, is exactly what we've built at CloudFlare: a scalable front end layer that can run in front of any web application to help it better scale. What's powerful is that any web site or application can provision CloudFlare simply by making a DNS change. Since the Front End layer doesn't need to synchronize data, CloudFlare can begin working to accelerate and protect web traffic immediately and without any changes to the Data &amp; Application layer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Vrt4GyaYpuCEoicyecNL8/a1fd6f89e1fa768b383e24d64f1322e5/cloudflare_literally.png.scaled500.png" />
            
            </figure><p>Because we've focused on the Front End layer, CloudFlare's scaling and failure characteristics are very favorable compared to traditional Data &amp; Application cloud services. Today, for instance, we had a hardware failure in our San Jose data center. Most customers never noticed because 1) it only impacted a limited number of visitors in the region; and 2) we were able to quickly and gracefully fail the data center out, and traffic automatically shifted to the next closest data center. The logic to make this graceful failover didn't need to be constructed by our customers' programmers at the application layer because the Front End layer doesn't need the same synchronization as the Data &amp; Application layer.</p><p>We've also worked to make sure that our Front End layer continues to serve static content when a customer's Data &amp; Application layer goes down. One of the ways we first get word that AWS or another major host is struggling is when our customers write to us letting us know that they'd be entirely offline if not for our Always Online™ feature. We've got some big improvements to Always Online coming out over the next few weeks which will make the feature even better.</p><p>Going forward, we will continue to make scaling web applications easier by providing services like intelligent load balancing between various Data &amp; Application service providers, both to maximize performance and to ensure availability. Moreover, since every CloudFlare customer gets the benefit of all our data centers, as we continue to build out our network it inherently becomes more resilient to failure. Over the next month, we'll be turning on 9 new <a href="http://www.cloudflare.com/network-map">data center locations</a> to further expand our network. 
While I expect the decision of where to host your Data &amp; Application layer will remain vexing, we're working to make using CloudFlare as your Front End layer a <a href="http://www.cloudflare.com/testimonials#page=aroundtheweb">no-brainer</a>.</p> ]]></content:encoded>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[EC2]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">2zBmNqzC5HWCeDUGIz8wj6</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zone Apex / Naked Domain / Root Domain CNAME Support for Amazon EC2, Google App Engine and Other Cloud Hosts]]></title>
            <link>https://blog.cloudflare.com/zone-apex-naked-domain-root-domain-cname-supp/</link>
            <pubDate>Mon, 16 May 2011 19:34:00 GMT</pubDate>
            <description><![CDATA[ CloudFlare now supports CNAME Flattening, which is a better solution to this same problem. Read more in our knowledge base.  ]]></description>
            <content:encoded><![CDATA[ <p><b>CloudFlare now supports CNAME Flattening, which is a better solution to this same problem. Read more in our knowledge base about </b><a href="https://support.cloudflare.com/hc/en-us/articles/200169056-CNAME-Flattening-RFC-compliant-support-for-CNAME-at-the-root"><b>RFC-compliant support for CNAME at the root</b></a><b>.</b></p><p>One of the challenges of using a service like Amazon Web Services (AWS) Elastic Compute Cloud (EC2) is you need to point your DNS to a CNAME. The problem is the DNS RFC (RFC1033) requires the "zone apex" (sometimes called the "root domain" or "naked domain") to be an "A Record," not a CNAME. This means that with most DNS providers you can set up a subdomain CNAME to point to EC2, but you cannot set up your root domain as a CNAME to point to EC2.</p><p>In other words, you can do this:<a href="http://www.yourdomain.com">www.yourdomain.com</a> CNAME <a href="http://some-id.ec2.amazonaws.com">some-id.ec2.amazonaws.com</a></p><p>But, with most DNS providers (including Amazon's own Route 53), you can't do this:<a href="http://yourdomain.com">yourdomain.com</a> CNAME <a href="http://some-id.ec2.amazonaws.com">some-id.ec2.amazonaws.com</a></p><p>You also cannot reliably point your root A Record to an IP address within the EC2 network since Amazon reserves the right to reallocate the IP address dedicated to your instance. 
While there are some hacks to redirect the root domain to a subdomain like "www", this limitation creates a mess for many people wanting to use cloud service providers (as evidenced by <a href="https://forums.aws.amazon.com/thread.jspa?threadID=26945">several</a> <a href="https://forums.aws.amazon.com/thread.jspa?threadID=55995">threads</a> <a href="https://forums.aws.amazon.com/thread.jspa?threadID=56868">on</a> <a href="https://forums.aws.amazon.com/thread.jspa?threadID=62808">the</a> <a href="https://forums.aws.amazon.com/thread.jspa?threadID=62808">subject</a>).</p><p>Never one to let an RFC stand in the way of a solution to a real problem, we're happy to announce that CloudFlare allows you to set your zone apex to a CNAME. This allows CloudFlare users to host on EC2, Rackspace's Cloud, Google App Engine, or other cloud hosts and use their naked domain (e.g., <a href="http://yourdomain.com">yourdomain.com</a>) without forcing a hack solution to a subdomain (e.g., <a href="http://www.yourdomain.com">www.yourdomain.com</a>). Pick whatever host makes the most sense for you, <a href="https://www.cloudflare.com/plans/">sign up for CloudFlare</a>, and we'll help ensure your site is as fast, secure, and effective as possible.</p><p>And, by the way, using CloudFlare's free service for a website hosted on EC2 typically makes the site about 50% faster worldwide while saving you about 65% off your bandwidth bill! Enjoy.</p> ]]></content:encoded>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[EC2]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">h6gBqSRy1hniuqC2VjeFR</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
    </channel>
</rss>