
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 18:56:42 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare outage on February 20, 2026]]></title>
            <link>https://blog.cloudflare.com/cloudflare-outage-february-20-2026/</link>
            <pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare suffered a service outage on February 20, 2026. A subset of customers who use Cloudflare’s Bring Your Own IP (BYOIP) service saw their routes to the Internet withdrawn via Border Gateway Protocol (BGP). ]]></description>
            <content:encoded><![CDATA[ <p>On February 20, 2026, at 17:48 UTC, Cloudflare experienced a service outage when a subset of customers who use Cloudflare’s Bring Your Own IP (BYOIP) service saw their routes to the Internet withdrawn via Border Gateway Protocol (BGP).</p><p>The issue was not caused, directly or indirectly, by a cyberattack or malicious activity of any kind. It was caused by a change that Cloudflare made to how our network manages IP addresses onboarded through the BYOIP pipeline. This change caused Cloudflare to unintentionally withdraw customer prefixes.</p><p>For some BYOIP customers, this resulted in their services and applications being unreachable from the Internet, causing timeouts and failures to connect across their Cloudflare deployments that used BYOIP. The website for Cloudflare’s recursive DNS resolver (1.1.1.1) returned 403 errors as well. The total duration of the incident was 5 hours and 7 minutes, with most of that time spent restoring prefix configurations to their state prior to the change.</p><p>When we began to observe failures, Cloudflare engineers reverted the change and prefixes stopped being withdrawn. However, before engineers were able to revert the change, ~1,100 BYOIP prefixes had been withdrawn from the Cloudflare network. Some customers were able to restore their own service by using the Cloudflare dashboard to re-advertise their IP addresses. We resolved the incident when we restored all prefix configurations.</p><p>We are sorry for the impact on our customers. We let you down today. This post is an in-depth recounting of exactly what happened and which systems and processes failed. We will also outline the steps we are taking to prevent outages like this from happening again.</p>
    <div>
      <h2>How did the outage impact customers?</h2>
      <a href="#how-did-the-outage-impact-customers">
        
      </a>
    </div>
    <p>This graph shows the number of prefixes advertised by Cloudflare to a BGP neighbor during the incident, which correlates with impact, as prefixes that weren’t advertised were unreachable on the Internet:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QnazHN20Gcf3vLH5r95Cd/c8f42e90f266dd3daeaa308945507024/BLOG-3193_2.png" />
          </figure><p>Of the 6,500 total prefixes advertised to this peer, 4,306 were BYOIP prefixes. These BYOIP prefixes are advertised to every peer and represent all the BYOIP prefixes we advertise globally.</p><p>During the incident, 1,100 prefixes out of the total 6,500 were withdrawn from 17:56 to 18:46 UTC, meaning roughly 25% of the 4,306 BYOIP prefixes were unintentionally withdrawn. We detected impact on one.one.one.one and reverted the change before more prefixes were affected. At 19:19 UTC, we published guidance to customers that they could self-remediate by going to the Cloudflare dashboard and re-advertising their prefixes.</p><p>Cloudflare was able to revert many of the advertisement changes around 20:20 UTC, restoring roughly 800 prefixes. There were still ~300 prefixes that could not be remediated through the dashboard because the service configurations for those prefixes had been removed from the edge due to a software bug. These prefixes were manually restored by Cloudflare engineers at 23:03 UTC.</p><p>This incident did not impact all BYOIP customers because the configuration change was applied iteratively rather than instantaneously across all BYOIP customers. Once the change was found to be causing impact, it was reverted before all customers were affected.</p><p>The impacted BYOIP customers first experienced a behavior called <a href="https://blog.cloudflare.com/going-bgp-zombie-hunting/"><u>BGP Path Hunting</u></a>. In this state, end user connections traverse networks trying to find a route to the destination IP, and the behavior persists until the opened connection times out and fails. Until the prefix is advertised somewhere, customers will continue to see this failure mode. This loop-until-failure scenario affected any product that uses BYOIP for advertisement to the Internet. Additionally, visitors to one.one.one.one, the website for Cloudflare’s recursive DNS resolver, were met with HTTP 403 errors and an “Edge IP Restricted” error message. DNS resolution over the 1.1.1.1 Public Resolver, including DNS over HTTPS, was not affected. A full breakdown of the services impacted is below.</p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Service/Product</span></th>
    <th><span>Impact Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Core CDN and Security Services</span></td>
    <td><span>Traffic was not attracted to Cloudflare, and users connecting to websites advertised on those ranges would have seen failures to connect</span></td>
  </tr>
  <tr>
    <td><span>Spectrum</span></td>
    <td><span>Spectrum apps on BYOIP failed to proxy traffic due to traffic not being attracted to Cloudflare</span></td>
  </tr>
  <tr>
    <td><span>Dedicated Egress</span></td>
    <td><span>Customers who used Gateway Dedicated Egress leveraging BYOIP or Dedicated IPs for CDN Egress leveraging BYOIP would not have been able to send traffic out to their destinations</span></td>
  </tr>
  <tr>
    <td><span>Magic Transit</span></td>
    <td><span>Prefixes for applications protected by Magic Transit were not advertised on the Internet, and end users connecting to those applications would have seen connection timeouts and failures</span></td>
  </tr>
</tbody></table></div><p>There was also a set of customers who were unable to restore service by toggling their prefixes on the Cloudflare dashboard. As engineers began re-announcing prefixes to restore service for these customers, they may have seen increased latency and failures despite their IP addresses being advertised. This was because the addressing settings for some users were removed from edge servers due to an issue in our own software, and the state had to be propagated back to the edge.</p><p>We’re going to get into what exactly broke in our addressing system, but to do that we need to cover a quick primer on the Addressing API, which is the underlying source of truth for customer IP addresses at Cloudflare.</p>
    <div>
      <h2>Cloudflare’s Addressing API</h2>
      <a href="#cloudflares-addressing-api">
        
      </a>
    </div>
    <p>The Addressing API is an authoritative dataset of the addresses present on the Cloudflare network. Any change to that dataset is immediately reflected in Cloudflare's global network. While we are in the process of improving how these systems roll out changes as a part of <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>Code Orange: Fail Small</u></a>, today customers configure their IP addresses by interacting with public-facing APIs, which update a set of databases that trigger operational workflows to propagate the changes to Cloudflare’s edge.</p><p>Advertising and configuring IP addresses on Cloudflare involves several steps:</p><ul><li><p>Customers signal advertisement or withdrawal of IP addresses to Cloudflare via the Addressing API or BGP Control</p></li><li><p>The Addressing API instructs the machines to change the prefix advertisements</p></li><li><p>BGP is updated on the routers once enough machines have received the notification to update the prefix</p></li><li><p>Finally, customers can configure Cloudflare products to use BYOIP addresses via <a href="https://developers.cloudflare.com/byoip/service-bindings/"><u>service bindings</u></a>, which assign products to these ranges</p></li></ul><p>The Addressing API allows us to automate most of the processes surrounding how we advertise or withdraw addresses, but some processes still require manual actions. These manual processes are risky because of their close proximity to Production. As a part of <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>Code Orange: Fail Small</u></a>, one of the goals of remediation was to remove manual actions taken in the Addressing API and replace them with safe workflows.</p>
    <div>
      <h2>How did the incident occur?</h2>
      <a href="#how-did-the-incident-occur">
        
      </a>
    </div>
    <p>The specific piece of configuration that broke was a modification attempting to automate the removal of prefixes from Cloudflare’s BYOIP service, a regular customer request that is handled manually today. Removing this manual process was part of our Code Orange: Fail Small work to push all changes toward safe, automated, health-mediated deployment. Since the list of objects related to a BYOIP prefix can be large, this was implemented as a regularly running sub-task that checks for BYOIP prefixes that should be removed, and then removes them. Unfortunately, this cleanup sub-task queried the API with a malformed request.</p><p>Here is the API query from the cleanup sub-task:</p>
            <pre><code>resp, err := d.doRequest(ctx, http.MethodGet, `/v1/prefixes?pending_delete`, nil)
</code></pre>
            <p>And here is the relevant part of the API implementation:</p>
            <pre><code>	if v := req.URL.Query().Get("pending_delete"); v != "" {
		// ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
		prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
		if err != nil {
			api.RenderError(ctx, w, ErrInternalError)
			return
		}

		api.Render(ctx, w, http.StatusOK, renderIPPrefixAPIResponse(prefixes, nil))
		return
	}
</code></pre>
            <p>Because the client passed pending_delete with no value, Query().Get("pending_delete") returns an empty string (""), so the guard v != "" is not satisfied and the API server falls through to returning all BYOIP prefixes instead of just those queued for deletion. The sub-task treated every returned prefix as pending deletion and began systematically deleting all BYOIP prefixes and their dependent objects, including <a href="https://developers.cloudflare.com/byoip/service-bindings/"><u>service bindings</u></a>, until the impact was noticed and an engineer identified the sub-task and shut it down.</p>
    <div>
      <h3>Why did Cloudflare not catch the bug in our staging environment or testing?</h3>
      <a href="#why-did-cloudflare-not-catch-the-bug-in-our-staging-environment-or-testing">
        
      </a>
    </div>
    <p>Our staging environment contains data that matches Production as closely as possible, but it was not sufficient in this case, and the mock data we relied on to simulate what would occur was also insufficient.</p><p>In addition, while we have tests for this functionality, coverage for this scenario in our testing process and environment was incomplete. Initial testing and code review focused on the BYOIP self-service API journey and completed successfully. While our engineers tested the exact process a customer would have followed, testing did not cover a scenario where the task-runner service would independently execute changes to user data without explicit input.</p>
    <div>
      <h3>Why was recovery not immediate?</h3>
      <a href="#why-was-recovery-not-immediate">
        
      </a>
    </div>
    <p>Affected BYOIP prefixes were not all impacted in the same way, necessitating more intensive data recovery steps. As a part of Code Orange: Fail Small, we are building a system where operational state snapshots can be safely rolled out through health-mediated deployments. In the event something does roll out that causes unexpected behavior, it can be very quickly rolled back to a known-good state. However, that system is not in Production today.</p><p>BYOIP prefixes were in different states of impact during this incident, and each of these different states required different actions:</p><ul><li><p>Most impacted customers only had their prefixes withdrawn. Customers in this configuration could go into the dashboard and toggle their advertisements, which would restore service. </p></li><li><p>Some customers had their prefixes withdrawn and some bindings removed. These customers were in a partial state of recovery where they could toggle some prefixes but not others.</p></li><li><p>Some customers had their prefixes withdrawn and all service bindings removed. They could not toggle their prefixes in the dashboard because there was no <a href="https://developers.cloudflare.com/byoip/service-bindings/"><u>service</u></a> (Magic Transit, Spectrum, CDN) bound to them. These customers took the longest to mitigate, as a global configuration update had to be initiated to reapply the service bindings for all these customers to every single machine on Cloudflare’s edge.</p></li></ul>
    <div>
      <h3>How does this incident relate to Code Orange: Fail Small?</h3>
      <a href="#how-does-this-incident-relate-to-code-orange-fail-small">
        
      </a>
    </div>
    <p>The change we were making when this incident occurred is part of the Code Orange: Fail Small initiative, which is aimed at improving the resiliency of code and configuration at Cloudflare. As a brief primer on the <a href="https://blog.cloudflare.com/fail-small-resilience-plan/"><u>Code Orange: Fail Small</u></a> initiatives, the work can be divided into three buckets:</p><ul><li><p>Require controlled rollouts for any configuration change that is propagated to the network, just like we do today for software binary releases.</p></li><li><p>Change our internal “break glass” procedures and remove any circular dependencies so that we, and our customers, can act fast and access all systems without issue during an incident.</p></li><li><p>Review, improve, and test failure modes of all systems handling network traffic to ensure they exhibit well-defined behavior under all conditions, including unexpected error states.</p></li></ul><p>The change that we attempted to deploy falls under the first bucket. By moving risky, manual changes to safe, automated configuration updates that are deployed in a health-mediated manner, we aim to improve the reliability of the service.</p><p>Critical work was already ongoing, in parallel with the deployed change, to enhance the Addressing API's configuration change support through staged test mediation and better correctness checks. Although these preventative measures weren't fully deployed before the outage, teams were actively working on them when the incident occurred. Following our Code Orange: Fail Small promise to require controlled rollouts of any change into Production, our engineering teams have been reaching deep into all layers of our stack to identify and fix problems. While this outage wasn't itself global, the blast radius and impact were unacceptably large, further reinforcing Code Orange: Fail Small as a priority until we have re-established confidence that all changes to our network are as gradual as possible. Now let’s talk more specifically about improvements to these systems.</p>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    
    <div>
      <h3>API schema standardization</h3>
      <a href="#api-schema-standardization">
        
      </a>
    </div>
    <p>One of the issues in this incident is that the pending_delete flag was interpreted as a free-form string, making it difficult for both client and server to agree on the meaning of the flag’s value. We will improve the API schema to ensure better standardization, which will make it much easier for testing and systems to validate whether an API call is properly formed. This work is part of the third Code Orange workstream, which aims to create well-defined behavior under all conditions.</p>
    <div>
      <h3>Better separation between operational and configured state</h3>
      <a href="#better-separation-between-operational-and-configured-state">
        
      </a>
    </div>
    <p>Today, customers make changes to the addressing configuration that are persisted in an authoritative database, and that database is the same one used for operational actions. This makes manual rollback more challenging because engineers need to restore from database snapshots instead of reconciling desired and actual states. We will redesign the rollback mechanism and database configuration to ensure that we have an easy way to roll back changes quickly, and also to introduce layers between customer configuration and Production.</p><p>We will snapshot the data that we read from the database and apply to Production, and deploy those snapshots in the same way that we deploy all our other Production changes: mediated by health metrics that can automatically stop the deployment if things are going wrong. This means that the next time the database gets changed into a bad state, we can near-instantly revert individual customers (or all customers) to a version that was working.</p><p>While this will temporarily block our customers from being able to make direct updates via our API in the event of an outage, it will mean that we can continue serving their traffic while we work to fix the database, instead of being down for that time. This work aligns with the first and second Code Orange workstreams, which involve fast rollback and safe, health-mediated deployment of configuration.</p>
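    <p>The snapshot-plus-health-check pattern can be sketched as follows. The types, stage count, and health signal here are illustrative stand-ins for the design described above, not Cloudflare's implementation:</p>

```go
package main

import "fmt"

// Snapshot is a point-in-time copy of operational addressing state.
type Snapshot struct {
	Version  int
	Prefixes []string
}

// applyWithHealthCheck rolls a snapshot out in stages. If the health
// signal degrades at any stage, deployment stops and the previous
// known-good snapshot is restored immediately.
func applyWithHealthCheck(current, next Snapshot, stages int, healthy func(stage int) bool, apply func(Snapshot)) error {
	for stage := 1; stage <= stages; stage++ {
		apply(next)
		if !healthy(stage) {
			apply(current) // automatic rollback to the known-good state
			return fmt.Errorf("rolled back to v%d: health check failed at stage %d", current.Version, stage)
		}
	}
	return nil
}

func main() {
	good := Snapshot{Version: 1, Prefixes: []string{"198.51.100.0/24"}}
	bad := Snapshot{Version: 2} // e.g. a snapshot that dropped its prefixes

	var live Snapshot
	err := applyWithHealthCheck(good, bad, 3,
		func(stage int) bool { return len(live.Prefixes) > 0 }, // toy health signal
		func(s Snapshot) { live = s },
	)
	fmt.Println(err, "| live version:", live.Version)
}
```

    <p>The key property is that the rollback target is a versioned artifact rather than a live database, so reverting is a redeploy of known-good state instead of a manual data-recovery exercise.</p>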
    <div>
      <h3>Better arbitrate large withdrawal actions</h3>
      <a href="#better-arbitrate-large-withdrawal-actions">
        
      </a>
    </div>
    <p>We will improve our monitoring to detect when changes are happening too fast or too broadly, such as withdrawing or deleting BGP prefixes quickly, and disable the deployment of snapshots when this happens. This will form a type of circuit breaker to stop any out-of-control process that is manipulating the database from having a large blast radius, like we saw in this incident.</p><p>We also have some ongoing work to directly monitor that the services run by our customers are behaving correctly, and those signals can also be used to trip the circuit breaker and stop potentially dangerous changes from being applied until we have had time to investigate. This work aligns with the first Code Orange workstream, which involves safe deployment of changes.</p><p>Below is the timeline of events inclusive of deployment of the change and remediation steps: </p>
<div><table><colgroup>
<col></col>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Time (UTC)</span></th>
    <th><span>Status</span></th>
    <th><span>Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>2026-02-05 21:53</span></td>
    <td><span>Code merged into system</span></td>
    <td><span>Broken sub-process merged into code base</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 17:46</span></td>
    <td><span>Code deployed into system</span></td>
    <td><span>Address API release with broken sub-process completes</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 17:56</span></td>
    <td><span>Impact Start</span></td>
    <td><span>Broken sub-process begins executing. Prefix advertisement updates begin propagating and prefixes begin to be withdrawn </span><span>– IMPACT STARTS – </span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:13</span></td>
    <td><span>Cloudflare engaged</span></td>
    <td><span>Cloudflare engaged for failures on one.one.one.one</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:18</span></td>
    <td><span>Internal incident declared</span></td>
    <td><span>Cloudflare engineers continue investigating impact</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:21</span></td>
    <td><span>Addressing API team paged</span></td>
    <td><span>Engineering team responsible for Addressing API engaged and debugging begins</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 18:46</span></td>
    <td><span>Issue identified</span></td>
    <td><span>Broken sub-process terminated by an engineer and regular execution disabled; remediation begins</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 19:11</span></td>
    <td><span>Mitigation begins</span></td>
    <td><span>Cloudflare engineers begin to restore serviceability for prefixes that were withdrawn, while others focus on prefixes that were removed</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 19:19</span></td>
    <td><span>Some prefixes mitigated</span></td>
    <td><span>Customers begin to re-advertise their prefixes via the dashboard to restore service. </span><span>– IMPACT DOWNGRADE –</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 19:44</span></td>
    <td><span>Additional mitigation continues</span></td>
    <td><span>Engineers begin database recovery methods for removed prefixes</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 20:30</span></td>
    <td><span>Final mitigation process begins</span></td>
    <td><span>Engineers complete release to restore withdrawn prefixes that still have existing service bindings. Others are still working on removed prefixes </span><span>– IMPACT DOWNGRADE –</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 21:08</span></td>
    <td><span>Configuration update deploys</span></td>
    <td><span>Engineering begins global machine configuration rollout to restore prefixes that were not self-mitigated or mitigated via previous efforts </span><span>– IMPACT DOWNGRADE –</span></td>
  </tr>
  <tr>
    <td><span>2026-02-20 23:03</span></td>
    <td><span>Configuration update completed</span></td>
    <td><span>Global machine configuration deployment to restore remaining prefixes is completed. </span><span>– IMPACT ENDS –</span></td>
  </tr>
</tbody></table></div><p>We deeply apologize for this incident today and how it affected the service we provide our customers, and also the Internet at large. We aim to provide a network that is resilient to change, and we did not deliver on our promise to you. We are actively making these improvements to ensure improved stability moving forward and to prevent this problem from happening again.</p> ]]></content:encoded>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Incident Response]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">6apSdbZfHEgeIzBwCqn5ob</guid>
            <dc:creator>David Tuber</dc:creator>
            <dc:creator>Dzevad Trumic</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing REACT: Why We Built an Elite Incident Response Team]]></title>
            <link>https://blog.cloudflare.com/introducing-react-why-we-built-an-elite-incident-response-team/</link>
            <pubDate>Thu, 09 Oct 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're launching Cloudforce One REACT, a team of expert security responders designed to eliminate the gap between perimeter defense and internal incident response. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudforce One’s mission is to help defend the Internet. In Q2’25 alone, Cloudflare stopped an average of 190 billion cyber threats every single day. But real-world customer experiences showed us that stopping attacks at the edge isn’t always enough. We saw ransomware disrupt financial operations, data breaches cripple real estate firms, and misconfigurations cause major data losses.</p><p>In each case, the real damage occurred <i>inside</i> networks.</p><p>These internal breaches uncovered another problem: customers had to hand off incidents to separate internal teams for investigation and remediation. Those handoffs created delays and fractured the response. The result was a gap that attackers could exploit. Critical context collected at the edge didn’t reach the teams managing cleanup, and valuable time was lost. Closing this gap has become essential, and we recognized the need to take responsibility for providing customers with a more unified defense.</p><p>Today, <a href="https://www.cloudflare.com/threat-intelligence/"><u>Cloudforce One</u></a> is launching a new suite of <a href="http://cloudflare.com/cloudforce-one/services/incident-response"><u>incident response and security services</u></a> to help organizations prepare for and respond to breaches.</p><p>These services are delivered by <b>Cloudforce One REACT (Respond, Evaluate, Assess, Consult Team)</b>, a group of seasoned responders and security veterans who investigate threats, hunt adversaries, and work closely with executive leadership to guide response and decision-making.

Customers already trust Cloudforce One to provide industry-leading <a href="https://www.cloudflare.com/cloudforce-one/research/"><u>threat intelligence</u></a>, proactively identifying and <a href="https://www.cloudflare.com/threat-intelligence/research/report/cloudflare-participates-in-global-operation-to-disrupt-raccoono365/"><u>neutralizing</u></a> the most sophisticated threats. REACT extends that partnership, bringing our expertise directly to customer environments to stop threats wherever they occur. In this post, we’ll introduce REACT, explain how it works, detail the top threats our team has observed, and show you how to engage our experts directly for support.</p><p>Our goal is simple: to provide an end-to-end<b> security partnership</b>. We want to eliminate the painful gap between defense and recovery. Now, customers can get everything from proactive preparation to decisive incident response and full recovery—all from the partner you already trust to protect your infrastructure.</p><p>It’s time to move beyond fragmented responses and into one unified, powerful defense.</p>
    <div>
      <h2>How REACT works</h2>
      <a href="#how-react-works">
        
      </a>
    </div>
    <p>REACT services consist of two main components: Security advisory services to prepare for incidents and incident response for emergency situations.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NvO487oZA6GrphFGNORGt/a49489f86f7a556dd9fcbffdf42a8b33/image5.png" />
          </figure><p><sup><i>A breakdown of the Cloudforce One incident readiness and response service offerings.</i></sup></p><p>Advisory services are designed to assess and improve an organization's security posture and readiness. These include proactive threat hunting, backed by Cloudflare’s real-time global threat intelligence, to find existing compromises, tabletop exercises to test response plans against simulated attacks, and both incident readiness and maturity assessments to identify and address systemic weaknesses.</p><p>The Incident Response component is initiated during an active security crisis. The team specializes in handling a range of complex threats, including APT and nation-state activity, ransomware, insider threats, and business email compromise. The response is also informed by Cloudflare's threat intelligence and, as a network-native service, allows responders to deploy mitigation measures directly at the Cloudflare edge for faster containment.</p><p>For organizations requiring guaranteed availability, incident response retainers are offered. These retainers provide priority response, the development of tailored playbooks, and ongoing advisory support.</p><p>Cloudflare’s REACT services are vendor-agnostic in their scope. We are making REACT available to both existing Cloudflare customers and non-customers, regardless of their current technology stack, and regardless of whether their environment is on-premise, public cloud, or hybrid.</p>
    <div>
      <h2>What makes Cloudflare's approach different?</h2>
      <a href="#what-makes-cloudflares-approach-different">
        
      </a>
    </div>
    <p>Our new service provides significant advantages over traditional incident response, where engagement and data sharing occur over separate, out-of-band channels. The integration of the service into the platform enables a more efficient and effective response to threats.</p><p>The core differentiators of this approach are:</p><ul><li><p><b>Unmatched threat visibility. </b>With roughly 20% of the web sitting behind Cloudflare's network, Cloudforce One has unique visibility into emerging attacks as they unfold globally. This lets REACT accelerate their investigations and quickly correlate incident details with emerging attack vectors and known adversary tactics.</p></li><li><p><b>Network-native mitigation.</b> The service is designed for network-native response. This allows the team, with customer authorization, to deploy mitigations directly at the Cloudflare edge, such as a <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF rule</u></a> or <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Secure Web Gateway policy</u></a>. This capability reduces the time between threat identification and containment. All response actions are tracked within the dashboard for full visibility.</p></li><li><p><b>Service delivery by proven experts.</b> Cloudforce One is composed of seasoned threat researchers, consultants, and incident responders. The team has a documented history of managing complex security incidents, including nation-state activity and sophisticated financial fraud.</p></li><li><p><b>Vendor-agnostic scope.</b> While managed through the Cloudflare dashboard, the scope of the response is vendor-agnostic. The team is equipped to conduct investigations and coordinate remediation across diverse customer environments, including on-premise, public cloud, and hybrid infrastructures.</p></li></ul>
    <div>
      <h2>Key Threats Seen During Engagements So Far</h2>
      <a href="#key-threats-seen-during-engagements-so-far">
        
      </a>
    </div>
    <p>Analysis of security engagements by the REACT team over the last six months reveals three prevalent and high-impact trends. The data indicates that automated defenses, while critical, must be supplemented by specialized incident response capabilities to effectively counter these specific threats.</p>
    <div>
      <h4><b>High-impact insider threats </b></h4>
      <a href="#high-impact-insider-threats">
        
      </a>
    </div>
    <p>The REACT team has seen a significant number of incidents driven by insiders who use trusted access to bypass typical security controls. These threats are difficult to detect because they often combine technical actions with non-technical motivations. Recent scenarios observed include:</p><ul><li><p>Disgruntled current or former employees using their specialized, trusted access to execute targeted, destructive attacks.</p></li><li><p>Financially motivated insiders compensated by external actors to exfiltrate data or compromise internal systems.</p></li><li><p>State-sponsored operatives gaining trusted, privileged access via fraudulent remote work roles to exfiltrate data, conduct espionage, and steal funds for illicit regime financing.</p></li></ul>
    <div>
      <h4><b>Ransomware</b></h4>
      <a href="#ransomware">
        
      </a>
    </div>
    <p>The REACT team has observed that ransomware continues to be a primary driver of high-severity incidents, posing an existential threat to nearly every sector. Common themes observed include:</p><ul><li><p>Disruption of core operations in the financial sector via hostage-taking of critical systems. </p></li><li><p>Paralysis of business functions and compromise of client data in the real estate industry, leading to significant downtime and regulatory scrutiny.</p></li><li><p>Broad impact across all industry verticals. </p></li></ul><p>Stopping these attacks demands not only robust defenses but also a well-rehearsed recovery plan that cuts time-to-restoration to hours, not weeks.</p>
    <div>
      <h4><b>Application security and supply chain breaches</b></h4>
      <a href="#application-security-and-supply-chain-breaches">
        
      </a>
    </div>
    <p>The REACT team has also seen a significant increase in incidents originating at the application layer. These threats typically manifest in two primary areas: vulnerabilities within an organization’s own custom-developed (‘vibe coded’) applications, and security failures originating in its third-party supply chain:</p><ul><li><p>Vibe coding: the practice of providing natural language prompts to AI models to generate code can produce critical vulnerabilities that threat actors exploit through remote code execution (RCE), memory corruption, and SQL injection.</p></li><li><p>SaaS supply chain risk: a compromise at a critical third-party vendor that exposes sensitive data, such as when attackers used a stolen <a href="https://blog.cloudflare.com/response-to-salesloft-drift-incident/"><u>Salesloft OAuth token</u></a> to exfiltrate customer support cases from their clients' Salesforce instances.</p></li></ul>
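The SQL injection pattern described above can be made concrete with a short, hypothetical sketch. The table, column names, and payload below are illustrative, not taken from any real engagement; the point is the difference between string-interpolated SQL (the kind AI code generators frequently emit) and a bound parameter.

```python
import sqlite3

# Illustrative in-memory database; schema and rows are invented for this demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_vulnerable(name: str):
    # String interpolation lets attacker-controlled input rewrite the query.
    return conn.execute(
        f"SELECT name, role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # A bound parameter is always treated as data, never as SQL syntax.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # dumps every row in the table
print(find_user_safe(payload))        # empty: no user has that literal name
```

The vulnerable version turns the payload into `WHERE name = '' OR '1'='1'`, which matches everything; the parameterized version closes the hole without any input sanitization logic.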
    <div>
      <h2>Integrated directly into your Cloudflare dashboard</h2>
      <a href="#integrated-directly-into-your-cloudflare-dashboard">
        
      </a>
    </div>
    <p>Starting today, Cloudflare Enterprise customers will find a new "Incident Response Services" tab in the Threat intelligence navigation page in the Cloudflare dashboard. This dashboard integration ensures that critical security information and the ability to engage our incident response team are always at your fingertips, streamlining the process of getting expert help when it matters most.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Imz3bhNLw4khcfHhjtvHr/b8d526964688763983b61d588d97b80f/image4.png" />
          </figure><p><sup><i>Screenshot of the Cloudforce One Incident Response Services page in the Cloudflare dashboard</i></sup></p><p>Retainer customers also benefit from a dedicated Under Attack page for contacting the Cloudforce One team during an active incident: a simple "Request Help" button immediately pages our on-call incident responders, getting you the help you need without delay.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4V9Gr3tYWwORVsPhOLByGr/0844aa8e4f5852ad40ead3e52bff0630/image6.png" />
          </figure><p><sup><i>Screenshot of the Under Attack button in the Cloudflare dashboard</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KnOXewLXgkQ6c4AabrNqS/fdb6ff08ac9170391aa7e2a8e0965223/image3.png" />
          </figure><p><sup><i>Screenshot of the Emergency Incident Response page in the Cloudflare dashboard</i></sup></p><p>For proactive needs, you can also easily submit requests for security advisory services through the Cloudflare dashboard: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4R25QIIofrdQe71aOv2pFh/40d1de44dc81cede364b76c5c0d2176a/image2.png" />
          </figure><p><sup><i>Confirmation of the successful service request submission</i></sup></p>
    <div>
      <h2>How to engage with Cloudforce One</h2>
      <a href="#how-to-engage-with-cloudforce-one">
        
      </a>
    </div>
    <p><i>To learn more about REACT, existing Enterprise customers can explore the dedicated Incident Response section in the Cloudflare dashboard. For new inquiries regarding proactive partnerships and retainers, please </i><a href="https://www.cloudflare.com/plans/enterprise/contact/"><i><u>contact Cloudflare sales</u></i></a><i>.

If you are facing an active security crisis and need the REACT team on the ground, </i><a href="https://www.cloudflare.com/under-attack-hotline/"><i><u>please contact us immediately</u></i></a><i>.</i></p> ]]></content:encoded>
            <category><![CDATA[Cloudforce One]]></category>
            <category><![CDATA[Incident Response]]></category>
            <category><![CDATA[Digital Forensics]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <guid isPermaLink="false">75gR5VwIoZW3jysVwZlES5</guid>
            <dc:creator>Chris O’Rourke</dc:creator>
            <dc:creator>Utsav Adhikari</dc:creator>
            <dc:creator>Blake Darché</dc:creator>
            <dc:creator>Jacob Crisp</dc:creator>
            <dc:creator>Trevor Lyness</dc:creator>
        </item>
    </channel>
</rss>