
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 23:39:05 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare incident on October 30, 2023]]></title>
            <link>https://blog.cloudflare.com/cloudflare-incident-on-october-30-2023/</link>
            <pubDate>Wed, 01 Nov 2023 16:39:43 GMT</pubDate>
            <description><![CDATA[ Multiple Cloudflare services were unavailable for 37 minutes on October 30, 2023, due to the misconfiguration of a deployment tool used by Workers KV. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fuf2Zu6hQEVim57ZP2cze/0c4311c85148c448749069bf6cb900a4/Vulnerabilitiy-1.png" />
            
            </figure><p>Multiple Cloudflare services were unavailable for 37 minutes on October 30, 2023. This was due to the misconfiguration of a deployment tool used by Workers KV. This was a frustrating incident, made more difficult by Cloudflare’s reliance on our own suite of products. We are deeply sorry for the impact it had on customers. What follows is a discussion of what went wrong, how the incident was resolved, and the work we are undertaking to ensure it does not happen again.</p><p>Workers KV is our globally distributed key-value store. It is used by both customers and Cloudflare teams alike to manage configuration data, routing lookups, static asset bundles, authentication tokens, and other data that needs low-latency access.</p><p>During this incident, KV returned what it believed was a valid HTTP 401 (Unauthorized) status code instead of the requested key-value pair(s) due to a bug in a new deployment tool used by KV.</p><p>These errors manifested differently for each product depending on how KV is used by each service, with their impact detailed below.</p>
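<p>To make the failure mode concrete: a dependent service reading configuration from KV saw the operation fail outright, not a missing key. A defensive read pattern, sketched below, falls back to a last-known-good copy when the KV call errors. This is an illustrative sketch, not Cloudflare's actual code; the <code>KVLike</code> interface and the cache are invented stand-ins for the Workers KV binding.</p>

```typescript
// Illustrative sketch (not Cloudflare's code): a KV read that falls back to
// a last-known-good value when the backend errors (e.g. surfaces an HTTP 401).
interface KVLike {
  get(key: string): Promise<string | null>;
}

// Per-isolate cache of the most recent successful read for each key.
const lastKnownGood = new Map<string, string>();

async function getWithFallback(kv: KVLike, key: string): Promise<string | null> {
  try {
    const value = await kv.get(key);
    if (value !== null) lastKnownGood.set(key, value); // refresh on success
    return value;
  } catch {
    // KV unavailable: serve the stale copy rather than failing the request.
    return lastKnownGood.get(key) ?? null;
  }
}
```

<p>A pattern like this would not have masked the incident entirely, but it narrows the blast radius for keys that have been read successfully at least once.</p>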
    <div>
      <h3>What was impacted</h3>
      <a href="#what-was-impacted">
        
      </a>
    </div>
    <p>A number of Cloudflare services depend on Workers KV for distributing configuration, routing information, static asset serving, and authentication state globally. These services instead received an HTTP 401 (Unauthorized) error when performing any get, put, delete, or list operation against a KV namespace.</p><p>Customers using the following Cloudflare products would have observed heightened error rates and/or would have been unable to access some or all features for the duration of the incident:</p>
<table>
<thead>
  <tr>
    <th><span>Product</span></th>
    <th><span>Impact</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Workers KV</span></td>
    <td><span>Customers with applications leveraging KV saw those applications fail for the duration of this incident, including both the KV API within Workers and the REST API.</span><br /><span>Workers applications not using KV were not impacted.</span></td>
  </tr>
  <tr>
    <td><span>Pages</span></td>
    <td><span>Applications hosted on Pages were unreachable for the duration of the incident and returned HTTP 500 errors to users. New Pages deployments also returned HTTP 500 errors to users for the duration.</span></td>
  </tr>
  <tr>
    <td><span>Access</span></td>
    <td><span>Users who were unauthenticated could not log in; any origin attempting to validate the JWT using the /certs endpoint would fail; any application with a device posture policy failed for all users.</span><br /><span>Existing logged-in sessions that did not use the /certs endpoint or posture checks were unaffected. Overall, a large percentage of existing sessions were still affected.</span></td>
  </tr>
  <tr>
    <td><span>WARP / Zero Trust</span></td>
    <td><span>Users were unable to register new devices or connect to resources subject to policies that enforce Device Posture checks or WARP Session timeouts.</span><br /><span>Devices already enrolled, resources not relying on device posture, or that had re-authorized outside of this window were unaffected.</span></td>
  </tr>
  <tr>
    <td><span>Images</span></td>
    <td><span>The Images API returned errors during the incident. Existing image delivery was not impacted.</span></td>
  </tr>
  <tr>
    <td><span>Cache Purge (single file)</span></td>
    <td><span>Single file purge was partially unavailable for the duration of the incident as some data centers could not access configuration data in KV. Data centers that had existing configuration data locally cached were unaffected.</span><br /><span>Other cache purge mechanisms, including purge by tag, were unaffected.</span></td>
  </tr>
  <tr>
    <td><span>Workers</span></td>
    <td><span>Uploading or editing Workers through the dashboard, wrangler or API returned errors during the incident. Deployed Workers were not impacted, unless they used KV. </span></td>
  </tr>
  <tr>
    <td><span>AI Gateway</span></td>
    <td><span>AI Gateway was not able to proxy requests for the duration of the incident.</span></td>
  </tr>
  <tr>
    <td><span>Waiting Room</span></td>
    <td><span>Waiting Room configuration is stored at the edge in Workers KV. Waiting Room configurations, and configuration changes, were unavailable and the service failed open.</span><br /><span>When access to KV was restored, some Waiting Room users would have experienced queuing as the service came back up. </span></td>
  </tr>
  <tr>
    <td><span>Turnstile and Challenge Pages</span></td>
    <td><span>Turnstile's JavaScript assets are stored in KV, and the entry point for Turnstile (api.js) could not be served. Clients accessing pages using Turnstile could not initialize the Turnstile widget and would have failed closed during the incident window.</span><br /><span>Challenge Pages (which products like Custom, Managed and Rate Limiting rules use) also use Turnstile infrastructure for presenting challenge pages to users under specific conditions, and would have blocked users who were presented with a challenge during that period.</span></td>
  </tr>
  <tr>
    <td><span>Cloudflare Dashboard</span></td>
    <td><span>Parts of the Cloudflare dashboard that rely on Turnstile and/or our internal feature flag tooling (which uses KV for configuration) returned errors to users for the duration. </span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Timeline</h3>
      <a href="#timeline">
        
      </a>
    </div>
    <p><i>All timestamps referenced are in Coordinated Universal Time (UTC).</i></p>
<table>
<thead>
  <tr>
    <th><span>Time</span></th>
    <th><span>Description</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>2023-10-30 18:58 UTC</span></td>
    <td><span>The Workers KV team began a progressive deployment of a new KV build to production.</span></td>
  </tr>
  <tr>
    <td><span>2023-10-30 19:29 UTC</span></td>
    <td><span>The internal progressive deployment API returned a staging build GUID to a call to list production builds. </span></td>
  </tr>
  <tr>
    <td><span>2023-10-30 19:40 UTC</span></td>
    <td><span>The progressive deployment API was used to continue rolling out the release. This routed a percentage of traffic to the wrong destination, triggering alerting and leading to the decision to roll back.</span></td>
  </tr>
  <tr>
    <td><span>2023-10-30 19:54 UTC</span></td>
    <td><span>Rollback via progressive deployment API attempted, traffic starts to fail at scale. </span><span>— IMPACT START —</span></td>
  </tr>
  <tr>
    <td><span>2023-10-30 20:15 UTC</span></td>
    <td><span>Cloudflare engineers manually edit (via break glass mechanisms) deployment routes to revert to last known good build for the majority of traffic.</span></td>
  </tr>
  <tr>
    <td><span>2023-10-30 20:29 UTC</span></td>
    <td><span>Workers KV error rates return to normal pre-incident levels, and impacted services recover within the following minute.</span></td>
  </tr>
  <tr>
    <td><span>2023-10-30 20:31 UTC</span></td>
    <td><span>Impact resolved </span><span>— IMPACT END — </span></td>
  </tr>
</tbody>
</table><p>As shown in the above timeline, there was a delay between the time we realized we were having an issue at 19:54 UTC and the time we were actually able to perform the rollback at 20:15 UTC.</p><p>This was caused by the fact that multiple tools within Cloudflare rely on Workers KV, including Cloudflare Access. Access leverages Workers KV as part of its request verification process. Because of this, we were unable to use our internal tooling and had to fall back to break-glass mechanisms to bypass it. As described below, we had not spent sufficient time testing the rollback mechanisms. We plan to harden these mechanisms moving forward.</p>
    <div>
      <h3>Resolution</h3>
      <a href="#resolution">
        
      </a>
    </div>
    <p>Cloudflare engineers manually switched (via break glass mechanism) the production route to the previous working version of Workers KV, which immediately eliminated the failing request path and subsequently resolved the issue with the Workers KV deployment.</p>
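<p>One way to picture the manual fix: if the deployment system records which build last served production cleanly, rolling back becomes a local pointer swap rather than a redeploy. The sketch below is hypothetical; the <code>RouteState</code> shape and function names are invented for illustration.</p>

```typescript
// Hypothetical sketch: track the active production build alongside the last
// known-good one, so a rollback needs no external systems to succeed.
interface RouteState {
  active: string;        // build identifier currently serving production
  lastKnownGood: string; // most recent build that served production cleanly
}

function promote(state: RouteState, newBuild: string): RouteState {
  // The outgoing active build becomes the rollback target.
  return { active: newBuild, lastKnownGood: state.active };
}

function rollback(state: RouteState): RouteState {
  return { active: state.lastKnownGood, lastKnownGood: state.lastKnownGood };
}
```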
    <div>
      <h3>Analysis</h3>
      <a href="#analysis">
        
      </a>
    </div>
    <p>Workers KV is a low-latency key-value store that allows users to store persistent data on Cloudflare's network, as close to the users as possible. This distributed key-value store is used in many applications, some of which are first-party Cloudflare products like Pages, Access, and Zero Trust.</p><p>The Workers KV team was progressively deploying a new release using a specialized deployment tool. The deployment mechanism contains a staging and a production environment, and utilizes a process where the production environment is upgraded to the new version at progressive percentages until all production environments are upgraded to the most recent production build. The deployment tool had a latent bug with how it returns releases and their respective versions. Instead of returning releases from a single environment, the tool returned a broader list of releases than intended, resulting in production and staging releases being returned together.</p><p>In this incident, the service was deployed and tested in staging. But because of the deployment automation bug, when promoting to production, a script that had been deployed to the staging account was incorrectly referenced instead of the pre-production version on the production account. As a result, the deployment mechanism pointed the production environment to a version that was not running anywhere in the production environment, effectively black-holing traffic.</p>
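<p>The class of bug described above can be sketched in a few lines. This is an invented illustration, not the actual deployment tool: the point is that a listing endpoint which forgets to filter by environment lets a staging identifier masquerade as a production one.</p>

```typescript
// Invented illustration of the listing bug: builds carry an environment tag,
// and the buggy listing returns every environment's builds mixed together.
interface Build {
  guid: string;
  env: "staging" | "production";
}

// Buggy shape: no filter, so staging GUIDs appear in a "production" listing.
function listProductionBuildsBuggy(all: Build[]): Build[] {
  return all;
}

// Fixed shape: filter to the requested environment before returning.
function listBuilds(all: Build[], env: Build["env"]): Build[] {
  return all.filter((b) => b.env === env);
}
```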
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YKI1LYglUMvikcDlkZF41/ff8ba4a17059a4139884ee71524a8209/image1.png" />
            
            </figure><p>When this happened, Workers KV became unreachable in production, as calls to the product were directed to a version that was not authorized for production access, returning an HTTP 401 error code. This caused dependent products that store key-value pairs in KV to fail, regardless of whether the key-value pair was cached locally or not.</p><p>Although automated alerting detected the issue immediately, there was a delay between the time we realized we were having an issue and the time we were actually able to perform the rollback. This was caused by the fact that multiple tools within Cloudflare rely on Workers KV, including Cloudflare Access. Access uses Workers KV as part of the verification process for user JWTs (JSON Web Tokens).</p><p>These tools include the dashboard, which was used to revert the change, and the authentication mechanism to access our continuous integration (CI) system. As Workers KV was down, so too were these services. Automatic rollbacks via our CI system had been successfully tested previously, but the authentication issues caused by the incident (Access relies on KV) made it impossible to access the secrets needed to roll back the deploy.</p><p>The fix ultimately was a manual change of the production build path to a previous, known-good state. This path was known to have been deployed and was the previous production build before the attempted deployment.</p>
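<p>A guard against this failure can be expressed simply: before routing production traffic to a build, check that the build was produced for the production environment. The sketch below is illustrative; these names are not Cloudflare's actual tooling.</p>

```typescript
// Illustrative pre-deployment check: refuse to route production traffic to a
// build tagged for a different environment. All names are invented.
interface Release {
  guid: string;
  env: "staging" | "production";
}

function validatePromotion(release: Release, target: Release["env"]): void {
  if (release.env !== target) {
    throw new Error(
      `refusing to deploy ${release.guid}: built for ${release.env}, target is ${target}`,
    );
  }
}
```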
    <div>
      <h3>Next steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>As more teams at Cloudflare have built on Workers, we have "organically" ended up in a place where Workers KV now underpins a tremendous amount of our products and services. This incident has continued to reinforce the need for us to revisit how we can reduce the blast radius of critical dependencies, which includes improving the sophistication of our deployment tooling, its ease-of-use for our internal teams, and product-level controls for these dependencies. We’re prioritizing these efforts to ensure that there is not a repeat of this incident.</p><p>This also reinforces the need for Cloudflare to improve the tooling, and the safety of said tooling, around progressive deployments of Workers applications internally and for customers.</p><p>This includes (but is not limited to) the below list of key follow-up actions (in no specific order) this quarter:</p><ol><li><p>Onboard KV deployments to standardized Workers deployment models which use automated systems for impact detection and recovery.</p></li><li><p>Ensure that the rollback process has access to a known good deployment identifier and that it works when Cloudflare Access is down.</p></li><li><p>Add pre-checks to deployments which will validate input parameters to ensure version mismatches don't propagate to production environments.</p></li><li><p>Harden the progressive deployment tooling to operate in a way that is designed for multi-tenancy. The current design assumes a single-tenant model.</p></li><li><p>Add additional validation to progressive deployment scripts to verify that the deployment matches the app environment (production, staging, etc.).</p></li></ol><p>Again, we’re extremely sorry this incident occurred, and we take its impact on our customers extremely seriously.</p> ]]></content:encoded>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">2RLr0QNONtOjY9xl3wKG1G</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Kris Evans</dc:creator>
        </item>
        <item>
            <title><![CDATA[Hardening Workers KV]]></title>
            <link>https://blog.cloudflare.com/workers-kv-restoring-reliability/</link>
            <pubDate>Wed, 02 Aug 2023 13:05:42 GMT</pubDate>
            <description><![CDATA[ A deep dive into the recent incidents relating to Workers KV, and how we’re going to fix them ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Over the last couple of months, Workers KV has suffered from a series of incidents, culminating in three back-to-back incidents during the week of July 17th, 2023. These incidents have directly impacted customers that rely on KV — and this isn’t good enough.</p><p>We’re going to share the work we have done to understand why KV has had such a spate of incidents and, more importantly, share in depth what we’re doing to dramatically improve how we deploy changes to KV going forward.</p>
    <div>
      <h3>Workers KV?</h3>
      <a href="#workers-kv">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/developer-platform/workers-kv/">Workers KV</a> — or just “KV” — is a key-value service for storing data: specifically, data with high read throughput requirements. It’s especially useful for user configuration, service routing, small assets and/or authentication data.</p><p>We use KV extensively inside Cloudflare too, with <a href="https://www.cloudflare.com/zero-trust/products/access/">Cloudflare Access</a> (part of our Zero Trust suite) and <a href="https://pages.cloudflare.com/">Cloudflare Pages</a> being some of our highest-profile internal customers. Both teams benefit from KV’s ability to keep regularly accessed key-value pairs close to where they’re accessed, as well as its ability to scale out horizontally without any need to become an expert in operating KV.</p><p>Given Cloudflare’s extensive use of KV, it wasn’t just external customers impacted. Our own internal teams felt the pain of these incidents, too.</p>
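<p>For readers unfamiliar with the API, the access pattern is a simple asynchronous get/put against a namespace binding. The sketch below substitutes an in-memory stand-in for the real binding (which the Workers runtime injects into a Worker's environment), so the shapes here are illustrative rather than the exact runtime types.</p>

```typescript
// In-memory stand-in for a KV namespace binding, to show the access pattern.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

function memoryKV(): KVLike {
  const store = new Map<string, string>();
  return {
    get: async (key) => store.get(key) ?? null,
    put: async (key, value) => { store.set(key, value); },
  };
}

// Typical hot-path use: fetch a per-user configuration blob, with a default.
async function loadUserConfig(kv: KVLike, userId: string): Promise<Record<string, unknown>> {
  const raw = await kv.get(`config:${userId}`);
  return raw ? JSON.parse(raw) : {};
}
```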
    <div>
      <h3>The summary of the post-mortem</h3>
      <a href="#the-summary-of-the-post-mortem">
        
      </a>
    </div>
    <p>Back in June 2023, we announced the move to a new architecture for KV, which is designed to address two major points of customer feedback we’ve had around KV: high latency for infrequently accessed keys (or a key accessed in different regions), and working to ensure the upper bound on KV’s eventual consistency model for writes is 60 seconds — not “mostly 60 seconds”.</p><p>At the time of that blog post, we’d already been testing this internally, including early access with our community champions and running a small % of production traffic to validate stability and performance expectations beyond what we could emulate within a staging environment.</p><p>However, in the weeks between mid-June and the series of incidents during the week of July 17th, we continued to increase the volume of traffic onto the new architecture. When we did this, we would encounter previously unseen problems (many of these customer-impacting) — then immediately roll back, fix bugs, and repeat. Internally, we’d begun to identify that this pattern was becoming unsustainable — each attempt to cut traffic over to the new architecture would surface errors or behaviors we hadn’t seen before and couldn’t immediately explain, and thus we would roll back and assess.</p>
    <div>
      <h3>The detail</h3>
      <a href="#the-detail">
        
      </a>
    </div>
    <p>One important piece of context to understand before we go into detail on the post-mortem: Workers KV is composed of two separate Workers scripts – internally referred to as the Storage Gateway Worker and SuperCache. SuperCache is an optional path in the Storage Gateway Worker workflow, and is the basis for KV's new (faster) backend (refer to the blog).</p><p>Here is a timeline of events:</p>
<table>
<thead>
  <tr>
    <th><span>Time</span></th>
    <th><span>Description</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>2023-07-17 21:52 UTC</span></td>
    <td><span>Cloudflare observes alerts showing 500 HTTP status codes in the MEL01 data-center (Melbourne, AU) and begins investigating.</span><br /><span>We also begin to see a small set of customers reporting HTTP 500s being returned via multiple channels. It is not immediately clear if this is a data-center-wide issue or KV specific, as there had not been a recent KV deployment, and the issue directly correlated with three data-centers being brought back online.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-18 00:09 UTC</span></td>
    <td><span>We disable the new backend for KV in MEL01 in an attempt to mitigate the issue (noting that there had not been a recent deployment or change to the % of users on the new backend).</span></td>
  </tr>
  <tr>
    <td><span>2023-07-18 05:42 UTC</span></td>
    <td><span>Investigating alerts showing 500 HTTP status codes in VIE02 (Vienna, AT) and JNB01 (Johannesburg, SA).</span></td>
  </tr>
  <tr>
    <td><span>2023-07-18 13:51 UTC</span></td>
    <td><span>The new backend is disabled globally after seeing issues in VIE02 (Vienna, AT) and JNB01 (Johannesburg, SA) data-centers, similar to MEL01. In both cases, they had also recently come back online after maintenance, but it remained unclear as to why KV was failing.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 19:12 UTC</span></td>
    <td><span>The new backend is inadvertently re-enabled while deploying the update due to a misconfiguration in a deployment script. </span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 19:33 UTC</span></td>
    <td><span>The new backend is (re-) disabled globally as HTTP 500 errors return.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 23:46 UTC</span></td>
    <td><span>Broken Workers script pipeline deployed as part of gradual rollout due to incorrectly defined pipeline configuration in the deployment script.</span><br /><span>Metrics begin to report that a subset of traffic is being black-holed.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 23:56 UTC</span></td>
    <td><span>Broken pipeline rolled back; error rates return to pre-incident (normal) levels.</span></td>
  </tr>
</tbody>
</table><p><i>All timestamps referenced are in Coordinated Universal Time (UTC).</i></p><p>We initially observed alerts showing 500 HTTP status codes in the MEL01 data-center (Melbourne, AU) at 21:52 UTC on July 17th, and began investigating. We also received reports from a small set of customers reporting HTTP 500s being returned via multiple channels. This correlated with three data centers being brought back online, and it was not immediately clear if it related to the data centers or was KV-specific — especially given there had not been a recent KV deployment. At 05:42 UTC on July 18th, we began investigating alerts showing 500 HTTP status codes in the VIE02 (Vienna) and JNB01 (Johannesburg) data-centers; while both had recently come back online after maintenance, it was still unclear why KV was failing. At 13:51 UTC, we made the decision to disable the new backend globally.</p><p>Following the incident on July 18th, we attempted to deploy an allow-list configuration to reduce the scope of impacted accounts. However, while attempting to roll out a change for the Storage Gateway Worker at 19:12 UTC on July 20th, an older configuration was progressed, causing the new backend to be enabled again and leading to the third event. As the team worked to fix this and deploy this configuration, they attempted to manually progress the deployment at 23:46 UTC, which resulted in the passing of a malformed configuration value that caused traffic to be sent to an invalid Workers script configuration.</p><p>After all deployments and the broken Workers configuration (pipeline) had been rolled back at 23:56 UTC on July 20th, we spent the following three days working to identify the root cause of the issue. We lacked <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> as KV's Worker script (responsible for much of KV's logic) was throwing an unhandled exception very early on in the request handling process. 
This was further exacerbated by prior work to disable error reporting in a disabled data-center due to the noise generated, which had previously resulted in logs being rate-limited upstream from our service.</p><p>This previous mitigation prevented us from capturing meaningful logs from the Worker, including identifying the exception itself, as an uncaught exception terminates request processing. This has raised the priority of improving how unhandled exceptions are reported and surfaced in a Worker (see Recommendations, below, for further details). This issue was exacerbated by the fact that KV's Worker script would fail to re-enter its "healthy" state when a Cloudflare data center was brought back online, as the Worker was mutating an environment variable perceived to be in request scope, but that was in global scope and persisted across requests. This effectively left the Worker “frozen” with the previous, invalid configuration for the affected locations.</p><p>Further, the introduction of a new progressive release process for Workers KV, designed to de-risk rollouts (as an action from a prior incident), prolonged the incident. We found a bug in the deployment logic that led to a broader outage due to an incorrectly defined configuration.</p><p>This configuration effectively caused us to drop a single-digit % of traffic until it was rolled back 10 minutes later. This code is untested at scale, and we need to spend more time hardening it before using it as the default path in production.</p><p>Additionally: although the root cause of the incidents was limited to three Cloudflare data-centers (Melbourne, Vienna, and Johannesburg), traffic across these regions still uses these data centers to route reads and writes to our system of record. Because these three data centers participate in KV’s new backend as regional tiers, a portion of traffic across the Oceania, Europe, and African regions was affected. 
Only a portion of keys from enrolled namespaces use any given data center as a regional tier in order to limit a single (regional) point of failure, so while traffic across <i>all</i> data centers in the region was impacted, nowhere was <i>all</i> traffic in a given data center affected.</p><p>We estimated the affected traffic to be 0.2-0.5% of KV's global traffic (based on our error reporting), however we observed some customers with error rates approaching 20% of their total KV operations. The impact was spread across KV namespaces and keys for customers within the scope of this incident.</p><p>Both KV’s high total traffic volume and its role as a critical dependency for many customers amplify the impact of even small error rates. In all cases, once the changes were rolled back, errors returned to normal levels and did not persist.</p>
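<p>The "frozen" configuration bug above is worth illustrating, because the trap is easy to fall into: in Workers, module-scope state is per-isolate and survives across requests. The reconstruction below is invented, not KV's actual code, but it shows how an override that looks request-scoped poisons every later request in the same isolate, and how copying into request scope avoids it.</p>

```typescript
// Invented reconstruction of the bug class: module-scope ("global") state in
// a Worker isolate persists across requests.
type Config = { backend: string };

const sharedConfig: Config = { backend: "primary" };

// Buggy handler: mutates the shared object "just for this request".
function handleBuggy(config: Config, override?: string): string {
  if (override) config.backend = override; // persists for all later requests!
  return config.backend;
}

// Fixed handler: copy into request scope before applying the override.
function handleFixed(config: Config, override?: string): string {
  const local: Config = { ...config };
  if (override) local.backend = override;
  return local.backend;
}
```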
    <div>
      <h3>Thinking about risks in building software</h3>
      <a href="#thinking-about-risks-in-building-software">
        
      </a>
    </div>
    <p>Before we dive into what we’re doing to significantly improve how we build, test, deploy and observe Workers KV going forward, we think there are lessons from the real world that can equally apply to how we improve the safety factor of the software we ship.</p><p>In traditional engineering and construction, there is an extremely common procedure known as a   “JSEA”, or <a href="https://en.wikipedia.org/wiki/Job_safety_analysis">Job Safety and Environmental Analysis</a> (sometimes just “JSA”). A JSEA is designed to help you iterate through a list of tasks, the potential hazards, and most importantly, the controls that will be applied to prevent those hazards from damaging equipment, injuring people, or worse.</p><p>One of the most critical concepts is the “hierarchy of controls” — that is, what controls should be applied to mitigate these hazards. In most practices, these are elimination, substitution, engineering, administration and personal protective equipment. Elimination and substitution are fairly self-explanatory: is there a different way to achieve this goal? Can we eliminate that task completely? Engineering and administration ask us whether there is additional engineering work, such as changing the placement of a panel, or using a horizontal boring machine to lay an underground pipe vs. opening up a trench that people can fall into.</p><p>The last and lowest on the hierarchy, is personal protective equipment (PPE). A hard hat can protect you from severe injury from something falling from above, but it’s a last resort, and it certainly isn’t guaranteed. In engineering practice, any hazard that <i>only</i> lists PPE as a mitigating factor is unsatisfactory: there must be additional controls in place. For example, instead of only wearing a hard hat, we should <i>engineer</i> the floor of scaffolding so that large objects (such as a wrench) cannot fall through in the first place. 
Further, if we require that all tools are attached to the wearer, then it significantly reduces the chance the tool can be dropped in the first place. These controls ensure that there are multiple degrees of mitigation — defense in depth — before your hard hat has to come into play.</p><p>Coming back to software, we can draw parallels between these controls: engineering can be likened to improving automation, gradual rollouts, and detailed metrics. Similarly, personal protective equipment can be likened to code review: useful, but code review cannot be the only thing protecting you from shipping bugs or untested code. Automation with linters, more robust testing, and new metrics are all vastly <i>safer</i> ways of shipping software.</p><p>As we spent time assessing where to improve our existing controls and how to put new controls in place to mitigate risks and improve the reliability (safety) of Workers KV, we took a similar approach: eliminating unnecessary changes, engineering more resilience into our codebase, automation, deployment tooling, and only then looking at human processes.</p>
    <div>
      <h3>How we plan to get better</h3>
      <a href="#how-we-plan-to-get-better">
        
      </a>
    </div>
    <p>Cloudflare is undertaking a larger, more structured review of KV's observability tooling, release infrastructure and processes to mitigate not only the contributing factors to the incidents within this report, but also recent incidents related to KV. Critically, we see tooling and automation as the most powerful mechanisms for preventing incidents, with process improvements designed to provide an additional layer of protection. Process improvements alone cannot be the only mitigation.</p><p>Specifically, we have identified and prioritized the below efforts as the most important next steps towards meeting our own availability SLOs, and (above all) making KV a service that customers building on Workers can rely on for storing configuration and service data in the hot path of their traffic:</p><ul><li><p>Substantially improve the existing observability tooling for unhandled exceptions, both for internal teams and customers building on Workers. This is especially critical for high-volume services, where traditional logging alone can be too noisy (and not specific enough) to aid in tracking down these cases. The existing ongoing work to land this will be prioritized further. In the meantime, we have directly addressed the specific uncaught exception within KV's primary Worker script.</p></li><li><p>Improve the safety around the mutation of environmental variables in a Worker, which currently operate at "global" (per-isolate) scope, but can appear to be per-request. Mutating an environmental variable in request scope mutates the value for all requests transiting that same isolate (in a given location), which can be unexpected. Changes here will need to keep backwards compatibility in mind.</p></li><li><p>Continue to expand KV’s test coverage to better address the above issues, in parallel with the aforementioned observability and tooling improvements, as an additional layer of defense. 
This includes allowing our test infrastructure to simulate traffic from any source data-center, which would have allowed us to more quickly reproduce the issue and identify a root cause.</p></li><li><p>Improvements to our release process, including how KV changes and releases are reviewed and approved, going forward. We will enforce a higher level of scrutiny for future changes, and where possible, reduce the number of changes deployed at once. This includes taking on new infrastructure dependencies, which will have a higher bar for both design and testing.</p></li><li><p>Additional logging improvements, including sampling, throughout our request handling process to improve troubleshooting &amp; debugging. A significant amount of the challenge related to these incidents was due to the lack of logging around specific requests (especially non-2xx requests).</p></li><li><p>Review and, where applicable, improve alerting thresholds surrounding error rates. As mentioned previously in this report, sub-% error rates at a global scale can have a severe negative impact on specific users and/or locations: ensuring that errors are caught and not lost in the noise is an ongoing effort.</p></li><li><p>Address maturity issues with our progressive deployment tooling for Workers, which is net-new (and will eventually be exposed to customers directly).</p></li></ul><p>This is not an exhaustive list: we're continuing to expand on preventative measures associated with these and other incidents. These changes will not only improve KV's reliability, but also the reliability of other services across Cloudflare that KV relies on, or that rely on KV.</p><p>We recognize that KV hasn’t lived up to our customers’ expectations recently. Because we rely on KV so heavily internally, we’ve felt that pain first hand as well. The work to fix the issues that led to this cycle of incidents is already underway. 
That work will not only improve KV’s reliability but also improve the reliability of any software written on the Cloudflare Workers developer platform, whether by our customers or by ourselves.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6sRjpTRuwGjPJmHgwHlg7u</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Charles Burnett</dc:creator>
            <dc:creator>Rob Sutter</dc:creator>
            <dc:creator>Kris Evans</dc:creator>
        </item>
    </channel>
</rss>