
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 16:53:06 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Instant Purge: invalidating cached content in under 150ms]]></title>
            <link>https://blog.cloudflare.com/instant-purge/</link>
            <pubDate>Tue, 24 Sep 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve built the fastest cache purge in the industry by offering a global purge latency for purge by tags, hostnames, and prefixes of less than 150ms on average (P50), representing a 90% improvement.  ]]></description>
            <content:encoded><![CDATA[ <p><sup>(part 3 of the Coreless Purge </sup><a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><sup>series</sup></a><sup>)</sup></p><p>Over the past 14 years, Cloudflare has evolved far beyond a Content Delivery Network (CDN), expanding its offerings to include a comprehensive <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Zero Trust</u></a> security portfolio, network security &amp; performance <a href="https://www.cloudflare.com/network-services/products/"><u>services</u></a>, application security &amp; performance <a href="https://www.cloudflare.com/application-services/products/"><u>optimizations</u></a>, and a powerful <a href="https://www.cloudflare.com/developer-platform/products/"><u>developer platform</u></a>. But customers also continue to rely on Cloudflare for caching and delivering static website content. CDNs are often judged on their ability to return content to visitors as quickly as possible. However, the speed at which content is removed from a CDN's global cache is just as crucial.</p><p>When customers frequently update content such as news, scores, or other data, it is essential that they <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">avoid serving stale, out-of-date information</a> from cache to visitors. This can lead to a <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">subpar experience</a> where users might see invalid prices or incorrect news. The goal is to remove the stale content and cache the new version of the file on the CDN as quickly as possible. And that starts by issuing a “purge.”</p><p>In May 2022, we released the <a href="https://blog.cloudflare.com/part1-coreless-purge/"><u>first part</u></a> of the series detailing our efforts to rebuild and publicly document the steps taken to improve the system our customers use to purge their cached content.
Our goal was to increase scalability and, importantly, the speed of our customers’ purges. In that initial post, we explained how our purge system worked and the design constraints we found when scaling. We outlined how, after more than a decade, we had outgrown our original purge system and started building an entirely new one, and we shared the purge performance benchmarks users experienced at the time. We set ourselves a lofty goal: to be the fastest.</p><p><b>Today, we’re excited to share that we’ve built the fastest cache purge in the industry.</b> We now offer a global purge latency for purge by tags, hostnames, and prefixes of less than 150ms on average (P50), representing a 90% improvement since May 2022. Users can now purge from anywhere (almost) <i>instantly</i>. By the time you hit enter on a purge request and your eyes blink, the file is removed from our global network — including data centers in <a href="https://www.cloudflare.com/network/"><u>330 cities</u></a> and <a href="https://blog.cloudflare.com/backbone2024/"><u>120+ countries</u></a>.</p><p>But that’s not all. It wouldn’t be Birthday Week if we stopped at just being the fastest purge. We are <b><i>also</i></b> announcing that we’re opening up more purge options to Free, Pro, and Business plans. Historically, only Enterprise customers had access to the full arsenal of <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/"><u>cache purge methods</u></a> supported by Cloudflare, such as purge by cache-tags, hostnames, and URL prefixes. As part of rebuilding our purge infrastructure, we’re not only fast, but also able to scale well beyond our current capacity. This enables more customers to use different types of purge. We are excited to offer these new capabilities to all plan types once we finish rolling out our new purge infrastructure, and expect to begin doing so in early 2025.</p>
    <div>
      <h3>Why cache and purge? </h3>
      <a href="#why-cache-and-purge">
        
      </a>
    </div>
    <p>Caching content is like pulling off a spectacular magic trick. It makes loading website content lightning-fast for visitors, slashes the load on origin servers and the cost to operate them, and enables global scalability with a single button press. But here's the catch: for the magic to work, caching requires predicting the future. The right content needs to be cached in the right data center, at the right moment when requests arrive, and in the ideal format. This guarantees astonishing performance for visitors and game-changing scalability for web properties.</p><p>Cloudflare helps make this caching magic trick easy. But regular users of our cache know that getting content into cache is only part of what makes it useful. When content is updated on an origin, it must also be updated in the cache. The beauty of caching is that it holds content until it expires or is evicted. To update the content, it must be actively removed and updated across the globe quickly and completely. If data centers are not uniformly updated or are updated at drastically different times, visitors risk getting different data depending on where they are located. This is where cache “purging” (also known as “cache invalidation”) comes in.</p>
    <div>
      <h3>One-to-many purges on Cloudflare</h3>
      <a href="#one-to-many-purges-on-cloudflare">
        
      </a>
    </div>
    <p>Back in <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>part 2 of the blog series</u></a>, we touched on how there are multiple ways of purging cache: by URL, cache-tag, hostname, URL prefix, and “purge everything”, and discussed a necessary distinction between purging by URL and the other four kinds of purge — referred to as flexible purges — based on the scope of their impact.</p><blockquote><p><i>The reason flexible purge isn’t also fully coreless yet is because it’s a more complex task than “purge this object”; flexible purge requests can end up purging multiple objects – or even entire zones – from cache. They do this through an entirely different process that isn’t coreless compatible, so to make flexible purge fully coreless we would have needed to come up with an entirely new multi-purge mechanism on top of redesigning distribution. We chose instead to start with just purge by URL, so we could focus purely on the most impactful improvements, revamping distribution, without reworking the logic a data center uses to actually remove an object from cache.</i></p></blockquote><p>We said our next steps included a redesign of flexible purges at Cloudflare, and today we’d like to walk you through the resulting system. But first, a brief history of flexible cache purges at Cloudflare and elaboration on why the old flexible purge system wasn’t “coreless compatible”.</p>
    <div>
      <h3>Just in time</h3>
      <a href="#just-in-time">
        
      </a>
    </div>
    <p>“Cache” within a given data center is made up of many machines, all contributing disk space to store customer content. When a request comes in for an asset, the URL and headers are used to calculate a <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>cache key</u></a>, which is the filename for that content on disk and also determines which machine in the data center that file lives on. The filename is the same for every data center, and every data center knows how to use it to find the right machine to cache the content. A <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/"><u>purge request</u></a> for a URL (plus headers) therefore contains everything needed to generate the cache key — the pointer to the response object on disk — and getting that key to every data center is the hardest part of carrying out the purge.</p><p>Purging content based on response properties has a different hardest part. If a customer wants to purge all content with the cache-tag “foo”, for example, there’s no way for us to generate all the cache keys that will point to the files with that cache-tag at request time. Cache-tags are response headers, and the decision of where to store a file is based on request attributes only. To find all files with matching cache-tags, we would need to look at every file on every cache disk on every machine in every data center. That’s thousands upon thousands of machines we would be scanning for each purge-by-tag request. There are ways to avoid actually continuously scanning all disks worldwide (foreshadowing!) but for our first implementation of our flexible purge system, we hoped to avoid the problem space altogether.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7n56ZDJwdBbaTNPJII6s2S/db998973efdca121536a932bc50dd842/image5.png" />
            
            </figure><p>An alternative approach to going to every machine and looking for all files that match some criteria to actively delete from disk was something we affectionately referred to as “lazy purge”. Instead of deleting all matching files as soon as we process a purge request, we wait until we get an end user request for one of those files. Whenever a request comes in, and we have the file in cache, we can compare the timestamp of any recent purge requests from the file owner to the insertion timestamp of the file we have on disk. If the purge timestamp is fresher than the insertion timestamp, we pretend we didn’t find the file on disk. For this to work, we needed to keep track of purge requests going back further than a data center’s maximum cache eviction age, to be sure that any file a customer targets with a matching flexible purge will either be <a href="https://developers.cloudflare.com/cache/concepts/retention-vs-freshness/#retention"><u>naturally evicted</u></a>, or forced to a cache MISS and refreshed from the origin. With this approach, we just needed a distribution and storage system for keeping track of flexible purges.</p>
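<p>The lazy purge check above can be sketched as follows (illustrative logic, not Cloudflare’s code): a cached file is treated as a MISS whenever a matching purge is newer than the file’s insertion timestamp:</p>

```javascript
// Sketch of "lazy purge" (illustrative): serve a cached file only if no
// matching purge request is newer than the file's insertion time.
function shouldServe(file, purgeLog) {
  // purgeLog: recent purge requests from the file's owner, e.g.
  // [{ tag: "foo", timestamp: 200 }] -- shapes are hypothetical.
  for (const purge of purgeLog) {
    const matches = file.tags.includes(purge.tag);
    if (matches && purge.timestamp > file.insertedAt) {
      return false; // pretend the file isn't on disk; refetch from origin
    }
  }
  return true; // cache HIT
}
```

<p>Because the log only needs entries newer than the oldest file that could still be on disk, purge history older than the maximum eviction age can be dropped.</p>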
    <div>
      <h3>Purge looks a lot like a nail</h3>
      <a href="#purge-looks-a-lot-like-a-nail">
        
      </a>
    </div>
    <p>At Cloudflare there is a lot of configuration data that needs to go “everywhere”: cache configuration, load balancer settings, firewall rules, host metadata — countless products, features, and services that depend on configuration data that’s managed through Cloudflare’s control plane APIs. This data needs to be accessible by every machine in every data center in our network. The vast majority of that data is distributed via <a href="https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale/"><u>a system introduced several years ago called Quicksilver</u></a>. The system works <i>very, very well</i> (sub-second p99 replication lag, globally). It’s extremely flexible and reliable, and reads are lightning fast. The team responsible for the system has done such a good job that Quicksilver has become a hammer that, when wielded, makes everything look like a nail… like flexible purges.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BFntOlvapsYYjYTcxMs7o/d73ad47b63b9d4893b46aeddc28a8698/image2.png" />
            
            </figure><p><sup><i>Core-based purge request entering a data center and getting backhauled to a core data center where Quicksilver distributes the request to all network data centers (hub and spoke).</i></sup></p><p>Our first version of the flexible purge system used Quicksilver’s spoke-hub distribution to send purges from a core data center to every other data center in our network. It took less than a second for flexible purges to propagate, and once in a given data center, the purge key lookups in the hot path to force cache misses were in the low hundreds of microseconds. We were quite happy with this system at the time, especially because of its simplicity. Using well-supported internal infrastructure meant we didn’t have to manage database clusters or worry about transport between data centers ourselves, since we got that “for free”. Flexible purge was a new feature set and the performance seemed pretty good, especially since we had no predecessor to compare against.</p>
    <div>
      <h3>Victims of our own success</h3>
      <a href="#victims-of-our-own-success">
        
      </a>
    </div>
    <p>Our first version of flexible purge didn’t start showing cracks for years, but eventually both our network and our customer base grew large enough that our system was reaching the limits of what it could scale to. As mentioned above, we needed to store purge requests beyond our maximum eviction age. Purge requests are relatively small and compress well, but thousands of customers using the API millions of times a day adds up to quite a bit of storage that Quicksilver needed on each machine to maintain purge history, and all of that storage cut into disk space we could otherwise be using to cache customer content. We also found the limits of Quicksilver in terms of how many writes per second it could handle without replication slowing down. We bought ourselves more runway by putting <a href="https://www.boltic.io/blog/kafka-queue#:~:text=Apache%20Kafka%20Queues%3F-,Apache%20Kafka%20queues,-are%20a%20powerful"><u>Kafka queues</u></a> in front of Quicksilver to buffer writes and smooth out traffic spikes, and by increasing batching, but all of those protections introduced latency. We knew we needed to come up with a solution without such a strong correlation between usage and operational costs.</p><p>Another pain point exposed by our growing user base, which we mentioned in <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>Part 2</u></a>, was the excessive round trip times experienced by customers furthest away from our core data centers. A purge request sent by a customer in Australia would have to cross the Pacific Ocean and back before local customers would see the new content.</p><p>To summarize, three issues were plaguing us:</p><ol><li><p>Latency corresponding to how far a customer was from the centralized ingest point.</p></li><li><p>Latency due to the bottleneck for writes at the centralized ingest point.</p></li><li><p>Storage needs in all data centers correlating strongly with throughput demand.</p></li></ol>
    <div>
      <h3>Coreless purge proves useful</h3>
      <a href="#coreless-purge-proves-useful">
        
      </a>
    </div>
    <p>The first two issues affected all types of purge. The spoke-hub distribution model was problematic for purge-by-URL just as much as it was for flexible purges. So we embarked on the path to peer-to-peer distribution for purge-by-URL to address the latency and throughput issues, and the results of that project were good enough that we wanted to propagate flexible purges through the same system. But doing so meant we’d have to replace our use of Quicksilver; it was so good at what it does (fast/reliable replication network-wide, extremely fast/high read throughput) in large part because of the core assumption of spoke-hub distribution it could optimize for. That meant there was no way to write to Quicksilver from “spoke” data centers, and we would need to find another storage system for our purges.</p>
    <div>
      <h3>Flipping purge on its head</h3>
      <a href="#flipping-purge-on-its-head">
        
      </a>
    </div>
    <p>We decided that if we were going to replace our storage system, we should dig into exactly what our needs were and find the best fit. It was time to revisit some of our oldest conclusions to see if they still held true, and one of the earlier ones was that proactively purging content from disk would be difficult to do efficiently given our storage layout.</p><p>But was that true? Or could we make active cache purge fast and efficient (enough)? What would it take to quickly find files on disk based on their metadata? “Indexes!” you’re probably screaming, and for good reason. Indexing files’ hostnames, cache-tags, and URLs would undoubtedly make querying for relevant files trivial, but a few aspects of our network make it less straightforward.</p><p>Cloudflare has hundreds of data centers that see trillions of unique files, so any kind of global index — even ignoring the networking hurdles of aggregation — would suffer the same bottlenecking issues as our previous spoke-hub system. Scoping the indices to the data center level would be better, but data centers vary in size, up to several hundred machines. Managing a database cluster in each data center scaled to the appropriate size for the aggregate traffic of all the machines was a daunting proposition; it could easily end up being enough work on its own for a separate team, not something we should take on as a side hustle.</p><p>The next step down in scope was an index per machine. Indexing on the same machine as the cache proxy had some compelling upsides: </p><ul><li><p>The proxy could talk to the index over <a href="https://en.wikipedia.org/wiki/Unix_domain_socket"><u>UDS</u></a> (Unix domain sockets), avoiding networking complexities in the hottest paths.</p></li><li><p>As a sidecar service, the index just had to be running anytime the machine was accepting traffic. 
If a machine died, so would the index, but since the machine’s cached content was gone with it, that didn’t matter, so there wasn’t any need to deal with the complexities of distributed databases.</p></li><li><p>While data centers were frequently adding and removing machines, machines weren’t frequently adding and removing disks. An index could reasonably count on its maximum size being predictable and constant based on overall disk size.</p></li></ul><p>But we wanted to make sure it was feasible on our machines. We analyzed representative cache disks from across our fleet, gathering data like the number of cached assets per terabyte and the average number of cache-tags per asset. We looked at cache MISS, REVALIDATED, and EXPIRED rates to estimate the required write throughput.</p><p>After conducting a thorough analysis, we were convinced the design would work. With a clearer understanding of the anticipated read/write throughput, we started looking at databases that could meet our needs. After benchmarking several relational and non-relational databases, we ultimately chose <a href="https://github.com/facebook/rocksdb"><u>RocksDB</u></a>, a high-performance embedded key-value store. We found that with proper tuning, it could be extremely good at the types of queries we needed.</p>
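<p>The per-machine index can be sketched like this (illustrative only: the real CacheDB is a Rust service on RocksDB, and the key layout here is hypothetical; a plain in-memory key set stands in for RocksDB’s ordered key space, and a prefix scan stands in for its range queries):</p>

```javascript
// Sketch of a per-machine metadata index (illustrative). Each cached asset
// is filed under every dimension a flexible purge can query: hostname and
// cache-tags here, using keys like "tag:<tag>:<cacheKey>".
class MetadataIndex {
  constructor() {
    this.keys = new Set();
  }

  // Index one cached asset under each of its lookup dimensions.
  insert(cacheKey, { hostname, tags }) {
    this.keys.add(`host:${hostname}:${cacheKey}`);
    for (const t of tags) this.keys.add(`tag:${t}:${cacheKey}`);
  }

  // Prefix scan: find all cache keys filed under a given prefix,
  // e.g. "tag:foo:" -- the query a purge-by-tag needs.
  scan(prefix) {
    return [...this.keys]
      .filter((k) => k.startsWith(prefix))
      .map((k) => k.slice(prefix.length));
  }
}
```

<p>A purge-by-tag then becomes a single prefix scan followed by deletion of the returned cache keys, instead of a walk over every file on disk.</p>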
    <div>
      <h3>Putting it all together</h3>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>And so CacheDB was born — a service written in Rust and built on RocksDB, which operates on each machine alongside the cache proxy to manage the indexing and purging of cached files. We integrated the cache proxy with CacheDB to ensure that indices are stored whenever a file is cached or updated, and they’re deleted when a file is removed due to eviction or purging. In addition to indexing data, CacheDB maintains a local queue for buffering incoming purge operations. A background process reads purge operations from the queue, looks up all matching files using the indices, and deletes them from disk. Once all matched files for an operation have been deleted, the process clears the indices and removes the purge operation from the queue.</p><p>To further optimize the speed of purges taking effect, the cache proxy was updated to check with CacheDB — similar to the previous lazy purge approach — when a cache HIT occurs before returning the asset. CacheDB does a quick scan of its local queue to see if there are any pending purge operations that match the asset in question, dictating whether the cache proxy should respond with the cached file or fetch a new copy. This means a purge prevents the cache proxy from returning a matching cached file as soon as the purge reaches the machine, even if millions of files correspond to a purge key and it takes a while to actually delete them all from disk.</p>
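<p>The HIT-path check can be sketched as follows (illustrative only; the operation shapes are hypothetical): before serving a cached asset, scan the local queue of pending purge operations so a purge takes effect as soon as it reaches the machine, even while background deletion is still in progress:</p>

```javascript
// Sketch of the HIT-path check against the pending purge queue
// (illustrative, not CacheDB's code).
function isPendingPurge(asset, pendingPurges) {
  // pendingPurges: operations not yet fully applied to disk, e.g.
  // [{ kind: "tag", value: "foo" }] -- shapes are hypothetical.
  return pendingPurges.some((op) => {
    switch (op.kind) {
      case "tag":      return asset.tags.includes(op.value);
      case "hostname": return asset.hostname === op.value;
      case "prefix":   return asset.url.startsWith(op.value);
      default:         return false;
    }
  });
}
// If this returns true, the proxy treats the lookup as a MISS and
// refetches from the origin, regardless of what is still on disk.
```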
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5W0ZIGBBbG5Cnc3DSbCPGT/a1572b0b67d844d4e5b7cc7899d320b1/image3.png" />
            
            </figure><p><sup><i>Coreless purge using CacheDB and Durable Objects to distribute purges without needing to first stop at a core data center.</i></sup></p><p>The last piece to change was the distribution pipeline, updated to broadcast flexible purges not just to every data center, but to the CacheDB service running on every machine. We opted for CacheDB to handle the last-mile, machine-to-machine fan out within a data center, using <a href="https://www.consul.io/"><u>Consul</u></a> to keep each machine informed of the health of its peers. This choice let us keep the Workers largely the same for purge-by-URL (more <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>here</u></a>) and flexible purge handling, despite the difference in termination points.</p>
    <div>
      <h3>The payoff</h3>
      <a href="#the-payoff">
        
      </a>
    </div>
    <p>Our new approach trims the long tail of the lazy purge, reducing storage needs by 10x. Better yet, we can now delete purged content immediately instead of waiting for a lazy purge to take effect or for the content to expire. This newfound storage will improve cache retention on disk for all users, leading to improved cache HIT ratios and reduced egress from your origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6B0kVX9Q6qA2JshmcTZSrt/80a845904adf8ba69c121bb54923959e/image1.png" />
            
            </figure><p><sup><i>The shift from lazy content purging (</i></sup><sup><i><u>left</u></i></sup><sup><i>) to the new Coreless Purge architecture allows us to actively delete content (</i></sup><sup><i><u>right</u></i></sup><sup><i>). This helps reduce storage needs and increase cache retention times across our service.</i></sup></p><p>With the new coreless cache purge, we can now get a purge request into any data center, distribute the keys to purge, and instantly purge the content from the cache database. This all occurs in less than 150 milliseconds at P50 for tags, hostnames, and URL prefixes, covering all <a href="https://www.cloudflare.com/network/"><u>330 cities</u></a> in <a href="https://blog.cloudflare.com/backbone2024/"><u>120+ countries</u></a>.</p>
    <div>
      <h3>Benchmarks</h3>
      <a href="#benchmarks">
        
      </a>
    </div>
    <p>To measure Instant Purge, we wanted to make sure that we were looking at real user metrics — that these were purges customers were actually issuing and performance that was representative of what we were seeing under real conditions, rather than marketing numbers.</p><p>The time we measure starts when a request enters the local data center and ends when the purge has been executed in every data center. When the local data center receives the request, one of the first things we do is add a timestamp to the purge request. When all data centers have completed the purge action, another timestamp is added to “stop the clock.” Each purge request generates this performance data, which is then sent to a database for us to measure the appropriate quantiles and to help us understand how we can improve further.</p><p>In August 2024, we took this purge performance data and segmented it by region based on where the local data center receiving the request was located.</p><table><tr><td><p><b>Region</b></p></td><td><p><b>P50 Aug 2024 (Coreless)</b></p></td><td><p><b>P50 May 2022 (Core-based)</b></p></td><td><p><b>Improvement</b></p></td></tr><tr><td><p>Africa</p></td><td><p>303ms</p></td><td><p>1,420ms</p></td><td><p>78.66%</p></td></tr><tr><td><p>Asia Pacific Region (APAC)</p></td><td><p>199ms</p></td><td><p>1,300ms</p></td><td><p>84.69%</p></td></tr><tr><td><p>Eastern Europe (EEUR)</p></td><td><p>140ms</p></td><td><p>1,240ms</p></td><td><p>88.70%</p></td></tr><tr><td><p>Eastern North America (ENAM)</p></td><td><p>119ms</p></td><td><p>1,080ms</p></td><td><p>88.98%</p></td></tr><tr><td><p>Oceania</p></td><td><p>191ms</p></td><td><p>1,160ms</p></td><td><p>83.53%</p></td></tr><tr><td><p>South America (SA)</p></td><td><p>196ms</p></td><td><p>1,250ms</p></td><td><p>84.32%</p></td></tr><tr><td><p>Western Europe (WEUR)</p></td><td><p>131ms</p></td><td><p>1,190ms</p></td><td><p>88.99%</p></td></tr><tr><td><p>Western North America 
(WNAM)</p></td><td><p>115ms</p></td><td><p>1,000ms</p></td><td><p>88.5%</p></td></tr><tr><td><p><b>Global</b></p></td><td><p><b>149ms</b></p></td><td><p><b>1,570ms</b></p></td><td><p><b>90.5%</b></p></td></tr></table><p><sup>Note: Global latency numbers on the core-based measurements (May 2022) may be larger than the regional numbers because it represents all of our data centers instead of only a regional portion, so outliers and retries might have an outsized effect.</sup></p>
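<p>The quantile computation described above can be sketched as follows (a minimal illustration; the sample values are made up): each purge request yields one latency sample, the difference between its ingest and completion timestamps, and the samples are aggregated into percentiles such as P50:</p>

```javascript
// Minimal nearest-rank quantile sketch (illustrative): each purge request
// contributes one latency sample (completion timestamp minus ingest
// timestamp), and percentiles like P50 summarize the distribution.
function quantile(samples, q) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(q * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Hypothetical per-purge latencies in milliseconds.
const latencies = [110, 120, 140, 150, 300];
const p50 = quantile(latencies, 0.5); // median latency of the samples
```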
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We are currently wrapping up the roll-out of the last throughput changes, which allow us to scale purge requests efficiently. As that happens, we will revise our rate limits and open up purge by tag, hostname, and prefix to all plan types! We expect to begin rolling out the additional purge types to all plans and users in early <b>2025</b>.</p><p>In addition, in the process of implementing this new approach, we have identified improvements that will shave a few more milliseconds off our single-file purge. Currently, single-file purges have a P50 of 234ms. However, we want to, and can, bring that number down to below 200ms.</p><p>If you want to come work on the world's fastest purge system, check out <a href="http://www.cloudflare.com/careers">our open positions</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
 ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">11EWaw0wCUNwPTM30w7oUN</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Tim Kornhammar</dc:creator>
            <dc:creator>Connor Harwood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Browser Rendering API GA, rolling out Cloudflare Snippets, SWR, and bringing Workers for Platforms to all users]]></title>
            <link>https://blog.cloudflare.com/browser-rendering-api-ga-rolling-out-cloudflare-snippets-swr-and-bringing-workers-for-platforms-to-our-paygo-plans/</link>
            <pubDate>Fri, 05 Apr 2024 13:01:00 GMT</pubDate>
            <description><![CDATA[ Browser Rendering API is now available to all paid Workers customers with improved session management ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5kiBNiPfz0fqooxige54uO/378848632e2d4633c9f41678f1cff82c/Workers-for-Platforms-now-available-for-PAYGO.png" />
            
            </figure>
    <div>
      <h3>Browser Rendering API is now available to all paid Workers customers with improved session management</h3>
      <a href="#browser-rendering-api-is-now-available-to-all-paid-workers-customers-with-improved-session-management">
        
      </a>
    </div>
    <p>In May 2023, we <a href="/browser-rendering-open-beta">announced</a> the open beta program for the <a href="https://developers.cloudflare.com/browser-rendering/">Browser Rendering API</a>. Browser Rendering allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.</p><p>At the same time, we launched a version of the <a href="https://developers.cloudflare.com/browser-rendering/platform/puppeteer/">Puppeteer library</a> that works with Browser Rendering. With that, developers can use a familiar API on top of Cloudflare Workers to create all sorts of workflows, such as taking screenshots of pages or automated software testing.</p><p>Today, we are taking Browser Rendering one step further, moving it out of beta and making it available to all paid Workers plans. Furthermore, we are enhancing our API and introducing a new feature that we've been discussing for a long time in the open beta community: session management.</p>
    <div>
      <h3>Session Management</h3>
      <a href="#session-management">
        
      </a>
    </div>
    <p>Session management allows developers to reuse previously opened browsers across Workers scripts. Reusing browser sessions means you don't need to instantiate a new browser for every request and every task, drastically increasing performance and lowering costs.</p><p>Before, to keep a browser instance alive and reuse it, you'd have to implement complex code using Durable Objects. Now, we've simplified that for you by keeping your browsers running in the background and extending the Puppeteer API with new <a href="https://developers.cloudflare.com/browser-rendering/platform/puppeteer/#session-management">session management methods</a> that give you access to all of your running sessions, activity history, and active limits.</p><p>Here’s how you can list your active sessions:</p>
            <pre><code>const sessions = await puppeteer.sessions(env.RENDERING);
console.log(sessions);
// Example output:
// [
//    {
//       "connectionId": "2a2246fa-e234-4dc1-8433-87e6cee80145",
//       "connectionStartTime": 1711621704607,
//       "sessionId": "478f4d7d-e943-40f6-a414-837d3736a1dc",
//       "startTime": 1711621703708
//    },
//    {
//       "sessionId": "565e05fb-4d2a-402b-869b-5b65b1381db7",
//       "startTime": 1711621703808
//    }
// ]</code></pre>
            <p>We have added a Worker script <a href="https://developers.cloudflare.com/browser-rendering/get-started/reuse-sessions/#4-code">example on how to use session management</a> to the Developer Documentation.</p>
    <div>
      <h3>Analytics and logs</h3>
      <a href="#analytics-and-logs">
        
      </a>
    </div>
    <p>Observability is an essential part of any Cloudflare product. You can find detailed analytics and logs of your Browser Rendering usage in the dashboard under your account's Workers &amp; Pages section.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jlU3vFhUa0fXCF7lKYq73/9e63676a0dc7bc54da3ab4cf5efd85dd/image4-10.png" />
            
            </figure><p>Browser Rendering is now available to all customers with a paid Workers plan. Each account is <a href="https://developers.cloudflare.com/browser-rendering/platform/limits/">limited</a> to running two new browsers per minute and two concurrent browsers at no cost during this period. Check our <a href="https://developers.cloudflare.com/browser-rendering/get-started/">developers page</a> to get started.</p>
    <div>
      <h3>We are rolling out access to Cloudflare Snippets</h3>
      <a href="#we-are-rolling-out-access-to-cloudflare-snippets">
        
      </a>
    </div>
    <p>Powerful, programmable, and free of charge, Snippets are the best way to perform complex HTTP request and response modifications on Cloudflare. What was once too advanced to achieve using Rules products is now possible with Snippets. Since the initial <a href="/snippets-announcement">announcement</a> during Developer Week 2022, the promise of extending out-of-the-box Rules functionality by writing simple JavaScript code has kept the Cloudflare community excited.</p><p>During the first three months of 2024 alone, traffic going through Snippets increased more than 7x, from an average of 2,200 requests per second in early January to more than 17,000 in March.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XCqU9QOeEcg9KoaOShf4x/94bba253b62bf832126baf20b18f5cb4/image2-14.png" />
            
            </figure><p>However, instead of opening the floodgates and letting millions of Cloudflare users in to test (and potentially break) Snippets in the most unexpected ways, we are going to pace ourselves and opt for a phased rollout, much like the newly released <a href="/workers-production-safety">Gradual Rollouts</a> for Workers.</p><p>In the next few weeks, 5% of Cloudflare users will start seeing “Snippets” under the Rules tab of the zone-level menu in their dashboard. If you happen to be part of the first 5%, snip into action and try out how fast and powerful Snippets are even for <a href="/cloudflare-snippets-alpha#what-can-you-build-with-cloudflare-snippets">advanced use cases</a> like dynamically changing the date in headers or A/B testing leveraging the <code>Math.random()</code> function. Whatever you use Snippets for, just keep one thing in mind: this is still an alpha, so please do not use Snippets for production traffic just yet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/45mBS7TWL4BL6skGoXRDn3/a99f87e1d885e457b1bb35af5773fdb2/Screenshot-2024-04-04-at-6.12.42-PM.png" />
            
            </figure><p>Until then, keep an eye out for the new Snippets tab in the Cloudflare dashboard, and learn more about how powerful and flexible Snippets are in the <a href="https://developers.cloudflare.com/rules/snippets">developer documentation</a>.</p>
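<p>To make the A/B testing use case above concrete, a Snippet along these lines could split traffic with <code>Math.random()</code>. This is a sketch, not code from the alpha: the <code>ab-bucket</code> cookie name and the 50/50 split are assumptions for the example.</p>

```javascript
// Sticky 50/50 A/B split: honor an existing bucket cookie, otherwise
// assign one at random. The cookie name "ab-bucket" is illustrative.
function chooseBucket(existingBucket) {
  if (existingBucket === "a" || existingBucket === "b") return existingBucket;
  return Math.random() < 0.5 ? "a" : "b";
}

// Hypothetical Snippet body (this object is the default export in a
// real Snippet: `export default snippet;`).
const snippet = {
  async fetch(request) {
    const cookies = request.headers.get("Cookie") || "";
    const match = cookies.match(/ab-bucket=([ab])/);
    const bucket = chooseBucket(match ? match[1] : null);
    const response = await fetch(request);
    // Re-create the response so its headers are mutable, then persist
    // the bucket so the visitor stays in the same variant.
    const modified = new Response(response.body, response);
    modified.headers.append("Set-Cookie", `ab-bucket=${bucket}; Path=/`);
    return modified;
  },
};
```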
    <div>
      <h3>Coming soon: asynchronous revalidation with stale-while-revalidate</h3>
      <a href="#coming-soon-asynchronous-revalidation-with-stale-while-revalidate">
        
      </a>
    </div>
    <p>One of the features most requested by our customers is asynchronous revalidation with the stale-while-revalidate (SWR) cache directive, and we will be bringing this to you in the second half of 2024. This functionality will be available by design as part of our new CDN architecture, which is being built in Rust with performance and memory safety top of mind.</p><p>Currently, when a client requests a resource, such as a web page or an image, Cloudflare checks to see if the asset is in cache and provides a cached copy if available. If the file is not in the cache or has expired and become stale, Cloudflare connects to the origin server to check for a fresh version of the file and forwards this fresh version to the end user. This wait adds latency to these requests and impacts performance.</p><p>Stale-while-revalidate is a cache directive that allows the expired or stale version of the asset to be served to the end user while Cloudflare simultaneously checks the origin for a fresher version of the resource. If an updated version exists, the origin forwards it to Cloudflare, updating the cache in the process. This mechanism allows the client to receive a response quickly from the cache while ensuring that it always has access to the most up-to-date content. Stale-while-revalidate strikes a balance between serving content efficiently and ensuring its freshness, resulting in improved performance and a smoother user experience.</p><p>Customers who want to join our beta testers and “cache” in on the fun can register <a href="https://forms.gle/EEFDtB97sLG5G5Ui9">here</a>, and we will let you know when the feature is ready for testing!</p>
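<p>For reference, an origin opts into this behavior through the standard <code>Cache-Control</code> response header (RFC 5861). A minimal sketch, with illustrative values:</p>

```javascript
// Sketch: a response that is fresh for 600 seconds; for a further 30
// seconds a stale copy may be served while the cache revalidates it
// against the origin in the background.
const response = new Response("hello", {
  headers: { "Cache-Control": "max-age=600, stale-while-revalidate=30" },
});
console.log(response.headers.get("Cache-Control"));
// → "max-age=600, stale-while-revalidate=30"
```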
    <div>
      <h3>Coming on April 16, 2024: Workers for Platforms for our pay-as-you-go plan</h3>
      <a href="#coming-on-april-16-2024-workers-for-platforms-for-our-pay-as-you-go-plan">
        
      </a>
    </div>
    <p>Today, we’re excited to share that on April 16th, Workers for Platforms will be available to all developers through our new $25 pay-as-you-go plan!</p><p>Workers for Platforms is changing the way we build software – it gives you the ability to embed personalization and customization directly into your product. With Workers for Platforms, you can deploy custom code on behalf of your users or let your users directly deploy their own code to your platform, without you or your users having to manage any infrastructure. You can use Workers for Platforms with all the exciting announcements that have come out this Developer Week – it supports all the <a href="https://developers.cloudflare.com/workers/configuration/bindings/">bindings</a> that come with Workers (including <a href="https://developers.cloudflare.com/workers-ai/">Workers AI</a>, <a href="https://developers.cloudflare.com/d1/">D1</a> and <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a>) as well as <a href="https://developers.cloudflare.com/workers/languages/python/">Python Workers</a>.</p><p>Here’s what some of our customers – ranging from enterprises to startups – are building on Workers for Platforms:</p><ul><li><p><a href="https://www.shopify.com/plus/solutions/headless-commerce">Shopify Oxygen</a> is a hosting platform for their Remix-based eCommerce framework Hydrogen, and it’s built on Workers for Platforms! The Hydrogen/Oxygen combination gives Shopify merchants control over their buyer experience without the restrictions of generic storefront templates.</p></li><li><p><a href="https://grafbase.com/">Grafbase</a> is a data platform for developers to create a serverless GraphQL API that unifies data sources across a business under one endpoint. They use Workers for Platforms to give their developers the control and flexibility to deploy their own code written in JavaScript/TypeScript or WASM.</p></li><li><p><a href="https://www.triplit.dev/">Triplit</a> is an open-source database that syncs data between server and browser in real time. It allows users to build low-latency, real-time applications with features like relational querying, schema management, and server-side storage built in. Their query and sync engine is built on top of Durable Objects, and they’re using Workers for Platforms to allow their customers to package custom JavaScript alongside their Triplit DB instance.</p></li></ul>
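<p>Deploying code on behalf of users centers on a dynamic dispatch Worker: it looks up the user's Worker by name in a dispatch namespace and forwards the request to it. A minimal sketch, assuming a dispatch namespace binding named <code>DISPATCH_NAMESPACE</code> and a subdomain-based routing scheme (both illustrative):</p>

```javascript
// Sketch: derive a customer name from the request hostname, e.g.
// "acme.example.com" → "acme". The routing scheme is an assumption.
function customerFromHostname(hostname) {
  return hostname.split(".")[0];
}

// Hypothetical dynamic dispatch Worker (the default export in a real
// Worker): routes each request to the matching user Worker.
const dispatcher = {
  async fetch(request, env) {
    const customer = customerFromHostname(new URL(request.url).hostname);
    try {
      // Look up the customer's Worker in the dispatch namespace binding.
      const userWorker = env.DISPATCH_NAMESPACE.get(customer);
      return await userWorker.fetch(request);
    } catch (e) {
      return new Response("Customer Worker not found", { status: 404 });
    }
  },
};
```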
    <div>
      <h3>Tools for observability and platform level controls</h3>
      <a href="#tools-for-observability-and-platform-level-controls">
        
      </a>
    </div>
    <p>Workers for Platforms doesn’t just allow you to deploy Workers to your platform – we also know how important it is to have observability and control over your users’ Workers. We have a few solutions that help with this:</p><ul><li><p><a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/custom-limits/">Custom Limits</a>: Set CPU time or subrequest caps on your users’ Workers. Can be used to set limits in order to control your costs on Cloudflare and/or shape your own pricing and packaging model. For example, if you run a freemium model on your platform, you can lower the CPU time limit for customers on your free tier.</p></li><li><p><a href="https://developers.cloudflare.com/workers/observability/logging/tail-workers/">Tail Workers</a>: Tail Worker events contain metadata about the Worker, console.log() messages, and capture any unhandled exceptions. They can be used to provide your developers with live logging in order to monitor for errors and troubleshoot in real time.</p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/outbound-workers/">Outbound Workers</a>: Get visibility into all outgoing requests from your users’ Workers. Outbound Workers sit between user Workers and the fetch() requests they make, so you get full visibility over the request before it’s sent out to the Internet.</p></li></ul>
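<p>As an illustration of the Tail Worker pattern described above, a Tail Worker along these lines could surface your users' errors. The <code>logs</code> and <code>exceptions</code> event fields follow the Tail Workers documentation; the summary helper and what you do with the errors are our own sketch.</p>

```javascript
// Count log lines and unhandled exceptions across a batch of tail events.
function summarize(events) {
  let logs = 0;
  let exceptions = 0;
  for (const event of events) {
    logs += (event.logs || []).length;
    exceptions += (event.exceptions || []).length;
  }
  return { logs, exceptions };
}

// Hypothetical Tail Worker (the default export in a real Worker). It
// receives batches of events from the producer Workers it is attached to.
const tailWorker = {
  async tail(events) {
    const { logs, exceptions } = summarize(events);
    console.log(`received ${logs} log lines, ${exceptions} exceptions`);
    for (const event of events) {
      for (const err of event.exceptions || []) {
        // Forward to your alerting pipeline here; we just log it.
        console.error(err.name, err.message);
      }
    }
  },
};
```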
    <div>
      <h3>Pricing</h3>
      <a href="#pricing">
        
      </a>
    </div>
    <p>We wanted to make sure that Workers for Platforms was affordable for hobbyists, solo developers, and indie developers. Workers for Platforms is part of a new $25 pay-as-you-go plan, and it includes the following:</p>
<table>
<thead>
  <tr>
    <th></th>
    <th><span>Included Amounts</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Requests</span></td>
    <td><span>20 million requests/month </span><br /><span>+$0.30 per additional million</span></td>
  </tr>
  <tr>
    <td><span>CPU time</span></td>
    <td><span>60 million CPU milliseconds/month</span><br /><span>+$0.02 per additional million CPU milliseconds</span></td>
  </tr>
  <tr>
    <td><span>Scripts</span></td>
    <td><span>1000 scripts</span><br /><span>+$0.02 per additional script/month</span></td>
  </tr>
</tbody>
</table>
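<p>To make the overage math concrete, here is a small calculator based on the table above. It assumes, as the table implies, that the $25 base covers the included amounts and that the overage rates apply only beyond them.</p>

```javascript
// Estimate a monthly bill on the $25 pay-as-you-go plan from the
// included amounts and overage rates in the table above.
function estimateMonthlyCost({ requests, cpuMs, scripts }) {
  const base = 25;
  const reqOverage = (Math.max(0, requests - 20_000_000) / 1_000_000) * 0.3;
  const cpuOverage = (Math.max(0, cpuMs - 60_000_000) / 1_000_000) * 0.02;
  const scriptOverage = Math.max(0, scripts - 1000) * 0.02;
  return base + reqOverage + cpuOverage + scriptOverage;
}

// Example: 50M requests, 100M CPU ms, 1,200 scripts in a month.
// 30M extra requests → $9.00; 40M extra CPU ms → $0.80;
// 200 extra scripts → $4.00; so about $38.80 in total.
console.log(estimateMonthlyCost({
  requests: 50_000_000,
  cpuMs: 100_000_000,
  scripts: 1200,
}));
```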
    <div>
      <h3>Workers for Platforms will be available to purchase on April 16, 2024!</h3>
      <a href="#workers-for-platforms-will-be-available-to-purchase-on-april-16-2024">
        
      </a>
    </div>
    <p>Workers for Platforms will be available to purchase under the Workers for Platforms tab on the Cloudflare Dashboard on April 16, 2024.</p><p>In the meantime, to learn more about Workers for Platforms, check out our <a href="https://github.com/cloudflare/workers-for-platforms-example">starter project</a> and <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/">developer documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">2wPhlTmw4FThQkJsChhkwy</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Nikita Cano</dc:creator>
            <dc:creator>Matt Bullock</dc:creator>
            <dc:creator>Tim Kornhammar</dc:creator>
        </item>
    </channel>
</rss>