
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 06:27:55 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues]]></title>
            <link>https://blog.cloudflare.com/how-we-built-cloudflare-queues/</link>
            <pubDate>Thu, 24 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Learn how we built Cloudflare Queues using our own Developer Platform and how it evolved to a geographically-distributed, horizontally-scalable architecture built on Durable Objects. Our new architecture supports over 10x more throughput and over 3x lower latency compared to the previous version. ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://www.cloudflare.com/developer-platform/products/cloudflare-queues/"><u>Cloudflare Queues</u></a> lets developers decouple their Workers into event-driven services. Producer Workers write events to a Queue, and consumer Workers are invoked to take actions on the events. For example, you can use a Queue to decouple an e-commerce website from a service which sends purchase confirmation emails to users. During 2024’s Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#queues-is-ga"><u>announced that Cloudflare Queues is now Generally Available</u></a>, with significant performance improvements that enable larger workloads. To accomplish this, we switched to a new architecture for Queues that enabled the following improvements:</p><ul><li><p>Median latency for sending messages has dropped from ~200ms to ~60ms</p></li><li><p>Maximum throughput for each Queue has increased over 10x, from 400 to 5000 messages per second</p></li><li><p>Maximum Consumer concurrency for each Queue has increased from 20 to 250 concurrent invocations</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PvIsHfLwwIkp2LXVUDhmG/99f286f2f89d10b2a7e359d8d66f6dba/image5.png" />
          </figure><p><sup><i>Median latency drops from ~200ms to ~60ms as Queues are migrated to the new architecture</i></sup></p><p>In this blog post, we'll share details about how we built Queues using Durable Objects and the Cloudflare Developer Platform, and how we migrated from an initial Beta architecture to a geographically-distributed, horizontally-scalable architecture for General Availability.</p>
    <div>
      <h3>v1 Beta architecture</h3>
      <a href="#v1-beta-architecture">
        
      </a>
    </div>
    <p>When initially designing Cloudflare Queues, we decided to build something simple that we could get into users' hands quickly. First, we considered leveraging an off-the-shelf messaging system such as Kafka or Pulsar. However, we decided that it would be too challenging to operate these systems at scale with the large number of isolated tenants that we wanted to support.</p><p>Instead of investing in new infrastructure, we decided to build on top of one of Cloudflare's existing developer platform building blocks: <b>Durable Objects.</b> <a href="https://www.cloudflare.com/developer-platform/durable-objects/"><u>Durable Objects</u></a> are a simple, yet powerful building block for coordination and storage in a distributed system. In our initial <i>v1 </i>architecture, each Queue was implemented using a single Durable Object. As shown below, clients would send messages to a Worker running in their region, which would be forwarded to the single Durable Object hosted in the WNAM (Western North America) region. We used a single Durable Object for simplicity, and hosted it in WNAM for proximity to our centralized configuration API service.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yxj5Gut3usYa0mFbRddXU/881f5905f789bc2f910ee1b2dcadac92/image1.png" />
          </figure><p>One of a Queue's main responsibilities is to accept and store incoming messages. Sending a message to a <i>v1</i> Queue used the following flow:</p><ul><li><p>A client sends a POST request containing the message body to the Queues API at <code>/accounts/:accountID/queues/:queueID/messages</code></p></li><li><p>The request is handled by an instance of the <b>Queue Broker Worker</b> in a Cloudflare data center running near the client.</p></li><li><p>The Worker performs authentication, and then uses Durable Objects <code>idFromName</code> API to route the request to the <b>Queue Durable Object</b> for the given <code>queueID</code></p></li><li><p>The Queue Durable Object persists the message to storage before returning a <i>success </i>back to the client.</p></li></ul><p>Durable Objects handled most of the heavy-lifting here: we did not need to set up any new servers, storage, or service discovery infrastructure. To route requests, we simply provided a <code>queueID</code> and the platform handled the rest. To store messages, we used the Durable Object storage API to <code>put</code> each message, and the platform handled reliably storing the data redundantly.</p>
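<p>Under the hood, the routing step above can be sketched in a few lines. This is an illustrative model, not the production code: the binding name <code>QUEUE</code>, the internal URL, and the omission of authentication are all simplifications.</p>

```javascript
// Sketch of the v1 publish path: the key idea is that idFromName is
// deterministic, so every message for a given queueID converges on that
// queue's single Durable Object, which persists it before replying.
async function handlePublish(env, url, body) {
  const match = new URL(url).pathname.match(
    /^\/accounts\/[^/]+\/queues\/([^/]+)\/messages$/
  );
  if (!match) return { status: 404 };
  const queueID = match[1];
  // Same queueID -> same Durable Object ID -> same object instance.
  const id = env.QUEUE.idFromName(queueID);
  const stub = env.QUEUE.get(id);
  // The Durable Object stores the message before this resolves with success.
  return stub.fetch("https://queue/publish", { method: "POST", body });
}
```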
    <div>
      <h3>Consuming messages</h3>
      <a href="#consuming-messages">
        
      </a>
    </div>
    <p>The other main responsibility of a Queue is to deliver messages to a Consumer. Delivering messages in a v1 Queue used the following process:</p><ul><li><p>Each Queue Durable Object maintained an <b>alarm</b> that was always set when there were undelivered messages in storage. The alarm guaranteed that the Durable Object would reliably wake up to deliver any messages in storage, even in the presence of failures. The alarm time was configured to fire after the user's selected <i>max wait time</i>, if only a partial batch of messages was available. Whenever one or more full batches were available in storage, the alarm was scheduled to fire immediately.</p></li><li><p>The alarm would wake the Durable Object, which continually looked for batches of messages in storage to deliver.</p></li><li><p>Each batch of messages was sent to a "Dispatcher Worker" that used <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><u>Workers for Platforms</u></a> <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dynamic-dispatch-worker"><i><u>dynamic dispatch</u></i></a> to pass the messages to the <code>queue()</code> function defined in a user's Consumer Worker.</p></li></ul>
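<p>The alarm policy described above boils down to a small pure function. This is a hedged sketch with invented names, not the actual implementation:</p>

```javascript
// Alarm scheduling for a v1 Queue Durable Object: no messages means no alarm,
// a full batch fires immediately, and a partial batch waits up to the
// consumer's configured max wait time so it has a chance to fill up.
function nextAlarmTime(nowMs, availableMessages, batchSize, maxWaitTimeMs) {
  if (availableMessages === 0) return null;         // nothing to deliver
  if (availableMessages >= batchSize) return nowMs; // full batch: fire now
  return nowMs + maxWaitTimeMs;                     // partial batch: wait
}
```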
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vAM17x3nN49JBMGNTblPp/6af391950df5f0fbc14faeccb351e38c/image4.png" />
          </figure><p>This v1 architecture let us flesh out the initial version of the Queues Beta product and onboard users quickly. Using Durable Objects allowed us to focus on building application logic, instead of complex low-level systems challenges such as global routing and guaranteed durability for storage. Using a separate Durable Object for each Queue allowed us to host an essentially unlimited number of Queues, and provided isolation between them.</p><p>However, using <i>only</i> one Durable Object per queue had some significant limitations:</p><ul><li><p><b>Latency: </b>we created all of our v1 Queue Durable Objects in Western North America. Messages sent from distant regions incurred significant latency when traversing the globe.</p></li><li><p><b>Throughput: </b>A single Durable Object is not scalable: it is single-threaded and has a fixed capacity for how many requests per second it can process. This is where the previous 400 messages per second limit came from.</p></li><li><p><b>Consumer Concurrency: </b>Due to <a href="https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections"><u>concurrent subrequest limits</u></a>, a single Durable Object was limited in how many concurrent subrequests it could make to our Dispatcher Worker. This limited the number of <code>queue()</code> handler invocations that it could run simultaneously.</p></li></ul><p>To solve these issues, we created a new v2 architecture that horizontally scales across <b>multiple</b> Durable Objects to implement each single high-performance Queue.</p>
    <div>
      <h3>v2 Architecture</h3>
      <a href="#v2-architecture">
        
      </a>
    </div>
    <p>In the new v2 architecture for Queues, each Queue is implemented using multiple Durable Objects, instead of just one. Instead of a single region, we place <i>Storage Shard </i>Durable Objects in <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#supported-locations-1"><u>all available regions</u></a> to enable lower latency. Within each region, we create multiple Storage Shards and load balance incoming requests amongst them. Just like that, we’ve multiplied message throughput.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SJTb2UO8tKGrh26ixwLrA/7272e4eaf6f7e85f086a5ae08670387e/image2.png" />
          </figure><p>Sending a message to a v2 Queue uses the following flow:</p><ul><li><p>A client sends a POST request containing the message body to the Queues API at <code>/accounts/:accountID/queues/:queueID/messages</code></p></li><li><p>The request is handled by an instance of the <b>Queue Broker Worker</b> running in a Cloudflare data center near the client.</p></li><li><p>The Worker:</p><ul><li><p>Performs authentication</p></li><li><p>Reads from Workers KV to obtain a <i>Shard Map</i> that lists available storage shards for the given <code>region</code> and <code>queueID</code></p></li><li><p>Picks one of the region's Storage Shards at random, and uses Durable Objects <code>idFromName</code> API to route the request to the chosen shard</p></li></ul></li><li><p>The Storage Shard persists the message to storage before returning a <i>success </i>back to the client.</p></li></ul><p>In this v2 architecture, messages are stored in the closest available Durable Object storage cluster near the user, greatly reducing latency since messages don't need to be shipped all the way to WNAM. Using multiple shards within each region removes the bottleneck of a single Durable Object, and allows us to scale each Queue horizontally to accept even more messages per second. <a href="https://blog.cloudflare.com/tag/cloudflare-workers-kv/"><u>Workers KV</u></a> acts as a fast metadata store: our Worker can quickly look up the shard map to perform load balancing across shards.</p><p>To improve the <i>Consumer</i> side of v2 Queues, we used a similar "scale out" approach. A single Durable Object can only perform a limited number of concurrent subrequests. In v1 Queues, this limited the number of concurrent subrequests we could make to our Dispatcher Worker. To work around this, we created a new <i>Consumer Shard</i> Durable Object class that we can scale horizontally, enabling us to execute many more concurrent instances of our users' <code>queue()</code> handlers.</p>
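<p>The load-balancing step in the publish flow above can be sketched as follows. The shard-map shape and names are illustrative assumptions:</p>

```javascript
// The Worker reads the list of Storage Shards for the client's region from
// the shard map (cached in Workers KV) and picks one uniformly at random.
function pickStorageShard(shardMap, region) {
  const shards = shardMap[region];
  if (!shards || shards.length === 0) {
    throw new Error(`no storage shards available in ${region}`);
  }
  // Uniform random choice is a simple stateless load balancer: with enough
  // requests, writes spread evenly across the region's shards.
  return shards[Math.floor(Math.random() * shards.length)];
}
```

<p>Because the choice is uniform and stateless, no coordination between Workers is needed to spread load across shards.</p>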
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ujodUVBegDcWXi6DYJR41/5f31ba4da387df82613a496ff311f65f/image3.png" />
          </figure><p>Consumer Durable Objects in v2 Queues use the following approach:</p><ul><li><p>Each Consumer maintains an alarm that guarantees it will wake up to process any pending messages. <i>v2 </i>Consumers are notified by the Queue's <i>Coordinator </i>(introduced below) when there are messages ready for consumption. Upon notification, the Consumer sets an alarm to go off immediately.</p></li><li><p>The Consumer looks at the shard map, which contains information about the storage shards that exist for the Queue, including the number of available messages on each shard.</p></li><li><p>The Consumer picks a random storage shard with available messages, and asks for a batch.</p></li><li><p>The Consumer sends the batch to the Dispatcher Worker, just like for v1 Queues.</p></li><li><p>After processing the messages, the Consumer sends another request to the Storage Shard to either "acknowledge" or "retry" the messages.</p></li></ul><p>This scale-out approach enabled us to work around the subrequest limits of a single Durable Object, and increase the maximum supported concurrency level of a Queue from 20 to 250. </p>
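<p>One iteration of the consumer loop above might look like this sketch. All interfaces here (<code>pullBatch</code>, <code>ack</code>, <code>retry</code>, the <code>dispatch</code> callback) are invented for illustration:</p>

```javascript
// One Consumer Shard iteration: pick a storage shard that has messages,
// pull a batch, hand it to the Dispatcher Worker stand-in, then ack the
// successes and retry the failures back on that shard.
async function consumeOnce(shards, dispatch) {
  const candidates = shards.filter((s) => s.availableMessages > 0);
  if (candidates.length === 0) return null; // nothing ready to consume yet
  const shard = candidates[Math.floor(Math.random() * candidates.length)];
  const batch = await shard.pullBatch();
  const { acked, retried } = await dispatch(batch);
  await shard.ack(acked);     // delete successfully processed messages
  await shard.retry(retried); // make failed messages available again
  return batch.length;
}
```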
    <div>
      <h3>The Coordinator and “Control Plane”</h3>
      <a href="#the-coordinator-and-control-plane">
        
      </a>
    </div>
    <p>So far, we have primarily discussed the "Data Plane" of a v2 Queue: how messages are load balanced amongst Storage Shards, and how Consumer Shards read and deliver messages. The other main piece of a v2 Queue is the "Control Plane", which handles creating and managing all the individual Durable Objects in the system. In our v2 architecture, each Queue has a single <i>Coordinator</i> Durable Object that acts as the brain of the Queue. Requests to create a Queue, or change its settings, are sent to the Queue's Coordinator.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lYJs23oJ8ibtGgbuOk9JN/7ffa8073a4391602b67a0c6e134975bc/image7.png" />
          </figure><p>The Coordinator maintains a <i>Shard Map</i> for the Queue, which includes metadata about all the Durable Objects in the Queue (including their region, number of available messages, current estimated load, etc.). The Coordinator periodically writes a fresh copy of the Shard Map into Workers KV, as pictured in step 1 of the diagram. Placing the shard map into Workers KV ensures that it is globally cached and available for our Worker to read quickly, so that it can pick a shard to accept the message.</p><p>Every shard in the system periodically sends a heartbeat to the Coordinator as shown in steps 2 and 3 of the diagram. Both Storage Shards and Consumer Shards send heartbeats, including information like the number of messages stored locally, and the current load (requests per second) that the shard is handling. The Coordinator uses this information to perform <b><i>autoscaling. </i></b>When it detects that the shards in a particular region are overloaded, it creates additional shards in the region, and adds them to the shard map in Workers KV. Our Worker sees the updated shard map and naturally load balances messages across the freshly added shards. Similarly, the Coordinator looks at the backlog of available messages in the Queue, and decides to add more Consumer shards to increase Consumer throughput when the backlog is growing. Consumer Shards pull messages from Storage Shards for processing as shown in step 4 of the diagram.</p><p>Switching to a new scalable architecture allowed us to meet our performance goals and take Queues to GA. As a recap, this new architecture delivered these significant improvements:</p><ul><li><p>P50 latency for writing to a Queue has dropped from ~200ms to ~60ms.</p></li><li><p>Maximum throughput for a Queue has increased from 400 to 5000 messages per second.</p></li><li><p>Maximum consumer concurrency has increased from 20 to 250 invocations.	</p></li></ul>
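<p>The Coordinator's scaling decision can be modeled as a pure function over heartbeat data. The thresholds and field names below are invented for illustration, not Cloudflare's actual policy:</p>

```javascript
// Heartbeat-driven autoscaling: add Storage Shards when a region's shards
// run hot, and add Consumer Shards when the message backlog keeps growing.
function autoscale(regionStats, backlog, limits) {
  const decisions = [];
  for (const [region, stats] of Object.entries(regionStats)) {
    // Heartbeats report load per shard; scale a region out when it runs hot.
    const avgLoad = stats.requestsPerSecond / stats.shardCount;
    if (avgLoad > limits.maxLoadPerShard) {
      decisions.push({ action: "addStorageShard", region });
    }
  }
  // A growing backlog means consumers can't keep up: add Consumer Shards.
  if (backlog.current > backlog.previous && backlog.current > limits.minBacklog) {
    decisions.push({ action: "addConsumerShard" });
  }
  return decisions;
}
```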
    <div>
      <h3>What's next for Queues</h3>
      <a href="#whats-next-for-queues">
        
      </a>
    </div>
    <ul><li><p>We plan on leveraging the performance improvements in the new <a href="https://developers.cloudflare.com/durable-objects/"><u>beta version of Durable Objects</u></a>, which uses SQLite, to continue improving throughput and latency in Queues.</p></li><li><p>We will soon be adding message management features to Queues so that you can take actions to purge messages in a queue, pause consumption of messages, or “redrive”/move messages from one queue to another (for example, messages that have been sent to a Dead Letter Queue could be “redriven” or moved back to the original queue).</p></li><li><p>We will work to make Queues the "event hub" for the Cloudflare Developer Platform:</p><ul><li><p>Create a low-friction way for events emitted from other Cloudflare services with event schemas to be sent to Queues.</p></li><li><p>Build multi-Consumer support for Queues so that Queues are no longer limited to one Consumer per queue.</p></li></ul></li></ul><p>To start using Queues, head over to our <a href="https://developers.cloudflare.com/queues/get-started/"><u>Getting Started</u></a> guide. </p><p>Do distributed systems like Cloudflare Queues and Durable Objects interest you? Would you like to help build them at Cloudflare? <a href="https://boards.greenhouse.io/embed/job_app?token=5390243&amp;gh_src=b2e516a81us"><u>We're Hiring!</u></a></p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">41vXJNWrB0YHsKqSz6SGDS</guid>
            <dc:creator>Josh Wheeler</dc:creator>
            <dc:creator>Siddhant Sinha</dc:creator>
            <dc:creator>Todd Mantell</dc:creator>
            <dc:creator>Pranshu Maheshwari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Queues: messages at your speed with consumer concurrency and explicit acknowledgement]]></title>
            <link>https://blog.cloudflare.com/messages-at-your-speed-with-concurrency-and-explicit-acknowledgement/</link>
            <pubDate>Fri, 19 May 2023 13:00:33 GMT</pubDate>
            <description><![CDATA[ Queues is faster than ever before! Now queues will automatically scale up your consumers, clearing out backlogs in a flash. Explicit Acknowledgement allows developers to acknowledge or retry individual messages in a batch, preventing work from being repeated. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Communicating between systems can be a balancing act that has a major impact on your business. <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> have limits, billing frequently depends on usage, and end-users are always looking for more speed in the services they use. With so many conflicting considerations, it can feel like a challenge to get it just right. Cloudflare Queues is a tool to make this balancing act simple. With our latest features like consumer concurrency and explicit acknowledgment, it’s easier than ever for developers to focus on writing great code, rather than worrying about the fees and rate limits of the systems they work with.</p><p>Queues is a messaging service, enabling developers to send and receive messages across systems asynchronously with guaranteed delivery. It integrates directly with Cloudflare Workers, making it easy to produce and consume messages while working with the many products and services we offer.</p>
    <div>
      <h2>What’s new in Queues?</h2>
      <a href="#whats-new-in-queues">
        
      </a>
    </div>
    
    <div>
      <h3>Consumer concurrency</h3>
      <a href="#consumer-concurrency">
        
      </a>
    </div>
    <p>Oftentimes, the systems we pull data from can produce information faster than other systems can consume it. This can occur when consumption involves processing information, storing it, or sending and receiving information to a third party system. As a result, a queue can sometimes fall behind where it should be. At Cloudflare, a queue shouldn't be a quagmire. That’s why we’ve introduced Consumer Concurrency.</p><p>With Concurrency, we automatically scale up the number of consumers needed to match the speed of information coming into any given queue. In this way, customers no longer have to worry about an ever-growing backlog of information bogging down their system.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>When setting up a queue, developers can set a Cloudflare Workers script as a target to send messages to. With concurrency enabled, Cloudflare will invoke multiple instances of the selected Worker script to keep the messages in the queue moving effectively. This feature is enabled by default for every queue and set to automatically scale.</p><p>Autoscaling considers the following factors when spinning up consumers: the number of messages in a queue, the rate of new messages, and successful vs. unsuccessful consumption attempts.</p><p>If a queue has enough messages in it, concurrency will increase each time a message batch is successfully processed. Concurrency is decreased when message batches encounter errors. Customers can set a <code>max_concurrency</code> value in the Dashboard or via Wrangler, which caps how many consumers can be automatically created to perform processing for a given queue.</p><p>Setting the <code>max_concurrency</code> value manually can be helpful in situations where producer data arrives in bursts, where the data source API is rate limited, or where the data source API costs more with increased usage.</p><p>Setting a max concurrency value manually allows customers to optimize their workflows for other factors beyond speed.</p>
            <pre><code># in your wrangler.toml file

[[queues.consumers]]
queue = "my-queue"

# max_concurrency can be set to a number between 1 and 10
# this caps the total number of consumers running simultaneously
max_concurrency = 7</code></pre>
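<p>The autoscaling behavior described above can be modeled roughly as follows. This is a simplified sketch of the policy, not Cloudflare's actual implementation:</p>

```javascript
// Concurrency goes down when a batch fails, goes up (capped at the configured
// max_concurrency) after a successful batch while a backlog remains, and
// otherwise holds steady.
function adjustConcurrency(current, { queueDepth, batchSucceeded }, maxConcurrency) {
  if (!batchSucceeded) return Math.max(current - 1, 1);               // back off on errors
  if (queueDepth > 0) return Math.min(current + 1, maxConcurrency);   // scale into the backlog
  return current;                                                     // keep pace
}
```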
            <p>To learn more about concurrency you can check out our developer documentation <a href="https://developers.cloudflare.com/queues/learning/consumer-concurrency/">here</a>.</p>
    <div>
      <h3>Concurrency in practice</h3>
      <a href="#concurrency-in-practice">
        
      </a>
    </div>
    <p>It’s baseball season in the US, and for many of us that means fantasy baseball is back! This year is the year we finally write a program that uses data and statistics to pick a winning team, as opposed to picking players based on “feelings” and “vibes”. We’re engineers after all, and baseball is a game of rules. If the Oakland A’s can do it, so can we!</p><p>So how do we put this together? We’ll need a few things:</p><ol><li><p>A list of potential players</p></li><li><p>An API to pull historical game statistics from</p></li><li><p>A queue to send this data to its consumer</p></li><li><p>A Worker script to crunch the numbers and generate a score</p></li></ol><p>A developer can pull from a baseball reference API into a Workers script, and from that worker pass this information to a queue. Historical data is… historical, so we can pull data into our queue as fast as the baseball API will allow us. For our list of potential players, we pull statistics for each game they’ve played. This includes everything from batting averages, to balls caught, to game day weather. Score!</p>
            <pre><code>// get data from a third-party API and pass it along to a queue
const response = await fetch("https://example.com/baseball-stats.json");
const gamesPlayedJSON = await response.json();

for (const game of gamesPlayedJSON) {
  // send each game's JSON to the queue binding defined in your Worker's environment
  await env.baseballqueue.send(game);
}</code></pre>
            <p>Our producer Workers script then passes these statistics onto the queue. As each game contains quite a bit of data, this results in hundreds of thousands of “game data” messages waiting to be processed in our queue. Without concurrency, we would have to wait for each batch of messages to be processed one at a time, taking minutes if not longer. But with Consumer Concurrency enabled, we watch as multiple instances of our Worker script are invoked to process this information in no time!</p><p>Our Worker script would then take these statistics, apply a heuristic, and store the player name and a corresponding quality score into a database like Workers KV for easy access by the application presenting the data.</p>
    <div>
      <h3>Explicit Acknowledgment</h3>
      <a href="#explicit-acknowledgment">
        
      </a>
    </div>
    <p>Previously in Queues, a failure of a single message in a batch would result in the whole batch being resent to the consumer to be reprocessed. This resulted in extra cycles being spent on messages that were processed successfully, in addition to the failed message attempt. This hurt both customers and developers, slowing processing time, increasing complexity, and increasing costs.</p><p>With Explicit Acknowledgment, we give developers the precision and flexibility to handle each message individually in their consumer, negating the need to reprocess entire batches of messages. Developers can now tell their queue whether their consumer has properly processed each message, or alternatively if a specific message has failed and needs to be retried.</p><p>An acknowledgment of a message means that the message will not be retried if the batch fails; only messages that were not acknowledged will be retried. Conversely, a message that is explicitly retried will be sent again from the queue to be reprocessed, without impacting the processing of the rest of the messages in the batch.</p>
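<p>These retry semantics can be modeled in a few lines. This is an illustrative simulation, not the actual queue internals:</p>

```javascript
// Acked messages are deleted, explicitly retried messages always go back on
// the queue, and a batch-level failure redelivers only the never-acked rest.
function settleBatch(batch, batchFailed) {
  const redeliver = [];
  for (const msg of batch) {
    if (msg.retried) redeliver.push(msg.id);                    // explicit retry
    else if (!msg.acked && batchFailed) redeliver.push(msg.id); // implicit retry
    // acked messages are deleted in every case
  }
  return redeliver;
}
```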
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>In your consumer, there are four new methods you can call to explicitly acknowledge a given message: <code>ack()</code>, <code>retry()</code>, <code>ackAll()</code>, and <code>retryAll()</code>.</p><p>Both <code>ack()</code> and <code>retry()</code> can be called on individual messages. <code>ack()</code> tells a queue that the message has been processed successfully and that it can be deleted from the queue, whereas <code>retry()</code> tells the queue that this message should be put back on the queue and delivered in another batch.</p>
            <pre><code>async queue(batch, env, ctx) {
  for (const msg of batch.messages) {
    try {
      // send our data to a third-party API for processing
      await fetch('https://thirdpartyAPI.example.com/stats', {
        method: 'POST',
        body: JSON.stringify(msg.body),
        headers: {
          'Content-type': 'application/json'
        }
      });
      // acknowledge if successful
      msg.ack();
      // we don't have to re-process this message if subsequent messages fail!
    } catch (error) {
      // send the message back to the queue for a retry if there's an error
      msg.retry();
      console.log("Error processing", msg, error);
    }
  }
}</code></pre>
            <p><code>ackAll()</code> and <code>retryAll()</code> work similarly, but act on the entire batch of messages instead of individual messages.</p><p>For more details check out our developer documentation <a href="https://developers.cloudflare.com/queues/learning/batching-retries/">here</a>.</p>
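<p>For example, a consumer that writes the whole batch to a downstream store in one operation might use the batch-level methods like this sketch (the <code>bulkInsert</code> callback is a hypothetical stand-in for your own storage call):</p>

```javascript
// If the bulk operation succeeds, every message succeeded together, so
// acknowledge the whole batch; if it fails, put every message back.
async function handleBatch(batch, bulkInsert) {
  try {
    await bulkInsert(batch.messages.map((m) => m.body));
    batch.ackAll();   // the entire batch was processed successfully
  } catch (error) {
    batch.retryAll(); // redeliver the whole batch in a later attempt
  }
}
```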
    <div>
      <h3>Explicit Acknowledgment in practice</h3>
      <a href="#explicit-acknowledgment-in-practice">
        
      </a>
    </div>
    <p>In the course of making our Fantasy Baseball team picker, we notice that data isn’t always sent correctly from the baseball reference API. This results in data that fails to parse correctly and is rejected by our player heuristic.</p><p>Without Explicit Acknowledgment, the entire batch of baseball statistics would need to be retried. Thankfully, we can use Explicit Acknowledgment to avoid that, and tell our queue which messages were parsed successfully and which were not.</p>
            <pre><code>import heuristic from "baseball-heuristic";
export default {
  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
    for (const msg of batch.messages) {
      try {
        // Calculate the score based on the game stats in the message body
        heuristic.generateScore(msg.body);
        // Explicitly acknowledge the successful result
        msg.ack();
      } catch (err) {
        console.log(err);
        // Retry just this message
        msg.retry();
      }
    }
  },
};</code></pre>
            
    <div>
      <h3>Higher throughput</h3>
      <a href="#higher-throughput">
        
      </a>
    </div>
    <p>Under the hood, we’ve been working on improvements to further increase the number of messages per second each queue can handle. In the last few months, that number has quadrupled, improving from 100 to over 400 messages per second.</p><p>Scalability can be an essential factor when deciding which services to use to power your application. You want a service that can grow with your business. We are always aiming to improve our message throughput and hope to see this number quadruple again over the next year. We want to grow with you.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As our service grows, we want to provide our customers with more ways to interact with our service beyond the traditional Cloudflare Workers workflow. We know our customers’ infrastructure is often complex, spanning across multiple services. With that in mind, our focus will be on enabling easy connection to services both within the Cloudflare ecosystem and beyond.</p>
    <div>
      <h3>R2 as a consumer</h3>
      <a href="#r2-as-a-consumer">
        
      </a>
    </div>
    <p>Today, the only type of consumer you can configure for a queue is a Workers script. While Workers are incredibly powerful, we want to take it a step further and give customers a chance to write directly to other services, starting with <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>. Coming soon, customers will be able to select an R2 bucket in the Cloudflare Dashboard for a Queue to write to directly, no code required. This will save valuable developer time by avoiding the initial setup in a Workers script, and any maintenance that is required as services evolve. With R2 as a first party consumer in Queues, customers can simply select their bucket, and let Cloudflare handle the rest.</p>
    <div>
      <h3>HTTP pull</h3>
      <a href="#http-pull">
        
      </a>
    </div>
    <p>We're also working to allow you to consume messages from existing infrastructure you might have outside of Cloudflare. Cloudflare Queues will provide an HTTP API for each queue from which any consumer can pull batches of messages for processing. Customers simply make a request to the API endpoint for their queue, receive the data they requested, and then send an acknowledgment that they have received it, so the queue can continue working on the next batch.</p>
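<p>A pull-based consumer loop might look roughly like this sketch. The endpoint paths and payload shapes below are hypothetical, since the HTTP API hasn't shipped yet; <code>fetchImpl</code> is injected so the sketch works with any HTTP client:</p>

```javascript
// Pull a batch, process it with your own infrastructure, then acknowledge it
// so the queue can move on to the next batch. (Illustrative endpoints only.)
async function pullAndAck(baseUrl, fetchImpl, process) {
  // 1. request a batch of messages from the queue's pull endpoint
  const res = await fetchImpl(`${baseUrl}/messages/pull`, { method: "POST" });
  const { messages } = await res.json();
  // 2. process each message with existing, non-Cloudflare infrastructure
  for (const msg of messages) await process(msg);
  // 3. acknowledge receipt so the messages are not redelivered
  await fetchImpl(`${baseUrl}/messages/ack`, {
    method: "POST",
    body: JSON.stringify({ ids: messages.map((m) => m.id) }),
  });
  return messages.length;
}
```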
    <div>
      <h3>Always working to be faster</h3>
      <a href="#always-working-to-be-faster">
        
      </a>
    </div>
    <p>For the Queues team, speed is always our focus, as we understand our customers don't want bottlenecks in the performance of their applications. With this in mind, the team will continue to look for ways to increase the velocity with which developers can build best-in-class applications on our developer platform. Whether it's reducing message processing time, cutting down the amount of code you need to manage, or giving developers control over their application pipeline, we will continue to implement solutions that allow you to focus on just the important things, while we handle the rest.</p><p>Cloudflare Queues is currently in Open Beta and ready to power your most complex applications.</p><p>Check out our getting started <a href="https://developers.cloudflare.com/queues/learning/how-queues-works/">guide</a> and build your service with us today!</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6sSF3GnFonTy4zGf6McmFv</guid>
            <dc:creator>Charles Burnett</dc:creator>
            <dc:creator>Josh Wheeler</dc:creator>
        </item>
        <item>
            <title><![CDATA[A new era for Cloudflare Pages builds]]></title>
            <link>https://blog.cloudflare.com/cloudflare-pages-build-improvements/</link>
            <pubDate>Tue, 10 May 2022 13:01:10 GMT</pubDate>
            <description><![CDATA[ Announcing several build experience improvements for Cloudflare Pages including build times, logging and configuration ]]></description>
            <content:encoded><![CDATA[ <p>Music is flowing through your headphones. Your hands are flying across the keyboard. You’re stringing together a masterpiece of code. The momentum builds as you put the finishing touches on your project. And at last, it’s ready for the world to see. Heart pounding with excitement and the feeling of victory, you push changes to the main branch… only to end up waiting for the build to execute each step and spit out the build logs.</p>
    <div>
      <h2>Starting afresh</h2>
      <a href="#starting-afresh">
        
      </a>
    </div>
    <p>Since the launch of Cloudflare Pages, there is no doubt that the build experience has been its biggest source of criticism. From the amount of waiting to the inflexibility of the CI workflow, Pages had plenty of room for growth and improvement. With Pages, our North Star has always been designing a developer platform that fits right into your workflow and oozes simplicity. User pain points have been and always will be our priority, which is why today we are thrilled to share a list of exciting updates to our build times, logs, and settings!</p><p>Over the last three quarters, we implemented a new build infrastructure that speeds up Pages builds, so you can iterate quickly and efficiently. In February, we soft-launched the Pages Fast Builds Beta, allowing you to opt in to this new infrastructure on a per-project basis. This not only allowed us to test our implementation, but also gave our community the opportunity to try it out and give us direct feedback in <a href="https://discord.gg/cloudflaredev">Discord</a>. Today we are excited to announce that the new build infrastructure is now generally available and automatically enabled for all existing and new projects!</p>
    <div>
      <h2>Faster build times</h2>
      <a href="#faster-build-times">
        
      </a>
    </div>
    <p>As a developer, your time is extremely valuable, and we realize Pages builds were slow. It was obvious that creating an infrastructure that built projects faster and smarter was one of our top requirements.</p><p>Looking at a Pages build, there are four main steps: (1) initializing the build environment, (2) cloning your git repository, (3) building the application, and (4) deploying to Cloudflare’s global network. Each of these steps is a crucial part of the build process, and upon investigating areas suitable for optimization, we directed our efforts to cutting down build initialization time.</p><p>In our old infrastructure, every time a build job was submitted, we created a new virtual machine to run that build, costing our users precious dev time. In our new infrastructure, we start jobs on machines that are ready and waiting to be used, taking a major chunk of time away from the build initialization step. This step previously ran for 2+ minutes, but with our new infrastructure update, projects are expected to see build initialization cut down to <b>2-3 seconds</b>.</p><p>This means less time waiting and more time iterating on your code.</p>
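    <p>The change can be pictured as a warm pool: instead of creating an environment per job, a set of environments is provisioned ahead of time and handed out on demand. The sketch below is purely illustrative of that concept, not the actual scheduler.</p>

```javascript
// Illustrative warm-pool sketch: build environments are provisioned before
// any job arrives, so acquiring one for a new build is effectively instant.
class WarmPool {
  constructor(createEnv, size) {
    this.createEnv = createEnv;
    // Pre-provision `size` environments up front.
    this.ready = Array.from({ length: size }, () => createEnv());
  }

  acquire() {
    // Hand out a warm environment if one is waiting (fall back to a cold
    // create only when the pool is empty), then top the pool back up.
    const env = this.ready.length > 0 ? this.ready.shift() : this.createEnv();
    this.ready.push(this.createEnv());
    return env;
  }
}
```

    <p>The slow part (creating the environment) happens before and between builds, so the cost a user sees at submit time shrinks to a handoff.</p>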
    <div>
      <h3>Fast and secure</h3>
      <a href="#fast-and-secure">
        
      </a>
    </div>
    <p>In our old build infrastructure, because we spun up a new virtual machine (VM) for every build, it would take several minutes to boot up and initialize with the Pages build image needed to execute the build. Alternatively, one could run each build in a container, assigning a new build to the next available container, but containers share a kernel with the host operating system, making them far less isolated and posing a huge security risk. This could allow a malicious actor to perform a "container escape" to break out of their sandbox. We wanted the best of both worlds: the speed of a container with the isolation of a virtual machine.</p><p>Enter <a href="https://gvisor.dev/">gVisor</a>, a container sandboxing technology that drastically limits the attack surface of a host. In the new infrastructure, each container running with gVisor is given its own independent application “kernel,” instead of directly sharing the kernel with its host. Then, to address the speed, we keep a cluster of virtual machines warm and ready to execute builds, so that when a new Pages deployment is triggered, it takes just a few seconds for a new gVisor container to start up and begin executing meaningful work in a secure sandbox with near-native performance.</p>
    <div>
      <h2>Stream your build logs</h2>
      <a href="#stream-your-build-logs">
        
      </a>
    </div>
    <p>After we solidified a fast and secure build, we wanted to enhance the user-facing build experience. Because a build may not succeed every time, providing you with the tools you need to debug, and access to that information as fast as possible, is crucial. While we have a long list of future improvements for a better logging experience, today we are starting by enabling you to stream your build logs.</p><p>Prior to today, with the aforementioned build steps required to complete a Pages build, you had to wait until the build completed to view the resulting build logs. Easily addressable issues like mistyping the build command or misspecifying an environment variable required waiting for the entire build to finish before the problem became clear.</p><p>Today, we’re giving you the power to understand your build issues as soon as they happen. Spend less time waiting for your logs and start debugging the events of your builds within a second or less after they happen!</p>
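    <p>The heart of streamed logs is emitting each complete line as raw output arrives, rather than buffering the whole build. Below is a minimal, illustrative version of that line-splitting step, not Pages’ actual implementation.</p>

```javascript
// Emit complete log lines as chunks arrive instead of waiting for the whole
// build output. Illustrative of the streaming idea only.
function makeLineStreamer(onLine) {
  let buffer = "";
  return function push(chunk) {
    buffer += chunk;
    const lines = buffer.split("\n");
    // The final element is an incomplete line (or ""); keep it for the next chunk.
    buffer = lines.pop();
    for (const line of lines) onLine(line);
  };
}
```

    <p>Chunk boundaries rarely align with line boundaries, which is why the partial tail is carried over: each log line reaches the viewer the moment its newline arrives.</p>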
    <div>
      <h2>Control Branch Builds</h2>
      <a href="#control-branch-builds">
        
      </a>
    </div>
    <p>Finally, the build experience includes not just the events during execution, but everything leading up to the trigger of a build. For our final trick, we’re giving our users full control of the precise branches they’d like to include and exclude for automatic deployments.</p><p>Before today, Pages submitted builds for every commit in both production and preview environments, which led to queued builds and even more waiting if you exceeded your concurrent build limit. We wanted to provide even more flexibility to control your CI workflow. Now you can configure your build settings to specify which branches to build, as well as skip ad hoc commits.</p>
    <div>
      <h3>Specify branches to build</h3>
      <a href="#specify-branches-to-build">
        
      </a>
    </div>
    <p>While “unlimited staging” is one of Pages’ greatest advantages, depending on your setup, automatic deployments to the preview environment can sometimes create extra noise.</p><p>In the Pages build configuration settings, you can turn off automatic deployments for the production environment, the preview environment, or specific preview branches. In a more extreme case, you can even pause all deployments so that no commit sent to your git source will trigger a new Pages build.</p><p>Additionally, in your project’s settings, you can now configure the specific preview branches you would like to include and exclude for automatic deployments. To make this configuration an even more powerful tool, you can use wildcard syntax to set rules for existing branches as well as any newly created preview branches.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fw9Ltufs0OqoXuByAcKdD/8e871205590c8270f36eb33627a13ae9/image1-15.png" />
            
            </figure><p><a href="https://developers.cloudflare.com/pages/platform/branch-build-controls/">Read more in our Pages docs</a> on how to get started with configuring automatic deployments with Wildcard Syntax.</p>
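    <p>To make the wildcard semantics concrete, here is an illustrative sketch of include/exclude matching in which <code>*</code> matches any run of characters. The precedence (excludes win) and the helper names are assumptions for the example; see the linked docs for the real behavior.</p>

```javascript
// Illustrative wildcard matching for branch build rules: `*` matches any run
// of characters. The precedence (exclude wins over include) is an assumption
// for this sketch, not a statement of Pages' exact behavior.
function matchesPattern(pattern, branch) {
  // Escape regex metacharacters in the literal parts, then let `*` become `.*`.
  const source = pattern
    .split("*")
    .map((part) => part.replace(/[.+?^${}()|[\]\\]/g, "\\$&"))
    .join(".*");
  return new RegExp(`^${source}$`).test(branch);
}

function shouldDeploy(branch, { include = ["*"], exclude = [] } = {}) {
  if (exclude.some((p) => matchesPattern(p, branch))) return false;
  return include.some((p) => matchesPattern(p, branch));
}
```

    <p>For example, under these rules <code>include: ["feature/*"]</code> with <code>exclude: ["*wip*"]</code> would deploy <code>feature/login</code> but skip <code>feature/wip-redesign</code>.</p>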
    <div>
      <h3>Using CI Skip</h3>
      <a href="#using-ci-skip">
        
      </a>
    </div>
    <p>Sometimes commits need to be skipped on an ad hoc basis. A small update to copy or a set of changes within a small timespan don’t always require an entire site rebuild. That’s why we also implemented a CI Skip command for your commit message, signaling to Pages that the update should be skipped by our builder.</p><p>With both CI Skip and configured build rules, you can keep track of your site changes in Pages’ deployment history.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Nm9Zxx2FDkDz1OAvBq3OE/d349e0628a51eed8036b81db1cadf21d/image2-10.png" />
            
            </figure>
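    <p>Detecting the skip directive amounts to scanning the commit message for a marker. A minimal sketch, assuming bracketed markers like <code>[CI Skip]</code>; the exact set of accepted spellings is an assumption here, so check the Pages docs for the definitive list.</p>

```javascript
// Minimal sketch: decide whether a commit should skip the Pages build based
// on a marker in its message. The accepted marker spellings here are an
// assumption; check the Pages docs for the exact list.
function isCiSkip(commitMessage) {
  return /\[(ci[ -]skip|skip[ -]ci)\]/i.test(commitMessage);
}
```
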
    <div>
      <h2>Where we’re going</h2>
      <a href="#where-were-going">
        
      </a>
    </div>
    <p>We’re extremely excited to bring these updates to you today, but of course, this is only the beginning of improving our build experience. Over the next few quarters, we will be bringing more to the build experience to create a seamless developer journey from site inception to launch.</p>
    <div>
      <h3>Incremental builds and caching</h3>
      <a href="#incremental-builds-and-caching">
        
      </a>
    </div>
    <p>From beta testing, we noticed that our new infrastructure can be less impactful on larger projects that use heavier frameworks such as Gatsby. We believe that every user on our developer platform, regardless of their use case, has the right to fast builds. Up next, we will be implementing incremental builds to help Pages identify only the deltas between commits and rebuild only files that were directly updated. We will also be implementing other caching strategies such as caching external dependencies to save time on subsequent builds.</p>
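    <p>The delta idea can be pictured as a comparison of content hashes between builds: only files whose hash changed, or that are new, need rebuilding. The sketch below is a deliberately simplified model of that idea, not how Pages will implement it.</p>

```javascript
// Illustrative delta computation for incremental builds: compare per-file
// content hashes from the previous build against the current tree and
// return only the files that changed or are new.
function filesToRebuild(previousHashes, currentHashes) {
  const changed = [];
  for (const [file, hash] of Object.entries(currentHashes)) {
    if (previousHashes[file] !== hash) changed.push(file);
  }
  return changed;
}
```
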
    <div>
      <h3>Build image updates</h3>
      <a href="#build-image-updates">
        
      </a>
    </div>
    <p>Because we’ve been using the same build image we launched Pages with back in 2021, we are going to make some major updates. Languages release new versions all the time, and we want to make sure we update and maintain the latest versions. An updated build image will mean faster builds, more security, and of course support for all the latest versions of the languages and tools we provide. As new build image versions are released, we will allow users to opt in to the updated builds, in order to maintain compatibility with all existing projects.</p>
    <div>
      <h3>Productive error messaging</h3>
      <a href="#productive-error-messaging">
        
      </a>
    </div>
    <p>Lastly, while streaming build logs helps you to identify those easily addressable issues, the infamous “Internal error occurred” is sometimes a little more cryptic to decipher depending on the failure. While we recently published a “<a href="https://developers.cloudflare.com/pages/platform/debugging-pages/">Debugging Cloudflare Pages</a>” guide, in the future we’d like to provide the error feedback in a more productive manner, so you can easily identify the issue.</p>
    <div>
      <h2>Have feedback?</h2>
      <a href="#have-feedback">
        
      </a>
    </div>
    <p>As always, your feedback defines our roadmap. With all the updates we’ve made to our build experience, it’s important we hear from you! You can get in touch with our team directly through <a href="https://discord.gg/cloudflaredev">Discord</a>. Navigate to our Pages specific section and check out our various channels specific to different parts of the product!</p>
    <div>
      <h3>Join us at Cloudflare Connect!</h3>
      <a href="#join-us-at-cloudflare-connect">
        
      </a>
    </div>
    <p>Interested in learning more about building with Cloudflare Pages? If you’re based in the New York City area, join us on Thursday, May 12th for a series of workshops on how to build a full stack application on Pages! Follow along with a fully hands-on lab, featuring Pages in conjunction with other products like Workers, Images and Cloudflare Gateway, and hear directly from our product managers. <a href="https://events.www.cloudflare.com/flow/cloudflare/connect2022nyc/landing/page/page">Register now</a>!</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2oh3ffXctAGXfS6sDKaqSB</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Josh Wheeler</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
        </item>
    </channel>
</rss>