
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 06:31:14 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Workers Builds: integrated CI/CD built on the Workers platform]]></title>
            <link>https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/</link>
            <pubDate>Thu, 31 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workers Builds, an integrated CI/CD pipeline for the Workers platform, recently launched in open beta. We walk through how we built this product on Cloudflare’s Developer Platform. ]]></description>
            <content:encoded><![CDATA[ <p>During 2024’s Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#continuous-integration-and-delivery"><u>launched Workers Builds</u></a> in open beta — an integrated <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">Continuous Integration and Delivery (CI/CD) </a>workflow you can use to build and deploy everything from full-stack applications built with the most popular frameworks to simple static websites onto the Workers platform. With Workers Builds, you can connect a GitHub or GitLab repository to a Worker, and Cloudflare will automatically build and deploy your changes each time you push a commit.</p><p>Workers Builds is intended to bridge the gap between the developer experiences for Workers and Pages, the latter of which <a href="https://blog.cloudflare.com/cloudflare-pages/"><u>launched with an integrated CI/CD system in 2020</u></a>. As we continue to <a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><u>merge the experiences of Pages and Workers</u></a>, we wanted to bring one of the best features of Pages to Workers: the ability to tie deployments to existing development workflows in GitHub and GitLab with minimal developer overhead. </p><p>In this post, we’re going to share how we built the Workers Builds system on Cloudflare’s Developer Platform, using <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/hyperdrive"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>Workers Logs</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/smart-placement"><u>Smart Placement</u></a>.</p>
    <div>
      <h2>The design problem</h2>
      <a href="#the-design-problem">
        
      </a>
    </div>
    <p>The core problem for Workers Builds is how to pick up a commit from GitHub or GitLab and start a containerized job that can clone the repo, build the project, and deploy a Worker. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6n6UCIKAM4uAtWzsRBiS16/1c0b655b415afe375b6b153ada570357/BLOG-2594_2.png" />
          </figure><p>Pages solves a similar problem, and we were initially inclined to expand our existing architecture and tech stack, which includes a centralized configuration plane written in Go and running on Kubernetes. We also considered how the Workers ecosystem has evolved in the four years since Pages launched — we have since shipped many more tools built for use cases just like this! </p><p>The distributed nature of Workers offers some advantages over a centralized stack — we can spend less time configuring Kubernetes because Workers automatically handles failover and scaling. Ultimately, we decided to re-use what carried over from Pages with no additional work (namely, the system for connecting GitHub/GitLab accounts to Cloudflare and ingesting push events from them), and to build everything else as a new architecture on the Workers platform, designed for reliability and minimal latency.</p>
    <div>
      <h2>The Workers Builds system</h2>
      <a href="#the-workers-builds-system">
        
      </a>
    </div>
    <p>We didn’t need to make any changes to the system that handles connections from GitHub/GitLab to Cloudflare and ingesting push events from them. That left us with two systems to build: the configuration plane for users to connect a Worker to a repo, and a build management system to run and monitor builds.</p>
    <div>
      <h3>Client Worker </h3>
      <a href="#client-worker">
        
      </a>
    </div>
    <p>We can begin with our configuration plane, which consists of a simple Client Worker that implements a RESTful API (using <a href="https://hono.dev/docs/getting-started/cloudflare-workers"><u>Hono</u></a>) and connects to a PostgreSQL database. It’s in this database that we store build configurations for our users, and through this Worker that users can view and manage their builds. </p><p>We use a <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive binding</u></a> to connect to our database <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database"><u>securely over Cloudflare Access</u></a> (Hyperdrive also handles connection pooling and query caching).</p><p>We considered a more distributed data model (like <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>, sharded by account), but ultimately decided that keeping our database in a single datacenter was a better fit for our use case. The Workers Builds data model is relational — Workers belong to Cloudflare Accounts, and Builds belong to Workers — and build metadata must be consistent in order to properly manage build queues. We chose to keep our failover-ready database in a centralized datacenter and take advantage of two other Workers products, Smart Placement and Hyperdrive, in order to keep the benefits of a distributed control plane. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33eYqRr5LXKbAvfP8RR7X7/b82858c39b9755c6e056577c9449b00f/BLOG-2594_3.png" />
          </figure><p>Everything that you see in the Cloudflare Dashboard related to Workers Builds is served by this Worker. </p>
    <div>
      <h3>Build Management Worker</h3>
      <a href="#build-management-worker">
        
      </a>
    </div>
    <p>The more challenging problem we faced was how to run and manage user builds effectively. We wanted to support the same experience that we had achieved with Pages, which led to these key requirements:</p><ol><li><p>Builds should be initiated with minimal latency.</p></li><li><p>The status of a build should be tracked and displayed through its entire lifecycle, starting when a user pushes a commit.</p></li><li><p>Customer build logs should be stored in a secure, private, and long-lived way.</p></li></ol><p>To solve these problems, we leaned heavily on <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> (DO). </p><p>We created a Build Management Worker with two DO classes: a Scheduler class to manage the scheduling of builds, and a class called BuildBuddy to manage individual builds. This design keeps the system efficient and scalable: since each build is assigned its own build manager DO, its operation never blocks other builds or the scheduler, so we can start builds with minimal latency. Below, we dive into each of these Durable Objects classes.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6RUDJI7IYIlzcX4qjF9EYY/7e959b7a4489a41d275d74d634389f31/BLOG-2594_4.png" />
          </figure>
    <div>
      <h4>Scheduler DO</h4>
      <a href="#scheduler-do">
        
      </a>
    </div>
    <p>The Scheduler DO class is relatively simple. Using <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>Durable Objects Alarms</u></a>, it is triggered every second to pull up a list of user build configurations that are ready to be started. For each of those builds, the Scheduler creates an instance of our other DO Class, the Build Buddy. </p>
            <pre><code>import { DurableObject } from 'cloudflare:workers'
import PQueue from 'p-queue'

export class BuildScheduler extends DurableObject {
    constructor(ctx: DurableObjectState, env: Bindings) {
        super(ctx, env)
    }

    // The DO alarm handler is called every second to fetch builds
    async alarm(): Promise&lt;void&gt; {
        // Set the alarm to run again in 1 second
        await this.updateAlarm()

        const builds = await this.getBuildsToSchedule()
        await this.scheduleBuilds(builds)
    }

    async scheduleBuilds(builds: Builds[]): Promise&lt;void&gt; {
        // Nothing to do if there are no builds to schedule
        if (builds.length === 0) return

        // Start up to 6 builds concurrently
        const queue = new PQueue({ concurrency: 6 })
        builds.forEach((build) =&gt;
            queue.add(async () =&gt; {
                // The BuildBuddy is another DO, described in the next section!
                const bb = getBuildBuddy(this.env, build.build_id)
                await bb.startBuild(build)
            })
        )

        await queue.onIdle()
    }

    async getBuildsToSchedule(): Promise&lt;Builds[]&gt; {
        // returns list of builds to schedule
    }

    async updateAlarm(): Promise&lt;void&gt; {
        // To avoid running multiple alarms at once, only set the next alarm
        // if there isn't already one set
        const existingAlarm = await this.ctx.storage.getAlarm()
        if (existingAlarm === null) {
            await this.ctx.storage.setAlarm(Date.now() + 1000)
        }
    }
}
</code></pre>
            
    <div>
      <h4>Build Buddy DO</h4>
      <a href="#build-buddy-do">
        
      </a>
    </div>
    <p>The Build Buddy DO class is what we use to manage each individual build from the time it begins initializing to when it is stopped. Every build has a buddy for life!</p><p>Upon creation of a Build Buddy DO instance, the Scheduler immediately calls <code>startBuild()</code> on the instance. The <code>startBuild()</code> method is responsible for fetching all metadata and secrets needed to run a build, and then kicking off a build on Cloudflare’s container platform (<a href="https://blog.cloudflare.com/container-platform-preview/"><u>not public yet, but coming soon</u></a>!). </p><p>As the containerized build runs, it reports back to the Build Buddy, sending status updates and logs for the Build Buddy to deal with. </p>
    <div>
      <h5>Build status</h5>
      <a href="#build-status">
        
      </a>
    </div>
    <p>As a build progresses, it reports its own status back to Build Buddy, sending updates when it has finished initializing, has completed successfully, or been terminated by the user. The Build Buddy is responsible for handling this incoming information from the containerized build, writing status updates to the database (via a Hyperdrive binding) so that users can see the status of their build in the Cloudflare dashboard.</p>
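<p>To make that concrete, a status update handler of this kind typically validates that an incoming update is still legal before persisting it. A minimal sketch, with hypothetical status names (the actual states Workers Builds uses aren't spelled out in this post):</p>

```typescript
// Hypothetical build states and allowed transitions -- the real status names
// used by Workers Builds are not public; these are illustrative.
type BuildStatus = 'queued' | 'initializing' | 'running' | 'success' | 'failed' | 'cancelled'

const transitions: Record<BuildStatus, BuildStatus[]> = {
  queued: ['initializing', 'cancelled'],
  initializing: ['running', 'failed', 'cancelled'],
  running: ['success', 'failed', 'cancelled'],
  success: [],   // terminal
  failed: [],    // terminal
  cancelled: [], // terminal
}

// Reject out-of-order updates before writing the new status to the database
export function canTransition(from: BuildStatus, to: BuildStatus): boolean {
  return transitions[from].includes(to)
}
```

<p>Guarding transitions this way means a late status report from a container (say, a 'running' update arriving after the user cancelled the build) can be dropped instead of overwriting the terminal state.</p>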
    <div>
      <h5>Build logs</h5>
      <a href="#build-logs">
        
      </a>
    </div>
    <p>A running build generates output logs that are important to store and surface to the user. The containerized build flushes these logs to the Build Buddy every second, which, in turn, stores those logs in <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>DO storage</u></a>. </p><p>The decision to use Durable Object storage here makes it easy to multicast logs to multiple clients efficiently, and allows us to use the same API for both streaming logs and viewing historical logs. </p><p>// build-management-app.ts</p>
            <pre><code>// We created a Hono app for use by our Client Worker API
const app = new Hono&lt;HonoContext&gt;()
    .post(
        '/api/builds/:build_uuid/status',
        async (c) =&gt; {
            const statusUpdate = await c.req.json()

            // fetch build metadata
            const build = ...

            const bb = getBuildBuddy(c.env, build.build_id)
            return await bb.handleStatusUpdate(build, statusUpdate)
        }
    )
    .post(
        '/api/builds/:build_uuid/logs',
        async (c) =&gt; {
            const logs = await c.req.json()

            // fetch build metadata
            const build = ...

            const bb = getBuildBuddy(c.env, build.build_id)
            return await bb.addLogLines(logs.lines)
        }
    )

export default {
    fetch: app.fetch
}
</code></pre>
            <p>// build-buddy.ts</p>
            <pre><code>import { DurableObject } from 'cloudflare:workers'

export class BuildBuddy extends DurableObject {
    compute: WorkersBuildsCompute

    constructor(ctx: DurableObjectState, env: Bindings) {
        super(ctx, env)
        this.compute = new ComputeClient({
            // ...
        })
    }

    // The Scheduler DO calls startBuild upon creating a BuildBuddy instance.
    // This is deliberately fire-and-forget so the Scheduler isn't blocked
    startBuild(build: Build): void {
        this.startBuildAsync(build)
    }

    async startBuildAsync(build: Build): Promise&lt;void&gt; {
        // fetch all metadata necessary for the build, including
        // environment variables, secrets, build tokens, repo credentials,
        // build image URI, etc.
        // ...

        // start a containerized build
        const computeBuild = await this.compute.createBuild({
            // ...
        })
    }

    // The Build Management worker calls handleStatusUpdate when it receives an update
    // from the containerized build
    async handleStatusUpdate(
        build: Build,
        buildStatusUpdatePayload: Payload
    ): Promise&lt;void&gt; {
        // Write status updates to the database
    }

    // The Build Management worker calls addLogLines when it receives flushed logs
    // from the containerized build
    async addLogLines(logs: LogLines): Promise&lt;void&gt; {
        // Generate nextLogsKey to store logs under
        await this.ctx.storage.put(nextLogsKey, logs)
    }

    // The Client Worker can call methods on a Build Buddy via RPC, using a service
    // binding to the Build Management Worker. The getLogs method retrieves logs for
    // the user, and the cancelBuild method forwards a request from the user to
    // terminate a build
    async getLogs(cursor?: string) {
        const decodedCursor = cursor !== undefined ? decodeLogsCursor(cursor) : undefined
        // read log lines from DO storage, starting at the cursor
        // ...
    }

    async cancelBuild(compute_id: string, build_id: number): Promise&lt;void&gt; {
        await this.terminateBuild(build_id, compute_id)
    }

    async terminateBuild(build_id: number, compute_id: string): Promise&lt;void&gt; {
        await this.compute.stopBuild(compute_id)
    }
}

export function getBuildBuddy(
    env: Pick&lt;Bindings, 'BUILD_BUDDY'&gt;,
    build_id: number
): DurableObjectStub&lt;BuildBuddy&gt; {
    const id = env.BUILD_BUDDY.idFromName(build_id.toString())
    return env.BUILD_BUDDY.get(id)
}
</code></pre>
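<p><code>decodeLogsCursor</code> above is left unimplemented; one plausible shape for such a cursor is an opaque token wrapping the last-read storage key. A sketch under that assumption (the real key and cursor formats are not public):</p>

```typescript
// Zero-padding the sequence number keeps the lexicographic order of DO storage
// keys aligned with insertion order, so listing keys after the cursor returns
// log chunks in the order they were written. (Illustrative sketch only.)
function logsKey(seq: number): string {
  return `logs:${seq.toString().padStart(10, '0')}`
}

// The cursor is just the last key the client has seen, base64url-encoded so it
// can travel safely in a URL
function encodeLogsCursor(lastKey: string): string {
  return Buffer.from(lastKey).toString('base64url')
}

function decodeLogsCursor(cursor: string): string {
  return Buffer.from(cursor, 'base64url').toString()
}
```

<p>The same cursor then works for both live streaming (poll again with the latest cursor) and paging through historical logs.</p>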
            
    <div>
      <h5>Alarms</h5>
      <a href="#alarms">
        
      </a>
    </div>
    <p>We utilize <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>alarms</u></a> in the Build Buddy to check that a build has a healthy startup and to terminate any builds that run longer than 20 minutes. </p>
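<p>The post doesn't show that handler, but the two checks it describes are easy to sketch as a pure function the alarm could call. The startup allowance below is an assumption; only the 20-minute ceiling comes from the post:</p>

```typescript
const STARTUP_DEADLINE_MS = 2 * 60 * 1000 // assumed startup allowance (illustrative)
const MAX_BUILD_MS = 20 * 60 * 1000       // the 20-minute ceiling mentioned above

type WatchdogVerdict = 'ok' | 'stuck-startup' | 'timed-out'

// Decide, on each alarm firing, whether the build should be terminated
function checkBuild(
  status: 'initializing' | 'running',
  startedAt: number,
  now: number
): WatchdogVerdict {
  const elapsed = now - startedAt
  if (status === 'initializing' && elapsed > STARTUP_DEADLINE_MS) return 'stuck-startup'
  if (elapsed > MAX_BUILD_MS) return 'timed-out'
  return 'ok'
}
```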
    <div>
      <h2>How else have we leveraged the Developer Platform?</h2>
      <a href="#how-else-have-we-leveraged-the-developer-platform">
        
      </a>
    </div>
    <p>Now that we've gone over the core behavior of the Workers Builds control plane, we'd like to detail a few other features of the Workers platform that we use to improve performance, monitor system health, and troubleshoot customer issues.</p>
    <div>
      <h3>Smart Placement and location hints</h3>
      <a href="#smart-placement-and-location-hints">
        
      </a>
    </div>
    <p>While our control plane is distributed in the sense that it can run across multiple datacenters, to reduce latency we want most requests to be served from locations close to our primary database in the western US.</p><p>While a build is running, Build Buddy, a Durable Object, is continuously writing status updates to our database. For the Client and the Build Management API Workers, we enabled <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a> with <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#provide-a-location-hint"><u>location hints</u></a> to ensure requests run close to the database.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hhFLpYizLZ6cyu4h80YL8/40af67320a6bf44f375d6055b2997a99/BLOG-2594_5.png" />
          </figure><p>This graph shows the reduction in round trip time (RTT) observed for our Worker with Smart Placement turned on. </p>
    <div>
      <h3>Workers Logs</h3>
      <a href="#workers-logs">
        
      </a>
    </div>
    <p>We needed a logging tool that allows us to aggregate and search across persistent operational logs from our Workers to assist with identifying and troubleshooting issues. We worked with the Workers Observability team to become early adopters of <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs"><u>Workers Logs</u></a>.</p><p>Workers Logs worked out of the box, giving us fast, easy-to-use logs directly within the Cloudflare dashboard. To improve our ability to search logs, we created a <a href="https://www.npmjs.com/package/workers-tagged-logger"><u>tagging library</u></a> that lets us easily add metadata, like the git tag of the deployed Worker a log comes from, so we can filter logs by release.</p><p>See a shortened example below for how we handle and log errors on the Client Worker. </p><p>// client-worker-app.ts</p>
            <pre><code>// The Client Worker is a RESTful API built with Hono
const app = new Hono&lt;HonoContext&gt;()
   // This is from the workers-tagged-logger library - first we register the logger
   .use(useWorkersLogger('client-worker-app'))
   // If any error happens during execution, this middleware will ensure we log the error
   .onError(useOnError)
   // routes
   .get(
       '/apiv4/builds',
       async (c) =&gt; {
           const { ids } = c.req.query()
           return await getBuildsByIds(c, ids)
       }
   )

function useOnError(e: Error, c: Context&lt;HonoContext&gt;): Response {
   // Set the release tag on the error
   logger.setTags({ release: c.env.GIT_TAG })

   // Write a log at level 'error'. Can also log 'info', 'log', 'warn', and 'debug'
   logger.error(e)
   return c.json(internal_error.toJSON(), internal_error.statusCode)
}
</code></pre>
            <p>This setup can lead to the following sample log message from our Workers Log dashboard. You can see the release tag is set on the log.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gfd725NCFNrhlDt3gK515/90138c159285e91535a986266918be13/BLOG-2594_6.png" />
          </figure><p>We can get a better sense of the impact of the error by adding filters to the Workers Logs view, as shown below. We are able to filter on any of the fields since we’re <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs#logging-structured-json-objects"><u>logging with structured JSON</u></a>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XqXINluVzzyHd4O17JsnZ/0ac714792a4d21623b4a875291ae0ad0/BLOG-2594_7.png" />
          </figure>
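<p>For reference, the kind of structured entry being filtered on might be produced like this; the field names here are illustrative rather than the exact schema shown in the dashboard above:</p>

```typescript
// Logging an object as JSON (rather than a plain string) is what lets Workers
// Logs index each field for filtering. Hypothetical field names.
function buildLogEntry(release: string, err: Error): string {
  return JSON.stringify({
    level: 'error',
    release, // e.g. the git tag set by the tagging library
    message: err.message,
  })
}

console.log(buildLogEntry('v1.2.3', new Error('database timeout')))
// → {"level":"error","release":"v1.2.3","message":"database timeout"}
```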
    <div>
      <h3>R2</h3>
      <a href="#r2">
        
      </a>
    </div>
    <p>Coming soon to Workers Builds is build caching, used to store artifacts of a build for subsequent builds to reuse, such as package dependencies and build outputs. Build caching can speed up customer builds by avoiding the need to redownload dependencies from NPM or to rebuild projects from scratch. The cache itself will be backed by <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2 storage</a>. </p>
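<p>One common way to key such a cache is to hash the dependency lockfile, so the cache is invalidated exactly when dependencies change. This is an illustrative sketch, not necessarily the scheme Workers Builds will ship:</p>

```typescript
import { createHash } from 'node:crypto'

// Derive a cache key from the lockfile contents: the same dependencies always
// map to the same key, and any change to them produces a fresh key
function cacheKey(workerName: string, lockfileContents: string): string {
  const digest = createHash('sha256').update(lockfileContents).digest('hex')
  return `${workerName}/deps-${digest.slice(0, 16)}`
}
```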
    <div>
      <h3>Testing</h3>
      <a href="#testing">
        
      </a>
    </div>
    <p>We were able to build up a great testing story using <a href="https://blog.cloudflare.com/workers-vitest-integration/"><u>Vitest and workerd</u></a> — unit tests, cross-worker integration tests, the works. In the example below, we make use of the <code>runInDurableObject</code> stub from <code>cloudflare:test</code> to test instance methods on the Scheduler DO directly.</p><p>// scheduler.spec.ts</p>
            <pre><code>import { env, runInDurableObject } from 'cloudflare:test'
import { expect, test } from 'vitest'
import { BuildScheduler } from './scheduler'

test('getBuildsToSchedule() runs a queued build', async () =&gt; {
   // Our test harness creates a single build for our scheduler to pick up
   const { build } = await harness.createBuild()

   // We create a scheduler DO instance
   const id = env.BUILD_SCHEDULER.idFromName(crypto.randomUUID())
   const stub = env.BUILD_SCHEDULER.get(id)
   await runInDurableObject(stub, async (instance: BuildScheduler) =&gt; {
       expect(instance).toBeInstanceOf(BuildScheduler)

       // We check that the scheduler picks up 1 build
       const builds = await instance.getBuildsToSchedule()
       expect(builds.length).toBe(1)

       // We start the build, which should mark it as running
       await instance.scheduleBuilds(builds)
   })

   // Check that there are no more builds to schedule
   const queuedBuilds = ...
   expect(queuedBuilds.length).toBe(0)
})
</code></pre>
            <p>We use <code>SELF.fetch()</code> from <code>cloudflare:test</code> to run integration tests on our Client Worker, as shown below. This integration test covers our Hono endpoint and database queries made by the Client Worker in retrieving the metadata of a build.</p><p>// builds_api.test.ts</p>
            <pre><code>import { env, SELF } from 'cloudflare:test'
import { expect, it } from 'vitest'

it('correctly selects a single build', async () =&gt; {
   // Our test harness creates a randomized build to test with
   const { build } = await harness.createBuild()

   // We send a request to the Client Worker itself to fetch the build metadata
   const getBuild = await SELF.fetch(
       `https://example.com/builds/${build.build_uuid}`,
       {
           method: 'GET',
           headers: new Headers({
               Authorization: `Bearer JWT`,
               'content-type': 'application/json',
           }),
       }
   )

   // We expect to receive a 200 response from our request and for the
   // build metadata returned to match that of the random build that we created
   expect(getBuild.status).toBe(200)
   const getBuildV4Resp = await getBuild.json()
   const buildResp = getBuildV4Resp.result
   expect(buildResp).toBeTruthy()
   expect(buildResp).toEqual(build)
})
</code></pre>
            <p>These tests run on the same runtime that Workers run on in production, meaning we have greater confidence that any code changes will behave as expected when they go live. </p>
    <div>
      <h3>Analytics</h3>
      <a href="#analytics">
        
      </a>
    </div>
    <p>We use the technology underlying the <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a> to collect all of the metrics for our system. We set up <a href="https://developers.cloudflare.com/analytics/analytics-engine/grafana/"><u>Grafana</u></a> dashboards to display these metrics. </p>
    <div>
      <h3>JavaScript-native RPC</h3>
      <a href="#javascript-native-rpc">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript-native RPC</u></a> was added to Workers in April of 2024, and it’s pretty magical. In the scheduler code example above, we call <code>startBuild()</code> on the BuildBuddy DO from the Scheduler DO. Without RPC, we would need to stand up routes on the BuildBuddy <code>fetch()</code> handler for the Scheduler to trigger with a fetch request. With RPC, there is almost no boilerplate — all we need to do is call a method on a class. </p>
            <pre><code>const bb = getBuildBuddy(this.env, build.build_id)

// Starting a build without RPC 😢
await bb.fetch('http://do/api/start_build', {
    method: 'POST',
    body: JSON.stringify(build),
})

// Starting a build with RPC 😸
await bb.startBuild(build)
</code></pre>
            
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>By using Workers and Durable Objects, we were able to build a complex and distributed system that is easy to understand and is easily scalable. </p><p>It’s been a blast for our team to build on top of the very platform that we work on, something that would have been much harder to achieve on Workers just a few years ago. We believe in being Customer Zero for our own products — to identify pain points firsthand and to continuously improve the developer experience by applying them to our own use cases. It was fulfilling to have our needs as developers met by other teams and then see those tools quickly become available to the rest of the world — we were collaborators and internal testers for Workers Logs and private network support for Hyperdrive (both released on Birthday Week), and the soon to be released container platform.</p><p>Opportunities to build complex applications on the Developer Platform have increased in recent years as the platform has matured and expanded product offerings for more use cases. We hope that Workers Builds will be yet another tool in the Workers toolbox that enables developers to spend less time thinking about configuration and more time writing code. </p><p>Want to try it out? Check out the <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>docs</u></a> to learn more about how to deploy your first project with Workers Builds.</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">6uKjGQLUKCb33wGIcOQE1Y</guid>
            <dc:creator>Serena Shah-Simpson</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
            <dc:creator>Natalie Rogers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Smart Placement speeds up applications by moving code close to your backend — no config needed]]></title>
            <link>https://blog.cloudflare.com/announcing-workers-smart-placement/</link>
            <pubDate>Tue, 16 May 2023 13:00:46 GMT</pubDate>
            <description><![CDATA[ Smart Placement automatically places your workloads in an optimal location that minimizes latency and speeds up your applications! ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68NfZl5AwLPrk573wmR7Id/e605ccc58d1612b69beef233229ae79f/image2-14.png" />
            
            </figure><p>We’ve all experienced the frustrations of slow loading websites, or an app that seems to stall when it needs to call an API for an update. Anything less than instant and your mind wanders to something else...</p><p>One way to make things fast is to bring resources as close to the user as possible — this is what Cloudflare has been doing with compute by running within milliseconds of most of the world’s population. But, counterintuitive as it may seem, sometimes bringing compute closer to the user can actually slow applications down. If your application needs to connect to APIs, databases or other resources that aren’t located near the end user then it can be more performant to run the application near the resources instead of the user.</p><p>So today we’re excited to announce Smart Placement for Workers and Pages Functions, making every interaction as fast as possible. With Smart Placement, Cloudflare is taking serverless computing to the Supercloud by moving compute resources to optimal locations in order to speed up applications. The best part – it’s completely automatic, without any extra input (like the dreaded “region”) needed.</p><p>Smart Placement is available now, in open beta, to all Workers and Pages customers!</p><p><a href="https://smart-placement-demo.pages.dev/">Check out our demo on how Smart Placement works!</a></p><p></p>
    <div>
      <h2>The serverless shift</h2>
      <a href="#the-serverless-shift">
        
      </a>
    </div>
    <p>Cloudflare’s anycast network is built to process requests <i>instantly and close to the user</i>. As a developer, that’s what makes Cloudflare Workers, our serverless compute offering so compelling. Competitors are bounded by “regions” while Workers run everywhere — hence we have one region: Earth. Requests handled entirely by Workers can be processed right then and there, without ever having to hit an origin server.</p><p>While this concept of serverless was originally considered to be for lightweight tasks, serverless computing has been seeing a shift in recent years. It’s being used to replace traditional architecture, which relies on origin servers and self-managed infrastructure, instead of simply augmenting it. We’re seeing more and more of these use cases with Workers and Pages users.</p>
    <div>
      <h3>Serverless needs state</h3>
      <a href="#serverless-needs-state">
        
      </a>
    </div>
    <p>With the shift to going serverless and building entire applications on Workers comes a need for data. Storing information about previous actions or events lets you build personalized, interactive applications. Say you need to create user profiles, store which page a user left off at, which SKUs a user has in their cart – all of these are mapped to data points used to maintain state. Backend services like relational databases, key-value stores, blob storage, and APIs all let you build stateful applications.</p>
    <div>
      <h3>Cloudflare compute + storage: a powerful duo</h3>
      <a href="#cloudflare-compute-storage-a-powerful-duo">
        
      </a>
    </div>
    <p>We have our own growing suite of storage offerings: Workers KV, Durable Objects, D1, <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>. As we’re maturing our data products, we think deeply about their interactions with Workers so that you don’t have to! For example, another approach that has better performance in some cases is moving storage, rather than compute, close to users. If you’re using Durable Objects to create a real-time game, we could move the Durable Objects to minimize latency for all users.</p><p>Our goal for the future is that you set <code>mode = "smart"</code> and we evaluate the optimal placement of all your resources with no additional configuration needed.</p>
    <div>
      <h3>Cloudflare compute + ${backendService}</h3>
      <a href="#cloudflare-compute-backendservice">
        
      </a>
    </div>
    <p>Today, the primary use case for Smart Placement is when you’re using non-Cloudflare services like external databases or third party APIs for your applications.</p><p>Many backend services, whether they’re self-hosted or managed services, are centralized, meaning that data is stored and managed in a single location. Your users are global and Workers are global, but your backend is centralized.</p><p>If your code makes multiple requests to your backend services, they could cross the globe multiple times, taking a big hit on performance. Some services offer data replication and caching, which help to improve performance but come with trade-offs, like weaker data consistency and higher costs, that should be weighed against your use case.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Xh0tIBnMeZp7z1mDeonqd/2d55124ff2c831d964eb36a98852a968/map.png" />
            
            </figure><p>The Cloudflare network is <a href="https://www.cloudflare.com/network/">~50ms from 95% of the world’s connected population</a>. Turning this on its head, we’re also very close to your backend services.</p>
    <div>
      <h2>Application performance is user experience</h2>
      <a href="#application-performance-is-user-experience">
        
      </a>
    </div>
    <p>Let’s understand how moving compute close to your backend services could decrease application latency by walking through an example:</p><p>Say you have a user in Sydney, Australia who’s accessing an application running on Workers. This application makes three round trips to a database located in Frankfurt, Germany in order to serve the user’s request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EGjgjjCqADFBQGq5OXVFC/cfd95273a6e9a7afbe09ab27e3c90104/download--1--6.png" />
            
            </figure><p>Intuitively, you can guess that the bottleneck is going to be the time that it takes the Worker to perform multiple round trips to your database. Instead of the Worker being invoked close to the user, what if it was invoked in a data center closest to the database instead?</p>
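<p>A quick back-of-the-envelope calculation shows why. The round-trip times below are illustrative assumptions (a long Sydney ↔ Frankfurt hop versus short local hops), not measured values from this test:</p>

```javascript
// Back-of-the-envelope latency comparison for the Sydney → Frankfurt example.
// All round-trip times are illustrative assumptions, not measurements.
const userToWorkerRtt = 5;        // ms: user to a nearby Sydney data center
const sydneyToFrankfurtRtt = 280; // ms: assumed Sydney ↔ Frankfurt round trip
const workerToDbRttNearDb = 5;    // ms: Worker running next to the database
const dbRoundTrips = 3;           // the example makes three DB round trips

// Default placement: Worker runs near the user, so every DB query
// crosses the globe.
const defaultDuration = userToWorkerRtt + dbRoundTrips * sydneyToFrankfurtRtt;

// Smart Placement: one long hop to reach the Worker, then three short DB trips.
const smartDuration = sydneyToFrankfurtRtt + dbRoundTrips * workerToDbRttNearDb;

console.log(defaultDuration); // 845 (ms)
console.log(smartDuration);   // 295 (ms)
```

<p>With these assumed numbers, the multiple round trips dominate the request duration, and paying the long hop only once is the clear win.</p>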
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4D4IOJIAlHuGYce5DqjGky/09d291252b813c1981cac9ef19cb1fd7/wAAAABJRU5ErkJggg__" />
            
            </figure><p>Let’s put this to the test.</p><p>We measured the request duration for a Worker without Smart Placement and compared it to one with Smart Placement enabled. For both tests, we sent 3,500 requests from Sydney to a Worker which does three round trips to an <a href="https://upstash.com/">Upstash</a> instance (free-tier) located in eu-central-1 (Frankfurt).</p>
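<p>A measurement like the one above can be sketched in a few lines. This is a simplified stand-in for whatever harness was actually used, and the URL is a placeholder:</p>

```javascript
// Percentile over a pre-collected list of request durations (ms),
// using nearest-rank on the sorted values.
function percentile(durations, p) {
  const sorted = [...durations].sort((a, b) => a - b);
  return sorted[Math.floor((p / 100) * (sorted.length - 1))];
}

// Sketch of the benchmark: time n sequential requests to a Worker
// endpoint and summarize the durations. "https://example.workers.dev/"
// is a placeholder, not the endpoint used in this test.
async function benchmark(url, n) {
  const durations = [];
  for (let i = 0; i < n; i++) {
    const start = Date.now();
    await fetch(url);
    durations.push(Date.now() - start);
  }
  return {
    p50: percentile(durations, 50),
    p90: percentile(durations, 90),
    p99: percentile(durations, 99),
  };
}

// Example of the summary step on some sample durations:
console.log(percentile([120, 95, 110, 400, 130, 105, 98, 900, 115, 102], 50));
```

<p>Comparing the percentiles of two such runs, one with Smart Placement enabled and one without, gives the kind of distribution shown below.</p>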
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/54nvh0FI2RcDh1gQ9ea4Tq/6a0ca6a7ea678cd965c79b25d7b90874/image3-7.png" />
            
            </figure><p>The results are clear! In this example, moving the Worker close to the backend <b>improved</b> <b>application performance by 4-8x</b>.</p>
    <div>
      <h2>Network decisions shouldn’t be human decisions</h2>
      <a href="#network-decisions-shouldnt-be-human-decisions">
        
      </a>
    </div>
    <p>As a developer, you should focus on what you do best – building applications – without needing to worry about the network decisions that will make your application faster.</p><p>Cloudflare has a unique vantage point: our network gathers intelligence around the optimal paths between users, Cloudflare data centers and back-end servers – we have lots of experience in this area with <a href="/argo/">Argo Smart Routing</a>. Smart Placement takes these factors into consideration to automatically place your Worker in the best spot to minimize overall request duration.</p><p>So, how does Smart Placement work?</p><p>Smart Placement can be enabled on a per-Worker basis under the “Settings” tab or in your wrangler.toml file:</p>
            <pre><code>[placement]
mode = "smart"</code></pre>
            <p>Once you enable Smart Placement on your Worker or Pages Function, the Smart Placement algorithm analyzes fetch requests (also known as subrequests) that your Worker is making in real time. It then compares these to latency data aggregated by our network. If we detect that on average your Worker makes more than one subrequest to a back-end resource, then your Worker will automatically get invoked from the optimal data center!</p><p>There are some back-end services that, for good reason, are not considered by the Smart Placement algorithm:</p><ul><li><p>Globally distributed services: If the services that your Worker communicates with are geo-distributed in many regions, Smart Placement isn’t a good fit. We automatically rule these out of the Smart Placement optimization.</p></li><li><p>Analytics or logging services: Requests to analytics or logging services don’t need to be in the critical path of your application. <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch-event/?ref=blog.cloudflare.com#waituntil"><code>waitUntil()</code></a> should be used so that the response back to users isn’t blocked when instrumenting your code. Since <code>waitUntil()</code> doesn’t impact the request duration from a user’s perspective, we automatically rule analytics/logging services out of the Smart Placement optimization.</p></li></ul><p>Refer to our <a href="https://developers.cloudflare.com/workers/platform/smart-placement/#supported-backends">documentation</a> for a list of services not considered by the Smart Placement algorithm.</p><p>Once Smart Placement kicks in, you’ll be able to see a new “Request Duration” tab on your Worker. We route 1% of requests without Smart Placement enabled so that you can see its impact on request duration.</p>
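<p>The decision logic described above can be sketched as a toy model. Every name, threshold, and data shape here is hypothetical – the real algorithm works on latency telemetry aggregated across Cloudflare’s network, not a simple filter like this:</p>

```javascript
// Toy model of the placement decision described above. All names and
// thresholds are hypothetical, not Cloudflare's actual implementation.
function chooseInvocationLocation(observed) {
  // Rule out backends the text says are excluded from the optimization:
  // globally distributed services and analytics/logging services.
  const candidates = observed.backends.filter(
    (b) => !b.isGloballyDistributed && !b.isLoggingOrAnalytics
  );

  // Only relocate if the Worker averages more than one subrequest to
  // some eligible backend; otherwise keep invoking it near the user.
  const dominant = candidates.find((b) => b.avgSubrequestsPerRequest > 1);
  return dominant ? dominant.location : observed.userLocation;
}

// Example: three DB round trips per request to a Frankfurt backend, plus
// an analytics call (ideally behind waitUntil()) that is excluded.
const placement = chooseInvocationLocation({
  userLocation: "SYD",
  backends: [
    { host: "db.example.com", avgSubrequestsPerRequest: 3,
      isGloballyDistributed: false, isLoggingOrAnalytics: false, location: "FRA" },
    { host: "logs.example.com", avgSubrequestsPerRequest: 1,
      isGloballyDistributed: false, isLoggingOrAnalytics: true, location: "IAD" },
  ],
});
console.log(placement); // "FRA"
```

<p>A Worker with no eligible multi-subrequest backend simply stays near the user, which matches the behavior described above.</p>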
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IicbhnuU143f8T6EJc9s4/2f24b79cc4b019123b1e7ed2df4cd02a/download-9.png" />
            
            </figure><p>And yes, it is really that easy!</p><p>Try out Smart Placement by checking out our <a href="https://smart-placement-demo.pages.dev/">demo</a> (it’s a lot of fun to play with!). To learn more, visit our <a href="https://developers.cloudflare.com/workers/platform/smart-placement/">developer documentation</a>.</p>
    <div>
      <h2>What’s next for Smart Placement?</h2>
      <a href="#whats-next-for-smart-placement">
        
      </a>
    </div>
    <p>We’re only getting started! We have lots of ideas for how we can improve Smart Placement:</p><ul><li><p>Support for calculating the optimal location when an application uses multiple back-ends</p></li><li><p>Fine-tuned placement (e.g., if your Worker uses different back-ends depending on the path, we could calculate the optimal placement per path instead of per Worker)</p></li><li><p>Support for TCP-based connections</p></li></ul><p>We would like to hear from you! If you have feedback or feature requests, reach out through the <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developer Discord</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7AkCd01DEJfXbjEGoK8Uzq</guid>
            <dc:creator>Michael Hart</dc:creator>
            <dc:creator>Serena Shah-Simpson</dc:creator>
            <dc:creator>Tanushree Sharma</dc:creator>
        </item>
    </channel>
</rss>