
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies they use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 11:13:59 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How Workers VPC Services connects to your regional private networks from anywhere in the world]]></title>
            <link>https://blog.cloudflare.com/workers-vpc-open-beta/</link>
            <pubDate>Wed, 05 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Workers VPC Services enter open beta today. We look under the hood to see how Workers VPC connects your globally deployed Workers to your regional private networks over Cloudflare's global network, while abstracting cross-cloud networking complexity. ]]></description>
            <content:encoded><![CDATA[ <p>In April, we shared our vision for a <a href="https://blog.cloudflare.com/workers-virtual-private-cloud/"><u>global virtual private cloud on Cloudflare</u></a>, a way to unlock your applications from regionally constrained clouds and on-premise networks, enabling you to build truly cross-cloud applications.</p><p>Today, we’re announcing the first milestone of our Workers VPC initiative: VPC Services. VPC Services allow you to connect to your APIs, containers, virtual machines, serverless functions, databases and other services in regional private networks via <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/"><u>Cloudflare Tunnels</u></a> from your <a href="https://workers.cloudflare.com/"><u>Workers</u></a> running anywhere in the world. </p><p>Once you set up a Tunnel in your desired network, you can register each service that you want to expose to Workers by configuring its host or IP address. Then, you can access the VPC Service as you would any other <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Workers service binding</u></a> — requests will automatically route to the VPC Service over Cloudflare’s network, regardless of where your Worker is executing:</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {
    // Perform application logic in Workers here	

    // Call an external API running in ECS on AWS using the binding
    const response = await env.AWS_VPC_ECS_API.fetch("http://internal-host.com");

    // Additional application logic in Workers
    return new Response();
  },
};</code></pre>
            <p>Workers VPC is now available to everyone using Workers, at no additional cost during the beta, as is Cloudflare Tunnel. <a href="https://dash.cloudflare.com/?to=/:account/workers/vpc/services"><u>Try it out now.</u></a> And read on to learn more about how it works under the hood.</p>
    <div>
      <h2>Connecting the networks you trust, securely</h2>
      <a href="#connecting-the-networks-you-trust-securely">
        
      </a>
    </div>
    <p>Your applications span multiple networks, whether they are on-premise or in external clouds. But it’s been difficult to connect from Workers to your APIs and databases locked behind private networks. </p><p>We have <a href="https://blog.cloudflare.com/workers-virtual-private-cloud/"><u>previously described</u></a> how traditional virtual private clouds and networks entrench you in traditional clouds. While they provide workload isolation and security, traditional virtual private clouds make it difficult to build across clouds, access your own applications, and choose the right technology for your stack.</p><p>A significant part of the cloud lock-in is the inherent complexity of building secure, distributed workloads. VPC peering requires you to configure routing tables, security groups and network access-control lists, since it relies on cross-cloud networking to ensure connectivity. In many organizations, this means weeks of discussions across many teams to get approvals. This lock-in is also reflected in the solutions invented to wrangle this complexity: each cloud provider has its own bespoke version of a “Private Link” to facilitate cross-network connectivity, further restricting you to that cloud and the vendors that have integrated with it.</p><p>With Workers VPC, we’re simplifying that dramatically. You set up your Cloudflare Tunnel once, with the necessary permissions to access your private network. Then, you can configure Workers VPC Services with the tunnel and the hostname (or IP address and port) of the service you want to expose to Workers. Any request made to that VPC Service will use this configuration to route to the given service within the network.</p>
            <pre><code>{
  "type": "http",
  "name": "vpc-service-name",
  "http_port": 80,
  "https_port": 443,
  "host": {
    "hostname": "internally-resolvable-hostname.com",
    "resolver_network": {
      "tunnel_id": "0191dce4-9ab4-7fce-b660-8e5dec5172da"
    }
  }
}</code></pre>
            <p>This ensures that, once represented as a Workers VPC Service, a service in your private network is secured in the same way as other Cloudflare resources, using the Workers binding model. Let’s take a look at a simple VPC Service binding example:</p>
            <pre><code>{
  "name": "WORKER-NAME",
  "main": "./src/index.js",
  "vpc_services": [
    {
      "binding": "AWS_VPC2_ECS_API",
      "service_id": "5634563546"
    }
  ]
}</code></pre>
            <p>Like other Workers bindings, when you deploy a Worker project that connects to a VPC Service, the access permissions are verified at deploy time to ensure that the Worker has access to the service in question. And once deployed, the Worker can use the VPC Service binding to make requests to that VPC Service — and only that service within the network. </p><p>That’s significant: instead of exposing the entire network to the Worker, only the specific VPC Service can be accessed by it. This provides more explicit and transparent service access control than traditional networks and access-control lists do.</p><p>This is a key factor in the design of Workers bindings: de facto security with simpler management, making Workers immune to Server-Side Request Forgery (SSRF) attacks. <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/#security"><u>We’ve gone deep on the binding security model in the past</u></a>, and it becomes that much more critical when accessing your private networks. </p><p>Notably, the binding model is also important when considering what Workers are: scripts running on Cloudflare’s global network. They are not, in contrast to traditional clouds, individual machines with IP addresses, and do not exist within networks. Bindings provide secure access to other resources within your Cloudflare account – and the same applies to Workers VPC Services.</p>
    <div>
      <h2>A peek under the hood</h2>
      <a href="#a-peek-under-the-hood">
        
      </a>
    </div>
    <p>So how do VPC Services and their bindings route network requests from Workers anywhere on Cloudflare’s global network to regional networks using tunnels? Let’s look at the lifecycle of a sample HTTP request made through a VPC Service binding’s dedicated <b>fetch()</b> method, represented here:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4iUTiZjmbm2ujppLugfxJo/4db92fdf8549c239f52d8636e2589baf/image4.png" />
          </figure><p>It all starts in the Worker code, where the <b>fetch()</b> method of the desired VPC Service is called with a standard JavaScript <a href="https://developer.mozilla.org/en-US/docs/Web/API/Request"><u>Request</u></a> (Step 1). The Workers runtime will use a <a href="https://capnproto.org/"><u>Cap’n Proto</u></a> remote procedure call to send the original HTTP request alongside additional context, as it does for many other Workers bindings. </p><p>The Binding Worker of the VPC Service System receives the HTTP request along with the binding context, in this case the Service ID of the VPC Service being invoked. The Binding Worker will proxy this information to the Iris Service over an HTTP CONNECT connection, a standard pattern across Cloudflare’s bindings that keeps the logic for connecting to Cloudflare’s edge services in Worker code rather than in the Workers runtime itself (Step 2). </p><p>The Iris Service is the main service for Workers VPC. Its responsibility is to accept requests for a VPC Service and route them to the network in which your VPC Service is located. It does this by integrating with <a href="https://blog.cloudflare.com/extending-local-traffic-management-load-balancing-to-layer-4-with-spectrum/#how-we-enabled-spectrum-to-support-private-networks"><u>Apollo</u></a>, an internal service of <a href="https://developers.cloudflare.com/cloudflare-one/?cf_target_id=2026081E85C775AF31266A26CE7F3D4D"><u>Cloudflare One</u></a>. Apollo provides a unified interface that abstracts away the complexity of securely connecting to networks and tunnels, <a href="https://blog.cloudflare.com/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/"><u>across various layers of networking</u></a>. </p><p>To integrate with Apollo, Iris must complete two tasks. First, Iris will parse the VPC Service ID from the metadata and fetch the configuration of the tunnel associated with it from our configuration store.
This includes the tunnel ID and tunnel type (Step 3), the information Iris needs to send the original request to the right tunnel.</p><p>Second, Iris will create the UDP datagrams containing DNS questions for the A and AAAA records of the VPC Service’s hostname. These datagrams will be sent first, via Apollo. Once DNS resolution is completed, the original request is sent along, with the resolved IP address and port (Step 4). That means that steps 4 through 7 happen in sequence twice for the first request: once for DNS resolution and a second time for the original HTTP request. Subsequent requests benefit from Iris’ caching of DNS resolution information, minimizing request latency.</p><p>Apollo then receives the metadata of the Cloudflare Tunnel that needs to be accessed, along with the DNS resolution UDP datagrams or the HTTP request TCP packets. Using the tunnel ID, it determines which datacenter is connected to the Cloudflare Tunnel. This datacenter is in a region close to the Cloudflare Tunnel, so Apollo routes the DNS resolution messages and the original request to the Tunnel Connector Service running in that datacenter (Step 5).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6eXnv33qvTvGRRNGqS9ywj/99e57beeaa32de0724c6c9f396ab3b17/image3.png" />
          </figure><p>The Tunnel Connector Service is responsible for exposing the Cloudflare Tunnel to the rest of Cloudflare’s network. It relays the DNS resolution questions, and subsequently the original request, to the tunnel over the QUIC protocol (Step 6).</p><p>Finally, the Cloudflare Tunnel will send the DNS resolution questions to the DNS resolver of the network it belongs to. It will then send the original HTTP request from its own IP address to the destination IP and port (Step 7). The response is then relayed back to the original Worker along the same path, from the datacenter closest to the tunnel to the Cloudflare datacenter executing the Worker request.</p>
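<p>The resolve-once-then-cache behavior described above can be sketched as follows. This is a hypothetical illustration, not Cloudflare’s actual implementation: the injected resolver function stands in for the tunnel-backed DNS path, and entries expire after a TTL so stale addresses get re-resolved.</p>

```javascript
// Hypothetical sketch (not Cloudflare's actual code) of per-service DNS caching:
// the first request for a hostname pays for a DNS round-trip through the tunnel,
// and subsequent requests reuse the cached answer until it expires.
class DnsCache {
  // `resolve` stands in for tunnel-backed resolution: async (hostname) => ip
  constructor(resolve, ttlMs = 30_000) {
    this.resolve = resolve;
    this.ttlMs = ttlMs;
    this.entries = new Map(); // hostname -> { ip, expiresAt }
  }

  async lookup(hostname, now = Date.now()) {
    const cached = this.entries.get(hostname);
    if (cached && cached.expiresAt > now) {
      return cached.ip; // cache hit: skip the DNS leg of steps 4-7 entirely
    }
    const ip = await this.resolve(hostname); // cache miss: resolve via the tunnel
    this.entries.set(hostname, { ip, expiresAt: now + this.ttlMs });
    return ip;
  }
}
```

<p>With a cache like this, only the very first request to a VPC Service pays the extra resolution round-trip; everything after rides on the cached answer.</p>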
    <div>
      <h2>What VPC Service allows you to build</h2>
      <a href="#what-vpc-service-allows-you-to-build">
        
      </a>
    </div>
    <p>This unlocks a whole new tranche of applications you can build on Cloudflare. For years, Workers have excelled at the edge, but they've largely been kept "outside" your core infrastructure. They could only call public endpoints, limiting their ability to interact with the most critical parts of your stack—like a private accounts API or an internal inventory database. Now, with VPC Services, Workers can securely access those private APIs, databases, and services, fundamentally changing what's possible.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/DDDzgVtHtK92DZ4LwKhLI/904fc30fcab4797fd6ee263f09b85ab1/image2.png" />
          </figure><p>This immediately enables true cross-cloud applications that span Cloudflare Workers and any other cloud like AWS, GCP or Azure. We’ve seen many customers adopt this pattern over the course of our private beta, establishing private connectivity between their external clouds and Cloudflare Workers. We’ve even done so ourselves, connecting our Workers to Kubernetes services in our core datacenters to power the control plane APIs for many of our services. Now, you can build the same powerful, distributed architectures, using Workers for global scale while keeping stateful backends in the network you already trust.</p><p>It also means you can connect to your on-premise networks from Workers, allowing you to modernize legacy applications with the performance and infinite scale of Workers. More interesting still are some emerging use cases for developer workflows. We’ve seen developers run <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/"><code><u>cloudflared</u></code></a> on their laptops to connect a deployed Worker back to their local machine for real-time debugging. The full flexibility of Cloudflare Tunnels is now a programmable primitive accessible directly from your Worker, opening up a world of possibilities.</p>
    <div>
      <h2>The path ahead of us</h2>
      <a href="#the-path-ahead-of-us">
        
      </a>
    </div>
    <p>VPC Services are the first milestone within the larger Workers VPC initiative, but we’re just getting started. Our goal is to make connecting to any service and any network, anywhere in the world, a seamless part of the Workers experience. Here’s what we’re working on next:</p><p><b>Deeper network integration</b>. Starting with Cloudflare Tunnels was a deliberate choice: they are highly available, flexible, and familiar, making them the perfect foundation to build upon. To provide more options for enterprise networking, we’re going to add support for standard IPsec tunnels, Cloudflare Network Interconnect (CNI), and AWS Transit Gateway, giving you and your teams more choices and potential optimizations. Crucially, these connections will also become truly bidirectional, allowing your private services to initiate connections back to Cloudflare resources, such as pushing events to Queues or fetching from R2.</p><p><b>Expanded protocol and service support. </b>The next step beyond HTTP is enabling access to TCP services, starting with an integration with Hyperdrive: Hyperdrive’s existing support for private databases will be simplified with VPC Services configuration, avoiding the need to add Cloudflare Access and manage security tokens. This creates a more native experience, complete with Hyperdrive’s powerful connection pooling. Following this, we will add broader support for raw TCP connections, unlocking direct connectivity to services like Redis caches and message queues from <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><code><u>Workers connect()</u></code></a>.</p><p><b>Ecosystem compatibility. </b>We want to make connecting to a private service feel as natural as connecting to a public one. 
To do so, we will provide a unique autogenerated hostname for each Workers VPC Service, similar to <a href="https://developers.cloudflare.com/hyperdrive/get-started/#write-a-worker"><u>Hyperdrive’s connection strings</u></a>. This will make it easier to use Workers VPC with existing libraries and object-relational mapping (ORM) tools that may require a hostname (e.g., in a global <code>fetch()</code> call or a MongoDB connection string). The Workers VPC Service hostname will automatically resolve and route to the correct VPC Service, just as the binding’s <code>fetch()</code> method does.</p>
    <div>
      <h2>Get started with Workers VPC</h2>
      <a href="#get-started-with-workers-vpc">
        
      </a>
    </div>
    <p>We’re excited to release Workers VPC Services into open beta today. We’ve spent months building and testing our first milestone of private network access from Workers, and we’ve refined it further based on feedback from both internal teams and customers during the closed beta. </p><p><b>Now, we’re looking forward to enabling everyone to build cross-cloud apps on Workers with Workers VPC, available for free during the open beta.</b> With Workers VPC, you can bring your apps on private networks to region Earth, closer to your users and available to Workers across the globe.</p><p><a href="https://dash.cloudflare.com/?to=/:account/workers/vpc/services"><b><u>Get started with Workers VPC Services for free now.</u></b></a></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers VPC]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[Hybrid Cloud]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[VPC]]></category>
            <category><![CDATA[Private Network]]></category>
            <guid isPermaLink="false">3nRyPdIVogbDGSeUZgRY41</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Matt Alonso</dc:creator>
            <dc:creator>Eric Falcão</dc:creator>
        </item>
        <item>
            <title><![CDATA[Partnering to make full-stack fast: deploy PlanetScale databases directly from Workers]]></title>
            <link>https://blog.cloudflare.com/planetscale-postgres-workers/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve teamed up with PlanetScale to make shipping full-stack applications on Cloudflare Workers even easier.  ]]></description>
            <content:encoded><![CDATA[ <p>We’re not burying the lede on this one: you can now connect <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a> to your PlanetScale databases directly and ship full-stack applications backed by Postgres or MySQL. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3tcLGobPxPIHoDYEiGcY0X/d970a4a6b8a9e6ebc7d06ab57b168007/Frame_1321317798__1_.png" />
          </figure><p>We’ve teamed up with <a href="https://planetscale.com/"><u>PlanetScale</u></a> because we wanted to partner with a database provider that we could confidently recommend to our users: one that shares our obsession with performance, reliability and developer experience. These are all critical factors for any development team building a serious application. </p><p>Now, when connecting to PlanetScale databases, your connections are automatically configured for optimal performance with <a href="https://www.cloudflare.com/developer-platform/products/hyperdrive/"><u>Hyperdrive</u></a>, ensuring that you have the fastest access from your Workers to your databases, regardless of where your Workers are running.</p>
    <div>
      <h3>Building full-stack</h3>
      <a href="#building-full-stack">
        
      </a>
    </div>
    <p>As Workers has matured into a full-stack platform, we’ve introduced more options to facilitate your connectivity to data. With <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, we made it easy to store configuration and cache unstructured data on the edge. With <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, we made it possible to build multi-tenant apps with simple, isolated SQL databases. And with Hyperdrive, we made connecting to external databases fast and scalable from Workers.</p><p>Today, we’re introducing a new choice for building on Cloudflare: Postgres and MySQL PlanetScale databases, directly accessible from within the Cloudflare dashboard. Link your Cloudflare and PlanetScale accounts, stop manually copying API keys back-and-forth, and connect Workers to any of your PlanetScale databases (production or otherwise!).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71rXsGZgXWem4yvkhdtHsP/55f9433b5447c09703ef39a547881497/image3.png" />
          </figure><p><sup>Connect to a PlanetScale database — no figuring things out on your own</sup></p><p>Postgres and MySQL are the most popular options for building applications, and with good reason. Many large companies (including Cloudflare!) have built and scaled on these databases, resulting in a robust ecosystem. And you may want the power, familiarity, and functionality that these databases provide. </p><p>Importantly, all of this builds on <a href="https://blog.cloudflare.com/it-it/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive</u></a>, our distributed connection pooler and query caching infrastructure. Hyperdrive keeps connections to your databases warm to avoid incurring latency penalties for every new request, reduces the CPU load on your database by managing a connection pool, and can cache the results of your most frequent queries, removing load from your database altogether. Given that about 80% of queries for a typical transactional database are read-only, this can be substantial — and we’ve observed exactly that in practice!</p>
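<p>To make that read share concrete, here is a toy classifier. It is purely illustrative, not Hyperdrive’s actual logic (which must parse SQL properly): only read-only statements are candidates for result caching, while writes always travel to the origin database.</p>

```javascript
// Toy illustration (hypothetical; not Hyperdrive's real classifier): treat a
// statement as cacheable only if its leading verb is read-only.
function isCacheableQuery(sql) {
  const verb = sql.trim().split(/\s+/)[0].toUpperCase();
  // INSERT/UPDATE/DELETE and DDL must always reach the origin database.
  return verb === "SELECT" || verb === "SHOW";
}
```

<p>If roughly 80% of a workload’s statements pass a check like this, most traffic can potentially be served from a cache close to the Worker rather than from the origin database.</p>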
    <div>
      <h3>No more copying credentials around</h3>
      <a href="#no-more-copying-credentials-around">
        
      </a>
    </div>
    <p>Starting today, you can <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?step=1&amp;modal=1"><u>connect to your PlanetScale databases from the Cloudflare dashboard</u></a> in just a few clicks. Connecting is now secure by default with a one-click password rotation option, without needing to copy and manage credentials back and forth. A Hyperdrive configuration will be created for your PlanetScale database, providing you with the optimal setup to start building on Workers.</p><p>And the experience spans both Cloudflare and PlanetScale dashboards: you can also create and view attached Hyperdrive configurations for your databases from the PlanetScale dashboard.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3I7WyAGXCLY8xhugPlIhl5/0ec38f0248140a628d805df7bb62dcc3/image2.png" />
          </figure><p>By automatically integrating with Hyperdrive, your PlanetScale databases are optimally configured for access from Workers. When you connect your database via Hyperdrive, Hyperdrive’s Placement system automatically determines the location of the database and places its pool of database connections in Cloudflare data centers with the lowest possible latency. </p><p>When one of your Workers connects to your Hyperdrive configuration for your PlanetScale database, Hyperdrive will ensure the fastest access to your database by eliminating the unnecessary roundtrips included in a typical database connection setup. Hyperdrive will resolve connection setup within the Hyperdrive client and use existing connections from the pool to quickly serve your queries. Better yet, Hyperdrive allows you to cache your query results in case you need to scale for high-read workloads. </p><p>This is a peek under the hood of how Hyperdrive makes access to PlanetScale as fast as possible. We’ve previously blogged about <a href="https://blog.cloudflare.com/it-it/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive’s technical underpinnings</u></a> — it’s worth a read. And with this integration with Hyperdrive, you can easily connect to your databases across different Workers applications or environments, without having to reconfigure your credentials. All in all, a perfect match.</p>
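<p>For reference, wiring a Hyperdrive configuration into a Worker follows the standard Hyperdrive binding pattern in <code>wrangler.jsonc</code> (the id below is a placeholder for the configuration created by the integration):</p><pre><code>{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "your-hyperdrive-config-id"
    }
  ]
}</code></pre><p>Inside the Worker, <code>env.HYPERDRIVE.connectionString</code> can then be handed to a standard Postgres or MySQL driver, with connection pooling and query caching handled transparently.</p>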
    <div>
      <h3>Get started with PlanetScale and Workers</h3>
      <a href="#get-started-with-planetscale-and-workers">
        
      </a>
    </div>
    <p>With this partnership, we’re making it trivially easy to build on Workers with PlanetScale. Want to build a new application on Workers that connects to your existing PlanetScale cluster? With just a few clicks, you can create a globally deployed app that can query your database, cache your hottest queries, and keep your database connections warmed for fast access from Workers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eTtJKz4sxeNvClVQMWIFg/9c91fb02b1cd4eca7ad5ef013e7ab0f0/image4.png" />
          </figure><p><sup><i>Connect directly to your PlanetScale MySQL or Postgres databases from the Cloudflare dashboard, for optimal configuration with Hyperdrive.</i></sup></p><p>To get started, you can:</p><ul><li><p>Head to the <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?step=1&amp;modal=1"><u>Cloudflare dashboard</u></a> and connect your PlanetScale account</p></li><li><p>… or head to <a href="https://app.planetscale.com/"><u>PlanetScale</u></a> and connect your Cloudflare account</p></li><li><p>… and then deploy a Worker</p></li></ul><p>Review the <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive docs</u></a> and/or the <a href="https://planetscale.com/docs"><u>PlanetScale docs</u></a> to learn more about how to connect Workers to PlanetScale and start shipping.</p> ]]></content:encoded>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Partnership]]></category>
            <category><![CDATA[Database]]></category>
            <guid isPermaLink="false">7ibt13YouHX6Ew1wLZn5pi</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Adrian Gracia  </dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Cloudflare Email Service’s private beta]]></title>
            <link>https://blog.cloudflare.com/email-service/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re launching Cloudflare Email Service. Send and receive email directly from your Workers with native bindings—no API keys needed. Sign up for the private beta. ]]></description>
            <content:encoded><![CDATA[ <p>If you are building an application, you rely on email to communicate with your users. You validate their signup, notify them about events, and send them invoices through email. Email continues to find new purpose with agentic workflows and other AI-powered tools that rely on a simple email as an input or output.</p><p>And yet email is a pain for developers to manage, frequently among the most annoying burdens a team carries. Developers deserve a solution that is simple, reliable, and deeply integrated into their workflow. </p><p>Today, we're excited to announce just that: the private beta of Email Sending, a new capability that allows you to send transactional emails directly from Cloudflare Workers. Email Sending joins and expands our popular <a href="https://developers.cloudflare.com/email-routing/"><u>Email Routing</u></a> product, and together they form the new Cloudflare Email Service — a single, unified developer experience for all your email needs.</p><p>With Cloudflare Email Service, we’re distilling our years of experience <a href="https://developers.cloudflare.com/cloudflare-one/email-security/"><u>securing</u></a> and <a href="https://developers.cloudflare.com/email-routing/"><u>routing</u></a> emails, and combining it with the power of the developer platform. Now, sending an email is as easy as adding a binding to a Worker and calling <code>send</code>:</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {

    await env.SEND_EMAIL.send({
      to: [{ email: "hello@example.com" }],
      from: { email: "api-sender@your-domain.com", name: "Your App" },
      subject: "Hello World",
      text: "Hello World!"
    });

    return new Response(`Successfully sent email!`);
  },
};</code></pre>
            
    <div>
      <h3>Email experience is user experience</h3>
      <a href="#email-experience-is-user-experience">
        
      </a>
    </div>
    <p>Email is a core part of your user experience. It’s how you stay in touch with your users when they are outside your applications. Users rely on email for password resets, purchase receipts, magic login links, and onboarding flows. When those emails fail, your application fails.</p><p>That means emails must land in your users’ inboxes, both reliably and quickly. A magic link that arrives ten minutes late is a lost user. An email delivered to a spam folder breaks user flows and can erode trust in your product. That’s why we’re focusing on deliverability and time-to-inbox with Cloudflare Email Service. </p><p>To do this, we’re tightly integrating with DNS to automatically configure the necessary DNS records — like SPF, DKIM and DMARC — so that email providers can verify your sending domain and trust your emails. Plus, in true Cloudflare fashion, Email Service is a global service. That means we can deliver your emails with low latency anywhere in the world, without the complexity of managing servers across regions.</p>
    <div>
      <h3>Simple and flexible for developers</h3>
      <a href="#simple-and-flexible-for-developers">
        
      </a>
    </div>
    <p>Treating email as a core piece of your application also means building for every touchpoint in your development workflow. We’re building Email Service as part of the Cloudflare stack so that developing with email feels as natural as writing a Worker. </p><p>In practice, that means solving for every part of the transactional email workflow:</p><ul><li><p>Getting started with Email Service is easy. Instead of managing API keys and secrets, you can add the <code>Email</code> binding to your <code>wrangler.jsonc</code> and send emails securely, with no risk of leaked credentials. </p></li><li><p>You can use Workers to process incoming mail, store attachments in R2, and add tasks to Queues to get email sending off the hot path of your application. And you can use <code>wrangler</code> to emulate Email Sending locally, allowing you to test your user journeys without jumping between tools and environments.</p></li><li><p>In production, you have clear <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> over your emails with bounce rates and delivery events. And, when a user reports a missing email, you can dive into the delivery status to debug issues quickly and get your user back on track.</p></li></ul><p>We’re also making sure Email Service seamlessly fits into your existing applications. If you need to send emails from external services, you can do so using either REST APIs or SMTP. Likewise, if you’ve been leaning on existing email frameworks (like <a href="https://react.email/"><u>React Email</u></a>) to send rich, HTML-rendered emails to users, you can continue to use them with Email Service. Import the library, render your template, and pass it to the <code>send</code> method just as you would elsewhere.</p>
            <pre><code>import { render, pretty, toPlainText } from '@react-email/render';
import { SignupConfirmation } from './templates';

export default {
  async fetch(request, env, ctx) {

    // Convert React Email template to html
    const html = await pretty(await render(&lt;SignupConfirmation url="https://your-domain.com/confirmation-id"/&gt;));

    // Use the Email Sending binding to send emails
    await env.SEND_EMAIL.send({
      to: [{ email: "hello@example.com" }],
      from: { email: "api-sender@your-domain.com", name: "Welcome" },
      subject: "Signup Confirmation",
      html,
      text: toPlainText(html)
    });

    return new Response(`Successfully sent email!`);
  }
};</code></pre>
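            <p>For reference, the <code>SEND_EMAIL</code> binding used above is declared in your Worker’s configuration. Here’s a minimal sketch of what that might look like in <code>wrangler.jsonc</code> (this assumes the <code>send_email</code> binding type that Email Workers uses today; the exact configuration may evolve during the beta):</p>
            <pre><code>{
  "name": "signup-confirmation-worker",
  "main": "src/index.jsx",
  "compatibility_date": "2025-09-25",
  "send_email": [
    { "name": "SEND_EMAIL" }
  ]
}</code></pre>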
            
    <div>
      <h3>Email Routing and Email Sending: Better together</h3>
      <a href="#email-routing-and-email-sending-better-together">
        
      </a>
    </div>
    <p>Sending email is only half the story. Applications often need to receive and parse emails to create powerful workflows. By combining Email Sending with our existing <a href="https://developers.cloudflare.com/email-routing"><u>Email Routing</u></a> capabilities, we're providing a complete, end-to-end solution for all your application's email needs.</p><p>Email Routing allows you to create custom email addresses on your domain and handle incoming messages programmatically with a Worker, which can enable application flows such as:</p><ul><li><p>Using <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to parse, summarize and even label incoming emails: flagging security events from customers, catching early signs of a bug or incident, or generating automatic responses based on those incoming emails.</p></li><li><p>Creating support tickets in systems like JIRA or Linear from emails sent to <code>support@your-domain.com</code>.</p></li><li><p>Processing invoices sent to <code>invoices@your-domain.com</code> and storing attachments in R2.</p></li></ul><p>To use Email Routing, add the <code>email</code> handler to your Worker application and process incoming messages as needed:</p>
            <pre><code>export default {
  // Create an email handler to process emails delivered to your Worker
  async email(message, env, ctx) {

    // Read the raw email body from the incoming stream
    const text = await new Response(message.raw).text();

    // Classify the email using Workers AI
    // (the model returns an array of { label, score } predictions)
    const predictions = await env.AI.run("@cf/huggingface/distilbert-sst-2-int8", { text });

    // Hand the results off to a Queue for further processing
    await env.PROCESSED_EMAILS.send({ predictions, from: message.from });
  },
};</code></pre>
            <p>When you combine inbound routing with outbound sending, you can close the loop entirely within Cloudflare. Imagine a user emails your support address. A Worker can receive the email, parse its content, call a third-party API to create a ticket, and then use the Email Sending binding to send an immediate confirmation back to the user with their ticket number. That’s the power of a unified Email Service.</p><p>Email Sending will require a paid Workers subscription, and we'll be charging based on messages sent. We're still finalizing the packaging, and we'll update our documentation and <a href="https://developers.cloudflare.com/changelog/"><u>changelog</u></a>, and notify users as soon as we have final pricing and long before we start charging. Email Routing limits will remain unchanged.</p>
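            <p>As a minimal sketch of that support flow (the <code>SEND_EMAIL</code> binding name, the <code>ticketing.example.com</code> endpoint, and its response shape are hypothetical placeholders for your own systems):</p>
            <pre><code>// Build the confirmation message sent back to the user
function buildConfirmation(userEmail, ticketId) {
  return {
    to: [{ email: userEmail }],
    from: { email: "support@your-domain.com", name: "Support" },
    subject: `We received your request (ticket ${ticketId})`,
    text: `Thanks for reaching out! Your ticket number is ${ticketId}.`
  };
}

const worker = {
  // Runs for each email delivered to your support address via Email Routing
  async email(message, env, ctx) {
    // Read the raw email body from the incoming stream
    const body = await new Response(message.raw).text();

    // Create a ticket in a third-party system (hypothetical API)
    const res = await fetch("https://ticketing.example.com/api/tickets", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ from: message.from, body })
    });
    const { ticketId } = await res.json();

    // Close the loop: confirm back to the user via the Email Sending binding
    await env.SEND_EMAIL.send(buildConfirmation(message.from, ticketId));
  }
};

export default worker;</code></pre>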
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Email is core to your application today, and it's becoming essential for the next generation of AI agents, background tasks, and automated workflows. We built the Cloudflare Email Service to be the engine for this new era of applications, and we’ll be making it available in private beta this November.</p><ul><li><p>Interested in Email Sending? <a href="https://forms.gle/BX6ECfkar3oVLQxs7"><u>Sign up for the waitlist here.</u></a> </p></li><li><p>Want to start processing inbound emails? Get started with <a href="https://developers.cloudflare.com/email-routing/"><u>Email Routing</u></a>, which is available now, remains free, and will be folded into the new email sending APIs.</p></li></ul><p>We’re excited to be adding Email Service to our Developer Platform, and we’re looking forward to seeing how you reimagine user experiences that increasingly rely on email!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Email Service]]></category>
            <guid isPermaLink="false">3yl6uG1uh1UE5rplzBlLad</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
        </item>
        <item>
            <title><![CDATA[A global virtual private cloud for building secure cross-cloud apps on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/workers-virtual-private-cloud/</link>
            <pubDate>Fri, 11 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’re announcing Workers VPC: a global private network that allows applications deployed on Cloudflare Workers to connect to your legacy cloud infrastructure.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re sharing a preview of a new feature that makes it easier to build cross-cloud apps: Workers VPC. </p><p>Workers VPC is our take on the traditional <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud/"><u>virtual private cloud (VPC)</u></a>, modernized for a network and compute that isn’t tied to a single cloud region. And we’re complementing it with Workers VPC Private Links to make building across clouds easier. Together, they introduce two new capabilities to <a href="https://developers.cloudflare.com/workers"><u>Workers</u></a>:</p><ol><li><p>A way to group your apps’ resources on Cloudflare into isolated environments, where only resources within a Workers VPC can access one another, allowing you to secure and segment app-to-app traffic (a “Workers VPC”).</p></li><li><p>A way to connect a Workers VPC to a legacy VPC in a public or private cloud, enabling your Cloudflare resources to access your resources in private networks and vice versa, as if they were in a single VPC (the “Workers VPC Private Link”).</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3b8ZShcyU8OvMSKi9Ku9fW/70c1ee1a3f10242551dd32438d0bcfba/1.png" />
          </figure><p><sup><i>Workers VPC and Workers VPC Private Link enable bidirectional connectivity between Cloudflare and external clouds</i></sup></p><p>When linked to an external VPC, Workers VPC makes the underlying resources directly addressable, so that application developers can think at the application layer, without dropping down to the network layer. Think of this like a unified VPC across clouds, with built-in service discovery.</p><p>We’re actively building Workers VPC on the foundation of our existing private networking products and expect to roll it out later in 2025. We wanted to share a preview of it early to get feedback and learn more about what you need. </p>
    <div>
      <h2>Building private cross-cloud apps is hard </h2>
      <a href="#building-private-cross-cloud-apps-is-hard">
        
      </a>
    </div>
    <p>Developers are increasingly making Workers their platform of choice, building rich, stateful applications on it. We’re way past Workers’ <a href="https://blog.cloudflare.com/introducing-cloudflare-workers"><u>original edge use-cases</u></a>: you’re modernizing more of your stack and moving more business logic onto Workers. You’re choosing Workers to build real-time collaboration applications that access your external databases, large-scale applications that use your secured APIs, and <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/">Model Context Protocol</a> (MCP) servers that expose your business logic to <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agents</a> as close to your end users as possible.</p><p>Now, you’re running into the final barrier holding you back in external clouds: the VPC. Virtual private clouds provide you with peace of mind and security, but they’ve been cleverly designed to deliberately add mile-high barriers to building your apps on Workers. That’s the unspoken, vested interest behind getting you to use more legacy VPCs: it’s yet another way that <a href="https://blog.cloudflare.com/tag/connectivity-cloud/"><u>captivity clouds</u></a> hold your data and apps hostage and lock you in. </p><p>In conversation after conversation, you’ve told us “VPCs are a blocker”. We get it: your company policies mandate the VPC, and with good reason! So, to access private resources from Workers, you have to either 1) create new public APIs that perform authentication to provide secure access, or 2) set up and scale Cloudflare Tunnels and Zero Trust for each resource that you want to access. That’s a lot of hoops to jump through before you can even start building.</p><p>While we have the storage and compute options for you to build fully on Workers, we also understand that you won’t be moving your applications or your data overnight! 
But we think you should at least be <b>free</b> to choose Workers <b>today</b> to build modern applications, AI agents, and real-time global applications with your existing private APIs and databases. That’s why we’re building Workers VPC.</p><p>We’ve witnessed the pain of building around VPCs first hand. In 2024, we shipped <a href="https://blog.cloudflare.com/elephants-in-tunnels-how-hyperdrive-connects-to-databases-inside-your-vpc-networks/"><u>support for private databases</u></a> for <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>. This made it possible for you to connect to databases in an external VPC from Cloudflare Workers, using Cloudflare Tunnels as the underlying network solution. As a point-to-point solution, it’s been working great! But this solution has its limitations: managing and scaling a Cloudflare Tunnel for each resource in your external cloud isn’t sustainable for large, complex architectures. </p><p>We want to provide a dead-simple solution for you to unlock access to external cloud resources, in a manner that scales as you modernize more of your workloads with Workers. And we’re leveraging the experience we have in building Magic WAN and Magic Cloud Networking to make that possible.</p><p>So, we’re taking VPCs global with Workers VPC. And we’re letting you connect them to your legacy private networks with Workers VPC Private Links. Because we think you should be free to build secure, global, cross-cloud apps on Workers. </p>
    <div>
      <h2>Global cross-cloud apps need a global VPC</h2>
      <a href="#global-cross-cloud-apps-need-a-global-vpc">
        
      </a>
    </div>
    <p>Private networks are complex to set up: they span many layers of abstraction, and entire teams are needed to manage them. There are few things as complex as managing architectures that have outgrown their original point-to-point network! So we knew we needed to provide a simple solution for isolated environments on our platform.</p><p>Workers VPCs are, by definition, virtual private clouds. That means that they allow you to define isolated environments of Workers and Developer Platform resources like <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a>, <a href="https://developers.cloudflare.com/kv"><u>Workers KV</u></a>, and <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1</u></a> that have secure access to one another. Other resources in your Cloudflare account won’t have access to these — VPCs allow you to specify certain sets of resources that are associated with certain apps and ensure no cross-application access of resources happens.</p><p>Workers VPCs are the equivalent of the legacy VPC, re-envisioned for the Cloudflare Developer Platform. The main difference is how Workers VPCs are implemented under the hood: instead of being built on top of regional, IP-based networking, Workers VPCs are built for global scale with the Cloudflare network performing isolation of resources across all of its datacenters. </p><p>And as you would expect from traditional VPCs, Workers VPCs have networking capabilities that allow them to seamlessly integrate with traditional networks, enabling you to build cross-cloud apps that never leave the networks you trust. That’s where Workers VPC Private Links come in. </p><p>Like AWS PrivateLink and other VPC-to-VPC approaches, Workers VPC Private Links connect your Workers VPC to your external cloud using either standard tunnels over IPsec or <a href="https://blog.cloudflare.com/cloudflare-network-interconnect/"><u>Cloudflare Network Interconnect</u></a>. 
When a Private Link is established, resources from either side can access one another directly, with nothing exposed over the public Internet, as if they were a single, connected VPC.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TXRsaIO3a9tFFWo3LZJqq/adc1d2d54baf595bad661ff9ed035640/2.png" />
          </figure><p><sup><i>Workers VPC Private Link automatically provisions a gateway for IPsec tunnels or Cloudflare Network Interconnect and configures DNS for routing to Cloudflare resources</i></sup></p><p>To make this possible, Workers VPC and Private Links work together to automatically provision and manage the resources in your external cloud. This establishes the connection between both networks and configures the resources required to make bidirectional routing possible. And, because we know some teams will want to maintain full responsibility over resource provisioning, Workers VPC Private Link can automatically provide you with Terraform scripts to provision external cloud resources that you can run yourself.</p><p>After the connection is made, Workers VPC will automatically detect the resources in your external VPC and make them available as bindings with unique IDs. Requests made through the Workers VPC resource binding will automatically be routed to your external VPC, where DNS resolution will occur (if you’re using hostname-accessed resources) and will be routed to the expected resource. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TvQBKtKVxy9MC0buyE07x/57d03cf1c0be765b582318e53e6d6a8e/3.png" />
          </figure><p>For example, connecting from Cloudflare Workers to a private API in an external VPC is just a matter of calling fetch() on a binding to a named Workers VPC resource:</p>
            <pre><code>const response = await env.WORKERS_VPC_RESOURCE.fetch("/api/users/342");</code></pre>
            <p>Similarly, Cloudflare resources are accessible via a standardized URL that has been configured within a private DNS resource in your external cloud by Workers VPC Private Link. If you were attempting to access R2 objects from an API in your VPC, you would be able to make the request to the expected URL:</p>
            <pre><code>const response = await fetch("https://&lt;account_id&gt;.r2.cloudflarestorage.com.cloudflare-workers-vpc.com");</code></pre>
            <p>Best of all, since Workers VPC is built on our existing platform, it takes full advantage of our networking and routing capabilities to reduce egress fees and let you build global apps.</p><p>First, by supporting <a href="https://developers.cloudflare.com/network-interconnect/"><u>Cloudflare Network Interconnect</u></a> as the underlying connection method, Workers VPC Private Links can help you lower your bandwidth costs by taking advantage of discounted external cloud egress pricing. Second, since Workers VPC is global by nature, your Workers and resources can be placed wherever needed to ensure optimal performance. For instance, with Workers’ <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a>, you can ensure that your Workers are automatically placed in a region closest to your external, regional VPC to maximize app performance. </p>
    <div>
      <h2>An end-to-end connectivity cloud</h2>
      <a href="#an-end-to-end-connectivity-cloud">
        
      </a>
    </div>
    <p>Workers VPC unlocks huge swaths of your workloads that are currently locked into external clouds, without requiring you to expose those private resources to the public Internet to build on Workers. Here are real examples of applications that you’ve told us you’re looking forward to building on Workers with Workers VPC:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52a04nrfXOe7FSVTAJzEn5/20747efd37dcd3e21f2d752ce4a6cdd8/4.png" />
          </figure><p><sup><i>Sample architecture of real-time canvas application built on Workers and Durable Objects accessing a private database and container in an external VPC</i></sup></p><p>Let’s say you’re trying to build a new feature for your application on <a href="https://developers.cloudflare.com/workers"><u>Workers</u></a>. You also want to add real-time collaboration to your app using <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>. And you’re using <a href="http://blog.cloudflare.com/cloudflare-containers-coming-2025"><u>Containers</u></a> as well because you need to access FFmpeg for live video processing. In each scenario, you need a way to persist the state updates in your existing traditional database and access your existing APIs.</p><p>Whereas in the past you might have had to create a separate API just to handle update operations from Workers and Durable Objects, you can now access the traditional database and update values directly with Workers VPC. </p><p>Same thing goes for <a href="https://modelcontextprotocol.io/introduction"><u>Model Context Protocol (MCP)</u></a> servers! If you’re <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>building an MCP server on Workers</u></a>, you may want to expose certain functionality that isn’t immediately available as a public API, especially if time to market is important. With Workers VPC, you can create new functionality directly in your MCP server that builds upon your private APIs or databases, enabling you to ship quickly and securely. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3B4uuJCwP6RdFugEMjDLsq/65c01455b7903884384390ec344a6f57/5.png" />
          </figure><p><sup><i>Sample architecture of external cloud resources accessing data from R2, D1, KV</i></sup></p><p>Lots of development teams are landing more and more data on the Cloudflare Developer Platform, whether it is storing AI training data on <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> due to its <a href="https://www.cloudflare.com/the-net/cloud-egress-fees-challenge-future-ai/">zero-egress cost efficiency</a>, application data in <a href="https://developers.cloudflare.com/d1"><u>D1</u></a> with its horizontal sharding model, or configuration data in <a href="https://developers.cloudflare.com/kv"><u>KV</u></a> for its global single-digit millisecond read latencies. </p><p>Now, you need to provide a way to use the training data in R2 from your compute in your external cloud to train or fine-tune LLM models. Since you’re accessing user data, you need to use a private network because it’s mandated by your security teams. Likewise, you need to access user data and configuration data in D1 and KV for certain administrative or analytical tasks and you want to do so while avoiding the public Internet. Workers VPC enables direct, private routing from your external VPC to Cloudflare resources, with easily accessible hostnames from the automatically configured private DNS.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AF32up4fVJsS9NoI11GRl/bdc8741dcae296c8b3f9ab34c823b520/6.png" />
          </figure><p>Finally, let’s use an <a href="https://developers.cloudflare.com/agents/"><u>AI agents</u></a> example — it’s <a href="https://blog.cloudflare.com/welcome-to-developer-week-2025/"><u>Developer Week 2025</u></a> after all! This AI agent is built on Workers, and uses retrieval augmented generation (RAG) to improve the results of its generated text while minimizing the context window. </p><p>You’re using <a href="https://www.postgresql.org/"><u>PostgreSQL</u></a> and <a href="https://www.elastic.co/elasticsearch"><u>Elasticsearch</u></a> in your external cloud because that’s where your data currently resides and you’re a fan of <a href="https://github.com/pgvector/pgvector"><u>pgvector</u></a>. You’ve decided to use Workers because you want to get to market quickly, and now you need to access your database. Your database is, once again, placed in a private network and is inaccessible from the public Internet. </p><p>While you could provision a new Hyperdrive and Cloudflare Tunnel in a container, since your Workers VPC is already set up and linked, you can access the database directly using either Workers or <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>. </p><p>And what if new documents get added to your <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> in your external cloud? You might want to kick off a workflow to process the new document, chunk it, get embeddings for it, and update the state of your application accordingly, all while providing real-time updates to your end users about the status of the workflow. </p><p>Well, in that case, you can use <a href="https://developers.cloudflare.com/workflows"><u>Workflows</u></a>, triggered by a serverless function in the external cloud. 
The Workflow will then fetch the new document from object storage, process it as needed, use your preferred embedding provider (whether Workers AI or another) to generate embeddings and update the vector stores in Postgres, and then update the state of your application. </p><p>These are just some of the workloads that we know will benefit from Workers VPC on day 1. We’re excited to see what you build and are looking forward to working with you to make global VPCs real. </p>
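            <p>The chunking step in that pipeline can be sketched as a small pure function. The chunk size and overlap below are arbitrary illustration values, not recommendations:</p>
            <pre><code>// Split a document into overlapping chunks before requesting embeddings.
// Overlap keeps sentences that straddle a boundary intact in at least one
// chunk. Note: overlap must be smaller than chunkSize.
function chunkDocument(text, chunkSize, overlap) {
  const chunks = [];
  let start = 0;
  let piece = text.slice(0, chunkSize);
  while (piece.length) {
    chunks.push(piece);
    start += chunkSize - overlap;
    piece = text.slice(start, start + chunkSize);
  }
  return chunks;
}

// e.g. chunkDocument(documentText, 1000, 100) yields 1000-character
// chunks that overlap their neighbors by 100 characters.</code></pre>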
    <div>
      <h2>A new era for virtual private clouds</h2>
      <a href="#a-new-era-for-virtual-private-clouds">
        
      </a>
    </div>
    <p>We’re incredibly excited for you to be able to build more on Workers with Workers VPC. We believe that private access to your APIs and databases in your private networks will redefine what you can build on Workers. Workers VPCs unlock access to your private resources to let you ship faster, more performant apps on Workers. And we’re obviously going to ensure that <a href="http://blog.cloudflare.com/cloudflare-containers-coming-2025"><u>Containers</u></a> integrate natively with Workers VPC.</p><p>We’re actively building Workers VPC on the networking primitives and on-ramps we’ve been using to connect customer networks at scale, and our goal is to ship an early preview later in 2025.</p><p>We’re planning to tackle connectivity from Workers to external clouds first, enabling you to modernize more apps that need access to private APIs and databases with Workers, before expanding to support bidirectional traffic flows and multiple Workers VPC networks. If you want to shape the vision of Workers VPC and have workloads trapped in a legacy cloud, <a href="https://www.cloudflare.com/workers-virtual-private-cloud-signup"><u>express interest here</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers VPC]]></category>
            <guid isPermaLink="false">1YpOIeuQls9R5sHSC6ScsF</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Steve Welham</dc:creator>
        </item>
        <item>
            <title><![CDATA[Build global MySQL apps using Cloudflare Workers and Hyperdrive]]></title>
            <link>https://blog.cloudflare.com/building-global-mysql-apps-with-cloudflare-workers-and-hyperdrive/</link>
            <pubDate>Tue, 08 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ MySQL comes to Cloudflare Workers and Hyperdrive: MySQL drivers and ORMs are now compatible with the Workers runtime, and Hyperdrive allows you to connect to your regional database from Workers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we’re announcing support for MySQL in Cloudflare Workers and Hyperdrive. You can now build applications on Workers that connect to your MySQL databases directly, no matter where they’re hosted, with native MySQL drivers, and with optimal performance. </p><p>Connecting to MySQL databases from Workers has been an area we’ve been focusing on <a href="https://blog.cloudflare.com/relational-database-connectors/"><u>for quite some time</u></a>. We want you to build your apps on Workers with your existing data, even if that data exists in a SQL database in us-east-1. But connecting to traditional SQL databases from Workers has been challenging: it requires making stateful connections to regional databases with drivers that haven’t been designed for <a href="https://blog.cloudflare.com/workerd-open-source-workers-runtime/"><u>the Workers runtime</u></a>. </p><p>After multiple attempts at solving this problem for Postgres, <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> emerged as our solution that provides the best of both worlds: it supports existing database drivers and libraries while also providing best-in-class performance. And it’s such a critical part of connecting to databases from Workers that we’re making it free (check out the <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access"><u>Hyperdrive free tier announcement</u></a>).</p><p>With <a href="http://blog.cloudflare.com/full-stack-development-on-cloudflare-workers"><u>new Node.js compatibility improvements</u></a> and <a href="http://developers.cloudflare.com/changelog/2025-04-08-hyperdrive-mysql-support/"><u>Hyperdrive support for the MySQL wire protocol</u></a>, we’re happy to say MySQL support for Cloudflare Workers has been achieved. 
If you want to jump into the code and have a MySQL database on hand, this “Deploy to Cloudflare” button will get you set up with a deployed project and will create a repository so you can dig into the code. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p>Read on to learn more about how we got MySQL to work on Workers, and why Hyperdrive is critical to making connectivity to MySQL databases fast.</p>
    <div>
      <h2>Getting MySQL to work on Workers</h2>
      <a href="#getting-mysql-to-work-on-workers">
        
      </a>
    </div>
    <p>Until recently, connecting to MySQL databases from Workers was not straightforward. While it’s been possible to make TCP connections from Workers <a href="https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/"><u>for some time</u></a>, MySQL drivers had many dependencies on Node.js that weren’t available on the Workers runtime, and that prevented their use.</p><p>This led to workarounds being developed. PlanetScale provided a <a href="https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript"><u>serverless driver for JavaScript</u></a>, which communicates with PlanetScale servers using HTTP instead of TCP to relay database messages. In a separate effort, a <a href="https://github.com/nora-soderlund/cloudflare-mysql"><u>fork</u></a> of the <a href="https://www.npmjs.com/package/mysql"><u>mysql</u></a> package was created to polyfill the missing Node.js dependencies and modify the <code>mysql</code> package to work on Workers. </p><p>These solutions weren’t perfect. They required using new libraries that either did not provide the level of support expected for production applications, or provided solutions that were limited to certain MySQL hosting providers. They also did not integrate with existing codebases and tooling that depended on the popular MySQL drivers (<a href="https://www.npmjs.com/package/mysql"><u>mysql</u></a> and <a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a>). In our effort to enable all JavaScript developers to build on Workers, we knew that we had to support these drivers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3s4E4XVAbvqRwyk6aITm1h/cb9700eae49d2593c9a834fc7a09018e/1.png" />
          </figure><p><sup><i>Package downloads from </i></sup><a href="https://www.npmjs.com/"><sup><i><u>npm</u></i></sup></a><sup><i> for </i></sup><a href="https://www.npmjs.com/package/mysql"><sup><i><u>mysql</u></i></sup></a><sup><i> and </i></sup><a href="https://www.npmjs.com/package/mysql2"><sup><i><u>mysql2</u></i></sup></a></p><p>Improving our Node.js compatibility story was critical to get these MySQL drivers working on our platform. We first identified <a href="https://nodejs.org/api/net.html"><u>net</u></a> and <a href="https://nodejs.org/api/stream.html"><u>stream</u></a> as APIs that were needed by both drivers. This, complemented by Workers’ <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>nodejs_compat</u></a> to resolve unused Node.js dependencies with <a href="https://github.com/unjs/unenv"><code><u>unenv</u></code></a>, enabled the <a href="https://www.npmjs.com/package/mysql"><u>mysql</u></a> package to work on Workers:</p>
            <pre><code>import { createConnection } from 'mysql';

export default {
  async fetch(request, env, ctx): Promise&lt;Response&gt; {
    const result = await new Promise&lt;any&gt;((resolve, reject) =&gt; {
      const connection = createConnection({
        host: env.DB_HOST,
        user: env.DB_USER,
        password: env.DB_PASSWORD,
        database: env.DB_NAME,
        port: env.DB_PORT
      });

      connection.connect((error: { message: string }) =&gt; {
        if (error) {
          // Reject instead of throwing, so the error propagates out of the Promise
          reject(new Error(error.message));
          return;
        }

        connection.query("SHOW tables;", [], (error, rows, fields) =&gt; {
          connection.end();

          if (error) {
            reject(new Error(error.message));
            return;
          }

          resolve({ fields, rows });
        });
      });
    });

    return new Response(JSON.stringify(result), {
      headers: {
        'Content-Type': 'application/json',
      },
    });
  },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>Further work was required to get <a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a> working: dependencies on Node.js <a href="https://nodejs.org/api/timers.html"><u>timers</u></a> and the JavaScript <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval"><u>eval</u></a> API remained. While we were able to land support for <a href="https://github.com/cloudflare/workerd/pull/3346"><u>timers</u></a> in the Workers runtime, <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval"><u>eval</u></a> was not an API that we could securely enable in the Workers runtime at this time. </p><p><a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a> uses <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval"><u>eval</u></a> to optimize the parsing of MySQL results containing large rows with more than 100 columns (see <a href="https://github.com/sidorares/node-mysql2/issues/2055#issuecomment-1614222188"><u>benchmarks</u></a>). This blocked the driver from working on Workers, since the Workers runtime does not support this API. Luckily, <a href="https://github.com/sidorares/node-mysql2/pull/2289"><u>prior effort existed</u></a> to get mysql2 working on Workers using static parsers for handling text and binary MySQL data types without using <code>eval()</code>, an approach that provides similar performance in the majority of scenarios. </p><p>In <a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a> version <code>3.13.0</code>, a new option to disable the use of <code>eval()</code> was released to make it possible to use the driver in Cloudflare Workers:</p>
            <pre><code>import { createConnection  } from 'mysql2/promise';

export default {
 async fetch(request, env, ctx): Promise&lt;Response&gt; {
    const connection = await createConnection({
     host: env.DB_HOST,
     user: env.DB_USER,
     password: env.DB_PASSWORD,
     database: env.DB_NAME,
     port: env.DB_PORT,

     // The following line is needed for mysql2 to work on Workers (as explained above)
     // mysql2 uses eval() to optimize result parsing for rows with &gt; 100 columns
     // eval() is not available in Workers due to runtime limitations
     // Configure mysql2 to use static parsing with disableEval
     disableEval: true
   });

   const [results, fields] = await connection.query(
     'SHOW tables;'
   );

   return new Response(JSON.stringify({ results, fields }), {
     headers: {
       'Content-Type': 'application/json',
       'Access-Control-Allow-Origin': '*',
     },
   });
 },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>So, with these efforts, it is now possible to connect to MySQL from Workers. But getting the MySQL drivers working on Workers was only half the battle. To make MySQL on Workers performant for production use, we needed to make it possible to connect to MySQL databases with Hyperdrive.</p>
    <div>
      <h2>Supporting MySQL in Hyperdrive</h2>
      <a href="#supporting-mysql-in-hyperdrive">
        
      </a>
    </div>
    <p>If you’re a MySQL developer, <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> may be new to you. Hyperdrive solves a core problem: connecting from Workers to regional SQL databases is slow. Database drivers <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>require many roundtrips</u></a> to establish a connection to a database. Without the ability to reuse these connections between Worker invocations, a lot of unnecessary latency is added to your application. </p><p>Hyperdrive solves this problem by pooling connections to your database globally and eliminating unnecessary roundtrips for connection setup. As a plus, Hyperdrive also provides integrated caching to offload popular queries from your database. We wrote an entire deep dive on how Hyperdrive does this, which you should definitely <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>check out</u></a>. </p>
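To see why those setup roundtrips matter, here is a back-of-envelope sketch. The roundtrip counts below are illustrative assumptions (a TCP handshake, a TLS 1.2-style negotiation, and a typical database auth exchange), not measured figures:

```typescript
// Back-of-envelope cost of establishing a fresh database connection:
// several sequential roundtrips happen before the first query is sent.
function connectionSetupCost(rttMs: number): number {
  const roundtrips = {
    tcpHandshake: 1, // SYN / SYN-ACK
    tlsHandshake: 2, // TLS 1.2-style negotiation (illustrative)
    dbAuth: 2,       // server greeting + authentication exchange
  };
  const total = Object.values(roundtrips).reduce((a, b) => a + b, 0);
  return total * rttMs;
}

// A Worker far from a regional database pays the full setup cost per
// cold connection; a pooler sitting a few ms from the database barely does.
console.log(connectionSetupCost(200)); // → 1000
console.log(connectionSetupCost(5));   // → 25
```

This is exactly the cost a connection pooler amortizes: the expensive handshakes happen once, near the database, and subsequent queries reuse the warm connection.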
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/R8bfw57o8KEXHD7EstTmP/eee5182beb931373c25a1c42f5dd0ce3/2.png" />
          </figure><p>Getting Hyperdrive to support MySQL was critical for us to be able to say “Connect from Workers to MySQL databases”. That’s easier said than done. To support a new database type, Hyperdrive needs to be able to parse the wire protocol of the database in question, in this case, the <a href="https://dev.mysql.com/doc/dev/mysql-server/8.4.3/PAGE_PROTOCOL.html"><u>MySQL protocol</u></a>. Once this is accomplished, Hyperdrive can extract queries from protocol messages, cache results across Cloudflare locations, relay messages to a datacenter close to your database, and pool connections reliably close to your origin database. </p><p>Adapting Hyperdrive to parse a new wire protocol is a challenge in its own right. But MySQL also presented some notable differences from Postgres. While the intricacies are beyond the scope of this post, the differences in MySQL’s <a href="https://dev.mysql.com/doc/refman/8.4/en/authentication-plugins.html"><u>authentication plugins</u></a> across providers and the way MySQL’s connection handshake uses <a href="https://dev.mysql.com/doc/dev/mysql-server/latest/group__group__cs__capabilities__flags.html"><u>capability flags</u></a> required some adaptation of Hyperdrive. In the end, we leveraged the experience we gained in building Hyperdrive for Postgres to iterate on our support for MySQL. And we’re happy to announce MySQL support is available for Hyperdrive, with all of the <a href="https://developers.cloudflare.com/changelog/2025-03-04-hyperdrive-pooling-near-database-and-ip-range-egress/"><u>performance</u></a> <a href="https://developers.cloudflare.com/changelog/2024-12-11-hyperdrive-caching-at-edge/"><u>improvements</u></a> we’ve made to Hyperdrive available from the get-go!</p><p>Now, you can create new Hyperdrive configurations for MySQL databases hosted anywhere (we’ve tested MySQL and MariaDB databases from AWS (including AWS Aurora), GCP, Azure, PlanetScale, and self-hosted databases). 
You can create Hyperdrive configurations for your MySQL databases from the dashboard or the <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler CLI</u></a>:</p>
            <pre><code>wrangler hyperdrive create mysql-hyperdrive \
--connection-string="mysql://user:password@db-host.example.com:3306/defaultdb"</code></pre>
            <p>In your Wrangler configuration file, you’ll need to set your Hyperdrive binding to the ID of the newly created Hyperdrive configuration as well as set Node.js compatibility flags:</p>
            <pre><code>{
 "$schema": "node_modules/wrangler/config-schema.json",
 "name": "workers-mysql-template",
 "main": "src/index.ts",
 "compatibility_date": "2025-03-10",
 "observability": {
   "enabled": true
 },
 "compatibility_flags": [
   "nodejs_compat"
 ],
 "hyperdrive": [
   {
     "binding": "HYPERDRIVE",
     "id": "&lt;HYPERDRIVE_CONFIG_ID&gt;"
   }
 ]
}</code></pre>
            <p>From your Cloudflare Worker, the Hyperdrive binding provides you with custom connection credentials that connect to your Hyperdrive configuration. From there onward, all of your queries and database messages will be routed to your origin database by Hyperdrive, leveraging Cloudflare’s network to speed up routing.</p>
            <pre><code>import { createConnection  } from 'mysql2/promise';

export interface Env {
 HYPERDRIVE: Hyperdrive;
}

export default {
 async fetch(request, env, ctx): Promise&lt;Response&gt; {
  
   // Hyperdrive provides new connection credentials to use with your existing drivers
   const connection = await createConnection({
     host: env.HYPERDRIVE.host,
     user: env.HYPERDRIVE.user,
     password: env.HYPERDRIVE.password,
     database: env.HYPERDRIVE.database,
     port: env.HYPERDRIVE.port,

     // Configure mysql2 to use static parsing (as explained above in Part 1)
     disableEval: true 
   });

   const [results, fields] = await connection.query(
     'SHOW tables;'
   );

   return new Response(JSON.stringify({ results, fields }), {
     headers: {
       'Content-Type': 'application/json',
       'Access-Control-Allow-Origin': '*',
     },
   });
 },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>As you can see from this code snippet, you only need to swap the credentials in your JavaScript code for those provided by Hyperdrive to migrate your existing code to Workers. No need to change the ORMs or drivers you’re using! </p>
    <div>
      <h2>Get started building with MySQL and Hyperdrive</h2>
      <a href="#get-started-building-with-mysql-and-hyperdrive">
        
      </a>
    </div>
    <p>MySQL support for Workers and Hyperdrive has been long overdue and we’re excited to see what you build. We published a template for you to get started building your MySQL applications on Workers with Hyperdrive:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>As for what’s next, we’re going to continue iterating on our support for MySQL during the beta to support more of the MySQL protocol and MySQL-compatible databases. We’re also going to continue to expand the feature set of Hyperdrive to make it more flexible for your full-stack workloads and more performant for building full-stack global apps on Workers.</p><p>Finally, whether you’re using MySQL, PostgreSQL, or any of the other compatible databases, we think you should be using Hyperdrive to get the best performance. And because we want to enable you to build on Workers regardless of your preferred database, <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access"><u>we’re making Hyperdrive available to the Workers free plan. </u></a></p><p>We want to hear your feedback on MySQL, Hyperdrive, and building global applications with Workers. Join the #hyperdrive channel in our <a href="http://discord.cloudflare.com/"><u>Developer Discord</u></a> to ask questions, share what you’re building, and talk to our Product &amp; Engineering teams directly.</p><p>Thank you to <a href="https://github.com/wellwelwel"><u>Weslley Araújo</u></a>, <a href="https://github.com/sidorares"><u>Andrey Sidorov</u></a>, <a href="https://github.com/shiyuhang0"><u>Shi Yuhang</u></a>, <a href="https://github.com/Mini256"><u>Zhiyuan Liang</u></a>, <a href="https://github.com/nora-soderlund"><u>Nora Söderlund</u></a> and other open-source contributors who helped push this initiative forward.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[MySQL]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <guid isPermaLink="false">77Nbj8Tnnrr6vMWsDSFekZ</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Andrew Repp</dc:creator>
            <dc:creator>Kirk Nickish</dc:creator>
            <dc:creator>Yagiz Nizipli</dc:creator>
        </item>
        <item>
            <title><![CDATA[We made Workers KV up to 3x faster — here’s the data]]></title>
            <link>https://blog.cloudflare.com/faster-workers-kv/</link>
            <pubDate>Thu, 26 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance that they abandon your website. ]]></description>
            <content:encoded><![CDATA[ <p>Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance that they abandon your website. The old adage remains as true as ever: <a href="https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates/"><u>faster websites result in higher conversion rates</u></a>. And with such outcomes tied to Internet speed, we believe a faster Internet is a better Internet.</p><p>Customers often use <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a> to provide <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a> with key-value data for configuration, routing, personalization, experimentation, or serving assets. Many of Cloudflare’s own products rely on KV for just this purpose: <a href="https://developers.cloudflare.com/pages"><u>Pages</u></a> stores static assets, <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Access</u></a> stores authentication credentials, <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> stores routing configuration, and <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> stores configuration and assets, among others. So KV’s speed affects the latency of every request to an application, throughout the entire lifecycle of a user session. </p><p>Today, we’re announcing up to 3x faster KV hot reads, with all KV operations faster by up to 20ms. And we want to pull back the curtain and show you how we did it. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67VzWOTRpMd9Dbc6ZM7M4M/cefb1d856344d9c963d4adfbe1b32fba/BLOG-2518_2.png" />
          </figure><p><sup><i>Workers KV read latency (ms) by percentile measured from Pages</i></sup></p>
    <div>
      <h2>Optimizing Workers KV’s architecture to minimize latency</h2>
      <a href="#optimizing-workers-kvs-architecture-to-minimize-latency">
        
      </a>
    </div>
    <p>At a high level, Workers KV is itself a Worker that makes requests to central storage backends, with many layers in between to properly cache and route requests across Cloudflare’s network. You can rely on Workers KV to support operations made by your Workers at any scale, and KV’s architecture will seamlessly handle your required throughput. </p>
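From your application's point of view, all of that machinery sits behind a single binding call. A minimal sketch of the read path from the caller's side; the `KVLike` interface and `fakeKV` stub below are illustrative stand-ins for the real `KVNamespace` binding, so the sketch is self-contained:

```typescript
// Stand-in for the KV binding's read surface.
interface KVLike {
  get(key: string): Promise<string | null>;
}

// One get() call triggers the whole chain: binding → KV Worker →
// cache lookup → (on a miss) the central storage backends.
async function readConfig(kv: KVLike, key: string): Promise<string> {
  const value = await kv.get(key);
  // A null result means the key was absent in cache and storage alike.
  return value ?? "default";
}

// Stub binding for illustration; in a Worker this would be env.MY_KV.
const fakeKV: KVLike = {
  async get(key) {
    return key === "routing-config" ? '{"region":"weur"}' : null;
  },
};

readConfig(fakeKV, "routing-config").then(console.log); // → {"region":"weur"}
```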
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3pcw5pO2eeGJ1RriESJFSB/651fe26718f981eb741ad2ffb2f288c9/BLOG-2518_3.png" />
          </figure><p><sup><i>Sequence diagram of a Workers KV operation</i></sup></p><p>When your Worker makes a read operation to Workers KV, your Worker establishes a network connection within its Cloudflare region to KV’s Worker. The KV Worker then accesses the <a href="https://developers.cloudflare.com/workers/runtime-apis/cache/"><u>Cache API</u></a>, and in the event of a cache miss, retrieves the value from the storage backends. </p><p>Let’s look one level deeper at a simplified trace: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mCpF8NRgSg70p8VNTFXu8/4cabdae18285575891f49a5cd34c9ab8/BLOG-2518_4.png" />
          </figure><p><sup><i>Simplified trace of a Workers KV operation</i></sup></p><p>From the top, here are the operations completed for a KV read operation from your Worker:</p><ol><li><p>Your Worker makes a connection to Cloudflare’s network in the same data center. This incurs ~5 ms of network latency.</p></li><li><p>Upon entering Cloudflare’s network, a service called <a href="https://blog.cloudflare.com/upgrading-one-of-the-oldest-components-in-cloudflare-software-stack/"><u>Front Line (FL)</u></a> is used to process the request. This incurs ~10 ms of operational latency.</p></li><li><p>FL proxies the request to the KV Worker. The KV Worker does a cache lookup for the key being accessed. This, once again, passes through the Front Line layer, incurring an additional ~10 ms of operational latency.</p></li><li><p>Cache is stored in various backends within each region of Cloudflare’s network. A service built upon <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a>, our open-sourced Rust framework for proxying HTTP requests, routes the cache lookup to the proper cache backend.</p></li><li><p>Finally, if the cache lookup is successful, the KV read operation is resolved. Otherwise, the request reaches our storage backends, where it gets its value.</p></li></ol><p>Looking at these flame graphs, it became apparent that a major opportunity presented itself to us: reducing the FL overhead (or eliminating it altogether) and reducing the cache misses across the Cloudflare network would reduce the latency for KV operations.</p>
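The rough step costs above add up quickly. A back-of-envelope tally using the approximate figures quoted (illustrative, not measured):

```typescript
// Approximate per-step costs from the simplified trace (ms).
const readPathMs = {
  workerToNetwork: 5, // connection within the same data center
  flInbound: 10,      // FL processing on the way in
  flToKvWorker: 10,   // FL again between the binding and the KV Worker
};

const total = Object.values(readPathMs).reduce((a, b) => a + b, 0);
console.log(total); // → 25

// The two FL traversals alone account for ~20 ms, which is the
// overhead the direct bindings described in the next section remove.
const flOverhead = readPathMs.flInbound + readPathMs.flToKvWorker;
console.log(flOverhead); // → 20
```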
    <div>
      <h3>Bypassing FL layers between Workers and services to save ~20ms</h3>
      <a href="#bypassing-fl-layers-between-workers-and-services-to-save-20ms">
        
      </a>
    </div>
    <p>A request from your Worker to KV doesn’t need to go through FL. Much of FL’s responsibility is to process and route requests from outside of Cloudflare — that’s more than is needed to handle a request from the KV binding to the KV Worker. So we skipped the Front Line altogether in both layers.</p><div>
  
</div>
<p><sup><i>Reducing latency in a Workers KV operation by removing FL layers</i></sup></p><p>To bypass the FL layer from the KV binding in your Worker, we modified the KV binding to connect directly to the KV Worker within the same Cloudflare location. Within the Workers host, we configured a C++ subpipeline to allow code from bindings to establish a direct connection with the proper routing configuration and authorization loaded. </p><p>The KV Worker also passes through the FL layer on its way to our internal <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a> service. In this case, we were able to use an internal Worker binding that allows Workers for Cloudflare services to bind directly to non-Worker services within Cloudflare’s network. With this fix, the KV Worker sets the proper cache control headers and establishes its connection to Pingora without leaving the network. </p><p>Together, both of these changes reduced latency by ~20 ms for every KV operation. </p>
    <div>
      <h3>Implementing tiered cache to minimize requests to storage backends</h3>
      <a href="#implementing-tiered-cache-to-minimize-requests-to-storage-backends">
        
      </a>
    </div>
    <p>We also optimized KV’s architecture to reduce the amount of requests that need to reach our centralized storage backends. These storage backends are further away and incur network latency, so improving the cache hit rate in regions close to your Workers significantly improves read latency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1791aSxXPH1lgOIr3RQrLz/1685f6a57d627194e76cb657cd22ddd8/BLOG-2518_5.png" />
          </figure><p><sup><i>Workers KV uses Tiered Cache to resolve operations closer to your users</i></sup></p><p>To accomplish this, we used <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/#custom-tiered-cache"><u>Tiered Cache</u></a>, and implemented a cache topology that is fine-tuned to the usage patterns of KV. With a tiered cache, requests to KV’s storage backends are cached in regional tiers in addition to local (lower) tiers. With this architecture, KV operations that may be cache misses locally may be resolved regionally, which is especially significant if you have traffic across an entire region spanning multiple Cloudflare data centers. </p><p>This significantly reduced the amount of requests that needed to hit the storage backends, with ~30% of requests resolved in tiered cache instead of storage backends.</p>
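The lookup order can be sketched as follows. This is an illustration of the tiered topology, not Cloudflare's implementation; the `Tier` type and the example keys and values are hypothetical:

```typescript
// Each read tries progressively more distant tiers; only a miss at
// every tier falls through to the central storage backends.
type Tier = { name: string; lookup(key: string): string | undefined };

function tieredRead(
  key: string,
  tiers: Tier[],
  origin: Map<string, string>
): { value: string | undefined; resolvedAt: string } {
  for (const tier of tiers) {
    const hit = tier.lookup(key);
    if (hit !== undefined) return { value: hit, resolvedAt: tier.name };
  }
  // Every tier missed: resolve at the storage backends.
  return { value: origin.get(key), resolvedAt: "storage-backend" };
}

// Illustrative setup: the key is absent locally but cached regionally.
const local: Tier = { name: "local", lookup: () => undefined };
const regional: Tier = {
  name: "regional",
  lookup: (k) => (k === "user:42" ? "hello" : undefined),
};
const origin = new Map([["user:42", "hello"]]);

console.log(tieredRead("user:42", [local, regional], origin).resolvedAt); // → regional
```

The win is in the fall-through: a local miss that resolves at the regional tier never pays the long trip to the central backends.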
    <div>
      <h2>KV’s new architecture</h2>
      <a href="#kvs-new-architecture">
        
      </a>
    </div>
    <p>As a result of these optimizations, KV operations are now simplified:</p><ol><li><p>When you read from KV in your Worker, the <a href="https://developers.cloudflare.com/kv/concepts/kv-bindings/"><u>KV binding</u></a> binds directly to KV’s Worker, saving 10 ms. </p></li><li><p>The KV Worker binds directly to the Tiered Cache service, saving another 10 ms. </p></li><li><p>Tiered Cache is used in front of storage backends, to resolve local cache misses regionally, closer to your users.</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cW0MsOH120GKUAlIUvDR4/7f574632ee81d3b955ed99bf87d86fa2/BLOG-2518_6.png" />
          </figure><p><sup><i>Sequence diagram of KV operations with new architecture</i></sup></p><p>In aggregate, these changes significantly reduced KV’s latency. 

The impact of the direct binding to cache is clearly seen in the wall time of the KV Worker, given this value measures the duration of a retrieval of a key-value pair from cache. The 90th percentile of all KV Worker invocations now resolve in less than 12 ms — before the direct binding to cache, that was 22 ms. That’s a 10 ms decrease in latency. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UmcRB0Afk8mHig2DrThsh/d913cd33a28c1c2b093379238a90551c/BLOG-2518_7.png" />
          </figure><p><sup><i>Wall time (ms) within the KV Worker by percentile</i></sup></p><p>These KV read operations resolve quickly because the data is cached locally in the same Cloudflare location. But what about reads that aren’t resolved locally? ~30% of these resolve regionally within the tiered cache. Reads from tiered cache are up to 100 ms faster than when resolved at central storage backends, once again contributing to making KV reads faster in aggregate.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Gz2IxlcNuDDRp3MhC4m40/ee364b710cec4332a5c307a784f34fa4/BLOG-2518_8.png" />
          </figure><p><sup><i>Wall time (ms) within the KV Worker for tiered cache vs. storage backends reads</i></sup></p><p>These graphs demonstrate the impact of direct binding from the KV binding to cache, and tiered cache. To see the impact of the direct binding from a Worker to the KV Worker, we need to look at the latencies reported by Cloudflare products that use KV.</p><p><a href="https://developers.cloudflare.com/pages/"><u>Cloudflare Pages</u></a>, which serves static assets like HTML, CSS, and scripts from KV, saw load times for fetching assets improve by up to 68%. Workers asset hosting, which we also announced as part of today’s <a href="http://blog.cloudflare.com/builder-day-2024-announcements"><u>Builder Day announcements</u></a>, gets this improved performance from day 1.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67VzWOTRpMd9Dbc6ZM7M4M/cefb1d856344d9c963d4adfbe1b32fba/BLOG-2518_2.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Pages by percentile</i></sup></p><p><a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/applications/"><u>Access</u></a> also saw their latencies for KV operations drop, with their KV read operations now 2-5x faster. These services rely on Workers KV data for configuration and routing data, so KV’s performance improvement directly contributes to making them faster on each request. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Gz2IxlcNuDDRp3MhC4m40/ee364b710cec4332a5c307a784f34fa4/BLOG-2518_8.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Queues by percentile</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1HFapaO1Gv09g9VlODrLAu/56d39207fb3dfefe258fa5e8cb8bd67b/BLOG-2518_10.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Access by percentile</i></sup></p><p>These are just some of the direct effects that a faster KV has had on other services. Across the board, requests are resolving faster thanks to KV’s faster response times. </p><p>And we have one more thing to make KV lightning fast. </p>
    <div>
      <h3>Optimizing KV’s hottest keys with an in-memory cache </h3>
      <a href="#optimizing-kvs-hottest-keys-with-an-in-memory-cache">
        
      </a>
    </div>
    <p>Less than 0.03% of keys account for nearly half of requests to the Workers KV service across all namespaces. These keys are read thousands of times per second, so making these faster has a disproportionate impact. Could these keys be resolved within the KV Worker without needing additional network hops?</p><p>Almost all of these keys are under 100 KB. At this size, it becomes possible to use the in-memory cache of the KV Worker — a limited amount of memory within the <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates"><u>main runtime process</u></a> of a Worker sandbox. And that’s exactly what we did. For the highest throughput keys across Workers KV, reads resolve without even needing to leave the Worker runtime process.</p><div>
  
</div>
<p><sup><i>Sequence diagram of KV operations with the hottest keys resolved within an in-memory cache</i></sup></p><p>As a result of these changes, KV reads for these keys, which represent over 40% of Workers KV requests globally, resolve in under a millisecond. We’re actively testing these changes internally and expect to roll this out during October to speed up the hottest key-value pairs on Workers KV.</p>
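The idea can be sketched with a small bounded cache. This is a hypothetical illustration under the constraint the post describes (only values under 100 KB are eligible); the class name, FIFO eviction, and exact size check are assumptions, not the KV Worker's actual code:

```typescript
// Only small values are eligible for the in-memory cache.
const MAX_VALUE_BYTES = 100 * 1024;

class HotKeyCache {
  private store = new Map<string, string>();
  constructor(private maxEntries: number) {}

  get(key: string): string | undefined {
    return this.store.get(key);
  }

  set(key: string, value: string): boolean {
    // Values over the size cap are never cached in memory.
    if (new TextEncoder().encode(value).length > MAX_VALUE_BYTES) return false;
    // Simple FIFO eviction keeps memory in the runtime process bounded.
    if (this.store.size >= this.maxEntries && !this.store.has(key)) {
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, value);
    return true;
  }
}

const cache = new HotKeyCache(2);
cache.set("a", "1");
cache.set("b", "2");
cache.set("c", "3"); // evicts "a"
console.log(cache.get("a")); // → undefined
console.log(cache.get("c")); // → 3
```

A hit here resolves entirely inside the Worker runtime process, with no network hop at all, which is what makes sub-millisecond reads possible for the hottest keys.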
    <div>
      <h2>A faster KV for all</h2>
      <a href="#a-faster-kv-for-all">
        
      </a>
    </div>
    <p>Most of these speed gains are already enabled with no additional action needed from customers. Your websites that are using KV are already responding to requests faster for your users, as are the other Cloudflare services using KV under the hood and the countless websites that depend upon them. </p><p>And we’re not done: we’ll continue to chase performance throughout our stack to make your websites faster. That’s how we’re going to move the needle towards a faster Internet. </p><p>To see Workers KV’s recent speed gains for your own KV namespaces, head over to your dashboard and check out the <a href="https://developers.cloudflare.com/kv/observability/metrics-analytics/"><u>new KV analytics</u></a>, with latency and cache status detailed per namespace.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">76i5gcdv0wbMNnwa7E17MR</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Rob Sutter</dc:creator>
            <dc:creator>Andrew Plunk</dc:creator>
        </item>
    </channel>
</rss>