
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies they use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 23 Apr 2026 00:00:09 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Making Rust Workers reliable: panic and abort recovery in wasm‑bindgen]]></title>
            <link>https://blog.cloudflare.com/making-rust-workers-reliable/</link>
            <pubDate>Wed, 22 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Panics in Rust Workers were historically fatal, poisoning the entire instance. By collaborating upstream on the wasm‑bindgen project, Rust Workers now support resilient critical error recovery, including panic unwinding using WebAssembly Exception Handling. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://developers.cloudflare.com/workers/languages/rust/"><u>Rust Workers</u></a> run on the Cloudflare Workers platform by compiling Rust to WebAssembly, but as we’ve found, WebAssembly has some sharp edges. When things go wrong with a panic or an unexpected abort, the runtime can be left in an undefined state. For users of Rust Workers, panics were historically fatal, poisoning the instance and possibly even bricking the Worker for a period of time.</p><p>While we were able to detect and mitigate these issues, there remained a small chance that a Rust Worker would unexpectedly fail and cause other requests to fail along with it. An unhandled Rust abort in one request might escalate into a broader failure, affecting sibling requests or even new incoming requests. The root cause of this was in wasm-bindgen, the core project that generates the Rust-to-JavaScript bindings Rust Workers depend on, and its lack of built-in recovery semantics.</p><p>In this post, we’ll share how the latest version of Rust Workers implements comprehensive Wasm error recovery, solving this abort-induced sandbox poisoning. This work has been contributed back into <a href="https://github.com/wasm-bindgen/wasm-bindgen"><u>wasm-bindgen</u></a> as part of our collaboration within the wasm-bindgen organization <a href="https://blog.rust-lang.org/inside-rust/2025/07/21/sunsetting-the-rustwasm-github-org/"><u>formed last year</u></a>: first with <code>panic=unwind</code> support, which ensures that a single failed request never poisons other requests, and then with abort recovery mechanisms that guarantee Rust code on Wasm can never re-execute after an abort.</p>
    <div>
      <h2>Initial recovery mitigations</h2>
      <a href="#initial-recovery-mitigations">
        
      </a>
    </div>
    <p>Our initial attempts to address reliability in this area focused on understanding and containing failures caused by Rust panics and aborts in production Rust Workers. We introduced a custom Rust panic handler that tracked failure state within a Worker and triggered full application reinitialization before handling subsequent requests. On the JavaScript side, this required wrapping the Rust-JavaScript call boundary using Proxy‑based indirection to ensure that all entrypoints were consistently encapsulated. We also made targeted modifications to the generated bindings to correctly reinitialize the WebAssembly module after a failure.</p><p>While this approach relied on custom JavaScript logic, it demonstrated that reliable recovery was achievable and eliminated the persistent failure modes we were seeing in practice. This solution was shipped by default to all workers‑rs users starting in version 0.6, and it laid the groundwork for the more general, upstreamed abort recovery mechanisms described in the sections that follow.</p>
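    <p>As a rough illustrative sketch of this failure-state tracking (the names <code>POISONED</code>, <code>install_failure_tracking</code>, and <code>needs_reinit</code> are ours, not the workers-rs internals, and we use <code>catch_unwind</code> here only to simulate a failing request natively), a custom panic hook can record that the instance is poisoned so the next entrypoint knows to reinitialize first:</p>

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical stand-in for the failure flag tracked by the workers-rs
// recovery wrapper; the real logic lives in the generated JS glue.
static POISONED: AtomicBool = AtomicBool::new(false);

fn install_failure_tracking() {
    let previous = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        // Record that this instance can no longer be trusted.
        POISONED.store(true, Ordering::SeqCst);
        previous(info);
    }));
}

// Checked at each entrypoint: returns true exactly once after a failure,
// signalling that the application must be reinitialized first.
fn needs_reinit() -> bool {
    POISONED.swap(false, Ordering::SeqCst)
}

fn main() {
    install_failure_tracking();
    // Simulate a request handler that panics.
    let _ = panic::catch_unwind(AssertUnwindSafe(|| panic!("request failed")));
    assert!(needs_reinit());  // next request sees the poisoned flag once
    assert!(!needs_reinit()); // then state is clean again after reinit
}
```

    <p>The key property is that the flag is checked and cleared at the call boundary, so a failure in one request triggers exactly one reinitialization before the next request runs.</p>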
    <div>
      <h2>Implementing <code>panic=unwind</code> with WebAssembly Exception Handling</h2>
      <a href="#implementing-panic-unwind-with-webassembly-exception-handling">
        
      </a>
    </div>
    <p>The abort recovery mechanisms described above ensure that a Worker can survive a failure, but they do so by reinitializing the entire application. For stateless request handlers, this is fine. But for workloads that hold meaningful state in memory, such as Durable Objects, reinitialization means losing that state entirely. A single panic in one request could wipe the in-memory state being used by other concurrent requests.</p><p>In most native Rust environments, panics can be unwound, allowing destructors to run and the program to recover without losing state. In WebAssembly, things historically looked very different. Rust compiled to Wasm via <code>wasm32-unknown-unknown</code> defaults to <code>panic=abort</code>, so a panic inside a Rust Worker would abruptly trap with an <code>unreachable</code> instruction and exit Wasm back to JS with a <code>WebAssembly.RuntimeError</code>.</p><p>To recover from panics without discarding instance state, we needed <code>panic=unwind</code> support for <code>wasm32-unknown-unknown</code> in wasm-bindgen, made possible by the WebAssembly Exception Handling proposal, which gained wide engine support in 2023.</p><p>We start by compiling with <code>RUSTFLAGS='-Cpanic=unwind' cargo build -Zbuild-std</code>, which rebuilds the standard library with unwind support and generates code with proper panic unwinding. For example:</p>
            <pre><code>struct HasDropA;
impl Drop for HasDropA {
    fn drop(&amp;mut self) {}
}

struct HasDropB;
impl Drop for HasDropB {
    fn drop(&amp;mut self) {}
}

// "C-unwind" lets a panic unwind across the FFI boundary
// instead of aborting.
extern "C-unwind" {
    fn imported_func();
}

fn some_func() {
    let a = HasDropA;
    let b = HasDropB;
    unsafe { imported_func() };
}</code></pre>
            <p>compiles to WebAssembly as:</p>
            <pre><code>try
  call &lt;imported_func&gt;
catch_all
  call &lt;drop_b&gt;
  call &lt;drop_a&gt;
  rethrow
end
call &lt;drop_b&gt;
call &lt;drop_a&gt;</code></pre>
            <p>This ensures that even if <code>imported_func()</code> panics, destructors still run. Similarly, <code>std::panic::catch_unwind(|| some_func())</code> compiles into:</p>
            <pre><code>try
  call &lt;some_func&gt;
  ;; set result to Ok(return value)
catch
  try
    call &lt;std::panicking::catch_unwind::cleanup&gt;
    ;; set result to Err(panic payload)
  catch_all
    call &lt;core::panicking::cannot_unwind&gt;
    unreachable
  end
end</code></pre>
    <p>Getting this to work end-to-end required several changes to the wasm-bindgen toolchain. The WebAssembly parser Walrus did not know how to handle try/catch instructions, so we added support for them. The descriptor interpreter also needed to be taught how to evaluate code containing exception handling blocks. At that point, the full application could be built with <code>panic=unwind</code>.</p><p>The final step was modifying the exports generated by wasm-bindgen to catch panics at the Rust-JavaScript boundary and surface them as JavaScript <code>PanicError</code> exceptions. One subtlety: Rust will catch foreign exceptions and abort when unwinding through <code>extern "C"</code> functions, so exports needed to be marked <code>extern "C-unwind"</code> to explicitly allow unwinding across the boundary. For futures, a panic rejects the JavaScript <code>Promise</code> with a <code>PanicError</code>.</p><p>Closures required special attention to ensure unwind safety was properly checked, via a new <code>MaybeUnwindSafe</code> trait that checks <code>UnwindSafe</code> only when built with <code>panic=unwind</code>. This quickly exposed a problem, though: many closures capture references to state that outlives an unwind, making them inherently unwind-unsafe. To avoid a situation where users are encouraged to incorrectly wrap closures in <code>AssertUnwindSafe</code> just to satisfy the compiler, we added <code>Closure::new_aborting</code> variants, which terminate on panic instead of unwinding in cases where unwind safety can't be guaranteed.</p><p>With panic unwinding enabled:</p><ul><li><p>Panics in exported Rust functions are caught by wasm-bindgen</p></li><li><p>Panics surface to JavaScript as <code>PanicError</code> exceptions</p></li><li><p>Async exports reject their returned promises with a <code>PanicError</code></p></li><li><p>Rust destructors run correctly</p></li><li><p>The WebAssembly instance remains valid and reusable</p></li></ul><p>The full details of the approach and how to use it in wasm-bindgen are covered in the latest guide page for <a href="https://wasm-bindgen.github.io/wasm-bindgen/reference/catch-unwind.html"><u>Wasm Bindgen: Catching Panics</u></a>.</p>
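    <p>To see what this recovery looks like in plain Rust (running natively rather than in Wasm), here is a minimal sketch of the pattern the export wrapper follows: catch the unwind, let destructors run, and surface the panic payload as an error instead of aborting. The names <code>handler</code> and <code>call_export</code> are ours for illustration:</p>

```rust
use std::panic::{self, AssertUnwindSafe};

struct Guard(&'static str);
impl Drop for Guard {
    fn drop(&mut self) {
        // Runs during unwinding, so held resources are released cleanly.
        println!("dropped {}", self.0);
    }
}

// Stand-in for an exported request handler that panics mid-request.
fn handler() -> &'static str {
    let _guard = Guard("request state");
    panic!("handler panicked");
}

// Mirrors the shape of the generated export wrapper: in wasm-bindgen the
// caught payload is surfaced to JavaScript as a PanicError exception.
fn call_export() -> Result<&'static str, String> {
    panic::catch_unwind(AssertUnwindSafe(handler)).map_err(|payload| {
        payload
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .unwrap_or_else(|| "unknown panic".to_string())
    })
}

fn main() {
    assert_eq!(call_export(), Err("handler panicked".to_string()));
    // The process is still alive and the "instance" remains usable.
    println!("recovered; ready for the next request");
}
```

    <p>The caller receives an <code>Err</code> describing the panic, <code>Guard</code>'s destructor has already run, and the surrounding state is untouched, which is exactly the behavior that keeps a Durable Object's in-memory state intact across a panicking request.</p>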
    <div>
      <h2>Abort recovery</h2>
      <a href="#abort-recovery">
        
      </a>
    </div>
    <p>Even with <code>panic=unwind</code> support, aborts still happen: out-of-memory errors are one common cause. Because aborts can’t unwind, there is no possibility of state recovery at all, but we can at least detect aborts and recover for future operations, preventing invalid state from failing subsequent requests.</p><p>Panic unwind support introduced a new problem for abort recovery. When we receive an error from Wasm, we don’t know whether it came from a foreign exception propagating through an <code>extern "C-unwind"</code> boundary or whether it was a genuine abort. Aborts can take many shapes in WebAssembly.</p><p>We had two options to solve this technically: either mark all errors which are definitely aborts, or mark all errors which are definitely unwinds. Either could have worked, but we chose the latter. Since our foreign exception handling was already using raw WAT-level (WebAssembly text format) Exception Handling instructions directly, we found it easier to attach exception tags to foreign exceptions, distinguishing them from genuine aborts.</p><p>With the ability to clearly distinguish between recoverable and non-recoverable errors thanks to exception tags in WebAssembly Exception Handling, we were then able to integrate both a new abort handler and abort reentrancy guards.</p><p>A new abort hook, <code>set_on_abort</code>, can be used at initialization time to attach a handler that performs recovery appropriate to the embedding platform’s needs.</p><p>Hardening panic and abort handling is critical to avoiding invalid execution state. WebAssembly allows deeply interleaved call stacks, where Wasm can call into JavaScript and JavaScript can re-enter Wasm at arbitrary depths, while alongside this, multiple tasks can be in flight in the same instance. Previously, an abort occurring in one task or nested stack was not guaranteed to invalidate higher stacks through JS, leading to undefined behavior. Careful work was required to guarantee the execution model, and contribution in this space remains ongoing.</p><p>While aborts are never ideal, and reinitialization on failure is an absolute worst-case scenario, implementing critical error recovery as the last line of defense ensures execution correctness and that future operations will be able to succeed. The invalid state does not persist, ensuring a single failure does not cascade into multiple failures.</p>
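    <p>As a hypothetical sketch of the hook pattern (the real <code>set_on_abort</code> in wasm-bindgen attaches a handler on the JavaScript side; <code>handle_abort</code> below is our own illustrative name, not part of the API), the register-once, invoke-on-abort shape looks like this in plain Rust:</p>

```rust
use std::sync::OnceLock;

// Illustrative model only: a single abort handler registered at
// initialization time, invoked when a non-unwind error is detected.
static ON_ABORT: OnceLock<fn(&str)> = OnceLock::new();

fn set_on_abort(handler: fn(&str)) {
    // First registration wins; later calls are ignored.
    let _ = ON_ABORT.set(handler);
}

fn handle_abort(reason: &str) -> bool {
    match ON_ABORT.get() {
        Some(handler) => {
            // Give the embedder a chance to tear down and schedule
            // reinitialization before the next operation runs.
            handler(reason);
            true
        }
        // With no handler installed, the instance stays poisoned.
        None => false,
    }
}

fn main() {
    set_on_abort(|reason| eprintln!("abort detected: {reason}; scheduling reinit"));
    assert!(handle_abort("out of memory"));
}
```

    <p>Because aborts are classified by the absence of an exception tag, only genuine aborts reach the handler; tagged unwinds continue to propagate as recoverable <code>PanicError</code>s.</p>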
    <div>
      <h2>Extension: abort reinitialization for wasm-bindgen libraries</h2>
      <a href="#extension-abort-reinitialization-for-wasm-bindgen-libraries">
        
      </a>
    </div>
    <p>While we were working on this, we realized that this is a common problem for wasm-bindgen-built libraries consumed from JS, and that they would also benefit from being able to attach an abort handler to perform recovery.</p><p>But when building Wasm as an ES module and importing it directly (e.g. via <code>import { func } from 'wasm-dep'</code>), it’s not clear what the recovery mechanism would be when a Wasm abort occurs while calling <code>func()</code> in an already-linked, already-initialized library inside a user’s JS application.</p><p>While not strictly a Rust Workers use case, our team also supports JS-based Workers users who run Rust-backed Wasm library dependencies. If we could fix this problem at the same time, that could indirectly also benefit Wasm usage on the Cloudflare Workers platform.</p><p>To support automatic abort recovery for Wasm library use cases, we added support for an experimental reinitialization mechanism into wasm‑bindgen, <code>--reset-state-function</code>. This exposes a function that allows the Rust application to effectively request that it reset its internal Wasm instance back to its initial state for the next call, without requiring consumers of the generated bindings to reimport or recreate them. Class instances created before the reset will throw as their handles become orphaned, but new instances can then be constructed. The JS application using the Wasm library sees an error, but is not bricked.</p><p>The full technical details of this feature and how to use it in wasm-bindgen are covered in the new wasm-bindgen guide section <a href="https://wasm-bindgen.github.io/wasm-bindgen/reference/handling-aborts.html"><u>Wasm Bindgen: Handling Aborts</u></a>.</p>
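    <p>The orphaned-handle behavior can be modeled with a generation counter. This is a hypothetical sketch of the semantics, not the wasm-bindgen implementation, and every name in it (<code>GENERATION</code>, <code>Handle</code>, <code>reset_state</code>, <code>use_it</code>) is ours: a reset bumps the generation, so handles minted against the old instance error out rather than touching reinitialized state.</p>

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Current "instance generation"; a reset starts a new generation.
static GENERATION: AtomicU64 = AtomicU64::new(0);

// A handle remembers which generation it was minted in.
struct Handle {
    generation: u64,
}

fn new_handle() -> Handle {
    Handle { generation: GENERATION.load(Ordering::SeqCst) }
}

// Stand-in for the generated reset function: the next call sees a fresh
// instance, and every outstanding handle is invalidated at once.
fn reset_state() {
    GENERATION.fetch_add(1, Ordering::SeqCst);
}

impl Handle {
    fn use_it(&self) -> Result<(), &'static str> {
        if self.generation == GENERATION.load(Ordering::SeqCst) {
            Ok(())
        } else {
            Err("handle orphaned by instance reset")
        }
    }
}

fn main() {
    let old = new_handle();
    assert!(old.use_it().is_ok());
    reset_state();
    assert!(old.use_it().is_err()); // old class instances now throw
    assert!(new_handle().use_it().is_ok()); // new ones work fine
}
```

    <p>The design trade-off is the same as described above: stale handles fail loudly instead of silently reading reinitialized memory, which keeps a single abort from corrupting later calls.</p>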
    <div>
      <h2>Maturing the Rust Wasm Exception Handling ecosystem</h2>
      <a href="#maturing-the-rust-wasm-exception-handling-ecosystem">
        
      </a>
    </div>
    <p>Upstream contributions for this work did not stop at the wasm-bindgen project. Building for Wasm with <code>panic=unwind</code> still requires an experimental nightly Rust target, so we’ve also been working to advance Rust’s Wasm support for WebAssembly Exception Handling to help bring this to stable Rust.</p><p>During the development of WebAssembly Exception Handling, a late‑stage specification change resulted in two variants: <a href="https://github.com/WebAssembly/exception-handling/blob/main/proposals/exception-handling/legacy/Exceptions.md"><u>legacy exception handling</u></a> and the final modern <a href="https://github.com/WebAssembly/exception-handling/blob/main/proposals/exception-handling/Exceptions.md"><u>exception handling "with exnref"</u></a>. Today, Rust’s WebAssembly targets still default to emitting code for the legacy variant. While legacy exception handling is widely supported, it is now deprecated.</p><p>Modern WebAssembly Exception Handling is supported as of the following JS platform releases:</p><table><tr><td><p><b>Runtime</b></p></td><td><p><b>Version</b></p></td><td><p><b>Release Date</b></p></td></tr><tr><td><p>v8</p></td><td><p>13.8.1</p></td><td><p>April 28, 2025</p></td></tr><tr><td><p>workerd</p></td><td><p>v1.20250620.0</p></td><td><p>June 19, 2025</p></td></tr><tr><td><p>Chrome</p></td><td><p>138</p></td><td><p>June 28, 2025</p></td></tr><tr><td><p>Firefox</p></td><td><p>131</p></td><td><p>October 1, 2024</p></td></tr><tr><td><p>Safari</p></td><td><p>18.4</p></td><td><p>March 31, 2025</p></td></tr><tr><td><p>Node.js</p></td><td><p>25.0.0</p></td><td><p>October 15, 2025</p></td></tr></table><p>As we were investigating the support matrix, the largest concern ended up being the Node.js 24 LTS release schedule, which would have left the entire ecosystem stuck on legacy WebAssembly Exception Handling until April 2028.</p><p>Having discovered this discrepancy, we were able to backport modern exception handling to the Node.js 24 
release, and even backport the fixes needed to make it work on the Node.js 22 release line to ensure support for this target. This should allow the modern Exception Handling proposal to become the default target next year.</p><p>Over the coming months, we’ll be working to make the transition to stable <code>panic=unwind</code> and modern Exception Handling as invisible as possible to end users.</p><p>While these long‑term investments in the ecosystem take time, they help build a stronger foundation for the Rust WebAssembly community as a whole, and we’re glad to be able to contribute to these improvements.</p>
    <div>
      <h2>Using panic unwind in Rust Workers</h2>
      <a href="#using-panic-unwind-in-rust-workers">
        
      </a>
    </div>
    <p>As of version 0.8.0 of Rust Workers, we have a new <code>--panic-unwind</code> flag, which can be added to the build command, following the <a href="https://github.com/cloudflare/workers-rs?tab=readme-ov-file#panic-recovery-with---panic-unwind"><u>instructions here</u></a>.</p><p>With this flag, panics are fully recoverable, and abort recovery uses the new abort classification and recovery hook mechanism. We highly recommend upgrading and trying it out for a more stable Rust Workers experience, and plan to make <code>panic=unwind</code> the default in a subsequent release. Users remaining on <code>panic=abort</code> will continue to benefit from the custom recovery wrapper introduced in 0.6.0.</p>
    <div>
      <h2>Committing to Rust Workers stability</h2>
      <a href="#committing-to-rust-workers-stability">
        
      </a>
    </div>
    <p>This work is part of our ongoing effort towards a stable release for Rust Workers. By solving these sharp edges of the Wasm platform foundations at their root, and contributing back to the ecosystem where it makes sense, we build stronger foundations not just for our platform, but the entire Rust, JS, and Wasm ecosystem.</p><p>We have a number of future improvements planned for Rust Workers, and we’ll soon be sharing updates on this additional work, including wasm-bindgen generics and automated bindgen, which Guy Bedford from our team previewed in a talk on <a href="https://www.youtube.com/watch?v=zlSJY8Qv5XI"><u>Rust &amp; JS Interoperability at Wasm.io</u></a> last month.</p><p>Find us in <b>#rust‑on‑workers</b> on the <a href="https://discord.com/invite/cloudflaredev"><u>Cloudflare Discord</u></a>. We also welcome feedback and discussion and especially all new contributors to the <a href="https://github.com/cloudflare/workers-rs"><u>workers-rs</u></a> and <a href="https://github.com/wasm-bindgen/wasm-bindgen"><u>wasm-bindgen</u></a> GitHub projects.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Rust Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">2StUNK716oSircL9mJUjrg</guid>
            <dc:creator>Guy Bedford</dc:creator>
            <dc:creator>Hood Chatham</dc:creator>
            <dc:creator>Logan Gatlin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building the agentic cloud: everything we launched during Agents Week 2026]]></title>
            <link>https://blog.cloudflare.com/agents-week-in-review/</link>
            <pubDate>Mon, 20 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Agents Week 2026 is a wrap. Let’s take a look at everything we shipped for the agentic cloud, from compute and security to the agent toolbox, platform tools, and the emerging agentic web. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today marks the end of our first Agents Week, an innovation week dedicated entirely to the age of agents. It couldn’t have been more timely: over the past year, agents have swiftly changed how people work. Coding agents are helping developers ship faster than ever. Support agents resolve tickets end-to-end. Research agents validate hypotheses across hundreds of sources in minutes. And people aren't just running one agent: they're running several in parallel and around the clock.</p><p>As Cloudflare's CTO Dane Knecht and VP of Product Rita Kozlov noted in our <a href="https://blog.cloudflare.com/welcome-to-agents-week/"><u>welcome to Agents Week post</u></a>, the potential scale of agents is staggering: If even a fraction of the world's knowledge workers each run a few agents in parallel, you need compute capacity for tens of millions of simultaneous sessions. The one-app-serves-many-users model the cloud was built on doesn't work for that. But that's exactly what developers and businesses want to do: build agents, deploy them to users, and run them at scale.</p><p>Getting there means solving problems across the entire stack. Agents need <b>compute</b> that scales from full operating systems to lightweight isolates. They need <b>security</b> and identity built into how they run.  They need an <b>agent toolbox</b>: the right models, tools, and context to do real work. All the code that agents generate needs a clear path from afternoon <b>prototype to production</b> app. And finally, as agents drive a growing share of Internet traffic, the web itself needs to adapt for the emerging <b>agentic web</b>. Turns out, the containerless, serverless compute platform we launched eight years ago with Workers was ready-made for this moment. 
Since then, we've grown it into a full platform, and this week we shipped the next wave of primitives purpose-built for agents, organized around exactly those problems.</p><p>We are here to create Cloud 2.0 — the agentic cloud. Infrastructure designed for a world where agents are a primary workload. </p><p>Here's a list of everything we announced this week — we wouldn’t want you to miss a thing.</p>
    <div>
      <h2>Compute</h2>
      <a href="#compute">
        
      </a>
    </div>
    <p>It starts with compute. Agents need somewhere to run, and somewhere to store and run the code they write. Not all agents need the same thing: some need a full operating system to install packages and run terminal commands, most need something lightweight that starts in milliseconds and scales to millions. This week we shipped the environments to run them, as well as a new Git-compatible workspace for agents:</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/artifacts-git-for-agents-beta"><u>Artifacts: Versioned storage that speaks Git</u></a></p></td><td><p>Give your agents, developers, and automations a home for code and data. We’ve just launched Artifacts: Git-compatible versioned storage built for agents. Create tens of millions of repos, fork from any remote, and hand off a URL to any Git client.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/sandbox-ga/"><u>Agents have their own computers with Sandboxes GA</u></a></p></td><td><p>Cloudflare Sandboxes give AI agents a persistent, isolated environment: a real computer with a shell, a filesystem, and background processes that starts on demand and picks up exactly where it left off.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/sandbox-auth/"><u>Dynamic, identity-aware, and secure: egress controls for Sandboxes</u></a></p></td><td><p>Outbound Workers for Sandboxes provide a programmable, zero-trust egress proxy for AI agents. This allows developers to inject credentials and enforce dynamic security policies without exposing sensitive tokens to untrusted code.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/durable-object-facets-dynamic-workers/"><u>Durable Objects in Dynamic Workers: Give each AI-generated app its own database</u></a></p></td><td><p>Durable Object Facets allows Dynamic Workers to instantiate Durable Objects with their own isolated SQLite databases. 
This enables developers to build platforms that run persistent, stateful code generated on-the-fly.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workflows-v2/"><u>Rearchitecting the Workflows control plane for the agentic era</u></a></p></td><td><p>Cloudflare Workflows, a durable execution engine for multi-step applications, now supports 50,000 concurrency and 300 creation rate limits through a rearchitectured control plane, helping scale to meet the use cases for durable background agents.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GiG5cEdgwoLU94AuUuLfS/f9022abc8ce823ef2468860474598b6f/BLOG-3239_2.png" />
          </figure>
    <div>
      <h2>Security</h2>
      <a href="#security">
        
      </a>
    </div>
    <p>Running agents and their code is only half the challenge. Agents connect to private networks, access internal services, and take autonomous actions on behalf of users. When anyone in an organization can spin up their own agents, security can't be an afterthought. It has to be the default. This week, we launched the tools to make that easy.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/mesh/"><u>Secure private networking for everyone: users, nodes, agents, Workers — introducing Cloudflare Mesh</u></a></p></td><td><p>Cloudflare Mesh provides secure, private network access for users, nodes, and autonomous AI agents. By integrating with Workers VPC, developers can now grant agents scoped access to private databases and APIs without manual tunnels.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/managed-oauth-for-access/"><u>Managed OAuth for Access: make internal apps agent-ready in one click</u></a></p></td><td><p>Managed OAuth for Cloudflare Access helps AI agents securely navigate internal applications. By adopting RFC 9728, agents can authenticate on behalf of users without using insecure service accounts.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/improved-developer-security/"><u>Securing non-human identities: automated revocation, OAuth, and scoped permissions</u></a></p></td><td><p>Cloudflare is introducing scannable API tokens, enhanced OAuth visibility, and GA for resource-scoped permissions. These tools help developers implement a true least-privilege architecture while protecting against credential leakage.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/enterprise-mcp/"><u>Scaling MCP adoption: our reference architecture for enterprise MCP deployments</u></a></p></td><td><p>We share Cloudflare's internal strategy for governing MCP using Access, AI Gateway, and MCP server portals. 
We also launch Code Mode to slash token costs and recommend new rules for detecting Shadow MCP in Cloudflare Gateway.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6dT2Kw2vS0AJvk6Nw1oAg2/c1b9ca10314f85664ce6a939339f1247/BLOG-3239_3.png" />
          </figure>
    <div>
      <h2>Agent Toolbox</h2>
      <a href="#agent-toolbox">
        
      </a>
    </div>
    <p>A capable agent needs to be able to think and remember, communicate, and see. This means being powered with the right models, with access to the right tools and the right context for their task at hand. This week we shipped the primitives — inference, search, memory, voice, email, and a browser — that turn an agent into something that actually gets work done.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/project-think/"><u>Project Think: building the next generation of AI agents on Cloudflare</u></a></p></td><td><p>Announcing a preview of the next edition of the Agents SDK — from lightweight primitives to a batteries-included platform for AI agents that think, act, and persist.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/voice-agents/"><u>Add voice to your agent</u></a></p></td><td><p>An experimental voice pipeline for the Agents SDK enables real-time voice interactions over WebSockets. Developers can now build agents with continuous STT and TTS in just ~30 lines of server-side code.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/email-for-agents/"><u>Cloudflare Email Service: now in public beta. Ready for your agents</u></a></p></td><td><p>Agents are becoming multi-channel. That means making them available wherever your users already are — including the inbox. Cloudflare Email Service enters public beta with the infrastructure layer to make that easy: send, receive, and process email natively from your agents.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-platform/"><u>Cloudflare's AI platform: an inference layer designed for agents </u></a></p></td><td><p>We're building Cloudflare into a unified inference layer for agents, letting developers call models from 14+ providers. 
New features include Workers binding for running third-party models and an expanded catalog with multimodal models.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/high-performance-llms/"><u>Building the foundation for running extra-large language models</u></a></p></td><td><p>We built a custom technology stack to run fast large language models on Cloudflare’s infrastructure. This post explores the engineering trade-offs and technical optimizations required to make high-performance AI inference accessible.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/unweight-tensor-compression/"><u>Unweight: how we compressed an LLM 22% without sacrificing quality</u></a></p></td><td><p>Running large LLMs across Cloudflare’s network requires us to be smarter and more efficient about GPU memory bandwidth. That’s why we developed Unweight, a lossless inference-time compression system that achieves up to a 22% model footprint reduction, so that we can deliver faster and cheaper inference than ever before. </p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-agent-memory/"><u>Agents that remember: introducing Agent Memory</u></a></p></td><td><p>Cloudflare Agent Memory is a managed service that gives AI agents persistent memory, allowing them to recall what matters, forget what doesn't, and get smarter over time.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-search-agent-primitive/"><u>AI Search: the search primitive for your agents</u></a></p></td><td><p>AI Search is the search primitive for your agents. Create instances dynamically, upload files, and search across instances with hybrid retrieval and relevance boosting. 
Just create a search instance, upload, and search.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/browser-run-for-ai-agents/"><u>Browser Run: give your agents a browser</u></a></p></td><td><p>Browser Rendering is now Browser Run, with Live View, Human in the Loop, CDP access, session recordings, and 4x higher concurrency limits for AI agents.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3sTmO0Pt0PabTR2g8pZ963/a3f523a17db3a13301072232664b11c2/BLOG-3239_4.png" />
          </figure>
    <div>
      <h2>Prototype to production</h2>
      <a href="#prototype-to-production">
        
      </a>
    </div>
    <p>The best infrastructure is also one that’s easy to use. We want to meet developers and their agents where they’re already working: in the terminal, in the editor, in a prompt, and make the full Cloudflare platform accessible without context-switching.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/cf-cli-local-explorer/"><u>Building a CLI for all of Cloudflare</u></a></p></td><td><p>We’re introducing cf, a new unified CLI designed for consistency across the Cloudflare platform, alongside Local Explorer for debugging local data. These tools simplify how developers and AI agents interact with our nearly 3,000 API operations.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-agent-lee/"><u>Introducing Agent Lee - a new interface to the Cloudflare stack</u></a></p></td><td><p>Agent Lee is an in-dashboard agent that shifts Cloudflare’s interface from manual tab-switching to a single prompt. Using sandboxed TypeScript, it helps you troubleshoot and manage your stack as a grounded technical collaborator.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/flagship/"><u>Introducing Flagship: feature flags built for the age of AI</u></a></p></td><td><p>Introducing Flagship, a native feature flag service built on Cloudflare’s global network to eliminate the latency of third-party providers. 
By using KV and Durable Objects, Flagship allows for sub-millisecond flag evaluation.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/deploy-planetscale-postgres-with-workers/"><u>Deploy Postgres and MySQL databases with PlanetScale + Workers</u></a></p></td><td><p>Learn how to deploy PlanetScale Postgres and MySQL databases via Cloudflare and connect Cloudflare Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/registrar-api-beta/"><u>Register domains wherever you build: Cloudflare Registrar API now in beta</u></a></p></td><td><p>The Cloudflare Registrar API is now in beta. Developers and AI agents can search, check availability, and register domains at cost directly from their editor, their terminal, or their agent — without leaving their workflow.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BuOIU5Q3gfMIekZJEiBra/732c42b78aa397e7bb19b3816c7d1516/BLOG-3239_5.png" />
          </figure>
    <div>
      <h2>Agentic Web</h2>
      <a href="#agentic-web">
        
      </a>
    </div>
<p>As more agents come online, they're still browsing an Internet that was built for people. Existing websites need new tools to control what bots can access their content, package and present it for agents, and measure how ready they are for this shift.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/agent-readiness/"><u>Introducing the Agent Readiness score. Is your site agent-ready?</u></a></p></td><td><p>The Agent Readiness score can help site owners understand how well their websites support AI agents. Here we explore new standards, share Radar data, and detail how we made Cloudflare’s docs the most agent-friendly on the web.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-redirects/"><u>Redirects for AI Training enforces canonical content</u></a></p></td><td><p>Soft directives don’t stop crawlers from ingesting deprecated content. Redirects for AI Training allows anybody on Cloudflare to redirect verified crawlers to canonical pages with one toggle and no origin changes.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/network-performance-agents-week/"><u>Agents Week: Network performance update</u></a></p></td><td><p>By migrating our request handling layer to a Rust-based architecture called FL2, Cloudflare has increased its performance lead to 60% of the world’s top networks. We use real-user measurements and TCP connection trimeans to ensure our data reflects the actual experience of people on the Internet.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/shared-dictionaries/"><u>Shared dictionary compression that keeps up with the agentic web</u></a></p></td><td><p>We give you a sneak peek of our support for shared compression dictionaries, show you how it improves page load times, and reveal when you’ll be able to try the beta yourself.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jKSnMla0hclK999LtHfuu/886f66ffda1c699a5a15bf6748a6cf9b/BLOG-3239_6.png" />
          </figure>
    <div>
      <h2>That’s a wrap</h2>
      <a href="#thats-a-wrap">
        
      </a>
    </div>
    <p>Agents Week 2026 is ending, but the agentic cloud is just getting started. Everything we shipped this week — from compute and security to the agent toolbox and the agentic web — is the foundation. We're going to keep building on it to give you everything you need to build what's next.</p><p>We also have more blog posts coming out today and tomorrow to continue the story, so keep an eye out for the latest <a href="https://blog.cloudflare.com/"><u>at our blog</u></a>.</p><p>If you're building on any of what we announced this week, we want to hear about it. Come find us on <a href="https://x.com/cloudflaredev"><u>X</u></a> or <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a>, or head to the <a href="https://developers.cloudflare.com/products/?product-group=Developer+platform"><u>developer documentation</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YLNvB0KU0Eh3KpAj6YgD6/77c190843d1fa76b212571f1d607d8d2/BLOG-3239_7.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SDK]]></category>
            <category><![CDATA[Browser Run]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Browser Rendering]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Sandbox]]></category>
            <category><![CDATA[LLM]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[API]]></category>
            <guid isPermaLink="false">25CSwW9eXDM4FJOdVPLf8f</guid>
            <dc:creator>Ming Lu</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[The AI engineering stack we built internally — on the platform we ship]]></title>
            <link>https://blog.cloudflare.com/internal-ai-engineering-stack/</link>
            <pubDate>Mon, 20 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ We built our internal AI engineering stack on the same products we ship. That means 20 million requests routed through AI Gateway, 241 billion tokens processed, and inference running on Workers AI, serving more than 3,683 internal users. Here's how we did it.
 ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In the last 30 days, 93% of Cloudflare’s R&amp;D organization used AI coding tools powered by infrastructure we built on our own platform.</p><p>Eleven months ago, we undertook a major project: to truly integrate AI into our engineering stack. We needed to build the internal MCP servers, access layer, and AI tooling necessary for agents to be useful at Cloudflare. We pulled together engineers from across the company to form a tiger team called iMARS (Internal MCP Agent/Server Rollout Squad). The sustained work landed with the Dev Productivity team, who also own much of our internal tooling including CI/CD, build systems, and automation.</p><p>Here are some numbers that capture our own agentic AI use over the last 30 days:</p><ul><li><p><b>3,683 internal users</b> actively using AI coding tools (60% company-wide, 93% across R&amp;D), out of approximately 6,100 total employees</p></li><li><p><b>47.95 million</b> AI requests </p></li><li><p><b>295 teams</b> are currently utilizing agentic AI tools and coding assistants.</p></li><li><p><b>20.18 million</b> AI Gateway requests per month</p></li><li><p><b>241.37 billion</b> tokens routed through AI Gateway</p></li><li><p><b>51.83 billion</b> tokens processed on Workers AI</p></li></ul><p>The impact on developer velocity internally is clear: we’ve never seen a quarter-to-quarter increase in merge requests to this degree.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lMKwBTT3m7DDmS4BnoNhB/9002104b9c6c09cf72f4052d71e22363/BLOG-3270_2.png" />
</figure><p>As AI tooling adoption has grown, the 4-week rolling average has climbed from ~5,600/week to over 8,700/week. The week of March 23 hit 10,952, nearly double the Q4 baseline.</p><p>MCP servers were the starting point, but the team quickly realized we needed to go further: rethink how standards are codified, how code gets reviewed, how engineers onboard, and how changes propagate across thousands of repos.</p><p>This post dives deep into what that looked like over the past eleven months and where we ended up. We're publishing now, to close out Agents Week, because the AI engineering stack we built internally runs on the same products we’re shipping and enhancing this week.</p>
    <div>
      <h2>The architecture at a glance</h2>
      <a href="#the-architecture-at-a-glance">
        
      </a>
    </div>
<p>The engineer-facing tools layer (<a href="https://opencode.ai/"><u>OpenCode</u></a>, Windsurf, and other MCP-compatible clients) includes both open-source and third-party coding assistants.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2oZJhdVNbW05lJaSnN3hds/21c5427f275ba908388333884530f455/image8.png" />
          </figure><p>Each layer maps to a Cloudflare product or tool we use:</p><table><tr><th><p>What we built</p></th><th><p>Built with</p></th></tr><tr><td><p>Zero Trust authentication</p></td><td><p><a href="https://developers.cloudflare.com/cloudflare-one/"><b><u>Cloudflare Access</u></b></a></p></td></tr><tr><td><p>Centralized LLM routing, cost tracking, BYOK, and Zero Data Retention controls</p></td><td><p><a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a></p></td></tr><tr><td><p>On-platform inference with open-weight models</p></td><td><p><a href="https://developers.cloudflare.com/workers-ai/"><b><u>Workers AI</u></b></a></p></td></tr><tr><td><p>MCP Server Portal with single OAuth</p></td><td><p><a href="https://developers.cloudflare.com/workers/"><b><u>Workers</u></b></a> + <a href="https://developers.cloudflare.com/cloudflare-one/"><b><u>Access</u></b></a></p></td></tr><tr><td><p>AI Code Reviewer CI integration</p></td><td><p><a href="https://developers.cloudflare.com/workers/"><b><u>Workers</u></b></a> + <a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a></p></td></tr><tr><td><p>Sandboxed execution for agent-generated code (Code Mode)</p></td><td><p><a href="https://blog.cloudflare.com/dynamic-workers/"><b><u>Dynamic Workers</u></b></a></p></td></tr><tr><td><p>Stateful, long-running agent sessions</p></td><td><p><a href="https://developers.cloudflare.com/agents/"><b><u>Agents SDK</u></b></a> (McpAgent, Durable Objects)</p></td></tr><tr><td><p>Isolated environments for cloning, building, and testing</p></td><td><p><a href="https://developers.cloudflare.com/sandbox/"><b><u>Sandbox SDK</u></b></a> — GA as of Agents Week</p></td></tr><tr><td><p>Durable multi-step workflows</p></td><td><p><a href="https://developers.cloudflare.com/workflows/"><b><u>Workflows</u></b></a> — scaled 10x during Agents Week</p></td></tr><tr><td><p>16K+ entity knowledge graph</p></td><td><p><a 
href="https://backstage.io/"><b><u>Backstage</u></b></a> (OSS)</p></td></tr></table><p>None of this is internal-only infrastructure. Everything (besides Backstage) listed above is a shipping product, and many of them got substantial updates during Agents Week.</p><p>We’ll walk through this in three acts:</p><ol><li><p><b>The platform layer</b> — how authentication, routing, and inference work (AI Gateway, Workers AI, MCP Portal, Code Mode)</p></li><li><p><b>The knowledge layer</b> — how agents understand our systems (Backstage, AGENTS.md)</p></li><li><p><b>The enforcement layer</b> — how we keep quality high at scale (AI Code Reviewer, Engineering Codex)</p></li></ol>
    <div>
      <h2>Act 1: The platform layer</h2>
      <a href="#act-1-the-platform-layer">
        
      </a>
    </div>
    
    <div>
      <h3>How AI Gateway helped us stay secure and improve the developer experience</h3>
      <a href="#how-ai-gateway-helped-us-stay-secure-and-improve-the-developer-experience">
        
      </a>
    </div>
<p>When you have over 3,600 internal users using AI coding tools daily, you need to solve for access and visibility across many clients, use cases, and roles.</p><p>Everything starts with <a href="https://developers.cloudflare.com/cloudflare-one/"><b><u>Cloudflare Access</u></b></a>, which handles all authentication and zero-trust policy enforcement. Once authenticated, every LLM request routes through <a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a>. This gives us a single place to manage provider keys, cost tracking, and data retention policies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5zorZ21OXIbNg7VACGzwZX/e1b35951622bab6ae7ca9fa9b8a5c844/BLOG-3270_4.png" />
          </figure><p><sup><i>The OpenCode AI Gateway overview: 688.46k requests per day, 10.57B tokens per day, routing to four providers through one endpoint.</i></sup></p><p>AI Gateway analytics show how monthly usage is distributed across model providers. Over the last month, internal request volume broke down as follows.</p><table><tr><th><p>Provider</p></th><th><p>Requests/month</p></th><th><p>Share</p></th></tr><tr><td><p>Frontier Labs (OpenAI, Anthropic, Google)</p></td><td><p>13.38M</p></td><td><p>91.16%</p></td></tr><tr><td><p>Workers AI</p></td><td><p>1.3M</p></td><td><p>8.84%</p></td></tr></table><p>Frontier models handle the bulk of complex agentic coding work for now, but Workers AI is already a significant part of the mix and handles an increasing share of our agentic engineering workloads.</p>
    <div>
      <h4>How we increasingly leverage Workers AI</h4>
      <a href="#how-we-increasingly-leverage-workers-ai">
        
      </a>
    </div>
<p><a href="https://developers.cloudflare.com/workers-ai/"><b><u>Workers AI</u></b></a> is Cloudflare's serverless AI inference platform, which runs open-source models on GPUs across our global network. Beyond large cost savings compared to frontier models, a key advantage is that inference stays on the same network as your Workers, Durable Objects, and storage: no cross-cloud hops, which add latency, introduce network flakiness, and require additional networking configuration to manage.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WFqiZSL1RGbD2O7gOPDfR/2edc8c69fdea89fef76e600cb3ee5384/BLOG-3270_5.png" />
          </figure><p><sup><i>Workers AI usage in the last month: 51.47B input tokens, 361.12M output tokens.</i></sup></p><p><a href="https://blog.cloudflare.com/workers-ai-large-models/"><b><u>Kimi K2.5</u></b></a>, launched on Workers AI in March 2026, is a frontier-scale open-source model with a 256k context window, tool calling, and structured outputs. As we described in our <a href="https://blog.cloudflare.com/workers-ai-large-models/"><u>Kimi K2.5 launch post</u></a>, we have a security agent that processes over 7 billion tokens per day on Kimi. That would cost an estimated $2.4M per year on a mid-tier proprietary model. But on Workers AI, it's 77% cheaper.</p><p>Beyond security, we use Workers AI for documentation review in our CI pipeline, for generating AGENTS.md context files across thousands of repositories, and for lightweight inference tasks where same-network latency matters more than peak model capability.</p><p>As open-source models continue to improve, we expect Workers AI to handle a growing share of our internal workloads. </p><p>One thing we got right early: routing through a single proxy Worker from day one. We could have had clients connect directly to AI Gateway, which would have been simpler to set up initially. But centralizing through a Worker meant we could add per-user attribution, model catalog management, and permission enforcement later without touching any client configs. Every feature described in the bootstrap section below exists because we had that single choke point. The proxy pattern gives you a control plane that direct connections don't, and if we plug in additional coding assistant tools later, the same Worker and discovery endpoint will handle them.</p>
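<p>As a quick sanity check on the Kimi K2.5 savings quoted above, the arithmetic works out as follows (all figures come from this post; per-token pricing is not shown here):</p>

```typescript
// Back-of-envelope check of the savings quoted above, using only figures
// from the post: ~$2.4M/year estimated on a mid-tier proprietary model,
// and Workers AI quoted as 77% cheaper for the same workload.
const proprietaryAnnualUsd = 2_400_000;
const savingsFraction = 0.77; // "77% cheaper"
const workersAiAnnualUsd = proprietaryAnnualUsd * (1 - savingsFraction);
// ≈ $552,000/year, i.e. roughly $1.85M/year saved on one agent's workload.
```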
    <div>
      <h4>How it works: one URL to configure everything</h4>
      <a href="#how-it-works-one-url-to-configure-everything">
        
      </a>
    </div>
    <p>The entire setup starts with one command:</p>
            <pre><code>opencode auth login https://opencode.internal.domain
</code></pre>
            <p>That command triggers a chain that configures providers, models, MCP servers, agents, commands, and permissions, without the user touching a config file.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3sK7wVF3QbeNJ0rjLBabXh/05b9098197b7a3d084e6d860ed22b169/BLOG-3270_6.png" />
          </figure><p><b>Step 1: Discover auth requirements.</b> OpenCode fetches <a href="https://opencode.ai/docs/config/"><u>config</u></a> from a URL like <code>https://opencode.internal.domain/.well-known/opencode</code>. </p><p>This discovery endpoint is served by a Worker and the response has an <code>auth</code> block telling OpenCode how to authenticate, along with a <code>config</code> block with providers, MCP servers, agents, commands, and default permissions:</p>
            <pre><code>{
  "auth": {
    "command": ["cloudflared", "access", "login", "..."],
    "env": "TOKEN"
  },
  "config": {
    "provider": { "..." },
    "mcp": { "..." },
    "agent": { "..." },
    "command": { "..." },
    "permission": { "..." }
  }
}
</code></pre>
            <p><b>Step 2: Authenticate via Cloudflare Access.</b> OpenCode runs the auth command and the user authenticates through the same SSO they use for everything else at Cloudflare. <code>cloudflared</code> returns a signed JWT. OpenCode stores it locally and automatically attaches it to every subsequent provider request.</p><p><b>Step 3: Config is merged into OpenCode.</b> The config provided is shared defaults for the entire organization, but local configs always take priority. Users can override the default model, add their own agents, or adjust project and user scoped permissions without affecting anyone else.</p><p><b>Inside the proxy Worker.</b> The Worker is a simple Hono app that does three things:</p><ol><li><p><b>Serves the shared config.</b> The config is compiled at deploy time from structured source files and contains placeholder values like {baseURL} for the Worker's origin. At request time, the Worker replaces these, so all provider requests route through the Worker rather than directly to model providers. Each provider gets a path prefix (<code>/anthropic, /openai, /google-ai-studio/v1beta, /compat</code> for Workers AI) that the Worker forwards to the corresponding AI Gateway route.</p></li><li><p><b>Proxies requests to AI Gateway.</b> When OpenCode sends a request like <code>POST /anthropic/v1/messages</code>, the Worker validates the Cloudflare Access JWT, then rewrites headers before forwarding:
</p>
            <pre><code>Stripped:   authorization, cf-access-token, host
Added:      cf-aig-authorization: Bearer &lt;API_KEY&gt;
            cf-aig-metadata: {"userId": "&lt;anonymous-uuid&gt;"}
</code></pre>
            <p>The request goes to AI Gateway, which routes it to the appropriate provider. The response passes straight through with zero buffering. The <code>apiKey</code> field in the client config is empty because the Worker injects the real key server-side. No API keys exist on user machines.</p></li><li><p><b>Keeps the model catalog fresh.</b> An hourly cron trigger fetches the current OpenAI model list from <code>models.dev</code>, caches it in Workers KV, and injects <code>store: false</code> on every model for Zero Data Retention. New models get ZDR automatically without a config redeploy.</p></li></ol><p><b>Anonymous user tracking.</b> After JWT validation, the Worker maps the user's email to a UUID using D1 for persistent storage and KV as a read cache. AI Gateway only ever sees the anonymous UUID in <code>cf-aig-metadata</code>, never the email. This gives us per-user cost tracking and usage analytics without exposing identities to model providers or Gateway logs.</p><p><b>Config-as-code. </b>Agents and commands are authored as markdown files with YAML frontmatter. A build script compiles them into a single JSON config validated against the OpenCode JSON schema. Every new session picks up the latest version automatically.</p><p>The overall architecture is simple and easy for anyone to deploy with our developer platform: a proxy Worker, Cloudflare Access, AI Gateway, and a client-accessible discovery endpoint that configures everything automatically. Users run one command and they're done. There’s nothing for them to configure manually, no API keys on laptops or MCP server connections to manually set up. Making changes to our agentic tools and updating what 3,000+ people get in their coding environment is just a <code>wrangler deploy</code> away.</p>
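<p>Putting step 2 together, the header rewrite can be sketched roughly like this (a minimal sketch: the real Worker is a Hono app working with Request and Headers objects, and only the stripped and added header names below come from this post):</p>

```typescript
// Minimal sketch of the proxy Worker's header rewrite (step 2 above).
// The stripped/added header names come from the post; the function shape
// and plain objects instead of Headers are illustrative.
function rewriteHeaders(
  incoming: Record<string, string>,
  gatewayKey: string,
  anonymousUserId: string,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(incoming)) {
    // Strip credentials and host before forwarding to AI Gateway.
    if (["authorization", "cf-access-token", "host"].includes(name.toLowerCase())) {
      continue;
    }
    out[name] = value;
  }
  // Inject the real gateway key server-side; no API keys on user machines.
  out["cf-aig-authorization"] = `Bearer ${gatewayKey}`;
  // Only the anonymous UUID reaches AI Gateway, never the user's email.
  out["cf-aig-metadata"] = JSON.stringify({ userId: anonymousUserId });
  return out;
}
```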
    <div>
      <h3>The MCP Server Portal: one OAuth, multiple MCP tools</h3>
      <a href="#the-mcp-server-portal-one-oauth-multiple-mcp-tools">
        
      </a>
    </div>
    <p>We described our full approach to governing MCP at enterprise scale in a <a href="https://blog.cloudflare.com/enterprise-mcp/"><u>separate post</u></a>, including how we use <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/ai-controls/mcp-portals/"><u>MCP Server Portals</u></a>, Cloudflare Access, and Code Mode together. Here's the short version of what we built internally.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36gUHwTs8CzZeS03l9yno1/42953a6dc2ac944f4dbe31e3ae51570e/BLOG-3270_7.png" />
</figure><p>Our internal portal aggregates 13 production MCP servers exposing 182+ tools across Backstage, GitLab, Jira, Sentry, Elasticsearch, Prometheus, Google Workspace, our internal Release Manager, and more. This unifies and simplifies access, giving us one endpoint and one Cloudflare Access flow governing every tool.</p><p>Each MCP server is built on the same foundation: McpAgent from the Agents SDK, <a href="https://github.com/cloudflare/workers-oauth-provider"><b><u>workers-oauth-provider</u></b></a> for OAuth, and Cloudflare Access for identity. The whole thing lives in a single monorepo with shared auth infrastructure, Bazel builds, CI/CD pipelines, and <code>catalog-info.yaml</code> for Backstage registration. Adding a new server is mostly copying an existing one and changing the API it wraps. For more on how this works and the security architecture behind it, see <a href="https://blog.cloudflare.com/enterprise-mcp/"><u>our enterprise MCP reference architecture</u></a>.</p>
    <div>
      <h3>Code Mode at the portal layer</h3>
      <a href="#code-mode-at-the-portal-layer">
        
      </a>
    </div>
<p>MCP is the right protocol for connecting AI agents to tools, but it has a practical problem: every tool definition consumes context window tokens before the model even starts working. As the number of MCP servers and tools grows, so does the token overhead, and at scale, this becomes a real cost. <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a> is the emerging fix: instead of loading every tool schema up front, the model discovers and calls tools through code.</p><p>Our GitLab MCP server originally exposed 34 individual tools (<code>get_merge_request</code>, <code>list_pipelines</code>, <code>get_file_content</code>, and so on). Those 34 tool schemas consumed roughly 15,000 tokens of context window per request. On a 200K context window, that's 7.5% of the budget gone before asking a question. Multiplied across every request, every engineer, every day, it adds up.</p><p>MCP Server Portals now support Code Mode proxying, which lets us solve that problem centrally instead of one server at a time. Rather than exposing every upstream tool definition to the client, the portal collapses them into two portal-level tools: <code>portal_codemode_search</code> and <code>portal_codemode_execute</code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jQa4HPmrOaOhUojZCJQ8q/4e81201065a50dd67c07e257b725e8b8/BLOG-3270_8.png" />
          </figure><p>The nice thing about doing this at the portal layer is that it scales cleanly. Without Code Mode, every new MCP server adds more schema overhead to every request. With portal-level Code Mode, the client still only sees two tools even as we connect more servers behind the portal. That means less context bloat, lower token cost, and a cleaner architecture overall.</p>
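<p>To make that scaling argument concrete, here is a rough back-of-envelope using the numbers above (the per-tool average is derived from the GitLab figures; real schema sizes vary by tool):</p>

```typescript
// Rough context-overhead comparison using figures from the post:
// 34 GitLab tools ≈ 15,000 tokens of schema, and 182+ tools behind the portal.
const tokensPerToolSchema = 15000 / 34; // ~441 tokens/tool (GitLab average)
const upstreamTools = 182;

// Without Code Mode, overhead grows with every server added behind the portal.
const withoutCodeMode = Math.round(upstreamTools * tokensPerToolSchema);

// With portal-level Code Mode, the client only ever sees two tools:
// portal_codemode_search and portal_codemode_execute.
const withCodeMode = Math.round(2 * tokensPerToolSchema);
```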
    <div>
      <h2>Act 2: The knowledge layer</h2>
      <a href="#act-2-the-knowledge-layer">
        
      </a>
    </div>
    
    <div>
      <h3>Backstage: the knowledge graph underneath all of it</h3>
      <a href="#backstage-the-knowledge-graph-underneath-all-of-it">
        
      </a>
    </div>
<p>Before the iMARS team could build MCP servers that were actually useful, we needed to solve a more fundamental problem: structured data about our services and infrastructure. We needed our agents to understand context outside the codebase, like who owns what, how services depend on each other, where the documentation lives, and what databases a service talks to.</p><p>We run <a href="https://backstage.io/"><u>Backstage</u></a>, the open-source internal developer portal originally built by Spotify, as our service catalog. It's self-hosted (not on Cloudflare products, for the record) and it tracks things like:</p><ul><li><p>2,055 services, 167 libraries, and 122 packages</p></li><li><p>228 APIs with schema definitions</p></li><li><p>544 systems (products) across 45 domains</p></li><li><p>1,302 databases, 277 ClickHouse tables, 173 clusters</p></li><li><p>375 teams and 6,389 users with ownership mappings</p></li><li><p>Dependency graphs connecting services to the databases, Kafka topics, and cloud resources they rely on</p></li></ul><p>Our Backstage MCP server (13 tools) is available through our MCP Portal, and an agent can look up who owns a service, check what it depends on, find related API specs, and pull Tech Insights scores, all without leaving the coding session.</p><p>Without this structured data, agents are working blind. They can read the code in front of them, but they can't see the system around it. The catalog turns individual repos into a connected map of the engineering organization.</p>
    <div>
      <h3>AGENTS.md: getting thousands of repos ready for AI</h3>
      <a href="#agents-md-getting-thousands-of-repos-ready-for-ai">
        
      </a>
    </div>
    <p>Early in the rollout, we kept seeing the same failure mode: coding agents produced changes that looked plausible and were still wrong for the repo. Usually the problem was local context: the model didn't know the right test command, the team's current conventions, or which parts of the codebase were off-limits. That pushed us toward AGENTS.md: a short, structured file in each repo that tells coding agents how the codebase actually works and forces teams to make that context explicit.</p>
    <div>
      <h4>What AGENTS.md looks like</h4>
      <a href="#what-agents-md-looks-like">
        
      </a>
    </div>
    <p>We built a system that generates AGENTS.md files across our GitLab instance. Because these files sit directly in the model's context window, we wanted them to stay short and high-signal. A typical file looks like this:</p>
            <pre><code># AGENTS.md

## Repository
- Runtime: cloudflare workers
- Test command: `pnpm test`
- Lint command: `pnpm lint`

## How to navigate this codebase
- All cloudflare workers are in src/workers/, one file per worker
- MCP server definitions are in src/mcp/, each tool in a separate file
- Tests mirror source: src/foo.ts -&gt; tests/foo.test.ts

## Conventions
- Testing: use Vitest with `@cloudflare/vitest-pool-workers` (Codex: RFC 021, RFC 042)
- API patterns: Follow internal REST conventions (Codex: API-REST-01)

## Boundaries
- Do not edit generated files in `gen/`
- Do not introduce new background jobs without updating `config/`

## Dependencies
- Depends on: auth-service, config-service
- Depended on by: api-gateway, dashboard
</code></pre>
<p>When an agent reads this file, it doesn't have to infer the repo from scratch. It knows how the codebase is organized, which conventions to follow, and which Engineering Codex rules apply.</p>
    <div>
      <h4>How we generate them at scale</h4>
      <a href="#how-we-generate-them-at-scale">
        
      </a>
    </div>
    <p>The generator pipeline pulls entity metadata from our Backstage service catalog (ownership, dependencies, system relationships), analyzes the repository structure to detect the language, build system, test framework, and directory layout, then maps the detected stack to relevant Engineering Codex standards. A capable model then generates the structured document, and the system opens a merge request so the owning team can review and refine it.</p><p>We've processed roughly 3,900 repositories this way. The first pass wasn't always perfect, especially for polyglot repos or unusual build setups, but even that baseline was much better than asking agents to infer everything from scratch.</p><p>The initial merge request solved the bootstrap problem, but keeping these files current mattered just as much. A stale AGENTS.md can be worse than no file at all. We closed that loop with the AI Code Reviewer, which can flag when repository changes suggest that AGENTS.md should be updated.</p>
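<p>The final rendering step of that pipeline can be sketched like this (field names and structure are illustrative assumptions; the real generator uses an LLM over Backstage metadata and repository analysis, then opens a merge request for the owning team):</p>

```typescript
// Illustrative sketch of the last step of the AGENTS.md pipeline: turning
// detected repository facts into the short, high-signal file shown earlier.
// The RepoFacts shape is an assumption, not the real pipeline's schema.
interface RepoFacts {
  runtime: string;
  testCommand: string;
  conventions: string[]; // e.g. mapped Engineering Codex rules
  boundaries: string[];
}

function renderAgentsMd(facts: RepoFacts): string {
  return [
    "# AGENTS.md",
    "",
    "## Repository",
    `- Runtime: ${facts.runtime}`,
    `- Test command: \`${facts.testCommand}\``,
    "",
    "## Conventions",
    ...facts.conventions.map((c) => `- ${c}`),
    "",
    "## Boundaries",
    ...facts.boundaries.map((b) => `- ${b}`),
  ].join("\n");
}
```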
    <div>
      <h2>Act 3: The enforcement layer</h2>
      <a href="#act-3-the-enforcement-layer">
        
      </a>
    </div>
    
    <div>
      <h3>The AI Code Reviewer</h3>
      <a href="#the-ai-code-reviewer">
        
      </a>
    </div>
    <p>Every merge request at Cloudflare gets an AI code review. Integration is straightforward: teams add a single CI component to their pipeline, and from that point every MR is reviewed automatically.</p><p>We use GitLab's self-hosted solution as our CI/CD platform. The reviewer is implemented as a GitLab CI component that teams include in their pipeline. When an MR is opened or updated, the CI job runs <a href="https://opencode.ai/"><u>OpenCode</u></a> with a multi-agent review coordinator. The coordinator classifies the MR by risk tier (trivial, lite, or full) and delegates to specialized review agents: code quality, security, codex compliance, documentation, performance, and release impact. Each agent connects to the AI Gateway for model access, pulls Engineering Codex rules from a central repo, and reads the repository's AGENTS.md for codebase context. Results are posted back as structured MR comments.</p><p>A separate Workers-based config service handles centralized model selection per reviewer agent, so we can shift models without changing the CI template. The review process itself runs in the CI runner and is stateless per execution.</p>
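<p>The coordinator's tier routing might look something like the sketch below (the three tiers come from this post; the signals and thresholds are illustrative assumptions, not the real classifier):</p>

```typescript
// Sketch of the review coordinator's risk-tier routing. The tiers
// (trivial, lite, full) come from the post; the signals and thresholds
// below are illustrative assumptions.
type RiskTier = "trivial" | "lite" | "full";

interface MrSignals {
  changedLines: number;
  touchesSecuritySensitivePaths: boolean;
  docsOnly: boolean;
}

function classifyMr(mr: MrSignals): RiskTier {
  // Security-sensitive changes always get the full agent roster.
  if (mr.touchesSecuritySensitivePaths) return "full";
  if (mr.docsOnly || mr.changedLines <= 10) return "trivial";
  if (mr.changedLines <= 200) return "lite";
  return "full";
}
```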
    <div>
      <h3>The output format</h3>
      <a href="#the-output-format">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CKN6frPHygTkAHBDrY3y7/e04eb00702a25627291c904fbde2e7ea/BLOG-3270_9.png" />
          </figure><p>We spent time getting the output format right. Reviews are broken into categories (Security, Code Quality, Performance) so engineers can scan headers rather than reading walls of text. Each finding has a severity level (Critical, Important, Suggestion, or Optional Nits) that makes it immediately clear what needs attention versus what's informational.</p><p>The reviewer maintains context across iterations. If it flagged something in a previous review round that has since been fixed, it acknowledges that rather than re-raising the same issue. And when a finding maps to an Engineering Codex rule, it cites the specific rule ID, turning an AI suggestion into a reference to an organizational standard.</p><p>Workers AI handles about 15% of the reviewer's traffic, primarily for documentation review tasks where Kimi K2.5 performs well at a fraction of the cost of frontier models. Models like Opus 4.6 and GPT 5.4 handle security-sensitive and architecturally complex reviews where reasoning capability matters most.</p><p>Over the last 30 days:</p><ul><li><p><b>100%</b> AI code reviewer coverage across all repos on our standard CI pipeline.</p></li><li><p>5.47M AI Gateway requests</p></li><li><p>24.77B tokens processed</p></li></ul><p>We're releasing a <a href="http://blog.cloudflare.com/ai-code-review"><u>detailed technical blog post</u></a> alongside this one that covers the reviewer's internal architecture, including how we route between models, the multi-agent orchestration, and the cost optimization strategies we've developed.</p>
    <div>
      <h3>Engineering Codex: engineering standards as agent skills</h3>
      <a href="#engineering-codex-engineering-standards-as-agent-skills">
        
      </a>
    </div>
    <p>The <b>Engineering Codex</b> is Cloudflare's new internal standards system, the home of our core engineering standards. A multi-stage AI distillation process outputs a set of Codex rules ("If you need X, use Y. You must do X if you are doing Y or Z.") along with an agent skill that uses progressive disclosure: a nested hierarchy of information directories linked across markdown files.</p><p>This skill is available for engineers to use locally as they build, with prompts like “how should I handle errors in my Rust service?” or “review this TypeScript code for compliance.” Our Network Firewall team audited <code>rampartd</code> using a multi-agent consensus process where every requirement was scored COMPLIANT, PARTIAL, or NON-COMPLIANT with specific violation details and remediation steps, reducing what previously required weeks of manual work to a structured, repeatable process.</p><p>At review time, the AI Code Reviewer cites specific Codex rules in its feedback.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vXFmpfwqv1Xqo5CszCsEB/ada6417470b2b7533a12244578cbd9d0/BLOG-3270_10.png" />
          </figure><p><sup><i>AI Code Review: showing categorized findings (Codex Compliance in this case) noting the Codex RFC violation.</i></sup></p><p>None of these pieces are especially novel on their own. Plenty of companies run service catalogs, ship reviewer bots, or publish engineering standards. The difference is the wiring. When an agent can pull context from Backstage, read AGENTS.md for the repo it’s editing, and get reviewed against Codex rules by the same toolchain, the first draft is usually close enough to ship. That wasn’t true six months ago.</p>
    <div>
      <h2>The scoreboard</h2>
      <a href="#the-scoreboard">
        
      </a>
    </div>
    <p>It took less than a year to go from launching this effort to 93% R&amp;D adoption.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1OtTpHobBRjorxOV2dWG8w/9b9dc80cba9741ee65e8565f63091e1a/BLOG-3270_11.png" />
          </figure><p><b>Company-wide adoption (Feb 5 – April 15, 2026):</b></p><table><tr><th><p>Metric</p></th><th><p>Value</p></th></tr><tr><td><p>Active users</p></td><td><p><b>3,683</b> (60% of the company)</p></td></tr><tr><td><p>R&amp;D team adoption</p></td><td><p><b>93%</b></p></td></tr><tr><td><p>AI messages</p></td><td><p><b>47.95M</b></p></td></tr><tr><td><p>Teams with AI activity</p></td><td><p><b>295</b></p></td></tr><tr><td><p>OpenCode messages</p></td><td><p><b>27.08M</b></p></td></tr><tr><td><p>Windsurf messages</p></td><td><p><b>434.9K</b></p></td></tr></table><p><b>AI Gateway (last 30 days, combined):</b></p><table><tr><th><p>Metric</p></th><th><p>Value</p></th></tr><tr><td><p>Requests</p></td><td><p><b>20.18M</b></p></td></tr><tr><td><p>Tokens</p></td><td><p><b>241.37B</b></p></td></tr></table><p><b>Workers AI (last 30 days):</b></p><table><tr><th><p>Metric</p></th><th><p>Value</p></th></tr><tr><td><p>Input tokens</p></td><td><p><b>51.47B</b></p></td></tr><tr><td><p>Output tokens</p></td><td><p><b>361.12M</b></p></td></tr></table>
    <div>
      <h2>What's next: background agents</h2>
      <a href="#whats-next-background-agents">
        
      </a>
    </div>
    <p>The next evolution in our internal engineering stack will include background agents: agents that can be spun up on demand with the same tools available locally (MCP portal, git, test runners) but running entirely in the cloud. The architecture uses Durable Objects and the Agents SDK for orchestration, delegating to Sandbox containers when the job requires a full development environment like cloning a repo, installing dependencies, or running tests. The <a href="https://blog.cloudflare.com/sandbox-ga/"><u>Sandbox SDK went GA</u></a> during Agents Week.</p><p>Long-running agents, <a href="https://blog.cloudflare.com/project-think/"><u>shipped natively into the Agents SDK</u></a> during Agents Week, solve the durable session problem that previously required workarounds. The SDK now supports sessions that run for extended periods without eviction, enough for an agent to clone a large repo, run a full test suite, iterate on failures, and open an MR in a single session.</p><p>This represents an eleven-month effort to rethink not just how code gets written, but how it gets reviewed, how standards are enforced, and how changes ship safely across thousands of repos. Every layer runs on the same products our customers use.</p>
    <div>
      <h2>Start building</h2>
      <a href="#start-building">
        
      </a>
    </div>
    <p>Agents Week just <a href="https://blogs.cloudflare.com/agents-week-in-review"><u>shipped</u></a> everything you need. The platform is here.</p>
            <pre><code>npx create-cloudflare@latest --template cloudflare/agents-starter
</code></pre>
            <p>That agents starter gets you running. The diagram below shows the full architecture for when you’re ready to grow it: your tools layer on top (chatbot, web UI, CLI, browser extension), the Agents SDK handles session state and orchestration in the middle, and the Cloudflare services you call from it sit underneath.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ebzq5neZbshbrhb5WysYY/54335791b2f731e2095fd57a4fe11cf3/image11.png" />
          </figure><p><b>Docs:</b> <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> · <a href="https://developers.cloudflare.com/sandbox/"><u>Sandbox SDK</u></a> · <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> · <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> · <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a> · <a href="https://developers.cloudflare.com/agents/api-reference/codemode/"><u>Code Mode</u></a> · <a href="https://blog.cloudflare.com/model-context-protocol/"><u>MCP on Cloudflare</u></a></p><p><b>Repos:</b> <a href="https://github.com/cloudflare/agents"><u>cloudflare/agents</u></a> · <a href="https://github.com/cloudflare/sandbox-sdk"><u>cloudflare/sandbox-sdk</u></a> · <a href="https://github.com/cloudflare/mcp-server-cloudflare"><u>cloudflare/mcp-server-cloudflare</u></a> · <a href="https://github.com/cloudflare/skills"><u>cloudflare/skills</u></a></p><p>For more on how we’re using AI at Cloudflare, read the post on <a href="http://blog.cloudflare.com/ai-code-review"><u>our process for AI Code Review</u></a>. And check out <a href="https://www.cloudflare.com/agents-week/updates/"><u>everything we shipped during Agents Week</u></a>.</p><p>We’d love to hear what you build. Find us on <a href="https://discord.cloudflare.com/"><u>Discord</u></a>, <a href="https://x.com/CloudflareDev"><u>X</u></a>, and <a href="https://bsky.app/profile/cloudflare.social"><u>Bluesky</u></a>.</p><p><sup><i>Ayush Thakur built the AGENTS.md system and the AI Gateway integration for the OpenCode infrastructure; Scott Roemeschke is the Engineering Manager of the Developer Productivity team at Cloudflare; Rajesh Bhatia leads the Productivity Platform function at Cloudflare. This post was a collaborative effort across the Devtools team, with help from volunteers across the company through the iMARS (Internal MCP Agent/Server Rollout Squad) tiger team.</i></sup></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">4z7ZJ5ZbY3e3YwJtkg6NiD</guid>
            <dc:creator>Ayush Thakur</dc:creator>
            <dc:creator>Scott Roemeschke</dc:creator>
            <dc:creator>Rajesh Bhatia</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Flagship: feature flags built for the age of AI]]></title>
            <link>https://blog.cloudflare.com/flagship/</link>
            <pubDate>Fri, 17 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are launching Flagship, a native feature flag service built on Cloudflare’s global network to eliminate the latency of third-party providers. By using KV and Durable Objects, Flagship allows for sub-millisecond flag evaluation. ]]></description>
            <content:encoded><![CDATA[ <p>AI is writing more code than ever. AI-assisted contributions now account for a rapidly growing share of new code across the platform. Agentic coding tools like OpenCode and Claude Code are shipping entire features in minutes.</p><p>AI-generated code entering production is only going to accelerate. But the bigger shift isn't just speed — it's autonomy.</p><p>Today, an AI agent writes code and a human reviews, merges, and deploys it. Tomorrow, the agent does all of that itself. The question becomes: how do you let an agent ship to production without removing every safety net?</p><p>Feature flags are the answer. An agent writes a new code path behind a flag and deploys it — the flag is off, so nothing changes for users. The agent then enables the flag for itself or a small test cohort, exercises the feature in production, and observes the results. If metrics look good, it ramps the rollout. If something breaks, it disables the flag. The human doesn't need to be in the loop for every step — they set the boundaries, and the flag controls the blast radius.</p><p>This is the workflow feature flags were always building toward: not just decoupling deployment from release, but decoupling human attention from every stage of the shipping process. The agent moves fast because the flag makes it safe to move fast.</p><p>Today, we're announcing Flagship — Cloudflare's native feature flag service, built on <a href="https://openfeature.dev/"><u>OpenFeature</u></a>, the CNCF open standard for feature flag evaluation. It works everywhere — Workers, Node.js, Bun, Deno, and the browser — but it's fastest on Workers, where flags are evaluated within the Cloudflare network. With the Flagship binding and OpenFeature, integration looks like this:</p>
            <pre><code>await OpenFeature.setProviderAndWait(
    new FlagshipServerProvider({ binding: env.FLAGS })
);</code></pre>
            <p>Flagship is now available in private beta.</p>
    <div>
      <h3>The problem with feature flags on Workers</h3>
      <a href="#the-problem-with-feature-flags-on-workers">
        
      </a>
    </div>
    <p>Many Cloudflare developers have resorted to a pragmatic workaround: hardcoding flag logic directly into their Workers. And honestly, it works well enough in the beginning. Workers deploy in seconds, so flipping a boolean in code and pushing it to production is fast enough for most situations.</p><p>But it doesn't stay simple. One hardcoded flag becomes ten. Ten becomes fifty, owned by different teams, with no central view of what's on or off. There's no audit trail — when something breaks, you're searching <code>git blame</code> to figure out who toggled what.</p>
    <div>
      <h4>Network call to external services</h4>
      <a href="#network-call-to-external-services">
        
      </a>
    </div>
    <p>Another common pattern on Workers is to call an external flag service over HTTP on every request:</p>
            <pre><code>const response = await fetch("https://flags.example-service.com/v1/evaluate", {
  ...
  body: JSON.stringify({
    flagKey: "new-checkout-flow",
    context: {
      ...
    },
  }),
});
const { value } = await response.json();
if (value === true) {
  return handleNewCheckout(request);
}
return handleLegacyCheckout(request);</code></pre>
            <p>That outbound request sits on the critical path of every single user request. It could add considerable latency depending on how far the user is from the flag service's region.</p><p>This is a strange situation. Your application runs at the edge, milliseconds from the user. But the feature flag check forces it to reach back across the Internet to another API before it can decide what to render.</p>
    <div>
      <h4>Why local evaluation doesn't solve the problem</h4>
      <a href="#why-local-evaluation-doesnt-solve-the-problem">
        
      </a>
    </div>
    <p>Some feature flag services offer a "local evaluation" SDK. Instead of calling a remote API on every request, the SDK downloads the full set of flag rules into memory and evaluates them locally. There is no outbound request per evaluation, and the flag decision happens in-process. But that model assumes a long-lived process: one that initializes the SDK once, holds the ruleset in memory, and keeps a connection open to receive rule updates.</p><p>On Workers, none of these assumptions hold. There is no long-lived process: a Worker isolate can be created, serve a request, and be evicted between one request and the next. A new invocation could mean re-initializing the SDK from scratch.</p><p>On a serverless platform, you need a distribution primitive that's already at the edge, one where the caching is managed for you, reads are local, and you don't need a persistent connection to keep things up to date.</p><p>Cloudflare KV is a great primitive for this!</p>
    <div>
      <h3>How Flagship works</h3>
      <a href="#how-flagship-works">
        
      </a>
    </div>
    <p>Flagship is built entirely on Cloudflare's infrastructure — Workers, Durable Objects, and KV. There are no external databases, no third-party services, and no centralized origin servers in the evaluation path.</p><p>When you create or update a flag, the control plane writes the change atomically to a <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Object</u></a> — a SQLite-backed, globally unique instance that serves as the source of truth for that app's flag configuration and changelog. Within seconds, the updated flag config is synced to <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, Cloudflare's globally distributed key-value store, where it's replicated across Cloudflare's network.</p><p>When a request evaluates a flag, Flagship reads the flag config directly from KV at the edge — the same Cloudflare location already handling the request. The evaluation engine then runs right there in an <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates"><u>isolate</u></a>: it matches the request context against the flag's targeting rules, resolves the rollout percentage, and returns a variation. Both the data and the logic live at the edge — nothing is sent elsewhere to be evaluated.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nKDxB6OkUr1fpVKGWEugC/8fc4dbf0bd1ec45e30010a223982487e/image2.png" />
          </figure>
    <div>
      <h3>Using Flagship: the Worker binding</h3>
      <a href="#using-flagship-the-worker-binding">
        
      </a>
    </div>
    <p>For teams running Cloudflare Workers, Flagship offers a direct binding that evaluates flags inside the Workers runtime — no HTTP round-trip, no SDK overhead. Add the binding to your <code>wrangler.jsonc</code> and your Worker is connected:</p>
            <pre><code>{
  "flagship": [
    {
      "binding": "FLAGS",
      "app_id": "&lt;APP_ID&gt;"
    }
  ]
}</code></pre>
            <p>That's it. Your account ID is inferred from your Cloudflare account, and the <code>app_id</code> ties the binding to a specific Flagship app. In your Worker, you just ask for a flag value:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    // Simple boolean check
    const showNewUI = await env.FLAGS.getBooleanValue('new-ui', false, {
      userId: 'user-42',
      plan: 'enterprise',
    });
    // Full evaluation details when you need them
    const details = await env.FLAGS.getStringDetails('checkout-flow', 'v1', {
      userId: 'user-42',
    });
    // details.value = "v2", details.variant = "new", details.reason = "TARGETING_MATCH"
  },
};</code></pre>
            <p>The binding supports typed accessors for every variation type (<code>getBooleanValue()</code>, <code>getStringValue()</code>, <code>getNumberValue()</code>, <code>getObjectValue()</code>), plus <code>*Details()</code> variants that return the resolved value alongside the matched variant and the reason it was selected. On evaluation errors, the default value is returned gracefully. On type mismatches, the binding throws an exception — that's a bug in your code, not a transient failure.</p>
    <div>
      <h3>The SDK: OpenFeature-native</h3>
      <a href="#the-sdk-openfeature-native">
        
      </a>
    </div>
    <p>Most feature flag SDKs come with their own interfaces and evaluation patterns. Over time, those become deeply embedded in your codebase — and switching providers means rewriting every call site.</p><p>We didn't want to build another one of those. Flagship is built on <a href="https://openfeature.dev/">OpenFeature</a>, the CNCF open standard for feature flag evaluation. OpenFeature defines a common interface for flag evaluation across languages and providers — it's the same relationship that OpenTelemetry has to observability. You write your evaluation code once against the standard, and swap providers by changing a single line of configuration.</p>
            <pre><code>import { OpenFeature } from '@openfeature/server-sdk';
import { FlagshipServerProvider } from '@cloudflare/flagship/server';
await OpenFeature.setProviderAndWait(
  new FlagshipServerProvider({
    appId: 'your-app-id',
    accountId: 'your-account-id',
    authToken: 'your-cloudflare-api-token',
  })
);
const client = OpenFeature.getClient();
const showNewCheckout = await client.getBooleanValue(
  'new-checkout-flow',
  false,
  {
    targetingKey: 'user-42',
    plan: 'enterprise',
    country: 'US',
  }
);</code></pre>
            <p>If you're running on Workers with the Flagship binding, you can pass it directly to the OpenFeature provider. The binding already carries your account context, so there's nothing to configure — authentication is implicit.</p>
            <pre><code>import { OpenFeature } from '@openfeature/server-sdk';
import { FlagshipServerProvider } from '@cloudflare/flagship/server';
let initialized = false;
export default {
  async fetch(request: Request, env: Env) {
    if (!initialized) {
      await OpenFeature.setProviderAndWait(
        new FlagshipServerProvider({ binding: env.FLAGS })
      );
      initialized = true;
    }
    const client = OpenFeature.getClient();
    const showNewCheckout = await client.getBooleanValue('new-checkout-flow', false, {
      targetingKey: 'user-42',
      plan: 'enterprise',
    });
  },
};</code></pre>
            <p>Your evaluation code doesn't change — the OpenFeature interface is identical. But under the hood, Flagship evaluates flags through the binding instead of over HTTP. You get the portability of the standard with the performance of the binding.</p><p>A client-side provider is also available for browsers. It pre-fetches the flags you specify, caches them with a configurable TTL, and serves evaluations synchronously from that cache.</p>
    <div>
      <h3>What you can do with Flagship</h3>
      <a href="#what-you-can-do-with-flagship">
        
      </a>
    </div>
    <p>Flagship supports the patterns you'd expect from a feature flag service and the ones that become critical when AI-generated code is landing in production daily.</p><p>Flag values can be boolean, strings, numbers, or full JSON objects — useful for configuration blocks, UI theme definitions, or routing users to different API versions without maintaining separate code paths.</p>
    <div>
      <h4>Targeting Rules</h4>
      <a href="#targeting-rules">
        
      </a>
    </div>
    <p>Each flag can have multiple rules, evaluated in priority order. The first rule that matches wins.</p><p>A rule consists of:</p><ul><li><p>Conditions that determine whether the rule applies to a given context</p></li><li><p>A flag variation to serve when the rule matches</p></li><li><p>An optional rollout for percentage-based delivery</p></li><li><p>A priority that determines evaluation order when multiple rules are present (lower number = higher priority)</p></li></ul>
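<p>Put together, first-match-wins evaluation over priority-ordered rules can be sketched like this (the shapes and names are illustrative assumptions, not the actual Flagship schema):</p>

```typescript
// Illustrative sketch of priority-ordered, first-match-wins rule evaluation.
type Context = Record<string, string>;

interface Rule {
  priority: number;                    // lower number = higher priority
  matches: (ctx: Context) => boolean;  // all of the rule's conditions
  variation: string;                   // variation to serve on match
}

function evaluate(rules: Rule[], ctx: Context, fallback: string): string {
  const ordered = [...rules].sort((a, b) => a.priority - b.priority);
  for (const rule of ordered) {
    if (rule.matches(ctx)) return rule.variation; // first match wins
  }
  return fallback; // no rule matched: serve the default variation
}

const rules: Rule[] = [
  { priority: 2, matches: c => c.region === "us", variation: "on" },
  { priority: 1, matches: c => c.plan === "enterprise", variation: "premium" },
];

// Both rules match here, but the priority-1 rule is evaluated first
console.log(evaluate(rules, { plan: "enterprise", region: "us" }, "off")); // "premium"
```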
    <div>
      <h4>Nested Logical Conditions</h4>
      <a href="#nested-logical-conditions">
        
      </a>
    </div>
    <p>Conditions can be composed using AND/OR logic, nested up to five levels deep. A single rule can express things like:</p>
            <pre><code>(plan == "enterprise" AND region == "us") OR (user.email.endsWith("@cloudflare.com"))
=> serve("premium")</code></pre>
            <p>At the top level of a rule, multiple conditions are combined with implicit AND where all conditions must pass for the rule to match. Within each condition, you can nest AND/OR groups for more complex logic.</p>
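<p>That kind of rule can be modeled as a small condition tree. The sketch below is illustrative only (the operators and shapes are assumptions, not the actual rule schema), but it shows how nested AND/OR groups compose:</p>

```typescript
// Illustrative condition-tree evaluator for nested AND/OR logic.
type Ctx = Record<string, string>;

type Condition =
  | { op: "eq"; attr: string; value: string }
  | { op: "endsWith"; attr: string; value: string }
  | { op: "and" | "or"; children: Condition[] };

function matches(c: Condition, ctx: Ctx): boolean {
  switch (c.op) {
    case "eq":       return ctx[c.attr] === c.value;
    case "endsWith": return (ctx[c.attr] ?? "").endsWith(c.value);
    case "and":      return c.children.every(ch => matches(ch, ctx));
    case "or":       return c.children.some(ch => matches(ch, ctx));
  }
}

// (plan == "enterprise" AND region == "us") OR email endsWith "@cloudflare.com"
const premium: Condition = {
  op: "or",
  children: [
    { op: "and", children: [
      { op: "eq", attr: "plan", value: "enterprise" },
      { op: "eq", attr: "region", value: "us" },
    ]},
    { op: "endsWith", attr: "email", value: "@cloudflare.com" },
  ],
};

console.log(matches(premium, { plan: "enterprise", region: "us" })); // true
```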
    <div>
      <h4>Flag Rollouts by Percentage</h4>
      <a href="#flag-rollouts-by-percentage">
        
      </a>
    </div>
    <p>Unlike <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/"><u>gradual deployments</u></a>, which split traffic between different uploaded versions of your Worker, feature flags let you roll out behavior by percentage within a single version that is serving 100% of traffic. </p><p>Any rule can include a percentage rollout. Instead of serving a variation to everyone who matches the conditions, you serve it to a percentage of them. </p><p>Rollouts use consistent hashing on the specified context attribute. The same attribute value (userId, for example) always hashes to the same bucket, so they won't flip between variations across requests. You can ramp from 5% to 10% to 50% to 100% of users, so those who were already in the rollout stay in it.</p>
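<p>The bucketing idea can be sketched as follows. The hash function here (FNV-1a) and the key layout are assumptions for illustration; the point is only that the bucket is a pure function of the flag key and the context attribute, so it stays stable across requests and across ramp-ups:</p>

```typescript
// Sketch of percentage rollout via stable hashing (illustrative only;
// Flagship's actual hash function and bucketing are internal details).
function bucketOf(flagKey: string, attributeValue: string): number {
  // FNV-1a over "flagKey:attribute"; deterministic, so the same user
  // always lands in the same bucket for a given flag.
  let h = 0x811c9dc5;
  for (const ch of `${flagKey}:${attributeValue}`) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % 100; // bucket in [0, 100)
}

function inRollout(flagKey: string, userId: string, percentage: number): boolean {
  return bucketOf(flagKey, userId) < percentage;
}

// Ramping 5% to 50% only adds users: anyone below bucket 5 is also below 50
const included = inRollout("new-checkout-flow", "user-42", 50);
```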
    <div>
      <h3>Built for what comes next</h3>
      <a href="#built-for-what-comes-next">
        
      </a>
    </div>
    <p>The volume of AI-generated code entering production is only going to grow. Agentic workflows will push it further — agents that autonomously deploy, test, and iterate on code in production. The teams that thrive in this world won't be the ones shipping the fastest. They'll be the ones who can ship fast and still maintain control over what their users see, roll back in seconds when something breaks, and gradually expose new code paths with confidence.</p><p>That's what Flagship is built for:</p><ul><li><p><b>Evaluation across region Earth,</b> cached globally using KV.</p></li><li><p><b>A full audit trail.</b> Every flag change is recorded with field-level diffs, so you know who changed what and when.</p></li><li><p><b>Dashboard integration.</b> Anyone on the team can toggle a flag or adjust a rollout without touching code.</p></li><li><p><b>OpenFeature compatibility.</b> Adopt Flagship without rewriting your evaluation code. Leave without rewriting it either.</p></li></ul>
    <div>
      <h3>Get started with Flagship</h3>
      <a href="#get-started-with-flagship">
        
      </a>
    </div>
    <p>Starting today, Flagship is in private beta. You can request access <a href="https://www.cloudflare.com/lp/flagship-private-beta/"><u>here</u></a>. We'll share more details on pricing as we approach general availability.</p><ul><li><p>Visit the <a href="https://dash.cloudflare.com"><u>Cloudflare dashboard</u></a> to create your first Flagship app</p></li><li><p>Install the SDK: <code>npm i @cloudflare/flagship</code>, or use the Worker binding directly</p></li><li><p>Read the <a href="https://developers.cloudflare.com/flagship/get-started/"><u>documentation</u></a> for integration guides and API reference</p></li><li><p>Check out the <a href="https://github.com/cloudflare/flagship"><u>source code</u></a> for examples and to contribute</p></li></ul><p>If you're currently hardcoding flags in your Workers, or evaluating flags through an external service that adds latency to every request, give Flagship a try. We'd love to hear what you build.</p>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Feature Flags]]></category>
            <guid isPermaLink="false">3d8fI4f0doBxr4ybfKEymB</guid>
            <dc:creator>Rohan Mukherjee</dc:creator>
            <dc:creator>Abhishek Kankani</dc:creator>
        </item>
        <item>
            <title><![CDATA[Artifacts: versioned storage that speaks Git]]></title>
            <link>https://blog.cloudflare.com/artifacts-git-for-agents-beta/</link>
            <pubDate>Thu, 16 Apr 2026 13:01:00 GMT</pubDate>
            <description><![CDATA[ Give your agents, developers, and automations a home for code and data. We’ve just launched Artifacts: Git-compatible versioned storage built for agents. Create tens of millions of repos, fork from any remote, and hand off a URL to any Git client.
 ]]></description>
            <content:encoded><![CDATA[ <p>Agents have changed how we think about source control, file systems, and persisting state. Developers and agents are generating more code than ever — more code will be written over the next 5 years than in all of programming history — and it’s driven an order-of-magnitude change in the scale of the systems needed to meet this demand. Source control platforms are especially struggling here: they were built to meet the needs of humans, not a 10x change in volume driven by agents who never sleep, can work on several issues at once, and never tire.</p><p>We think there’s a need for a new primitive: a distributed, versioned filesystem that’s built for agents first and foremost, and that can serve the types of applications that are being built today.</p><p>We’re calling this Artifacts: a versioned file system that speaks Git. You can create repositories programmatically, alongside your agents, sandboxes, Workers, or any other compute paradigm, and connect to it from any regular Git client.</p><p>Want to give every agent session a repo? Artifacts can do it. Every sandbox instance? Also Artifacts. Want to create 10,000 forks from a known-good starting point? You guessed it: Artifacts again. Artifacts exposes a REST API and native Workers API for creating repositories, generating credentials, and commits for environments where a Git client isn’t the right fit (i.e. in any serverless function).</p><p>Artifacts is available in private beta and we’re aiming to open this up as a public beta by early May.</p>
            <pre><code>// Create a repo
const repo = await env.AGENT_REPOS.create(name)
// Pass back the token &amp; remote to your agent
return { remote: repo.remote, token: repo.token }</code></pre>
            
            <pre><code># Clone it and use it like any regular git remote
$ git clone https://x:${TOKEN}@123def456abc.artifacts.cloudflare.net/git/repo-13194.git
</code></pre>
            <p>That’s it. A bare repo, ready to go, created on the fly, that any Git client can operate against.</p><p>And if you want to bootstrap an Artifacts repo from an existing Git repository so that your agent can work on it independently and push its own changes, you can do that too with <code>.import()</code>:</p>
            <pre><code>interface Env {
  ARTIFACTS: Artifacts
}

export default {
  async fetch(request: Request, env: Env) {
    // Import from GitHub
    const { remote, token } = await env.ARTIFACTS.import({
      source: {
        url: "https://github.com/cloudflare/workers-sdk",
        branch: "main",
      },
      target: {
        name: "workers-sdk",
      },
    })

    // Get a handle to the imported repo
    const repo = await env.ARTIFACTS.get("workers-sdk")

    // Fork to an isolated, read-only copy
    const fork = await repo.fork("workers-sdk-review", {
      readOnly: true,
    })

    return Response.json({ remote: fork.remote, token: fork.token })
  },
}</code></pre>
            <p><a href="http://developers.cloudflare.com/artifacts/"><u>Check out the documentation</u></a> to get started, or if you want to understand how Artifacts is being used, how it was built, and how it works under the hood: read on.</p>
    <div>
      <h2>Why Git? What’s a versioned file system?</h2>
      <a href="#why-git-whats-a-versioned-file-system">
        
      </a>
    </div>
    <p>Agents know Git. It’s deep in the training data of most models. The happy path <i>and </i>the edge cases are well known to agents, and code-optimized models (and/or harnesses) are particularly good at using git.</p><p>Further, Git’s data model is not only good for source control, but for <i>anything</i> where you need to track state, time travel, and persist large amounts of small data. Code, config, session prompts and agent history: all of these are things (“objects”) that you often want to store in small chunks (“commits”) and be able to revert or otherwise roll back to (“history”). </p><p>We could have invented an entirely new, bespoke protocol… but then you have the bootstrap problem. AI models don’t know it, so you have to distribute skills, or a CLI, or hope that users are plugged into your docs MCP… all of that adds friction.

If we can just give agents an authenticated, secure HTTPS Git remote URL and have them operate as if it were a Git repo, though? That turns out to work pretty well. And for non-Git-speaking clients — such as a Cloudflare Worker, a Lambda function, or a Node.js app — we’ve exposed a REST API and (soon) language-specific SDKs. Those clients can also use <a href="https://isomorphic-git.org/"><u>isomorphic-git</u></a>, but in many cases a simpler TypeScript API can reduce the API surface needed.</p>
    <div>
      <h3>Not just for source control</h3>
      <a href="#not-just-for-source-control">
        
      </a>
    </div>
    <p>Artifacts’ Git API might make you think it’s just for source control, but the Git data model turns out to be a powerful way to persist state for <i>any</i> data, with the ability to fork, time-travel, and diff it.</p><p>Inside Cloudflare, we’re using Artifacts for our internal agents: automatically persisting the current state of the filesystem <i>and</i> the session history in a per-session Artifacts repo. This enables us to:</p><ul><li><p>Persist sandbox state without having to provision (and keep) block storage around.</p></li><li><p>Share sessions with others and allow them to time-travel back through both session (prompt) state <i>and</i> file state, irrespective of whether there were commits to the “actual” repository (source control).</p></li><li><p>And the best part: <i>fork</i> a session from any point, allowing our team to share sessions with co-workers. Debugging something and want another set of eyes? Send a URL and fork it. Want to riff on an API? Have a co-worker fork it and pick up from where you left off.</p></li></ul><p>We’ve also spoken to teams who want to use Artifacts in cases where the Git protocol isn’t a requirement at all, but the semantics (reverting, cloning, diffing) <i>are</i>. Storing per-customer config as part of your product, and want the ability to roll back? Artifacts can be a good representation of this.</p><p>We’re excited to see teams explore the non-Git use-cases around Artifacts just as much as the Git-focused ones.</p>
    <div>
      <h2>Under the hood</h2>
      <a href="#under-the-hood">
        
      </a>
    </div>
    <p>Artifacts is built on top of Durable Objects. The ability to create millions (or tens of millions) of instances of stateful, isolated compute is inherent to how Durable Objects work today, and that’s exactly what we needed to support millions of Git repos per namespace.</p><p>Major League Baseball (for live game fan-out), Confluence Whiteboards, and our own <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> use Durable Objects under the hood at significant scale, so we’re building this on a primitive that we’ve had in production for some time.</p><p>What we did need, however, was a Git server implementation that could run on Cloudflare Workers. It needed to be small, as complete as possible, extensible (<a href="https://git-scm.com/docs/git-notes"><u>notes</u></a>, <a href="https://git-lfs.com/"><u>LFS</u></a>), and efficient. So we built one in <a href="https://ziglang.org/"><u>Zig</u></a>, and compiled it to Wasm.</p><p>Why did we use Zig? Three reasons:</p><ol><li><p>The entire Git protocol engine is written in pure Zig (no libc), compiled to a ~100KB WASM binary (with room for optimization!). It implements SHA-1, zlib inflate/deflate, delta encoding/decoding, pack parsing, and the full Git smart HTTP protocol, all from scratch, with zero external dependencies other than the standard library.</p></li><li><p>Zig gives us manual control over memory allocation, which is important in constrained environments like Durable Objects. The Zig build system lets us easily share code between the WASM runtime (production) and native builds (testing against libgit2 for correctness verification).</p></li><li><p>The WASM module communicates with the JS host via a thin callback interface: 11 host-imported functions for storage operations (<code>host_get_object</code>, <code>host_put_object</code>, etc.) and one for streaming output (<code>host_emit_bytes</code>). The WASM side is fully testable in isolation.</p></li></ol><p>Under the hood, Artifacts also uses R2 (for snapshots) and KV (for tracking auth tokens):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/35SxJbQfntIscpotc0GBt8/48ae11213d7483c9b488321baacf78e7/BLOG-3269_2.png" />
          </figure><p><sup><code><i>How Artifacts works (Workers, Durable Objects, and WebAssembly)</i></code></sup></p><p>A Worker acts as the front end, handling authentication &amp; authorization, emitting key metrics (errors, latency), and looking up each Artifacts repository (Durable Object) on the fly.</p><p>Specifically:</p><ul><li><p>Files are stored in the underlying Durable Object’s SQLite database.</p><ul><li><p>Durable Object storage has a 2MB max row size, so large Git objects are chunked and stored across multiple rows.</p></li><li><p>We make use of the sync KV API (<code>state.storage.kv</code>), which is backed by SQLite under the hood.</p></li></ul></li><li><p>Durable Objects have ~128MB memory limits: this means we can spawn tens of millions of them (they’re fast and light) but have to work within those limits.</p><ul><li><p>We make heavy use of streaming in both the fetch and push paths, directly returning a <code>ReadableStream&lt;Uint8Array&gt;</code> built from the raw WASM output chunks.</p></li><li><p>We avoid calculating our own Git deltas: instead, the raw deltas and base hashes are persisted alongside the resolved object. On fetch, if the requesting client already has the base object, Zig emits the delta instead of the full object, which saves bandwidth <i>and</i> memory.</p></li></ul></li><li><p>We support both v1 and v2 of the Git protocol.</p><ul><li><p>We support capabilities including <code>ls-refs</code>, shallow clones (<code>deepen</code>, <code>deepen-since</code>, <code>deepen-relative</code>), and incremental fetch with have/want negotiation.</p></li><li><p>We have an extensive test suite with conformance tests against Git clients and verification tests against a libgit2 server designed to validate protocol support.</p></li></ul></li></ul><p>On top of this, we have native support for <a href="https://git-scm.com/docs/git-notes"><u>git-notes</u></a>. Artifacts is designed to be agent-first, and notes enable agents to attach metadata to Git objects. This includes prompts, agent attribution, and other metadata that can be read from and written to the repo without mutating the objects themselves.</p>
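<p>The streaming path described above can be sketched in TypeScript. This is our own illustration, not the production implementation: <code>runWasmFetch</code> is a hypothetical stand-in for the Zig engine, and the single emit callback plays the role of <code>host_emit_bytes</code>.</p>

```typescript
// Sketch only: surface chunks pushed through an emit callback (the role
// played by host_emit_bytes) as a ReadableStream<Uint8Array>, so a Worker
// can return Git pack data without buffering it all in memory.

function streamFromEmitter(
  run: (emit: (chunk: Uint8Array) => void) => Promise<void>
): ReadableStream<Uint8Array> {
  return new ReadableStream<Uint8Array>({
    start(controller) {
      // The engine pushes chunks as it produces them; we forward each one.
      run((chunk) => controller.enqueue(chunk))
        .then(() => controller.close())
        .catch((err) => controller.error(err));
    },
  });
}

// Hypothetical stand-in for the Wasm engine emitting pack data in chunks.
async function runWasmFetch(emit: (chunk: Uint8Array) => void): Promise<void> {
  emit(new TextEncoder().encode("PACK"));
  emit(new Uint8Array([0, 0, 0, 2])); // pack version header
}

// In a Worker handler: return new Response(streamFromEmitter(runWasmFetch));
```

<p>Because the response stream is built directly from emitted chunks, the Durable Object never has to hold an entire packfile within its ~128MB memory limit.</p>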
    <div>
      <h2>Big repos, big problems? Meet ArtifactFS.</h2>
      <a href="#big-repos-big-problems-meet-artifactfs">
        
      </a>
    </div>
    <p>Most repos aren’t that big, and Git is <a href="https://github.blog/open-source/git/gits-database-internals-i-packed-object-store/"><u>designed to be extremely efficient</u></a> in terms of storage: most repositories take only a few seconds to clone at most, and that’s dominated by network setup time, auth, and <a href="https://git-scm.com/book/ms/v2/Git-Internals-Git-Objects"><u>checksumming</u></a>. In most agent or sandbox scenarios, that’s workable: just clone the repo as the sandbox starts and get to work.</p><p>But what about a multi-GB repository, or a repo with millions of objects? How can we clone that repo quickly, without blocking the agent’s ability to get to work for minutes and consuming compute?</p><p>A popular web framework (at 2.4GB and with a long history!) takes close to 2 minutes to clone. A shallow clone is faster, but not enough to get down to single-digit seconds, and we don’t always want to omit history (agents find it useful).</p><p>Can we get large repos down to ~10-15 seconds so that our agent can get to work? Well, yes: with a few tricks.</p><p>As part of our launch of Artifacts, <a href="https://github.com/cloudflare/artifact-fs"><u>we’re open-sourcing ArtifactFS</u></a>, a filesystem driver designed to mount large Git repos as quickly as possible, hydrating file contents on the fly instead of blocking on the initial clone. It's ideal for agents, sandboxes, containers, and other use cases where startup time is critical. If you can shave ~90-100 seconds off your sandbox startup time for every large repo, and you’re running 10,000 of those sandbox jobs per month, that’s ~278 sandbox hours saved.</p><p>You can think of ArtifactFS as “Git clone but async”:</p><ul><li><p>ArtifactFS runs a blobless clone of a Git repository: it fetches the file tree and refs, but not the file contents. It can do that during sandbox startup, which then allows your agent harness to get to work.</p></li><li><p>In the background, it starts to hydrate (download) file contents concurrently via a lightweight daemon.</p></li><li><p>It prioritizes files that agents typically want to operate on first: package manifests (<code>package.json</code>, <code>go.mod</code>), configuration files, and code, deprioritizing binary blobs (images, executables, and other non-text files) where possible so that agents can scan the file tree while the files themselves are hydrated.</p></li><li><p>If a file isn’t fully hydrated when the agent tries to read it, the read will block until it is.</p></li></ul><p>The filesystem does not attempt to “sync” files back to the remote repository: with thousands or millions of objects, that’s typically very slow, and since we’re speaking Git, we don’t need to. Your agent just needs to commit and push, as it would with any repository. No new APIs to learn.</p><p>Importantly, ArtifactFS works with any Git remote, not just our own Artifacts. If you’re cloning large repos from GitHub, GitLab, or self-hosted Git infrastructure, you can still use ArtifactFS.</p>
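<p>The prioritization step can be sketched as a simple scoring function. The file lists below are our own illustration; the actual heuristics in artifact-fs may differ:</p>

```typescript
// Sketch of hydration ordering: manifests hydrate first, code and config
// next, binary blobs last. Illustrative sets, not artifact-fs internals.
const MANIFESTS = new Set(["package.json", "go.mod", "Cargo.toml", "requirements.txt"]);
const BINARY_EXTENSIONS = new Set(["png", "jpg", "gif", "zip", "exe", "wasm"]);

function hydrationPriority(path: string): number {
  const name = path.split("/").pop() ?? path;
  const ext = name.includes(".") ? name.split(".").pop()!.toLowerCase() : "";
  if (MANIFESTS.has(name)) return 0;        // fetch first
  if (BINARY_EXTENSIONS.has(ext)) return 2; // fetch last
  return 1;                                 // config and code in between
}

// Order a file tree for background hydration:
function hydrationOrder(paths: string[]): string[] {
  return [...paths].sort((a, b) => hydrationPriority(a) - hydrationPriority(b));
}
```

<p>A blocked read on a not-yet-hydrated file simply jumps that file to the front of the queue, so agents rarely wait on the files they actually touch.</p>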
    <div>
      <h2>What’s coming?</h2>
      <a href="#whats-coming">
        
      </a>
    </div>
    <p>Our release today is just the beta, and we’re already working on a number of features that you’ll see land over the next few weeks:</p><ul><li><p>Expanding the <a href="https://developers.cloudflare.com/artifacts/observability/metrics/"><u>available metrics</u></a> we expose. Today we’re shipping metrics for key operation counts per namespace and repo, and stored bytes per repo, so that managing millions of Artifacts isn’t toilsome.</p></li><li><p>Support for <a href="https://developers.cloudflare.com/queues/event-subscriptions/"><u>Event Subscriptions</u></a> for repo-level events, so that we can emit events on pushes, pulls, clones, and forks to any repository within a namespace. This will also allow you to consume events, write webhooks, and use those events to notify end-users, drive lifecycle events within your products, and/or run post-push jobs (like CI/CD).</p></li><li><p>Native TypeScript, Go, and Python client SDKs for interacting with the Artifacts API.</p></li><li><p>Repo-level search APIs and namespace-wide search APIs, e.g. “find all the repos with a <code>package.json</code> file”.</p></li></ul><p>We’re also planning an API for <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Workers Builds</u></a>, allowing you to run CI/CD jobs from any agent-driven workflow.</p>
    <div>
      <h2>What will it cost me?</h2>
      <a href="#what-will-it-cost-me">
        
      </a>
    </div>
    <p>We’re still early with Artifacts, but want our pricing to work at agent-scale: it needs to be cost effective to have millions of repos, unused (or rarely used) repos shouldn’t be a drag, and our pricing should match the massively-single-tenant nature of agents.</p><p>You also shouldn’t have to think about whether a repo is going to be used or not, whether it’s hot or cold, and/or whether an agent is going to wake it up. We’ll charge you for the storage you consume and the operations (e.g. clones, forks, pushes &amp; pulls) against each repo.</p><table><tr><th><p></p></th><th><p><b>$/unit</b></p></th><th><p><b>Included</b></p></th></tr><tr><td><p><b>Operations</b></p></td><td><p>$0.15 per 1,000 operations</p></td><td><p>First 10k included (per month)</p></td></tr><tr><td><p><b>Storage</b></p></td><td><p>$0.50/GB-mo</p></td><td><p>First 1GB included.</p></td></tr></table><p>Big, busy repos will cost more than smaller, less-often-used repos, whether you have 1,000, 100,000, or 10 million of them.</p><p>We’ll also be bringing Artifacts to the Workers Free plan (with some fair limits) as the beta progresses, and we’ll provide updates throughout the beta should this pricing change and ahead of billing any usage.</p>
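<p>As a worked example, here’s how the rates above compose. This is our own arithmetic helper, not an official billing API, and pricing may change during the beta:</p>

```typescript
// Beta pricing sketch: $0.15 per 1,000 operations after the first 10k/month,
// $0.50/GB-month after the first GB. Our own helper, not a Cloudflare API.
function monthlyCostUSD(operations: number, storedGB: number): number {
  const billableOps = Math.max(0, operations - 10_000);
  const opsCost = (billableOps / 1_000) * 0.15;
  const billableGB = Math.max(0, storedGB - 1);
  const storageCost = billableGB * 0.5;
  return opsCost + storageCost;
}

// e.g. 100k operations and 5GB stored in a month:
// 90k billable ops ($13.50) + 4 billable GB ($2.00) = $15.50
```

<p>Note that storage and operations are metered per namespace and repo, so a long tail of mostly idle repos stays within the included allowances.</p>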
    <div>
      <h2>Where do I start? </h2>
      <a href="#where-do-i-start">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xLMMKCN1HNGWbkSyq0tDZ/2a8d49383957804f3ce204783e11ae80/BLOG-3269_3.png" />
          </figure><p>Artifacts is launching in private beta, and we expect public beta to be ready in early May (2026, to be clear!). We’ll be allowing customers in progressively over the next few weeks, and <a href="https://forms.gle/DwBoPRa3CWQ8ajFp7"><u>you can register interest for the private beta</u></a> directly.</p><p>In the meantime, you can learn more about Artifacts by:</p><ul><li><p>Reading the <a href="http://developers.cloudflare.com/artifacts/get-started/workers/"><u>getting started guide</u></a> in the docs.</p></li><li><p>Visiting the Cloudflare dashboard (Build &gt; Storage &amp; Databases &gt; Artifacts)</p></li><li><p>Reading through the <a href="http://developers.cloudflare.com/artifacts/api/rest-api/"><u>REST API examples</u></a></p></li><li><p>Learning more about <a href="http://developers.cloudflare.com/artifacts/concepts/how-artifacts-works/"><u>how Artifacts works</u></a> under the hood</p></li></ul><p>Follow <a href="http://developers.cloudflare.com/changelog/product/artifacts/"><u>the changelog</u></a> to track the beta as it progresses.</p>
    <div>
      <h2>Watch on Cloudflare TV</h2>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div>
<p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">2sshzOlmGVsrtBz2mgeceE</guid>
            <dc:creator>Dillon Mulroy</dc:creator>
            <dc:creator>Matt Carey</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Deploy Postgres and MySQL databases with PlanetScale + Workers]]></title>
            <link>https://blog.cloudflare.com/deploy-planetscale-postgres-with-workers/</link>
            <pubDate>Thu, 16 Apr 2026 13:00:22 GMT</pubDate>
            <description><![CDATA[ Learn how to deploy PlanetScale Postgres and MySQL databases via Cloudflare and connect Cloudflare Workers. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare announced our PlanetScale partnership last September to give <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a> direct access to Postgres and MySQL databases for fast, full-stack applications.</p><p>Soon, we’re bringing our technologies even closer: you’ll be able to create PlanetScale Postgres and MySQL databases directly from the Cloudflare dashboard and API, and have them billed to your Cloudflare account. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Tj4gJrV5hxlIWxlmoXVZe/7661c1e47c0c868b88301b5f4aca4441/BLOG-3213_2.png" />
          </figure><p>You choose the data storage that fits your Worker application needs and keep a single system for billing as a Cloudflare self-serve or enterprise customer. Cloudflare credits like those given in our <a href="https://www.cloudflare.com/forstartups/"><u>startup program</u></a> or Cloudflare committed spend can be used towards PlanetScale databases.</p>
    <div>
      <h2>Postgres &amp; MySQL for Workers</h2>
      <a href="#postgres-mysql-for-workers">
        
      </a>
    </div>
    <p>SQL relational databases like Postgres and MySQL are a foundation of modern applications. In particular, Postgres has risen in developer popularity with its rich tooling ecosystem (ORMs, GUIs, etc.) and extensions like <a href="https://github.com/pgvector/pgvector/"><u>pgvector</u></a> for building vector search in AI-driven applications. Postgres is the default choice for most developers who need a powerful, flexible, and scalable database to power their applications.</p><p>You can already connect your PlanetScale account and create Postgres databases directly from the <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?modal=1">Cloudflare dashboard</a> for your Workers. Starting next month, a new Cloudflare subscription will bill new PlanetScale databases directly to your Cloudflare account as a self-serve or enterprise user.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CHTq1qoaNGSNO5atsiS8J/a8eba618b77362aa467d94c4f625c600/BLOG-3213_3.png" />
          </figure><p><sup><i>How to create PlanetScale databases via the </i></sup><a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?modal=1"><sup><i><u>Cloudflare dashboard</u></i></sup></a><sup><i> after your PlanetScale account is connected. Cloudflare billing is coming next month.</i></sup></p><p>With our built-in integration, PlanetScale databases automatically work with Workers using Hyperdrive, our database connectivity service. <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive</u></a> manages database connection pools and query caching to make database queries fast and reliable. You just add a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>binding</u></a> to your Worker’s <a href="https://developers.cloudflare.com/workers/wrangler/configuration/#hyperdrive"><u>config file</u></a>: </p>
            <pre><code>// wrangler.jsonc file
{
  "hyperdrive": [
    {
      "binding": "DATABASE",
      "id": &lt;AUTO_CREATED_ID&gt;
    }
  ]
}
</code></pre>
            <p>And start running SQL queries via your Worker with your Postgres client of choice:</p>
            <pre><code>import { Client } from "pg";

export default {
  async fetch(request, env, ctx) {
   
    const client = new Client({ connectionString: env.DATABASE.connectionString });
    await client.connect();

    const result = await client.query("SELECT * FROM pg_tables");
    // ...
  },
};
</code></pre>
            
    <div>
      <h2>PlanetScale developer experience</h2>
      <a href="#planetscale-developer-experience">
        
      </a>
    </div>
    <p>PlanetScale was the obvious choice to provide to the Workers community due to its unrivaled performance and reliability. Developers can choose from two of the most popular relational databases, <a href="https://planetscale.com/docs/postgres/postgres-compatibility"><u>with Postgres</u></a> or Vitess MySQL. PlanetScale matches how Cloudflare treats performance and reliability as key features of a developer platform. And with features like <a href="https://planetscale.com/docs/postgres/monitoring/query-insights"><u>query insights</u></a> and <a href="https://planetscale.com/docs/connect/ai-tooling"><u>agent-driven</u></a> workflows for improving SQL query performance, plus <a href="https://planetscale.com/docs/postgres/branching"><u>branching</u></a> for deploying code safely (including database changes), the PlanetScale database developer experience is first-class.</p><p>Cloudflare users get the exact same PlanetScale database developer experience. Your PlanetScale databases can be deployed directly from Cloudflare with connections managed via Hyperdrive, which already makes your existing regional databases fast with global Workers. This means access to the same PlanetScale <a href="https://planetscale.com/docs/plans/planetscale-skus"><u>database clusters</u></a> at standard PlanetScale <a href="https://planetscale.com/pricing"><u>pricing</u></a>, with all features included, like query insights and a detailed breakdown of <a href="https://planetscale.com/docs/billing#organization-usage-and-billing-page"><u>usage and costs</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Pfh4oM8zQSUGJKGEsxF3W/700f627c38279d9d90337b38de72b44e/BLOG-3213_4.png" />
          </figure><p><sup><i>A single node on PlanetScale Postgres starts at </i></sup><a href="https://planetscale.com/blog/5-dollar-planetscale"><sup><i><u>$5/month</u></i></sup></a><sup><i>.</i></sup></p>
    <div>
      <h2>Workers placement</h2>
      <a href="#workers-placement">
        
      </a>
    </div>
    <p>With centralized databases, Workers can run right next to your primary database to reduce latency with an <a href="https://developers.cloudflare.com/workers/configuration/placement/#configure-explicit-placement-hints"><u>explicit placement hint</u></a>. By default, Workers execute closest to a user request, which adds network latency when querying a central database, especially for multiple queries. Instead, you can configure your Worker to execute in the Cloudflare data center closest to your PlanetScale database. In the future, Cloudflare will be able to automatically set a placement hint based on the location of your PlanetScale database and reduce network latency to single-digit milliseconds.</p>
            <pre><code>{
  "placement": {
    "region": "aws:us-east-1"
  }
}
</code></pre>
            
    <div>
      <h2>Coming soon</h2>
      <a href="#coming-soon">
        
      </a>
    </div>
    <p>You can deploy a PlanetScale Postgres database or connect an existing PlanetScale database to Workers today via the <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?modal=1"><u>Cloudflare dashboard</u></a>. Everything today is still billed via PlanetScale.</p><p>Launching next month, new PlanetScale databases can be billed to your Cloudflare account. </p><p>We are building more with our PlanetScale partners, such as Cloudflare API integration, so tell us what you’d like to see next.</p> ]]></content:encoded>
            <category><![CDATA[SQL]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Postgres]]></category>
            <category><![CDATA[MySQL]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[PlanetScale]]></category>
            <guid isPermaLink="false">1IGJnHwj5QVRJm9iCdEqYV</guid>
            <dc:creator>Vy Ton</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Project Think: building the next generation of AI agents on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/project-think/</link>
            <pubDate>Wed, 15 Apr 2026 13:01:00 GMT</pubDate>
            <description><![CDATA[ Announcing a preview of the next edition of the Agents SDK — from lightweight primitives to a batteries-included platform for AI agents that think, act, and persist.
 ]]></description>
            <content:encoded><![CDATA[ <p>Today, we're introducing Project Think: the next generation of the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>. Project Think is a set of new primitives for building long-running agents (durable execution, sub-agents, sandboxed code execution, persistent sessions) and an opinionated base class that wires them all together. Use the primitives to build exactly what you need, or use the base class to get started fast.</p><p>Something happened earlier this year that changed how we think about AI. Tools like <a href="https://github.com/badlogic/pi-mono"><u>Pi</u></a>, <a href="https://github.com/openclaw"><u>OpenClaw</u></a>, <a href="https://docs.anthropic.com/en/docs/agents"><u>Claude Code</u></a>, and <a href="https://openai.com/codex"><u>Codex</u></a> proved a simple but powerful idea: give an LLM the ability to read files, write code, execute it, and remember what it learned, and you get something that looks less like a developer tool and more like a general-purpose assistant.</p><p>These coding agents aren't just writing code anymore. People are using them to manage calendars, analyze datasets, negotiate purchases, file taxes, and automate entire business workflows. The pattern is always the same: the agent reads context, reasons about it, writes code to take action, observes the result, and iterates. Code is the universal medium of action.</p><p>Our team has been using these coding agents every day. And we kept running into the same walls:</p><ul><li><p><b>They only run on your laptop or an expensive VPS:</b> there's no sharing, no collaboration, no handoff between devices.</p></li><li><p><b>They're expensive when idle</b>: a fixed monthly cost whether the agent is working or not. 
Scale that to a team, or a company, and it adds up fast.</p></li><li><p><b>They require management and manual setup</b>: installing dependencies, managing updates, configuring identity and secrets.</p></li></ul><p>And there's a deeper structural issue. Traditional applications serve many users from one instance. As mentioned in our Welcome to Agents Week post, <a href="https://blog.cloudflare.com/welcome-to-agents-week/"><u>agents are one-to-one</u></a>. Each agent is a unique instance, serving one user, running one task. A restaurant has a menu and a kitchen optimized to churn out dishes at volume. An agent is more like a personal chef: different ingredients, different techniques, different tools every time.</p><p>That fundamentally changes the scaling math. If a hundred million knowledge workers each use an agentic assistant at even modest concurrency, you need capacity for tens of millions of simultaneous sessions. At current per-container costs, that's unsustainable. We need a different foundation.</p><p>That's what we've been building.</p>
    <div>
      <h2>Introducing Project Think</h2>
      <a href="#introducing-project-think">
        
      </a>
    </div>
    <p>Project Think ships a set of new primitives for the Agents SDK:</p><ul><li><p><b>Durable execution</b> with fibers: crash recovery, checkpointing, automatic keepalive</p></li><li><p><b>Sub-agents</b>: isolated child agents with their own SQLite and typed RPC</p></li><li><p><b>Persistent sessions</b>: tree-structured messages, forking, compaction, full-text search</p></li><li><p><b>Sandboxed code execution</b>: Dynamic Workers, codemode, runtime npm resolution</p></li><li><p><b>The execution ladder</b>: workspace, isolate, npm, browser, sandbox</p></li><li><p><b>Self-authored extensions</b>: agents that write their own tools at runtime</p></li></ul><p>Each of these is usable directly with the Agent base class. Build exactly what you need with the primitives, or use the Think base class to get started fast. Let's look at what each one does.</p>
    <div>
      <h2>Long-running agents</h2>
      <a href="#long-running-agents">
        
      </a>
    </div>
    <p>Agents, as they exist today, are ephemeral. They run for a session, tied to a single process or device, and then they are gone. A coding agent that dies when your laptop sleeps, that’s a tool. An agent that persists — that can wake up on demand, continue work after interruptions, and carry forward the state without depending on your local runtime — that starts to look like infrastructure. And it changes the scaling model for agents completely.</p><p>The Agents SDK builds on <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> to give every agent an identity, persistent state, and the ability to wake on message. This is the <a href="https://en.wikipedia.org/wiki/Actor_model"><u>actor model</u></a>: each agent is an addressable entity with its own SQLite database. It consumes zero compute when hibernated. When something happens (an HTTP request, a WebSocket message, a scheduled alarm, an inbound email) the platform wakes the agent, loads its state, and hands it the event. The agent does its work, then goes back to sleep.</p><table><tr><th><p>
</p></th><th><p><b>VMs / Containers</b></p></th><th><p><b>Durable Objects</b></p></th></tr><tr><td><p><b>Idle cost</b></p></td><td><p>Full compute cost, always</p></td><td><p>Zero (hibernated)</p></td></tr><tr><td><p><b>Scaling</b></p></td><td><p>Provision and manage capacity</p></td><td><p>Automatic, per-agent</p></td></tr><tr><td><p><b>State</b></p></td><td><p>External database required</p></td><td><p>Built-in SQLite</p></td></tr><tr><td><p><b>Recovery</b></p></td><td><p>You build it (process managers, health checks)</p></td><td><p>Platform restarts, state survives</p></td></tr><tr><td><p><b>Identity / routing</b></p></td><td><p>You build it (load balancers, sticky sessions)</p></td><td><p>Built-in (name → agent)</p></td></tr><tr><td><p><b>10,000 agents, each active 1% of the time</b></p></td><td><p>10,000 always-on instances</p></td><td><p>~100 active at any moment</p></td></tr></table><p>This changes the economics of running agents at scale. Instead of "one expensive agent per power user," you can build "one agent per customer" or "one agent per task" or "one agent per email thread." The marginal cost of spawning a new agent is effectively zero.</p>
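<p>The name-to-agent addressing can be illustrated with a toy in-memory sketch, where a plain Map stands in for Durable Objects and their SQLite storage. This is not the Agents SDK API, just the routing idea:</p>

```typescript
// Toy actor-model sketch: each named agent is created lazily on first
// message, holds its own isolated state, and costs nothing until addressed.
// Stands in for Durable Objects; not the real Agents SDK.

class ToyAgent {
  private state = new Map<string, unknown>(); // isolated per-agent state
  constructor(public readonly name: string) {}
  onMessage(key: string, value: unknown): void {
    this.state.set(key, value);
  }
  get(key: string): unknown {
    return this.state.get(key);
  }
}

class ToyNamespace {
  private agents = new Map<string, ToyAgent>();
  // Deterministic routing: the same name always reaches the same agent.
  get(name: string): ToyAgent {
    let agent = this.agents.get(name);
    if (!agent) {
      agent = new ToyAgent(name);
      this.agents.set(name, agent);
    }
    return agent;
  }
}
```

<p>The real platform adds what the toy omits: hibernation between events, durable storage, and wake-on-message, which is what makes "one agent per customer" economical.</p>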
    <div>
      <h3>Surviving crashes: durable execution with fibers</h3>
      <a href="#surviving-crashes-durable-execution-with-fibers">
        
      </a>
    </div>
    <p>An LLM call takes 30 seconds. A multi-turn agent loop can run for much longer. At any point during that window, the execution environment can vanish: a deploy, a platform restart, hitting resource limits. The upstream connection to the model provider is severed permanently, in-memory state is lost, and connected clients see the stream stop with no explanation.</p><p><a href="https://developers.cloudflare.com/agents/api-reference/durable-execution/"><code><u>runFiber()</u></code></a> solves this. A fiber is a durable function invocation: registered in SQLite before execution begins, checkpointable at any point via <code>stash()</code>, and recoverable on restart via <code>onFiberRecovered</code>.</p>
            <pre><code>import { Agent } from "agents";

export class ResearchAgent extends Agent {
  async startResearch(topic: string) {
    void this.runFiber("research", async (ctx) =&gt; {
      const findings = [];

      for (let i = 0; i &lt; 10; i++) {
        const result = await this.callLLM(`Research step ${i}: ${topic}`);
        findings.push(result);

        // Checkpoint: if evicted, we resume from here
        ctx.stash({ findings, step: i, topic });

        this.broadcast({ type: "progress", step: i });
      }

      return { findings };
    });
  }

  async onFiberRecovered(ctx) {
    if (ctx.name === "research" &amp;&amp; ctx.snapshot) {
      const { topic } = ctx.snapshot;
      await this.startResearch(topic);
    }
  }
}
</code></pre>
            <p>The SDK keeps the agent alive automatically during fiber execution; no special configuration is needed. For work measured in minutes, <code>keepAlive()</code> / <code>keepAliveWhile()</code> prevent eviction during active work. For longer operations (CI pipelines, design reviews, video generation) the agent starts the work, persists the job ID, hibernates, and wakes on callback.</p>
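<p>That hibernate-and-wake pattern for long operations can be sketched with a toy tracker, where a Map stands in for the agent's durable storage (the real Agent class persists to SQLite and is woken by the platform when the callback arrives):</p>

```typescript
// Toy sketch of "start, persist, hibernate, wake on callback": nothing about
// the job lives only in memory, so the process can vanish between start and
// callback. Map stands in for the agent's SQLite-backed storage.

type JobState = { jobId: string; status: "running" | "done"; result?: string };

class JobTracker {
  private store = new Map<string, JobState>(); // stands in for durable storage

  start(jobId: string): void {
    // Persist before "hibernating": the job ID must survive eviction.
    this.store.set(jobId, { jobId, status: "running" });
  }

  // Invoked when the external system (CI, video encoder, ...) calls back.
  onCallback(jobId: string, result: string): JobState | undefined {
    const job = this.store.get(jobId);
    if (!job) return undefined; // unknown job: ignore stray callbacks
    const done: JobState = { ...job, status: "done", result };
    this.store.set(jobId, done);
    return done;
  }
}
```

<p>Between <code>start()</code> and <code>onCallback()</code> the agent consumes no compute; the callback is just another event that wakes it.</p>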
    <div>
      <h3>Delegating work: sub-agents via Facets</h3>
      <a href="#delegating-work-sub-agents-via-facets">
        
      </a>
    </div>
    <p>A single agent shouldn't do everything itself. <a href="https://developers.cloudflare.com/agents/api-reference/sub-agents/"><u>Sub-agents</u></a> are child Durable Objects colocated with the parent via <a href="https://blog.cloudflare.com/durable-object-facets-dynamic-workers/"><u>Facets</u></a>, each with their own isolated SQLite and execution context:</p>
            <pre><code>import { Agent } from "agents";

export class ResearchAgent extends Agent {
  async search(query: string) { /* ... */ }
}

export class ReviewAgent extends Agent {
  async analyze(query: string) { /* ... */ }
}

export class Orchestrator extends Agent {
  async handleTask(task: string) {
    const researcher = await this.subAgent(ResearchAgent, "research");
    const reviewer = await this.subAgent(ReviewAgent, "review");

    const [research, review] = await Promise.all([
      researcher.search(task),
      reviewer.analyze(task)
    ]);

    return this.synthesize(research, review);
  }
}
</code></pre>
            <p>Sub-agents are isolated at the storage level. Each one gets its own SQLite database, and there’s no implicit sharing of data between them. This isolation is enforced by the runtime, and because sub-agents are colocated with the parent, sub-agent RPC has the latency of a function call. TypeScript catches misuse at compile time.</p>
    <div>
      <h3>Conversations that persist: the Session API</h3>
      <a href="#conversations-that-persist-the-session-api">
        
      </a>
    </div>
    <p>Agents that run for days or weeks need more than the typical flat list of messages. The experimental <a href="https://developers.cloudflare.com/agents/api-reference/sessions/"><u>Session API</u></a> models this explicitly. Available on the Agent base class, it stores conversations as trees, where each message has a <code>parent_id</code>. This enables forking (explore an alternative without losing the original path), non-destructive compaction (summarize older messages rather than deleting them), and full-text search across conversation history via <a href="https://www.sqlite.org/fts5.html"><u>FTS5</u></a>.</p>
            <pre><code>import { Agent } from "agents";
import { Session, SessionManager } from "agents/experimental/memory/session";

export class MyAgent extends Agent {
  sessions = SessionManager.create(this);

  async onStart() {
    const session = this.sessions.create("main");
    const history = session.getHistory();
    const forked = this.sessions.fork(session.id, messageId, "alternative-approach");
  }
}
</code></pre>
            <p>Session is usable directly with <code>Agent</code>, and it's the storage layer that the <code>Think</code> base class builds on.</p>
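<p>The tree shape behind this is easy to picture. Here is a framework-free sketch in plain TypeScript (this is an illustration of the <code>parent_id</code> idea, not the Session API itself): a fork is just a new message whose parent is an earlier node, so both branches share history up to that point.</p>

```typescript
// Minimal illustration of tree-structured messages; not the Session API,
// just the parent_id model it builds on.
type Message = { id: number; parentId: number | null; text: string };

const messages: Message[] = [
  { id: 1, parentId: null, text: "plan the trip" },
  { id: 2, parentId: 1, text: "option A: fly" },
  { id: 3, parentId: 1, text: "option B: train" }, // fork from message 1
];

// Walking up parent pointers recovers one branch's linear history.
function branchHistory(leafId: number): string[] {
  const byId = new Map(messages.map(m => [m.id, m]));
  const out: string[] = [];
  for (let m = byId.get(leafId); m; m = m.parentId ? byId.get(m.parentId) : undefined) {
    out.unshift(m.text);
  }
  return out;
}
```

<p>Both forks share message 1, so neither branch loses the original path.</p>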
    <div>
      <h2>From tool calls to code execution</h2>
      <a href="#from-tool-calls-to-code-execution">
        
      </a>
    </div>
    <p>Conventional tool-calling has an awkward shape. The model calls a tool, pulls the result back through the context window, calls another tool, pulls that back, and so on. As the tool surface grows, this gets both expensive and clumsy. A hundred files means a hundred round-trips through the model.</p><p>But <a href="https://blog.cloudflare.com/code-mode/"><u>models are better at writing code to use a system than they are at playing the tool-calling game</u></a>. This is the insight behind <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><u>@cloudflare/codemode</u></a>: instead of sequential tool calls, the LLM writes a single program that handles the entire task.</p>
            <pre><code>// The LLM writes this. It runs in a sandboxed Dynamic Worker.
const files = await tools.find({ pattern: "**/*.ts" });
const results = [];
for (const file of files) {
  const content = await tools.read({ path: file });
  if (content.includes("TODO")) {
    results.push({ file, todos: content.match(/\/\/ TODO:.*/g) });
  }
}
return results;
</code></pre>
            <p>Instead of 100 round-trips to the model, you just run a single program. This leads to fewer tokens used, faster execution, and better results. The <a href="https://github.com/cloudflare/mcp"><u>Cloudflare API MCP server</u></a> demonstrates this at scale. We expose only two tools (<code>search()</code> and <code>execute()</code>), which consume ~1,000 tokens, vs. ~1.17 million tokens for the naive tool-per-endpoint equivalent. This is a 99.9% reduction.</p>
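<p>The arithmetic behind that number checks out:</p>

```typescript
// Sanity-checking the quoted savings: two generic tools vs. one tool per endpoint.
const codemodeTokens = 1_000;    // search() + execute() tool definitions
const naiveTokens = 1_170_000;   // ~tool-per-endpoint definitions
const reduction = 1 - codemodeTokens / naiveTokens;
console.log(`${(reduction * 100).toFixed(1)}% reduction`); // → 99.9% reduction
```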
    <div>
      <h3>The missing primitive: safe sandboxes</h3>
      <a href="#the-missing-primitive-safe-sandboxes">
        
      </a>
    </div>
    <p>Once you accept that models should write code on behalf of users, the question becomes: where does that code run? Not eventually, not after a product team turns it into a roadmap item. Right now, for this user, against this system, with tightly defined permissions.</p><p><a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a> are that sandbox. A fresh V8 isolate spun up at runtime, in milliseconds, with a few megabytes of memory. That's roughly 100x faster and up to 100x more memory-efficient than a container. You can start a new one for every single request, run a snippet of code, and throw it away.</p><p>The critical design choice is the capability model. Instead of starting with a general-purpose machine and trying to constrain it, Dynamic Workers begin with almost no ambient authority (<code>globalOutbound: null</code>, no network access) and the developer grants capabilities explicitly, resource by resource, through bindings. We go from asking "how do we stop this thing from doing too much?" to "what exactly do we want this thing to be able to do?"</p><p>This is the right question for agent infrastructure.</p>
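<p>In code, the deny-by-default stance is visible in the worker definition itself. A hedged sketch, with field names modeled on the Worker Loader shape described in the Dynamic Workers post; <code>llmGeneratedCode</code> and <code>workspaceBinding</code> are hypothetical placeholders:</p>

```typescript
// Deny by default: no network, no ambient authority. Every capability the
// sandboxed code gets is an explicit entry in env.
const llmGeneratedCode =
  `export default { fetch() { return new Response("ok"); } }`;
const workspaceBinding = { name: "WORKSPACE" }; // stand-in for a real binding

const sandboxedWorker = {
  compatibilityDate: "2026-04-01",
  mainModule: "main.js",
  modules: { "main.js": llmGeneratedCode },
  globalOutbound: null,                  // no network access at all
  env: { WORKSPACE: workspaceBinding },  // the one capability we grant
};
```

<p>Anything not listed in <code>env</code> simply does not exist from the sandboxed code's point of view.</p>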
    <div>
      <h3>The execution ladder</h3>
      <a href="#the-execution-ladder">
        
      </a>
    </div>
    <p>This capability model leads naturally to a spectrum of compute environments, an <b>execution ladder</b> that the agent escalates through as needed:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yokfTVcg8frH4snf7c4sp/2306d721650b4956b28e2198f7cf915d/BLOG-3200_2.png" />
          </figure><p><b>Tier 0</b> is the Workspace, a durable virtual filesystem backed by SQLite and R2. Read, write, edit, search, grep, diff. Powered by <a href="https://www.npmjs.com/package/@cloudflare/shell"><code><u>@cloudflare/shell</u></code></a>.</p><p><b>Tier 1</b> is a Dynamic Worker: LLM-generated JavaScript running in a sandboxed isolate with no network access. Powered by <a href="https://www.npmjs.com/package/@cloudflare/codemode"><code><u>@cloudflare/codemode</u></code></a>.</p><p><b>Tier 2</b> adds npm. <a href="https://github.com/cloudflare/agents/tree/main/packages/worker-bundler"><code><u>@cloudflare/worker-bundler</u></code></a> fetches packages from the registry, bundles them with esbuild, and loads the result into the Dynamic Worker. The agent writes <code>import { z } from "zod"</code> and it just works.</p><p><b>Tier 3</b> is a headless browser via <a href="https://developers.cloudflare.com/browser-rendering/"><u>Cloudflare Browser Run</u></a>. Navigate, click, extract, screenshot. Useful when the service doesn't yet support agents via MCP or APIs.</p><p><b>Tier 4</b> is a <a href="https://developers.cloudflare.com/sandbox/"><u>Cloudflare Sandbox</u></a> configured with your toolchains, repos, and dependencies: <code>git clone, npm test, cargo build</code>, synced bidirectionally with the Workspace.</p><p>The key design principle: <b>the agent should be useful at Tier 0 alone, and each tier is purely additive.</b> The user can add capabilities as they go.</p>
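<p>One way to read the ladder: run at the lowest tier that can do the job, and escalate only when the task demands it. A toy, entirely hypothetical selection rule (the tier numbers mirror the ladder above; the logic is an illustration, not the SDK's):</p>

```typescript
// Illustrative only: "lowest sufficient tier" as a function.
type Needs = { runCode?: boolean; npmDeps?: boolean; browser?: boolean; fullOS?: boolean };

function lowestTier(needs: Needs): number {
  if (needs.fullOS) return 4;   // Sandbox: git, compilers, test runners
  if (needs.browser) return 3;  // headless browser via Browser Run
  if (needs.npmDeps) return 2;  // Dynamic Worker + bundled npm packages
  if (needs.runCode) return 1;  // plain sandboxed Dynamic Worker
  return 0;                     // Workspace file operations only
}
```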
    <div>
      <h3>Building blocks, not a framework</h3>
      <a href="#building-blocks-not-a-framework">
        
      </a>
    </div>
    <p>All of these primitives are available as standalone packages. <a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a>, <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><code><u>@cloudflare/codemode</u></code></a>, <a href="https://github.com/cloudflare/agents/tree/main/packages/worker-bundler"><code><u>@cloudflare/worker-bundler</u></code></a>, and <a href="https://www.npmjs.com/package/@cloudflare/shell"><code><u>@cloudflare/shell</u></code></a> (a durable filesystem with tools) are all usable directly with the Agent base class. You can combine them to give any agent a workspace, code execution, and runtime package resolution without adopting an opinionated framework.</p>
    <div>
      <h2>The platform</h2>
      <a href="#the-platform">
        
      </a>
    </div>
    <p>Here's the complete stack for building agents on Cloudflare:</p><table><tr><th><p><b>Capability</b></p></th><th><p><b>What it does</b></p></th><th><p><b>Powered by</b></p></th></tr><tr><td><p>Per-agent isolation</p></td><td><p>Every agent is its own world</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> (DOs)</p></td></tr><tr><td><p>Zero cost when idle</p></td><td><p>$0 until the agent wakes up</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/best-practices/websockets/#websocket-hibernation-api"><u>DO Hibernation</u></a></p></td></tr><tr><td><p>Persistent state</p></td><td><p>Queryable, transactional storage</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/"><u>DO SQLite</u></a></p></td></tr><tr><td><p>Durable filesystem</p></td><td><p>Files that survive restarts</p></td><td><p>Workspace (SQLite + <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>)</p></td></tr><tr><td><p>Sandboxed code execution</p></td><td><p>Run LLM-generated code safely</p></td><td><p><a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a> + <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><code><u>@cloudflare/codemode</u></code></a></p></td></tr><tr><td><p>Runtime dependencies</p></td><td><p><code>import * from react</code> just works</p></td><td><p><a href="https://github.com/cloudflare/agents/tree/main/packages/worker-bundler"><code><u>@cloudflare/worker-bundler</u></code></a></p></td></tr><tr><td><p>Web automation</p></td><td><p>Browse, navigate, fill forms</p></td><td><p><a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Run</u></a></p></td></tr><tr><td><p>Full OS access</p></td><td><p>git, compilers, test runners</p></td><td><p><a href="https://developers.cloudflare.com/sandbox/"><u>Sandboxes</u></a></p></td></tr><tr><td><p>Scheduled 
execution</p></td><td><p>Proactive, not just reactive</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>DO Alarms + Fibers</u></a></p></td></tr><tr><td><p>Real-time streaming</p></td><td><p>Token-by-token to any client</p></td><td><p>WebSockets</p></td></tr><tr><td><p>External tools</p></td><td><p>Connect to any tool server</p></td><td><p>MCP</p></td></tr><tr><td><p>Agent coordination</p></td><td><p>Typed RPC between agents</p></td><td><p>Sub-agents (<a href="https://developers.cloudflare.com/dynamic-workers/usage/durable-object-facets/"><u>Facets</u></a>)</p></td></tr><tr><td><p>Model access</p></td><td><p>Connect to an LLM to power the agent</p></td><td><p><a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> + <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> (or Bring Your Own Model)</p></td></tr></table><p>Each of these is a building block. Together, they form something new: a platform where anyone can build, deploy, and run AI agents as capable as the ones running on your local machine today, but <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless</u></a>, durable, and safe by construction. </p>
    <div>
      <h2>The Think base class</h2>
      <a href="#the-think-base-class">
        
      </a>
    </div>
    <p>Now that you've seen the primitives, here's what happens when you wire them all together.</p><p><code>Think</code> is an opinionated harness that handles the full chat lifecycle: agentic loop, message persistence, streaming, tool execution, stream resumption, and extensions. You focus on what makes your agent unique.</p><p>The minimal subclass looks like this:</p>
            <pre><code>import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";

export class MyAgent extends Think&lt;Env&gt; {
  getModel() {
    return createWorkersAI({ binding: this.env.AI })(
      "@cf/moonshotai/kimi-k2.5"
    );
  }
}
</code></pre>
            <p>That’s effectively all you need to have a working chat agent with streaming, persistence, abort/cancel, error handling, resumable streams, and a built-in workspace filesystem. Deploy with <code>npx wrangler deploy</code>.</p><p>Think makes decisions for you. When you need more control, you can override the ones you care about:</p><table><tr><td><p><b>Override</b></p></td><td><p><b>Purpose</b></p></td></tr><tr><td><p><code>getModel()</code></p></td><td><p>Return the <code>LanguageModel</code> to use</p></td></tr><tr><td><p><code>getSystemPrompt()</code></p></td><td><p>System prompt</p></td></tr><tr><td><p><code>getTools()</code></p></td><td><p>AI SDK compatible <code>ToolSet</code> for the agentic loop</p></td></tr><tr><td><p><code>maxSteps</code></p></td><td><p>Max tool-call rounds per turn</p></td></tr><tr><td><p><code>configureSession()</code></p></td><td><p>Context blocks, compaction, search, skills</p></td></tr></table><p>Under the hood, Think runs the complete agentic loop on every turn: it assembles the context (base instructions + tool descriptions + skills + memory + conversation history), calls <code>streamText</code>, executes tool calls (with output truncation to prevent context blowup), appends results, loops until the model is done or the step limit is reached. All messages are persisted after each turn.</p>
    <div>
      <h3>Lifecycle hooks</h3>
      <a href="#lifecycle-hooks">
        
      </a>
    </div>
    <p>Think gives you hooks at every stage of the chat turn, without requiring you to own the whole pipeline:</p>
            <pre><code>beforeTurn()
  → streamText()
    → beforeToolCall()
    → afterToolCall()
  → onStepFinish()
→ onChatResponse()
</code></pre>
            <p>With these hooks you can switch to a lower-cost model for follow-up turns, limit the tools available, pass in client-side context on each turn, log every tool call to analytics, or automatically trigger one more follow-up turn after the model completes, all without replacing <code>onChatMessage</code>.</p>
    <div>
      <h3>Persistent memory and long conversations</h3>
      <a href="#persistent-memory-and-long-conversations">
        
      </a>
    </div>
    <p>Think builds on the <a href="https://developers.cloudflare.com/agents/api-reference/sessions/?cf_target_id=E7A3D837FA7DC4C7DDA822B3DE0F831B"><u>Session API</u></a> as its storage layer, giving you tree-structured messages with branching built in.</p><p>On top of that, it adds persistent memory through <b>context blocks</b>. These are structured sections of the system prompt that the model can read and update over time, and they persist across hibernation. The model sees "MEMORY (Important facts, use set_context to update) [42%, 462/1100 tokens]" and can proactively remember things.</p>
            <pre><code>configureSession(session: Session) {
  return session
    .withContext("soul", {
      provider: { get: async () =&gt; "You are a helpful coding assistant." }
    })
    .withContext("memory", {
      description: "Important facts learned during conversation.",
      maxTokens: 2000
    })
    .withCachedPrompt();
}
</code></pre>
            <p>Sessions are flexible. You can run multiple conversations per agent and fork them to try a different direction without losing the original.</p><p>As context grows, Think handles limits with non-destructive compaction. Older messages are summarized instead of removed, while the full history remains stored in SQLite.</p><p>Search is built in as well. Using FTS5, you can query conversation history within a session or across all sessions. The agent is also able to search its own past using the <code>search_context</code> tool.</p>
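<p>Non-destructive compaction can be illustrated with a small framework-free sketch (plain TypeScript, not Think's actual implementation): the view sent to the model swaps old messages for a summary, while stored history is never touched.</p>

```typescript
// Non-destructive compaction, illustrated: storage keeps everything;
// only the context view handed to the model is compacted.
type Msg = { id: number; text: string };

function contextView(
  history: Msg[],
  keepRecent: number,
  summarize: (old: Msg[]) => string
): string[] {
  if (history.length <= keepRecent) return history.map(m => m.text);
  const old = history.slice(0, history.length - keepRecent);
  const recent = history.slice(history.length - keepRecent);
  return [`[summary of ${old.length} messages] ${summarize(old)}`, ...recent.map(m => m.text)];
}

const history: Msg[] = [1, 2, 3, 4, 5].map(id => ({ id, text: `message ${id}` }));
const view = contextView(history, 2, old => old.map(m => m.text).join("; "));
// history still holds all 5 messages; the model sees 3 entries.
```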
    <div>
      <h3>The full execution ladder, wired in</h3>
      <a href="#the-full-execution-ladder-wired-in">
        
      </a>
    </div>
    <p>Think integrates the entire execution ladder into a single <code>getTools()</code> return:</p>
            <pre><code>import { Think } from "@cloudflare/think";
import { createWorkspaceTools } from "@cloudflare/think/tools/workspace";
import { createExecuteTool } from "@cloudflare/think/tools/execute";
import { createBrowserTools } from "@cloudflare/think/tools/browser";
import { createSandboxTools } from "@cloudflare/think/tools/sandbox";
import { createExtensionTools } from "@cloudflare/think/tools/extensions";

export class MyAgent extends Think&lt;Env&gt; {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }

  getTools() {
    return {
      execute: createExecuteTool({
        tools: createWorkspaceTools(this.workspace),
        loader: this.env.LOADER
      }),
      ...createBrowserTools(this.env.BROWSER),
      ...createSandboxTools(this.env.SANDBOX), // configured per-agent: toolchains, repos, snapshots
      ...createExtensionTools({ manager: this.extensionManager! }),
      ...this.extensionManager!.getTools()
    };
  }
}
</code></pre>
            
    <div>
      <h3>Self-authored extensions</h3>
      <a href="#self-authored-extensions">
        
      </a>
    </div>
    <p>Think takes code execution one step further. An agent can write its own extensions: TypeScript programs that run in Dynamic Workers, declaring permissions for network access and workspace operations.</p>
            <pre><code>{
  "name": "github",
  "description": "GitHub integration: PRs, issues, repos",
  "tools": ["create_pr", "list_issues", "review_pr"],
  "permissions": {
    "network": ["api.github.com"],
    "workspace": "read-write"
  }
}
</code></pre>
            <p>Think's <code>ExtensionManager</code> bundles the extension (optionally with npm deps via <code>@cloudflare/worker-bundler</code>), loads it into a Dynamic Worker, and registers the new tools. The extension persists in DO storage and survives hibernation. The next time the user asks about pull requests, the agent has a <code>github_create_pr</code> tool that didn't exist 30 seconds ago.</p><p>This is the kind of self-improvement loop that makes agents genuinely more useful over time. Not through fine-tuning or RLHF, but through code. The agent is able to write new capabilities for itself, all in sandboxed, auditable, and revocable TypeScript.</p>
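<p>The permissions in that manifest are what make this safe. As a hedged sketch, a declared network allowlist could gate the extension's outbound requests like this (the enforcement shown is illustrative, not ExtensionManager's actual code):</p>

```typescript
// Hypothetical enforcement of the manifest's network allowlist: a fetch to a
// host not declared in permissions.network is rejected before it leaves the sandbox.
const permissions = { network: ["api.github.com"], workspace: "read-write" };

function isAllowedHost(url: string, allowlist: string[]): boolean {
  const host = new URL(url).hostname;
  return allowlist.some(allowed => host === allowed || host.endsWith(`.${allowed}`));
}

isAllowedHost("https://api.github.com/repos", permissions.network); // true
isAllowedHost("https://evil.example.com/", permissions.network);    // false
```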
    <div>
      <h3>Sub-agent RPC</h3>
      <a href="#sub-agent-rpc">
        
      </a>
    </div>
    <p>Think also works as a sub-agent, called via <code>chat()</code> over RPC from a parent, with streaming events via callback:</p>
            <pre><code>const researcher = await this.subAgent(ResearchSession, "research");
const result = await researcher.chat(`Research this: ${task}`, streamRelay);
</code></pre>
            <p>Each child gets its own conversation tree, memory, tools, and model. The parent doesn't need to know the details.</p>
    <div>
      <h3>Getting started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Project Think is experimental. The API surface is stabilizing but will continue to evolve in the coming days and weeks. We're already using it internally to build our own background agent infrastructure, and we're sharing it early so you can build alongside us.</p>
            <pre><code>npm install @cloudflare/think agents ai @cloudflare/shell zod workers-ai-provider</code></pre>
            
            <pre><code>// src/server.ts
import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";
import { routeAgentRequest } from "agents";

export class MyAgent extends Think&lt;Env&gt; {
  getModel() {
    return createWorkersAI({ binding: this.env.AI })(
      "@cf/moonshotai/kimi-k2.5"
    );
  }
}

export default {
  async fetch(request: Request, env: Env) {
    return (
      (await routeAgentRequest(request, env)) ||
      new Response("Not found", { status: 404 })
    );
  }
} satisfies ExportedHandler&lt;Env&gt;;
</code></pre>
            
            <pre><code>// src/client.tsx
import { useAgent } from "agents/react";
import { useAgentChat } from "@cloudflare/ai-chat/react";

function Chat() {
  const agent = useAgent({ agent: "MyAgent" });
  const { messages, sendMessage, status } = useAgentChat({ agent });
  // Render your chat UI
}
</code></pre>
            <p>Think speaks the same WebSocket protocol as <code>@cloudflare/ai-chat</code>, so existing UI components work out of the box. If you've built on <a href="https://developers.cloudflare.com/agents/api-reference/chat-agents/"><code><u>AIChatAgent</u></code></a>, your client code doesn't change.</p>
    <div>
      <h2>The third wave</h2>
      <a href="#the-third-wave">
        
      </a>
    </div>
    <p>We see three waves of AI agents:</p><p><b>The first wave was chatbots.</b> They were stateless, reactive, and fragile. Every conversation started from scratch with no memory, no tools, and no ability to act. This made them useful for answering questions, but limited them to only answering questions.</p><p><b>The second wave was coding agents.</b> These are stateful, tool-using, and far more capable tools like Pi, Claude Code, OpenClaw, and Codex. These agents can read codebases, write code, execute it, and iterate. They proved that an LLM with the right tools is a general-purpose machine, but they run on your laptop, for one user, with no durability guarantees.</p><p><b>Now we are entering the third wave: agents as infrastructure.</b> Durable, distributed, structurally safe, and serverless. These are agents that run on the Internet, survive failures, cost nothing when idle, and enforce security through architecture rather than behavior. Agents that any developer can build and deploy for any number of users.</p><p>This is the direction we’re betting on.</p><p>The Agents SDK is already powering thousands of production agents. With Project Think and the primitives it introduces, we're adding the missing pieces to make those agents dramatically more capable: persistent workspaces, sandboxed code execution, durable long-running tasks, structural security, sub-agent coordination, and self-authored extensions.</p><p>It's available today in preview. We're building alongside you, and we'd genuinely love to see what you (and your coding agent) create with it.</p><hr /><p><sup><i>Think is part of the Cloudflare Agents SDK, available as @cloudflare/think. The features described in this post are in preview. APIs may change as we incorporate feedback. 
Check the </i></sup><a href="https://github.com/cloudflare/agents/blob/main/docs/think/index.md"><sup><i><u>documentation</u></i></sup></a><sup><i> and </i></sup><a href="https://github.com/cloudflare/agents/tree/main/examples/assistant"><sup><i><u>example</u></i></sup></a><sup><i> to get started.</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/161Wz7Tf8Cpzn2u2cBCH3V/37633c016734590005edd280732e89b9/BLOG-3200_3.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">3r2ykMs0LTSPwVHmVWldCy</guid>
            <dc:creator>Sunil Pai</dc:creator>
            <dc:creator>Kate Reznykova</dc:creator>
        </item>
        <item>
            <title><![CDATA[Browser Run: give your agents a browser]]></title>
            <link>https://blog.cloudflare.com/browser-run-for-ai-agents/</link>
            <pubDate>Wed, 15 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Browser Rendering is now Browser Run, with Live View, Human in the Loop, CDP access, session recordings, and 4x higher concurrency limits for AI agents. ]]></description>
            <content:encoded><![CDATA[ <p>AI agents need to interact with the web. To do that, they need a browser. They need to navigate sites, read pages, fill forms, extract data, and take screenshots. They need to observe whether things are working as expected, with a way for their humans to step in if needed. And they need to do all of this at scale.</p><p>Today, we’re renaming Browser Rendering to <b>Browser Run</b>, and shipping key features that make it <i>the</i> browser for <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agents</u></a>. The name Browser Rendering never fully captured what the product does. Browser Run lets you run full browser sessions on Cloudflare's global network, drive them with code or AI, record and replay sessions, crawl pages for content, debug in real time, and let humans intervene when your agent needs help. </p><p>Here’s what’s new:</p><ul><li><p><b>Live View</b>: see what your agent sees and is doing, in real time. Know instantly if things are working, and when they’re not, see exactly why.</p></li><li><p><b>Human in the Loop</b>: when your agent hits a snag like a login page or unexpected edge case, it can hand off to a human instead of failing. The human steps in, resolves, then hands back control.</p></li><li><p><b>Chrome DevTools Protocol (CDP) Endpoint</b>: the Chrome DevTools Protocol is how agents control browsers. Browser Run now exposes it directly, so agents get maximum control over the browser and existing CDP scripts work on Cloudflare.</p></li><li><p><b>MCP Client Support:</b> AI coding agents like Claude Desktop, Cursor, and OpenCode can now use Browser Run as their remote browser.</p></li><li><p><b>WebMCP Support</b>: agents will outnumber humans using the web. WebMCP allows websites to declare what actions are available for agents to discover and call, making navigation more reliable.</p></li><li><p><b>Session Recordings</b>: capture every browser session for debugging purposes. 
When something goes wrong, you have the full recording with DOM changes, user interactions, and page navigation.</p></li><li><p><b>Higher limits</b>: run more tasks at once with 120 concurrent browsers, up from 30. </p></li></ul><div>
  
</div>
<p><sup><i>An AI agent searching Amazon for an orange lava lamp, comparing options, and handing off to a human when sign-in is required to complete the purchase</i></sup></p>
    <div>
      <h2>Everything an agent needs</h2>
      <a href="#everything-an-agent-needs">
        
      </a>
    </div>
    <p>Let’s think about what agents need when browsing the web and how each feature fits in:</p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>What an agent needs</span></th>
    <th><span>Browser Run (formerly Browser Rendering)</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>1) Browsers on-demand</span></td>
    <td><span>Chrome browser on Cloudflare’s global network</span></td>
  </tr>
  <tr>
    <td><span>2) A way to control the browser</span></td>
    <td><span>Take actions like navigate, click, fill forms, screenshot, and more with Puppeteer, Playwright, </span><span>CDP (new)</span><span>, </span><span>MCP Client Support (new)</span><span> and </span><span>WebMCP (new)</span></td>
  </tr>
  <tr>
    <td><span>3) Observability</span></td>
    <td><span>Live View (new)</span><span>, </span><span>Session Recordings (new)</span><span>, and </span><span>Dashboard redesign (new)</span></td>
  </tr>
  <tr>
    <td><span>4) Human intervention</span></td>
    <td><span>Human in the Loop (new)</span></td>
  </tr>
  <tr>
    <td><span>5) Scale</span></td>
    <td><span>10 requests/second for Quick Actions, </span><span>120 concurrent browsers (4x increase)</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h2>1) Open a browser</h2>
      <a href="#1-open-a-browser">
        
      </a>
    </div>
    <p>First, an agent needs a browser. With Browser Run, agents can spin up a headless Chrome instance on Cloudflare’s global network, on demand. No infrastructure to manage, no Chrome versions to maintain. Browser sessions open near users for low latency, and scale up and down as needed. Pair Browser Run with the <a href="https://developers.cloudflare.com/agents/api-reference/browse-the-web/"><u>Agents SDK</u></a> to build long-running agents that browse the web, remember everything, and act on their own. </p>
    <div>
      <h2>2) Take actions</h2>
      <a href="#2-take-actions">
        
      </a>
    </div>
    <p>Once your agent has a browser, it needs ways to control it. Browser Run supports multiple approaches: new low-level protocol access with the Chrome DevTools Protocol (CDP) and WebMCP, in addition to existing higher-level automation using <a href="https://developers.cloudflare.com/browser-rendering/puppeteer/"><u>Puppeteer</u></a> and <a href="https://developers.cloudflare.com/browser-rendering/playwright/"><u>Playwright</u></a>, and <a href="https://developers.cloudflare.com/browser-rendering/rest-api/"><u>Quick Actions</u></a> for simple tasks. Let’s look at the details.</p>
    <div>
      <h3>Chrome DevTools Protocol (CDP) endpoint</h3>
      <a href="#chrome-devtools-protocol-cdp-endpoint">
        
      </a>
    </div>
    <p>The <a href="https://chromedevtools.github.io/devtools-protocol/"><u>Chrome DevTools Protocol (CDP)</u></a> is the low-level protocol that powers browser automation. Exposing CDP directly means the growing ecosystem of agent tools and existing CDP automation scripts can use Browser Run. When you open Chrome DevTools and inspect a page, CDP is what's running underneath. Puppeteer, Playwright, and most agent frameworks are built on top of it.</p><p>Every way that you have been using Browser Run has actually been through CDP already. What’s new is that we're now <a href="https://developers.cloudflare.com/browser-rendering/cdp/"><u>exposing CDP directly</u></a> as an endpoint. This matters for agents because CDP gives agents the most control possible over the browser. Agent frameworks already speak CDP natively, and can now connect to Browser Run directly. CDP also unlocks browser actions that aren't available through Puppeteer or Playwright, like JavaScript debugging. And because you're working with raw CDP messages instead of going through higher-level libraries, you can pass messages directly to models for more token-efficient browser control.</p><p>If you already have CDP automation scripts running against self-hosted Chrome, they work on Browser Run with a one-line config change. Point your WebSocket URL at Browser Run and stop managing your own browser infrastructure.</p>
            <pre><code>// Before: connecting to self-hosted Chrome
const browser = await puppeteer.connect({
  browserWSEndpoint: 'ws://localhost:9222/devtools/browser'
});

// After: connecting to Browser Run
const browser = await puppeteer.connect({
  browserWSEndpoint: 'wss://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/browser-rendering/devtools/browser',
  headers: { 'Authorization': 'Bearer &lt;API_TOKEN&gt;' }
});
</code></pre>
            <p>The CDP endpoint also makes Browser Run more accessible. You can now connect from any language, any environment, without needing to write a <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Worker</u></a>. (If you're already using Workers, nothing changes.)</p>
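<p>Connecting from any language works because CDP itself is just JSON over a WebSocket: each command is an object with an <code>id</code>, a <code>method</code>, and <code>params</code>. A minimal sketch of building such messages (the methods shown are standard CDP; the helper function is our own illustration):</p>

```typescript
// Raw CDP is JSON-RPC-style messages over a WebSocket. Any language with a
// WebSocket client can drive the browser this way.
let nextId = 0;
function cdpMessage(method: string, params: Record<string, unknown> = {}): string {
  return JSON.stringify({ id: ++nextId, method, params });
}

// Standard CDP methods: create a page target, then navigate it.
const createTarget = cdpMessage("Target.createTarget", { url: "about:blank" });
const navigate = cdpMessage("Page.navigate", { url: "https://example.com" });
```

<p>A client sends these frames over the WebSocket endpoint shown above and matches responses back to commands by <code>id</code>.</p>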
    <div>
      <h4>Using Browser Run with MCP Clients</h4>
      <a href="#using-browser-run-with-mcp-clients">
        
      </a>
    </div>
    <p>Now that Browser Run exposes the Chrome DevTools Protocol (CDP), MCP clients including Claude Desktop, Cursor, Codex, and OpenCode can use Browser Run as their remote browser. The <a href="https://github.com/ChromeDevTools/chrome-devtools-mcp"><u>chrome-devtools-mcp package</u></a> from the Chrome DevTools team is an MCP server that gives your AI coding assistant access to the full power of Chrome DevTools for reliable automation, in-depth debugging, and performance analysis.</p><p>Here’s an example of how to configure Browser Run for Claude Desktop:</p>
            <pre><code>{
  "mcpServers": {
    "browser-rendering": {
      "command": "npx",
      "args": [
        "-y",
        "chrome-devtools-mcp@latest",
        "--wsEndpoint=wss://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/browser-rendering/devtools/browser?keep_alive=600000",
        "--wsHeaders={\"Authorization\":\"Bearer &lt;API_TOKEN&gt;\"}"
      ]
    }
  }
}
</code></pre>
            <p>For other MCP clients, see <a href="https://developers.cloudflare.com/browser-rendering/cdp/mcp-clients/"><u>documentation for using Browser Run with MCP clients</u></a>.</p>
    <div>
      <h3>WebMCP support</h3>
      <a href="#webmcp-support">
        
      </a>
    </div>
    <p>The Internet was built for humans, so navigating as an AI agent today is unreliable. We’re betting on a future where more agents use the web than humans. In that world, sites need to be agent-friendly.</p><p>That’s why we’re launching support for <a href="https://developer.chrome.com/blog/webmcp-epp"><u>WebMCP</u></a>, a new browser API from the Google Chrome team that landed in Chromium 146+. WebMCP lets websites expose tools directly to AI agents, declaring what actions are available for agents to discover and call on each page. This makes navigation more reliable: instead of having to figure out how to use a site, agents can discover and call the tools the site exposes.</p><p>Two APIs make this work:</p><ul><li><p><code>navigator.modelContext</code> allows websites to register their tools</p></li><li><p><code>navigator.modelContextTesting</code> allows agents to discover and execute those tools</p></li></ul><p>Today, an agent visiting a travel booking site has to figure out the UI by looking at it. With WebMCP, the site declares “here’s a search_flights tool that takes an origin, destination, and date.” The agent calls it directly, without slow screenshot-analyze-click loops, and the flow keeps working regardless of changes to the UI.</p><p>Tools are discovered on the page rather than preloaded. This matters for the long tail of the web, where preloading an MCP server for every possible site is not feasible and would bloat the context window. </p><div>
  
</div><p><sup><i>Using WebMCP to book a hotel through the Chrome DevTools console, discovering available tools with listTools()</i></sup></p><p>We have an experimental pool with browser instances running Chrome beta so you can test emerging browser features before they reach stable Chrome. We also just shipped <a href="https://developers.cloudflare.com/browser-rendering/reference/wrangler-commands/"><u>Wrangler browser commands</u></a> that let you create, manage, and view browser sessions directly from your terminal. To <a href="https://developers.cloudflare.com/browser-run/features/webmcp/"><u>access WebMCP-enabled browsers</u></a>, use the following Wrangler command to create a session in the experimental pool:</p>
            <pre><code>npm i -g wrangler@latest
wrangler browser create --lab --keepAlive 300  
</code></pre>
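            <p>As an illustrative sketch of the flow described above (not the WebMCP specification), a site-registered tool is essentially a named function with a JSON schema. The <code>search_flights</code> tool, its schema fields, and the mock registry below are assumptions for demonstration; in a real page, registration would go through <code>navigator.modelContext</code>.</p>
            <pre><code>// Hypothetical shape of a WebMCP-style tool (illustrative, not the spec).
type Tool = {
  name: string;
  description: string;
  inputSchema: object;
  execute: (args: Record&lt;string, string&gt;) =&gt; Promise&lt;string&gt;;
};

// Mock registry standing in for navigator.modelContext /
// navigator.modelContextTesting so the flow is runnable anywhere.
const registry = new Map&lt;string, Tool&gt;();
const registerTool = (t: Tool) =&gt; registry.set(t.name, t);
const listTools = () =&gt; [...registry.keys()];

registerTool({
  name: "search_flights",
  description: "Search flights by origin, destination, and date",
  inputSchema: {
    type: "object",
    properties: {
      origin: { type: "string" },
      destination: { type: "string" },
      date: { type: "string" },
    },
    required: ["origin", "destination", "date"],
  },
  // A real site would query its own backend here.
  execute: async (args) =&gt; JSON.stringify({ query: args, results: [] }),
});

// An agent discovers the tool and calls it directly, with no
// screenshot-analyze-click loop required.
const found = listTools(); // ["search_flights"]
</code></pre>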
            
    <div>
      <h3>Existing ways to use Browser Run</h3>
      <a href="#existing-ways-to-use-browser-run">
        
      </a>
    </div>
    <p>While CDP and WebMCP are new, you could already use <a href="https://developers.cloudflare.com/browser-rendering/puppeteer/"><u>Puppeteer</u></a>, <a href="https://developers.cloudflare.com/browser-rendering/playwright/"><u>Playwright</u></a>, or <a href="https://developers.cloudflare.com/browser-rendering/stagehand/"><u>Stagehand</u></a> for full browser automation through Browser Run. And for simple tasks like <a href="https://developers.cloudflare.com/browser-rendering/rest-api/screenshot-endpoint/"><u>capturing screenshots</u></a>, <a href="https://developers.cloudflare.com/browser-rendering/rest-api/pdf-endpoint/"><u>generating PDFs</u></a>, and <a href="https://developers.cloudflare.com/browser-rendering/rest-api/markdown-endpoint/"><u>extracting markdown</u></a>, there are the <a href="https://developers.cloudflare.com/browser-rendering/rest-api/"><u>Quick Action endpoints</u></a>. </p>
    <div>
      <h4>/crawl endpoint — crawl web content</h4>
      <a href="#crawl-endpoint-crawl-web-content">
        
      </a>
    </div>
    <p>We also recently shipped a <a href="https://developers.cloudflare.com/browser-rendering/rest-api/crawl-endpoint/"><u>/crawl endpoint</u></a> that lets you crawl entire sites with a single API call. Give it a starting URL and pages are automatically discovered and scraped, then returned in your preferred format (HTML, Markdown, or structured JSON), with additional parameters to control crawl depth and scope, skip pages that haven’t changed, and specify certain paths to include or exclude. </p><p>We intentionally built /crawl to be a <a href="https://developers.cloudflare.com/browser-run/faq/#will-browser-run-be-detected-by-bot-management"><u>well-behaved crawler</u></a>. That means it respects site owners’ preferences out of the box: it is a <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/"><u>signed agent</u></a> with a distinct bot ID cryptographically signed using <a href="https://developers.cloudflare.com/bots/reference/bot-verification/web-bot-auth/"><u>Web Bot Auth</u></a>, uses a non-customizable <a href="https://developers.cloudflare.com/browser-rendering/rest-api/crawl-endpoint/#user-agent"><u>User-Agent</u></a>, and follows robots.txt and <a href="https://www.cloudflare.com/ai-crawl-control/"><u>AI Crawl Control</u></a>. It does not bypass Cloudflare’s bot protections or CAPTCHAs. Site owners choose whether their content is accessible, and /crawl respects that choice. </p>
            <pre><code># Initiate a crawl
curl -X POST 'https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering/crawl' \
  -H 'Authorization: Bearer &lt;apiToken&gt;' \
  -H 'Content-Type: application/json' \
  -d '{
    "url": "https://blog.cloudflare.com/"
  }'
</code></pre>
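            <p>The same call can be made from TypeScript (for example, inside a Worker). Building the request as a pure function keeps it easy to inspect; the endpoint and body mirror the curl example above, and the account ID and token are placeholders.</p>
            <pre><code>// Build the /crawl request without sending it; the caller passes the
// result to fetch(). Endpoint and payload match the curl example above.
function buildCrawlRequest(accountId: string, apiToken: string, startUrl: string) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/browser-rendering/crawl`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: startUrl }),
    },
  };
}

// Usage:
//   const { url, init } = buildCrawlRequest(accountId, token, "https://blog.cloudflare.com/");
//   const res = await fetch(url, init);
</code></pre>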
            
    <div>
      <h2>3) Observe</h2>
      <a href="#3-observe">
        
      </a>
    </div>
    <p>Things don’t always go right on the first try. We kept hearing from customers that when their automations failed, they had no idea why. That’s why we’ve added multiple ways to observe what’s happening, so you can see exactly what your agent sees, both live and after the fact. </p>
    <div>
      <h3>Live View</h3>
      <a href="#live-view">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/browser-run/features/live-view/"><u>Live View</u></a> lets you watch your agent’s browser session in real time. Whether you’re debugging an agent or running a long automation script, you see exactly what’s happening as it happens. This includes the page itself, as well as the DOM, console, and network requests. When something goes wrong — the expected button isn't there, the page needs authentication, or a CAPTCHA appears — you can catch it immediately.</p><p>There are two ways to access Live View. From code, obtain the <code>session_id</code> of the browser you want to inspect and open the <code>devtoolsFrontendURL</code> from the response in Chrome. Or from the Cloudflare dashboard, open the new Live Sessions tab in the Browser Run section and click into any active session.</p><div>
  
</div><p><sup><i>Live View of an AI agent booking a hotel, showing real-time browser activity</i></sup></p>
    <div>
      <h3>Session Recordings</h3>
      <a href="#session-recordings">
        
      </a>
    </div>
    <p>Live View is great when you’re available, but you can’t watch every session. <a href="https://developers.cloudflare.com/browser-run/features/session-recording/"><u>Session Recordings</u></a> captures DOM changes, mouse and keyboard events, and page navigation as structured JSON so you can replay any session after it ends. </p><p>Enable Session Recordings by passing <code>recording:true</code> when launching a browser. After the session closes, you can access the recording in the Cloudflare dashboard from the Runs tab or retrieve recordings via API and replay them with the <a href="https://github.com/rrweb-io/rrweb/tree/master/packages/rrweb-player"><u>rrweb-player</u></a>. Next, we’re adding the ability to inspect DOM state and console output at any point during the recording.   </p><div>
  
</div><p><sup><i>Session recording replay of a browser automation browsing the Sentry Shop and adding a bomber jacket to the cart </i></sup></p>
    <div>
      <h3>Dashboard Redesign</h3>
      <a href="#dashboard-redesign">
        
      </a>
    </div>
    <p>Previously, the <a href="https://dash.cloudflare.com/?to=/:account/workers/browser-run"><u>Browser Run dashboard</u></a> only showed logs from browser sessions. Requests for screenshots, PDFs, markdown, and crawls were not visible. The redesigned dashboard changes that. The new Runs tab shows every request. You can filter by endpoint and view details including target URLs, status, and duration.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7eExar2kjoc2QSq6skzTf6/ba4ad79fa01eb060f8b14cd5afa342e5/BLOG-3221_2.png" />
          </figure><p><sup><i>The Browser Run dashboard Runs tab showing browser sessions and quick actions like PDF, Screenshot, and Crawl in a single view, with a crawl job expanded to show its progress</i></sup></p>
    <div>
      <h2>4) Intervene</h2>
      <a href="#4-intervene">
        
      </a>
    </div>
    <p>Agents are good, but they’re not perfect. Sometimes they need their human to step in. Browser Run supports Human in the Loop workflows where a human can take control of a live browser session, handle what the automation cannot, then let the session continue. </p>
    <div>
      <h3>Human in the Loop</h3>
      <a href="#human-in-the-loop">
        
      </a>
    </div>
    <p>When automation hits a wall, you don't have to restart. With <a href="https://developers.cloudflare.com/browser-run/features/human-in-the-loop/"><u>Human in the Loop</u></a>, you can step in and interact with the page directly to click, type, navigate, enter credentials, or submit forms. This unlocks workflows that agents cannot handle.</p><p>Today, you can step in by opening the Live View URL for any active session. Next, we’re adding a handoff flow where the agent can signal that it needs help, notify a human to step in, then hand control back to the agent once the issue is resolved.</p><div>
  
</div>
<p></p><p><sup><i>An AI agent searching Amazon for an orange lava lamp, comparing options, and handing off to a human when sign-in is required to complete the purchase</i></sup></p>
    <div>
      <h2>5) Scale</h2>
      <a href="#5-scale">
        
      </a>
    </div>
    <p>Customers have asked us to raise limits so that they can do more, faster.</p>
    <div>
      <h3>Higher limits</h3>
      <a href="#higher-limits">
        
      </a>
    </div>
    <p>We've quadrupled the <a href="https://developers.cloudflare.com/browser-rendering/limits/"><u>default concurrent browser limit from 30 to 120</u></a>. Every session gives you instant access to a browser from a global pool of warm instances, so there's no cold start waiting for a browser to spin up. In March, we also <a href="https://developers.cloudflare.com/changelog/post/2026-03-04-br-rest-api-limit-increase/"><u>increased limits for Quick Actions</u></a> to 10 requests per second. If you need higher limits, they're available by request.</p>
    <div>
      <h2>What's next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <ul><li><p><b>Human in the Loop Handoff</b>: today you can intervene in a browser session through Live View. Soon, the agent will be able to signal when it needs help, so you can build in notifications to alert a human to step in.</p></li><li><p><b>Session Recordings Inspection</b>: you can already scrub through the timeline and replay any session. Soon, you’ll be able to inspect DOM state and console output as well.</p></li><li><p><b>Traces and Browser Logs</b>: access debugging information without instrumenting your code. Console logs, network requests, timing data. If something broke, you'll know where.</p></li><li><p><b>Screenshot, PDF, and markdown directly from Workers</b>: the same simple tasks available through the <a href="https://developers.cloudflare.com/browser-rendering/rest-api/"><u>REST API</u></a> are coming to <a href="https://developers.cloudflare.com/browser-rendering/workers-bindings/"><u>Workers Bindings</u></a>. <code>env.BROWSER.screenshot()</code> just works, with no API tokens needed.</p></li></ul>
    <div>
      <h2>Get started</h2>
      <a href="#get-started">
        
      </a>
    </div>
    <p>Browser Run is available today on both the Workers Free and Workers Paid plans. Everything we shipped today — Live View, Human in the Loop, Session Recordings, and higher concurrency limits — is ready to use. </p><p>If you were already using Browser Rendering, everything works the same, just with a new name and more features.  </p><p>Check out the <a href="https://developers.cloudflare.com/browser-rendering/"><u>documentation</u></a> to get started. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mQq5rxfDK3oU80JCRUL7P/7842cac72f0f4170cc697011230146ab/BLOG-3221_3.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Browser Rendering]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Browser Run]]></category>
            <guid isPermaLink="false">160lCssR1GA8lEUV718ev5</guid>
            <dc:creator>Kathy Liao</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rearchitecting the Workflows control plane for the agentic era]]></title>
            <link>https://blog.cloudflare.com/workflows-v2/</link>
            <pubDate>Wed, 15 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workflows, a durable execution engine for multi-step applications, now supports higher concurrency and creation rate limits through a rearchitected control plane, helping scale to meet the use cases for durable background agents.
 ]]></description>
            <content:encoded><![CDATA[ <p>When we originally built <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a>, our durable execution engine for multi-step applications, it was designed for a world in which workflows were triggered by human actions, like a user signing up or placing an order. For use cases like onboarding flows, workflows only had to support one instance per person — and people can only click so fast. </p><p>Over time, what we’ve actually seen is a marked shift in the workload and access pattern: fewer human-triggered workflows, and more agent-triggered workflows, created at machine speed. </p><p>As agents become persistent and autonomous infrastructure, operating on behalf of users for hours or days, they need a durable, asynchronous execution engine for the work they are doing. Workflows provides exactly that: every step is independently retryable, the workflow can pause for human-in-the-loop approval, and each instance survives failures without losing progress.  </p><p>Moreover, workflows themselves are being used to implement agent loops and serve as the durable harnesses that manage and keep agents alive. Our <a href="https://developers.cloudflare.com/changelog/post/2026-02-03-agents-workflows-integration/"><u>Agents SDK integration</u></a> accelerated this, making it easy for agents to spawn workflow instances and get real-time progress back. A single agent session can now kick off dozens of workflows, and many agents running concurrently means thousands of instances created in seconds. 
With <a href="https://blog.cloudflare.com/project-think"><u>Project Think</u></a> now available, we anticipate that velocity will only increase.</p><p>To help developers scale their agents and applications on Workflows, we are excited to announce that we now support:</p><ul><li><p>50,000 concurrent instances (number of workflow executions running in parallel), <a href="https://developers.cloudflare.com/changelog/post/2025-02-25-workflows-concurrency-increased/"><u>originally 4,500</u></a></p></li><li><p>300 instances/second created per account, previously 100</p></li><li><p>2 million queued instances (meaning instances that have been created or awoken and are waiting for a concurrency slot) per workflow, up from 1 million</p></li></ul><p>To support these increases, we redesigned the Workflows control plane based on usage data and first principles. In V1 of the control plane, a single Durable Object (DO) served as the central registry and coordinator for an entire account. For V2, we built two new components to horizontally scale the system and alleviate the bottlenecks that V1 introduced, before migrating all customers — with live traffic — seamlessly onto the new version.</p>
    <div>
      <h2>V1: initial architecture of Workflows</h2>
      <a href="#v1-initial-architecture-of-workflows">
        
      </a>
    </div>
    <p>As described in our <a href="https://blog.cloudflare.com/building-workflows-durable-execution-on-workers/#building-cloudflare-on-cloudflare"><u>public beta blog post</u></a>, we built <a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Workflows</u></a> entirely on our own developer platform. Fundamentally, a workflow is a series of durable steps, each independently retryable, that can execute tasks, wait for external events, or sleep until a predetermined time. </p>
            <pre><code>export class MyWorkflow extends WorkflowEntrypoint {

  async run(event, step) {
    const data = await step.do("fetch-data", async () =&gt; {
      return fetchFromAPI();
    });

    const approval = await step.waitForEvent("approval", {
      type: "approval",
      timeout: "24 hours",
    });

    await step.do("process-and-save", async () =&gt; {
      return store(transform(data));
    });
  }
}
</code></pre>
            <p>To trigger each instance, execute its logic, and store its metadata, we leverage SQLite-backed <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, which are a simple but powerful primitive for coordination and storage within a distributed system. </p><p>In the control plane, some Durable Objects — like the <i>Engine</i>, which executes the actual workflow instance, including its step, retry, and sleep logic — are spun up at a ratio of 1:1 per instance. On the other hand, the <i>Account</i> is an account-level Durable Object that manages all workflows and workflow instances for that account.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55bqaUjc30HJHe9spWYTo8/d8053955660553db8b64a484fb321ec7/BLOG-3116_2.png" />
          </figure><p>To learn more about the V1 control plane, refer to our <a href="https://blog.cloudflare.com/building-workflows-durable-execution-on-workers/"><u>Workflows announcement blog post</u></a>.</p><p>After we launched Workflows into beta, we were thrilled to see customers quickly scaling their use of the product, but we also realized that having a single Durable Object to store all that account-level information introduced a bottleneck. Many customers needed to create and execute hundreds or even thousands of Workflow instances per minute, which could quickly overwhelm the <i>Account</i> in our original architecture. The original rate limits — 4,500 concurrency slots and 100 instance creations per 10 seconds — were a result of this limitation. </p><p>On the V1 control plane, these limits were a hard cap. Any and all operations depending on <i>Account</i>, including create, update, and list, had to go through that single DO. Users with high concurrency workloads could have thousands of instances starting and ending at any given moment, building up to thousands of requests per second to <i>Account</i>. To solve for this, we rearchitected the workflow control plane such that it horizontally scales to higher concurrency and creation rate limits. </p>
    <div>
      <h2>V2: horizontal scale for higher throughput</h2>
      <a href="#v2-horizontal-scale-for-higher-throughput">
        
      </a>
    </div>
    <p>For the new version, we rethought every single operation from the ground up with the goal of optimizing for high-volume workflows. Ultimately, Workflows should scale to support whatever developers need – whether that is thousands of instances created per second or millions of instances running at a time. We also wanted to ensure that V2 allowed for flexible limits, which we can toggle and continue increasing, rather than the hard caps that V1 imposed. After many design iterations, we settled on the following pillars for our new architecture: </p><ul><li><p>The source of truth for the existence of a given instance should be its <i>Engine</i> and nothing else. </p><ul><li><p>In the V1 control plane architecture, we lacked a check before queuing the instance as to whether its <i>Engine</i> actually existed. This allowed for a bad state where an instance may have been queued without its corresponding <i>Engine</i> having spun up. </p></li><li><p>Instance lifecycle and liveness mechanisms must be horizontally scalable per-workflow and distributed throughout many regions.</p></li></ul></li><li><p>The new Account singleton should only store the minimum necessary metadata and serve a bounded number of concurrent requests, regardless of load.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1txhhObwwIcV8C2gr9Hjfe/df7ea739567c7e42471458357c16583d/unnamed.png" />
          </figure><p>There are two new, critical components in the V2 control plane which allowed us to improve the scalability of Workflows: <i>SousChef</i> and <i>Gatekeeper</i>. The first component, <i>SousChef</i>, is a “second in command” to the <i>Account</i>. Recall that previously, the <i>Account</i> managed the metadata and lifecycle for all of the instances across all of the workflows within a given account. <i>SousChef</i> was introduced to keep track of metadata and lifecycle on a <b>subset</b> of instances in a given workflow. Within an account, a distribution of <i>SousChefs</i> can then report back to <i>Account</i> in a more efficient and manageable way. (An added benefit of this design: not only did we already have per-account isolation, but we also inadvertently gained “per-workflow” isolation within the same account, since each <i>SousChef</i> only takes care of one specific workflow).</p><p>The second component, <i>Gatekeeper</i>, is a mechanism to distribute concurrency “slots” (derived from concurrency limits) across all <i>SousChefs</i> within the account. It acts as a leasing system. When an instance is created, it is randomly assigned to one of the <i>SousChefs</i> within that account. Then the <i>SousChef</i> makes a request to <i>Account</i> to trigger that instance. Either a slot is granted, or the instance is queued. Once the slot is granted, the <i>SousChef</i> triggers execution of the instance and assumes responsibility that the instance never gets stuck. </p><p><i>Gatekeeper</i> was needed to make sure that <i>Engines</i> never overloaded their <i>Account</i> (a pressing risk on V1) so every communication between <i>SousChefs</i> and their <i>Account</i> happens on a periodic cycle, once per second — each cycle will also batch all slot requests, ensuring that only one JSRPC call is made. 
This ensures the instance creation rate can never overload or influence the most important component, <i>Account</i> (as an aside: if the <i>SousChef </i>count is too high, we rate-limit calls or spread across different <i>SousChefs</i> throughout different time periods). Also, this periodic property allows us to preserve fairness on older instances and to ensure max-min fairness through the many <i>SousChefs</i>, allowing them all to progress. For example, if an instance wakes up, it should be prioritized for a slot over a newly created instance, but each <i>SousChef</i> ensures that its own instances do not get stuck.</p><p>This architecture is more distributed, and therefore, more scalable. Now, when an instance is created, the request path is:</p><ol><li><p>Check control plane version</p></li><li><p>Check if a cached version of the workflow and version details is available in that location</p><ol><li><p>If not, check <i>Account</i> to get workflow name, unique ID, and version, and cache that information</p></li></ol></li><li><p>Store only necessary metadata (instance payload, creation date) onto its own <i>Engine</i></p></li></ol><p>So, how does <i>Engine</i> tell the control plane that it now exists? That happens in the background after instance metadata is set. As background operations on a Durable Object can fail, due to eviction or server failure, we also set an “alarm” on <i>Engine</i> in the creation hot-path. That way, if the background task does not finish, the alarm <b>ensures</b> that the instance will begin. </p><p>A <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>Durable Object alarm</u></a> allows a Durable Object instance to be awakened at a fine-grained time in the future with an<b> at-least-once </b>execution model, with automatic retries built in. We extensively use this combination of background “tasks” and alarms to remove operations off the hot-path while still ensuring that everything will happen as planned. 
That’s how we keep critical operations like <i>creating an instance</i> fast without ever compromising on reliability. </p><p>Other than unlocking scale, this version of the control plane means that: </p><ul><li><p>Instance listing performance is faster, and actually consistent with cursor pagination; </p></li><li><p>Any operation on an instance does exactly one network hop (as it can go directly to its <i>Engine</i>, ensuring that eyeball request latency is as small as we can manage);</p></li><li><p>We can ensure that more instances are actually behaving correctly (by running on time) concurrently (and correct them if not, making sure that <i>Engines</i> are never late to continue execution).</p></li></ul>
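    <p>The slot-leasing cycle can be modeled in a few lines. This is a simplified illustration of the mechanism described above, not the production implementation: per cycle, each <i>SousChef</i> sends one batched request, slots are granted up to the concurrency limit, and the remainder is queued for a later cycle.</p>
            <pre><code>// Simplified model of Gatekeeper slot leasing (illustrative names and numbers).
type SlotBatch = { sousChefId: string; instanceIds: string[] };

class Gatekeeper {
  private inUse = 0;
  constructor(private limit: number) {}

  // One periodic cycle: a single batched call from one SousChef.
  grant(batch: SlotBatch): { granted: string[]; queued: string[] } {
    const available = Math.max(0, this.limit - this.inUse);
    const granted = batch.instanceIds.slice(0, available);
    this.inUse += granted.length;
    return { granted, queued: batch.instanceIds.slice(granted.length) };
  }

  // Called when instances finish, freeing slots for queued instances.
  release(count: number) {
    this.inUse = Math.max(0, this.inUse - count);
  }
}
</code></pre>
            <p>With a limit of two, a batch of three instance IDs yields two grants and one queued instance; once a running instance finishes and <code>release(1)</code> frees a slot, the queued instance is granted on a later cycle.</p>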
    <div>
      <h2>V1 → V2 migration</h2>
      <a href="#v1-v2-migration">
        
      </a>
    </div>
    <p>Now that we had a new version of the Workflows control plane that can handle a higher volume of user load, we needed to do the “boring” part: migrating our customers and instances to the new system. At Cloudflare’s scale, this becomes a problem in and of itself, so the “boring” part becomes the biggest challenge. Well before its one-year mark, Workflows had already racked up millions of instances and thousands of customers. Also, some tech debt on V1’s control plane meant that a queued instance might not have its own <i>Engine</i> Durable Object created yet, complicating matters further.</p><p>Such a migration is tricky because customers might have instances running at any given moment; we needed a way to add the <i>SousChef</i> and <i>Gatekeeper</i> components into older accounts without causing any disruption or downtime.</p><p>We ultimately decided that we would migrate existing <i>Accounts </i>(which we’ll refer to as <i>AccountOlds) </i>to behave like <i>SousChefs. </i>By persisting the <i>Account</i> DOs, we maintained the instance metadata, and simply converted the DO into a <i>SousChef</i> “DO”: </p>
            <pre><code>// You might be wondering what's this SousChef class? This is the SousChef DO class!
import { SousChef } from "@repo/souschef";

class AccountOld extends DurableObject {
  sousChef?: SousChef;

  constructor(state: DurableObjectState, env: Env) {
    super(state, env);
    // We added the following snippet to the end of our AccountOld DO's
    // constructor. This ensures that if we want, we can use any primitive
    // that is available on the SousChef DO. Constructors cannot await,
    // so setup() runs inside blockConcurrencyWhile, which defers incoming
    // requests until it finishes.
    if (this.currentVersion === ControlPlaneVersions.SOUS_CHEFS) {
      this.sousChef = new SousChef(this.ctx, this.env);
      state.blockConcurrencyWhile(() =&gt; this.sousChef!.setup());
    }
  }

  async updateInstance(params: UpdateInstanceParams) {
    if (this.currentVersion === ControlPlaneVersions.SOUS_CHEFS) {
      assert(this.sousChef !== undefined, 'SousChef must exist on v2');
      return this.sousChef.updateInstance(params);
    }

    // old logic remains the same
  }

  @RequiresVersion&lt;AccountOld&gt;(ControlPlaneVersions.V1)
  async getMetadata() {
    // this method can only be run if 
    // this.currentVersion === ControlPlaneVersions.V1
  }
}</code></pre>
            <p>We can instantiate the <i>SousChef</i> class within the <i>AccountOld</i> because the SQL tables that track instance metadata are identical on <i>SousChefs</i> and <i>AccountOld</i> DOs. As such, we could just decide which version of the code to use. If this hadn’t been the case, we would have been forced to migrate the metadata of millions of instances, which would have made the migration far more difficult and longer-running for each account. So, how did the migration work?</p><p>First, we prepared <i>AccountOld</i> DOs to be switched to behave as <i>SousChefs</i> (which meant creating a release with a version of the snippet above). Then, we enabled control plane V2 per account, which triggered the next three steps roughly at the same time:</p><ul><li><p>All new instance creation requests are routed to the new <i>SousChefs</i> (a <i>SousChef</i> is created when it receives its first request); new instances never go to <i>AccountOld</i> again;</p></li><li><p><i>AccountOld</i> DOs start migrating themselves to behave like <i>SousChefs</i>;</p></li><li><p>The new <i>Account</i> DO is spun up with the corresponding metadata.</p></li></ul><p>After all accounts were migrated to the new control plane version, we were able to sunset <i>AccountOld</i> DOs as their instance retention periods expired. Once the retention periods for all remaining instances on <i>AccountOlds</i> had expired, we could spin down those DOs permanently. The migration was completed with no downtime in a process that truly felt like changing a car’s wheels while driving.</p>
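            <p>The <code>@RequiresVersion</code> guard in the snippet above can be reconstructed as a small wrapper. The sketch below is our own minimal illustration, not the production decorator: a method refuses to run unless the DO is on the expected control plane version.</p>
            <pre><code>// Minimal sketch of a version-gating wrapper (illustrative, not the
// production @RequiresVersion decorator).
enum ControlPlaneVersions { V1 = 1, SOUS_CHEFS = 2 }

type Versioned = { currentVersion: ControlPlaneVersions };

function requiresVersion&lt;T extends Versioned, R&gt;(
  version: ControlPlaneVersions,
  fn: (this: T) =&gt; R,
) {
  return function (this: T): R {
    if (this.currentVersion !== version) {
      throw new Error(`requires control plane version ${version}`);
    }
    return fn.call(this);
  };
}

class AccountOldSketch {
  currentVersion = ControlPlaneVersions.V1;
  // Only callable while this DO is still on V1.
  getMetadata = requiresVersion(ControlPlaneVersions.V1, function (this: AccountOldSketch) {
    return "v1-metadata";
  });
}
</code></pre>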
    <div>
      <h2>Try it out</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>If you are new to Workflows, try our <a href="https://developers.cloudflare.com/workflows/get-started/guide/"><u>Get Started guide</u></a> or <a href="https://developers.cloudflare.com/workflows/get-started/durable-agents/"><u>build your first durable agent</u></a> with Workflows.</p><p>If your use case requires higher limits than our new defaults — a concurrency limit of 50,000 slots and account-level creation rate limit of 300 instances per second, 100 per workflow — reach out via your account team or the <a href="https://forms.gle/ukpeZVLWLnKeixDu7"><u>Workers Limit Request Form</u></a>. You can also reach out with feedback, feature requests, or just to share how you are using Workflows on our <a href="https://discord.com/channels/595317990191398933/1296923707792560189"><u>Discord server</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5R3ZpKlSDaSxbIwmpXwWYJ</guid>
            <dc:creator>Luís Duarte</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
            <dc:creator>André Venceslau</dc:creator>
        </item>
        <item>
            <title><![CDATA[Add voice to your agent]]></title>
            <link>https://blog.cloudflare.com/voice-agents/</link>
            <pubDate>Wed, 15 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ An experimental voice pipeline for the Agents SDK enables real-time voice interactions over WebSockets. Developers can now build agents with continuous STT and TTS in just ~30 lines of server-side code.
 ]]></description>
            <content:encoded><![CDATA[ <p>For many of us, our first experiences with AI agents have been through typing into a chat box. And for those of us using agents day to day, we have likely gotten very good at writing detailed prompts or markdown files to guide them.</p><p>But some of the moments where agents would be most useful are not always text-first. You might be on a long commute, juggling a few open sessions, or just wanting to speak naturally to an agent, have it speak back, and continue the interaction.</p><p>Adding voice to an agent should not require moving that agent into a separate voice framework. Today, we are releasing an experimental voice pipeline for the <a href="https://developers.cloudflare.com/agents/api-reference/voice/"><u>Agents SDK</u></a>.</p><p>With <code>@cloudflare/voice</code>, you can add real-time voice to the same Agent architecture you already use. Voice just becomes another way you can talk to the same Durable Object, with the same tools, persistence, and WebSocket connection model that the Agents SDK already provides. 
</p><p><code>@cloudflare/voice</code> is an experimental package for the Agents SDK that provides: </p><ul><li><p><code>withVoice(Agent)</code> for full conversation voice agents</p></li><li><p><code>withVoiceInput(Agent)</code> for speech-to-text-only use cases, like dictation or voice search </p></li><li><p><code>useVoiceAgent</code> and <code>useVoiceInput</code> hooks for React apps </p></li><li><p><code>VoiceClient</code> for framework-agnostic clients </p></li><li><p>Built-in <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> providers, so that you can get started without external API keys: </p><ul><li><p>Continuous STT with <a href="https://developers.cloudflare.com/workers-ai/models/flux/"><u>Deepgram Flux</u></a></p></li><li><p>Continuous STT with<a href="https://developers.cloudflare.com/workers-ai/models/nova-3/"><u> Deepgram Nova 3</u></a></p></li><li><p>Text-to-speech with <a href="https://developers.cloudflare.com/workers-ai/models/aura-1/"><u>Deepgram Aura</u></a></p></li></ul></li></ul><p>This means you can now build an agent that users can talk to in real time over a single WebSocket connection, while keeping the same Agent class, Durable Object instance, and the same SQLite-backed conversation history. </p><p>Just as importantly, we want this to be bigger than one fixed default stack. The provider interfaces in <code>@cloudflare/voice</code> are intentionally small, and we want speech, telephony, and transport providers to build with us, so developers can mix and match the right components for their use case, instead of being locked into a single voice architecture.</p>
    <div>
      <h2>Get started with voice</h2>
      <a href="#get-started-with-voice">
        
      </a>
    </div>
    <p>Here’s the minimal server-side pattern for a voice agent in the Agents SDK: </p>
            <pre><code>import { Agent, routeAgentRequest } from "agents";
import {
  withVoice,
  WorkersAIFluxSTT,
  WorkersAITTS,
  type VoiceTurnContext
} from "@cloudflare/voice";

const VoiceAgent = withVoice(Agent);

export class MyAgent extends VoiceAgent&lt;Env&gt; {
  transcriber = new WorkersAIFluxSTT(this.env.AI);
  tts = new WorkersAITTS(this.env.AI);

  async onTurn(transcript: string, context: VoiceTurnContext) {
    return `You said: ${transcript}`;
  }
}

export default {
  async fetch(request: Request, env: Env) {
    return (
      (await routeAgentRequest(request, env)) ??
      new Response("Not found", { status: 404 })
    );
  }
} satisfies ExportedHandler&lt;Env&gt;;

</code></pre>
            <p>That’s the whole server. You add a continuous transcriber and a text-to-speech provider, and implement <code>onTurn()</code>. 

On the client side, you can connect to it with a React hook: </p>
            <pre><code>import { useVoiceAgent } from "@cloudflare/voice/react";

function App() {
  const {
    status,
    transcript,
    interimTranscript,
    startCall,
    endCall,
    toggleMute
  } = useVoiceAgent({ agent: "my-agent" });

  return (
    &lt;div&gt;
      &lt;p&gt;Status: {status}&lt;/p&gt;
      {interimTranscript &amp;&amp; &lt;p&gt;&lt;em&gt;{interimTranscript}&lt;/em&gt;&lt;/p&gt;}
      &lt;ul&gt;
        {transcript.map((msg, i) =&gt; (
          &lt;li key={i}&gt;
            &lt;strong&gt;{msg.role}:&lt;/strong&gt; {msg.text}
          &lt;/li&gt;
        ))}
      &lt;/ul&gt;
      &lt;button onClick={startCall}&gt;Start Call&lt;/button&gt;
      &lt;button onClick={endCall}&gt;End Call&lt;/button&gt;
      &lt;button onClick={toggleMute}&gt;Mute / Unmute&lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre>
            <p>If you are not using React, you can use <code>VoiceClient</code> directly from <code>@cloudflare/voice/client</code>. </p>
    <div>
      <h2>How the voice pipeline works</h2>
      <a href="#how-the-voice-pipeline-works">
        
      </a>
    </div>
    <p>With the <a href="https://github.com/cloudflare/agents"><u>Agents SDK</u></a>, every agent is a <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Object</u></a> — a stateful, addressable server instance with its own <a href="https://developers.cloudflare.com/agents/api-reference/store-and-sync-state/"><u>SQLite database</u></a>, <a href="https://developers.cloudflare.com/agents/api-reference/websockets/"><u>WebSocket connections</u></a>, and application logic. The voice pipeline extends this model instead of replacing it. </p><p>At a high level, the flow looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5IBi8TsQiJ18Um47zpqAIr/d284d47a8653c35bc7c027438a5a7a2c/unnamed__55_.png" />
          </figure><p>Here’s how the pipeline breaks down, step by step: </p><ol><li><p><b>Audio transport: </b>The browser captures microphone audio and streams 16 kHz mono PCM over the same WebSocket connection the agent already uses. </p></li><li><p><b>STT session setup: </b>When the call starts, the agent creates a continuous transcriber session that lives for the duration of the call. </p></li><li><p><b>STT input: </b>Audio streams continuously into that session.</p></li><li><p><b>STT turn detection: </b>The speech-to-text model itself decides when the user has finished an utterance and emits a stable transcript for that turn. </p></li><li><p><b>LLM/application logic: </b>The voice pipeline passes that transcript to your <code>onTurn()</code> method. </p></li><li><p><b>TTS output: </b>Your response is synthesized to audio and sent back to the client. If <code>onTurn()</code> returns a stream, the pipeline sentence-chunks it and starts sending audio as sentences are ready. </p></li><li><p><b>Persistence: </b>The user and agent messages are persisted in SQLite, so conversation history survives reconnections and deployments. </p></li></ol>
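    <p>As an illustration of step 1 (our own sketch, not the SDK’s internal code), converting Web Audio’s Float32 samples into 16-bit PCM frames looks roughly like this: </p>

```typescript
// Sketch: convert Web Audio Float32 samples (range -1..1) into
// 16-bit little-endian PCM frames of the kind the pipeline streams.
function floatTo16BitPCM(samples: Float32Array): ArrayBuffer {
  const buffer = new ArrayBuffer(samples.length * 2);
  const view = new DataView(buffer);
  samples.forEach((sample, i) => {
    const s = Math.max(-1, Math.min(1, sample)); // clamp out-of-range input
    // negatives scale by 0x8000 and positives by 0x7FFF to use the full range
    view.setInt16(i * 2, s >= 0 ? s * 0x7fff : s * 0x8000, true);
  });
  return buffer;
}
```

    <p>Browsers usually capture at 44.1 or 48 kHz, so resampling to 16 kHz would happen before this conversion; that detail is omitted here. </p>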
    <div>
      <h2>Why voice should grow with the rest of your agent</h2>
      <a href="#why-voice-should-grow-with-the-rest-of-your-agent">
        
      </a>
    </div>
    <p>Many voice frameworks focus on the voice loop itself: audio in, transcription, model response, audio out. Those are important primitives, but there’s a lot more to an agent than just voice. </p><p>Real agents running in production will grow. They need state, scheduling, persistence, tools, workflows, telephony, and ways to keep all of that consistent across channels. As your agent grows in complexity, voice stops being a standalone feature and becomes part of a larger system. </p><p>We wanted voice in the Agents SDK to start from that assumption. Instead of building voice as a separate stack, we built it on top of the same Durable Object-based agent platform, so you can pull in the rest of the primitives you need without re-architecting the application later.</p>
    <div>
      <h3>Voice and text share the same state</h3>
      <a href="#voice-and-text-share-the-same-state">
        
      </a>
    </div>
    <p>A user might start by typing, switch to voice, and go back to text. With Agents SDK, these are all just different inputs to the same agent. The same conversation history lives in SQLite, and the same tools are available. This gives you both a cleaner mental model and a much simpler application architecture to reason about. </p>
    <div>
      <h2>Lower latency comes from...</h2>
      <a href="#lower-latency-comes-from">
        
      </a>
    </div>
    
    <div>
      <h3>a shorter network path </h3>
      <a href="#a-shorter-network-path">
        
      </a>
    </div>
    <p>Voice experiences feel good or bad very quickly. Once a user stops speaking, the system needs to transcribe, think, and start speaking back fast enough to feel conversational. </p><p>A lot of voice latency is not pure model time. It’s the cost of bouncing audio and text between different services in different places. Audio needs to go to STT, transcripts go to an LLM, and responses go to a TTS model – and each handoff adds network overhead. </p><p>With the Agents SDK voice pipeline, the agent runs on Cloudflare’s network, and the built-in providers use Workers AI bindings. That keeps the pipeline tighter and reduces the amount of infrastructure you have to stitch together yourself. </p>
    <div>
      <h3>built-in streaming</h3>
      <a href="#built-in-streaming">
        
      </a>
    </div>
    <p>A voice agent interaction feels much more natural if it speaks the first sentence quickly (also called Time-to-First Audio). When <code>onTurn()</code> returns a stream, the pipeline chunks it into sentences and starts synthesis as sentences complete. That means the user can hear the beginning of the answer while the rest is still being generated. </p>
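    <p>The chunking technique can be sketched in a few lines. This is our own illustration, not the pipeline’s actual implementation, which also has to deal with abbreviations and other punctuation edge cases: </p>

```typescript
// Illustration: buffer streamed text deltas and flush each complete
// sentence as soon as its terminator arrives, so TTS can start speaking
// sentence one while the LLM is still generating the rest.
async function* sentenceChunks(deltas: AsyncIterable<string>) {
  const sentenceEnd = /^[\s\S]*?[.!?](\s+|$)/; // naive sentence boundary
  let buffer = "";
  for await (const delta of deltas) {
    buffer += delta;
    let match = buffer.match(sentenceEnd);
    while (match) {
      yield match[0].trim();
      buffer = buffer.slice(match[0].length);
      match = buffer.match(sentenceEnd);
    }
  }
  const rest = buffer.trim();
  if (rest) yield rest; // flush any trailing partial sentence
}
```

    <p>Each yielded sentence can be handed to the TTS provider immediately, which is what keeps Time-to-First-Audio low. </p>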
    <div>
      <h2>A more realistic backend </h2>
      <a href="#a-more-realistic-backend">
        
      </a>
    </div>
    <p>Here is a fuller example that streams an LLM response and starts speaking it back, sentence by sentence:</p>
            <pre><code>import { Agent, routeAgentRequest } from "agents";
import {
  withVoice,
  WorkersAIFluxSTT,
  WorkersAITTS,
  type VoiceTurnContext
} from "@cloudflare/voice";
import { streamText } from "ai";
import { createWorkersAI } from "workers-ai-provider";

const VoiceAgent = withVoice(Agent);

export class MyAgent extends VoiceAgent&lt;Env&gt; {
  transcriber = new WorkersAIFluxSTT(this.env.AI);
  tts = new WorkersAITTS(this.env.AI);

  async onTurn(transcript: string, context: VoiceTurnContext) {
    const ai = createWorkersAI({ binding: this.env.AI });

    const result = streamText({
      model: ai("@cf/cloudflare/gpt-oss-20b"),
      system: "You are a helpful voice assistant. Be concise.",
      messages: [
        ...context.messages.map((m) =&gt; ({
          role: m.role as "user" | "assistant",
          content: m.content
        })),
        { role: "user" as const, content: transcript }
      ],
      abortSignal: context.signal
    });

    return result.textStream;
  }
}

export default {
  async fetch(request: Request, env: Env) {
    return (
      (await routeAgentRequest(request, env)) ??
      new Response("Not found", { status: 404 })
    );
  }
} satisfies ExportedHandler&lt;Env&gt;;
</code></pre>
            <p><code>context.messages</code> gives you recent SQLite-backed conversation history, and <code>context.signal</code> lets the pipeline abort the LLM call if the user interrupts. </p>
    <div>
      <h2>Voice as an input: <code>withVoiceInput</code></h2>
      <a href="#voice-as-an-input-withvoiceinput">
        
      </a>
    </div>
    <p>Not every speech interface needs to speak back. Sometimes you might want dictation, transcription, or voice search. For these use cases, you can use <code>withVoiceInput</code>: </p>
            <pre><code>import { Agent, type Connection } from "agents";
import { withVoiceInput, WorkersAINova3STT } from "@cloudflare/voice";

const InputAgent = withVoiceInput(Agent);

export class DictationAgent extends InputAgent&lt;Env&gt; {
  transcriber = new WorkersAINova3STT(this.env.AI);

  onTranscript(text: string, _connection: Connection) {
    console.log("User said:", text);
  }
}
</code></pre>
            <p>On the client, <code>useVoiceInput</code> gives you a lightweight interface centered on transcriptions: </p>
            <pre><code>import { useVoiceInput } from "@cloudflare/voice/react";

const { transcript, interimTranscript, isListening, start, stop, clear } =
  useVoiceInput({ agent: "DictationAgent" });
</code></pre>
            <p>This is useful when speech is an input method, and you don’t need a full conversational loop. </p>
    <div>
      <h2>Voice and text on the same connection</h2>
      <a href="#voice-and-text-on-the-same-connection">
        
      </a>
    </div>
    <p>The same client can call <code>sendText("What's the weather?")</code>, which bypasses STT and sends the text directly to <code>onTurn()</code>. During an active call, the response can be spoken and shown as text. Outside a call, it can remain text-only. </p><p>This gives you a genuinely multimodal agent, without splitting the implementation into different code paths. </p>
    <div>
      <h2>What else can you build? </h2>
      <a href="#what-else-can-you-build">
        
      </a>
    </div>
    <p>Because a voice agent is still an agent, all the normal Agents SDK capabilities still apply. </p>
    <div>
      <h3>Tools and scheduling</h3>
      <a href="#tools-and-scheduling">
        
      </a>
    </div>
    <p>You can greet a caller when a session starts: </p>
            <pre><code>import { Agent, type Connection } from "agents";
import { withVoice, WorkersAIFluxSTT, WorkersAITTS } from "@cloudflare/voice";

const VoiceAgent = withVoice(Agent);

export class MyAgent extends VoiceAgent&lt;Env&gt; {
  transcriber = new WorkersAIFluxSTT(this.env.AI);
  tts = new WorkersAITTS(this.env.AI);

  async onTurn(transcript: string) {
    return `You said: ${transcript}`;
  }

  async onCallStart(connection: Connection) {
    await this.speak(connection, "Hi! How can I help you today?");
  }
}
</code></pre>
            <p>You can schedule spoken reminders and expose tools to your LLM just like any other agent: </p>
            <pre><code>import { Agent } from "agents";
import {
  withVoice,
  WorkersAIFluxSTT,
  WorkersAITTS,
  type VoiceTurnContext
} from "@cloudflare/voice";
import { streamText, tool } from "ai";
import { createWorkersAI } from "workers-ai-provider";
import { z } from "zod";

const VoiceAgent = withVoice(Agent);

export class MyAgent extends VoiceAgent&lt;Env&gt; {
  transcriber = new WorkersAIFluxSTT(this.env.AI);
  tts = new WorkersAITTS(this.env.AI);

  async speakReminder(payload: { message: string }) {
    await this.speakAll(`Reminder: ${payload.message}`);
  }

  async onTurn(transcript: string, context: VoiceTurnContext) {
    const ai = createWorkersAI({ binding: this.env.AI });

    const result = streamText({
      model: ai("@cf/cloudflare/gpt-oss-20b"),
      messages: [
        ...context.messages.map((m) =&gt; ({
          role: m.role as "user" | "assistant",
          content: m.content
        })),
        { role: "user" as const, content: transcript }
      ],
      tools: {
        set_reminder: tool({
          description: "Set a spoken reminder after a delay",
          inputSchema: z.object({
            message: z.string(),
            delay_seconds: z.number()
          }),
          execute: async ({ message, delay_seconds }) =&gt; {
            await this.schedule(delay_seconds, "speakReminder", { message });
            return { confirmed: true };
          }
        })
      },
      abortSignal: context.signal
    });

    return result.textStream;
  }
}
</code></pre>
            
    <div>
      <h3>Runtime model switching</h3>
      <a href="#runtime-model-switching">
        
      </a>
    </div>
    <p>The voice pipeline also lets you choose a transcription model dynamically per connection. </p><p>For example, you might prefer Flux for conversational turn-taking and Nova 3 for higher-accuracy dictation. You can switch at runtime by overriding <code>createTranscriber()</code>: </p>
            <pre><code>import { Agent, type Connection } from "agents";
import {
  withVoice,
  WorkersAIFluxSTT,
  WorkersAINova3STT,
  WorkersAITTS,
  type Transcriber
} from "@cloudflare/voice";

const VoiceAgent = withVoice(Agent);

export class MyAgent extends VoiceAgent&lt;Env&gt; {
  tts = new WorkersAITTS(this.env.AI);

  createTranscriber(connection: Connection): Transcriber {
    const url = new URL(connection.url ?? "http://localhost");
    const model = url.searchParams.get("model");
    if (model === "nova-3") {
      return new WorkersAINova3STT(this.env.AI);
    }
    return new WorkersAIFluxSTT(this.env.AI);
  }
}
</code></pre>
            <p>On the client, you can pass query parameters through the hook: </p>
            <pre><code>const voiceAgent = useVoiceAgent({
  agent: "my-voice-agent",
  query: { model: "nova-3" }
});
</code></pre>
            
    <div>
      <h2>Pipeline hooks</h2>
      <a href="#pipeline-hooks">
        
      </a>
    </div>
    <p>You can also intercept data between stages: </p><ul><li><p><code>afterTranscribe(transcript, connection)</code></p></li><li><p><code>beforeSynthesize(text, connection)</code></p></li><li><p><code>afterSynthesize(audio, text, connection)</code></p></li></ul><p>These hooks are useful for content filtering, text normalization, language-specific transformations, or custom logging. </p>
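    <p>For example, a <code>beforeSynthesize</code> hook might strip Markdown before text reaches the TTS model, so emphasis markers aren’t read aloud. A minimal normalizer of the kind you might call from that hook (our own sketch, not part of the package) could look like: </p>

```typescript
// Hypothetical helper for a beforeSynthesize hook: remove Markdown
// punctuation that a TTS model would otherwise try to pronounce.
function normalizeForSpeech(text: string): string {
  return text
    .replace(/`([^`]*)`/g, "$1") // unwrap inline code spans
    .replace(/[*_#]/g, "")       // strip emphasis and heading markers
    .replace(/\s+/g, " ")        // collapse whitespace and newlines
    .trim();
}
```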
    <div>
      <h2>Telephone and transport options</h2>
      <a href="#telephone-and-transport-options">
        
      </a>
    </div>
    <p>By default, the voice pipeline uses a single WebSocket connection as the simplest path for 1:1 voice agents. But that’s not the only option. </p>
    <div>
      <h3>Phone calls via Twilio</h3>
      <a href="#phone-calls-via-twilio">
        
      </a>
    </div>
    <p>You can connect phone calls to the same agent using the Twilio adapter:</p>
            <pre><code>import { routeAgentRequest } from "agents";
import { TwilioAdapter } from "@cloudflare/voice-twilio";

export default {
  async fetch(request: Request, env: Env) {
    if (new URL(request.url).pathname === "/twilio") {
      return TwilioAdapter.handleRequest(request, env, "MyAgent");
    }

    return (
      (await routeAgentRequest(request, env)) ??
      new Response("Not found", { status: 404 })
    );
  }
};
</code></pre>
            <p>This lets the same agent handle web voice, text input, and phone calls. </p><p>One caveat: the default Workers AI TTS provider returns MP3, while Twilio expects 8 kHz mu-law audio. For production telephony, you may want a TTS provider that outputs PCM or mu-law directly. </p>
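    <p>If you do bridge the formats yourself, the per-sample conversion from 16-bit linear PCM to G.711 mu-law is compact (downsampling to 8 kHz is a separate step). A sketch using the standard G.711 constants, not production audio code: </p>

```typescript
// Sketch: encode one 16-bit linear PCM sample as an 8-bit G.711
// mu-law byte, the format Twilio's media streams expect.
function linearToMulaw(sample: number): number {
  const BIAS = 0x84;  // standard mu-law bias (132)
  const CLIP = 32635; // clip before biasing to avoid overflow
  let s = sample;
  let sign = 0;
  if (s >= 0) { sign = 0; } else { sign = 0x80; s = -s; }
  if (s > CLIP) s = CLIP;
  s += BIAS;
  // exponent = position of the highest set bit above bit 7 (0..7)
  let exponent = 7;
  let mask = 0x4000;
  while (exponent > 0) {
    if ((s & mask) !== 0) break;
    exponent -= 1;
    mask = mask >> 1;
  }
  const mantissa = (s >> (exponent + 3)) & 0x0f;
  // mu-law bytes are transmitted bit-inverted
  return (sign | (exponent << 4) | mantissa) ^ 0xff;
}
```

    <p>Silence encodes to <code>0xFF</code>, which is why idle mu-law streams are runs of that byte. </p>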
    <div>
      <h3>WebRTC</h3>
      <a href="#webrtc">
        
      </a>
    </div>
    <p>If you need a transport that is better suited to difficult network conditions or will include multiple participants, the voice package also includes SFU utilities and supports custom transports. The default model is WebSocket-native today, but we plan to develop more adapters to connect to our <a href="https://developers.cloudflare.com/realtime/sfu/"><u>global SFU infrastructure</u></a>. </p>
    <div>
      <h2>Build with us </h2>
      <a href="#build-with-us">
        
      </a>
    </div>
    <p>The voice pipeline is provider-agnostic by design. </p><p>Under the hood, each stage is defined by a small interface: a transcriber opens a continuous session and accepts audio frames as they arrive, while a TTS provider takes text and returns audio. If a provider can stream audio output, the pipeline can use that too.</p>
            <pre><code>interface Transcriber {
  createSession(options?: TranscriberSessionOptions): TranscriberSession;
}

interface TranscriberSession {
  feed(chunk: ArrayBuffer): void;
  close(): void;
}

interface TTSProvider {
  synthesize(text: string, signal?: AbortSignal): Promise&lt;ArrayBuffer | null&gt;;
}
</code></pre>
            <p>We didn’t want voice support in Agents SDK to only work with one fixed combination of models and transports. We wanted the default path to be simple, while still making it easy to plug in other providers as the ecosystem grows. </p><p>The built-in providers use Workers AI, so you can get started without external API keys:</p><ul><li><p><code>WorkersAIFluxSTT</code> for conversational streaming STT</p></li><li><p><code>WorkersAINova3STT</code> for dictation-style streaming STT</p></li><li><p><code>WorkersAITTS</code> for text-to-speech</p></li></ul><p>But the bigger goal is interoperability. If you maintain a speech or voice service, these interfaces are small enough to implement without needing to understand the rest of the SDK internals. If your STT provider accepts streaming audio and can detect utterance boundaries, it can satisfy the transcriber interface. If your TTS provider can stream audio output, even better. </p><p>We would love to work on interoperability with:</p><ul><li><p>STT providers like AssemblyAI, Rev.ai, Speechmatics, or any service with a real-time transcription API</p></li><li><p>TTS providers like PlayHT, LMNT, Cartesia, Coqui, Amazon Polly, or Google Cloud TTS</p></li><li><p>telephony adapters for platforms like Vonage, Telnyx, or Bandwidth</p></li><li><p>transport implementations for WebRTC data channels, SFU bridges, and other audio transport layers</p></li></ul><p>We are also interested in collaborations that go beyond individual providers:</p><ul><li><p>latency benchmarking across STT + LLM + TTS combinations</p></li><li><p>multilingual support and better documentation for non-English voice agents</p></li><li><p>accessibility work, especially around multimodal interfaces and speech impairments</p></li></ul><p>If you are building voice infrastructure and want to see a first-class integration, <a href="https://github.com/cloudflare/agents/pulls"><u>open a PR</u></a> or reach out.</p>
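    <p>To make the size of that contract concrete, here is a stub written against the simplified interfaces above (our own sketch; the full package interface also surfaces transcript events) that a provider integration could start from: </p>

```typescript
// Stub transcriber satisfying the simplified interfaces shown above.
// A real integration would open a streaming connection to the vendor's
// API inside createSession(); this one just records what it was fed.
interface TranscriberSession {
  feed(chunk: ArrayBuffer): void;
  close(): void;
}

class RecordingSession implements TranscriberSession {
  bytesFed = 0;
  closed = false;
  feed(chunk: ArrayBuffer): void {
    this.bytesFed += chunk.byteLength; // a vendor SDK would forward this frame
  }
  close(): void {
    this.closed = true; // and tear down the upstream connection here
  }
}

class RecordingSTT {
  createSession(): RecordingSession {
    return new RecordingSession();
  }
}
```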
    <div>
      <h2>Try it now</h2>
      <a href="#try-it-now">
        
      </a>
    </div>
    <p>The voice pipeline is available today as an experimental package:</p>
            <pre><code>npm create cloudflare@latest -- --template cloudflare/agents-starter
</code></pre>
            <p>Add <code>@cloudflare/voice</code>, give your agent a transcriber and a TTS provider, deploy it, and start talking to it. You can also read the <a href="https://developers.cloudflare.com/agents/api-reference/voice/"><u>API reference</u></a>. </p><p>If you build something interesting, open an issue or PR on <a href="https://github.com/cloudflare/agents"><u>github.com/cloudflare/agents</u></a>. Voice should not require a separate stack, and we think the best voice agents will be the ones built on the same durable application model as everything else.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SLeHwqXY0ehOUsy5Nzzq2/c16244f45b8411f8817b9a21b45ed4b8/BLOG-3198_3.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <guid isPermaLink="false">1t4PF8FLpPMGpgBd6UjBBv</guid>
            <dc:creator>Sunil Pai</dc:creator>
            <dc:creator>Korinne Alpers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Scaling MCP adoption: Our reference architecture for simpler, safer and cheaper enterprise deployments of MCP]]></title>
            <link>https://blog.cloudflare.com/enterprise-mcp/</link>
            <pubDate>Tue, 14 Apr 2026 13:00:10 GMT</pubDate>
            <description><![CDATA[ We share Cloudflare's internal strategy for governing MCP using Access, AI Gateway, and MCP server portals. We also launch Code Mode to slash token costs and recommend new rules for detecting Shadow MCP in Cloudflare Gateway.
 ]]></description>
            <content:encoded><![CDATA[ <p>We at Cloudflare have aggressively adopted <a href="https://modelcontextprotocol.io/"><u>Model Context Protocol (MCP)</u></a> as a core part of our AI strategy. This shift has moved well beyond our engineering organization, with employees across product, sales, marketing, and finance teams now using agentic workflows to drive efficiency in their daily tasks. But the adoption of agentic workflows with MCP is not without security risks, which include authorization sprawl, <a href="https://www.cloudflare.com/learning/ai/prompt-injection/"><u>prompt injection</u></a>, and <a href="https://www.cloudflare.com/learning/security/what-is-a-supply-chain-attack/"><u>supply chain risks</u></a>. To secure this broad company-wide adoption, we have integrated a suite of security controls from both our <a href="https://www.cloudflare.com/sase/"><u>Cloudflare One (SASE) platform</u></a> and our <a href="https://workers.cloudflare.com/"><u>Cloudflare Developer platform</u></a>, allowing us to govern AI usage with MCP without slowing down our workforce. </p><p>In this blog, we’ll walk through our best practices for securing MCP workflows, putting different parts of our platform together to create a unified security architecture for the era of autonomous AI. 
We’ll also share two new concepts that support enterprise MCP deployments:</p><ul><li><p>We are launching <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/ai-controls/mcp-portals/#code-mode"><u>Code Mode with MCP server portals</u></a>, to drastically reduce the token costs associated with MCP usage; </p></li><li><p>We describe how to use <a href="https://developers.cloudflare.com/cloudflare-wan/zero-trust/cloudflare-gateway/"><u>Cloudflare Gateway</u></a> for Shadow MCP detection, to discover use of unauthorized remote MCP servers.</p></li></ul><p>We also talk about how our organization approached deploying MCP, and how we built out our MCP security architecture using Cloudflare products including <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP servers</u></a>, <a href="https://www.cloudflare.com/sase/products/access/"><u>Cloudflare Access</u></a>, <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/ai-controls/mcp-portals/"><u>MCP server portals</u></a>, and <a href="https://www.cloudflare.com/developer-platform/ai-gateway/"><u>AI Gateway</u></a>. </p>
    <div>
      <h2>Remote MCP servers provide better visibility and control</h2>
      <a href="#remote-mcp-servers-provide-better-visibility-and-control">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>MCP</u></a> is an open standard that enables developers to build a two-way connection between AI applications and the data sources they need to access. In this architecture, the MCP client is the integration point with the <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>LLM</u></a> or other <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agent</u></a>, and the MCP server sits between the <a href="https://www.cloudflare.com/learning/ai/mcp-client-and-server/"><u>MCP client</u></a> and the corporate resources.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73kpNxOQIlM0UOnty9qGXS/53fe1b92e299b52363ac1870df52f2e9/BLOG-3252_2.png" />
          </figure><p>The separation between MCP clients and MCP servers allows agents to autonomously pursue goals and take actions while maintaining a clear boundary between the AI (integrated at the MCP client) and the credentials and APIs of the corporate resource (integrated at the MCP server). </p><p>Our workforce at Cloudflare is constantly using MCP servers to access information in various internal resources, including our project management platform, our internal wiki, documentation and code management platforms, and more. </p><p>Very early on, we realized that locally-hosted MCP servers were a security liability. Local MCP server deployments may rely on unvetted software sources and versions, which increases the risk of <a href="https://owasp.org/www-project-mcp-top-10/2025/MCP04-2025%E2%80%93Software-Supply-Chain-Attacks&amp;Dependency-Tampering"><u>supply chain attacks</u></a> or <a href="https://owasp.org/www-community/attacks/MCP_Tool_Poisoning"><u>tool injection attacks</u></a>. They prevent IT and security administrators from administering these servers, leaving it up to individual employees and developers to choose which MCP servers they want to run and how they want to keep them up to date. This is a losing game.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4mngDeTGrsah2DiN7IAnrf/575075096e72d6b967df4327a99c35fc/BLOG-3252_3.png" />
          </figure><p>Instead, we have a centralized team at Cloudflare that manages our MCP server deployment across the enterprise. This team built a shared MCP platform inside our monorepo that provides governed infrastructure out of the box. When an employee wants to expose an internal resource via MCP, they first get approval from our AI governance team, and then they copy a template, write their tool definitions, and deploy, all the while inheriting default-deny write controls with audit logging, auto-generated <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/"><u>CI/CD pipelines</u></a>, and <a href="https://www.cloudflare.com/learning/security/glossary/secrets-management/"><u>secrets management</u></a> for free. This means standing up a new governed MCP server takes minutes of scaffolding. The governance is baked into the platform itself, which is what allowed adoption to spread so quickly. </p><p>Our CI/CD pipeline deploys them as <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP servers</u></a> on custom domains on <a href="https://www.cloudflare.com/developer-platform/"><u>Cloudflare’s developer platform</u></a>. This gives us visibility into which MCP servers are being used by our employees, while maintaining control over software sources. As an added bonus, every remote MCP server on the Cloudflare developer platform is automatically deployed across our global network of data centers, so MCP servers can be accessed by our employees with low latency, regardless of where they might be in the world.</p>
    <div>
      <h3>Cloudflare Access provides authentication</h3>
      <a href="#cloudflare-access-provides-authentication">
        
      </a>
    </div>
    <p>Some of our MCP servers sit in front of public resources, like our <a href="https://docs.mcp.cloudflare.com/mcp"><u>Cloudflare documentation MCP server</u></a> or <a href="https://radar.mcp.cloudflare.com/mcp"><u>Cloudflare Radar MCP server</u></a>, and thus we want them to be accessible to anyone. But many of the MCP servers used by our workforce are sitting in front of our private corporate resources. These MCP servers require user authentication to ensure that they are off limits to everyone but authorized Cloudflare employees. To achieve this, our monorepo template for MCP servers integrates <a href="https://www.cloudflare.com/sase/products/access/"><u>Cloudflare Access</u></a> as the OAuth provider. Cloudflare Access secures login flows and issues access tokens to resources, while acting as an identity aggregator that verifies end user <a href="https://www.cloudflare.com/learning/access-management/what-is-sso/"><u>single-sign on (SSO)</u></a>, <a href="https://www.cloudflare.com/learning/access-management/what-is-multi-factor-authentication/"><u>multifactor authentication (MFA)</u></a>, and a variety of contextual attributes such as IP addresses, location, or device certificates. </p>
    <div>
      <h2>MCP server portals centralize discovery and governance</h2>
      <a href="#mcp-server-portals-centralize-discovery-and-governance">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70s5yIwdzoaYQIoRF4L0mH/018dae32529c30639a2af48ee4031bc6/BLOG-3252_4.png" />
          </figure><p><sup><i>MCP server portals unify governance and control for all AI activity.</i></sup></p><p>As the number of our remote MCP servers grew, we hit a new wall: discovery. We wanted to make it easy for every employee (especially those new to MCP) to find and work with all the MCP servers available to them. Our MCP server portals product provided a convenient solution. The employee simply connects their MCP client to the MCP server portal, and the portal immediately reveals every internal and third-party MCP server they are authorized to use. </p><p>Beyond this, our MCP server portals provide centralized logging, consistent policy enforcement, and <a href="https://www.cloudflare.com/learning/access-management/what-is-dlp/"><u>data loss prevention</u></a> (DLP) guardrails. Our administrators can see who logged into what MCP portal and create DLP rules that prevent certain data, like personally identifiable information (PII), from being shared with certain MCP servers.</p><p>We can also create policies that control who has access to the portal itself, and what tools from each MCP server should be exposed. For example, we could set up one MCP server portal, accessible only to employees in our <i>finance </i>group, that exposes just the read-only tools for the MCP server in front of our internal code repository. Meanwhile, a different MCP server portal, accessible only to employees on their corporate laptops who are on our <i>engineering </i>team, could expose more powerful read/write tools to our code repository MCP server.</p><p>An overview of our MCP server portal architecture is shown above. The portal supports both remote MCP servers hosted on Cloudflare, and third-party MCP servers hosted anywhere else. What makes this architecture uniquely performant is that all these security and networking components run on the same physical machine within our global network. 
When an employee's request moves through the MCP server portal, a Cloudflare-hosted remote MCP server, and Cloudflare Access, their traffic never needs to leave the same physical machine. </p>
    <div>
      <h2>Code Mode with MCP server portals reduces costs</h2>
      <a href="#code-mode-with-mcp-server-portals-reduces-costs">
        
      </a>
    </div>
    <p>After months of high-volume MCP deployments, we’ve paid out our fair share of tokens. We’ve also started to think most people are doing MCP wrong.</p><p>The standard approach to MCP requires defining a separate tool for every API operation that is exposed via an MCP server. But this static and exhaustive approach quickly exhausts an agent’s context window, especially for large platforms with thousands of endpoints.</p><p>We previously wrote about how we used server-side <a href="https://blog.cloudflare.com/code-mode-mcp/"><u>Code Mode to power Cloudflare’s MCP server</u></a>, allowing us to expose <a href="https://developers.cloudflare.com/api/?cf_target_id=C3927C0A6A2E9B823D2DF3F28E5F0D30"><u>the thousands of endpoints in the Cloudflare API</u></a> while reducing token use by 99.9%. The Cloudflare MCP server exposes just two tools: a <code>search</code> tool that lets the model write JavaScript to explore what’s available, and an <code>execute</code> tool that lets it write JavaScript to call the tools it finds. The model discovers what it needs on demand, rather than receiving everything upfront.</p><p>We like this pattern so much, we had to make it available for everyone. So we have now launched the ability to use the “Code Mode” pattern with <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/ai-controls/mcp-portals/"><u>MCP server portals</u></a>. Now you can front all of your MCP servers with a centralized portal that performs audit controls and progressive tool disclosure, in order to reduce token costs.</p><p>Here is how it works. Instead of exposing every tool definition to a client, all of your underlying MCP servers collapse into just two MCP portal tools: <code>portal_codemode_search</code> and <code>portal_codemode_execute</code>. The <code>search</code> tool gives the model access to a <code>codemode.tools()</code> function that returns all the tool definitions from every connected upstream MCP server. 
The model then writes JavaScript to filter and explore these definitions, finding exactly the tools it needs without every schema being loaded into context. The <code>execute</code> tool provides a <code>codemode</code> proxy object where each upstream tool is available as a callable function. The model writes JavaScript that calls these tools directly, chaining multiple operations, filtering results, and handling errors in code. All of this runs in a sandboxed environment on the MCP server portal powered by <a href="https://developers.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a>. </p><p>Here is an example of an agent that needs to find a Jira ticket and update it with information from Google Drive. It first searches for the right tools:</p>
            <pre><code>// portal_codemode_search
async () =&gt; {
 const tools = await codemode.tools();
 return tools
  .filter(t =&gt; t.name.includes("jira") || t.name.includes("drive"))
  .map(t =&gt; ({ name: t.name, params: Object.keys(t.inputSchema.properties || {}) }));
}
</code></pre>
            <p> The model now knows the exact tool names and parameters it needs, without the full schemas of tools ever entering its context. It then writes a single <code>execute</code> call to chain the operations together:</p>
            <pre><code>// portal_codemode_execute
async () =&gt; {
 const tickets = await codemode.jira_search_jira_with_jql({
  jql: 'project = BLOG AND status = "In Progress"',
  fields: ["summary", "description"]
 });
 const doc = await codemode.google_workspace_drive_get_content({
  fileId: "1aBcDeFgHiJk"
 });
 await codemode.jira_update_jira_ticket({
  issueKey: tickets[0].key,
  fields: { description: tickets[0].description + "\n\n" + doc.content }
 });
 return { updated: tickets[0].key };
}
</code></pre>
            <p>This is just two tool calls. The first discovers what's available, the second does the work. Without Code Mode, this same workflow would have required the model to receive the full schemas of every tool from both MCP servers upfront, and then make three separate tool invocations.</p><p>Let’s put the savings in perspective: when our internal MCP server portal is connected to just four of our internal MCP servers, it exposes 52 tools that consume approximately 9,400 tokens of context just for their definitions. With Code Mode enabled, those 52 tools collapse into 2 portal tools consuming roughly 600 tokens, a 94% reduction. And critically, this cost stays fixed. As we connect more MCP servers to the portal, the token cost of Code Mode doesn’t grow.</p><p>Code Mode can be activated on an MCP server portal by adding a query parameter to the URL. Instead of connecting to your portal over its usual URL (e.g. <code>https://myportal.example.com/mcp</code>), you attach <code>?codemode=search_and_execute</code> to the URL (e.g. <code>https://myportal.example.com/mcp?codemode=search_and_execute</code>).</p>
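            <p>As a concrete sketch (the portal hostname here is a placeholder, not a real portal), a small helper can append that parameter to an existing portal URL before handing it to an MCP client:</p>

```javascript
// Sketch: turning a regular MCP server portal URL into its Code Mode
// variant by appending the codemode query parameter.
// "myportal.example.com" is a placeholder hostname.
function withCodeMode(portalUrl) {
  const url = new URL(portalUrl);
  url.searchParams.set("codemode", "search_and_execute");
  return url.toString();
}

console.log(withCodeMode("https://myportal.example.com/mcp"));
// https://myportal.example.com/mcp?codemode=search_and_execute
```

            <p>Using the standard <code>URL</code> API rather than string concatenation keeps any query parameters already on the portal URL intact.</p>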
    <div>
      <h2>AI Gateway provides extensibility and cost controls</h2>
      <a href="#ai-gateway-provides-extensibility-and-cost-controls">
        
      </a>
    </div>
    <p>We aren’t done yet. We plug <a href="https://www.cloudflare.com/developer-platform/products/ai-gateway/"><u>AI Gateway</u></a> into our architecture by positioning it on the connection between the MCP client and the LLM. This allows us to quickly switch between various LLM providers (to prevent vendor lock-in) and to enforce cost controls (by limiting the number of tokens each employee can burn through). The full architecture is shown below. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3epLuGydMO1YxkdhkcqGPG/74c26deb712d383942e79699b4fb71da/BLOG-3252_5.png" />
          </figure>
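    <p>In practice, placing AI Gateway on this connection mostly means swapping the base URL your MCP client uses for LLM calls. Here is a minimal sketch following the documented gateway URL pattern; the account ID and gateway name below are placeholders:</p>

```javascript
// Sketch: building an AI Gateway base URL so an OpenAI-compatible client
// routes its LLM traffic through the gateway instead of hitting the
// provider directly. "abc123" and "my-gateway" are placeholder values.
function aiGatewayBaseUrl(accountId, gatewayName, provider) {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayName}/${provider}`;
}

// An MCP client's LLM calls can then be pointed at the gateway, e.g.
//   new OpenAI({ baseURL: aiGatewayBaseUrl("abc123", "my-gateway", "openai") })
// letting the gateway log usage, enforce token budgets, and swap providers.
console.log(aiGatewayBaseUrl("abc123", "my-gateway", "openai"));
// https://gateway.ai.cloudflare.com/v1/abc123/my-gateway/openai
```
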
    <div>
      <h2>Cloudflare Gateway discovers and blocks shadow MCP</h2>
      <a href="#cloudflare-gateway-discovers-and-blocks-shadow-mcp">
        
      </a>
    </div>
    <p>Now that we’ve provided governed access to authorized MCP servers, let’s look into dealing with unauthorized MCP servers. We can perform shadow MCP discovery using <a href="https://developers.cloudflare.com/cloudflare-wan/zero-trust/cloudflare-gateway/"><u>Cloudflare Gateway</u></a>. Cloudflare Gateway is our comprehensive secure web gateway that provides enterprise security teams with visibility and control over their employees’ Internet traffic.</p><p>We can use the Cloudflare Gateway API to perform a multi-layer scan to find remote MCP servers that are not being accessed via an MCP server portal. This is possible using a variety of existing Gateway and Data Loss Prevention (DLP) selectors, including:</p><ul><li><p>Using the Gateway <code>httpHost</code> selector to scan for </p><ul><li><p>known MCP server hostnames (like <a href="http://mcp.stripe.com"><u>mcp.stripe.com</u></a>)</p></li><li><p>mcp.* subdomains using wildcard hostname patterns </p></li></ul></li><li><p>Using the Gateway <code>httpRequestURI</code> selector to scan for MCP-specific URL paths like /mcp and /mcp/sse </p></li><li><p>Using DLP-based body inspection to find MCP traffic, even if that traffic uses URIs that do not contain the telltale mentions of <code>mcp</code> or <code>sse</code>. Specifically, we use the fact that MCP uses JSON-RPC over HTTP, which means every request contains a "method" field with values like "tools/call", "prompts/get", or "initialize." Here are some regex rules that can be used to detect MCP traffic in the HTTP body:</p></li></ul>
            <pre><code>const DLP_REGEX_PATTERNS = [
  {
    name: "MCP Initialize Method",
    regex: '"method"\\s{0,5}:\\s{0,5}"initialize"',
  },
  {
    name: "MCP Tools Call",
    regex: '"method"\\s{0,5}:\\s{0,5}"tools/call"',
  },
  {
    name: "MCP Tools List",
    regex: '"method"\\s{0,5}:\\s{0,5}"tools/list"',
  },
  {
    name: "MCP Resources Read",
    regex: '"method"\\s{0,5}:\\s{0,5}"resources/read"',
  },
  {
    name: "MCP Resources List",
    regex: '"method"\\s{0,5}:\\s{0,5}"resources/list"',
  },
  {
    name: "MCP Prompts List",
    regex: '"method"\\s{0,5}:\\s{0,5}"prompts/(list|get)"',
  },
  {
    name: "MCP Sampling Create Message",
    regex: '"method"\\s{0,5}:\\s{0,5}"sampling/createMessage"',
  },
  {
    name: "MCP Protocol Version",
    regex: '"protocolVersion"\\s{0,5}:\\s{0,5}"202[4-9]',
  },
  {
    name: "MCP Notifications Initialized",
    regex: '"method"\\s{0,5}:\\s{0,5}"notifications/initialized"',
  },
  {
    name: "MCP Roots List",
    regex: '"method"\\s{0,5}:\\s{0,5}"roots/list"',
  },
];
</code></pre>
            <p>The Gateway API supports additional automation. For example, one can use the custom DLP profile we defined above to block traffic, redirect it, or simply log and inspect MCP payloads. Put these together, and Gateway can provide comprehensive detection of unauthorized remote MCP servers accessed via an enterprise network. </p><p>For more information on how to build this out, see this <a href="https://developers.cloudflare.com/cloudflare-one/tutorials/detect-mcp-traffic-gateway-logs/"><u>tutorial</u></a>. </p>
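            <p>As an illustrative (not production) sketch of how these patterns classify traffic, the same regexes can simply be tested against a request body. Two of the patterns above are repeated here so the snippet stands alone:</p>

```javascript
// Sketch: applying the MCP detection regexes above to an HTTP request body.
// Only two patterns are repeated here to keep the snippet self-contained.
const MCP_PATTERNS = [
  { name: "MCP Initialize Method", regex: '"method"\\s{0,5}:\\s{0,5}"initialize"' },
  { name: "MCP Tools Call", regex: '"method"\\s{0,5}:\\s{0,5}"tools/call"' },
];

// Return the names of every pattern that matches the body.
function detectMcp(body) {
  return MCP_PATTERNS
    .filter((p) => new RegExp(p.regex).test(body))
    .map((p) => p.name);
}

const body = JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/call", params: {} });
console.log(detectMcp(body)); // [ 'MCP Tools Call' ]
```

            <p>The <code>\s{0,5}</code> in each pattern tolerates a little whitespace around the colon, so both compact and pretty-printed JSON-RPC bodies match.</p>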
    <div>
      <h2>Public-facing MCP Servers are protected with AI Security for Apps</h2>
      <a href="#public-facing-mcp-servers-are-protected-with-ai-security-for-apps">
        
      </a>
    </div>
    <p>So far, we’ve been focused on protecting our workforce’s access to our internal MCP servers. But, like many other organizations, we also have public-facing MCP servers that our customers can use to agentically administer and operate Cloudflare products. These MCP servers are hosted on Cloudflare’s developer platform. (You can find a list of individual MCPs for specific products <a href="https://developers.cloudflare.com/agents/model-context-protocol/mcp-servers-for-cloudflare/"><u>here</u></a>, or refer back to our new approach for providing more efficient access to the entire Cloudflare API using <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a>.)</p><p>We believe that every organization should publish official, first-party MCP servers for their products. The alternative is that your customers source unvetted servers from public repositories where packages may contain <a href="https://www.docker.com/blog/mcp-horror-stories-the-supply-chain-attack/"><u>dangerous trust assumptions</u></a>, undisclosed data collection, and a range of unsanctioned behaviors. By publishing your own MCP servers, you control the code, update cadence, and security posture of the tools your customers use.</p><p>Since every remote MCP server is an HTTP endpoint, we can put it behind the <a href="https://www.cloudflare.com/application-services/products/waf/"><u>Cloudflare Web Application Firewall (WAF)</u></a>. Customers can enable the <a href="https://developers.cloudflare.com/waf/detections/ai-security-for-apps/"><u>AI Security for Apps</u></a> feature within the WAF to automatically inspect inbound MCP traffic for prompt injection attempts, sensitive data leakage, and topic classification. Public-facing MCP servers are protected just like any other web API.  </p>
    <div>
      <h2>The future of MCP in the enterprise</h2>
      <a href="#the-future-of-mcp-in-the-enterprise">
        
      </a>
    </div>
    <p>We hope our experience, products, and reference architectures will be useful to other organizations as they continue along their own journey towards broad enterprise-wide adoption of MCP.</p><p>We’ve secured our own MCP workflows by: </p><ul><li><p>Offering our developers a templated framework for building and deploying remote MCP servers on our developer platform using Cloudflare Access for authentication</p></li><li><p>Ensuring secure, identity-based access to authorized MCP servers by connecting our entire workforce to MCP server portals</p></li><li><p>Controlling costs using AI Gateway to mediate access to the LLMs powering our workforce’s MCP clients, and using Code Mode in MCP server portals to reduce token consumption and context bloat</p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-one/tutorials/detect-mcp-traffic-gateway-logs/"><u>Discovering</u></a> shadow MCP usage using Cloudflare Gateway </p></li></ul><p>For organizations advancing on their own enterprise MCP journeys, we recommend starting by putting your existing remote and third-party MCP servers behind <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/ai-controls/mcp-portals/"><u> Cloudflare MCP server portals</u></a> and enabling Code Mode to start benefiting from cheaper, safer, and simpler enterprise deployments of MCP.   </p><p><sub><i>Acknowledgements: This reference architecture and blog represent the work of many people across many different roles and business units at Cloudflare. This is just a partial list of contributors: Ann Ming Samborski, Kate Reznykova, Mike Nomitch, James Royal, Liam Reese, Yumna Moazzam, Simon Thorpe, Rian van der Merwe, Rajesh Bhatia, Ayush Thakur, Gonzalo Chavarri, Maddy Onyehara, and Haley Campbell.</i></sub></p><div>  </div><p></p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Agents Week]]></category>
            <guid isPermaLink="false">73AaroR7GH8sXdbfIV99Fl</guid>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>Matt Carey</dc:creator>
            <dc:creator>Ivan Anguiano</dc:creator>
        </item>
        <item>
            <title><![CDATA[Secure private networking for everyone: users, nodes, agents, Workers — introducing Cloudflare Mesh]]></title>
            <link>https://blog.cloudflare.com/mesh/</link>
            <pubDate>Tue, 14 Apr 2026 13:00:09 GMT</pubDate>
            <description><![CDATA[ Cloudflare Mesh provides secure, private network access for users, nodes, and autonomous AI agents. By integrating with Workers VPC, developers can now grant agents scoped access to private databases and APIs without manual tunnels.
 ]]></description>
            <content:encoded><![CDATA[ <p>AI agents have changed how teams think about private network access. Your coding agent needs to query a staging database. Your production agent needs to call an internal API. Your personal AI assistant needs to reach a service running on your home network. The clients are no longer just humans or services. They're <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>agents</u></a>, running autonomously, making requests you didn't explicitly approve, against infrastructure you need to keep secure.</p><p>Each of these workflows has the same underlying problem: agents need to reach private resources, but the tools for doing that were built for humans, not autonomous software. VPNs require interactive login. SSH tunnels require manual setup. Exposing services publicly is a security risk. And none of these approaches give you visibility into what the agent is actually doing once it's connected.</p><p>Today, <b>we're introducing Cloudflare Mesh</b> to connect your private networks together and provide secure access for your agents. We're also integrating Mesh with <a href="https://www.cloudflare.com/developer-platform/"><u>Cloudflare Developer Platform</u></a> so that <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, and agents built with the Agents SDK can reach your private infrastructure directly.</p><p>If you’re using <a href="https://www.cloudflare.com/sase/"><u>Cloudflare One’s SASE and Zero Trust suite</u></a>, you already have access to Mesh. You don’t need a new technology paradigm to secure agentic workloads. You need a SASE that was built for the agentic era, and that’s Cloudflare One. 
Cloudflare Mesh is a new experience with a simpler setup that leverages the on-ramps you’re already familiar with: WARP Connector <i>(now called a Cloudflare Mesh node)</i> and WARP Client <i>(now called Cloudflare One Client)</i>. Together, these create a private network for human, developer, and agent traffic. Mesh is directly integrated into your existing Cloudflare One deployment. Your existing Gateway policies, Access rules, and device posture checks apply to Mesh traffic automatically.</p><p>If you're a developer who just wants private networking for your agents, services, and team, Mesh is where you start. Set it up in minutes, connect your networks, and secure your traffic. And because Mesh runs on the <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a> platform, you can grow into more advanced capabilities over time: <a href="https://www.cloudflare.com/sase/products/gateway/"><u>Gateway</u></a> network, DNS, and HTTP policies for fine-grained traffic control, <a href="https://www.cloudflare.com/sase/use-cases/infrastructure-access/"><u>Access for Infrastructure</u></a> for SSH and RDP session management, <a href="https://www.cloudflare.com/sase/products/browser-isolation/"><u>Browser Isolation</u></a> for safe web access, <a href="https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/"><u>DLP</u></a> to prevent sensitive data from leaving your network, and <a href="https://www.cloudflare.com/sase/products/casb/"><u>CASB</u></a> for SaaS security. You don’t have to plan for all of this on day one, and you won’t have to migrate when you do need it.</p>
    <div>
      <h2>New agentic workflows</h2>
      <a href="#new-agentic-workflows">
        
      </a>
    </div>
    <p>Private networking has always been about connecting clients to resources — SSH into a server, query a database, access an internal API. What's changed is who the clients are. A year ago, the answer was your developers and your services. Today, it's increasingly your agents.</p><p>This isn't theoretical. Look at the ecosystem: the explosion of <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>MCP (Model Context Protocol) servers</u></a> providing tool access, coding agents that need to read from private repos and databases, personal assistants running on home hardware. Each of these patterns assumes the agent can reach the resources it needs. When those resources are isolated in private networks, the agent is stuck. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12RTZcOBkkKwmaRe5VYn9k/e1b00401bee274a7643a4dfd4139e2df/BLOG-3215_2.png" />
          </figure><p>This creates three workflows that are hard to secure today:</p><ol><li><p>Accessing a personal agent from a mobile device. You're running OpenClaw on a Mac mini at home. You want to reach it from your phone, your laptop at a coffee shop, or your work machine. But exposing it to the public Internet (even behind a password) can leave some gaps exposed. Your agent has shell access, file system access, and network access to your home network. One misconfiguration and anyone can reach it.</p></li><li><p>Letting a coding agent access your staging environment. You're using Claude Code, Cursor, or Codex on your laptop. You ask it to check deployment status, query analytics from a staging database, or read from an internal object store. But those services live in a private cloud VPC, so your agent can't reach them without exposing them to the Internet or tunneling your entire laptop into the VPC.</p></li><li><p>Connecting deployed agents to private services. You're building agents into your product using the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> on Cloudflare Workers. Those agents need to call internal APIs, query databases, and access services that aren't on the public Internet. They need private access, but with scoped permissions, audit trails, and no credential leakage.</p></li></ol>
    <div>
      <h2>Cloudflare Mesh: one private network for users, nodes, and agents</h2>
      <a href="#cloudflare-mesh-one-private-network-for-users-nodes-and-agents">
        
      </a>
    </div>
    <p>Cloudflare Mesh is developer-friendly private networking. One lightweight connector, one binary, connects everything: your personal devices, your remote servers, your user endpoints. You don't need to install separate tools for each pattern. One connector on your network, and every access pattern works.</p><p>Once connected, devices in your private network can talk to each other over private IPs, routed through Cloudflare’s global network across 330+ cities, giving you better reliability and control over your network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ENfLb0kr1Zesh7PnuCvDK/830afd93d49b658023a9ece55e0e505b/BLOG-3215_3.gif" />
          </figure><p>Now, with Mesh, a single solution can solve all of the agent scenarios we mentioned above:</p><ul><li><p>With <b>Cloudflare One Client for iOS</b> on your phone, you can securely connect your mobile devices to your local Mac mini running OpenClaw via a Mesh private network.</p></li><li><p>With <b>Cloudflare One Client for macOS</b> on your laptop, you can connect your laptop to your private network so your coding agents can reach staging databases or APIs and query them.</p></li><li><p>With <b>Mesh nodes</b> on your Linux servers, you can connect VPCs in external clouds together, letting agents access resources and MCPs in external private networks.</p></li></ul><p>Because Mesh is powered by <a href="https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/"><u>Cloudflare One Client</u></a>, every connection inherits the security controls of the Cloudflare One platform. Gateway policies apply to Mesh traffic. Device posture checks validate connecting devices. DNS filtering catches suspicious lookups. You get this without additional configuration: the same policies that protect your human traffic protect your agent traffic.</p>
    <div>
      <h2>Choosing between Mesh and Tunnel</h2>
      <a href="#choosing-between-mesh-and-tunnel">
        
      </a>
    </div>
    <p>With the introduction of Mesh, you might ask: when should I use Mesh instead of Tunnel? Both connect external networks privately to Cloudflare, but they serve different purposes. <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/"><u>Cloudflare Tunnel</u></a> is the ideal solution for unidirectional traffic, where Cloudflare proxies the traffic from the edge to specific private services (like a web server or a database). </p><p>Cloudflare Mesh, on the other hand, provides a full bidirectional, many-to-many network. Every device and node on your Mesh can access one another using their private IPs. An application or agent running in your network can discover and access any other resource on the Mesh without each resource needing its own Tunnel. </p>
    <div>
      <h2>Using the power of Cloudflare’s network</h2>
      <a href="#using-the-power-of-cloudflares-network">
        
      </a>
    </div>
    <p>Cloudflare Mesh gives you the benefits of a mesh network (resiliency, high scalability, low latency and high performance), but, by routing everything through Cloudflare, it resolves a key challenge of mesh networks: NAT traversal.</p><p>Most of the Internet is behind NAT (Network Address Translation). This mechanism allows an entire local network of devices to share a single public IP address by mapping traffic between the shared public address and private internal addresses. When two devices are behind NAT, direct connections can fail and traffic has to fall back to relay servers. If your relay infrastructure has limited points of presence, a meaningful fraction of your traffic hits those relays, adding latency and reducing reliability. And while it is possible to self-host your own relay servers to compensate, that means taking on the burden of managing additional infrastructure just to connect your existing network.</p><p>Cloudflare Mesh takes a different approach. All Mesh traffic routes through Cloudflare's global network, the same infrastructure that serves traffic for some of the largest websites on the Internet. For cross-region or multi-cloud traffic, this consistently beats public Internet routing. There's no degraded fallback path, because the Cloudflare edge is the path.</p><p>Routing through Cloudflare also means every packet passes through Cloudflare's security stack. This is the key advantage of building Mesh on the Cloudflare One platform: security isn't a separate product you bolt on later. And by leveraging this same global backbone, we can provide these core pillars to every team from day one:</p><p><b>50 nodes and 50 users free. </b>Your whole team and your whole staging environment on one private network, included with every Cloudflare account. </p><p><b>Global edge routing.</b> 330+ cities, optimized backbone routing. No relay servers with limited points of presence. 
No degraded fallback paths.</p><p><b>Security controls from day one.</b> Mesh runs on Cloudflare One. Gateway policies, DNS filtering, DLP, traffic inspection, and device posture checks are all available on the same platform. Start with simple private connectivity. Turn on Gateway policies when you need traffic filtering. Enable Access for Infrastructure when you need session-level controls for SSH and RDP. Add DLP when you need to prevent sensitive data from leaving your network. Every capability is one toggle away.</p><p><b>High availability.</b> Create a Mesh node with high availability enabled and spin up multiple connectors using the same token in active-passive mode. They advertise the same IP routes, so if one goes down, traffic fails over automatically.</p>
    <div>
      <h2>Integrated with the Developer Platform with Workers VPC</h2>
      <a href="#integrated-with-the-developer-platform-with-workers-vpc">
        
      </a>
    </div>
    <p>Mesh connects your agents and resources across external clouds, but your agents built on Workers with the Agents SDK need to be able to connect back as well. To enable this, we’ve extended <a href="https://blog.cloudflare.com/workers-vpc-open-beta/"><u>Workers VPC</u></a> to make your entire Mesh network accessible to Workers and Durable Objects.</p><p>That means that you can connect to your Cloudflare Mesh network from Workers, making the entire network accessible from a single binding’s <code>fetch()</code> call. This complements Workers VPC’s existing support for Cloudflare Tunnel, giving you more choice over how you want to secure your networks. Now, you can specify entire networks that you want to connect to in your <code>wrangler.jsonc</code> file. To bind to your Mesh network, use the <code>cf1:network</code> reserved keyword that binds to the Mesh network of your account:</p>
            <pre><code>"vpc_networks": [
  { "binding": "MESH", "network_id": "cf1:network", "remote": true },
  { "binding": "AWS_VPC", "tunnel_id": "350fd307-...", "remote": true }
]
</code></pre>
            <p>Then, you can use it within your Worker or agent code:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    // Reach any internal host on your Mesh, no pre-registration required
    const apiResponse = await env.MESH.fetch("http://10.0.1.50/api/data");

    // Internal hostname resolved via tunnel's private DNS resolver
    const dbResponse = await env.AWS_VPC.fetch("http://internal-db.corp.local:5432");

    return new Response(await apiResponse.text());
  },
};
</code></pre>
            <p>By connecting the Developer Platform to your Mesh networks, you can build Workers that have secure access to your private databases, internal APIs and MCPs, allowing you to build cross-cloud agents and MCPs that provide agentic capabilities to your app. But it also opens up a world where agents can autonomously observe your entire stack end-to-end, cross-reference logs and suggest optimizations in real-time.</p>
    <div>
      <h2>How it all fits together</h2>
      <a href="#how-it-all-fits-together">
        
      </a>
    </div>
    <p>Together, Cloudflare Mesh, Workers VPC, and the Agents SDK provide a unified private network for your agents that spans both Cloudflare and your external clouds. We’ve merged connectivity and compute so your agents can securely reach the resources they need, wherever they live, across the globe.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tYFghWEN1H3jIwV0Kejc9/04dcaedda5692a9726a7081d19bc413c/BLOG-3215_4.png" />
          </figure><p><b>Mesh nodes</b> are your servers, VMs, and containers. They run a headless version of Cloudflare One Client and get a Mesh IP. Services talk to services over private IPs, bidirectionally, routed through Cloudflare's edge. </p><p><b>Devices</b> are your laptops and phones. They run the Cloudflare One Client and reach Mesh nodes directly: SSH, database queries, API calls, all over private IPs. Your local coding agents use this connection to access private resources. </p><p><b>Agents on Workers</b> reach private services through Workers VPC Network bindings. They get scoped access to entire networks, mediated by MCP. The network enforces what the agent can reach. The MCP server enforces what the agent can do. </p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The current version of Mesh provides the foundation for secure, unified connectivity. But as agentic workflows become more complex, we’re focused on moving beyond simple connectivity toward a network that is more intuitive to manage and more granularly aware of who, or what, is talking to your services. Here is what we are building for the rest of the year.</p>
    <div>
      <h4>Hostname routing</h4>
      <a href="#hostname-routing">
        
      </a>
    </div>
    <p>We're extending Cloudflare Tunnel's <a href="https://blog.cloudflare.com/tunnel-hostname-routing/"><u>hostname routing</u></a> to Mesh this summer. Your Mesh nodes will be able to attract traffic for private hostnames like <code>wiki.local</code> or <code>api.staging.internal</code>, without you having to manage IP lists or worry about how those hostnames resolve on the Cloudflare edge. Route traffic to services by name, not by IP. If your infrastructure uses dynamic IPs, auto-scaling groups, or ephemeral containers, this removes an entire class of routing headaches.</p>
    <div>
      <h4>Mesh DNS</h4>
      <a href="#mesh-dns">
        
      </a>
    </div>
    <p>Today, you reach Mesh nodes by their Mesh IPs: <code>ssh 100.64.0.5</code>. That works, but it's not how you think about your infrastructure. You think in names: <code>postgres-staging</code>, <code>api-prod</code>, <code>nikitas-openclaw</code>.</p><p>Later this year we're building Mesh DNS so that every node and device that joins your Mesh automatically gets a routable internal hostname. No DNS configuration or manual records. Add a node named <code>postgres-staging</code>, and <code>postgres-staging.mesh</code> resolves to the right Mesh IP from any device on your Mesh.</p><p>Combined with hostname routing, you'll be able to <code>ssh postgres-staging.mesh</code> or <code>curl http://api-prod.mesh:3000/health</code> without ever knowing or managing an IP address.</p>
    <div>
      <h4>Identity-aware routing</h4>
      <a href="#identity-aware-routing">
        
      </a>
    </div>
    <p>Today, Mesh nodes authenticate to the Cloudflare edge, but they share an identity at the network layer. Devices authenticate with user identity via the Cloudflare One Client, but nodes don't yet carry distinct, routable identities that Gateway policies can differentiate.</p><p>We want to change that. The goal is identity-aware routing for Mesh, where each node, each device, and eventually each agent gets a distinct identity that policies can evaluate. Instead of writing rules based on IP ranges, you write rules based on who or what is connecting.</p><p>This matters most for agents. Today, when an agent running on Workers calls a tool through a VPC binding, the target service sees a Worker making a request. It doesn't know which agent is calling, who authorized it, or what scope was granted. On the Mesh side, when a local coding agent on your laptop reaches a staging service, Gateway sees your device identity but not the agent's.</p><p>We're working toward a model where agents carry their own identity through the network:</p><ul><li><p>Principal / Sponsor: The human who authorized the action (Nikita from the platform team)</p></li><li><p>Agent: The AI system performing it (the deployment assistant, session #abc123)</p></li><li><p>Scope: What the agent is allowed to do (read deployments, trigger rollbacks, nothing else)</p></li></ul><p>This would let you write policies like: reads from Nikita's agents are allowed, but writes require Nikita directly. Agent traffic can be filtered independently from human traffic. An agent's network access can be revoked without touching Nikita's.</p><p>The infrastructure for this is in place. Mesh nodes provision with per-node tokens, devices authenticate with per-user identity, and Workers VPC bindings scope per-service access. The missing piece is making these identities visible to the policy layer so Gateway can make routing and access decisions based on them. That's what we're building.</p>
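<p>As a concrete (and entirely hypothetical) sketch of how such a policy might evaluate, here is a small JavaScript function implementing the example rule above: reads from Nikita's agents are allowed, but writes require Nikita directly. The identity shape (<code>principal</code>, <code>agent</code>, <code>scope</code>) and the <code>evaluate</code> helper are invented for illustration and are not a Cloudflare API:</p>

```javascript
// Hypothetical sketch of identity-aware policy evaluation.
// Identities carry a principal (the authorizing human), an optional
// agent (the AI system acting on their behalf), and a scope
// (what the agent was granted).
function evaluate(identity, action) {
  const isAgent = identity.agent !== undefined;
  if (action === "read") {
    // Reads are allowed for the principal, or for any agent they
    // sponsor whose scope includes reads.
    return identity.principal === "nikita" &&
      (!isAgent || identity.scope.includes("read"));
  }
  if (action === "write") {
    // Writes require the human directly, with no agent in the loop.
    return identity.principal === "nikita" && !isAgent;
  }
  return false;
}

// An agent session authorized by Nikita, scoped to reads only.
const agentIdentity = {
  principal: "nikita",
  agent: "deployment-assistant#abc123",
  scope: ["read"],
};

console.log(evaluate(agentIdentity, "read"));  // true
console.log(evaluate(agentIdentity, "write")); // false
console.log(evaluate({ principal: "nikita", scope: [] }, "write")); // true
```

<p>The useful property is that revoking the agent's access (or narrowing its scope) never touches the human's own permissions, which is exactly the separation described above.</p>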
    <div>
      <h4>Mesh in containers</h4>
      <a href="#mesh-in-containers">
        
      </a>
    </div>
    <p>Today, Mesh nodes run on VMs and bare-metal Linux servers. But modern infrastructure increasingly runs in containers: Kubernetes pods, Docker Compose stacks, ephemeral CI/CD runners. We're building a Mesh Docker image that lets you add a Mesh node to any containerized environment.</p><p>This means you'll be able to include a Mesh sidecar in your Docker Compose stack and give every service in that stack private network access. A microservice running in a container in your staging cluster could reach a database in your production VPC over Mesh, without either service needing a public endpoint. </p><p>It's also useful for CI/CD pipelines that need to access private infrastructure during builds and tests: your GitHub Actions runner pulls the Mesh container image, joins your network, runs integration tests against your staging environment, and tears down. All without VPN credentials to manage or persistent tunnels to maintain: the node disappears when the container exits.</p><p>We expect the Mesh Docker image to be available later this year.</p>
    <div>
      <h2>Get started</h2>
      <a href="#get-started">
        
      </a>
    </div>
    <p>While we continue to evolve these identity and routing capabilities, the foundation for secure, unified networking is available today. You can start bridging your clouds and securing your agents in just a few minutes.</p><p><b>Get started with Cloudflare Mesh</b>: Head to Networking &gt; Mesh in the <a href="https://dash.cloudflare.com/?to=/:account/mesh"><u>Cloudflare dashboard</u></a>. Free for up to 50 nodes and 50 users.</p><p><b>Build agents with the Agents SDK and Workers VPC: </b>Install the Agents SDK (<code>npm i agents</code>), follow the <a href="https://developers.cloudflare.com/workers-vpc/get-started/"><u>Workers VPC quickstart</u></a>, and build a <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP server</u></a> with private backend access.</p><p><b>Already on Cloudflare One?</b> Mesh works with your existing setup. Your Gateway policies, device posture checks, and access rules apply to Mesh traffic automatically. See <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-mesh/"><u>the Mesh documentation</u></a> to add your first node.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4KMBRDqq5wBjdLHyhM3LNW/98064f81453b6ff29d0bf4b86cfc251a/image1.png" />
          </figure>
    <div>
      <h4>Watch on Cloudflare TV</h4>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[SASE]]></category>
            <guid isPermaLink="false">iAIJH3zvDaXoiDMugrwOv</guid>
            <dc:creator>Nikita Cano</dc:creator>
            <dc:creator>Thomas Gauvin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a CLI for all of Cloudflare]]></title>
            <link>https://blog.cloudflare.com/cf-cli-local-explorer/</link>
            <pubDate>Mon, 13 Apr 2026 14:29:45 GMT</pubDate>
            <description><![CDATA[ We’re introducing cf, a new unified CLI designed for consistency across the Cloudflare platform, alongside Local Explorer for debugging local data. These tools simplify how developers and AI agents interact with our nearly 3,000 API operations.
 ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has a vast API surface. We have over 100 products, and nearly 3,000 HTTP API operations.</p><p>Increasingly, agents are the primary customer of our APIs. Developers bring their coding agents to build and deploy <a href="https://workers.cloudflare.com/solutions/frontends"><u>applications</u></a>, <a href="https://workers.cloudflare.com/solutions/ai"><u>agents</u></a>, and <a href="https://workers.cloudflare.com/solutions/platforms"><u>platforms</u></a> to Cloudflare, configure their account, and query our APIs for analytics and logs.</p><p>We want to make every Cloudflare product available in all of the ways agents need. For example, we now make Cloudflare’s entire API available in a single Code Mode MCP server that uses <a href="https://blog.cloudflare.com/code-mode-mcp/"><u>less than 1,000 tokens</u></a>. There’s a lot more surface area to cover, though: <a href="https://developers.cloudflare.com/workers/wrangler/commands/"><u>CLI commands</u></a>. <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>Workers Bindings</u></a> — including APIs for local development and testing. <a href="https://developers.cloudflare.com/fundamentals/api/reference/sdks/"><u>SDKs</u></a> across multiple languages. Our <a href="https://developers.cloudflare.com/workers/wrangler/configuration/"><u>configuration file</u></a>. <a href="https://developers.cloudflare.com/terraform/"><u>Terraform</u></a>. <a href="https://developers.cloudflare.com/"><u>Developer docs</u></a>. <a href="https://developers.cloudflare.com/api/"><u>API docs</u></a> and OpenAPI schemas. <a href="https://github.com/cloudflare/skills"><u>Agent Skills</u></a>.</p><p>Today, many of our products aren’t available across every one of these interfaces. This is particularly true of our CLI — <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>. Many Cloudflare products have no CLI commands in Wrangler. 
And agents love CLIs.</p><p>So we’ve been rebuilding the Wrangler CLI to make it the CLI for all of Cloudflare. It provides commands for all Cloudflare products, and lets you configure them together using infrastructure-as-code.</p><p>Today we’re sharing a technical preview of what the next version of Wrangler will look like. It’s very early, but we get the best feedback when we work in public.</p><p>You can try the Technical Preview today by running <code>npx cf</code>. Or you can install it globally by running <code>npm install -g cf</code>.</p><p>Right now, <code>cf</code> provides commands for just a small subset of Cloudflare products. We’re already testing a version of <code>cf</code> that supports the entirety of the Cloudflare API surface — and we will be intentionally reviewing and tuning the commands for each product, so that their output is ergonomic for both agents and humans. To be clear, this Technical Preview is just a small piece of the future Wrangler CLI. Over the coming months we will bring this together with the parts of Wrangler you know and love.</p><p>To build this in a way that stays in sync with the rapid pace of product development at Cloudflare, we had to create a new system that allows us to generate commands, configuration, binding APIs, and more.</p>
    <div>
      <h2>Rethinking schemas and our code generation pipeline from first principles</h2>
      <a href="#rethinking-schemas-and-our-code-generation-pipeline-from-first-principles">
        
      </a>
    </div>
    <p>We already generate the Cloudflare <a href="https://blog.cloudflare.com/lessons-from-building-an-automated-sdk-pipeline/"><u>API SDKs</u></a>, <a href="https://blog.cloudflare.com/automatically-generating-cloudflares-terraform-provider/"><u>Terraform provider</u></a>, and <a href="https://blog.cloudflare.com/code-mode-mcp/"><u>Code Mode MCP server</u></a> based on the OpenAPI schema for Cloudflare API. But updating our CLI, Workers Bindings, wrangler.jsonc configuration, Agent Skills, dashboard and docs is still a manual process. This was already error-prone, required too much back and forth, and wouldn’t scale to support the whole Cloudflare API in the next version of our CLI.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2wjmgUzBkjeI0RtyXKIMXm/f7fc6ce3b7323aacb3babdfb461f383f/BLOG-3224_2.png" />
          </figure><p>To do this, we needed more than could be expressed in an OpenAPI schema. OpenAPI schemas describe REST APIs, but we also have interactive CLI commands that combine local development with API requests, Workers bindings expressed as RPC APIs, and Agent Skills and documentation that tie it all together.</p><p>We write a lot of TypeScript at Cloudflare. It’s the lingua franca of software engineering. And we keep finding that it just works better to express APIs in TypeScript — as we do with <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><u>Cap’n Web</u></a>, <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a>, and the <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>RPC system</u></a> built into the Workers platform.</p><p>So we introduced a new TypeScript schema that can define the full scope of APIs, CLI commands and arguments, and context needed to generate any interface. The schema format is “just” a set of TypeScript types with conventions, linting, and guardrails to ensure consistency. But because it is our own format, it can easily be adapted to support any interface we need, today or in the future, while still <i>also</i> being able to generate an OpenAPI schema:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4H0xSIPMmrixUWsFL86RUJ/998b93a465d26d856885b4d833ac19d4/BLOG-3224_3.png" />
          </figure><p>To date most of our focus has been at this layer — building the machine we needed, so that we can now start building the CLI and other interfaces we’ve wanted for years to be able to provide. This lets us start to dream bigger about what we could standardize across Cloudflare and make better for Agents — especially when it comes to context engineering our CLI.</p>
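<p>To make the fan-out idea concrete, here is a toy sketch in JavaScript. The real schema format is internal TypeScript types we haven't seen, so every name below (the object shape, <code>toCliCommand</code>, <code>toOpenApiFragment</code>) is invented for illustration:</p>

```javascript
// Hypothetical single source-of-truth definition for one operation.
// In the real system this would be expressed as TypeScript types;
// a plain object is enough to show the idea.
const getKvNamespace = {
  resource: "kv namespace",
  verb: "get", // guardrail: always "get", never "info"
  httpMethod: "GET",
  httpPath: "/accounts/{account_id}/storage/kv/namespaces/{namespace_id}",
};

// Derive the CLI invocation shape from the schema.
function toCliCommand(op) {
  return `cf ${op.resource} ${op.verb}`;
}

// Derive a minimal OpenAPI path item from the same schema, so the
// REST API docs and the CLI can never drift apart.
function toOpenApiFragment(op) {
  return {
    [op.httpPath]: {
      [op.httpMethod.toLowerCase()]: {
        operationId: `${op.resource.replace(/ /g, "-")}-${op.verb}`,
      },
    },
  };
}

console.log(toCliCommand(getKvNamespace)); // "cf kv namespace get"
console.log(JSON.stringify(toOpenApiFragment(getKvNamespace), null, 2));
```

<p>The point is not this particular shape, but that every downstream interface (CLI, SDK, docs, OpenAPI) is generated from one definition instead of being maintained by hand.</p>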
    <div>
      <h2>Agents and CLIs — consistency and context engineering</h2>
      <a href="#agents-and-clis-consistency-and-context-engineering">
        
      </a>
    </div>
    <p>Agents expect CLIs to be consistent. If one command uses <code>&lt;command&gt; info</code> as the syntax for getting information about a resource, and another uses <code>&lt;command&gt; get</code>, the agent will expect one and call a non-existent command for the other. In a large engineering org of hundreds or thousands of people, and with many products, manually enforcing consistency through reviews is Swiss cheese. You could enforce it at the CLI layer alone, but then naming differs between the CLI, the REST API, and the SDKs, making the problem arguably worse.</p><p>One of the first things we’ve done is to start creating rules and guardrails, enforced at the schema layer. It’s always <code>get</code>, never <code>info</code>. Always <code>--force</code>, never <code>--skip-confirmations</code>. Always <code>--json</code>, never <code>--format</code>, and always supported across commands. </p><p>Wrangler CLI is also fairly unique — it provides commands and configuration that can work with both simulated local resources and remote resources, like <a href="https://developers.cloudflare.com/d1/"><u>D1 databases</u></a>, <a href="https://developers.cloudflare.com/r2"><u>R2 storage buckets</u></a>, and <a href="https://developers.cloudflare.com/kv"><u>KV namespaces</u></a>. This means consistent defaults matter even more. If the developer is using remote bindings to develop locally against a remote database, but the agent thinks it’s modifying that remote database while actually adding records to a local one, the agent won’t understand why the newly added records aren’t showing up when it makes a request to the local dev server. Consistent defaults, along with output that clearly signals whether commands are applied to remote or local resources, ensure agents have explicit guidance.</p>
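<p>Enforced at the schema layer, guardrails like these can amount to a lint pass over every declared command. The checker below is an illustrative sketch of that idea, not Cloudflare's actual implementation; the rules it encodes are the ones named above:</p>

```javascript
// Sketch of a schema-layer consistency lint over command definitions.
// Each banned verb or flag maps to the canonical spelling.
const BANNED = new Map([
  ["info", "get"],
  ["--skip-confirmations", "--force"],
  ["--format", "--json"],
]);

function lintCommand(cmd) {
  const errors = [];
  if (BANNED.has(cmd.verb)) {
    errors.push(`use "${BANNED.get(cmd.verb)}" instead of "${cmd.verb}"`);
  }
  for (const flag of cmd.flags) {
    if (BANNED.has(flag)) {
      errors.push(`use "${BANNED.get(flag)}" instead of "${flag}"`);
    }
  }
  return errors;
}

console.log(lintCommand({ verb: "info", flags: ["--format"] }));
// → two errors, one per banned name
console.log(lintCommand({ verb: "get", flags: ["--json", "--force"] }));
// → []
```

<p>Because the check runs against the shared schema rather than any one generated interface, the CLI, REST API, and SDKs all inherit the same names.</p>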
    <div>
      <h2>Local Explorer — what you can do remotely, you can now do locally</h2>
      <a href="#local-explorer-what-you-can-do-remotely-you-can-now-do-locally">
        
      </a>
    </div>
    <p>Today we are also releasing Local Explorer, a new feature available in open beta in both Wrangler and the Cloudflare Vite plugin.</p><p>Local Explorer lets you introspect the simulated resources that your Worker uses when you are developing locally, including <a href="https://www.cloudflare.com/developer-platform/products/workers-kv/"><u>KV</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a>, D1, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Workflows</u></a>. The same things you can do with each of these via the Cloudflare API and Dashboard, you can also do entirely locally, powered by the same underlying API structure.</p><p>For years we’ve <a href="https://blog.cloudflare.com/wrangler3/"><u>made a bet on fully local development</u></a> — not just for Cloudflare Workers, but for the entire platform. When you use D1, even though D1 is a hosted, serverless database product, you can run your database and communicate with it via bindings entirely locally, without any extra setup or tooling. Via <a href="https://developers.cloudflare.com/workers/testing/miniflare/"><u>Miniflare</u></a>, our local development platform emulator, the Workers runtime provides the exact same APIs in local dev as in production, and uses a local SQLite database to provide the same functionality. This makes it easy to write and run tests that run fast, don’t need network access, and work offline.</p><p>But until now, working out what data was stored locally required you to reverse-engineer the contents of the <code>.wrangler/state</code> directory or install third-party tools.</p><p>Now whenever you run an app with the Wrangler CLI or the Cloudflare Vite plugin, you will be prompted to open Local Explorer (keyboard shortcut <code>e</code>). 
This provides you with a simple, local interface to see which bindings your Worker currently has attached, and what data is stored in them.</p><p>When you build using Agents, Local Explorer is a great way to understand what the agent is doing with data, making the local development cycle much more interactive. You can turn to Local Explorer anytime you need to verify a schema, seed some test records, or just start over and <code>DROP TABLE</code>.</p><p>Our goal here is to provide a mirror of the Cloudflare API that only modifies local data, so that all of your local resources are available via the same APIs that you use remotely. And because the API shape matches across local and remote, when you run CLI commands in the upcoming version of the CLI and pass a <code>--local</code> flag, the commands just work. The only difference is that the command makes a request to this local mirror of the Cloudflare API instead.</p><p>Starting today, this API is available at <code>/cdn-cgi/explorer/api</code> on any Wrangler- or Vite-plugin-powered application. Point your agent at this address and it will find an OpenAPI specification it can use to manage your local resources for you.</p>
    <div>
      <h2>Tell us your hopes and dreams for a Cloudflare-wide CLI </h2>
      <a href="#tell-us-your-hopes-and-dreams-for-a-cloudflare-wide-cli">
        
      </a>
    </div>
    <p>Now that we have built the machine, it’s time to take the best parts of Wrangler today, combine them with what’s now possible, and make Wrangler the best CLI possible for using all of Cloudflare.</p><p>You can try the technical preview today by running <code>npx cf</code>. Or you can install it globally by running <code>npm install -g cf</code>.</p><p>With this very early version, we want your feedback — not just about what the technical preview does today, but what you want from a CLI for Cloudflare’s entire platform. Tell us what you wish was an easy one-line CLI command but takes a few clicks in our dashboard today. What you wish you could configure in <code>wrangler.jsonc</code> — like DNS records or Cache Rules. And where you’ve seen your agents get stuck, and what commands you wish our CLI provided for your agent to use.</p><p>Jump into the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers Discord</u></a> and tell us what you’d like us to add first to the CLI, and stay tuned for many more updates soon.</p><p><i>Thanks to Emily Shen for her valuable contributions to kicking off the Local Explorer project.</i></p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Agents Week]]></category>
            <guid isPermaLink="false">5r3Nx1IDtp6B6GRDHqQyWL</guid>
            <dc:creator>Matt “TK” Taylor</dc:creator>
            <dc:creator>Dimitri Mitropoulos</dc:creator>
            <dc:creator>Dan Carter</dc:creator>
        </item>
        <item>
            <title><![CDATA[Durable Objects in Dynamic Workers: Give each AI-generated app its own database]]></title>
            <link>https://blog.cloudflare.com/durable-object-facets-dynamic-workers/</link>
            <pubDate>Mon, 13 Apr 2026 13:08:35 GMT</pubDate>
            <description><![CDATA[ We’re introducing Durable Object Facets, allowing Dynamic Workers to instantiate Durable Objects with their own isolated SQLite databases. This enables developers to build platforms that run persistent, stateful code generated on-the-fly.
 ]]></description>
            <content:encoded><![CDATA[ <p>A few weeks ago, we announced <a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a>, a new feature of the Workers platform which lets you load Worker code on-the-fly into a secure sandbox. The Dynamic Worker Loader API essentially provides direct access to the basic compute isolation primitive that Workers has been based on all along: isolates, not containers. Isolates are much lighter-weight than containers, and as such, can load 100x faster using 1/10 the memory. They are so efficient, they can be treated as "disposable": start one up to run a few lines of code, then throw it away. Like a secure version of <code>eval()</code>. </p><p>Dynamic Workers have many uses. In the original announcement, we focused on how to use them to run AI-agent-generated code as an alternative to tool calls. In this use case, an AI agent performs actions at the request of a user by writing a few lines of code and executing them. The code is single-use, intended to perform one task one time, and is thrown away immediately after it executes.</p><p>But what if you want an AI to generate more persistent code? What if you want your AI to build a small application with a custom UI the user can interact with? What if you want that application to have long-lived state? But of course, you still want it to run in a secure sandbox.</p><p>One way to do this would be to use Dynamic Workers, and simply provide the Worker with an <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>RPC</u></a> API that gives it access to storage. 
Using <a href="https://developers.cloudflare.com/dynamic-workers/usage/bindings/"><u>bindings</u></a>, you could give the Dynamic Worker an API that points back to your remote SQL database (perhaps backed by <a href="https://developers.cloudflare.com/d1/"><u>Cloudflare D1</u></a>, or a Postgres database you access through <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> — it's up to you).</p><p>But Workers also has a unique and extremely fast type of storage that may be a perfect fit for this use case: <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>. A Durable Object is a special kind of Worker that has a unique name, with one instance globally per name. That instance has a SQLite database attached, which lives <i>on local disk</i> on the machine where the Durable Object runs. This makes storage access ridiculously fast: there is effectively <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>zero latency</u></a>.</p><p>Perhaps, then, what you really want is for your AI to write code for a Durable Object, and then you want to run that code in a Dynamic Worker.</p>
    <div>
      <h2><b>But how?</b></h2>
      <a href="#but-how">
        
      </a>
    </div>
    <p>This presents a weird problem. Normally, to use Durable Objects you have to:</p><ol><li><p>Write a class extending <code>DurableObject</code>.</p></li><li><p>Export it from your Worker's main module.</p></li><li><p><a href="https://developers.cloudflare.com/durable-objects/get-started/#5-configure-durable-object-class-with-sqlite-storage-backend"><u>Specify in your Wrangler config</u></a> that storage should be provisioned for this class. This creates a Durable Object namespace that points at your class for handling incoming requests.</p></li><li><p><a href="https://developers.cloudflare.com/durable-objects/get-started/#4-configure-durable-object-bindings"><u>Declare a Durable Object namespace binding</u></a> pointing at your namespace (or use <a href="https://developers.cloudflare.com/workers/runtime-apis/context/#exports"><u>ctx.exports</u></a>), and use it to make requests to your Durable Object.</p></li></ol><p>This doesn't extend naturally to Dynamic Workers. First, there is the obvious problem: The code is dynamic. You run it without invoking the Cloudflare API at all. But Durable Object storage has to be provisioned through the API, and the namespace has to point at an implementing class. It can't point at your Dynamic Worker.</p><p>But there is a deeper problem: Even if you could somehow configure a Durable Object namespace to point directly at a Dynamic Worker, would you want to? Do you want your agent (or user) to be able to create a whole namespace full of Durable Objects? To use unlimited storage spread around the world?</p><p>You probably don't. You probably want some control. You may want to limit, or at least track, how many objects they create. Maybe you want to limit them to just one object (probably good enough for vibe-coded personal apps). You may want to add logging and other observability. Metrics. Billing. 
Etc.</p><p>To do all this, what you really want is for requests to these Durable Objects to go to <i>your</i> code <i>first</i>, where you can then do all the "logistics", and <i>then</i> forward the request into the agent's code. You want to write a <i>supervisor</i> that runs as part of every Durable Object.</p>
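<p>Stripped of the Durable Object plumbing, the supervisor pattern is just a wrapper that runs your logistics first and then delegates to the untrusted code. The sketch below is illustrative only; the request and handler shapes, <code>makeSupervisor</code>, and the rate limit are all invented:</p>

```javascript
// Minimal sketch of the supervisor pattern: your code sees every
// request first, does the "logistics" (limits, logging, metrics,
// billing), and only then forwards into the agent's handler.
function makeSupervisor(appHandler, { maxRequests, log }) {
  let count = 0;
  return function handle(request) {
    if (count >= maxRequests) {
      return { status: 429, body: "limit exceeded" };
    }
    count++;
    log(`request ${count}: ${request.path}`);
    return appHandler(request); // only now does the agent's code run
  };
}

// A stand-in for the untrusted, AI-generated handler.
const app = (request) => ({ status: 200, body: `hello from ${request.path}` });

const logs = [];
const supervised = makeSupervisor(app, {
  maxRequests: 2,
  log: (line) => logs.push(line),
});

console.log(supervised({ path: "/a" }).status); // 200
console.log(supervised({ path: "/b" }).status); // 200
console.log(supervised({ path: "/c" }).status); // 429
```

<p>Facets, described next, give this wrapper a real home: the supervisor is your Durable Object class, and the agent's code runs as a child with its own isolated storage.</p>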
    <div>
      <h2><b>Solution: Durable Object Facets</b></h2>
      <a href="#solution-durable-object-facets">
        
      </a>
    </div>
    <p>Today we are releasing, in open beta, a feature that solves this problem.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/mUUk7svflWvIp5Ff3npbG/cd2ec9a7111681657c37e3560fd9af58/BLOG-3211_2.png" />
          </figure><p><a href="https://developers.cloudflare.com/dynamic-workers/usage/durable-object-facets/"><u>Durable Object Facets</u></a> allow you to load and instantiate a Durable Object class dynamically, while providing it with a SQLite database to use for storage. With Facets:</p><ul><li><p>First you create a normal Durable Object namespace, pointing to a class <i>you</i> write.</p></li><li><p>In that class, you load the agent's code as a Dynamic Worker, and call into it.</p></li><li><p>The Dynamic Worker's code can implement a Durable Object class directly. That is, it literally exports a class declared as <code>extends DurableObject</code>.</p></li><li><p>You are instantiating that class as a "facet" of your own Durable Object.</p></li><li><p>The facet gets its own SQLite database, which it can use via the normal Durable Object storage APIs. This database is separate from the supervisor's database, but the two are stored together as part of the same overall Durable Object.</p></li></ul>
    <div>
      <h2><b>How it works</b></h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Here is a simple, complete implementation of an app platform that dynamically loads and runs a Durable Object class:</p>
            <pre><code>import { DurableObject } from "cloudflare:workers";

// For the purpose of this example, we'll use this static
// application code, but in the real world this might be generated
// by AI (or even, perhaps, a human user).
const AGENT_CODE = `
  import { DurableObject } from "cloudflare:workers";

  // Simple app that remembers how many times it has been invoked
  // and returns it.
  export class App extends DurableObject {
    fetch(request) {
      // We use storage.kv here for simplicity, but storage.sql is
      // also available. Both are backed by SQLite.
      let counter = this.ctx.storage.kv.get("counter") || 0;
      ++counter;
      this.ctx.storage.kv.put("counter", counter);

      return new Response("You've made " + counter + " requests.\\n");
    }
  }
`;

// AppRunner is a Durable Object you write that is responsible for
// dynamically loading applications and delivering requests to them.
// Each instance of AppRunner contains a different app.
export class AppRunner extends DurableObject {
  async fetch(request) {
    // We've received an HTTP request, which we want to forward into
    // the app.

    // The app itself runs as a child facet named "app". One Durable
    // Object can have any number of facets (subject to storage limits)
    // with different names, but in this case we have only one. Call
    // this.ctx.facets.get() to get a stub pointing to it.
    let facet = this.ctx.facets.get("app", async () =&gt; {
      // If this callback is called, it means the facet hasn't
      // started yet (or has hibernated). In this callback, we can
      // tell the system what code we want it to load.

      // Load the Dynamic Worker.
      let worker = this.#loadDynamicWorker();

      // Get the exported class we're interested in.
      let appClass = worker.getDurableObjectClass("App");

      return { class: appClass };
    });

    // Forward request to the facet.
    // (Alternatively, you could call RPC methods here.)
    return await facet.fetch(request);
  }

  // RPC method that a client can call to set the dynamic code
  // for this app.
  setCode(code) {
    // Store the code in the AppRunner's SQLite storage.
    // Each unique code must have a unique ID to pass to the
    // Dynamic Worker Loader API, so we generate one randomly.
    this.ctx.storage.kv.put("codeId", crypto.randomUUID());
    this.ctx.storage.kv.put("code", code);
  }

  #loadDynamicWorker() {
    // Use the Dynamic Worker Loader API like normal. Use get()
    // rather than load() since we may load the same Worker many
    // times.
    let codeId = this.ctx.storage.kv.get("codeId");
    return this.env.LOADER.get(codeId, async () =&gt; {
      // This Worker hasn't been loaded yet. Load its code from
      // our own storage.
      let code = this.ctx.storage.kv.get("code");

      return {
        compatibilityDate: "2026-04-01",
        mainModule: "worker.js",
        modules: { "worker.js": code },
        globalOutbound: null,  // block network access
      }
    });
  }
}

// This is a simple Workers HTTP handler that uses AppRunner.
export default {
  async fetch(req, env, ctx) {
    // Get the instance of AppRunner named "my-app".
    // (Each name has exactly one Durable Object instance in the
    // world.)
    let obj = ctx.exports.AppRunner.getByName("my-app");

    // Initialize it with code. (In a real use case, you'd only
    // want to call this once, not on every request.)
    await obj.setCode(AGENT_CODE);

    // Forward the request to it.
    return await obj.fetch(req);
  }
}
</code></pre>
            <p>In this example:</p><ul><li><p><code>AppRunner</code> is a "normal" Durable Object written by the platform developer (you).</p></li><li><p>Each instance of <code>AppRunner</code> manages one application. It stores the app code and loads it on demand.</p></li><li><p>The application itself implements and exports a Durable Object class, which the platform expects to be named <code>App</code>.</p></li><li><p><code>AppRunner</code> loads the application code using Dynamic Workers, and then executes the code as a Durable Object Facet.</p></li><li><p>Each instance of <code>AppRunner</code> is one Durable Object composed of <i>two</i> SQLite databases: one belonging to the parent (<code>AppRunner</code> itself) and one belonging to the facet (<code>App</code>). These databases are isolated: the application cannot read <code>AppRunner</code>'s database, only its own.</p></li></ul><p>To run the example, copy the code above into a file <code>worker.js</code>, pair it with the following <code>wrangler.jsonc</code>, and run it locally with <code>npx wrangler dev</code>.</p>
            <pre><code>// wrangler.jsonc for the above sample worker.
{
  "compatibility_date": "2026-04-01",
  "main": "worker.js",
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": [
        "AppRunner"
      ]
    }
  ],
  "worker_loaders": [
    {
      "binding": "LOADER",
    },
  ],
}
</code></pre>
            
    <div>
      <h2><b>Start building</b></h2>
      <a href="#start-building">
        
      </a>
    </div>
    <p>Facets are a feature of Dynamic Workers, available in beta immediately to users on the Workers Paid plan.</p><p>Check out the documentation to learn more about <a href="https://developers.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a> and <a href="https://developers.cloudflare.com/dynamic-workers/usage/durable-object-facets/"><u>Facets</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Storage]]></category>
            <guid isPermaLink="false">2OYAJUdGLODlCXKKdCZMeG</guid>
            <dc:creator>Kenton Varda</dc:creator>
        </item>
        <item>
            <title><![CDATA[Agents have their own computers with Sandboxes GA]]></title>
            <link>https://blog.cloudflare.com/sandbox-ga/</link>
            <pubDate>Mon, 13 Apr 2026 13:08:35 GMT</pubDate>
            <description><![CDATA[ Cloudflare Sandboxes give AI agents a persistent, isolated environment: a real computer with a shell, a filesystem, and background processes that starts on demand and picks up exactly where it left off. ]]></description>
            <content:encoded><![CDATA[ <p>When we launched <a href="https://github.com/cloudflare/sandbox-sdk"><u>Cloudflare Sandboxes</u></a> last June, the premise was simple: <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agents</u></a> need to develop and run code, and they need to do it somewhere safe.</p><p>If an agent is acting like a developer, this means cloning repositories, building code in many languages, running development servers, etc. To do these things effectively, they will often need a full computer (and if they don’t, they can <a href="https://blog.cloudflare.com/dynamic-workers/"><u>reach for something lightweight</u></a>!).</p><p>Many developers are stitching together solutions using VMs or existing container solutions, but there are lots of hard problems to solve:</p><ul><li><p><b>Burstiness -</b> With each session needing its own sandbox, you often need to spin up many sandboxes quickly, but you don’t want to pay for idle compute on standby.</p></li><li><p><b>Quick state restoration</b> - Each session should start quickly and re-start quickly, resuming past state.</p></li><li><p><b>Security</b> - Agents need to access services securely, but can’t be trusted with credentials.</p></li><li><p><b>Control</b> - It needs to be simple to programmatically control sandbox lifecycle, execute commands, handle files, and more.</p></li><li><p><b>Ergonomics</b> - You need to give a simple interface for both humans and agents to do common operations.</p></li></ul><p>We’ve spent time solving these issues so you don’t have to. Since our initial launch we’ve made Sandboxes an even better place to run agents at scale. We’ve worked with our initial partners such as Figma, who run agents in containers with <a href="https://www.figma.com/make/"><u>Figma Make</u></a>:</p><blockquote><p><i>“Figma Make is built to help builders and makers of all backgrounds go from idea to production, faster. 
To deliver on that goal, we needed an infrastructure solution that could provide reliable, highly-scalable sandboxes where we could run untrusted agent- and user-authored code. Cloudflare Containers is that solution.”</i></p><p><i>- </i><b><i>Alex Mullans</i></b><i>, AI and Developer Platforms at Figma</i></p></blockquote><p>We want to bring Sandboxes to even more great organizations, so today we are excited to announce that <b>Sandboxes and Cloudflare Containers are both generally available.</b></p><p>Let’s take a look at some of the recent changes to Sandboxes:</p><ul><li><p><b>Secure credential injection </b>lets you make authenticated calls without the agent ever having credential access  </p></li><li><p><b>PTY support</b> gives you and your agent a real terminal</p></li><li><p><b>Persistent code interpreters</b> give your agent a place to execute stateful Python, JavaScript, and TypeScript out of the box</p></li><li><p><b>Background processes and live preview URLs</b> provide a simple way to interact with development servers and verify in-flight changes</p></li><li><p><b>Filesystem watching</b> improves iteration speed as agents make changes</p></li><li><p><b>Snapshots</b> let you quickly recover an agent's coding session</p></li><li><p><b>Higher limits and Active CPU Pricing</b> let you deploy a fleet of agents at scale without paying for unused CPU cycles </p></li></ul>
    <div>
      <h2>Sandboxes 101</h2>
      <a href="#sandboxes-101">
        
      </a>
    </div>
    <p>Before getting into some of the recent changes, let’s quickly look at the basics.</p><p>A Cloudflare Sandbox is a persistent, isolated environment powered by <a href="https://blog.cloudflare.com/containers-are-available-in-public-beta-for-simple-global-and-programmable/"><u>Cloudflare Containers</u></a>. You ask for a sandbox by name. If it's running, you get it. If it's not, it starts. When it's idle, it sleeps automatically and wakes when it receives a request. It’s easy to programmatically interact with the sandbox using methods like <code>exec</code>, <code>gitClone</code>, <code>writeFile</code> and <a href="https://developers.cloudflare.com/sandbox/api/"><u>more</u></a>.</p>
            <pre><code>import { getSandbox } from "@cloudflare/sandbox";
export { Sandbox } from "@cloudflare/sandbox";

export default {
  async fetch(request: Request, env: Env) {
    // Ask for a sandbox by name. It starts on demand.
    const sandbox = getSandbox(env.Sandbox, "agent-session-47");

    // Clone a repository into it.
    await sandbox.gitCheckout("https://github.com/org/repo", {
      targetDir: "/workspace",
      depth: 1,
    });

    // Run the test suite. Stream output back in real time.
    return sandbox.exec("npm", ["test"], { stream: true });
  },
};
</code></pre>
            <p>As long as you provide the same ID, subsequent requests can get to this same sandbox from anywhere in the world.</p>
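<p>Since routing is purely name-based, the same-sandbox guarantee comes down to using a deterministic name. A small sketch of that pattern (the <code>sandboxNameFor</code> helper is ours, not part of the SDK):</p>

```javascript
// Illustrative helper (not an SDK API): derive a stable sandbox name
// from user and session identifiers. getSandbox routes purely by name,
// so any deterministic mapping guarantees the same caller always
// reaches the same sandbox instance.
function sandboxNameFor(userId, sessionId) {
  return `agent-${userId}-session-${sessionId}`;
}

// Hypothetical usage inside the fetch handler above:
//   const sandbox = getSandbox(env.Sandbox, sandboxNameFor("alice", "47"));

console.log(sandboxNameFor("alice", "47")); // agent-alice-session-47
```

<p>Anything stable works as the name source: a user ID, a ticket number, a Git branch. The only requirement is that the mapping be deterministic.</p>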
    <div>
      <h2>Secure credential injection</h2>
      <a href="#secure-credential-injection">
        
      </a>
    </div>
    <p>One of the hardest problems in agentic workloads is authentication. You often need agents to access private services, but you can't fully trust them with raw credentials. </p><p>Sandboxes solve this by injecting credentials at the network layer using a programmable egress proxy. This means that sandbox agents never have access to credentials and you can fully customize auth logic as you see fit:</p>
            <pre><code>class OpenCodeInABox extends Sandbox {
  static outboundByHost = {
    "my-internal-vcs.dev": (request, env, ctx) =&gt; {
      const headersWithAuth = new Headers(request.headers);
      headersWithAuth.set("x-auth-token", env.SECRET);
      return fetch(request, { headers: headersWithAuth });
    }
  }
}
</code></pre>
            <p>For a deep dive into how this works — including identity-aware credential injection, dynamically modifying rules, and integrating with <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Workers bindings</u></a> — read our recent blog post on <a href="https://blog.cloudflare.com/sandbox-auth"><u>Sandbox auth</u></a>.</p>
    <div>
      <h2>A real terminal, not a simulation</h2>
      <a href="#a-real-terminal-not-a-simulation">
        
      </a>
    </div>
    <p>Early agent systems often modeled shell access as a request-response loop: run a command, wait for output, stuff the transcript back into the prompt, repeat. It works, but it is not how developers actually use a terminal. </p><p>Humans run something, watch output stream in, interrupt it, reconnect later, and keep going. Agents benefit from that same feedback loop.</p><p>In February, we shipped PTY support: a pseudo-terminal session in a Sandbox, proxied over WebSocket and compatible with <a href="https://xtermjs.org/"><u>xterm.js</u></a>.</p><p>Just call <code>sandbox.terminal</code> to serve the backend:</p>
            <pre><code>// Worker: upgrade a WebSocket connection into a live terminal session
export default {
  async fetch(request: Request, env: Env) {
    const url = new URL(request.url);
    if (url.pathname === "/terminal") {
      const sandbox = getSandbox(env.Sandbox, "my-session");
      return sandbox.terminal(request, { cols: 80, rows: 24 });
    }
    return new Response("Not found", { status: 404 });
  },
};

</code></pre>
            <p>And use the <code>SandboxAddon</code> for xterm.js to connect from the client:</p>
            <pre><code>// Browser: connect xterm.js to the sandbox shell
import { Terminal } from "xterm";
import { SandboxAddon } from "@cloudflare/sandbox/xterm";

const term = new Terminal();
const addon = new SandboxAddon({
  getWebSocketUrl: ({ origin }) =&gt; `${origin}/terminal`,
});

term.loadAddon(addon);
term.open(document.getElementById("terminal-container")!);
addon.connect({ sandboxId: "my-session" });
</code></pre>
            <p>This gives agents and developers a full PTY for debugging sessions live.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bgyxh8kg3MPfij2v1XXLE/9cff50318ad306b20c3346c3bd3554d9/BLOG-3264_2.gif" />
          </figure><p>Each terminal session gets its own isolated shell, its own working directory, its own environment. Open as many as you need, just like you would on your own machine. Output is buffered server-side, so reconnecting replays what you missed.</p>
    <div>
      <h2>A code interpreter that remembers</h2>
      <a href="#a-code-interpreter-that-remembers">
        
      </a>
    </div>
    <p>For data analysis, scripting, and exploratory workflows, we also ship a higher-level abstraction: a persistent code execution context.</p><p>The key word is “persistent.” Many code interpreter implementations run each snippet in isolation, so state disappears between calls. You can't set a variable in one step and read it in the next.</p><p>Sandboxes allow you to create “contexts” that persist state. Variables and imports persist across calls the same way they would in a Jupyter notebook:</p>
            <pre><code>// Create a Python context. State persists for its lifetime.
const ctx = await sandbox.createCodeContext({ language: "python" });

// First execution: load data
await sandbox.runCode(`
  import pandas as pd
  df = pd.read_csv('/workspace/sales.csv')
  df['margin'] = (df['revenue'] - df['cost']) / df['revenue']
`, { context: ctx });

// Second execution: df is still there
const result = await sandbox.runCode(`
  df.groupby('region')['margin'].mean().sort_values(ascending=False)
`, { context: ctx, onStdout: (line) =&gt; console.log(line.text) });

// result contains matplotlib charts, structured json output, and Pandas tables in HTML
</code></pre>
            
    <div>
      <h2>Start a server. Get a URL. Ship it.</h2>
      <a href="#start-a-server-get-a-url-ship-it">
        
      </a>
    </div>
    <p>Agents are more useful when they can build something and show it to the user immediately. Sandboxes support background processes, readiness checks, and <a href="https://developers.cloudflare.com/sandbox/concepts/preview-urls/"><u>preview URLs</u></a>. This lets an agent start a development server and share a live link without leaving the conversation.</p>
            <pre><code>// Start a dev server as a background process
const server = await sandbox.startProcess("npm run dev", {
  cwd: "/workspace",
});

// Wait until the server is actually ready — don't just sleep and hope
await server.waitForLog(/Local:.*localhost:(\d+)/);

// Expose the running service with a public URL
const { url } = await sandbox.exposePort(3000);

// url is a live public URL the agent can share with the user
console.log(`Preview: ${url}`);
</code></pre>
            <p>With <code>waitForPort()</code> and <code>waitForLog()</code>, agents can sequence work based on real signals from the running program instead of guesswork. This is much nicer than a common alternative, which is usually some version of <code>sleep(2000)</code> followed by hope.</p>
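<p>The capture group in the readiness regex above can also tell you which port the dev server actually picked before exposing it. A small sketch (the <code>portFromLog</code> helper is illustrative, and we assume <code>waitForLog</code> resolves with the matching log line):</p>

```javascript
// Illustrative helper (not an SDK API): pull the port number out of a
// dev server's startup line, e.g. "Local: http://localhost:5173/".
// Returns null when the line doesn't match.
function portFromLog(line) {
  const match = /Local:.*localhost:(\d+)/.exec(line);
  return match ? Number(match[1]) : null;
}

// Hypothetical follow-up to the snippet above, assuming waitForLog
// resolves with the matching line:
//   const line = await server.waitForLog(/Local:.*localhost:(\d+)/);
//   const { url } = await sandbox.exposePort(portFromLog(line));

console.log(portFromLog("  Local:   http://localhost:5173/")); // 5173
```

<p>This matters for tools like Vite that fall back to another port when their default is taken: exposing a hardcoded port would miss the server entirely.</p>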
    <div>
      <h2>Watch the file system and react immediately</h2>
      <a href="#watch-the-file-system-and-react-immediately">
        
      </a>
    </div>
    <p>Modern development loops are event-driven. Save a file, rerun the build. Edit a config, restart the server. Change a test, rerun the suite.</p><p>We shipped <code>sandbox.watch()</code> in March. It returns an SSE stream backed by native <a href="https://man7.org/linux/man-pages/man7/inotify.7.html"><u>inotify</u></a>, the kernel mechanism Linux uses for filesystem events.</p>
            <pre><code>import { parseSSEStream, type FileWatchSSEEvent } from '@cloudflare/sandbox';

const stream = await sandbox.watch('/workspace/src', {
  recursive: true,
  include: ['*.ts', '*.tsx']
});

for await (const event of parseSSEStream&lt;FileWatchSSEEvent&gt;(stream)) {
  if (event.type === 'modify' &amp;&amp; event.path.endsWith('.ts')) {
    await sandbox.exec('npx tsc --noEmit', { cwd: '/workspace' });
  }
}
</code></pre>
            <p>This is one of those primitives that quietly changes what agents can do. An agent that can observe the filesystem in real time can participate in the same feedback loops as a human developer.</p>
    <div>
      <h2>Waking up quickly with snapshots</h2>
      <a href="#waking-up-quickly-with-snapshots">
        
      </a>
    </div>
    <p>Imagine a (human) developer working on their laptop. They <code>git clone</code> a repo, run <code>npm install</code>, write code, push a PR, then close their laptop while waiting for code review. When it’s time to resume work, they just re-open the laptop and continue where they left off.</p><p>If an agent wants to replicate this workflow on a naive container platform, you run into a snag. How do you resume where you left off quickly? You could keep a sandbox running, but then you pay for idle compute. You could start fresh from the container image, but then you have to wait for a long <code>git clone</code> and <code>npm install</code>.</p><p>Our answer is snapshots, which will be rolling out in the coming weeks.</p><p>A snapshot preserves a container's full disk state, OS config, installed dependencies, modified files, data files and more. Then it lets you quickly restore it later.</p><p>You can configure a Sandbox to automatically snapshot when it goes to sleep.</p>
            <pre><code>class AgentDevEnvironment extends Sandbox {
  sleepAfter = "5m";
  persistAcrossSessions = {type: "disk"}; // you can also specify individual directories
}
</code></pre>
            <p>You can also programmatically take a snapshot and manually restore it. This is useful for checkpointing work or forking sessions. For instance, if you wanted to run four instances of an agent in parallel, you could easily boot four sandboxes from the same state.</p>
            <pre><code>class AgentDevEnvironment extends Sandbox {}

async function forkDevEnvironment(env, baseId, numberOfForks) {
  const baseInstance = getSandbox(env.Sandbox, baseId);
  const snapshotId = await baseInstance.snapshot();

  const forks = Array.from({ length: numberOfForks }, (_, i) =&gt; {
    const newInstance = getSandbox(env.Sandbox, `${baseId}-fork-${i}`);
    return newInstance.start({ snapshot: snapshotId });
  });

  await Promise.all(forks);
}
</code></pre>
            <p>Snapshots are stored in <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> within your account, giving you durability and location independence. R2's <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/"><u>tiered caching</u></a> system allows for fast restores across all of Region: Earth.</p><p>In future releases, live memory state will also be captured, allowing running processes to resume exactly where they left off. A terminal and an editor will reopen in the exact state they were in when last closed.</p><p>If you are interested in restoring session state before snapshots go live, you can use the <a href="https://developers.cloudflare.com/sandbox/guides/backup-restore/"><code><u>backup and restore</u></code></a> methods today. These also persist and restore directories using R2, but are not as performant as true VM-level snapshots, though they can still deliver considerable speed improvements over naively recreating session state.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/LzVucBiNvxOh3NFn0ukxj/3b8e6cd9a5ca241b6c6a7c8556c0a529/BLOG-3264_3.gif" />
          </figure><p><sup><i>Booting a sandbox, cloning ‘axios’, and running npm install takes 30 seconds. Restoring from a backup takes two seconds.</i></sup></p><p>Stay tuned for the official snapshot release.</p>
    <div>
      <h2>Higher limits and Active CPU Pricing</h2>
      <a href="#higher-limits-and-active-cpu-pricing">
        
      </a>
    </div>
    <p>Since our initial launch, we’ve been steadily increasing capacity. Users on our standard pricing plan can now run 15,000 concurrent instances of the lite instance type, 6,000 instances of basic, and over 1,000 concurrent larger instances. <a href="https://forms.gle/3vvDvXPECjy6F8v56"><u>Reach out</u></a> to run even more!</p><p>We also changed our pricing model to be more cost-effective at scale. Sandboxes now <a href="https://developers.cloudflare.com/changelog/post/2025-11-21-new-cpu-pricing/"><u>only charge for actively used CPU cycles</u></a>. This means that you aren’t paying for idle CPU while your agent is waiting for an LLM to respond.</p>
    <div>
      <h2>This is what a computer looks like </h2>
      <a href="#this-is-what-a-computer-looks-like">
        
      </a>
    </div>
    <p>Nine months ago, we shipped a sandbox that could run commands and access a filesystem. That was enough to prove the concept.</p><p>What we have now is different in kind. A Sandbox today is a full development environment: a terminal you can connect a browser to, a code interpreter with persistent state, background processes with live preview URLs, a filesystem that emits change events in real time, egress proxies for secure credential injection, and a snapshot mechanism that makes warm starts nearly instant. </p><p>When you build on this, a satisfying pattern emerges: agents that do real engineering work. Clone a repo, install it, run the tests, read the failures, edit the code, run the tests again. The kind of tight feedback loop that makes a human engineer effective — now the agent gets it too.</p><p>We're at version 0.8.9 of the SDK. You can get started today:</p><p><code>npm i @cloudflare/sandbox@latest</code></p><div>
  
</div>
<p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Containers]]></category>
            <category><![CDATA[Sandbox]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">7jXMXMjQUIpjGzJdPadO4a</guid>
            <dc:creator>Kate Reznykova</dc:creator>
            <dc:creator>Mike Nomitch</dc:creator>
            <dc:creator>Naresh Ramesh</dc:creator>
        </item>
        <item>
            <title><![CDATA[Dynamic, identity-aware, and secure Sandbox auth]]></title>
            <link>https://blog.cloudflare.com/sandbox-auth/</link>
            <pubDate>Mon, 13 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Outbound Workers for Sandboxes provide a programmable, zero-trust egress proxy for AI agents. This allows developers to inject credentials and enforce dynamic security policies without exposing sensitive tokens to untrusted code.
 ]]></description>
            <content:encoded><![CDATA[ <p>As <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>AI Large Language Models</u></a> and harnesses like OpenCode and Claude Code become increasingly capable, we see more users kicking off sandboxed agents in response to chat messages, Kanban updates, <a href="https://www.cloudflare.com/learning/ai/ai-vibe-coding/"><u>vibe coding</u></a> UIs, terminal sessions, GitHub comments, and more.</p><p>The sandbox is an important step beyond simple containers, because it gives you a few things:</p><ul><li><p><b>Security</b>: Any untrusted end user (or a rogue LLM) can run in the sandbox and not compromise the host machine or other sandboxes running alongside it. This is traditionally (<a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>but not always</u></a>) accomplished with a microVM.</p></li><li><p><b>Speed</b>: An end user should be able to pick up a new sandbox quickly <i>and</i> restore the state from a previously used one quickly.</p></li><li><p><b>Control</b>: The <i>trusted</i> platform needs to be able to take actions within the <i>untrusted</i> domain of the sandbox. This might mean mounting files in the sandbox, or controlling which requests access it, or executing specific commands.</p></li></ul><p>Today, we’re excited to add another key component of control to our <a href="https://developers.cloudflare.com/sandbox/"><u>Sandboxes</u></a> and all <a href="https://developers.cloudflare.com/containers/"><u>Containers</u></a>: outbound Workers. These are programmatic egress proxies that allow users running sandboxes to easily connect to different services, add <a href="https://www.cloudflare.com/learning/performance/what-is-observability/"><u>observability</u></a>, and, importantly for agents, add flexible and safe authentication.</p>
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Here’s a quick look at adding a secret key to a header using an outbound Worker:</p>
            <pre><code>class OpenCodeInABox extends Sandbox {
  static outboundByHost = {
    "github.com": (request, env, ctx) =&gt; {
      const headersWithAuth = new Headers(request.headers);
      headersWithAuth.set("x-auth-token", env.SECRET);
      return fetch(request, { headers: headersWithAuth });
    }
  }
}
</code></pre>
            <p>Any time code running in the sandbox makes a request to <code>“github.com”</code>, the request is proxied through the handler. This allows you to do anything you want on each request, including logging, modifying, or cancelling it. In this case, we’re safely injecting a secret (more on this later). The proxy runs on the same machine as any sandbox, has access to distributed state, and can be easily modified with simple JavaScript.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RSbCJMOoqz8xLWMnPyijV/520addf1676d195be6258b427881cdde/BLOG-3199_2.png" />
          </figure><p>We’re excited about all the possibilities this adds to Sandboxes, especially around authentication for agents. Before going into details, let’s back up and take a quick tour of traditional forms of auth, and why we think there’s something better.</p>
    <div>
      <h2>Common auth for agentic workloads</h2>
      <a href="#common-auth-for-agentic-workloads">
        
      </a>
    </div>
    <p>The core issue with agentic auth is that we can’t fully trust the workload. While our LLMs aren’t nefarious (at least not yet), we still need to be able to apply protections to ensure they don’t use data inappropriately or take actions they shouldn’t.</p><p>A few common methods exist to provide auth to agents, and each has downsides:</p><p><b>Standard API tokens</b> are the most basic method of authentication, typically injected into applications via environment variables or in mounted secrets files. This is arguably the simplest method, but the least secure. You have to trust that the sandbox won’t somehow be compromised or accidentally exfiltrate the token while making a request. Since you can’t fully trust the agent, you’ll need to set up token expiry and rotation, which can be a hassle.</p><p><b>Workload identity tokens</b>, such as OIDC tokens, can solve some of this pain. Rather than granting the agent a token with general permissions, you can grant it a token that attests its identity. Now, rather than the agent having direct access to some service with a token, it can exchange an identity token for a very short-lived access token. The OIDC token can be invalidated after a specific agent’s workflow completes, and expiry is easier to manage. One of the biggest downsides of workload identity tokens is the potential inflexibility of integrations. Many services don’t have first-class support for OIDC, so in order to get working integrations with upstream services, platforms will need to roll their own token-exchanging services. This makes adoption difficult.</p><p><b>Custom proxies</b> provide maximum flexibility, and can be paired with workload identity tokens. If you can pass some or all of your sandbox egress through a trusted piece of code, you can insert whatever rules you need. Maybe the upstream service your agent is communicating with has a bad RBAC story, and it can’t provide granular permissions. 
No problem, just write the controls and permissions yourself! This is a great option for agents that you need to lock down with granular controls. However, how do you intercept all of a sandbox’s traffic? How do you set up a proxy that is dynamic and easily programmable? How do you proxy traffic efficiently? These aren’t easy problems to solve.</p><p>With those imperfect methods in mind, what does an ideal auth mechanism look like?</p><p>Ideally, it is:</p><ul><li><p><b>Zero trust.</b> No token is ever granted to an untrusted user for any amount of time.</p></li><li><p><b>Simple. </b>Easy to author. Doesn’t involve a complex system of minting, rotating, and decrypting tokens.</p></li><li><p><b>Flexible.</b> We don’t rely on the upstream system to provide the granular access we need. We can apply whatever rules we want.</p></li><li><p><b>Identity-aware.</b> We can identify the sandbox making the call and apply specific rules for it.</p></li><li><p><b>Observable.</b> We can easily gather information about what calls are being made.</p></li><li><p><b>Performant.</b> We aren’t round-tripping to a centralized or slow source of truth.</p></li><li><p><b>Transparent.</b> The sandboxed workload doesn’t have to know about it. Things just work.</p></li><li><p><b>Dynamic.</b> We can change rules on the fly.</p></li></ul><p>We believe outbound Workers for Sandboxes fit the bill on all of these. Let’s see how.</p>
    <div>
      <h2>Outbound Workers in practice</h2>
      <a href="#outbound-workers-in-practice">
        
      </a>
    </div>
    
    <div>
      <h3>Basics: restriction and observability</h3>
      <a href="#basics-restriction-and-observability">
        
      </a>
    </div>
    <p>First, we’ll look at a very basic example: logging requests and denying specific actions.</p><p>In this case, we’ll use the <code>outbound</code> function, which intercepts all outgoing HTTP requests from the sandbox. With a few lines of JavaScript, it’s easy to ensure only GETs are made, logging and denying any disallowed methods.</p>
            <pre><code>class MySandboxedApp extends Sandbox {
  static outbound = (req, env, ctx) =&gt; {
    // Deny any non-GET action and log
    if (req.method !== 'GET') {
      console.log(`Container making ${req.method} request to: ${req.url}`);
      return new Response('Not Allowed', { status: 405, statusText: 'Method Not Allowed'});
    }

    // Proceed if it is a GET request
    return fetch(req);
  };
}
</code></pre>
            <p>This proxy runs on Workers, on the same machine as the sandboxed VM. Workers were built for quick response times, often sitting in front of cached CDN traffic, so additional latency is extremely minimal.</p><p>Because this is running on Workers, we get observability out of the box. You can view logs and outbound requests <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>in the Workers dashboard</u></a> or <a href="https://developers.cloudflare.com/workers/observability/logs/logpush/"><u>export them</u></a> to your application performance monitoring tool of choice.</p>
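<p>If you want structured logs rather than free-form strings, one pattern is to build a compact record per outbound request before forwarding it. A sketch (the <code>outboundLogRecord</code> helper is ours, not an SDK API; it reads only the <code>url</code> and <code>method</code> of the intercepted request):</p>

```javascript
// Illustrative helper (not an SDK API): build a compact structured
// record for an outbound request, suitable for logging inside an
// outbound handler before forwarding with fetch(req).
function outboundLogRecord(req, containerId) {
  const url = new URL(req.url);
  return {
    containerId,          // which sandbox made the call
    method: req.method,
    host: url.hostname,
    path: url.pathname,
  };
}

// Hypothetical usage inside an outbound handler:
//   console.log(JSON.stringify(outboundLogRecord(req, ctx.containerId)));
//   return fetch(req);
```

<p>Structured records like this are easier to filter and aggregate downstream than raw log lines.</p>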
    <div>
      <h3>Zero trust credential injection</h3>
      <a href="#zero-trust-credential-injection">
        
      </a>
    </div>
    <p>How would we use this to enforce a <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/"><u>zero trust environment</u></a> for our agent? Let’s imagine we want to make a request to a private GitHub instance, but we never want our LLM to access a private token.</p><p>We can use <code>outboundByHost</code> to define functions for specific domains or IPs. In this case, we’ll inject a protected credential if the domain is “my-internal-vcs.dev”. The sandboxed agent <i>never has access</i> to these credentials.</p>
            <pre><code>class OpenCodeInABox extends Sandbox {
  static outboundByHost = {
    "my-internal-vcs.dev": (request, env, ctx) =&gt; {
      const headersWithAuth = new Headers(request.headers);
      headersWithAuth.set("x-auth-token", env.SECRET);
      return fetch(request, { headers: headersWithAuth });
    }
  }
}
</code></pre>
            <p>It is also easy to conditionalize the response based on the identity of the container. You don’t have to inject the same tokens for every sandbox instance.</p>
            <pre><code> static outboundByHost = {
  "my-internal-vcs.dev": (request, env, ctx) =&gt; {
    // note: KV is encrypted at rest and in transit
    const authKey = await env.KEYS.get(ctx.containerId);

    const requestWithAuth = new Request(request);
    requestWithAuth.headers.set("x-auth-token", authKey);
    return fetch(requestWithAuth);
  }
}
</code></pre>
            
    <div>
      <h3>Using the Cloudflare Developer Platform</h3>
      <a href="#using-the-cloudflare-developer-platform">
        
      </a>
    </div>
    <p>As you may have noticed in the last example, another major advantage of using outbound Workers is that it makes integration into the Workers ecosystem easier. Previously, if a user wanted to access <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a>, they would have to inject an R2 credential, then make a call from their container to the public R2 API. Same for <a href="https://developers.cloudflare.com/kv/"><u>KV</u></a>, <a href="https://developers.cloudflare.com/agents/"><u>Agents</u></a>, other <a href="https://developers.cloudflare.com/containers/"><u>Containers</u></a>, other <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Worker services</u></a>, <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>etc</u></a>.</p><p>Now, you just call <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>any binding</u></a> from your outbound Workers.</p>
            <pre><code>class MySandboxedApp extends Sandbox {
  static outboundByHost = {
    "my.kv": async (req, env, ctx) =&gt; {
      const key = keyFromReq(req);
      const myResult = await env.KV.get(key);
      return new Response(myResult);
    },
    "objects.cf": async (req, env, ctx) =&gt; {
      const prefix = ctx.containerId
      const path = pathFromRequest(req);
      const object = await env.R2.get(`${prefix}/${path}`);
      const myResult = await env.KV.get(key);
      return new Response(myResult);
    },
  };
}
</code></pre>
            <p>Rather than parsing tokens and setting up policies, we can easily conditionalize access with code and whatever logic we want. In the R2 example, we also were able to use the sandbox’s ID to further scope access with ease.</p>
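<p>That per-sandbox scoping boils down to prefixing object keys with the container ID. A sketch of the key-building logic (the <code>scopedKey</code> helper name is ours):</p>

```javascript
// Illustrative helper (not an SDK API): build an R2 key scoped to one
// sandbox, so each container can only reach objects under its own
// containerId prefix.
function scopedKey(containerId, path) {
  // Drop a leading slash so the prefix join stays clean.
  const clean = path.startsWith("/") ? path.slice(1) : path;
  return `${containerId}/${clean}`;
}

console.log(scopedKey("sandbox-abc", "/reports/q1.csv")); // sandbox-abc/reports/q1.csv
```

<p>Because the prefix comes from <code>ctx.containerId</code> in the handler, no sandbox can name its way into another sandbox’s objects.</p>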
    <div>
      <h3>Making controls dynamic</h3>
      <a href="#making-controls-dynamic">
        
      </a>
    </div>
    <p>Networking control should also be dynamic. On many platforms, config for Container and VM networking is static, looking something like this:</p>
            <pre><code>{
  defaultEgress: "block",
  allowedDomains: ["github.com", "npmjs.org"]
}
</code></pre>
            <p>This is better than nothing, but we can do better. For many sandboxes, we might want to apply a policy on start, but then override it with another once specific operations have been performed.</p><p>For instance, we can boot a sandbox, grab our dependencies via NPM and Github, and then lock down egress after that. This ensures that we open up the network for as little time as possible.</p><p>To achieve this, we can use <code>outboundHandlers</code>, which allows us to define arbitrary outbound handlers that can be applied programmatically using the <code>setOutboundHandler</code> method. Each of these also takes params, allowing you to customize behavior from code. In this case, we will allow some hostnames with the custom “<code>allowHosts</code>” policy, then turn off HTTP. </p>
            <pre><code>class MySandboxedApp extends Sandbox {
  static outboundHandlers = {
    async allowHosts(req, env, { params }) {
      const url = new URL(req.url);
      const allowedHostname = params.allowedHostnames.includes(url.hostname);

      if (allowedHostname) {
        return await fetch(req);
      } else {
        return new Response(null, { status: 403, statusText: "Forbidden" });
      }
    },
    
    async noHttp(req) {
      return new Response(null, { status: 403, statusText: "Forbidden" });
    }
  }
}

async function setUpSandboxes(req, env) {
  const sandbox = await env.SANDBOX.getByName(userId);
  await sandbox.setOutboundHandler("allowHosts", {
    allowedHostnames: ["github.com", "npmjs.org"]
  });
  await sandbox.gitClone(userRepoURL);
  await sandbox.exec("npm install");
  await sandbox.setOutboundHandler("noHttp");
}
</code></pre>
            <p>This could be extended even further. Your agent might ask the end user a question like “Do you want to allow POST requests to <code>cloudflare.com</code>?” based on whatever tools it needs at that time. With dynamic outbound Workers, you can easily modify the sandbox rules on the fly to provide this level of control.</p>
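<p>Concretely, a dynamic POST-approval policy could look like the following sketch. The handler name <code>gatePosts</code>, the <code>approvedPosts</code> param, and the <code>allowPostsTo</code> helper are hypothetical names invented for this example, and the <code>Sandbox</code> stub stands in for the real SDK class:</p>

```javascript
// Stub for illustration; in a real Worker this comes from @cloudflare/sandbox.
class Sandbox {}

class MySandboxedApp extends Sandbox {
  static outboundHandlers = {
    // Reject POSTs to any hostname the user hasn't approved yet.
    // "gatePosts" and the approvedPosts param are hypothetical names.
    async gatePosts(req, env, { params }) {
      const { hostname } = new URL(req.url);
      const approved = (params.approvedPosts || []).includes(hostname);
      if (req.method === "POST" && !approved) {
        return new Response(null, { status: 403, statusText: "Forbidden" });
      }
      return fetch(req);
    },
  };
}

// After the agent asks "Allow POST requests to cloudflare.com?" and the
// user says yes, widen the policy on the fly:
async function allowPostsTo(sandbox, hostname, current = []) {
  await sandbox.setOutboundHandler("gatePosts", {
    approvedPosts: [...current, hostname],
  });
}
```

<p>Since outbound handlers can be swapped at any time, the approval takes effect without restarting the sandbox.</p>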
    <div>
      <h2>TLS support with MITM Proxying</h2>
      <a href="#tls-support-with-mitm-proxying">
        
      </a>
    </div>
    <p>To do anything useful with requests beyond allowing or denying them, you need to have access to the content. This means that if you’re making HTTPS requests, they need to be decrypted by the Workers proxy.</p><p>To achieve this, a unique ephemeral certificate authority (CA) and private key are created for each Sandbox instance, and the CA is placed into the sandbox. By default, sandbox instances will trust this CA, while standard container instances can opt into trusting it, for instance by calling <code>sudo update-ca-certificates</code>.</p>
            <pre><code>export class MyContainer extends Container {
  interceptHttps = true;
}

MyContainer.outbound = (req, env, ctx) =&gt; {
  // All HTTP(S) requests will trigger this hook.
  return fetch(req);
};

</code></pre>
            <p>TLS traffic is proxied by an isolated Cloudflare network process that performs the TLS handshake itself. It mints a leaf certificate from an ephemeral, unique private key, using the SNI extracted from the ClientHello, and then invokes the configured Worker on the same machine to handle the HTTPS request.</p><p>The ephemeral private key and CA never leave our container runtime sidecar process, and are never shared across other container sidecar processes.</p><p>With this in place, outbound Workers act as a truly transparent proxy. The sandbox doesn't need any awareness of specific protocols or domains — all HTTP and HTTPS traffic flows through the outbound handler for filtering or modification.</p>
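<p>Because the proxy terminates TLS, an outbound Worker can inspect and rewrite decrypted HTTPS requests like any other. A minimal sketch (the <code>Container</code> stub and the <code>redactAuth</code> helper are illustrative, not part of the SDK; the header chosen is just an example):</p>

```javascript
// Stub for illustration; in a real Worker this comes from @cloudflare/containers.
class Container {}

// Strip a sensitive header before the (decrypted) request leaves the machine.
// The header chosen here is just an example.
function redactAuth(req) {
  const cleaned = new Request(req);
  cleaned.headers.delete("authorization");
  return cleaned;
}

class MyContainer extends Container {
  interceptHttps = true; // opt the container into HTTPS interception
}

MyContainer.outbound = (req, env, ctx) => {
  // TLS was already terminated with the per-sandbox CA,
  // so this handler sees plaintext HTTPS.
  return fetch(redactAuth(req));
};
```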
    <div>
      <h2>Under the hood</h2>
      <a href="#under-the-hood">
        
      </a>
    </div>
    <p>To enable the functionality shown above in both <a href="https://github.com/cloudflare/containers"><code><u>Container</u></code></a> and <a href="https://github.com/cloudflare/sandbox-sdk"><code><u>Sandbox</u></code></a>, we added new methods to the <a href="https://developers.cloudflare.com/durable-objects/api/container/"><code><u>ctx.container</u></code></a> object: <code>interceptOutboundHttp</code> and <code>interceptOutboundHttps</code>. These intercept outgoing requests to specific hostnames (with basic glob matching) or IP ranges, and can also be used to intercept all outbound requests. Each method is called with a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/"><u>WorkerEntrypoint</u></a>, which gets set up as the front door to the outbound Worker.</p>
            <pre><code>export class MyWorker extends WorkerEntrypoint {
  fetch() {
    return new Response(this.ctx.props.message);
  }
}

// ... inside your container DurableObject ...
this.ctx.container.start({ enableInternet: false });
const outboundWorker = this.ctx.exports.MyWorker({ props: { message: 'hello' } });
await this.ctx.container.interceptOutboundHttp('15.0.0.1:80', outboundWorker);

// From now on, all HTTP requests to 15.0.0.1:80 return "hello"
await this.waitForContainerToBeHealthy();

// You can decide to return another message now...
const secondOutboundWorker = this.ctx.exports.MyWorker({ props: { message: 'switcheroo' } });
await this.ctx.container.interceptOutboundHttp('15.0.0.1:80', secondOutboundWorker);
// all HTTP requests to 15.0.0.1 now show "switcheroo", even on connections that were
// open before this interceptOutboundHttp

// You can even set hostnames, CIDRs, for both IPv4 and IPv6
await this.ctx.container.interceptOutboundHttp('example.com', secondOutboundWorker);
await this.ctx.container.interceptOutboundHttp('*.example.com', secondOutboundWorker);
await this.ctx.container.interceptOutboundHttp('123.123.123.123/23', secondOutboundWorker);</code></pre>
            <p>All proxying to Workers happens locally on the same machine that runs the sandbox VM, so even though communication between container and Worker is “authless”, it never leaves the machine and remains secure.</p><p>These methods can be called at any time, before or after starting the container, even while connections are still open. Connections that send multiple HTTP requests will automatically pick up a new entrypoint, so updating outbound Workers will not break existing TCP connections or interrupt HTTP requests.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21utJcR5UpS7c0YNj8Y385/4a47d83acad42c5708ca23a7b04342f3/BLOG-3199_3.png" />
          </figure><p>Local development with <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev"><u>wrangler dev</u></a> also supports egress interception. To make this possible, we automatically spawn a sidecar process inside the local container’s network namespace. We called this sidecar component <a href="https://github.com/cloudflare/proxy-everything"><i><u>proxy-everything</u></i></a>. Once <i>proxy-everything</i> is attached, it applies the appropriate TPROXY nftables rules, routing matching traffic from the local Container to <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>, Cloudflare’s open source JavaScript runtime, which runs the outbound Worker. This allows the local development experience to mirror what happens in prod, so testing and development remain simple.</p>
    <div>
      <h2>Giving outbound Workers a try</h2>
      <a href="#giving-outbound-workers-a-try">
        
      </a>
    </div>
    <p>If you haven’t tried Cloudflare Sandboxes, check out the <a href="https://developers.cloudflare.com/sandbox/get-started/"><u>Getting Started guide</u></a>. If you are a current user of <a href="https://developers.cloudflare.com/containers/"><u>Containers</u></a> or <a href="https://developers.cloudflare.com/sandbox/"><u>Sandboxes</u></a>, start using outbound Workers now by <a href="https://developers.cloudflare.com/containers/platform-details/outbound-traffic/"><u>reading the documentation</u></a> and upgrading to <code>@cloudflare/containers@0.3.0</code> or <code>@cloudflare/sandbox@0.8.9</code>.</p>
     ]]></content:encoded>
            <category><![CDATA[Containers]]></category>
            <category><![CDATA[Sandbox]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">3j8ROSAUomMNPrmru3f2U9</guid>
            <dc:creator>Mike Nomitch</dc:creator>
            <dc:creator>Gabi Villalonga Simón</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Agents Week]]></title>
            <link>https://blog.cloudflare.com/welcome-to-agents-week/</link>
            <pubDate>Sun, 12 Apr 2026 17:01:05 GMT</pubDate>
            <description><![CDATA[ Cloudflare's mission has always been to help build a better Internet. Sometimes that means building for the Internet as it exists. Sometimes it means building for the Internet as it's about to become. 

This week, we're kicking off Agents Week, dedicated to what comes next.
 ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare's mission has always been to help build a better Internet. Sometimes that means building for the Internet as it exists. Sometimes it means building for the Internet as it's about to become. </p><p>Today, we're kicking off Agents Week, dedicated to building the Internet for what comes next.</p>
    <div>
      <h2>The Internet wasn't built for the age of AI. Neither was the cloud.</h2>
      <a href="#the-internet-wasnt-built-for-the-age-of-ai-neither-was-the-cloud">
        
      </a>
    </div>
    <p>The cloud, as we know it, was a product of the last major technological paradigm shift: smartphones.</p><p>When smartphones put the Internet in everyone's pocket, they didn't just add users — they changed the nature of what it meant to be online. Always connected, always expecting an instant response. Applications had to handle an order of magnitude more users, and the infrastructure powering them had to evolve.</p><p>The approach the industry converged on was straightforward: more users, more copies of your application. As applications grew in complexity, teams broke them into smaller pieces — microservices — so each team could control its own destiny. But the core principle stayed the same: a finite number of applications, each serving many users. Scale meant more copies.</p><p>Kubernetes and containers became the default. They made it easy to spin up instances, load balance, and tear down what you didn't need. Under this one-to-many model, a single instance could serve many users, and even as user counts grew into the billions, the number of things you had to manage stayed finite.</p><p>Agents break this.</p>
    <div>
      <h2>One user, one agent, one task</h2>
      <a href="#one-user-one-agent-one-task">
        
      </a>
    </div>
    <p>Unlike every application that came before them, agents are one-to-one. Each agent is a unique instance. Serving one user, running one task. Where a traditional application follows the same execution path regardless of who's using it, an agent requires its own execution environment: one where the LLM dictates the code path, calls tools dynamically, adjusts its approach, and persists until the task is done.</p><p>Think of it as the difference between a restaurant and a personal chef. A restaurant has a menu — a fixed set of options — and a kitchen optimized to churn them out at volume. That's most applications today. An agent is more like a personal chef who asks: what do you want to eat? They might need entirely different ingredients, utensils, or techniques each time. You can't run a personal-chef service out of the same kitchen setup you'd use for a restaurant.</p><p>Over the past year, we've seen agents take off, with coding agents leading the way — not surprisingly, since developers tend to be early adopters. The way most coding agents work today is by spinning up a container to give the LLM what it needs: a filesystem, git, bash, and the ability to run arbitrary binaries.</p><p>But coding agents are just the beginning. Tools like Claude Cowork are already making agents accessible to less technical users. Once agents move beyond developers and into the hands of everyone — administrative assistants, research analysts, customer service reps, personal planners — the scale math gets sobering fast.</p>
    <div>
      <h2>The math on scaling agents to the masses</h2>
      <a href="#the-math-on-scaling-agents-to-the-masses">
        
      </a>
    </div>
    <p>If the more than 100 million knowledge workers in the US each used an agentic assistant at ~15% concurrency, you'd need capacity for approximately 24 million simultaneous sessions. At 25–50 users per CPU, that's somewhere between 500K and 1M server CPUs — just for the US, with one agent per person.</p><p>Now picture each person running several agents in parallel. Now picture the rest of the world with more than 1 billion knowledge workers. We're not a little short on compute. We're orders of magnitude away.</p><p>So how do we close that gap?</p>
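<p>A quick sanity check of that arithmetic (the 160 million total is an assumption chosen to match the ~24 million-session figure; the post itself only says "more than 100 million"):</p>

```javascript
// Back-of-the-envelope capacity math from the paragraph above.
// knowledgeWorkers is an assumption: the post says "more than 100 million";
// ~160 million is the value that yields the ~24 million-session estimate.
const knowledgeWorkers = 160_000_000;
const concurrencyPct = 15; // ~15% of workers active at once

const sessions = (knowledgeWorkers * concurrencyPct) / 100; // 24,000,000

// At 25–50 users per CPU, the CPU count brackets:
const cpusUpper = sessions / 25; // 960,000
const cpusLower = sessions / 50; // 480,000

console.log({ sessions, cpusLower, cpusUpper });
```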
    <div>
      <h2>Infrastructure built for agents</h2>
      <a href="#infrastructure-built-for-agents">
        
      </a>
    </div>
    <p>Eight years ago, we launched <a href="https://workers.cloudflare.com/"><u>Workers</u></a> — the beginning of our developer platform, and a bet on containerless, serverless compute. The motivation at the time was practical: we needed lightweight compute without cold-starts for customers who depended on Cloudflare for speed. Built on V8 isolates rather than containers, Workers turned out to be an order of magnitude more efficient — faster to start, cheaper to run, and natively suited to the "spin up, execute, tear down" pattern.</p><p>What we didn't anticipate was how well this model would map to the age of agents.</p><p>Containers give every agent a full commercial kitchen: bolted-down appliances, walk-in fridges, the works, whether the agent needs them or not. Isolates, on the other hand, give the personal chef exactly the counter space, the burner, and the knife they need for this particular meal. Provisioned in milliseconds. Cleaned up the moment the dish is served.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UM1NukO0Ho4lQAYk5CoU8/30e1376c4fe61e86204a0de92ae4612b/BLOG-3238_2.png" />
          </figure><p>In a world where we need to support not thousands of long-running applications, but billions of ephemeral, single-purpose execution environments — isolates are the right primitive. </p><p>Each one starts in milliseconds. Each one is securely sandboxed. And you can run orders of magnitude more of them on the same hardware compared to containers.</p><p>Just a few weeks ago, we took this further with the <a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers open beta</u></a>: execution environments spun up at runtime, on demand. An isolate takes a few milliseconds to start and uses a few megabytes of memory. That's roughly 100x faster and up to 100x more memory-efficient than a container. </p><p>You can start a new one for every single request, run a snippet of code, and throw it away — at a scale of millions per second.</p><p>For agents to move beyond early adopters and into everyone's hands, they also have to be affordable. Running each agent in its own container is expensive enough that agentic tools today are mostly limited to coding assistants for engineers who can justify the cost. <b>Isolates, by running orders of magnitude more efficiently, are what make per-unit economics viable at the scale agents require.</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6iiD7zACxMNEDdvJWM6zbo/2261c320f3be3cd2fa6ef34593a1ee09/BLOG-3238_3.png" />
          </figure>
    <div>
      <h2>The horseless carriage phase</h2>
      <a href="#the-horseless-carriage-phase">
        
      </a>
    </div>
    <p>While it’s critical to build the right foundation for the future, we’re not there yet. And every paradigm shift has a period where we try to make the new thing work within the old model. The first cars were called "horseless carriages." The first websites were digital brochures. The first mobile apps were shrunken desktop UIs. We're in that phase now with agents.</p><p>You can see it everywhere. </p><p>We're giving agents headless browsers to navigate websites designed for human eyes, when what they need are structured protocols like MCP to discover and invoke services directly. </p><p>Many early MCP servers are thin wrappers around existing REST APIs — same CRUD operations, new protocol — when LLMs are actually far better at writing code than making sequential tool calls. </p><p>We're using CAPTCHAs and behavioral fingerprinting to verify the thing on the other end of a request, when increasingly that thing is an agent acting on someone's behalf — and the right question isn't "are you human?" but "which agent are you, who authorized you, and what are you allowed to do?"</p><p>We're spinning up full containers for agents that just need to make a few API calls and return a result.</p><p>These are just a few examples, but none of this is surprising. It's what transitions look like.</p>
    <div>
      <h2>Building for both</h2>
      <a href="#building-for-both">
        
      </a>
    </div>
    <p>The Internet is always somewhere between two eras. IPv6 is objectively better than IPv4, but dropping IPv4 support would break half the Internet. HTTP/2 and HTTP/3 coexist. TLS 1.2 still hasn't fully given way to 1.3. The better technology exists, the old technology persists, and the job of infrastructure is to bridge both.</p><p>Cloudflare has always been in the business of bridging these transitions. The shift to agents is no different.</p><p>Coding agents genuinely need containers — a filesystem, git, bash, arbitrary binary execution. That's not going away. This week, our container-based sandbox environments are going GA, because we're committed to making them the best they can be. We're going deeper on browser rendering for agents, because there will be a long tail of services that don't yet speak MCP, and agents will still need to interact with them. These aren't stopgaps — they're part of a complete platform.</p><p>But we're also building what comes next: the isolates, the protocols, and the identity models that agents actually need. Our job is to make sure you don't have to choose between what works today and what's right for tomorrow.</p>
    <div>
      <h2>Security in the model, not around it</h2>
      <a href="#security-in-the-model-not-around-it">
        
      </a>
    </div>
    <p>If agents are going to handle our professional and personal tasks — reading our email, operating on our code, interacting with our financial services — then security has to be built into the execution model, not layered on after the fact.</p><p>CISOs have been the first to confront this. The productivity gains from putting agents in everyone's hands are real, but today, most agent deployments are fraught with risk: prompt injection, data exfiltration, unauthorized API access, opaque tool usage. </p><p>A developer's vibe-coding agent needs access to repositories and deployment pipelines. An enterprise's customer service agent needs access to internal APIs and user data. In both cases, securing the environment today means stitching together credentials, network policies, and access controls that were never designed for autonomous software.</p><p>Cloudflare has been building two platforms in parallel: our developer platform, for people who build applications, and our zero trust platform, for organizations that need to secure access. For a while, these served distinct audiences. </p><p>But "how do I build this agent?" and "how do I make sure it's safe?" are increasingly the same question. We're bringing these platforms together so that all of this is native to how agents run, not a separate layer you bolt on.</p>
    <div>
      <h2>Agents that follow the rules</h2>
      <a href="#agents-that-follow-the-rules">
        
      </a>
    </div>
    <p>There's another dimension to the agent era that goes beyond compute and security: economics and governance.</p><p>When agents interact with the Internet on our behalf — reading articles, consuming APIs, accessing services — there needs to be a way for the people and organizations who create that content and run those services to set terms and get paid. Today, the web's economic model is built around human attention: ads, paywalls, subscriptions. </p><p>Agents don't have attention (well, not that <a href="https://arxiv.org/abs/1706.03762"><u>kind of attention</u></a>). They don't see ads. They don't click through cookie banners.</p><p>If we want an Internet where agents can operate freely <i>and</i> where publishers, content creators, and service providers are fairly compensated, we need new infrastructure for it. We’re building tools that make it easy for publishers and content owners to set and enforce policies for how agents interact with their content.</p><p>Building a better Internet has always meant making sure it works for everyone — not just the people building the technology, but the people whose work and creativity make the Internet worth using. That doesn't change in the age of agents. It becomes more important.</p>
    <div>
      <h2>The platform for developers and agents</h2>
      <a href="#the-platform-for-developers-and-agents">
        
      </a>
    </div>
    <p>Our vision for the developer platform has always been to provide a comprehensive platform that just works: from experiment, to MVP, to scaling to millions of users. But providing the primitives is only part of the equation. A great platform also has to think about how everything works together, and how it integrates into your development flow.</p><p>That job is evolving. It used to be purely about developer experience, making it easy for humans to build, test, and ship. Increasingly, it's also about helping agents help humans, and making the platform work not just for the people building agents, but for the agents themselves. Can an agent find the most up-to-date best practices? How easily can it discover and invoke the tools and CLIs it needs? How seamlessly can it move from writing code to deploying it?</p><p>This week, we're shipping improvements across both dimensions — making Cloudflare better for the humans building on it and for the agents running on it.</p>
    <div>
      <h2>Building for the future is a team sport</h2>
      <a href="#building-for-the-future-is-a-team-sport">
        
      </a>
    </div>
    <p>Building for the future is not something we can do alone. Every major Internet transition — from HTTP/1.1 to HTTP/2 and HTTP/3, from TLS 1.2 to 1.3 — has required the industry to converge on shared standards. The shift to agents will be no different.</p><p>Cloudflare has a long history of contributing to and helping push forward the standards that make the Internet work. We've been <a href="https://blog.cloudflare.com/tag/ietf/"><u>deeply involved in the IETF</u></a> for over a decade, helping develop and deploy protocols like QUIC, TLS 1.3, and Encrypted Client Hello. We were a founding member of WinterTC, the ECMA technical committee for JavaScript runtime interoperability. We open-sourced the Workers runtime itself, because we believe the foundation should be open.</p><p>We're bringing the same approach to the agentic era. We're excited to be part of the Linux Foundation and AAIF, and to help support and push forward standards like MCP that will be foundational for the agentic future. Since Anthropic introduced MCP, we've worked closely with them to build the infrastructure for remote MCP servers, open-sourced our own implementations, and invested in making the protocol practical at scale. </p><p>Last year, alongside Coinbase, we <a href="https://blog.cloudflare.com/x402/"><u>co-founded the x402 Foundation</u></a> to steward x402, an open, neutral standard that revives the long-dormant HTTP 402 status code to give agents a native way to pay for the services and content they consume. </p><p>Agent identity, authorization, payment, safety: these all need open standards that no single company can define alone.</p>
    <div>
      <h2>Stay tuned</h2>
      <a href="#stay-tuned">
        
      </a>
    </div>
    <p>This week, we're making announcements across every dimension of the agent stack: compute, connectivity, security, identity, economics, and developer experience.</p><p>The Internet wasn't built for AI. The cloud wasn't built for agents. But Cloudflare has always been about helping build a better Internet — and what "better" means changes with each era. This is the era of agents. This week, <a href="https://blog.cloudflare.com/"><u>follow along</u></a> and we'll show you what we're building for it.</p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">4dZj0C0XnS9BJQnxy2QkzY</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Dane Knecht</dc:creator>
        </item>
        <item>
            <title><![CDATA[500 Tbps of capacity: 16 years of scaling our global network]]></title>
            <link>https://blog.cloudflare.com/500-tbps-of-capacity/</link>
            <pubDate>Fri, 10 Apr 2026 18:00:05 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s global network has officially crossed 500 Tbps of external capacity, enough to route more than 20% of the web and absorb the largest DDoS attacks ever recorded. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>Cloudflare’s global network and backbone in 2026.</i></sup> </p><p>Cloudflare's network recently passed a major milestone: we crossed 500 terabits per second (Tbps) of external capacity.</p><p>When we say 500 Tbps, we mean total provisioned external interconnection capacity: the sum of every port facing a transit provider, private peering partner, Internet exchange, or <a href="https://blog.cloudflare.com/announcing-express-cni/"><u>Cloudflare Network Interconnect</u></a> (CNI) port across all 330+ cities. This is not peak traffic. On any given day, our peak utilization is a fraction of that number. (The rest is our DDoS budget.)</p><p>It’s a long way from where we started. In 2010, we launched from a small office above a nail salon in Palo Alto, with a single transit provider and a reverse proxy you could set up by <a href="https://blog.cloudflare.com/whats-the-story-behind-the-names-of-cloudflares-name-servers/"><u>changing two nameservers</u></a>.</p>
    <div>
      <h3>The early days of transit and peering</h3>
      <a href="#the-early-days-of-transit-and-peering">
        
      </a>
    </div>
    <p>Our first transit provider was nLayer Communications, a network most people now know as GTT. nLayer gave us our first capacity and our first hands-on experience with peering relationships and the careful balance between cost and performance.</p><p>From there, we grew <a href="https://blog.cloudflare.com/and-then-there-were-threecloudflares-new-data/"><u>city</u></a> by <a href="https://blog.cloudflare.com/luxembourg-chisinau/"><u>city</u></a>: Chicago, Ashburn, San Jose, Amsterdam, Tokyo. Each new data center meant negotiating colocation contracts, pulling fiber, racking servers, and establishing peering through <a href="https://blog.cloudflare.com/think-global-peer-local-peer-with-cloudflare-at-100-internet-exchange-points/"><u>Internet exchanges</u></a>. The Internet isn't actually a cloud, of course. It is a collection of specific rooms full of cables, and we spent years learning the nuances of every one of them.</p><p>Not every city was a straightforward deployment: we had to deal with missing hardware, customs strikes, and even <a href="https://blog.cloudflare.com/stories-from-our-recent-global-data-center-upgrade/"><u>dental floss</u></a>. In a single 24-day stretch in 2018, we opened up in 31 cities: from <a href="https://blog.cloudflare.com/kathmandu/"><u>Kathmandu</u></a> and <a href="https://blog.cloudflare.com/baghdad/"><u>Baghdad</u></a> to <a href="https://blog.cloudflare.com/reykjavik-cloudflares-northernmost-location/"><u>Reykjavík</u></a> and <a href="https://blog.cloudflare.com/luxembourg-chisinau/"><u>Chișinău</u></a>. When we opened our 127th data center in <a href="https://blog.cloudflare.com/macau/"><u>Macau</u></a>, we were protecting 7 million Internet properties. Today, with data centers in 330+ cities, we protect more than 20% of the web.</p>
    <div>
      <h3>When the network became the security layer </h3>
      <a href="#when-the-network-became-the-security-layer">
        
      </a>
    </div>
    <p>As our footprint grew, customers asked for more than just website caching. They needed to protect employees, replace aging Multiprotocol Label Switching (<a href="https://www.cloudflare.com/learning/network-layer/what-is-mpls/"><u>MPLS</u></a>) circuits, and <a href="https://blog.cloudflare.com/mpls-to-zerotrust/"><u>secure entire enterprise networks</u></a>. Instead of traditional appliances, <a href="https://blog.cloudflare.com/magic-transit-network-functions/"><u>we built systems</u></a> to establish secure tunnels to private subnets and advertise enterprise IP space directly from our global network via BGP.</p><p>The scale of threats grew in parallel. In 2025, we mitigated a 31.4 Tbps <a href="https://blog.cloudflare.com/ddos-threat-report-2025-q4/"><u>DDoS attack</u></a> lasting 35 seconds. The source was the Aisuru-Kimwolf botnet, including many infected Android TVs. It was one of over 5,000 attacks we blocked that day. No engineer was paged.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4TRCTdrET5JTpz0BaxSdF4/4623f0953417f0998f6810dd048773ff/BLOG-3267_2.png" />
          </figure><p>A decade ago, an attack of that magnitude would have required nation-state resources to counter. Today, our network handles it in seconds without human intervention. That is what operating at a 500 Tbps scale requires: moving the intelligence to every server in our network so the network can <a href="https://blog.cloudflare.com/deep-dive-cloudflare-autonomous-edge-ddos-protection/"><u>defend itself</u></a>.</p>
    <div>
      <h3>How our network responds to an attack</h3>
      <a href="#how-our-network-responds-to-an-attack">
        
      </a>
    </div>
    <p>Here is what actually happens when an attack hits our network. Packets arrive at the network interface card (NIC) and immediately enter an eXpress Data Path (<a href="https://en.wikipedia.org/wiki/Express_Data_Path"><u>XDP</u></a>) program chain managed by <i>xdpd</i>, running in driver mode. Among the first programs in that chain is <i>l4drop</i>, which evaluates each packet against mitigation rules in extended Berkeley Packet Filter (eBPF). Those rules are generated by <i>dosd</i>, our denial of service daemon, which runs on every server in our fleet. Each <i>dosd</i> instance samples incoming traffic, builds a table of the heaviest hitters it sees, and broadcasts that table to every other instance in the colo. The result is a shared colo-wide view of traffic, and because every server works from the same data, they reach the same mitigation decision.</p>
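<p>A toy sketch of the heavy-hitter accounting <i>dosd</i> performs (illustrative only, not Cloudflare's implementation): each instance counts sampled packet keys, merges the tables its siblings broadcast, and derives the same top-N list as every other server in the colo.</p>

```javascript
// Toy heavy-hitters table in the spirit of dosd: count sampled packet
// signatures, keep the top N, and merge tables broadcast by peers.
// Illustrative sketch only — not Cloudflare's actual implementation.
class HeavyHitters {
  constructor(capacity = 3) {
    this.capacity = capacity;
    this.counts = new Map();
  }

  // Record one sampled packet, keyed by e.g. (srcIP, dstPort, proto).
  observe(key, weight = 1) {
    this.counts.set(key, (this.counts.get(key) || 0) + weight);
  }

  // Merge a table broadcast by a sibling server in the same colo.
  merge(other) {
    for (const [key, n] of other.counts) this.observe(key, n);
  }

  // Top-N view; because every server merges the same broadcasts,
  // they all derive the same mitigation candidates.
  top() {
    return [...this.counts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, this.capacity)
      .map(([key]) => key);
  }
}
```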
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tGvgIFrQkO0BqATmbWVst/c0b8dff1379a9b1437ec784f10876a2c/BLOG-3267_3.png" />
          </figure><p>When <i>dosd</i> detects an attack pattern, the resulting rule is applied locally via <i>l4drop</i> and propagates globally via Quicksilver, our distributed key-value (KV) store, reaching every server in every data center within seconds. Only after surviving <i>l4drop</i> do packets reach Unimog, our Layer 4 (L4) load balancer, which distributes them across healthy servers in the data center. For Magic Transit customers routing enterprise network traffic through our edge, <i>flowtrackd</i> adds a further layer of stateful TCP inspection, tracking connection state and dropping packets that don't belong to legitimate flows.</p><p>The 31.4 Tbps attack we mitigated followed exactly this path. No traffic was backhauled to a centralized scrubbing center. No human intervened. Every server in the targeted data centers independently recognized the attack and began dropping malicious packets at line rate, before those packets consumed a single CPU cycle of application processing. The software is only half the story: none of it works if the ports aren't there to absorb the traffic in the first place.</p>
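<p>The stateful inspection step above can be pictured with a toy flow table (an illustrative sketch, not <i>flowtrackd</i>'s actual logic): pass packets only for flows whose handshake was observed, and drop mid-flow packets that have no tracked state:</p>

```typescript
// Toy model of stateful TCP flow tracking (illustrative, heavily simplified).
type Packet = { flowKey: string; syn: boolean; ack: boolean };

class FlowTable {
  private state = new Map<string, "syn-seen" | "established">();

  // Returns true if the packet should be passed, false if it should be dropped.
  admit(p: Packet): boolean {
    const s = this.state.get(p.flowKey);
    if (p.syn && !p.ack) {
      this.state.set(p.flowKey, "syn-seen"); // start of a handshake we can track
      return true;
    }
    if (s === "syn-seen" && p.ack) {
      this.state.set(p.flowKey, "established"); // handshake completed
      return true;
    }
    // Mid-flow packets with no tracked state don't belong to a legitimate flow.
    return s === "established";
  }
}
```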
    <div>
      <h3>A distributed developer platform</h3>
      <a href="#a-distributed-developer-platform">
        
      </a>
    </div>
    <p>Running code on every server in our network was a natural consequence of controlling the full stack. If we already ran eBPF programs on every machine to drop attack traffic, we could run customer application code there too. That insight became <a href="https://blog.cloudflare.com/code-everywhere-cloudflare-workers/"><u>Workers</u></a>, and later <a href="https://blog.cloudflare.com/introducing-workers-kv/"><u>KV</u></a> and <a href="https://blog.cloudflare.com/introducing-workers-durable-objects/"><u>Durable Objects</u></a>.</p><p>Our developer platform runs in every city we operate in, not in a handful of cloud regions. In 2025, we added <a href="https://blog.cloudflare.com/cloudflare-containers-coming-2025/"><u>Containers</u></a> to Workers, so heavier workloads can run at the edge too. V8 isolates and custom filesystem layers minimize cold starts. Your code runs where your users are, on the same servers that drop attack traffic at line rate via <i>l4drop</i>: attack traffic is dropped before it reaches the network stack, so your application never sees it.</p>
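<p>For readers who haven't written one, a Worker is just an HTTP handler. A minimal example (illustrative; see the Workers documentation for the full API surface):</p>

```typescript
// A minimal Worker: an HTTP handler the runtime invokes for each request.
// Deployed, this same code runs in every city, close to the user.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the edge! You asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default worker;
```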
    <div>
      <h3>Forward-looking protocols: IPv6, RPKI, ASPA</h3>
      <a href="#forward-looking-protocols-ipv6-rpki-aspa">
        
      </a>
    </div>
    <p>We were early adopters of <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>IPv6</u></a> and Resource Public Key Infrastructure (<a href="https://blog.cloudflare.com/rpki/"><u>RPKI</u></a>). <a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024/"><u>BGP hijacks</u></a> cause real outages and security breaches. RPKI allows us to drop invalid routes from peers, ensuring traffic goes where it is supposed to. We sign Route Origin Authorizations (ROAs) for our prefixes and enforce Route Origin Validation on ingress. We reject RPKI-invalid routes, even when that occasionally breaks reachability to networks with misconfigured ROAs.</p><p>Autonomous System Provider Authorization (<a href="https://blog.cloudflare.com/aspa-secure-internet/"><u>ASPA</u></a>) is next. RPKI validates who owns a prefix. ASPA validates the path it took to get here. RPKI is a passport check at the destination, confirming the right owner, while ASPA is a flight manifest check: it verifies every network the traffic passed through. A route leak is like a passenger who boarded in the wrong city; RPKI would not catch it, but ASPA will.</p><p>Current ecosystem adoption for ASPA looks like RPKI did in 2015. We were one of the first networks to deploy RPKI at scale, and today, <a href="https://radar.cloudflare.com/routing"><u>867,000 prefixes</u></a> in the global routing table have valid RPKI certificates, up from near zero a decade ago. At our scale, the protocols we choose have real consequences for the broader Internet. We push for adoption early because waiting means more hijacks and more leaks in the meantime.</p>
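<p>Route Origin Validation is simple to state: a route is valid if some ROA covering the announced prefix authorizes the origin ASN at that prefix length. A simplified sketch of that check, in the spirit of RFC 6811 (IPv4-only, with illustrative types, not our actual validator):</p>

```typescript
// A ROA authorizes an origin ASN to announce a prefix up to a max length.
type Roa = { prefix: string; maxLength: number; asn: number };

function ipv4ToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

// Does the ROA's prefix cover the announced route?
function covers(roaPrefix: string, route: string): boolean {
  const [rIp, rLen] = roaPrefix.split("/");
  const [aIp, aLen] = route.split("/");
  if (Number(aLen) < Number(rLen)) return false; // announcement is less specific
  const mask = Number(rLen) === 0 ? 0 : (~0 << (32 - Number(rLen))) >>> 0;
  return ((ipv4ToInt(rIp) & mask) >>> 0) === ((ipv4ToInt(aIp) & mask) >>> 0);
}

function validate(route: string, originAsn: number, roas: Roa[]): "valid" | "invalid" | "unknown" {
  const covering = roas.filter((r) => covers(r.prefix, route));
  if (covering.length === 0) return "unknown"; // no ROA covers the prefix
  const len = Number(route.split("/")[1]);
  return covering.some((r) => r.asn === originAsn && len <= r.maxLength)
    ? "valid"
    : "invalid"; // covered, but wrong origin or too specific: drop on ingress
}
```

<p>Dropping the "invalid" case on ingress is exactly the enforcement described above, including when a misconfigured ROA makes a legitimate route invalid.</p>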
    <div>
      <h3>AI agents and the evolving Internet</h3>
      <a href="#ai-agents-and-the-evolving-internet">
        
      </a>
    </div>
    <p>AI has changed what it means to have a presence on the web. For most of the Internet’s history, traffic was human-generated, by people clicking links in browsers. Today, AI crawlers, model training pipelines, and autonomous agents <a href="https://blog.cloudflare.com/radar-2025-year-in-review/"><u>now account</u></a> for more than 4% of all HTML requests across our network, comparable to Googlebot itself. "User action" crawling, where an AI visits a page because a human asked it a question, grew over 15x in 2025 alone.</p><p>AI crawlers behave differently than browsers at the infrastructure level. Browsers load a page and stop. Crawlers instead fetch every linked resource at maximum throughput with no pause between requests. At our scale, distinguishing legitimate AI crawling from actual attacks is a real engineering problem. Our <a href="https://blog.cloudflare.com/introducing-ai-crawl-control/"><u>detection systems</u></a> use a combination of verified bot IP ranges, TLS fingerprinting, behavioral analysis, and robots.txt compliance signals to make that distinction, and to give site owners the data they need to <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>decide which crawlers to allow</u></a>.</p><p>At the TLS layer, for example, a legitimate browser presents a ClientHello with a predictable set of cipher suites, extensions, and ordering that matches its declared User-Agent. A crawler spoofing that User-Agent but using a stripped-down TLS library will present a different fingerprint, and that mismatch is one of the signals our systems use to classify the request before it reaches the origin.</p>
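<p>As a sketch of that last signal (purely illustrative: the fingerprint table and names below are invented, and real classifiers work from hashed ClientHello summaries and many more signals):</p>

```typescript
// Hypothetical table mapping a User-Agent family to the TLS fingerprints
// that client is known to produce. The entries here are made up.
const knownFingerprints: Record<string, string[]> = {
  "ExampleBrowser/120": ["fp-browser-120-a", "fp-browser-120-b"],
};

function userAgentFamily(ua: string): string | undefined {
  return Object.keys(knownFingerprints).find((k) => ua.startsWith(k));
}

// A mismatch is one signal among several (verified IP ranges, behavioral
// analysis, robots.txt compliance), never a verdict on its own.
function fingerprintMismatch(ua: string, clientHelloFp: string): boolean {
  const family = userAgentFamily(ua);
  if (!family) return false; // unknown UA: no expectation to violate
  return !knownFingerprints[family].includes(clientHelloFp);
}
```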
    <div>
      <h3>Help us build the next 500 Tbps</h3>
      <a href="#help-us-build-the-next-500-tbps">
        
      </a>
    </div>
    <p>What started above a nail salon in Palo Alto is now a 500 Tbps network in 330+ cities across 125+ countries, where every server runs our developer platform and security services, not just cache. That is sixteen years of architectural decisions compounding, and we owe it to the 13,000+ networks and partners who peer with us. We are not done.</p><p>If you are a network operator, peer with us. Our peering policy and interconnection details are on <a href="https://peeringdb.com/asn/13335"><u>PeeringDB</u></a>. If you are interested in embedding Cloudflare infrastructure directly within your network, reach out to our team at <a href="mailto:epp@cloudflare.com"><u>epp@cloudflare.com</u></a> to join the Edge Partner Program.</p> ]]></content:encoded>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Peering]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">14NmmkLAEg9hFIbB9bosqh</guid>
            <dc:creator>Tanner Ryan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing EmDash — the spiritual successor to WordPress that solves plugin security]]></title>
            <link>https://blog.cloudflare.com/emdash-wordpress/</link>
            <pubDate>Wed, 01 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we are launching the beta of EmDash, a full-stack serverless JavaScript CMS built on Astro 6.0. It combines the features of a traditional CMS with modern security, running plugins in sandboxed Worker isolates. ]]></description>
            <content:encoded><![CDATA[ <p>The cost of building software has drastically decreased. We recently <a href="https://blog.cloudflare.com/vinext/"><u>rebuilt Next.js in one week</u></a> using AI coding agents. But for the past two months our agents have been working on an even more ambitious project: rebuilding the WordPress open source project from the ground up.</p><p>WordPress powers <a href="https://w3techs.com/technologies/details/cm-wordpress"><u>over 40% of the Internet</u></a>. It is a massive success that has enabled anyone to be a publisher, and created a global community of WordPress developers. But the WordPress open source project will be 23 years old this year. Hosting a website has changed dramatically during that time. When WordPress was born, AWS EC2 didn’t exist. In the intervening years, that task has gone from renting virtual private servers, to uploading a JavaScript bundle to a globally distributed network at virtually no cost. It’s time to upgrade the most popular CMS on the Internet to take advantage of this change.</p><p>Our name for this new CMS is EmDash. We think of it as the spiritual successor to WordPress. It’s written entirely in TypeScript. It is serverless, but you can run it on your own hardware or any platform you choose. Plugins are securely sandboxed and can run in their own <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/"><u>isolate</u></a>, via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Dynamic Workers</u></a>, solving the fundamental security problem with the WordPress plugin architecture. And under the hood, EmDash is powered by <a href="https://astro.build/"><u>Astro</u></a>, the fastest web framework for content-driven websites.</p><p>EmDash is fully open source, MIT licensed, and <a href="https://github.com/emdash-cms/emdash"><u>available on GitHub</u></a>. 
While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license. We hope that allows more developers to adapt, extend, and participate in EmDash’s development.</p><p>You can deploy the EmDash v0.1.0 preview to your own Cloudflare account, or to any Node.js server today as part of our early developer beta:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/emdash-cms/templates/tree/main/blog-cloudflare"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Or you can try out the admin interface here in the <a href="https://emdashcms.com/"><u>EmDash Playground</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50n8mewREzoxOFq2jDzpT9/6a38dbfbaeec2d21040137e574a935ad/CleanShot_2026-04-01_at_07.45.29_2x.png" />
          </figure>
    <div>
      <h3>What WordPress has accomplished</h3>
      <a href="#what-wordpress-has-accomplished">
        
      </a>
    </div>
    <p>The story of WordPress is a triumph of open source that enabled publishing at a scale never before seen. Few projects have had the same recognisable impact on the generation raised on the Internet. The contributors to WordPress’s core, and its many thousands of plugin and theme developers, have built a platform that democratised publishing for millions, with many lives and livelihoods transformed by this ubiquitous software.</p><p>There will always be a place for WordPress, but there is also a lot more space for the world of content publishing to grow. A decade ago, people picking up a keyboard universally learned to publish their blogs with WordPress. Today it’s just as likely that person picks up Astro, or another TypeScript framework to learn and build with. The ecosystem needs an option that empowers a wide audience, in the same way it needed WordPress 23 years ago. </p><p>EmDash is committed to building on what WordPress created: an open source publishing stack that anyone can install and use at little cost, while fixing the core problems that WordPress cannot solve. </p>
    <div>
      <h3>Solving the WordPress plugin security crisis</h3>
      <a href="#solving-the-wordpress-plugin-security-crisis">
        
      </a>
    </div>
    <p>WordPress’s plugin architecture is fundamentally insecure. <a href="https://patchstack.com/whitepaper/state-of-wordpress-security-in-2025/"><u>96% of security issues</u></a> for WordPress sites originate in plugins. In 2025, more high severity vulnerabilities <a href="https://patchstack.com/whitepaper/state-of-wordpress-security-in-2026/"><u>were found in the WordPress ecosystem</u></a> than the previous two years combined.</p><p>Why, after over two decades, is WordPress plugin security so problematic?</p><p>A WordPress plugin is a PHP script that hooks directly into WordPress to add or modify functionality. There is no isolation: a WordPress plugin has direct access to the WordPress site’s database and filesystem. When you install a WordPress plugin, you are trusting it with access to nearly everything, and trusting it to handle every malicious input or edge case perfectly.</p><p>EmDash solves this. In EmDash, each plugin runs in its own isolated sandbox: a <a href="https://developers.cloudflare.com/dynamic-workers/"><u>Dynamic Worker</u></a>. Rather than giving direct access to underlying data, EmDash provides the plugin with <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>capabilities via bindings</u></a>, based on what the plugin explicitly declares that it needs in its manifest. This security model has a strict guarantee: an EmDash plugin can only perform the actions explicitly declared in its manifest. You can know and trust upfront, before installing a plugin, exactly what you are granting it permission to do, similar to going through an OAuth flow and granting a third-party app a specific set of scoped permissions.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JDq2oEgwONHL8uUJsrof2/fb2ae5fcacd5371aaab575c35ca2ce2e/image8.png" />
          </figure><p>For example, a plugin that sends an email after a content item gets saved looks like this:</p>
            <pre><code>import { definePlugin } from "emdash";

export default () =&gt;
  definePlugin({
    id: "notify-on-publish",
    version: "1.0.0",
    capabilities: ["read:content", "email:send"],
    hooks: {
      "content:afterSave": async (event, ctx) =&gt; {
        if (event.collection !== "posts" || event.content.status !== "published") return;

        await ctx.email!.send({
          to: "editors@example.com",
          subject: `New post published: ${event.content.title}`,
          text: `"${event.content.title}" is now live.`,
        });

        ctx.log.info(`Notified editors about ${event.content.id}`);
      },
    },
  });</code></pre>
            <p>This plugin explicitly requests two capabilities: <code>read:content</code> to read the saved content, and <code>email:send</code> to access the <code>ctx.email</code> function, and it hooks into the content lifecycle via <code>content:afterSave</code>. It is impossible for the plugin to do anything other than use these capabilities. It has no external network access. If it does need network access, it can specify the exact hostname it needs to talk to as part of its definition, and be granted the ability to communicate with that hostname only.</p><p>And because the plugin’s needs are declared statically, upfront, it is always clear at install time exactly what the plugin is asking for permission to do. A platform or administrator could define rules for what plugins are or aren’t allowed to be installed by certain groups of users, based on what permissions they request, rather than an allowlist of approved or safe plugins.</p>
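<p>On the host side, one way to honor such a manifest (an illustrative sketch, not EmDash's actual implementation) is to construct the plugin's context so that undeclared capabilities simply never exist inside the sandbox:</p>

```typescript
// Hypothetical host-side capability binding. Names and types are invented
// to illustrate the model, not taken from the EmDash codebase.
type Capability = "read:content" | "email:send";

interface PluginContext {
  email?: { send(msg: { to: string; subject: string; text: string }): Promise<void> };
  log: { info(msg: string): void };
}

function buildContext(capabilities: Capability[]): PluginContext {
  const ctx: PluginContext = { log: { info: (m) => console.log(m) } };
  // Only bind what the manifest declares: an undeclared capability is not
  // "denied at runtime", it just doesn't exist inside the sandbox.
  if (capabilities.includes("email:send")) {
    ctx.email = {
      send: async (_msg) => {
        /* hand off to the host's mailer outside the sandbox */
      },
    };
  }
  return ctx;
}
```

<p>Binding by construction, rather than checking permissions on each call, is what makes the guarantee easy to reason about: there is no code path to a capability that was never bound.</p>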
    <div>
      <h3>Solving plugin security means solving marketplace lock-in</h3>
      <a href="#solving-plugin-security-means-solving-marketplace-lock-in">
        
      </a>
    </div>
    <p>WordPress plugin security is such a real risk that WordPress.org <a href="https://developer.wordpress.org/plugins/wordpress-org/plugin-developer-faq/#where-do-i-submit-my-plugin"><u>manually reviews and approves each plugin</u></a> in its marketplace. At the time of writing, that review queue is over 800 plugins long, and takes at least two weeks to traverse. The vulnerability surface area of WordPress plugins is so wide that in practice, all parties rely on marketplace reputation, ratings and reviews. And because WordPress plugins run in the same execution context as WordPress itself and are so deeply intertwined with WordPress code, some argue they must carry forward WordPress’s GPL license.</p><p>These realities combine to create a chilling effect on developers building plugins, and on platforms hosting WordPress sites.</p><p>Plugin security is the root of this problem. Marketplace businesses provide trust when parties otherwise cannot easily trust each other. In the case of the WordPress marketplace, the plugin security risk is so large and probable that many of your customers can only reasonably trust your plugin via the marketplace. But in order to be part of the marketplace your code must be licensed in a way that forces you to give it away for free everywhere other than that marketplace. You are locked in.</p><p>EmDash plugins have two important properties that mitigate this marketplace lock-in:</p><ol><li><p><b>Plugins can have any license</b>: they run independently of EmDash and share no code. It’s the plugin author’s choice.</p></li><li><p><b>Plugin code runs independently in a secure sandbox</b>: a plugin can be provided to an EmDash site, and trusted, without the EmDash site ever seeing the code.</p></li></ol><p>The first part is straightforward: as the plugin author, you choose what license you want, the same way you can when publishing to NPM, PyPI, Packagist, or any other registry. 
It’s an open ecosystem for all, and up to the community, not the EmDash project, what license you use for plugins and themes.</p><p>The second part is where EmDash’s plugin architecture breaks free of the centralized marketplace.</p><p>Developers can rely far less on a third-party marketplace having vetted a plugin when deciding whether to use or trust it. Consider the example plugin above that sends emails after content is saved; the plugin declares three things:</p><ul><li><p>It only runs on the <code>content:afterSave</code> hook</p></li><li><p>It has the <code>read:content</code> capability</p></li><li><p>It has the <code>email:send</code> capability</p></li></ul><p>The plugin can have tens of thousands of lines of code in it, but unlike a WordPress plugin that has access to everything and can talk to the public Internet, the person adding the plugin knows exactly what access they are granting to it. The clearly defined boundaries allow you to make informed decisions about security risks and to zoom in on more specific risks that relate directly to the capabilities the plugin is given.</p><p>The more that both sites and platforms can trust the security model to provide constraints, the more that sites and platforms can trust plugins, and break free of centralized control of marketplaces and reputation. Put another way: if you trust that food safety is enforced in your city, you’ll be adventurous and try new places. If you can’t be sure there isn’t a staple in your soup, you’ll be consulting Google before every new place you try, and it’s harder for everyone to open new restaurants.</p>
    <div>
      <h3>Every EmDash site has x402 support built in — charge for access to content</h3>
      <a href="#every-emdash-site-has-x402-support-built-in-charge-for-access-to-content">
        
      </a>
    </div>
    <p>The business model of the web <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>is at risk</u></a>, particularly for content creators and publishers. The old way of making content widely accessible, allowing all clients free access in exchange for traffic, breaks when there is no human looking at a site to advertise to, and the client is instead their agent accessing the web on their behalf. Creators need ways to continue to make money in this new world of agents, and to build new kinds of websites that serve what people’s agents need and will pay for. Decades ago a new wave of creators created websites that became great businesses (often using WordPress to power them) and a similar opportunity exists today.</p><p><a href="https://www.x402.org/"><u>x402</u></a> is an open, neutral standard for Internet-native payments. It lets anyone on the Internet easily charge, and any client pay on-demand, on a pay-per-use basis. A client, such as an agent, sends an HTTP request and receives an HTTP 402 Payment Required status code. In response, the client pays for access on-demand, and the server can let the client through to the requested content.</p><p>EmDash has built-in support for x402. This means anyone with an EmDash site can charge for access to their content without requiring subscriptions and with zero engineering work. All you need to do is configure which content should require payment, set how much to charge, and provide a wallet address. The request/response flow ends up looking like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IKfYGHF6Pgi3jQf1ERRQC/48815ffec3e204f4f2c6f7a40f232a93/image4.png" />
          </figure><p>Every EmDash site has a built-in business model for the AI era.</p>
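<p>From the client's perspective, the flow above can be sketched like this (a simplified illustration; real x402 clients use the protocol's libraries and a concrete payment scheme, and the <code>pay</code> helper and header handling here are stand-ins):</p>

```typescript
// Minimal shape of a fetch-like function, so the flow can be tested with a stub.
type FetchLike = (url: string, init?: { headers?: Record<string, string> }) => Promise<{
  status: number;
  text(): Promise<string>;
}>;

// Request a resource; if the server answers 402 Payment Required, obtain a
// payment proof and retry with it attached.
async function fetchWithPayment(
  fetchImpl: FetchLike,
  url: string,
  pay: () => Promise<string>, // returns a payment proof/token (stand-in)
): Promise<string> {
  const first = await fetchImpl(url);
  if (first.status !== 402) return first.text(); // no payment demanded

  const proof = await pay();
  const second = await fetchImpl(url, { headers: { "X-PAYMENT": proof } });
  if (second.status !== 200) throw new Error(`payment not accepted: ${second.status}`);
  return second.text();
}
```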
    <div>
      <h3>Solving scale-to-zero for WordPress hosting platforms</h3>
      <a href="#solving-scale-to-zero-for-wordpress-hosting-platforms">
        
      </a>
    </div>
    <p>WordPress is not serverless: it requires provisioning and managing servers, scaling them up and down like a traditional web application. To maximize performance, and to be able to handle traffic spikes, there’s no avoiding the need to pre-provision instances and run some amount of idle compute, or share resources in ways that limit performance. This is particularly true for sites with content that must be server rendered and cannot be cached.</p><p>EmDash is different: it’s built to run on serverless platforms, and make the most out of the <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/"><u>V8 isolate architecture</u></a> of Cloudflare’s open source runtime <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>. On an incoming request, the Workers runtime instantly spins up an isolate to execute code and serve a response. It scales back down to zero if there are no requests. And it <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>only bills for CPU time</u></a> (time spent doing actual work).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yIX0whveiJ7xQ9P20TeyA/84462e6ec58cab27fbd6bf1703efeabc/image7.png" />
          </figure><p>You can run EmDash anywhere, on any Node.js server — but on Cloudflare you can run millions of instances of EmDash using <a href="https://developers.cloudflare.com/cloudflare-for-platforms/"><u>Cloudflare for Platforms</u></a> that each instantly scale fully to zero or up to as many RPS as you need to handle, using the exact same network and runtime that the biggest websites in the world rely on.</p><p>Beyond cost optimizations and performance benefits, we’ve bet on this architecture at Cloudflare in part because we believe in having low cost and free tiers, and that everyone should be able to build websites that scale. We’re excited to help platforms extend the benefits of this architecture to their own customers, both big and small.</p>
    <div>
      <h3>Modern frontend theming and architecture via Astro</h3>
      <a href="#modern-frontend-theming-and-architecture-via-astro">
        
      </a>
    </div>
    <p>EmDash is powered by Astro, the web framework for content-driven websites. To create an EmDash theme, you create an Astro project that includes:</p><ul><li><p><b>Pages</b>: Astro routes for rendering content (homepage, blog posts, archives, etc.)</p></li><li><p><b>Layouts:</b> Shared HTML structure</p></li><li><p><b>Components:</b> Reusable UI elements (navigation, cards, footers)</p></li><li><p><b>Styles:</b> CSS or Tailwind configuration</p></li><li><p><b>A seed file:</b> JSON that tells the CMS what content types and fields to create</p></li></ul><p>This makes creating themes familiar to frontend developers who are <a href="https://npm-stat.com/charts.html?package=astro&amp;from=2024-01-01&amp;to=2026-03-30"><u>increasingly choosing Astro</u></a>, and to LLMs which are already trained on Astro.</p><p>WordPress themes, though incredibly flexible, operate with a lot of the same security risks as plugins, and the more popular and commonplace your theme, the more of a target it is. WordPress themes hook into the site through <code>functions.php</code>, an all-encompassing execution environment that makes your theme both incredibly powerful and potentially dangerous. EmDash themes, like EmDash plugins, turn this expectation on its head. Your theme can never perform database operations.</p>
    <div>
      <h3>An AI Native CMS — MCP, CLI, and Skills for EmDash</h3>
      <a href="#an-ai-native-cms-mcp-cli-and-skills-for-emdash">
        
      </a>
    </div>
    <p>The least fun part about working with any CMS is doing the rote migration of content: finding and replacing strings, migrating custom fields from one format to another, renaming, reordering and moving things around. This is either boring, repetitive work or requires one-off scripts and “single-use” plugins and tools that are usually neither fun to write nor to use.</p><p>EmDash is designed to be managed programmatically by your AI agents. It provides the context and the tools that your agents need, including:</p><ol><li><p><b>Agent Skills:</b> Each EmDash instance includes <a href="https://agentskills.io/home"><u>Agent Skills</u></a> that describe to your agent the capabilities EmDash can provide to plugins, the hooks that can trigger plugins, <a href="https://github.com/emdash-cms/emdash/blob/main/skills/creating-plugins/SKILL.md"><u>guidance on how to structure a plugin</u></a>, and even <a href="https://github.com/emdash-cms/emdash/blob/main/skills/wordpress-theme-to-emdash/SKILL.md"><u>how to port legacy WordPress themes to EmDash natively</u></a>. When you give an agent an EmDash codebase, EmDash provides everything the agent needs to be able to customize your site in the way you need.</p></li><li><p><b>EmDash CLI:</b> The <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx"><u>EmDash CLI</u></a> enables your agent to interact programmatically with your local or remote instance of EmDash. 
You can <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx#media-upload-file"><u>upload media</u></a>, <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx#emdash-search"><u>search for content</u></a>, <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx#schema-create-collection"><u>create and manage schemas</u></a>, and do the same set of things you can do in the Admin UI.</p></li><li><p><b>Built-in MCP Server:</b> Every EmDash instance provides its own remote Model Context Protocol (MCP) server, allowing you to do the same set of things you can do in the Admin UI.</p></li></ol>
    <div>
      <h3>Pluggable authentication, with Passkeys by default</h3>
      <a href="#pluggable-authentication-with-passkeys-by-default">
        
      </a>
    </div>
    <p>EmDash uses passkey-based authentication by default, meaning there are no passwords to leak and no brute-force vectors to defend against. User management includes familiar role-based access control out of the box: administrators, editors, authors, and contributors, each scoped strictly to the actions they need. Authentication is pluggable, so you can set EmDash up to work with your SSO provider, and automatically provision access based on IdP metadata.</p>
    <div>
      <h3>Import your WordPress sites to EmDash</h3>
      <a href="#import-your-wordpress-sites-to-emdash">
        
      </a>
    </div>
    <p>You can import an existing WordPress site by either going to WordPress admin and exporting a WXR file, or by installing the <a href="https://github.com/emdash-cms/wp-emdash/tree/main/plugins/emdash-exporter"><u>EmDash Exporter plugin</u></a> on a WordPress site, which configures a secure endpoint that is only exposed to you, and protected by a WordPress Application Password you control. Migrating content takes just a few minutes, and automatically brings any attached media into EmDash’s media library.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/SUFaWUIoEFSN2z9rclKZW/28870489d502cff34e35ab3b59f19eae/image1.png" />
          </figure><p>Creating any custom content types on WordPress that are not a Post or a Page has meant installing heavy plugins like Advanced Custom Fields, and squeezing the result into a crowded WordPress posts table. EmDash does things differently: you can define a schema directly in the admin panel, which will create entirely new EmDash collections for you, stored separately in the database. On import, you can use the same capabilities to take any custom post types from WordPress and create an EmDash content type from each. </p><p>For bespoke blocks, you can use the <a href="https://github.com/emdash-cms/emdash/blob/main/skills/creating-plugins/references/block-kit.md"><u>EmDash Block Kit Agent Skill</u></a> to instruct your agent of choice to build them for EmDash.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xutdF9nvHYMYlN6XfqRGu/1db0e0d73327e926d606f92fdd7aabec/image3.png" />
          </figure>
    <div>
      <h3>Try it</h3>
      <a href="#try-it">
        
      </a>
    </div>
    <p>EmDash is a v0.1.0 preview, and we’d love for you to try it and give feedback; we welcome contributions to the <a href="https://github.com/emdash-cms/emdash/"><u>EmDash GitHub repository</u></a>.</p><p>If you’re just playing around and want to first understand what’s possible — try out the admin interface in the <a href="https://emdashcms.com/"><u>EmDash Playground</u></a>.</p><p>To create a new EmDash site locally, via the CLI, run:</p><p><code>npm create emdash@latest</code></p><p>Or you can do the same via the Cloudflare dashboard below:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/emdash-cms/templates/tree/main/blog-cloudflare"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>We’re excited to see what you build, and if you're active in the WordPress community, as a hosting platform, a plugin or theme author, or otherwise — we’d love to hear from you. Email us at emdash@cloudflare.com, and tell us what you’d like to see from the EmDash project.</p><p>If you want to stay up to date with major EmDash developments, you can leave your email address <a href="https://forms.gle/ofE1LYRYxkpAPqjE7"><u>here</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">64rkKr9jewVmxagIFgbwY4</guid>
            <dc:creator>Matt “TK” Taylor</dc:creator>
            <dc:creator>Matt Kane</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we use Abstract Syntax Trees (ASTs) to turn Workflows code into visual diagrams ]]></title>
            <link>https://blog.cloudflare.com/workflow-diagrams/</link>
            <pubDate>Fri, 27 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workflows are now visualized via step diagrams in the dashboard. Here’s how we translate your TypeScript code into a visual representation of the workflow.  ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Cloudflare Workflows</u></a> is a durable execution engine that lets you chain steps, retry on failure, and persist state across long-running processes. Developers use Workflows to power background agents, manage data pipelines, build human-in-the-loop approval systems, and more.</p><p>Last month, we <a href="https://developers.cloudflare.com/changelog/post/2026-02-03-workflows-visualizer/"><u>announced</u></a> that every workflow deployed to Cloudflare now has a complete visual diagram in the dashboard.</p><p>We built this because being able to visualize your applications is more important now than ever before. Coding agents are writing code that you may or may not be reading. However, the shape of what gets built still matters: how the steps connect, where they branch, and what's actually happening.</p><p>If you've seen diagrams from visual workflow builders before, those usually work from something declarative: JSON configs, YAML, drag-and-drop. However, Cloudflare Workflows are just code. They can include <a href="https://developers.cloudflare.com/workflows/build/workers-api/"><u>Promises, Promise.all, loops, conditionals,</u></a> and can be nested in functions or classes. This dynamic execution model makes rendering a diagram a bit more complicated.</p><p>We use Abstract Syntax Trees (ASTs) to statically derive the graph, tracking <code>Promise</code> and <code>await</code> relationships to understand what runs in parallel, what blocks, and how the pieces connect.</p><p>Keep reading to learn how we built these diagrams, or deploy your first workflow and see the diagram for yourself.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workflows-starter-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Here’s an example of a diagram generated from Cloudflare Workflows code:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44NnbqiNda2vgzIEneHQ3W/044856325693fbeb75ed1ab38b4db1c2/image1.png" />
          </figure>
    <div>
      <h3>Dynamic workflow execution</h3>
      <a href="#dynamic-workflow-execution">
        
      </a>
    </div>
    <p>Generally, workflow engines can execute according to either dynamic or sequential (static) execution order. Sequential execution might seem like the more intuitive solution: trigger workflow → step A → step B → step C, where step B starts executing immediately after the engine completes step A, and so forth.</p><p><a href="https://developers.cloudflare.com/workflows/"><u>Cloudflare Workflows</u></a> follow the dynamic execution model. Since workflows are just code, the steps execute as the runtime encounters them. When the runtime discovers a step, that step gets passed over to the workflow engine, which manages its execution. The steps are not inherently sequential unless awaited — the engine executes all unawaited steps in parallel. This way, you can write your workflow code as flow control without additional wrappers or directives. Here’s how the handoff works:</p><ol><li><p>An <i>engine</i>, which is a “supervisor” Durable Object for that instance, spins up. The engine is responsible for the logic of the actual workflow execution. </p></li><li><p>The engine triggers a <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#user-workers"><u>user worker</u></a> via <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/"><u>dynamic dispatch</u></a>, passing control over to the Workers runtime.</p></li><li><p>When the runtime encounters a <code>step.do</code>, it passes execution back to the engine.</p></li><li><p>The engine executes the step, persists the result (or throws an error, if applicable), and triggers the user Worker again.</p></li></ol><p>With this architecture, the engine does not inherently “know” the order of the steps that it is executing — but for a diagram, the order of steps becomes crucial information. The challenge lies in translating the vast majority of workflows accurately into a diagnostically useful graph; with the diagrams in beta, we will continue to iterate on and improve these representations.</p>
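<p>A toy model of this handoff makes the difference concrete. The following sketch is a mock, not the real Workflows API: a fake <code>step.do</code> records when each step starts and finishes, showing that unawaited steps overlap while an awaited step waits its turn:</p>

```typescript
// Mock of the dynamic execution model (not the real Workflows API):
// a step "starts" when the runtime encounters it, so unawaited steps
// overlap, while an awaited step serializes behind them.
const events: string[] = [];
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

const step = {
  async do<T>(name: string, fn: () => Promise<T>): Promise<T> {
    events.push(`start:${name}`); // the "engine" discovers the step here
    const result = await fn();
    events.push(`done:${name}`);
    return result;
  },
};

async function run() {
  // Unawaited: both steps start before either completes.
  const a = step.do("a", async () => { await sleep(20); return "a"; });
  const b = step.do("b", async () => { await sleep(10); return "b"; });
  await Promise.all([a, b]);
  // Awaited: "c" starts only after "a" and "b" have resolved.
  await step.do("c", async () => "c");
}

const finished = run().then(() => console.log(events.join(" ")));
// → start:a start:b done:b done:a start:c done:c
```

<p>The real engine is far more involved, but the property is the same: steps run concurrently until an <code>await</code> forces an ordering.</p>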
    <div>
      <h3>Parsing the code</h3>
      <a href="#parsing-the-code">
        
      </a>
    </div>
    <p>Fetching the script at <a href="https://developers.cloudflare.com/workers/get-started/guide/#4-deploy-your-project"><u>deploy time</u></a>, instead of run time, allows us to parse the workflow in its entirety to statically generate the diagram. </p><p>Taking a step back, here is the life of a workflow deployment:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zoOCYji26ahxzh594VavQ/63ad96ae033653ffc7fd98df01ea6e27/image5.png" />
          </figure><p>To create the diagram, we fetch the script after it has been bundled by the internal configuration service that deploys Workers (step 2 under Workflow deployment). Then, we use a parser to create an abstract syntax tree (AST) representing the workflow, and our internal service generates and traverses an intermediate graph with all WorkflowEntrypoints and calls to workflow steps. We render the diagram from the final result returned by our API.</p><p>When a Worker is deployed, the configuration service bundles (using <a href="https://esbuild.github.io/"><u>esbuild</u></a> by default) and minifies the code <a href="https://developers.cloudflare.com/workers/wrangler/configuration/#inheritable-keys"><u>unless specified otherwise</u></a>. This presents another challenge — while Workflows in TypeScript follow an intuitive pattern, their minified JavaScript (JS) can be dense and indigestible. There are also different ways that code can be minified, depending on the bundler.</p><p>Here’s an example of Workflow code that shows <b>agents executing in parallel:</b></p>
            <pre><code>const summaryPromise = step.do(
         `summary agent (loop ${loop})`,
         async () =&gt; {
           return runAgentPrompt(
             this.env,
             SUMMARY_SYSTEM,
             buildReviewPrompt(
               'Summarize this text in 5 bullet points.',
               draft,
               input.context
             )
           );
         }
       );
        const correctnessPromise = step.do(
         `correctness agent (loop ${loop})`,
         async () =&gt; {
           return runAgentPrompt(
             this.env,
             CORRECTNESS_SYSTEM,
             buildReviewPrompt(
               'List correctness issues and suggested fixes.',
               draft,
               input.context
             )
           );
         }
       );
        const clarityPromise = step.do(
         `clarity agent (loop ${loop})`,
         async () =&gt; {
           return runAgentPrompt(
             this.env,
             CLARITY_SYSTEM,
             buildReviewPrompt(
               'List clarity issues and suggested fixes.',
               draft,
               input.context
             )
           );
         }
       );</code></pre>
            <p>Bundling with <a href="https://rspack.rs/"><u>rspack</u></a>, a snippet of the minified code looks like this:</p>
            <pre><code>class pe extends e{async run(e,t){de("workflow.run.start",{instanceId:e.instanceId});const r=await t.do("validate payload",async()=&gt;{if(!e.payload.r2Key)throw new Error("r2Key is required");if(!e.payload.telegramChatId)throw new Error("telegramChatId is required");return{r2Key:e.payload.r2Key,telegramChatId:e.payload.telegramChatId,context:e.payload.context?.trim()}}),s=await t.do("load source document from r2",async()=&gt;{const e=await this.env.REVIEW_DOCUMENTS.get(r.r2Key);if(!e)throw new Error(`R2 object not found: ${r.r2Key}`);const t=(await e.text()).trim();if(!t)throw new Error("R2 object is empty");return t}),n=Number(this.env.MAX_REVIEW_LOOPS??"5"),o=this.env.RESPONSE_TIMEOUT??"7 days",a=async(s,i,c)=&gt;{if(s&gt;n)return le("workflow.loop.max_reached",{instanceId:e.instanceId,maxLoops:n}),await t.do("notify max loop reached",async()=&gt;{await se(this.env,r.telegramChatId,`Review stopped after ${n} loops for ${e.instanceId}. Start again if you still need revisions.`)}),{approved:!1,loops:n,finalText:i};const h=t.do(`summary agent (loop ${s})`,async()=&gt;te(this.env,"You summarize documents. Keep the output short, concrete, and factual.",ue("Summarize this text in 5 bullet points.",i,r.context)))...</code></pre>
            <p>Or, bundling with <a href="https://vite.dev/"><u>vite</u></a>, here is a minified snippet:</p>
            <pre><code>class ht extends pe {
  async run(e, r) {
    b("workflow.run.start", { instanceId: e.instanceId });
    const s = await r.do("validate payload", async () =&gt; {
      if (!e.payload.r2Key)
        throw new Error("r2Key is required");
      if (!e.payload.telegramChatId)
        throw new Error("telegramChatId is required");
      return {
        r2Key: e.payload.r2Key,
        telegramChatId: e.payload.telegramChatId,
        context: e.payload.context?.trim()
      };
    }), n = await r.do(
      "load source document from r2",
      async () =&gt; {
        const i = await this.env.REVIEW_DOCUMENTS.get(s.r2Key);
        if (!i)
          throw new Error(`R2 object not found: ${s.r2Key}`);
        const c = (await i.text()).trim();
        if (!c)
          throw new Error("R2 object is empty");
        return c;
      }
    ), o = Number(this.env.MAX_REVIEW_LOOPS ?? "5"), l = this.env.RESPONSE_TIMEOUT ?? "7 days", a = async (i, c, u) =&gt; {
      if (i &gt; o)
        return H("workflow.loop.max_reached", {
          instanceId: e.instanceId,
          maxLoops: o
        }), await r.do("notify max loop reached", async () =&gt; {
          await J(
            this.env,
            s.telegramChatId,
            `Review stopped after ${o} loops for ${e.instanceId}. Start again if you still need revisions.`
          );
        }), {
          approved: !1,
          loops: o,
          finalText: c
        };
      const h = r.do(
        `summary agent (loop ${i})`,
        async () =&gt; _(
          this.env,
          et,
          K(
            "Summarize this text in 5 bullet points.",
            c,
            s.context
          )
        )
      )...</code></pre>
            <p>Minified code can get pretty gnarly — and depending on the bundler, it can get gnarly in a bunch of different directions.</p><p>We needed a way to parse the various forms of minified code quickly and precisely. We decided that <code>oxc-parser</code> from the <a href="https://oxc.rs/"><u>JavaScript Oxidation Compiler</u></a> (OXC) was perfect for the job. We first tested this idea with a container running Rust. Every script ID was sent to a <a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queue</u></a>, after which messages were popped and sent to the container for processing. Once we confirmed this approach worked, we moved to a Worker written in Rust. Workers supports running <a href="https://developers.cloudflare.com/workers/languages/rust/"><u>Rust via WebAssembly</u></a>, and the package was small enough to make this straightforward.</p><p>The Rust Worker is responsible for first converting the minified JS into AST node types, then converting the AST node types into the graphical version of the workflow that is rendered on the dashboard. To do this, we generate a graph of pre-defined <a href="https://developers.cloudflare.com/workflows/build/visualizer/"><u>node types</u></a> for each workflow and translate it into our graph representation through a series of node mappings.</p>
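<p>To illustrate the first half of that conversion, here is a hedged TypeScript sketch using our own simplified node types (not oxc-parser’s real AST): step calls in minified code are matched by call shape, <code>x.do(name, fn)</code>, because minifiers rename the <code>step</code> binding to <code>t</code>, <code>r</code>, and so on:</p>

```typescript
// Simplified ESTree-like node types (ours, for illustration; the real
// pipeline parses full JavaScript with oxc-parser in a Rust Worker).
type Node =
  | { type: "Call"; callee: Node; args: Node[] }
  | { type: "Member"; object: Node; property: string }
  | { type: "Ident"; name: string }
  | { type: "Str"; value: string }
  | { type: "Fn"; body: Node[] };

// Collect step names by call shape: any `x.do(name, fn)` counts,
// no matter what the minifier renamed the `step` binding to.
function collectSteps(node: Node, out: string[] = []): string[] {
  if (node.type === "Call") {
    const callee = node.callee;
    if (callee.type === "Member" && callee.property === "do") {
      const name = node.args[0];
      if (name && name.type === "Str") out.push(name.value);
    }
    collectSteps(callee, out);
    node.args.forEach((arg) => collectSteps(arg, out));
  } else if (node.type === "Fn") {
    node.body.forEach((stmt) => collectSteps(stmt, out));
  } else if (node.type === "Member") {
    collectSteps(node.object, out);
  }
  return out;
}

// Minified shape: r.do("validate payload", async () => { t.do("inner", ...) })
const ast: Node = {
  type: "Call",
  callee: { type: "Member", object: { type: "Ident", name: "r" }, property: "do" },
  args: [
    { type: "Str", value: "validate payload" },
    {
      type: "Fn",
      body: [
        {
          type: "Call",
          callee: { type: "Member", object: { type: "Ident", name: "t" }, property: "do" },
          args: [{ type: "Str", value: "inner" }, { type: "Fn", body: [] }],
        },
      ],
    },
  ],
};

console.log(collectSteps(ast)); // → [ 'validate payload', 'inner' ]
```

<p>The real parser works over oxc-parser’s full AST, but the shape-matching idea is the same.</p>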
    <div>
      <h3>Rendering the diagram</h3>
      <a href="#rendering-the-diagram">
        
      </a>
    </div>
    <p>There were two challenges to rendering a diagram version of the workflow: how to track step and function relationships correctly, and how to define the workflow node types as simply as possible while covering all the surface area.</p><p>To guarantee that step and function relationships are tracked correctly, we needed to collect both the function and step names. As we discussed earlier, the engine only has information about the steps, but a step may be dependent on a function, or vice versa. For example, developers might wrap steps in functions or define functions as steps. They could also call steps within a function that come from different <a href="https://blog.cloudflare.com/workers-javascript-modules/"><u>modules</u></a> or rename steps.</p><p>Although the library passes the initial hurdle by giving us the AST, we still have to decide how to parse it. Some code patterns require additional creativity. For example, functions — within a <code>WorkflowEntrypoint</code>, there can be functions that call steps directly, indirectly, or not at all. Consider <code>functionA</code>, which contains <code>console.log(await functionB(), await functionC())</code>, where <code>functionB</code> calls a <code>step.do()</code>. In that case, both <code>functionA</code> and <code>functionB</code> should be included on the workflow diagram; however, <code>functionC</code> should not. To catch all functions that include direct and indirect step calls, we create a subgraph for each function and check whether it contains a step call itself or whether it calls another function that might. Those subgraphs are represented by a function node, which contains all of its relevant nodes. If a function node is a leaf of the graph, meaning it has no direct or indirect workflow steps within it, it is trimmed from the final output.
</p><p>We check for other patterns as well, including a list of static steps from which we can infer the workflow diagram, or variables defined in up to ten different ways. If your script contains multiple workflows, we follow a similar pattern to the subgraphs created for functions, abstracted one level higher.</p><p>For every AST node type, we had to consider every way it could be used inside a workflow: loops, branches, promises, parallels, awaits, arrow functions… the list goes on. Even within these paths, there are dozens of possibilities. Consider just a few of the possible ways to loop:</p>
            <pre><code>// for...of
for (const item of items) {
	await step.do(`process ${item}`, async () =&gt; item);
}
// while
while (shouldContinue) {
	await step.do('poll', async () =&gt; getStatus());
}
// map
await Promise.all(
	items.map((item) =&gt; step.do(`map ${item}`, async () =&gt; item)),
);
// forEach
await items.forEach(async (item) =&gt; {
	await step.do(`each ${item}`, async () =&gt; item);
});</code></pre>
            <p>And beyond looping, how to handle branching:</p>
            <pre><code>// switch / case
switch (action.type) {
	case 'create':
		await step.do('handle create', async () =&gt; {});
		break;
	default:
		await step.do('handle unknown', async () =&gt; {});
		break;
}

// if / else if / else
if (status === 'pending') {
	await step.do('pending path', async () =&gt; {});
} else if (status === 'active') {
	await step.do('active path', async () =&gt; {});
} else {
	await step.do('fallback path', async () =&gt; {});
}

// ternary operator
await (cond
	? step.do('ternary true branch', async () =&gt; {})
	: step.do('ternary false branch', async () =&gt; {}));

// nullish coalescing with step on RHS
const myStepResult =
	variableThatCanBeNullUndefined ??
	(await step.do('nullish fallback step', async () =&gt; 'default'));

// try/catch with finally
try {
	await step.do('try step', async () =&gt; {});
} catch (_e) {
	await step.do('catch step', async () =&gt; {});
} finally {
	await step.do('finally step', async () =&gt; {});
}</code></pre>
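            <p>These control-flow patterns also feed the function subgraphs described earlier. Here is a hedged TypeScript sketch (hypothetical names and data layout, not the real implementation) of the trimming rule for the <code>functionA</code>/<code>functionB</code>/<code>functionC</code> example: a function node survives only if it reaches a step call directly or transitively.</p>

```typescript
// Hedged sketch (hypothetical names): a function belongs on the diagram
// only if it calls step.do directly, or transitively calls a function
// that does; leaf functions with no step reach are trimmed.
interface FnInfo {
  callsStepDirectly: boolean;
  calls: string[]; // names of the other functions it invokes
}

function reachesStep(
  name: string,
  fns: Map<string, FnInfo>,
  seen: Set<string> = new Set()
): boolean {
  if (seen.has(name)) return false; // guard against recursive call cycles
  seen.add(name);
  const info = fns.get(name);
  if (!info) return false;
  if (info.callsStepDirectly) return true;
  return info.calls.some((callee) => reachesStep(callee, fns, seen));
}

// functionA calls functionB (which has a step) and functionC (which doesn't).
const fns = new Map<string, FnInfo>([
  ["functionA", { callsStepDirectly: false, calls: ["functionB", "functionC"] }],
  ["functionB", { callsStepDirectly: true, calls: [] }],
  ["functionC", { callsStepDirectly: false, calls: [] }],
]);

const kept = [...fns.keys()].filter((f) => reachesStep(f, fns));
console.log(kept); // → [ 'functionA', 'functionB' ]
```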
            <p>Our goal was to create a concise API that communicates what developers need to know without overcomplicating it. But converting a workflow into a diagram meant accounting for every possible pattern (whether it follows best practices or not) and edge case. As we discussed earlier, by default no step is explicitly sequential with any other step. If a workflow does not use <code>await</code> and <code>Promise.all()</code>, we assume that the steps will execute in the order in which they are encountered. But if a workflow includes <code>await</code>, <code>Promise</code>, or <code>Promise.all()</code>, we need a way to track those relationships.</p><p>We decided on tracking execution order, where each node has a <code>starts:</code> and <code>resolves:</code> field. The <code>starts</code> and <code>resolves</code> indices tell us when a promise starts executing and when it resolves, relative to the first promise that started without being immediately awaited. This correlates to vertical positioning in the diagram UI (i.e., all steps with <code>starts:1</code> will be inline). If steps are awaited when they are declared, then <code>starts</code> and <code>resolves</code> will be undefined, and the workflow will execute in the order of the steps’ appearance to the runtime.</p><p>While parsing, when we encounter an unawaited <code>Promise</code> or <code>Promise.all()</code>, the corresponding node (or nodes) is marked with an entry number, surfaced in the <code>starts</code> field. If we encounter an <code>await</code> on that promise, the entry number is incremented by one and saved as the exit number (which is the value in <code>resolves</code>). This lets us know which promises run at the same time and when they’ll complete in relation to each other.</p>
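            <p>The counter logic can be sketched in a few lines of TypeScript. The names here (<code>ExecutionOrder</code>, <code>start</code>, <code>resolve</code>) are hypothetical, for illustration only:</p>

```typescript
// Hedged sketch (names are ours, not the real implementation) of the
// starts/resolves bookkeeping: a single counter gives unawaited promises
// their `starts` index, and advances to produce `resolves` on each await.
interface DiagramNode {
  name: string;
  starts?: number;
  resolves?: number;
}

class ExecutionOrder {
  private clock = 1;
  private pending = new Map<string, DiagramNode>();
  readonly nodes: DiagramNode[] = [];

  // An unawaited step is discovered: mark it with the current entry number.
  start(name: string): void {
    const node: DiagramNode = { name, starts: this.clock };
    this.pending.set(name, node);
    this.nodes.push(node);
  }

  // The promise for `name` is awaited: increment the counter and record
  // the new value as the exit number.
  resolve(name: string): void {
    const node = this.pending.get(name);
    if (!node) return;
    this.clock += 1;
    node.resolves = this.clock;
    this.pending.delete(name);
  }
}

const order = new ExecutionOrder();
order.start("task a");   // starts: 1
order.start("task b");   // starts: 1
order.start("task c");   // starts: 1
order.resolve("task c"); // resolves: 2
order.start("task d");   // starts: 2
order.resolve("task d"); // resolves: 3
```

<p>The workflow below shows the same kind of indices annotated on a full example:</p>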
            <pre><code>export class ImplicitParallelWorkflow extends WorkflowEntrypoint&lt;Env, Params&gt; {
 async run(event: WorkflowEvent&lt;Params&gt;, step: WorkflowStep) {
   const branchA = async () =&gt; {
     const a = step.do("task a", async () =&gt; "a"); //starts 1
     const b = step.do("task b", async () =&gt; "b"); //starts 1
     const c = await step.waitForEvent("task c", { type: "my-event", timeout: "1 hour" }); //starts 1 resolves 2
     await step.do("task d", async () =&gt; JSON.stringify(c)); //starts 2 resolves 3
     return Promise.all([a, b]); //resolves 3
   };

   const branchB = async () =&gt; {
     const e = step.do("task e", async () =&gt; "e"); //starts 1
     const f = step.do("task f", async () =&gt; "f"); //starts 1
     return Promise.all([e, f]); //resolves 2
   };

   await Promise.all([branchA(), branchB()]);

   await step.sleep("final sleep", 1000);
 }
}</code></pre>
            <p>You can see the steps’ alignment in the diagram:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EZJ38J3H55yH0OnT11vgg/6dde06725cd842725ee3af134b1505c0/image3.png" />
          </figure><p>After accounting for all of those patterns, we settled on the following list of node types: 	</p>
            <pre><code>| StepSleep
| StepDo
| StepWaitForEvent
| StepSleepUntil
| LoopNode
| ParallelNode
| TryNode
| BlockNode
| IfNode
| SwitchNode
| StartNode
| FunctionCall
| FunctionDef
| BreakNode;</code></pre>
            <p>Here are a few samples of API output for different behaviors: </p><p><code>function</code> call:</p>
            <pre><code>{
  "functions": {
    "runLoop": {
      "name": "runLoop",
      "nodes": []
    }
  }
}</code></pre>
            <p><code>if</code> condition branching to <code>step.do</code>:</p>
            <pre><code>{
  "type": "if",
  "branches": [
    {
      "condition": "loop &gt; maxLoops",
      "nodes": [
        {
          "type": "step_do",
          "name": "notify max loop reached",
          "config": {
            "retries": {
              "limit": 5,
              "delay": 1000,
              "backoff": "exponential"
            },
            "timeout": 10000
          },
          "nodes": []
        }
      ]
    }
  ]
}</code></pre>
            <p><code>parallel</code> with <code>step.do</code> and <code>waitForEvent</code>:</p>
            <pre><code>{
  "type": "parallel",
  "kind": "all",
  "nodes": [
    {
      "type": "step_do",
      "name": "correctness agent (loop ${...})",
      "config": {
        "retries": {
          "limit": 5,
          "delay": 1000,
          "backoff": "exponential"
        },
        "timeout": 10000
      },
      "nodes": [],
      "starts": 1
    },
...
    {
      "type": "step_wait_for_event",
      "name": "wait for user response (loop ${...})",
      "options": {
        "event_type": "user-response",
        "timeout": "unknown"
      },
      "starts": 3,
      "resolves": 4
    }
  ]
}</code></pre>
            
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Ultimately, the goal of these Workflow diagrams is to serve as a full-service debugging tool. That means you’ll be able to:</p><ul><li><p>Trace an execution through the graph in real time</p></li><li><p>Discover errors, wait for human-in-the-loop approvals, and skip steps for testing</p></li><li><p>Access visualizations in local development</p></li></ul><p>Check out the diagrams on your <a href="https://dash.cloudflare.com/?to=/:account/workers/workflows"><u>Workflow overview pages</u></a>. If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers community on Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Workflows]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4HOWpzOgT3eVU2wFa4adFU</guid>
            <dc:creator>André Venceslau</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
    </channel>
</rss>