
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 27 Apr 2026 12:12:54 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building the agentic cloud: everything we launched during Agents Week 2026]]></title>
            <link>https://blog.cloudflare.com/agents-week-in-review/</link>
            <pubDate>Mon, 20 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Agents Week 2026 is a wrap. Let’s take a look at everything we announced, from compute and security to the agent toolbox, platform tools, and the emerging agentic web. Everything we shipped for the agentic cloud.
 ]]></description>
            <content:encoded><![CDATA[ <p>Today marks the end of our first Agents Week, an innovation week dedicated entirely to the age of agents. It couldn’t have been more timely: over the past year, agents have swiftly changed how people work. Coding agents are helping developers ship faster than ever. Support agents resolve tickets end-to-end. Research agents validate hypotheses across hundreds of sources in minutes. And people aren't just running one agent: they're running several in parallel and around the clock.</p><p>As Cloudflare's CTO Dane Knecht and VP of Product Rita Kozlov noted in our <a href="https://blog.cloudflare.com/welcome-to-agents-week/"><u>welcome to Agents Week post</u></a>, the potential scale of agents is staggering: if even a fraction of the world's knowledge workers each run a few agents in parallel, you need compute capacity for tens of millions of simultaneous sessions. The one-app-serves-many-users model the cloud was built on doesn't work for that. But that's exactly what developers and businesses want to do: build agents, deploy them to users, and run them at scale.</p><p>Getting there means solving problems across the entire stack. Agents need <b>compute</b> that scales from full operating systems to lightweight isolates. They need <b>security</b> and identity built into how they run. They need an <b>agent toolbox</b>: the right models, tools, and context to do real work. All the code that agents generate needs a clear path from afternoon <b>prototype to production</b> app. And finally, as agents drive a growing share of Internet traffic, the web itself needs to adapt for the emerging <b>agentic web</b>. Turns out, the containerless, serverless compute platform we launched eight years ago with Workers was ready-made for this moment. Since then, we've grown it into a full platform, and this week we shipped the next wave of primitives purpose-built for agents, organized around exactly those problems.</p><p>We are here to create Cloud 2.0 — the agentic cloud: infrastructure designed for a world where agents are a primary workload.</p><p>Here's a list of everything we announced this week — we wouldn’t want you to miss a thing.</p>
    <h2 id="compute">Compute</h2>
    <p>It starts with compute. Agents need somewhere to run, and somewhere to store and run the code they write. Not all agents need the same thing: some need a full operating system to install packages and run terminal commands, while most need something lightweight that starts in milliseconds and scales to millions. This week we shipped the environments to run them, as well as a new Git-compatible workspace for agents:</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/artifacts-git-for-agents-beta"><u>Artifacts: Versioned storage that speaks Git</u></a></p></td><td><p>Give your agents, developers, and automations a home for code and data. We’ve just launched Artifacts: Git-compatible versioned storage built for agents. Create tens of millions of repos, fork from any remote, and hand off a URL to any Git client.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/sandbox-ga/"><u>Agents have their own computers with Sandboxes GA</u></a></p></td><td><p>Cloudflare Sandboxes give AI agents a persistent, isolated environment: a real computer with a shell, a filesystem, and background processes that starts on demand and picks up exactly where it left off.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/sandbox-auth/"><u>Dynamic, identity-aware, and secure: egress controls for Sandboxes</u></a></p></td><td><p>Outbound Workers for Sandboxes provide a programmable, zero-trust egress proxy for AI agents. This allows developers to inject credentials and enforce dynamic security policies without exposing sensitive tokens to untrusted code.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/durable-object-facets-dynamic-workers/"><u>Durable Objects in Dynamic Workers: Give each AI-generated app its own database</u></a></p></td><td><p>Durable Object Facets allow Dynamic Workers to instantiate Durable Objects with their own isolated SQLite databases. This enables developers to build platforms that run persistent, stateful code generated on the fly.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workflows-v2/"><u>Rearchitecting the Workflows control plane for the agentic era</u></a></p></td><td><p>Cloudflare Workflows, a durable execution engine for multi-step applications, now supports concurrency limits of 50,000 and creation rate limits of 300 through a rearchitected control plane, scaling to meet the needs of durable background agents.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GiG5cEdgwoLU94AuUuLfS/f9022abc8ce823ef2468860474598b6f/BLOG-3239_2.png" />
          </figure>
    <h2 id="security">Security</h2>
    <p>Running agents and their code is only half the challenge. Agents connect to private networks, access internal services, and take autonomous actions on behalf of users. When anyone in an organization can spin up their own agents, security can't be an afterthought. It has to be the default. This week, we launched the tools to make that easy.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/mesh/"><u>Secure private networking for everyone: users, nodes, agents, Workers — introducing Cloudflare Mesh</u></a></p></td><td><p>Cloudflare Mesh provides secure, private network access for users, nodes, and autonomous AI agents. By integrating with Workers VPC, developers can now grant agents scoped access to private databases and APIs without manual tunnels.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/managed-oauth-for-access/"><u>Managed OAuth for Access: make internal apps agent-ready in one click</u></a></p></td><td><p>Managed OAuth for Cloudflare Access helps AI agents securely navigate internal applications. By adopting RFC 9728, agents can authenticate on behalf of users without using insecure service accounts.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/improved-developer-security/"><u>Securing non-human identities: automated revocation, OAuth, and scoped permissions</u></a></p></td><td><p>Cloudflare is introducing scannable API tokens, enhanced OAuth visibility, and GA for resource-scoped permissions. These tools help developers implement a true least-privilege architecture while protecting against credential leakage.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/enterprise-mcp/"><u>Scaling MCP adoption: our reference architecture for enterprise MCP deployments</u></a></p></td><td><p>We share Cloudflare's internal strategy for governing MCP using Access, AI Gateway, and MCP server portals. We also launch Code Mode to slash token costs and recommend new rules for detecting Shadow MCP in Cloudflare Gateway.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6dT2Kw2vS0AJvk6Nw1oAg2/c1b9ca10314f85664ce6a939339f1247/BLOG-3239_3.png" />
          </figure>
    <h2 id="agent-toolbox">Agent Toolbox</h2>
    <p>A capable agent needs to be able to think and remember, communicate, and see. This means being powered by the right models, with access to the right tools and the right context for the task at hand. This week we shipped the primitives — inference, search, memory, voice, email, and a browser — that turn an agent into something that actually gets work done.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/project-think/"><u>Project Think: building the next generation of AI agents on Cloudflare</u></a></p></td><td><p>Announcing a preview of the next edition of the Agents SDK — from lightweight primitives to a batteries-included platform for AI agents that think, act, and persist.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/voice-agents/"><u>Add voice to your agent</u></a></p></td><td><p>An experimental voice pipeline for the Agents SDK enables real-time voice interactions over WebSockets. Developers can now build agents with continuous STT and TTS in just ~30 lines of server-side code.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/email-for-agents/"><u>Cloudflare Email Service: now in public beta. Ready for your agents</u></a></p></td><td><p>Agents are becoming multi-channel. That means making them available wherever your users already are — including the inbox. Cloudflare Email Service enters public beta with the infrastructure layer to make that easy: send, receive, and process email natively from your agents.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-platform/"><u>Cloudflare's AI platform: an inference layer designed for agents</u></a></p></td><td><p>We're building Cloudflare into a unified inference layer for agents, letting developers call models from 14+ providers. New features include Workers binding for running third-party models and an expanded catalog with multimodal models.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/high-performance-llms/"><u>Building the foundation for running extra-large language models</u></a></p></td><td><p>We built a custom technology stack to run fast large language models on Cloudflare’s infrastructure. This post explores the engineering trade-offs and technical optimizations required to make high-performance AI inference accessible.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/unweight-tensor-compression/"><u>Unweight: how we compressed an LLM 22% without sacrificing quality</u></a></p></td><td><p>Running large LLMs across Cloudflare’s network requires us to be smarter and more efficient about GPU memory bandwidth. That’s why we developed Unweight, a lossless inference-time compression system that achieves up to a 22% model footprint reduction, so that we can deliver faster and cheaper inference than ever before.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-agent-memory/"><u>Agents that remember: introducing Agent Memory</u></a></p></td><td><p>Cloudflare Agent Memory is a managed service that gives AI agents persistent memory, allowing them to recall what matters, forget what doesn't, and get smarter over time.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-search-agent-primitive/"><u>AI Search: the search primitive for your agents</u></a></p></td><td><p>AI Search is the search primitive for your agents. Create instances dynamically, upload files, and search across instances with hybrid retrieval and relevance boosting.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/browser-run-for-ai-agents/"><u>Browser Run: give your agents a browser</u></a></p></td><td><p>Browser Rendering is now Browser Run, with Live View, Human in the Loop, CDP access, session recordings, and 4x higher concurrency limits for AI agents.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3sTmO0Pt0PabTR2g8pZ963/a3f523a17db3a13301072232664b11c2/BLOG-3239_4.png" />
          </figure>
    <h2 id="prototype-to-production">Prototype to production</h2>
    <p>The best infrastructure is also one that’s easy to use. We want to meet developers and their agents where they’re already working (in the terminal, in the editor, in a prompt) and make the full Cloudflare platform accessible without context-switching.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/cf-cli-local-explorer/"><u>Building a CLI for all of Cloudflare</u></a></p></td><td><p>We’re introducing cf, a new unified CLI designed for consistency across the Cloudflare platform, alongside Local Explorer for debugging local data. These tools simplify how developers and AI agents interact with our nearly 3,000 API operations.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-agent-lee/"><u>Introducing Agent Lee - a new interface to the Cloudflare stack</u></a></p></td><td><p>Agent Lee is an in-dashboard agent that shifts Cloudflare’s interface from manual tab-switching to a single prompt. Using sandboxed TypeScript, it helps you troubleshoot and manage your stack as a grounded technical collaborator.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/flagship/"><u>Introducing Flagship: feature flags built for the age of AI</u></a></p></td><td><p>Introducing Flagship, a native feature flag service built on Cloudflare’s global network to eliminate the latency of third-party providers. By using KV and Durable Objects, Flagship allows for sub-millisecond flag evaluation.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/deploy-planetscale-postgres-with-workers/"><u>Deploy Postgres and MySQL databases with PlanetScale + Workers</u></a></p></td><td><p>Learn how to deploy PlanetScale Postgres and MySQL databases via Cloudflare and connect them to Cloudflare Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/registrar-api-beta/"><u>Register domains wherever you build: Cloudflare Registrar API now in beta</u></a></p></td><td><p>The Cloudflare Registrar API is now in beta. Developers and AI agents can search, check availability, and register domains at cost directly from their editor, their terminal, or their agent — without leaving their workflow.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BuOIU5Q3gfMIekZJEiBra/732c42b78aa397e7bb19b3816c7d1516/BLOG-3239_5.png" />
          </figure>
    <h2 id="agentic-web">Agentic Web</h2>
    <p>As more agents come online, they're still browsing an Internet that was built for people. Existing websites need new tools to control which bots can access their content, package and present it for agents, and measure how ready they are for this shift.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/agent-readiness/"><u>Introducing the Agent Readiness score. Is your site agent-ready?</u></a></p></td><td><p>The Agent Readiness score can help site owners understand how well their websites support AI agents. Here we explore new standards, share Radar data, and detail how we made Cloudflare’s docs the most agent-friendly on the web.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-redirects/"><u>Redirects for AI Training enforces canonical content</u></a></p></td><td><p>Soft directives don’t stop crawlers from ingesting deprecated content. Redirects for AI Training allows anybody on Cloudflare to redirect verified crawlers to canonical pages with one toggle and no origin changes.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/network-performance-agents-week/"><u>Agents Week: Network performance update</u></a></p></td><td><p>By migrating our request handling layer to a Rust-based architecture called FL2, Cloudflare has increased its performance lead and is now fastest in 60% of the world’s top networks. We use real-user measurements and TCP connection trimeans to ensure our data reflects the actual experience of people on the Internet.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/shared-dictionaries/"><u>Shared dictionary compression that keeps up with the agentic web</u></a></p></td><td><p>We give you a sneak peek of our support for shared compression dictionaries, show you how it improves page load times, and reveal when you’ll be able to try the beta yourself.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jKSnMla0hclK999LtHfuu/886f66ffda1c699a5a15bf6748a6cf9b/BLOG-3239_6.png" />
          </figure>
    <h2 id="thats-a-wrap">That’s a wrap</h2>
    <p>Agents Week 2026 is ending, but the agentic cloud is just getting started. Everything we shipped this week — from compute and security to the agent toolbox and the agentic web — is the foundation. We're going to keep building on it to give you everything you need to build what's next.</p><p>We also have more blog posts coming out today and tomorrow to continue the story, so keep an eye out for the latest <a href="https://blog.cloudflare.com/"><u>on our blog</u></a>.</p><p>If you're building on any of what we announced this week, we want to hear about it. Come find us on <a href="https://x.com/cloudflaredev"><u>X</u></a> or <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a>, or head to the <a href="https://developers.cloudflare.com/products/?product-group=Developer+platform"><u>developer documentation</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YLNvB0KU0Eh3KpAj6YgD6/77c190843d1fa76b212571f1d607d8d2/BLOG-3239_7.png" />
          </figure> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SDK]]></category>
            <category><![CDATA[Browser Run]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Browser Rendering]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Sandbox]]></category>
            <category><![CDATA[LLM]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[API]]></category>
            <guid isPermaLink="false">25CSwW9eXDM4FJOdVPLf8f</guid>
            <dc:creator>Ming Lu</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[The AI engineering stack we built internally — on the platform we ship]]></title>
            <link>https://blog.cloudflare.com/internal-ai-engineering-stack/</link>
            <pubDate>Mon, 20 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ We built our internal AI engineering stack on the same products we ship. That means 20 million requests routed through AI Gateway, 241 billion tokens processed, and inference running on Workers AI, serving 3,683 internal users. Here's how we did it.
 ]]></description>
            <content:encoded><![CDATA[ <p>In the last 30 days, 93% of Cloudflare’s R&amp;D organization used AI coding tools powered by infrastructure we built on our own platform.</p><p>Eleven months ago, we undertook a major project: to truly integrate AI into our engineering stack. We needed to build the internal MCP servers, access layer, and AI tooling necessary for agents to be useful at Cloudflare. We pulled together engineers from across the company to form a tiger team called iMARS (Internal MCP Agent/Server Rollout Squad). The sustained work landed with the Dev Productivity team, who also own much of our internal tooling, including CI/CD, build systems, and automation.</p><p>Here are some numbers that capture our own agentic AI use over the last 30 days:</p><ul><li><p><b>3,683 internal users</b> actively using AI coding tools (60% company-wide, 93% across R&amp;D), out of approximately 6,100 total employees</p></li><li><p><b>47.95 million</b> AI requests</p></li><li><p><b>295 teams</b> currently using agentic AI tools and coding assistants</p></li><li><p><b>20.18 million</b> AI Gateway requests per month</p></li><li><p><b>241.37 billion</b> tokens routed through AI Gateway</p></li><li><p><b>51.83 billion</b> tokens processed on Workers AI</p></li></ul><p>The impact on developer velocity internally is clear: we’ve never seen a quarter-to-quarter increase in merge requests to this degree.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lMKwBTT3m7DDmS4BnoNhB/9002104b9c6c09cf72f4052d71e22363/BLOG-3270_2.png" />
          </figure><p>As AI tooling adoption has grown, the 4-week rolling average of weekly merge requests has climbed from ~5,600 to over 8,700. The week of March 23 hit 10,952, nearly double the Q4 baseline.</p><p>MCP servers were the starting point, but the team quickly realized we needed to go further: rethink how standards are codified, how code gets reviewed, how engineers onboard, and how changes propagate across thousands of repos.</p><p>This post dives deep into what that looked like over the past eleven months and where we ended up. We're publishing now, to close out Agents Week, because the AI engineering stack we built internally runs on the same products we’re shipping and enhancing this week.</p>
    <h2 id="the-architecture-at-a-glance">The architecture at a glance</h2>
    <p>The engineer-facing tools layer (<a href="https://opencode.ai/"><u>OpenCode</u></a>, Windsurf, and other MCP-compatible clients) includes both open-source and third-party coding assistant tools.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2oZJhdVNbW05lJaSnN3hds/21c5427f275ba908388333884530f455/image8.png" />
          </figure><p>Each layer maps to a Cloudflare product or tool we use:</p><table><tr><th><p>What we built</p></th><th><p>Built with</p></th></tr><tr><td><p>Zero Trust authentication</p></td><td><p><a href="https://developers.cloudflare.com/cloudflare-one/"><b><u>Cloudflare Access</u></b></a></p></td></tr><tr><td><p>Centralized LLM routing, cost tracking, BYOK, and Zero Data Retention controls</p></td><td><p><a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a></p></td></tr><tr><td><p>On-platform inference with open-weight models</p></td><td><p><a href="https://developers.cloudflare.com/workers-ai/"><b><u>Workers AI</u></b></a></p></td></tr><tr><td><p>MCP Server Portal with single OAuth</p></td><td><p><a href="https://developers.cloudflare.com/workers/"><b><u>Workers</u></b></a> + <a href="https://developers.cloudflare.com/cloudflare-one/"><b><u>Access</u></b></a></p></td></tr><tr><td><p>AI Code Reviewer CI integration</p></td><td><p><a href="https://developers.cloudflare.com/workers/"><b><u>Workers</u></b></a> + <a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a></p></td></tr><tr><td><p>Sandboxed execution for agent-generated code (Code Mode)</p></td><td><p><a href="https://blog.cloudflare.com/dynamic-workers/"><b><u>Dynamic Workers</u></b></a></p></td></tr><tr><td><p>Stateful, long-running agent sessions</p></td><td><p><a href="https://developers.cloudflare.com/agents/"><b><u>Agents SDK</u></b></a> (McpAgent, Durable Objects)</p></td></tr><tr><td><p>Isolated environments for cloning, building, and testing</p></td><td><p><a href="https://developers.cloudflare.com/sandbox/"><b><u>Sandbox SDK</u></b></a> — GA as of Agents Week</p></td></tr><tr><td><p>Durable multi-step workflows</p></td><td><p><a href="https://developers.cloudflare.com/workflows/"><b><u>Workflows</u></b></a> — scaled 10x during Agents Week</p></td></tr><tr><td><p>16K+ entity knowledge graph</p></td><td><p><a href="https://backstage.io/"><b><u>Backstage</u></b></a> (OSS)</p></td></tr></table><p>None of this is internal-only infrastructure. Everything (besides Backstage) listed above is a shipping product, and many of them got substantial updates during Agents Week.</p><p>We’ll walk through this in three acts:</p><ol><li><p><b>The platform layer</b> — how authentication, routing, and inference work (AI Gateway, Workers AI, MCP Portal, Code Mode)</p></li><li><p><b>The knowledge layer</b> — how agents understand our systems (Backstage, AGENTS.md)</p></li><li><p><b>The enforcement layer</b> — how we keep quality high at scale (AI Code Reviewer, Engineering Codex)</p></li></ol>
    <h2 id="act-1-the-platform-layer">Act 1: The platform layer</h2>
    
    <h3 id="how-ai-gateway-helped-us-stay-secure-and-improve-the-developer-experience">How AI Gateway helped us stay secure and improve the developer experience</h3>
    <p>When you have more than 3,600 internal users using AI coding tools daily, you need to solve for access and visibility across many clients, use cases, and roles.</p><p>Everything starts with <a href="https://developers.cloudflare.com/cloudflare-one/"><b><u>Cloudflare Access</u></b></a>, which handles all authentication and zero-trust policy enforcement. Once authenticated, every LLM request routes through <a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a>. This gives us a single place to manage provider keys, cost tracking, and data retention policies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5zorZ21OXIbNg7VACGzwZX/e1b35951622bab6ae7ca9fa9b8a5c844/BLOG-3270_4.png" />
          </figure><p><sup><i>The OpenCode AI Gateway overview: 688.46k requests per day, 10.57B tokens per day, routing to four providers through one endpoint.</i></sup></p><p>AI Gateway analytics show how monthly usage is distributed across model providers. Over the last month, internal request volume broke down as follows.</p><table><tr><th><p>Provider</p></th><th><p>Requests/month</p></th><th><p>Share</p></th></tr><tr><td><p>Frontier Labs (OpenAI, Anthropic, Google)</p></td><td><p>13.38M</p></td><td><p>91.16%</p></td></tr><tr><td><p>Workers AI</p></td><td><p>1.3M</p></td><td><p>8.84%</p></td></tr></table><p>Frontier models handle the bulk of complex agentic coding work for now, but Workers AI is already a significant part of the mix and handles an increasing share of our agentic engineering workloads.</p>
    <h4 id="how-we-increasingly-leverage-workers-ai">How we increasingly leverage Workers AI</h4>
    <p><a href="https://developers.cloudflare.com/workers-ai/"><b><u>Workers AI</u></b></a> is Cloudflare's serverless AI inference platform which runs open-source models on GPUs across our global network. Beyond huge cost improvements compared to frontier models, a key advantage is that inference stays on the same network as your Workers, Durable Objects, and storage. No cross-cloud hops to deal with, which cause more latency, network flakiness, and additional networking configuration to manage.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WFqiZSL1RGbD2O7gOPDfR/2edc8c69fdea89fef76e600cb3ee5384/BLOG-3270_5.png" />
          </figure><p><sup><i>Workers AI usage in the last month: 51.47B input tokens, 361.12M output tokens.</i></sup></p><p><a href="https://blog.cloudflare.com/workers-ai-large-models/"><b><u>Kimi K2.5</u></b></a>, launched on Workers AI in March 2026, is a frontier-scale open-source model with a 256k context window, tool calling, and structured outputs. As we described in our <a href="https://blog.cloudflare.com/workers-ai-large-models/"><u>Kimi K2.5 launch post</u></a>, we have a security agent that processes over 7 billion tokens per day on Kimi. That would cost an estimated $2.4M per year on a mid-tier proprietary model. But on Workers AI, it's 77% cheaper.</p><p>Beyond security, we use Workers AI for documentation review in our CI pipeline, for generating AGENTS.md context files across thousands of repositories, and for lightweight inference tasks where same-network latency matters more than peak model capability.</p><p>As open-source models continue to improve, we expect Workers AI to handle a growing share of our internal workloads. </p><p>One thing we got right early: routing through a single proxy Worker from day one. We could have had clients connect directly to AI Gateway, which would have been simpler to set up initially. But centralizing through a Worker meant we could add per-user attribution, model catalog management, and permission enforcement later without touching any client configs. Every feature described in the bootstrap section below exists because we had that single choke point. The proxy pattern gives you a control plane that direct connections don't, and if we plug in additional coding assistant tools later, the same Worker and discovery endpoint will handle them.</p>
    <h4 id="how-it-works-one-url-to-configure-everything">How it works: one URL to configure everything</h4>
    <p>The entire setup starts with one command:</p>
            <pre><code>opencode auth login https://opencode.internal.domain
</code></pre>
            <p>That command triggers a chain that configures providers, models, MCP servers, agents, commands, and permissions, without the user touching a config file.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3sK7wVF3QbeNJ0rjLBabXh/05b9098197b7a3d084e6d860ed22b169/BLOG-3270_6.png" />
          </figure><p><b>Step 1: Discover auth requirements.</b> OpenCode fetches <a href="https://opencode.ai/docs/config/"><u>config</u></a> from a URL like <code>https://opencode.internal.domain/.well-known/opencode</code>. </p><p>This discovery endpoint is served by a Worker and the response has an <code>auth</code> block telling OpenCode how to authenticate, along with a <code>config</code> block with providers, MCP servers, agents, commands, and default permissions:</p>
            <pre><code>{
  "auth": {
    "command": ["cloudflared", "access", "login", "..."],
    "env": "TOKEN"
  },
  "config": {
    "provider": { "..." },
    "mcp": { "..." },
    "agent": { "..." },
    "command": { "..." },
    "permission": { "..." }
  }
}
</code></pre>
            <p><b>Step 2: Authenticate via Cloudflare Access.</b> OpenCode runs the auth command and the user authenticates through the same SSO they use for everything else at Cloudflare. <code>cloudflared</code> returns a signed JWT. OpenCode stores it locally and automatically attaches it to every subsequent provider request.</p><p><b>Step 3: Config is merged into OpenCode.</b> The config provided contains shared defaults for the entire organization, but local configs always take priority. Users can override the default model, add their own agents, or adjust project- and user-scoped permissions without affecting anyone else.</p><p><b>Inside the proxy Worker.</b> The Worker is a simple Hono app that does three things:</p><ol><li><p><b>Serves the shared config.</b> The config is compiled at deploy time from structured source files and contains placeholder values like <code>{baseURL}</code> for the Worker's origin. At request time, the Worker replaces these, so all provider requests route through the Worker rather than directly to model providers. Each provider gets a path prefix (<code>/anthropic, /openai, /google-ai-studio/v1beta, /compat</code> for Workers AI) that the Worker forwards to the corresponding AI Gateway route.</p></li><li><p><b>Proxies requests to AI Gateway.</b> When OpenCode sends a request like <code>POST /anthropic/v1/messages</code>, the Worker validates the Cloudflare Access JWT, then rewrites headers before forwarding:
</p>
            <pre><code>Stripped:   authorization, cf-access-token, host
Added:      cf-aig-authorization: Bearer &lt;API_KEY&gt;
            cf-aig-metadata: {"userId": "&lt;anonymous-uuid&gt;"}
</code></pre>
            <p>The request goes to AI Gateway, which routes it to the appropriate provider. The response passes straight through with zero buffering. The <code>apiKey</code> field in the client config is empty because the Worker injects the real key server-side. No API keys exist on user machines.</p></li><li><p><b>Keeps the model catalog fresh.</b> An hourly cron trigger fetches the current OpenAI model list from <code>models.dev</code>, caches it in Workers KV, and injects <code>store: false</code> on every model for Zero Data Retention. New models get ZDR automatically without a config redeploy.</p></li></ol><p><b>Anonymous user tracking.</b> After JWT validation, the Worker maps the user's email to a UUID using D1 for persistent storage and KV as a read cache. AI Gateway only ever sees the anonymous UUID in <code>cf-aig-metadata</code>, never the email. This gives us per-user cost tracking and usage analytics without exposing identities to model providers or Gateway logs.</p><p><b>Config-as-code. </b>Agents and commands are authored as markdown files with YAML frontmatter. A build script compiles them into a single JSON config validated against the OpenCode JSON schema. Every new session picks up the latest version automatically.</p><p>The overall architecture is simple and easy for anyone to deploy with our developer platform: a proxy Worker, Cloudflare Access, AI Gateway, and a client-accessible discovery endpoint that configures everything automatically. Users run one command and they're done. There’s nothing for them to configure manually, no API keys on laptops or MCP server connections to manually set up. Making changes to our agentic tools and updating what 3,000+ people get in their coding environment is just a <code>wrangler deploy</code> away.</p>
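            <p>To make the pattern concrete, here’s a minimal sketch of the shape such a proxy Worker can take. This is illustrative, not our production code: the binding names (<code>USER_IDS</code>, <code>AIG_API_KEY</code>, <code>AIG_BASE_URL</code>) are hypothetical, and Access JWT verification is elided.</p>
            <pre><code>import { Hono } from 'hono';

type Env = {
  AIG_API_KEY: string;   // hypothetical: AI Gateway key, server-side only
  AIG_BASE_URL: string;  // hypothetical: AI Gateway endpoint
  USER_IDS: KVNamespace; // hypothetical: email -&gt; anonymous UUID read cache
};

const app = new Hono&lt;{ Bindings: Env }&gt;();

app.all('/:provider/*', async (c) =&gt; {
  // Only requests that came through Cloudflare Access carry this header.
  const email = c.req.header('cf-access-authenticated-user-email');
  if (!email) return c.text('unauthorized', 401);

  // Map the email to an anonymous UUID (KV read cache, D1 behind it),
  // so AI Gateway only ever sees the UUID.
  const userId = (await c.env.USER_IDS.get(email)) ?? crypto.randomUUID();

  // Strip client credentials and inject the real gateway key server-side.
  const headers = new Headers(c.req.raw.headers);
  headers.delete('authorization');
  headers.delete('cf-access-token');
  headers.delete('host');
  headers.set('cf-aig-authorization', `Bearer ${c.env.AIG_API_KEY}`);
  headers.set('cf-aig-metadata', JSON.stringify({ userId }));

  // Forward to the matching AI Gateway route and stream the response back.
  const url = new URL(c.req.url);
  return fetch(c.env.AIG_BASE_URL + url.pathname + url.search, {
    method: c.req.method,
    headers,
    body: c.req.raw.body,
  });
});

export default app;
</code></pre>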
    <h3 id="the-mcp-server-portal-one-oauth-multiple-mcp-tools">The MCP Server Portal: one OAuth, multiple MCP tools</h3>
    <p>We described our full approach to governing MCP at enterprise scale in a <a href="https://blog.cloudflare.com/enterprise-mcp/"><u>separate post</u></a>, including how we use <a href="https://developers.cloudflare.com/cloudflare-one/access-controls/ai-controls/mcp-portals/"><u>MCP Server Portals</u></a>, Cloudflare Access, and Code Mode together. Here's the short version of what we built internally.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36gUHwTs8CzZeS03l9yno1/42953a6dc2ac944f4dbe31e3ae51570e/BLOG-3270_7.png" />
          </figure><p>Our internal portal aggregates 13 production MCP servers exposing 182+ tools across Backstage, GitLab, Jira, Sentry, Elasticsearch, Prometheus, Google Workspace, our internal Release Manager, and more. This unifies access and simplifies everything, giving us one endpoint and one Cloudflare Access flow governing access to every tool.</p><p>Each MCP server is built on the same foundation: McpAgent from the Agents SDK, <a href="https://github.com/cloudflare/workers-oauth-provider"><b><u>workers-oauth-provider</u></b></a> for OAuth, and Cloudflare Access for identity. The whole thing lives in a single monorepo with shared auth infrastructure, Bazel builds, CI/CD pipelines, and <code>catalog-info.yaml</code> for Backstage registration. Adding a new server is mostly copying an existing one and changing the API it wraps. For more on how this works and the security architecture behind it, see <a href="https://blog.cloudflare.com/enterprise-mcp/"><u>our enterprise MCP reference architecture</u></a>.</p>
    <h3 id="code-mode-at-the-portal-layer">Code Mode at the portal layer</h3>
    <p>MCP is the right protocol for connecting AI agents to tools, but it has a practical problem: every tool definition consumes context window tokens before the model even starts working. As the number of MCP servers and tools grows, so does the token overhead, and at scale, this becomes a real cost. <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a> is the emerging fix: instead of loading every tool schema up front, the model discovers and calls tools through code.</p><p>Our GitLab MCP server originally exposed 34 individual tools (<code>get_merge_request</code>, <code>list_pipelines</code>, <code>get_file_content</code>, and so on). Those 34 tool schemas consumed roughly 15,000 tokens of context window per request. On a 200K context window, that's 7.5% of the budget gone before asking a question. Multiplied across every request, every engineer, every day, it adds up.</p><p>MCP Server Portals now support Code Mode proxying, which lets us solve that problem centrally instead of one server at a time. Rather than exposing every upstream tool definition to the client, the portal collapses them into two portal-level tools: <code>portal_codemode_search</code> and <code>portal_codemode_execute</code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jQa4HPmrOaOhUojZCJQ8q/4e81201065a50dd67c07e257b725e8b8/BLOG-3270_8.png" />
          </figure><p>The nice thing about doing this at the portal layer is that it scales cleanly. Without Code Mode, every new MCP server adds more schema overhead to every request. With portal-level Code Mode, the client still only sees two tools even as we connect more servers behind the portal. That means less context bloat, lower token cost, and a cleaner architecture overall.</p>
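    <p>As an illustration of the difference (the <code>gitlab</code> API surface below is hypothetical, not our server’s actual schema): instead of selecting from dozens of preloaded tool definitions, the model writes a short script and runs it through <code>portal_codemode_execute</code>:</p>
            <pre><code>// Agent-authored Code Mode script (illustrative). The model discovers these
// bindings via portal_codemode_search instead of loading 34 schemas up front.
const mr = await gitlab.getMergeRequest({ projectId: 42, iid: 7 });
const pipelines = await gitlab.listPipelines({ projectId: 42, ref: mr.sourceBranch });
const failed = pipelines.filter((p) =&gt; p.status === 'failed');
return { title: mr.title, failedPipelines: failed.length };
</code></pre>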
    <h2 id="act-2-the-knowledge-layer">Act 2: The knowledge layer</h2>
    
    <h3 id="backstage-the-knowledge-graph-underneath-all-of-it">Backstage: the knowledge graph underneath all of it</h3>
    <p>Before the iMARS team could build MCP servers that were actually useful, we needed to solve a more fundamental problem: structured data about our services and infrastructure. We need our agents to understand context outside the code base, like who owns what, how services depend on each other, where the documentation lives, and what databases a service talks to.</p><p>We run <a href="https://backstage.io/"><u>Backstage</u></a>, the open-source internal developer portal originally built by Spotify, as our service catalog. It's self-hosted (not on Cloudflare products, for the record) and it tracks things like:</p><ul><li><p>2,055 services, 167 libraries, and 122 packages</p></li><li><p>228 APIs with schema definitions</p></li><li><p>544 systems (products) across 45 domains</p></li><li><p>1,302 databases, 277 ClickHouse tables, 173 clusters</p></li><li><p>375 teams and 6,389 users with ownership mappings</p></li><li><p>Dependency graphs connecting services to the databases, Kafka topics, and cloud resources they rely on</p></li></ul><p>Our Backstage MCP server (13 tools) is available through our MCP Portal, and an agent can look up who owns a service, check what it depends on, find related API specs, and pull Tech Insights scores, all without leaving the coding session.</p><p>Without this structured data, agents are working blind. They can read the code in front of them, but they can't see the system around it. The catalog turns individual repos into a connected map of the engineering organization.</p>
    <h3 id="agents-md-getting-thousands-of-repos-ready-for-ai">AGENTS.md: getting thousands of repos ready for AI</h3>
    <p>Early in the rollout, we kept seeing the same failure mode: coding agents produced changes that looked plausible and were still wrong for the repo. Usually the problem was local context: the model didn't know the right test command, the team's current conventions, or which parts of the codebase were off-limits. That pushed us toward AGENTS.md: a short, structured file in each repo that tells coding agents how the codebase actually works and forces teams to make that context explicit.</p>
    <h4 id="what-agents-md-looks-like">What AGENTS.md looks like</h4>
    <p>We built a system that generates AGENTS.md files across our GitLab instance. Because these files sit directly in the model's context window, we wanted them to stay short and high-signal. A typical file looks like this:</p>
            <pre><code># AGENTS.md

## Repository
- Runtime: cloudflare workers
- Test command: `pnpm test`
- Lint command: `pnpm lint`

## How to navigate this codebase
- All Cloudflare Workers are in src/workers/, one file per worker
- MCP server definitions are in src/mcp/, each tool in a separate file
- Tests mirror source: src/foo.ts -&gt; tests/foo.test.ts

## Conventions
- Testing: use Vitest with `@cloudflare/vitest-pool-workers` (Codex: RFC 021, RFC 042)
- API patterns: Follow internal REST conventions (Codex: API-REST-01)

## Boundaries
- Do not edit generated files in `gen/`
- Do not introduce new background jobs without updating `config/`

## Dependencies
- Depends on: auth-service, config-service
- Depended on by: api-gateway, dashboard
</code></pre>
            <p>When an agent reads this file, it doesn't have to infer the repo from scratch. It knows how the codebase is organized, which conventions to follow, and which Engineering Codex rules apply.</p>
    <h4 id="how-we-generate-them-at-scale">How we generate them at scale</h4>
    <p>The generator pipeline pulls entity metadata from our Backstage service catalog (ownership, dependencies, system relationships), analyzes the repository structure to detect the language, build system, test framework, and directory layout, then maps the detected stack to relevant Engineering Codex standards. A capable model then generates the structured document, and the system opens a merge request so the owning team can review and refine it.</p><p>We've processed roughly 3,900 repositories this way. The first pass wasn't always perfect, especially for polyglot repos or unusual build setups, but even that baseline was much better than asking agents to infer everything from scratch.</p><p>The initial merge request solved the bootstrap problem, but keeping these files current mattered just as much. A stale AGENTS.md can be worse than no file at all. We closed that loop with the AI Code Reviewer, which can flag when repository changes suggest that AGENTS.md should be updated.</p>
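    <p>Expressed as a durable Workflow, the generator pipeline described above might look something like this sketch (the step names and helper functions are hypothetical stand-ins, not our actual implementation):</p>
            <pre><code>import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

// Hypothetical helpers standing in for the real pipeline stages.
declare function fetchBackstageEntity(repo: string): Promise&lt;object&gt;;
declare function analyzeRepoLayout(repo: string): Promise&lt;object&gt;;
declare function generateAgentsMd(entity: object, layout: object): Promise&lt;string&gt;;
declare function openMergeRequest(repo: string, doc: string): Promise&lt;void&gt;;

type Env = {};
type Params = { repo: string };

export class AgentsMdPipeline extends WorkflowEntrypoint&lt;Env, Params&gt; {
  async run(event: WorkflowEvent&lt;Params&gt;, step: WorkflowStep) {
    const repo = event.payload.repo;

    // Each step is durable: retried on failure, never re-run once it succeeds.
    const entity = await step.do('fetch catalog metadata', () =&gt; fetchBackstageEntity(repo));
    const layout = await step.do('analyze repository structure', () =&gt; analyzeRepoLayout(repo));

    // Model call (routed through AI Gateway) generates the structured document.
    const doc = await step.do('generate AGENTS.md', () =&gt; generateAgentsMd(entity, layout));

    // Open an MR so the owning team can review and refine.
    await step.do('open merge request', () =&gt; openMergeRequest(repo, doc));
  }
}
</code></pre>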
    <h2 id="act-3-the-enforcement-layer">Act 3: The enforcement layer</h2>
    
    <h3 id="the-ai-code-reviewer">The AI Code Reviewer</h3>
    <p>Every merge request at Cloudflare gets an AI code review. Integration is straightforward: teams add a single CI component to their pipeline, and from that point every MR is reviewed automatically.</p><p>We use self-hosted GitLab as our CI/CD platform. The reviewer is implemented as a GitLab CI component that teams include in their pipeline. When an MR is opened or updated, the CI job runs <a href="https://opencode.ai/"><u>OpenCode</u></a> with a multi-agent review coordinator. The coordinator classifies the MR by risk tier (trivial, lite, or full) and delegates to specialized review agents: code quality, security, codex compliance, documentation, performance, and release impact. Each agent connects to the AI Gateway for model access, pulls Engineering Codex rules from a central repo, and reads the repository's AGENTS.md for codebase context. Results are posted back as structured MR comments.</p><p>A separate Workers-based config service handles centralized model selection per reviewer agent, so we can shift models without changing the CI template. The review process itself runs in the CI runner and is stateless per execution.</p>
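    <p>A config service like that can be tiny. Here’s a sketch of the idea (the binding name and KV key layout are assumptions, not our actual code):</p>
            <pre><code>// Workers-based model-selection service: reviewer agents ask which model to
// use, so swapping models is a KV write rather than a CI template change.
interface Env {
  MODELS: KVNamespace;
}

export default {
  async fetch(req: Request, env: Env): Promise&lt;Response&gt; {
    const agent = new URL(req.url).searchParams.get('agent') ?? 'default';
    // e.g. 'security' might map to a frontier model, 'docs' to Workers AI.
    const model = (await env.MODELS.get('model:' + agent)) ?? 'workers-ai-default';
    return Response.json({ agent, model });
  },
};
</code></pre>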
    <h3 id="the-output-format">The output format</h3>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CKN6frPHygTkAHBDrY3y7/e04eb00702a25627291c904fbde2e7ea/BLOG-3270_9.png" />
          </figure><p>We spent time getting the output format right. Reviews are broken into categories (Security, Code Quality, Performance) so engineers can scan headers rather than reading walls of text. Each finding has a severity level (Critical, Important, Suggestion, or Optional Nits) that makes it immediately clear what needs attention versus what's informational.</p><p>The reviewer maintains context across iterations. If it flagged something in a previous review round that has since been fixed, it acknowledges that rather than re-raising the same issue. And when a finding maps to an Engineering Codex rule, it cites the specific rule ID, turning an AI suggestion into a reference to an organizational standard.</p><p>Workers AI handles about 15% of the reviewer's traffic, primarily for documentation review tasks where Kimi K2.5 performs well at a fraction of the cost of frontier models. Models like Opus 4.6 and GPT 5.4 handle security-sensitive and architecturally complex reviews where reasoning capability matters most.</p><p>Over the last 30 days:</p><ul><li><p><b>100%</b> AI code reviewer coverage across all repos on our standard CI pipeline.</p></li><li><p>5.47M AI Gateway requests</p></li><li><p>24.77B tokens processed</p></li></ul><p>We're releasing a <a href="http://blog.cloudflare.com/ai-code-review"><u>detailed technical blog post</u></a> alongside this one that covers the reviewer's internal architecture, including how we route between models, the multi-agent orchestration, and the cost optimization strategies we've developed.</p>
    <h3 id="engineering-codex-engineering-standards-as-agent-skills">Engineering Codex: engineering standards as agent skills</h3>
    <p>The <b>Engineering Codex</b> is Cloudflare's new internal system of record for our core engineering standards. A multi-stage AI distillation process outputs a set of Codex rules ("If you need X, use Y. You must do X if you are doing Y or Z.") along with an agent skill that uses progressive disclosure: a nested hierarchy of information directories linked across markdown files.</p><p>This skill is available for engineers to use locally as they build, with prompts like “how should I handle errors in my Rust service?” or “review this TypeScript code for compliance.” Our Network Firewall team audited <code>rampartd</code> using a multi-agent consensus process in which every requirement was scored COMPLIANT, PARTIAL, or NON-COMPLIANT, with specific violation details and remediation steps, reducing what previously required weeks of manual work to a structured, repeatable process.</p><p>At review time, the AI Code Reviewer cites specific Codex rules in its feedback.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vXFmpfwqv1Xqo5CszCsEB/ada6417470b2b7533a12244578cbd9d0/BLOG-3270_10.png" />
          </figure><p><sup> </sup><sup><i>AI Code Review: showing categorized findings (Codex Compliance in this case) noting the codex RFC violation.</i></sup></p><p>None of these pieces are especially novel on their own. Plenty of companies run service catalogs, ship reviewer bots, or publish engineering standards. The difference is the wiring. When an agent can pull context from Backstage, read AGENTS.md for the repo it’s editing, and get reviewed against Codex rules by the same toolchain, the first draft is usually close enough to ship. That wasn’t true six months ago.</p>
    <h2 id="the-scoreboard">The scoreboard</h2>
    <p>Going from launching this effort to 93% R&amp;D adoption took less than a year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1OtTpHobBRjorxOV2dWG8w/9b9dc80cba9741ee65e8565f63091e1a/BLOG-3270_11.png" />
          </figure><p><b>Company-wide adoption (Feb 5 – April 15, 2026):</b></p><table><tr><th><p>Metric</p></th><th><p>Value</p></th></tr><tr><td><p>Active users</p></td><td><p><b>3,683</b> (60% of the company)</p></td></tr><tr><td><p>R&amp;D team adoption</p></td><td><p><b>93%</b></p></td></tr><tr><td><p>AI messages</p></td><td><p><b>47.95M</b></p></td></tr><tr><td><p>Teams with AI activity</p></td><td><p><b>295</b></p></td></tr><tr><td><p>OpenCode messages</p></td><td><p><b>27.08M</b></p></td></tr><tr><td><p>Windsurf messages</p></td><td><p><b>434.9K</b></p></td></tr></table><p><b>AI Gateway (last 30 days, combined):</b></p><table><tr><th><p>Metric</p></th><th><p>Value</p></th></tr><tr><td><p>Requests</p></td><td><p><b>20.18M</b></p></td></tr><tr><td><p>Tokens</p></td><td><p><b>241.37B</b></p></td></tr></table><p><b>Workers AI (last 30 days):</b></p><table><tr><th><p>Metric</p></th><th><p>Value</p></th></tr><tr><td><p>Input tokens</p></td><td><p><b>51.47B</b></p></td></tr><tr><td><p>Output tokens</p></td><td><p><b>361.12M</b></p></td></tr></table>
    <h2 id="whats-next-background-agents">What's next: background agents</h2>
    <p>The next evolution in our internal engineering stack will include background agents: agents that can be spun up on demand with the same tools available locally (MCP portal, git, test runners) but running entirely in the cloud. The architecture uses Durable Objects and the Agents SDK for orchestration, delegating to Sandbox containers when the job requires a full development environment: cloning a repo, installing dependencies, or running tests. The <a href="https://blog.cloudflare.com/sandbox-ga/"><u>Sandbox SDK went GA</u></a> during Agents Week.</p><p>Long-running agents, <a href="https://blog.cloudflare.com/project-think/"><u>shipped natively into the Agents SDK</u></a> during Agents Week, solve the durable session problem that previously required workarounds. The SDK now supports sessions that run for extended periods without eviction, enough for an agent to clone a large repo, run a full test suite, iterate on failures, and open an MR in a single session.</p><p>This represents an eleven-month effort to rethink not just how code gets written, but how it gets reviewed, how standards are enforced, and how changes ship safely across thousands of repos. Every layer runs on the same products our customers use.</p>
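    <p>A sketch of that delegation pattern with the Sandbox SDK, under assumptions (the binding name, repo URL, and commands are placeholders; see the Sandbox docs for the exact API):</p>
            <pre><code>import { getSandbox } from '@cloudflare/sandbox';

// An orchestrating agent hands heavyweight work to a Sandbox when it needs
// a full development environment: shell, filesystem, background processes.
export async function cloneAndTest(env: { Sandbox: DurableObjectNamespace }, repoUrl: string) {
  const sandbox = getSandbox(env.Sandbox, `job-${crypto.randomUUID()}`);

  // Clone, install, and test inside the isolated environment.
  await sandbox.exec(`git clone ${repoUrl} /work`);
  await sandbox.exec('cd /work &amp;&amp; npm install');
  const result = await sandbox.exec('cd /work &amp;&amp; npm test');

  return { exitCode: result.exitCode, output: result.stdout };
}
</code></pre>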
    <h2 id="start-building">Start building</h2>
    <p>Agents Week just <a href="https://blog.cloudflare.com/agents-week-in-review/"><u>shipped</u></a> everything you need. The platform is here.</p>
            <pre><code>npx create-cloudflare@latest --template cloudflare/agents-starter
</code></pre>
            <p>That agents starter gets you running. The diagram below is the full architecture for when you’re ready to grow it: your tools layer on top (chatbot, web UI, CLI, browser extension), the Agents SDK handling session state and orchestration in the middle, and the Cloudflare services you call from it underneath.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ebzq5neZbshbrhb5WysYY/54335791b2f731e2095fd57a4fe11cf3/image11.png" />
          </figure><p><b>Docs:</b> <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> · <a href="https://developers.cloudflare.com/sandbox/"><u>Sandbox SDK</u></a> · <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> · <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> · <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a> · <a href="https://developers.cloudflare.com/agents/api-reference/codemode/"><u>Code Mode</u></a> · <a href="https://blog.cloudflare.com/model-context-protocol/"><u>MCP on Cloudflare</u></a></p><p><b>Repos:</b> <a href="https://github.com/cloudflare/agents"><u>cloudflare/agents</u></a> · <a href="https://github.com/cloudflare/sandbox-sdk"><u>cloudflare/sandbox-sdk</u></a> · <a href="https://github.com/cloudflare/mcp-server-cloudflare"><u>cloudflare/mcp-server-cloudflare</u></a> · <a href="https://github.com/cloudflare/skills"><u>cloudflare/skills</u></a></p><p>For more on how we’re using AI at Cloudflare, read the post on <a href="http://blog.cloudflare.com/ai-code-review"><u>our process for AI Code Review</u></a>. And check out <a href="https://www.cloudflare.com/agents-week/updates/"><u>everything we shipped during Agents Week</u></a>.</p><p>We’d love to hear what you build. Find us on <a href="https://discord.cloudflare.com/"><u>Discord</u></a>, <a href="https://x.com/CloudflareDev"><u>X</u></a>, and <a href="https://bsky.app/profile/cloudflare.social"><u>Bluesky</u></a>.</p><p><sup><i>Ayush Thakur built the AGENTS.md system and the AI Gateway integration for the OpenCode infrastructure; Scott Roemeschke is the Engineering Manager of the Developer Productivity team at Cloudflare; Rajesh Bhatia leads the Productivity Platform function at Cloudflare. This post was a collaborative effort across the Devtools team, with help from volunteers across the company through the iMARS (Internal MCP Agent/Server Rollout Squad) tiger team.</i></sup></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">4z7ZJ5ZbY3e3YwJtkg6NiD</guid>
            <dc:creator>Ayush Thakur</dc:creator>
            <dc:creator>Scott Roe-Meschke</dc:creator>
            <dc:creator>Rajesh Bhatia</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s AI Platform: an inference layer designed for agents]]></title>
            <link>https://blog.cloudflare.com/ai-platform/</link>
            <pubDate>Thu, 16 Apr 2026 14:05:00 GMT</pubDate>
            <description><![CDATA[ We're building AI Gateway into a unified inference layer for AI, letting developers call models from 14+ providers. New features include Workers AI binding integration and an expanded catalog with multimodal models.
 ]]></description>
            <content:encoded><![CDATA[ <p>AI models are changing quickly: the best model for agentic coding today might, three months from now, be a completely different model from a different provider. On top of this, real-world use cases often require calling more than one model. Your customer support agent might use a fast, cheap model to classify a user's message; a large, reasoning model to plan its actions; and a lightweight model to execute individual tasks.</p><p>This means you need access to all the models, without tying yourself financially and operationally to a single provider. You also need the right systems in place to monitor costs across providers, ensure reliability when one of them has an outage, and manage latency no matter where your users are.</p><p>These challenges are present whenever you’re building with AI, but they get even more pressing when you’re building <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>agents</u></a>. A simple chatbot might make one <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>inference</u></a> call per user prompt. An agent might chain ten calls together to complete a single task, and suddenly a single slow provider doesn't add 50ms; it adds 500ms. One failed request isn't just a retry; it can become a cascade of downstream failures.</p><p>Since launching AI Gateway and Workers AI, we’ve seen incredible adoption from developers building AI-powered applications on Cloudflare and we’ve been shipping fast to keep up! In just the past few months, we've refreshed the dashboard, added zero-setup default gateways, automatic retries on upstream failures, and more granular logging controls. Today, we’re making Cloudflare into a unified inference layer: one API to access any AI model from any provider, built to be fast and reliable.</p>
    <div>
      <h3>One catalog, one unified endpoint</h3>
      <a href="#one-catalog-one-unified-endpoint">
        
      </a>
    </div>
    <p>Starting today, you can call third-party models using the same <code>AI.run()</code> binding you already use for Workers AI. If you’re using Workers, switching from a Cloudflare-hosted model to one from OpenAI, Anthropic, or any other provider is a one-line change. </p>
            <pre><code>const response = await env.AI.run(
  'anthropic/claude-opus-4-6',
  { input: 'What is Cloudflare?' },
  { gateway: { id: 'default' } }
);</code></pre>
            <p>For those who don’t use Workers, we’ll be releasing REST API support in the coming weeks, so you can access the full model catalog from any environment.</p><p>We’re also excited to share that you'll now have access to 70+ models across 12+ providers — all through one API, one line of code to switch between them, and one set of credits to pay for them. And we’re quickly expanding this as we go.</p><p>You can browse through our <a href="https://developers.cloudflare.com/ai/models"><u>model catalog</u></a> to find the best model for your use case, from open-source models hosted on Cloudflare Workers AI to proprietary models from the major model providers. We’re excited to be expanding access to models from <b>Alibaba Cloud, AssemblyAI, Bytedance, Google, InWorld, MiniMax, OpenAI, Pixverse, Recraft, Runway, and Vidu</b> — who will provide their models through AI Gateway. Notably, we’re expanding our model offerings to include image, video, and speech models so that you can build multimodal applications.</p>
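            <p>To make the one-line switch concrete, here’s a minimal sketch using the two model identifiers that appear in this post; the request shapes mirror the examples shown here, and exact input schemas can vary per model:</p>
            <pre><code>// Cloudflare-hosted model on Workers AI:
const fromWorkersAI = await env.AI.run(
  '@cf/moonshotai/kimi-k2.5',
  { prompt: 'What is Cloudflare?' },
  { gateway: { id: 'default' } }
);

// Same call, routed to a third-party provider through AI Gateway.
// Only the model identifier changes:
const fromAnthropic = await env.AI.run(
  'anthropic/claude-opus-4-6',
  { input: 'What is Cloudflare?' },
  { gateway: { id: 'default' } }
);</code></pre>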
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ez5tichGEEn5k6SzCgWLm/380a685b14ee9732fdf87c6f88c8f39e/BLOG-3209_2.png" />
          </figure><p>Accessing all your models through one API also means you can manage all your AI spend in one place. Most companies today are calling <a href="https://aidbintel.com/pulse-survey"><u>an average of 3.5 models</u></a> across multiple providers, which means no one provider is able to give you a holistic view of your AI usage. <b>With AI Gateway, you’ll get one centralized place to monitor and manage AI spend.</b></p><p>By including custom metadata with your requests, you can get a breakdown of your costs on the attributes that you care about most, like spend by free vs. paid users, by individual customers, or by specific workflows in your app.</p>
            <pre><code>const response = await env.AI.run(
  '@cf/moonshotai/kimi-k2.5',
  { prompt: 'What is AI Gateway?' },
  { metadata: { teamId: 'AI', userId: 12345 } }
);</code></pre>
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ez3O7rmbrCUdD5R5UcuP9/4c219ff5ce1e24a0485a931b6af47608/BLOG-3209_3.png" />
          </figure>
    <div>
      <h3>Bring your own model</h3>
      <a href="#bring-your-own-model">
        
      </a>
    </div>
    <p>AI Gateway gives you access to models from all the providers through one API. But sometimes you need to run a model you've fine-tuned on your own data or one optimized for your specific use case. For that, we are working on letting users bring their own model to Workers AI. </p><p>The overwhelming majority of our traffic comes from dedicated instances for Enterprise customers who are running custom models on our platform, and we want to bring this to more customers. To do this, we leverage Replicate’s <a href="https://cog.run/"><u>Cog</u></a> technology to help you containerize machine learning models.</p><p>Cog is designed to be quite simple: all you need to do is write down dependencies in a <code>cog.yaml</code> file, and your inference code in a Python file. Cog abstracts away all the hard things about packaging ML models, such as CUDA dependencies, Python versions, and weight loading. </p><p>Example of a <code>cog.yaml</code> file:</p>
            <pre><code>build:
  python_version: "3.13"
  python_requirements: requirements.txt
predict: "predict.py:Predictor"</code></pre>
            <p>Example of a <code>predict.py</code> file, which has a function to set up the model and a function that runs when you receive an inference request (a prediction):</p>
            <pre><code>from cog import BasePredictor, Path, Input
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.net = torch.load("weights.pth")

    def predict(self,
            image: Path = Input(description="Image to enlarge"),
            scale: float = Input(description="Factor to scale image by", default=1.5)
    ) -&gt; Path:
        """Run a single prediction on the model"""
        # ... pre-processing ...
        output = self.net(input)
        # ... post-processing ...
        return output</code></pre>
            <p>Then, you can run <code>cog build</code> to build your container image, and push your Cog container to Workers AI. We will deploy and serve the model for you, which you then access through your usual Workers AI APIs. </p><p>We’re working on some big projects to bring this to more customers, like customer-facing APIs and wrangler commands so that you can push your own containers, as well as faster cold starts through GPU snapshotting. We’ve been testing this internally with Cloudflare teams and some external customers who are guiding our vision. If you’re interested in being a design partner with us, please reach out! Soon, anyone will be able to package their model and use it through Workers AI.</p>
    <div>
      <h3>The fast path to first token</h3>
      <a href="#the-fast-path-to-first-token">
        
      </a>
    </div>
    <p>Using Workers AI models with AI Gateway is particularly powerful if you’re building live agents – where a user's perception of speed hinges on time to first token, that is, how quickly the agent starts responding, rather than how long the full response takes. Even if total inference is 3 seconds, getting that first token 50ms faster makes the difference between an agent that feels zippy and one that feels sluggish.</p><p>Cloudflare's network of data centers in 330 cities around the world means AI Gateway is positioned close to both users and inference endpoints, minimizing the network time before streaming begins.</p><p>Workers AI also hosts open-source models on its public catalog, which now includes large models purpose-built for agents, including <a href="https://developers.cloudflare.com/workers-ai/models/kimi-k2.5"><u>Kimi K2.5</u></a> and real-time voice models. When you call these Cloudflare-hosted models through AI Gateway, there's no extra hop over the public Internet since your code and inference run on the same global network, giving your agents the lowest latency possible.</p>
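    <p>As an illustration of what time to first token looks like in code, here’s a minimal sketch that streams a response and measures the gap before the first chunk arrives. It assumes a model that supports streaming (Workers AI returns a ReadableStream when <code>stream</code> is set); the model and prompt are placeholders:</p>
            <pre><code>// Measure time to first token on a streamed response.
const start = Date.now();

const stream = await env.AI.run(
  '@cf/moonshotai/kimi-k2.5',
  { prompt: 'What is Cloudflare?', stream: true },
  { gateway: { id: 'default' } }
);

const reader = stream.getReader();
await reader.read(); // resolves when the first chunk (first tokens) lands
console.log(`time to first token: ${Date.now() - start} ms`);</code></pre>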
    <div>
      <h3>Built for reliability with automatic failover</h3>
      <a href="#built-for-reliability-with-automatic-failover">
        
      </a>
    </div>
    <p>When building agents, speed is not the only factor that users care about – reliability matters too. Every step in an agent workflow depends on the steps before it, so a single failed call can affect the entire downstream chain. </p><p>Through AI Gateway, if you're calling a model that's available on multiple providers and one provider goes down, we'll automatically route to another available provider without you having to write any failover logic of your own. </p><p>If you’re building <a href="https://blog.cloudflare.com/project-think/"><u>long-running agents with Agents SDK</u></a>, your streaming inference calls are also resilient to disconnects. AI Gateway buffers streaming responses as they’re generated, independently of your agent's lifetime. If your agent is interrupted mid-inference, it can reconnect to AI Gateway and retrieve the response without having to make a new inference call or pay twice for the same output tokens. Combined with the Agents SDK's built-in checkpointing, the end user never notices.</p>
    <div>
      <h3>Replicate</h3>
      <a href="#replicate">
        
      </a>
    </div>
    <p>The Replicate team has officially <a href="https://blog.cloudflare.com/replicate-joins-cloudflare/"><u>joined</u></a> our AI Platform team, so much so that we don’t even consider ourselves separate teams anymore. We’ve been hard at work on integrations between Replicate and Cloudflare, which include bringing all the Replicate models onto AI Gateway and replatforming the hosted models onto Cloudflare infrastructure. Soon, you’ll be able to access the models you loved on Replicate through AI Gateway, and host the models you deployed on Replicate on Workers AI as well.</p>
    <div>
      <h3>Get started</h3>
      <a href="#get-started">
        
      </a>
    </div>
    <p>To get started, check out our documentation for <a href="https://developers.cloudflare.com/ai-gateway"><u>AI Gateway</u></a> or <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>. Learn more about building agents on Cloudflare through <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>. </p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[LLM]]></category>
            <guid isPermaLink="false">2vIUFXJLJcMgjgY6jFnQ7G</guid>
            <dc:creator>Ming Lu</dc:creator>
            <dc:creator>Michelle Chen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building the foundation for running extra-large language models]]></title>
            <link>https://blog.cloudflare.com/high-performance-llms/</link>
            <pubDate>Thu, 16 Apr 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ We built a custom technology stack to run fast large language models on Cloudflare’s infrastructure. This post explores the engineering trade-offs and technical optimizations required to make high-performance AI inference accessible. ]]></description>
            <content:encoded><![CDATA[ <p>An agent needs to be powered by a large language model. A few weeks ago, we announced that Workers AI is officially entering the arena for hosting large open-source models like Moonshot’s Kimi K2.5. Since then, we’ve made Kimi K2.5 3x faster and have more model additions in-flight. These models have been the backbone of a lot of the agentic products, harnesses, and tools that we have been launching this week. </p><p>Hosting AI models is an interesting challenge: it requires a delicate balance between software and very, very expensive hardware. At Cloudflare, we’re good at squeezing every bit of efficiency out of our hardware through clever software engineering. This is a deep dive on how we’re laying the foundation to run extra-large language models.</p>
    <div>
      <h2>Hardware configurations</h2>
      <a href="#hardware-configurations">
        
      </a>
    </div>
    <p>As we mentioned in <a href="https://blog.cloudflare.com/workers-ai-large-models/"><u>our previous Kimi K2.5 blog post</u></a>, we’re using a variety of hardware configurations in order to best serve models. The right configuration depends largely on the size of the inputs and outputs that users send to the model. For example, if you are using a model to write fanfiction, you might give it a few small prompts (input tokens) while asking it to generate pages of content (output tokens). </p><p>Conversely, if you are running a summarization task, you might be sending in hundreds of thousands of input tokens, but only generating a small summary with a few thousand output tokens. Presented with these opposing use cases, you have to make a choice — should you tune your model configuration so it’s faster at processing input tokens, or faster at generating output tokens?</p><p>When we launched large language models on Workers AI, we knew that most of the usage would come from agents. With agents, you send in a large number of input tokens. It starts off with a large system prompt, plus all the tool and MCP definitions. With the first user prompt, that context keeps growing. Each new prompt from the user sends a request to the model, which consists of everything that was said before — all the previous user prompts, assistant messages, code generated, etc. For Workers AI, that means we had to focus on two things: fast input token processing and fast tool calling.</p>
    <div>
      <h3>Prefill decode (PD) disaggregation</h3>
      <a href="#prefill-decode-pd-disaggregation">
        
      </a>
    </div>
    <p>One hardware configuration that we use to improve performance and efficiency is disaggregated prefill. There are two stages to processing an LLM request: prefill, which processes the input tokens and populates the KV cache, and decode, which generates output tokens. Prefill is usually compute bound, while decode is memory bound. This means that the parts of the GPU that are used in each stage are different, and since prefill is always done before decode, the stages block one another. Ultimately, it means that we are not efficiently utilizing all of our GPU power if we do both prefill and decode on a single machine.</p><p>With prefill decode disaggregation, separate inference servers are run for each stage. First, a request is sent to the prefill stage, which performs prefill and stores the result in its KV cache. Then the same request is sent to the decode server, with information about how to transfer the KV cache from the prefill server and begin decoding. This has a number of advantages, because it allows the servers to be tuned independently for the role they are performing, scaled to account for more input-heavy or output-heavy traffic, or even to run on heterogeneous hardware.</p><p>Achieving this architecture requires a relatively complex load balancer. Beyond just routing the requests as described above, it must rewrite the responses (including streaming SSE) of the decode server to include information from the prefill server such as cached tokens. To complicate matters, different inference servers require different information to initiate the KV cache transfer. We extended this to implement token-aware load balancing, in which there is a pool of prefill and decode endpoints, and the load balancer estimates how many prefill or decode tokens are in-flight to each endpoint in the pool and attempts to spread this load evenly. </p><p>After our public model launch, our input/output patterns changed drastically again. We took the time to analyze our new usage patterns and then tuned our configuration to fit our customers’ use cases.</p><p>Here’s a graph of our p90 Time to First Token drop after shifting traffic to our new PD disaggregated architecture, whilst request volume increased, using the same quantity of GPUs. We see a significant improvement in the tail latency variance.</p>
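    <p>To sketch the core idea of token-aware load balancing (all names here are hypothetical, not our internal implementation): track an estimate of in-flight tokens per endpoint, route each request to the least-loaded endpoint in the pool, and release the estimate when the request completes.</p>
            <pre><code>// Hypothetical sketch of token-aware load balancing.
interface Endpoint {
  url: string;
  inflightTokens: number; // estimated tokens currently in-flight
}

function pickEndpoint(pool: Endpoint[], requestTokens: number): Endpoint {
  // Least-loaded selection across the prefill (or decode) pool.
  // Pool is assumed non-empty.
  const target = pool.reduce((best, ep) =&gt;
    ep.inflightTokens &lt; best.inflightTokens ? ep : best
  );
  target.inflightTokens += requestTokens; // account for the new request
  return target;
}

function releaseEndpoint(ep: Endpoint, requestTokens: number): void {
  // Called when the request finishes (or its stream ends).
  ep.inflightTokens = Math.max(0, ep.inflightTokens - requestTokens);
}</code></pre>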
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5j6AEIF1GMhnHuJD08VQtw/9e65e2badc5fd1e75557230e8f455ccc/BLOG-3266_2.png" />
          </figure><p>Similarly, p90 time per token went from ~100 ms with high variance to 20-30 ms, a 3x improvement in intertoken latency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6MNBGFHwI3U0bYwuAosg2g/394f4f5708cc1d3b045dc3f52c64b808/BLOG-3266_3.png" />
          </figure>
    <div>
      <h3>Prompt Caching</h3>
      <a href="#prompt-caching">
        
      </a>
    </div>
    <p>Since agentic use cases usually have long contexts, we optimize for efficient prompt caching so that we don’t recompute input tensors on every turn. We leverage a header called <code>x-session-affinity</code> to help requests route to the region that already has the computed input tensors. We wrote about this in our <a href="https://blog.cloudflare.com/workers-ai-large-models/"><u>original blog post</u></a> about launching large LLMs on Workers AI. We added session affinity headers to popular agent harnesses <a href="https://github.com/anomalyco/opencode/pull/20744"><u>like OpenCode</u></a>, where we noticed a significant increase in total throughput. A small difference in our users' prompt cache hit rates can add up to a meaningful number of additional GPUs needed to run a model. While we have KV-aware routing internally, we also rely on clients sending the <code>x-session-affinity</code> header to be explicit about prompt caching. We incentivize the use of the header by offering discounted cached tokens. We highly encourage users to <a href="https://developers.cloudflare.com/workers-ai/features/prompt-caching/"><u>leverage prompt caching</u></a> in order to have faster inference and cheaper pricing.</p>
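    <p>In practice, sending the header is one extra line on the request. Here’s a minimal sketch against the Workers AI REST endpoint; the header name is the one documented above, while the account ID, token, session ID, and body shape are placeholders:</p>
            <pre><code>// Pin all requests from one agent session to the same cache locality.
const response = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/run/@cf/moonshotai/kimi-k2.5`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
      'x-session-affinity': sessionId, // e.g. one stable UUID per conversation
    },
    body: JSON.stringify({ prompt: 'Summarize the conversation so far.' }),
  }
);</code></pre>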
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gKzvEB0JlL1L0IHIby7TV/fe82399fe870d343a572f053d62b4e52/BLOG-3266_4.png" />
          </figure><p>We worked with our heaviest internal users to adopt this header. The result was an increase in input token cache hit ratios from 60% to 80% during peak times. This significantly increases the request throughput that we can handle, while offering better performance for interactive or time-sensitive sessions like OpenCode or AI code reviews.</p>
    <div>
      <h3>KV-cache optimization</h3>
      <a href="#kv-cache-optimization">
        
      </a>
    </div>
    <p>As we’re serving larger models now, one instance can span multiple GPUs. This means that we had to find an efficient way to share KV cache across GPUs. KV cache is where all the input tensors from prefill (the result of prompts in a session) are stored, and it initially lives in the VRAM of a GPU. Every GPU has a fixed VRAM size, but if your model instance requires multiple GPUs, there needs to be a way for the KV cache to be shared and transferred across GPUs. To achieve this for Kimi, we leveraged <a href="https://github.com/kvcache-ai/Mooncake"><u>Moonshot AI’s</u></a> Mooncake Transfer Engine and Mooncake Store.</p><p>Mooncake’s Transfer Engine is a high-performance data transfer framework. It works with different transports such as Remote Direct Memory Access (RDMA), NVLink, and NVMe over Fabric, which enables direct memory-to-memory data transfer without involving the CPU. It improves the speed of transferring data across multiple GPU machines, which is particularly important in multi-GPU and multi-node configurations for models. </p><p>When paired with LMCache or SGLang HiCache, the cache is shared across all nodes in the cluster, allowing a prefill node to identify and re-use a cache from a previous request that was originally pre-filled on a different node. This eliminates the need for session-aware routing within a cluster and allows us to load balance the traffic much more evenly. Mooncake Store also allows us to extend the cache beyond GPU VRAM, and leverage NVMe storage. This extends the time that sessions remain in cache, improving our cache hit ratio and allowing us to handle more traffic and offer better performance to users.</p>
    <div>
      <h3>Speculative decoding</h3>
      <a href="#speculative-decoding">
        
      </a>
    </div>
    <p>LLMs work by predicting the next token in a sequence, based on the tokens that came before it. With a naive implementation, each forward pass of the model predicts only a single next token, but with the right technique we can accept several tokens (<i>n+1, n+2, ...</i>) per forward pass. This popular technique is known as speculative decoding, which we’ve written about in a <a href="https://blog.cloudflare.com/making-workers-ai-faster/"><u>previous post on Workers AI.</u></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25Au0LKNr8ozQg5UQM34wY/06080a8699d5f8edb050450913a19a40/BLOG-3266_5.png" />
          </figure><p>With speculative decoding, we leverage a smaller LLM (the draft model) to generate a few candidate tokens for the target model to choose from. The target model then just has to select from a small pool of candidate tokens in a single forward pass. Validating the tokens is faster and less computationally expensive than using the larger target model to generate the tokens. However, quality is still upheld as the target model ultimately has to accept or reject the draft tokens.</p><p>In agentic use cases, speculative decoding really shines because of the volume of tool calls and structured outputs that models need to generate. A tool call is largely predictable — you know there will be a name, description, and it’s wrapped in a JSON envelope.</p><p>To do this with Kimi K2.5, we leverage <a href="https://huggingface.co/nvidia/Kimi-K2.5-Thinking-Eagle3"><u>NVIDIA’s EAGLE-3</u></a> (Extrapolation Algorithm for Greater Language-model Efficiency) draft model. The levers for tuning speculative decoding include the number of future tokens to generate. As a result, we’re able to achieve high-quality inference while speeding up tokens per second throughput.</p>
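    <p>Conceptually, the accept/reject loop looks like the sketch below. Everything here is illustrative, with hypothetical <code>draftModel</code> and <code>targetModel</code> interfaces; the real implementations live inside the inference engine:</p>
            <pre><code>// Hypothetical model interfaces, for illustration only.
declare const draftModel: {
  propose(ctx: number[], k: number): Promise&lt;number[]&gt;;
};
declare const targetModel: {
  // Returns, for each draft position plus one bonus position, the token
  // the target model itself would emit. One forward pass total.
  verify(ctx: number[], draft: number[]): Promise&lt;number[]&gt;;
};

async function speculativeStep(context: number[], k: number): Promise&lt;number[]&gt; {
  const draft = await draftModel.propose(context, k); // k cheap candidate tokens
  const targetChoices = await targetModel.verify(context, draft); // k + 1 choices

  const accepted: number[] = [];
  let i = 0;
  for (; i &lt; draft.length; i++) {
    if (draft[i] !== targetChoices[i]) break; // first disagreement ends the run
    accepted.push(draft[i]); // agreement: a token for free
  }
  // Take the target's own token at the disagreement point (or the bonus
  // token after a fully accepted draft), so every step emits at least one.
  accepted.push(targetChoices[i]);
  return accepted; // between 1 and k + 1 tokens per target forward pass
}</code></pre>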
    <div>
      <h2>Infire: our proprietary inference engine</h2>
      <a href="#infire-our-proprietary-inference-engine">
        
      </a>
    </div>
    <p>As we announced during Birthday Week in 2025, Cloudflare has a proprietary inference engine, <a href="https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/"><u>Infire</u></a>, that makes machine learning models faster. Infire is an inference engine written in Rust, designed to support Cloudflare’s unique challenges with inference given our distributed global network. We’ve extended Infire support for this new class of large language models we are planning to run, which meant we had to build a few new features to make it all work.</p>
    <div>
      <h3>Multi-GPU support</h3>
      <a href="#multi-gpu-support">
        
      </a>
    </div>
    <p>Large language models like Kimi K2.5 have over 1 trillion parameters, which is about 560GB of model weights. A typical H100 has about 80GB of VRAM, and the model weights need to be loaded into GPU memory in order to run. This means that a model like Kimi K2.5 needs at least 8 H100s in order to load the model into memory and run — and that’s not even including the extra VRAM you would need for KV Cache, which includes your context window.</p><p>Since initially launching Infire, we have added multi-GPU support, letting the inference engine run across multiple GPUs in either pipeline-parallel or tensor-parallel modes, with expert parallelism supported as well.</p><p>For pipeline parallelism, Infire attempts to properly load balance all stages of the pipeline, in order to prevent the GPUs of one stage from starving while other stages are executing. On the other hand, for tensor parallelism, Infire optimizes for reducing cross-GPU communication, making it as fast as possible. For most models, utilizing both pipeline parallelism and tensor parallelism in tandem provides the best balance of throughput and latency.</p>
    <div>
      <h3>Even lower memory overhead</h3>
      <a href="#even-lower-memory-overhead">
        
      </a>
    </div>
    <p>Infire already had much lower GPU memory overhead than <a href="https://vllm.ai/"><u>vLLM</u></a>, and we optimized it even further, tightening the memory required for internal state like activations. Currently, Infire can run Llama 4 Scout on just two H200 GPUs with more than 56 GiB remaining for KV-cache, sufficient for more than 1.2M tokens. Infire can also run Kimi K2.5 on 8 H100 GPUs (yes, H100s, not H200s), with more than 30 GiB still available for KV-cache. In both cases, you would have trouble even booting vLLM in the first place.</p>
    <div>
      <h3>Faster cold-starts</h3>
      <a href="#faster-cold-starts">
        
      </a>
    </div>
    <p>While adding multi-GPU support, we identified additional opportunities to improve boot times. Even for the largest models, such as Kimi K2.5, Infire can begin serving requests in under 20 seconds. The load times are only bounded by the drive speed.</p>
    <div>
      <h3>Maximizing our hardware for faster throughput</h3>
      <a href="#maximizing-our-hardware-for-faster-throughput">
        
      </a>
    </div>
    <p>Investing in our proprietary inference engine lets us maximize our hardware, getting up to 20% higher tokens-per-second throughput on unconstrained systems, and lets us use lower-end hardware to run the latest models where that was previously completely infeasible.</p>
    <div>
      <h2>The journey doesn’t end</h2>
      <a href="#the-journey-doesnt-end">
        
      </a>
    </div>
    <p>New technologies, research, and models come out on a weekly basis for the machine learning community. We’re continuously optimizing our technology stack in order to provide high-quality, performant inference for our customers while operating our GPUs efficiently. If these sound like interesting challenges for you – <a href="https://www.cloudflare.com/careers/jobs/"><u>we’re hiring</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Infrastructure]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">71xNLfh83S7Fg78QEcgdhf</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Kevin Flansburg</dc:creator>
            <dc:creator>Vlad Krasnov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Email Service: now in public beta. Ready for your agents]]></title>
            <link>https://blog.cloudflare.com/email-for-agents/</link>
            <pubDate>Thu, 16 Apr 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ Agents are becoming multi-channel. That means making them available wherever your users already are — including the inbox. Today, Cloudflare Email Service enters public beta with the infrastructure layer to make that easy: send, receive, and process email natively from your agents.
 ]]></description>
            <content:encoded><![CDATA[ <p>Email is the most accessible interface in the world. It is ubiquitous. There’s no need for a custom chat application, no custom SDK for each channel. Everyone already has an email address, which means everyone can already interact with your application or agent. And your agent can interact with anyone.</p><p>If you are building an application, you already rely on email for signups, notifications, and invoices. Increasingly, it is not just your application logic that needs this channel. Your agents do, too. During our private beta, we talked to developers who are building exactly this: customer support agents, invoice processing pipelines, account verification flows, multi-agent workflows. All built on top of email. The pattern is clear: email is becoming a core interface for agents, and developers need infrastructure purpose-built for it.</p><p>Cloudflare Email Service is that infrastructure. With <b>Email Routing</b>, you can receive email in your application or agent. With <b>Email Sending</b>, you can reply to emails or send outbound messages to notify your users when your agents are done doing work. And with the rest of the developer platform, you can build a full email client, with the <a href="https://blog.cloudflare.com/project-think/"><u>Agents SDK</u></a> onEmail hook as native functionality. </p><p>Today, as part of Agents Week, Cloudflare Email Service is entering <b>public beta</b>, allowing any application and any agent to send emails. We are also completing the toolkit for building email-native agents: </p><ul><li><p>Email Sending binding, available from your Workers and the Agents SDK </p></li><li><p>A new Email MCP server</p></li><li><p>Wrangler CLI email commands</p></li><li><p>Skills for coding agents</p></li><li><p>An open-source agentic inbox reference app</p></li></ul>
    <div>
      <h2>Email Sending: now in public beta</h2>
      <a href="#email-sending-now-in-public-beta">
        
      </a>
    </div>
    <p>Email Sending graduates from private beta to <b>public beta </b>today. You can now send transactional emails directly from Workers with a native Workers binding — no API keys, no secrets management.</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {
    await env.EMAIL.send({
      to: "user@example.com",
      from: "notifications@your-domain.com",
      subject: "Your order has shipped",
      text: "Your order #1234 has shipped and is on its way."
    });
    return new Response("Email sent");
  },
};
</code></pre>
            <p>Or send from any platform, any language, using the REST API and our TypeScript, Python, and Go SDKs:</p>
            <pre><code>curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/email-service/send" \
   --header "Authorization: Bearer &lt;API_TOKEN&gt;" \
   --header "Content-Type: application/json" \
   --data '{
     "to": "user@example.com",
     "from": "notifications@your-domain.com",
     "subject": "Your order has shipped",
     "text": "Your order #1234 has shipped and is on its way."
   }'
</code></pre>
            <p>Sending email that actually reaches inboxes usually means wrestling with SPF, DKIM, and DMARC records. When you add your domain to Email Service, we configure all of it automatically. Your emails are authenticated and delivered, not flagged as spam. And because Email Service is a global service built on Cloudflare's network, your emails are delivered with low latency anywhere in the world.</p><p>Combined with <a href="https://developers.cloudflare.com/email-routing/"><u>Email Routing</u></a>, which has been free and available for years, you now have complete bidirectional email within a single platform. Receive an email, process it in a Worker, and reply, all without leaving Cloudflare.</p><p>For the full deep dive on Email Sending, <a href="https://blog.cloudflare.com/email-service/"><u>refer to our Birthday Week announcement</u></a>. The rest of this post describes what Email Service unlocks for agents.</p>
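            <p>Before moving on to agents, here’s a minimal sketch of that receive-process-reply loop in a plain Email Worker. It assumes the <code>mimetext</code> helper and the <code>EmailMessage</code> class from <code>cloudflare:email</code>; the names and addresses are placeholders:</p>
            <pre><code>import { EmailMessage } from "cloudflare:email";
import { createMimeMessage } from "mimetext";

export default {
  async email(message, env, ctx) {
    // Build a reply that threads under the original message.
    const msg = createMimeMessage();
    msg.setSender({ name: "Order Bot", addr: "notifications@your-domain.com" });
    msg.setRecipient(message.from);
    msg.setHeader("In-Reply-To", message.headers.get("Message-ID"));
    msg.setSubject("Re: " + (message.headers.get("Subject") ?? ""));
    msg.addMessage({ contentType: "text/plain", data: "Thanks, we're on it." });

    await message.reply(
      new EmailMessage("notifications@your-domain.com", message.from, msg.asRaw())
    );
  },
};</code></pre>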
    <div>
      <h2>Agents SDK: your agent is email-native</h2>
      <a href="#agents-sdk-your-agent-is-email-native">
        
      </a>
    </div>
    <p>The Agents SDK for building agents on Cloudflare already has a first-class <a href="https://developers.cloudflare.com/agents/api-reference/agents-api/"><u>onEmail hook</u></a> for receiving and processing inbound email. But until now, your agent could only reply synchronously, or send emails to members of your Cloudflare account. </p><p>With Email Sending, that constraint is gone. This is the difference between a chatbot and an agent.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aGV0BVpbrj3ql5TubPMXx/b85351f13c5fae93a27d11e20a5fc11e/BLOG-3210_2.png" />
          </figure><p><sup><i>Email agents receive a message, orchestrate work across the platform, and respond asynchronously.</i></sup></p><p>A chatbot responds in the moment or not at all. An agent thinks, acts, and communicates on its own timeline. With Email Sending, your agent can receive a message, spend an hour processing data, check three other systems, and then reply with a complete answer. It can schedule follow-ups. It can escalate when it detects an edge case. It can operate independently. In other words: it can actually do work, not just answer questions. </p><p>Here's what a support agent looks like with the full pipeline — receive, persist, and reply:</p>
            <pre><code>import { Agent, routeAgentEmail } from "agents";
import { createAddressBasedEmailResolver, type AgentEmail } from "agents/email";
import PostalMime from "postal-mime";

export class SupportAgent extends Agent {
  async onEmail(email: AgentEmail) {
    const raw = await email.getRaw();
    const parsed = await PostalMime.parse(raw);

    // Persist in agent state
    this.setState({
      ...this.state,
      ticket: { from: email.from, subject: parsed.subject, body: parsed.text, messageId: parsed.messageId },
    });

    // Kick off long running background agent task 
    // Or place a message on a Queue to be handled by another Worker

    // Reply here or in other Worker handler, like a Queue handler
    await this.sendEmail({
      binding: this.env.EMAIL,
      fromName: "Support Agent",
      from: "support@yourdomain.com",
      to: this.state.ticket.from,
      inReplyTo: this.state.ticket.messageId,
      subject: `Re: ${this.state.ticket.subject}`,
      text: `Thanks for reaching out. We received your message about "${this.state.ticket.subject}" and will follow up shortly.`
    });
  }
}

export default {
  async email(message, env) {
    await routeAgentEmail(message, env, {
      resolver: createAddressBasedEmailResolver("SupportAgent"),
    });
  },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>If you're new to the Agents SDK's email capabilities, here's what's happening under the hood.</p><p><b>Each agent gets its own identity from a single domain.</b> The address-based resolver routes support@yourdomain.com to a "support" agent instance, sales@yourdomain.com to a "sales" instance, and so on. You don't need to provision separate inboxes — the routing is built into the address. You can even use sub-addressing (NotificationAgent+user123@yourdomain.com) to route to different agent namespaces and instances.</p><p><b>State persists across emails.</b> Because agents are backed by Durable Objects, calling this.setState() means your agent remembers conversation history, contact information, and context across sessions. The inbox becomes the agent's memory, without needing a separate database or vector store.</p><p><b>Secure reply routing is built in.</b> When your agent sends an email and expects a reply, you can sign the routing headers with HMAC-SHA256 so that replies route back to the exact agent instance that sent the original message. This prevents attackers from forging headers to route emails to arbitrary agent instances — a security concern that most "email for agents" solutions haven't addressed.</p><p>This is the complete email agent pipeline that teams are building from scratch elsewhere: receive email, parse it, classify it, persist state, kick off async workflows, reply or escalate — all within a single Agent class, deployed globally on Cloudflare's network.</p>
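            <p>The Agents SDK handles the reply-routing signatures for you, but the underlying idea is standard Web Crypto. Here’s a sketch of what signing a routing value with HMAC-SHA256 looks like; the function name and value format are invented for illustration and are not the SDK’s actual wire format:</p>
            <pre><code>// Sign an agent routing value so replies can be verified, not trusted.
async function signRoutingValue(secret: string, agentId: string): Promise&lt;string&gt; {
  const enc = new TextEncoder();
  const key = await crypto.subtle.importKey(
    "raw",
    enc.encode(secret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"]
  );
  const mac = await crypto.subtle.sign("HMAC", key, enc.encode(agentId));
  // Pair the agent ID with its MAC; the receiver recomputes the MAC and
  // rejects mismatches, so forged headers can't reach arbitrary instances.
  return `${agentId}.${btoa(String.fromCharCode(...new Uint8Array(mac)))}`;
}</code></pre>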
    <div>
      <h2>Email tooling for your agents: MCP server, Wrangler CLI, and skills</h2>
      <a href="#email-tooling-for-your-agents-mcp-server-wrangler-cli-and-skills">
        
      </a>
    </div>
    <p>Email Service isn't only for agents running on Cloudflare. Agents run everywhere, whether it’s coding agents like Claude Code, Cursor, or Copilot running locally or in remote environments, or production agents running in containers or external clouds. They all need to send email from those environments. We're shipping three integrations that make Email Service accessible to any agent, regardless of where it runs.</p><p>Email is now available through the <a href="https://github.com/cloudflare/mcp"><u>Cloudflare MCP server</u></a>, the same <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a>-powered server that gives agents access to the entire Cloudflare API. With this MCP server, your agent can discover and call the Email endpoints to send and configure emails. You can send an email with a simple prompt:</p>
            <pre><code>"Send me a notification email at hello@example.com from my staging domain when the build completes"</code></pre>
            <p>For agents running on a computer or a sandbox with bash access, the Wrangler CLI solves the MCP context window problem that we discussed in the <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a> blog post — tool definitions can consume tens of thousands of tokens before your agent even starts processing a single message. With Wrangler, your agent starts with near-zero context overhead and discovers capabilities on demand through <code>--help</code> commands. Here is how your agent can send an email via Wrangler:</p>
            <pre><code>wrangler email send \
  --to "teammate@example.com" \
  --from "agent@your-domain.com" \
  --subject "Build completed" \
  --text "The build passed. Deployed to staging."
</code></pre>
            <p>Regardless of whether you give your agent the Cloudflare MCP or the Wrangler CLI, your agent will now be able to send emails on your behalf with just a prompt.</p>
    <div>
      <h3>Skills</h3>
      <a href="#skills">
        
      </a>
    </div>
    <p>We are also publishing a <a href="https://github.com/cloudflare/skills"><u>Cloudflare Email Service skill</u></a>. It gives your agents complete guidance: configuring the Workers binding, sending emails via the REST API or SDKs, handling inbound email with Email Routing configuration, building with Agents SDK, and managing email through Wrangler CLI or MCP. It also covers deliverability best practices and how to craft good transactional emails that land in inboxes rather than spam. Drop it into your project and your coding agent has everything needed to build production-ready email on Cloudflare.</p>
    <div>
      <h2>Open-sourcing tools for email agents</h2>
      <a href="#open-sourcing-tools-for-email-agents">
        
      </a>
    </div>
    <p>During the private beta, we also experimented with email agents. It became clear that you often want to keep the human-in-the-loop element to review emails and see what the agent is doing. The best way to do that is to have a fully featured email client with agent automations built in.</p><p>That’s why we built <a href="https://github.com/cloudflare/agentic-inbox"><u>Agentic Inbox</u></a>: a reference application with full conversation threading, email rendering, receiving and storing emails and their attachments, and automatically replying to emails. It includes a dedicated MCP server built in, so external agents can draft emails for your review before sending from your Agentic Inbox. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PgrSXLUD5kgA2SFOhksCS/75f85353cf710842420ed806be31b6f6/BLOG-3210_3.png" />
          </figure><p>We’re <a href="https://github.com/cloudflare/agentic-inbox"><u>open-sourcing Agentic Inbox</u></a> as a reference application for how to build a full email application using Email Routing for inbound, Email Sending for outbound, Workers AI for classification, R2 for attachments, and Agents SDK for stateful agent logic. You can deploy it today to get a full inbox, email client, and agent for your emails, with the click of a button.</p><p>We want email agent tooling to be composable and reusable. Rather than every team rebuilding the same inbound-classify-reply pipeline, start with this reference application. Fork it, extend it, use it as a starting point for your own email agents that fit your workflows.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/agentic-inbox"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>Try it out today</h2>
      <a href="#try-it-out-today">
        
      </a>
    </div>
    <p>Email is where the world’s most important workflows live, but for agents, it has often been a difficult channel to reach. With <b>Email Sending</b> now in public beta, Cloudflare Email Service becomes a complete platform for bidirectional communication, making the inbox a first-class interface for your agents.</p><p>Whether you’re building a support agent that meets customers in their inbox or a background process that keeps your team updated in real time, your agents now have a seamless way to communicate on a global scale. The inbox is no longer a silo. Now it’s one more place for your agents to be helpful.</p><ul><li><p>Try out <a href="https://dash.cloudflare.com/?to=/:account/email-service/sending"><u>Email Sending in the Cloudflare Dashboard</u></a></p></li><li><p>Read the <a href="http://developers.cloudflare.com/email-service/"><u>Email Service documentation</u></a></p></li><li><p>Follow the <a href="http://developers.cloudflare.com/agents/api-reference/email"><u>Agents SDK email docs</u></a> </p></li><li><p>Check out the <a href="http://github.com/cloudflare/mcp-server-cloudflare"><u>Email Service MCP server</u></a> and <a href="https://github.com/cloudflare/skills"><u>Skills</u></a></p></li><li><p><a href="https://github.com/cloudflare/agentic-inbox"><u>Deploy the open-source reference app</u></a></p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YaSov283la8ajbi0Sbprq/9108f26b499fd976470a79ee74034e59/BLOG-3210_5.png" />
          </figure>
    <div>
      <h2>Watch on Cloudflare TV</h2>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Email]]></category>
            <guid isPermaLink="false">3G2uQLc6OAHayqaZxxDrrZ</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Eric Falcão</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Agent Lee - a new interface to the Cloudflare stack]]></title>
            <link>https://blog.cloudflare.com/introducing-agent-lee/</link>
            <pubDate>Wed, 15 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Agent Lee is an in-dashboard agent that shifts Cloudflare’s interface from manual tab-switching to a single prompt. Using sandboxed TypeScript, it helps you troubleshoot and manage your stack as a grounded technical collaborator.
 ]]></description>
            <content:encoded><![CDATA[ <p>While there have been small improvements along the way, the interface of technical products has not really changed since the dawn of the Internet. It still means clicking five pages deep, cross-referencing logs across tabs, and hunting for hidden toggles.</p><p>AI gives us the opportunity to rethink all that. Instead of complexity spread over a sprawling graphical user interface, what if you could describe in plain language what you want to achieve? </p><p>This is the future — and we’re launching it today. We didn’t want to just put an agent in a dashboard. We wanted to create an entirely new way to interact with our entire platform. Any task, any surface, a single prompt.</p><p>Introducing Agent Lee.</p><p>Agent Lee is an in-dashboard AI assistant that understands <b>your</b> Cloudflare account. </p><p>It can help you with troubleshooting, which, today, is a manual grind. If your Worker starts returning 503s at 02:00 UTC, finding the root cause, be it an R2 bucket, a misconfigured route, or a hidden rate limit, means opening half a dozen tabs and hoping you recognize the pattern. Most developers don't have a teammate who knows the entire platform standing over their shoulder at 2 a.m. Agent Lee does. </p><p>But it won’t just troubleshoot for you at 2 a.m. Agent Lee will also fix the problem for you on the spot.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Iva79HIiHPUrK8NLukkwH/dd1cf1709ab04f6d5825124cecd20a5e/BLOG-3231_2.png" />
          </figure><p>Agent Lee has been running in an active beta during which it has served over 18,000 daily users, executing nearly a quarter of a million tool calls per day. While we are confident in its current capabilities and success in production, this is a system we are continuously developing. As it remains in beta, you may encounter unexpected limitations or edge cases as we refine its performance. We encourage you to use the feedback form below to help us make it better every day.</p>
    <div>
      <h2>What Agent Lee can do</h2>
      <a href="#what-agent-lee-can-do">
        
      </a>
    </div>
    <p>Agent Lee is built directly into the dashboard and understands the resources in your account. It knows your Workers, your zones, your DNS configuration, your error rates. The knowledge that today lives across six tabs and two browser windows will now live in one place, and you can talk to it.</p><p>With natural language, you can use it to:</p><ul><li><p><b>Answer questions about your account:</b> "Show me the top 5 error messages on my Worker."</p></li><li><p><b>Debug an issue:</b> "I can't access my site with the www prefix."</p></li><li><p><b>Apply a change:</b> "Enable Access for my domain."</p></li><li><p><b>Deploy a resource: </b>"Create a new R2 bucket for my photos and connect it to my Worker."</p></li></ul><p>Instead of switching between products, you describe what you want to do, and Agent Lee helps you get there with instructions and visualizations. It retrieves context, uses the right tools, and creates dynamic visualizations based on the types of questions you ask. Ask what your error rate looks like over the last 24 hours, and it renders a chart inline, pulling from your actual traffic, not sending you to a separate Analytics page.</p><div>
  
</div><p>Agent Lee isn't answering FAQ questions — it's doing real work, against real accounts, at scale. Today, Agent Lee serves ~18,000 daily users, executing ~250k tool calls per day across DNS, Workers, SSL/TLS, R2, Registrar, Cache, Cloudflare Tunnel, API Shield, and more. </p>
    <div>
      <h2>How we built it</h2>
      <a href="#how-we-built-it">
        
      </a>
    </div>
    
    <div>
      <h3>Code Mode</h3>
      <a href="#codemode">
        
      </a>
    </div>
    <p>Rather than presenting MCP tool definitions directly to the model, Agent Lee uses <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a> to convert the tools into a TypeScript API and asks the model to write code that calls it instead.</p><p>This works better for a couple of reasons. LLMs have seen a huge amount of real-world TypeScript but very few tool call examples, so they're more accurate when working in code. For multi-step tasks, the model can also chain calls together in a single script and return only the final result, skipping round-trips.</p><p>The generated code is sent to an upstream Cloudflare MCP server for sandboxed execution, but it goes through a Durable Object that acts as a credentialed proxy. Before any call goes out, the DO classifies the generated code as read or write by inspecting the method and body. Read operations are proxied directly. Write operations are blocked until you explicitly approve them through the elicitation gate. API keys are never present in the generated code — they're held inside the DO and injected server-side when the upstream call is made. The security boundary isn't just a sandbox that gets thrown away; it's a permission architecture that structurally prevents writes from happening without your approval.</p>
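    <p>A hedged sketch of that permission architecture (the names and shapes are illustrative, not Agent Lee’s actual code): classify each outbound call by method, proxy reads, gate writes behind approval, and inject credentials server-side only.</p>
            <pre><code>// Hypothetical credentialed-proxy sketch, e.g. inside a Durable Object.
const READ_METHODS = new Set(["GET", "HEAD"]);

async function proxyApiCall(
  req: { method: string; path: string; body?: unknown },
  approved: boolean, // set true only after the user passes the elicitation gate
  apiToken: string   // held server-side; never present in generated code
): Promise&lt;Response&gt; {
  const isWrite = !READ_METHODS.has(req.method.toUpperCase());
  if (isWrite &amp;&amp; !approved) {
    // Surface an approval prompt instead of executing the write.
    return new Response("approval required", { status: 403 });
  }
  return fetch(`https://api.cloudflare.com/client/v4${req.path}`, {
    method: req.method,
    headers: { Authorization: `Bearer ${apiToken}` },
    body: req.body === undefined ? undefined : JSON.stringify(req.body),
  });
}</code></pre>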
    <div>
      <h3>The MCP permission system</h3>
      <a href="#the-mcp-permission-system">
        
      </a>
    </div>
    <p>Agent Lee connects to Cloudflare's own MCP server, which exposes two tools: a search tool for querying API endpoints and an execute tool for writing code that performs API requests. This is the surface through which Agent Lee reads your account and, when you approve, writes to it.</p><p>Write operations go through an elicitation system that surfaces the approval step before any code executes. Agent Lee cannot skip this step. The permission model is the enforcement layer, and the confirmation prompt you see is not a UX courtesy. It's the gate.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/s8phQottGj8yVgc42Nzvl/3abb536756e10360b68cabc0522bcb30/BLOG-3231_4.png" />
          </figure>
    <div>
      <h2>Built on the same stack you can use</h2>
      <a href="#built-on-the-same-stack-you-can-use">
        
      </a>
    </div>
    <p>Every primitive Agent Lee is built on is available to all our customers: <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, and the same MCP infrastructure available to any Cloudflare developer. We didn't build internal tools that aren't available to you — instead we built it with the same Cloudflare lego blocks that you have access to.</p><p>Building Agent Lee on our own primitives wasn't just a design principle. It was the fastest way to find out what works and what doesn't. We built this in production, with real users, against real accounts. That means every limitation we hit is a limitation we can fix in the platform. Every pattern that works is one we can make easier for the next team that builds on top of it.</p><p>These are not opinions. They're what a quarter of a million tool calls across 18,000 daily users are telling us.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6877BA4kZUUP6qTs5ONucr/50d572b38fecdb3e77ab38d7976f06ed/image5.png" />
          </figure>
    <div>
      <h2>Generative UI</h2>
      <a href="#generative-ui">
        
      </a>
    </div>
    <p>Interacting with a platform should feel like collaborating with an expert. Conversations should transcend simple text. With Agent Lee, as your dialogue evolves, the platform dynamically generates UI components alongside textual responses to provide a richer, more actionable experience.</p><p>For example, if you ask about website traffic trends for the month, you won’t just get a paragraph of numbers. Agent Lee will render an interactive line graph, allowing you to visualize peaks and troughs in activity at a glance.</p><p>To give you full creative control, every conversation is accompanied by an adaptive grid. Here you can click and drag across the grid to carve out space for new UI blocks, then simply describe what you want to see and let the agent handle the heavy lifting.</p><p>Today, we support a diverse library of visual blocks, including dynamic tables, interactive charts, architecture maps, and more. By blending the flexibility of natural language with the clarity of structured UI, Agent Lee transforms your chat history into a living dashboard.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oTLXzK5eYGyJm6Y54cKbz/98bae92dc7523f63ac6515d8088e70f7/image4.png" />
          </figure>
    <div>
      <h2>Measuring quality and safety</h2>
      <a href="#measuring-quality-and-safety">
        
      </a>
    </div>
    <p>An agent that can take action on your account needs to be reliable and secure. Elicitations allow agentic systems to actively solicit information, preferences, or approvals from users or other systems mid-execution. When Agent Lee needs to take non-read actions on a user's behalf, we use elicitations, requiring an explicit approval action in the user interface. These guardrails allow Agent Lee to truly be a partner alongside you in managing your resources safely.</p><p>In addition to safety, we continuously measure quality:</p><ul><li><p>Evals to measure conversation success rate and information accuracy.</p></li><li><p>Feedback signals from user interactions (thumbs up / thumbs down).</p></li><li><p>Tool call execution success rate and hallucination scorers.</p></li><li><p>Per-product breakdown of conversation performance.</p></li></ul><p>These systems help us improve Agent Lee over time while keeping users in control. </p>
    <div>
      <h2>Our vision ahead</h2>
      <a href="#our-vision-ahead">
        
      </a>
    </div>
    <p>Agent Lee in the dashboard is only the beginning.</p><p>The bigger vision is Agent Lee as the interface to the entire Cloudflare platform — from anywhere. The dashboard today, the CLI next, your phone when you're on the go. The surface you use shouldn't matter. You should be able to describe what you need and have it done, regardless of where you are.</p><p>From there, Agent Lee gets proactive. Rather than waiting to be asked, it watches what matters to you (your Workers, your traffic, your error thresholds) and reaches out when something warrants attention. An agent that only responds is useful. One that notices things first is something different.</p><p>Underlying all of this is context. Agent Lee already knows your account configuration. Over time, it will know more: what you've asked before, what page you're on, what you were debugging last week. That accumulated context is what makes a platform feel less like a tool and more like a collaborator.</p><p>We're not there yet. Agent Lee today is the first step, running in production, doing real work at scale. The architecture is built to get to the rest. </p>
    <div>
      <h2>Try it out</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Agent Lee is available in beta for Free plan users. Log in to your <a href="https://dash.cloudflare.com/login"><u>Cloudflare dashboard</u></a> and click Ask AI in the upper right corner to get started.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4FThQbf24TcV1mYT49yi39/05df8347e14c8ef5e591d224a1a38393/Screenshot_2026-04-13_at_3.37.29%C3%A2__PM.png" />
          </figure><p>We'd love to know what you build and what you’d like to see in Agent Lee. Please share your feedback <a href="https://forms.gle/dSCHNkHpJt6Uwsvc8"><u>here</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NsNVsMvU9v3U03jY4kU54/28182a4d9f36f75f8e93e5fcf67c1f21/BLOG-3231_6.png" />
          </figure>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div>
<p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[SDK]]></category>
            <category><![CDATA[Dashboard]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4KNkr1nK6i3lbBEBRWnt5z</guid>
            <dc:creator>Kylie Czajkowski</dc:creator>
            <dc:creator>Aparna Somaiah</dc:creator>
            <dc:creator>Brayden Wilmoth</dc:creator>
        </item>
        <item>
            <title><![CDATA[Secure private networking for everyone: users, nodes, agents, Workers — introducing Cloudflare Mesh]]></title>
            <link>https://blog.cloudflare.com/mesh/</link>
            <pubDate>Tue, 14 Apr 2026 13:00:09 GMT</pubDate>
            <description><![CDATA[ Cloudflare Mesh provides secure, private network access for users, nodes, and autonomous AI agents. By integrating with Workers VPC, developers can now grant agents scoped access to private databases and APIs without manual tunnels.
 ]]></description>
            <content:encoded><![CDATA[ <p>AI agents have changed how teams think about private network access. Your coding agent needs to query a staging database. Your production agent needs to call an internal API. Your personal AI assistant needs to reach a service running on your home network. The clients are no longer just humans or services. They're <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>agents</u></a>, running autonomously, making requests you didn't explicitly approve, against infrastructure you need to keep secure.</p><p>Each of these workflows has the same underlying problem: agents need to reach private resources, but the tools for doing that were built for humans, not autonomous software. VPNs require interactive login. SSH tunnels require manual setup. Exposing services publicly is a security risk. And none of these approaches give you visibility into what the agent is actually doing once it's connected.</p><p>Today, <b>we're introducing Cloudflare Mesh</b> to connect your private networks together and provide secure access for your agents. We're also integrating Mesh with <a href="https://www.cloudflare.com/developer-platform/"><u>Cloudflare Developer Platform</u></a> so that <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, and agents built with the Agents SDK can reach your private infrastructure directly.</p><p>If you’re using <a href="https://www.cloudflare.com/sase/"><u>Cloudflare One’s SASE and Zero Trust suite</u></a>, you already have access to Mesh. You don’t need a new technology paradigm to secure agentic workloads. You need a SASE that was built for the agentic era, and that’s Cloudflare One. Cloudflare Mesh is a new experience with a simpler setup that leverages the on-ramps you’re already familiar with: WARP Connector <i>(now called a Cloudflare Mesh node)</i> and WARP Client <i>(now called Cloudflare One Client)</i>. Together, these create a private network for human, developer, and agent traffic. Mesh is directly integrated into your existing Cloudflare One deployment. Your existing Gateway policies, Access rules, and device posture checks apply to Mesh traffic automatically.</p><p>If you're a developer who just wants private networking for your agents, services, and team, Mesh is where you start. Set it up in minutes, connect your networks, and secure your traffic. And because Mesh runs on the <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a> platform, you can grow into more advanced capabilities over time: <a href="https://www.cloudflare.com/sase/products/gateway/"><u>Gateway</u></a> network, DNS, and HTTP policies for fine-grained traffic control, <a href="https://www.cloudflare.com/sase/use-cases/infrastructure-access/"><u>Access for Infrastructure</u></a> for SSH and RDP session management, <a href="https://www.cloudflare.com/sase/products/browser-isolation/"><u>Browser Isolation</u></a> for safe web access, <a href="https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/"><u>DLP</u></a> to prevent sensitive data from leaving your network, and <a href="https://www.cloudflare.com/sase/products/casb/"><u>CASB</u></a> for SaaS security. You won’t have to plan for all of this on day one, and you won’t have to migrate when you eventually need it.</p>
    <div>
      <h2>New agentic workflows</h2>
      <a href="#new-agentic-workflows">
        
      </a>
    </div>
    <p>Private networking has always been about connecting clients to resources — SSH into a server, query a database, access an internal API. What's changed is who the clients are. A year ago, the answer was your developers and your services. Today, it's increasingly your agents.</p><p>This isn't theoretical. Look at the ecosystem: the explosion of <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>MCP (Model Context Protocol) servers</u></a> providing tool access, coding agents that need to read from private repos and databases, personal assistants running on home hardware. Each of these patterns assumes the agent can reach the resources it needs. When those resources are isolated in private networks, the agent is stuck. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12RTZcOBkkKwmaRe5VYn9k/e1b00401bee274a7643a4dfd4139e2df/BLOG-3215_2.png" />
          </figure><p>This creates three workflows that are hard to secure today:</p><ol><li><p>Accessing a personal agent from a mobile device. You're running OpenClaw on a Mac mini at home. You want to reach it from your phone, your laptop at a coffee shop, or your work machine. But exposing it to the public Internet (even behind a password) leaves you exposed: your agent has shell access, file system access, and network access to your home network. One misconfiguration and anyone can reach it.</p></li><li><p>Letting a coding agent access your staging environment. You're using Claude Code, Cursor, or Codex on your laptop. You ask it to check deployment status, query analytics from a staging database, or read from an internal object store. But those services live in a private cloud VPC, so your agent can't reach them without exposing them to the Internet or tunneling your entire laptop into the VPC.</p></li><li><p>Connecting deployed agents to private services. You're building agents into your product using the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> on Cloudflare Workers. Those agents need to call internal APIs, query databases, and access services that aren't on the public Internet. They need private access, but with scoped permissions, audit trails, and no credential leakage.</p></li></ol>
    <div>
      <h2>Cloudflare Mesh: one private network for users, nodes, and agents</h2>
      <a href="#cloudflare-mesh-one-private-network-for-users-nodes-and-agents">
        
      </a>
    </div>
    <p>Cloudflare Mesh is developer-friendly private networking. One lightweight connector, one binary, connects everything: your personal devices, your remote servers, your user endpoints. You don't need to install separate tools for each pattern. One connector on your network, and every access pattern works.</p><p>Once connected, devices in your private network can talk to each other over private IPs, routed through Cloudflare’s global network across 330+ cities, giving you better reliability and control over your network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ENfLb0kr1Zesh7PnuCvDK/830afd93d49b658023a9ece55e0e505b/BLOG-3215_3.gif" />
          </figure><p>Now, with Mesh, a single solution can solve all of the agent scenarios we mentioned above:</p><ul><li><p>With <b>Cloudflare One Client for iOS</b> on your phone, you can securely connect your mobile devices to your local Mac mini running OpenClaw via a Mesh private network.</p></li><li><p>With <b>Cloudflare One Client for macOS</b> on your laptop, you can connect your laptop to your private network so your coding agents can reach staging databases or APIs and query them.</p></li><li><p>With <b>Mesh nodes</b> on your Linux servers, you can connect VPCs in external clouds together, letting agents access resources and MCPs in external private networks.</p></li></ul><p>Because Mesh is powered by <a href="https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/"><u>Cloudflare One Client</u></a>, every connection inherits the security controls of the Cloudflare One platform. Gateway policies apply to Mesh traffic. Device posture checks validate connecting devices. DNS filtering catches suspicious lookups. You get this without additional configuration: the same policies that protect your human traffic protect your agent traffic.</p>
    <div>
      <h2>Choosing between Mesh and Tunnel</h2>
      <a href="#choosing-between-mesh-and-tunnel">
        
      </a>
    </div>
    <p>With the introduction of Mesh, you might ask: when should I use Mesh instead of Tunnel? Both connect external networks privately to Cloudflare, but they serve different purposes. <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/"><u>Cloudflare Tunnel</u></a> is the ideal solution for unidirectional traffic, where Cloudflare proxies the traffic from the edge to specific private services (like a web server or a database). </p><p>Cloudflare Mesh, on the other hand, provides a full bidirectional, many-to-many network. Every device and node on your Mesh can access one another using their private IPs. An application or agent running in your network can discover and access any other resource on the Mesh without each resource needing its own Tunnel. </p>
    <div>
      <h2>Using the power of Cloudflare’s network</h2>
      <a href="#using-the-power-of-cloudflares-network">
        
      </a>
    </div>
    <p>Cloudflare Mesh gives you the benefits of a mesh network (resiliency, high scalability, low latency, and high performance), but, by routing everything through Cloudflare, it resolves a key challenge of mesh networks: NAT traversal.</p><p>Most of the Internet is behind NAT (Network Address Translation). This mechanism allows an entire local network of devices to share a single public IP address by mapping traffic between public headers and private internal addresses. When two devices are behind NAT, direct connections can fail and traffic has to fall back to relay servers. If your relay infrastructure has limited points of presence, a meaningful fraction of your traffic hits those relays, adding latency and reducing reliability. And while it's possible to self-host your own relay servers to compensate, that means taking on the burden of managing additional infrastructure just to connect your existing network.</p><p>Cloudflare Mesh takes a different approach. All Mesh traffic routes through Cloudflare's global network, the same infrastructure that serves traffic for some of the largest websites on the Internet. For cross-region or multi-cloud traffic, this consistently beats public Internet routing. There's no degraded fallback path, because the Cloudflare edge is the path.</p><p>Routing through Cloudflare also means every packet passes through Cloudflare's security stack. This is the key advantage of building Mesh on the Cloudflare One platform: security isn't a separate product you bolt on later. And by leveraging this same global backbone, we can provide these core pillars to every team from day one:</p><p><b>50 nodes and 50 users free. </b>Your whole team and your whole staging environment on one private network, included with every Cloudflare account. </p><p><b>Global edge routing.</b> 330+ cities, optimized backbone routing. No relay servers with limited points of presence. No degraded fallback paths.</p><p><b>Security controls from day one.</b> Mesh runs on Cloudflare One. Gateway policies, DNS filtering, DLP, traffic inspection, and device posture checks are all available on the same platform. Start with simple private connectivity. Turn on Gateway policies when you need traffic filtering. Enable Access for Infrastructure when you need session-level controls for SSH and RDP. Add DLP when you need to prevent sensitive data from leaving your network. Every capability is one toggle away.</p><p><b>High availability.</b> Create a Mesh node with high availability enabled and spin up multiple connectors using the same token in active-passive mode. They advertise the same IP routes, so if one goes down, traffic fails over automatically.</p>
    <div>
      <h2>Integrated with the Developer Platform with Workers VPC</h2>
      <a href="#integrated-with-the-developer-platform-with-workers-vpc">
        
      </a>
    </div>
    <p>Mesh connects your agents and resources across external clouds, but agents built on Workers with the Agents SDK need to reach those resources too. To enable this, we’ve extended <a href="https://blog.cloudflare.com/workers-vpc-open-beta/"><u>Workers VPC</u></a> to make your entire Mesh network accessible to Workers and Durable Objects.</p><p>That means that you can connect to your Cloudflare Mesh network from Workers, making the entire network accessible from a single binding’s <code>fetch()</code> call. This complements Workers VPC’s existing support for Cloudflare Tunnel, giving you more choice over how you want to secure your networks. Now, you can specify entire networks that you want to connect to in your <code>wrangler.jsonc</code> file. To bind to your Mesh network, use the <code>cf1:network</code> reserved keyword that binds to the Mesh network of your account:</p>
            <pre><code>"vpc_networks": [
  { "binding": "MESH", "network_id": "cf1:network", "remote": true },
  { "binding": "AWS_VPC", "tunnel_id": "350fd307-...", "remote": true }
]
</code></pre>
            <p>Then, you can use it within your Worker or agent code:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    // Reach any internal host on your Mesh, no pre-registration required
    const apiResponse = await env.MESH.fetch("http://10.0.1.50/api/data");

    // Internal hostname resolved via tunnel's private DNS resolver
    const dbResponse = await env.AWS_VPC.fetch("http://internal-db.corp.local:5432");

    return new Response(await apiResponse.text());
  },
};
</code></pre>
            <p>By connecting the Developer Platform to your Mesh networks, you can build Workers that have secure access to your private databases, internal APIs, and MCPs, allowing you to build cross-cloud agents and MCPs that provide agentic capabilities to your app. It also opens up a world where agents can autonomously observe your entire stack end-to-end, cross-reference logs, and suggest optimizations in real time.</p>
    <div>
      <h2>How it all fits together</h2>
      <a href="#how-it-all-fits-together">
        
      </a>
    </div>
    <p>Together, Cloudflare Mesh, Workers VPC, and the Agents SDK provide a unified private network for your agents that spans both Cloudflare and your external clouds. We’ve merged connectivity and compute so your agents can securely reach the resources they need, wherever they live, across the globe.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tYFghWEN1H3jIwV0Kejc9/04dcaedda5692a9726a7081d19bc413c/BLOG-3215_4.png" />
          </figure><p><b>Mesh nodes</b> are your servers, VMs, and containers. They run a headless version of Cloudflare One Client and get a Mesh IP. Services talk to services over private IPs, bidirectionally, routed through Cloudflare's edge. </p><p><b>Devices</b> are your laptops and phones. They run the Cloudflare One Client and reach Mesh nodes directly: SSH, database queries, API calls, all over private IPs. Your local coding agents use this connection to access private resources. </p><p><b>Agents on Workers</b> reach private services through Workers VPC Network bindings. They get scoped access to entire networks, mediated by MCP. The network enforces what the agent can reach. The MCP server enforces what the agent can do. </p>
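            <p>To sketch what that mediation looks like in practice, here's a minimal, illustrative MCP server built with the Agents SDK: it exposes a single read-only tool, and the underlying request travels over the MESH binding from earlier. The class name, tool name, and internal address are placeholders, not a prescribed setup:</p>
            <pre><code>import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Hypothetical MCP server fronting a private deployments API.
// Env is assumed to declare the MESH binding from the wrangler
// config shown earlier.
export class DeploymentsMCP extends McpAgent&lt;Env&gt; {
  server = new McpServer({ name: "deployments", version: "1.0.0" });

  async init() {
    // The only capability exposed: reading deployment status.
    // The network binding decides where requests can go; this
    // tool decides what the agent can do.
    this.server.tool("get_deployment", { id: z.string() }, async ({ id }) =&gt; {
      // Illustrative internal address on the Mesh network.
      const res = await this.env.MESH.fetch(`http://10.0.1.50/api/deployments/${id}`);
      return { content: [{ type: "text", text: await res.text() }] };
    });
  }
}
</code></pre>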
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The current version of Mesh provides the foundation for secure, unified connectivity. But as agentic workflows become more complex, we’re focused on moving beyond simple connectivity toward a network that is more intuitive to manage and more granularly aware of who, or what, is talking to your services. Here is what we are building for the rest of the year.</p>
    <div>
      <h4>Hostname routing</h4>
      <a href="#hostname-routing">
        
      </a>
    </div>
    <p>We're extending Cloudflare Tunnel's <a href="https://blog.cloudflare.com/tunnel-hostname-routing/"><u>hostname routing</u></a> to Mesh this summer. Your Mesh nodes will be able to attract traffic for private hostnames like <code>wiki.local</code> or <code>api.staging.internal</code>, without you having to manage IP lists or worry about how those hostnames resolve on the Cloudflare edge. Route traffic to services by name, not by IP. If your infrastructure uses dynamic IPs, auto-scaling groups, or ephemeral containers, this removes an entire class of routing headaches.</p>
    <div>
      <h4>Mesh DNS</h4>
      <a href="#mesh-dns">
        
      </a>
    </div>
    <p>Today, you reach Mesh nodes by their Mesh IPs: <code>ssh 100.64.0.5</code>. That works, but it's not how you think about your infrastructure. You think in names: <code>postgres-staging</code>, <code>api-prod</code>, <code>nikitas-openclaw</code>.</p><p>Later this year we're building Mesh DNS so that every node and device that joins your Mesh automatically gets a routable internal hostname. No DNS configuration or manual records. Add a node named <code>postgres-staging</code>, and <code>postgres-staging.mesh</code> resolves to the right Mesh IP from any device on your Mesh.</p><p>Combined with hostname routing, you'll be able to <code>ssh postgres-staging.mesh</code> or <code>curl http://api-prod.mesh:3000/health</code> without ever knowing or managing an IP address.</p>
    <div>
      <h4>Identity-aware routing</h4>
      <a href="#identity-aware-routing">
        
      </a>
    </div>
    <p>Today, Mesh nodes authenticate to the Cloudflare edge, but they share an identity at the network layer. Devices authenticate with user identity via the Cloudflare One Client, but nodes don't yet carry distinct, routable identities that Gateway policies can differentiate.</p><p>We want to change that. The goal is identity-aware routing for Mesh, where each node, each device, and eventually each agent gets a distinct identity that policies can evaluate. Instead of writing rules based on IP ranges, you write rules based on who or what is connecting.</p><p>This matters most for agents. Today, when an agent running on Workers calls a tool through a VPC binding, the target service sees a Worker making a request. It doesn't know which agent is calling, who authorized it, or what scope was granted. On the Mesh side, when a local coding agent on your laptop reaches a staging service, Gateway sees your device identity but not the agent's.</p><p>We're working toward a model where agents carry their own identity through the network:</p><ul><li><p>Principal / Sponsor: The human who authorized the action (Nikita from the platform team)</p></li><li><p>Agent: The AI system performing it (the deployment assistant, session #abc123)</p></li><li><p>Scope: What the agent is allowed to do (read deployments, trigger rollbacks, nothing else)</p></li></ul><p>This would let you write policies like: reads from Nikita's agents are allowed, but writes require Nikita directly. Agent traffic can be filtered independently from human traffic. An agent's network access can be revoked without touching Nikita's.</p><p>The infrastructure for this is in place. Mesh nodes provision with per-node tokens, devices authenticate with per-user identity, and Workers VPC bindings scope per-service access. The missing piece is making these identities visible to the policy layer so Gateway can make routing and access decisions based on them. That's what we're building.</p>
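    <p>None of this has shipped yet, but the shape of the model is easy to sketch. The TypeScript below is purely illustrative (the field names, example principal, and policy helper are ours, not a committed API or policy syntax); it only shows how the three identities compose into a policy decision:</p>
            <pre><code>// Purely illustrative: a sketch of the identity triple described
// above, not a committed Cloudflare API or policy syntax.
type Caller =
  | { kind: "human"; principal: string }
  | { kind: "agent"; principal: string; agent: string;
      session: string; scope: ("read" | "write")[] };

// "Reads from Nikita's agents are allowed, but writes require
// Nikita directly" becomes a simple predicate:
function allow(caller: Caller, action: "read" | "write"): boolean {
  if (caller.principal !== "nikita@example.com") return false;
  if (caller.kind === "human") return true; // Nikita directly
  return action === "read" &amp;&amp; caller.scope.includes("read");
}
</code></pre>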
    <div>
      <h4>Mesh in containers</h4>
      <a href="#mesh-in-containers">
        
      </a>
    </div>
    <p>Today, Mesh nodes run on VMs and bare-metal Linux servers. But modern infrastructure increasingly runs in containers: Kubernetes pods, Docker Compose stacks, ephemeral CI/CD runners. We're building a Mesh Docker image that lets you add a Mesh node to any containerized environment.</p><p>This means you'll be able to include a Mesh sidecar in your Docker Compose stack and give every service in that stack private network access. A microservice running in a container in your staging cluster could reach a database in your production VPC over Mesh, without either service needing a public endpoint. </p><p>It is also useful for CI/CD pipelines that need to access private infrastructure during builds and tests: your GitHub Actions runner pulls the Mesh container image, joins your network, runs integration tests against your staging environment, and tears down. All without VPN credentials to manage or persistent tunnels to maintain: the node disappears when the container exits.</p><p>We expect the Mesh Docker image to be available later this year.</p>
    <div>
      <h2>Get started</h2>
      <a href="#get-started">
        
      </a>
    </div>
    <p>While we continue to evolve these identity and routing capabilities, the foundation for secure, unified networking is available today. You can start bridging your clouds and securing your agents in just a few minutes.</p><p><b>Get started with Cloudflare Mesh</b>: Head to Networking &gt; Mesh in the <a href="https://dash.cloudflare.com/?to=/:account/mesh"><u>Cloudflare dashboard</u></a>. Free for up to 50 nodes and 50 users.</p><p><b>Build agents with Agents SDK and Workers VPC: </b>Install the Agents SDK (<code>npm i agents</code>), follow the <a href="https://developers.cloudflare.com/workers-vpc/get-started/"><u>Workers VPC quickstart</u></a>, and build a <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP server</u></a> with private backend access.</p><p><b>Already on Cloudflare One?</b> Mesh works with your existing setup. Your Gateway policies, device posture checks, and access rules apply to Mesh traffic automatically. See <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-mesh/"><u>the Mesh documentation</u></a> to add your first node.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4KMBRDqq5wBjdLHyhM3LNW/98064f81453b6ff29d0bf4b86cfc251a/image1.png" />
          </figure>
<p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[SASE]]></category>
            <guid isPermaLink="false">iAIJH3zvDaXoiDMugrwOv</guid>
            <dc:creator>Nikita Cano</dc:creator>
            <dc:creator>Thomas Gauvin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Agents Week]]></title>
            <link>https://blog.cloudflare.com/welcome-to-agents-week/</link>
            <pubDate>Sun, 12 Apr 2026 17:01:05 GMT</pubDate>
            <description><![CDATA[ Cloudflare's mission has always been to help build a better Internet. Sometimes that means building for the Internet as it exists. Sometimes it means building for the Internet as it's about to become. 

This week, we're kicking off Agents Week, dedicated to what comes next.
 ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare's mission has always been to help build a better Internet. Sometimes that means building for the Internet as it exists. Sometimes it means building for the Internet as it's about to become. </p><p>Today, we're kicking off Agents Week, dedicated to building the Internet for what comes next.</p>
    <div>
      <h2>The Internet wasn't built for the age of AI. Neither was the cloud.</h2>
      <a href="#the-internet-wasnt-built-for-the-age-of-ai-neither-was-the-cloud">
        
      </a>
    </div>
    <p>The cloud, as we know it, was a product of the last major technological paradigm shift: smartphones.</p><p>When smartphones put the Internet in everyone's pocket, they didn't just add users — they changed the nature of what it meant to be online. Always connected, always expecting an instant response. Applications had to handle an order of magnitude more users, and the infrastructure powering them had to evolve.</p><p>The approach the industry converged on was straightforward: more users, more copies of your application. As applications grew in complexity, teams broke them into smaller pieces — microservices — so each team could control its own destiny. But the core principle stayed the same: a finite number of applications, each serving many users. Scale meant more copies.</p><p>Kubernetes and containers became the default. They made it easy to spin up instances, load balance, and tear down what you didn't need. Under this one-to-many model, a single instance could serve many users, and even as user counts grew into the billions, the number of things you had to manage stayed finite.</p><p>Agents break this.</p>
    <div>
      <h2>One user, one agent, one task</h2>
      <a href="#one-user-one-agent-one-task">
        
      </a>
    </div>
    <p>Unlike every application that came before them, agents are one-to-one. Each agent is a unique instance. Serving one user, running one task. Where a traditional application follows the same execution path regardless of who's using it, an agent requires its own execution environment: one where the LLM dictates the code path, calls tools dynamically, adjusts its approach, and persists until the task is done.</p><p>Think of it as the difference between a restaurant and a personal chef. A restaurant has a menu — a fixed set of options — and a kitchen optimized to churn them out at volume. That's most applications today. An agent is more like a personal chef who asks: what do you want to eat? They might need entirely different ingredients, utensils, or techniques each time. You can't run a personal-chef service out of the same kitchen setup you'd use for a restaurant.</p><p>Over the past year, we've seen agents take off, with coding agents leading the way — not surprisingly, since developers tend to be early adopters. The way most coding agents work today is by spinning up a container to give the LLM what it needs: a filesystem, git, bash, and the ability to run arbitrary binaries.</p><p>But coding agents are just the beginning. Tools like Claude Cowork are already making agents accessible to less technical users. Once agents move beyond developers and into the hands of everyone — administrative assistants, research analysts, customer service reps, personal planners — the scale math gets sobering fast.</p>
    <div>
      <h2>The math on scaling agents to the masses</h2>
      <a href="#the-math-on-scaling-agents-to-the-masses">
        
      </a>
    </div>
    <p>If the more than 100 million knowledge workers in the US each used an agentic assistant at ~15% concurrency, you'd need capacity for roughly 15 million simultaneous sessions. At 25–50 users per CPU, that's somewhere between 300K and 600K server CPUs — just for the US, with one agent per person.</p><p>Now picture each person running several agents in parallel. Now picture the rest of the world with more than 1 billion knowledge workers. We're not a little short on compute. We're orders of magnitude away.</p><p>So how do we close that gap?</p>
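    <p>For reference, here's that back-of-the-envelope arithmetic spelled out:</p>
            <pre><code>100,000,000 workers × 15% concurrency = 15,000,000 concurrent sessions
15,000,000 sessions ÷ 50 sessions per CPU = 300,000 CPUs (best case)
15,000,000 sessions ÷ 25 sessions per CPU = 600,000 CPUs (worst case)
</code></pre>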
    <div>
      <h2>Infrastructure built for agents</h2>
      <a href="#infrastructure-built-for-agents">
        
      </a>
    </div>
    <p>Eight years ago, we launched <a href="https://workers.cloudflare.com/"><u>Workers</u></a> — the beginning of our developer platform, and a bet on containerless, serverless compute. The motivation at the time was practical: we needed lightweight compute without cold-starts for customers who depended on Cloudflare for speed. Built on V8 isolates rather than containers, Workers turned out to be an order of magnitude more efficient — faster to start, cheaper to run, and natively suited to the "spin up, execute, tear down" pattern.</p><p>What we didn't anticipate was how well this model would map to the age of agents.</p><p>Containers give every agent a full commercial kitchen: bolted-down appliances, walk-in fridges, the works, whether the agent needs them or not. Isolates, on the other hand, give the personal chef exactly the counter space, the burner, and the knife they need for this particular meal. Provisioned in milliseconds. Cleaned up the moment the dish is served.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UM1NukO0Ho4lQAYk5CoU8/30e1376c4fe61e86204a0de92ae4612b/BLOG-3238_2.png" />
          </figure><p>In a world where we need to support not thousands of long-running applications, but billions of ephemeral, single-purpose execution environments — isolates are the right primitive. </p><p>Each one starts in milliseconds. Each one is securely sandboxed. And you can run orders of magnitude more of them on the same hardware compared to containers.</p><p>Just a few weeks ago, we took this further with the <a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers open beta</u></a>: execution environments spun up at runtime, on demand. An isolate takes a few milliseconds to start and uses a few megabytes of memory. That's roughly 100x faster and up to 100x more memory-efficient than a container. </p><p>You can start a new one for every single request, run a snippet of code, and throw it away — at a scale of millions per second.</p><p>For agents to move beyond early adopters and into everyone's hands, they also have to be affordable. Running each agent in its own container is expensive enough that agentic tools today are mostly limited to coding assistants for engineers who can justify the cost. <b>Isolates, by running orders of magnitude more efficiently, are what make per-unit economics viable at the scale agents require.</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6iiD7zACxMNEDdvJWM6zbo/2261c320f3be3cd2fa6ef34593a1ee09/BLOG-3238_3.png" />
          </figure>
    <div>
      <h2>The horseless carriage phase</h2>
      <a href="#the-horseless-carriage-phase">
        
      </a>
    </div>
    <p>While it’s critical to build the right foundation for the future, we’re not there yet. And every paradigm shift has a period where we try to make the new thing work within the old model. The first cars were called "horseless carriages." The first websites were digital brochures. The first mobile apps were shrunken desktop UIs. We're in that phase now with agents.</p><p>You can see it everywhere. </p><p>We're giving agents headless browsers to navigate websites designed for human eyes, when what they need are structured protocols like MCP to discover and invoke services directly. </p><p>Many early MCP servers are thin wrappers around existing REST APIs — same CRUD operations, new protocol — when LLMs are actually far better at writing code than making sequential tool calls. </p><p>We're using CAPTCHAs and behavioral fingerprinting to verify the thing on the other end of a request, when increasingly that thing is an agent acting on someone's behalf — and the right question isn't "are you human?" but "which agent are you, who authorized you, and what are you allowed to do?"</p><p>We're spinning up full containers for agents that just need to make a few API calls and return a result.</p><p>These are just a few examples, but none of this is surprising. It's what transitions look like.</p>
    <div>
      <h2>Building for both</h2>
      <a href="#building-for-both">
        
      </a>
    </div>
    <p>The Internet is always somewhere between two eras. IPv6 is objectively better than IPv4, but dropping IPv4 support would break half the Internet. HTTP/2 and HTTP/3 coexist. TLS 1.2 still hasn't fully given way to 1.3. The better technology exists, the old technology persists, and the job of infrastructure is to bridge both.</p><p>Cloudflare has always been in the business of bridging these transitions. The shift to agents is no different.</p><p>Coding agents genuinely need containers — a filesystem, git, bash, arbitrary binary execution. That's not going away. This week, our container-based sandbox environments are going GA, because we're committed to making them the best they can be. We're going deeper on browser rendering for agents, because there will be a long tail of services that don't yet speak MCP, and agents will still need to interact with them. These aren't stopgaps — they're part of a complete platform.</p><p>But we're also building what comes next: the isolates, the protocols, and the identity models that agents actually need. Our job is to make sure you don't have to choose between what works today and what's right for tomorrow.</p>
    <div>
      <h2>Security in the model, not around it</h2>
      <a href="#security-in-the-model-not-around-it">
        
      </a>
    </div>
    <p>If agents are going to handle our professional and personal tasks — reading our email, operating on our code, interacting with our financial services — then security has to be built into the execution model, not layered on after the fact.</p><p>CISOs have been the first to confront this. The productivity gains from putting agents in everyone's hands are real, but today, most agent deployments are fraught with risk: prompt injection, data exfiltration, unauthorized API access, opaque tool usage. </p><p>A developer's vibe-coding agent needs access to repositories and deployment pipelines. An enterprise's customer service agent needs access to internal APIs and user data. In both cases, securing the environment today means stitching together credentials, network policies, and access controls that were never designed for autonomous software.</p><p>Cloudflare has been building two platforms in parallel: our developer platform, for people who build applications, and our zero trust platform, for organizations that need to secure access. For a while, these served distinct audiences. </p><p>But "how do I build this agent?" and "how do I make sure it's safe?" are increasingly the same question. We're bringing these platforms together so that all of this is native to how agents run, not a separate layer you bolt on.</p>
    <div>
      <h2>Agents that follow the rules</h2>
      <a href="#agents-that-follow-the-rules">
        
      </a>
    </div>
    <p>There's another dimension to the agent era that goes beyond compute and security: economics and governance.</p><p>When agents interact with the Internet on our behalf — reading articles, consuming APIs, accessing services — there needs to be a way for the people and organizations who create that content and run those services to set terms and get paid. Today, the web's economic model is built around human attention: ads, paywalls, subscriptions. </p><p>Agents don't have attention (well, not that <a href="https://arxiv.org/abs/1706.03762"><u>kind of attention</u></a>). They don't see ads. They don't click through cookie banners.</p><p>If we want an Internet where agents can operate freely <i>and</i> where publishers, content creators, and service providers are fairly compensated, we need new infrastructure for it. We’re building tools that make it easy for publishers and content owners to set and enforce policies for how agents interact with their content.</p><p>Building a better Internet has always meant making sure it works for everyone — not just the people building the technology, but the people whose work and creativity make the Internet worth using. That doesn't change in the age of agents. It becomes more important.</p>
    <div>
      <h2>The platform for developers and agents</h2>
      <a href="#the-platform-for-developers-and-agents">
        
      </a>
    </div>
    <p>Our vision for the developer platform has always been to provide a comprehensive platform that just works: from experiment, to MVP, to scaling to millions of users. But providing the primitives is only part of the equation. A great platform also has to think about how everything works together, and how it integrates into your development flow.</p><p>That job is evolving. It used to be purely about developer experience, making it easy for humans to build, test, and ship. Increasingly, it's also about helping agents help humans, and making the platform work not just for the people building agents, but for the agents themselves. Can an agent find the most up-to-date best practices? How easily can it discover and invoke the tools and CLIs it needs? How seamlessly can it move from writing code to deploying it?</p><p>This week, we're shipping improvements across both dimensions — making Cloudflare better for the humans building on it and for the agents running on it.</p>
    <div>
      <h2>Building for the future is a team sport</h2>
      <a href="#building-for-the-future-is-a-team-sport">
        
      </a>
    </div>
    <p>Building for the future is not something we can do alone. Every major Internet transition — from HTTP/1.1 to HTTP/2 and HTTP/3, from TLS 1.2 to 1.3 — has required the industry to converge on shared standards. The shift to agents will be no different.</p><p>Cloudflare has a long history of contributing to and helping push forward the standards that make the Internet work. We've been <a href="https://blog.cloudflare.com/tag/ietf/"><u>deeply involved in the IETF</u></a> for over a decade, helping develop and deploy protocols like QUIC, TLS 1.3, and Encrypted Client Hello. We were a founding member of WinterTC, the ECMA technical committee for JavaScript runtime interoperability. We open-sourced the Workers runtime itself, because we believe the foundation should be open.</p><p>We're bringing the same approach to the agentic era. We're excited to be part of the Linux Foundation and AAIF, and to help support and push forward standards like MCP that will be foundational for the agentic future. Since Anthropic introduced MCP, we've worked closely with them to build the infrastructure for remote MCP servers, open-sourced our own implementations, and invested in making the protocol practical at scale. </p><p>Last year, alongside Coinbase, we <a href="https://blog.cloudflare.com/x402/"><u>co-founded the x402 Foundation</u></a>, an open, neutral standard that revives the long-dormant HTTP 402 status code to give agents a native way to pay for the services and content they consume. </p><p>Agent identity, authorization, payment, safety: these all need open standards that no single company can define alone.</p>
    <div>
      <h2>Stay tuned</h2>
      <a href="#stay-tuned">
        
      </a>
    </div>
    <p>This week, we're making announcements across every dimension of the agent stack: compute, connectivity, security, identity, economics, and developer experience.</p><p>The Internet wasn't built for AI. The cloud wasn't built for agents. But Cloudflare has always been about helping build a better Internet — and what "better" means changes with each era. This is the era of agents. This week, <a href="https://blog.cloudflare.com/"><u>follow along</u></a> and we'll show you what we're building for it.</p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">4dZj0C0XnS9BJQnxy2QkzY</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Dane Knecht</dc:creator>
        </item>
        <item>
            <title><![CDATA[500 Tbps of capacity: 16 years of scaling our global network]]></title>
            <link>https://blog.cloudflare.com/500-tbps-of-capacity/</link>
            <pubDate>Fri, 10 Apr 2026 18:00:05 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s global network has officially crossed 500 Tbps of external capacity, enough to route more than 20% of the web and absorb the largest DDoS attacks ever recorded. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>Cloudflare’s global network and backbone in 2026.</i></sup> </p><p>Cloudflare's network recently passed a major milestone: we crossed 500 terabits per second (Tbps) of external capacity.</p><p>When we say 500 Tbps, we mean total provisioned external interconnection capacity: the sum of every port facing a transit provider, private peering partner, Internet exchange, or <a href="https://blog.cloudflare.com/announcing-express-cni/"><u>Cloudflare Network Interconnect</u></a> (CNI) port across all 330+ cities. This is not peak traffic. On any given day, our peak utilization is a fraction of that number. (The rest is our DDoS budget.)</p><p>It’s a long way from where we started. In 2010, we launched from a small office above a nail salon in Palo Alto, with a single transit provider and a reverse proxy you could set up by <a href="https://blog.cloudflare.com/whats-the-story-behind-the-names-of-cloudflares-name-servers/"><u>changing two nameservers</u></a>.</p>
    <div>
      <h3>The early days of transit and peering</h3>
      <a href="#the-early-days-of-transit-and-peering">
        
      </a>
    </div>
    <p>Our first transit provider was nLayer Communications, a network most people now know as GTT. nLayer gave us our first capacity and our first hands-on experience as a company with peering relationships and the careful balance between cost and performance.</p><p>From there, we grew <a href="https://blog.cloudflare.com/and-then-there-were-threecloudflares-new-data/"><u>city</u></a> by <a href="https://blog.cloudflare.com/luxembourg-chisinau/"><u>city</u></a>: Chicago, Ashburn, San Jose, Amsterdam, Tokyo. Each new data center meant negotiating colocation contracts, pulling fiber, racking servers, and establishing peering through <a href="https://blog.cloudflare.com/think-global-peer-local-peer-with-cloudflare-at-100-internet-exchange-points/"><u>Internet exchanges</u></a>. The Internet isn't actually a cloud, of course. It is a collection of specific rooms full of cables, and we spent years learning the nuances of every one of them.</p><p>Not every city was a straightforward deployment: we dealt with missing hardware, customs strikes, and even <a href="https://blog.cloudflare.com/stories-from-our-recent-global-data-center-upgrade/"><u>dental floss</u></a>. In 2018, we opened in 31 cities in a span of 24 days: from <a href="https://blog.cloudflare.com/kathmandu/"><u>Kathmandu</u></a> and <a href="https://blog.cloudflare.com/baghdad/"><u>Baghdad</u></a> to <a href="https://blog.cloudflare.com/reykjavik-cloudflares-northernmost-location/"><u>Reykjavík</u></a> and <a href="https://blog.cloudflare.com/luxembourg-chisinau/"><u>Chișinău</u></a>. When we opened our 127th data center in <a href="https://blog.cloudflare.com/macau/"><u>Macau</u></a>, we were protecting 7 million Internet properties. Today, with data centers in 330+ cities, we protect more than 20% of the web.</p>
    <div>
      <h3>When the network became the security layer </h3>
      <a href="#when-the-network-became-the-security-layer">
        
      </a>
    </div>
    <p>As our footprint grew, customers asked for more than just website caching. They needed to protect employees, replace aging Multiprotocol Label Switching (<a href="https://www.cloudflare.com/learning/network-layer/what-is-mpls/"><u>MPLS</u></a>) circuits, and <a href="https://blog.cloudflare.com/mpls-to-zerotrust/"><u>secure entire enterprise networks</u></a>. Instead of traditional appliances, <a href="https://blog.cloudflare.com/magic-transit-network-functions/"><u>we built systems</u></a> to establish secure tunnels to private subnets and advertise enterprise IP space directly from our global network via BGP.</p><p>The scale of threats grew in parallel. In 2025, we mitigated a 31.4 Tbps <a href="https://blog.cloudflare.com/ddos-threat-report-2025-q4/"><u>DDoS attack</u></a> lasting 35 seconds. The source was the Aisuru-Kimwolf botnet, including many infected Android TVs. It was one of over 5,000 attacks we blocked that day. No engineer was paged.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4TRCTdrET5JTpz0BaxSdF4/4623f0953417f0998f6810dd048773ff/BLOG-3267_2.png" />
          </figure><p>A decade ago, an attack of that magnitude would have required nation-state resources to counter. Today, our network handles it in seconds without human intervention. That is what operating at a 500 Tbps scale requires: moving the intelligence to every server in our network so the network can <a href="https://blog.cloudflare.com/deep-dive-cloudflare-autonomous-edge-ddos-protection/"><u>defend itself</u></a>.</p>
    <div>
      <h3>How our network responds to an attack</h3>
      <a href="#how-our-network-responds-to-an-attack">
        
      </a>
    </div>
    <p>Here is what actually happens when an attack hits our network. Packets arrive at the network interface card (NIC) and immediately enter an eXpress Data Path (<a href="https://en.wikipedia.org/wiki/Express_Data_Path"><u>XDP</u></a>) program chain managed by <i>xdpd</i>, running in driver mode. Among the first programs in that chain is <i>l4drop</i>, which evaluates each packet against mitigation rules in extended Berkeley Packet Filter (eBPF). Those rules are generated by <i>dosd</i>, our denial of service daemon, which runs on every server in our fleet. Each <i>dosd</i> instance samples incoming traffic, builds a table of the heaviest hitters it sees, and broadcasts that table to every other instance in the colo. The result is a shared colo-wide view of traffic, and because every server works from the same data, they reach the same mitigation decision.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tGvgIFrQkO0BqATmbWVst/c0b8dff1379a9b1437ec784f10876a2c/BLOG-3267_3.png" />
          </figure><p>When <i>dosd</i> detects an attack pattern, the resulting rule is applied locally via <i>l4drop</i> and propagates globally via Quicksilver, our distributed key-value (KV) store, reaching every server in every data center within seconds. Only after surviving <i>l4drop</i> do packets reach Unimog, our Layer 4 (L4) load balancer, which distributes them across healthy servers in the data center. For Magic Transit customers routing enterprise network traffic through our edge, <i>flowtrackd</i> adds a further layer of stateful TCP inspection, tracking connection state and dropping packets that don't belong to legitimate flows.</p><p>The 31.4 Tbps attack we mitigated followed exactly this path. No traffic was backhauled to a centralized scrubbing center. No human intervened. Every server in the targeted data centers independently recognized the attack and began dropping malicious packets at line rate, before those packets consumed a single CPU cycle of application processing. The software is only half the story: none of it works if the ports aren't there to absorb the traffic in the first place.</p>
    <div>
      <h3>A distributed developer platform</h3>
      <a href="#a-distributed-developer-platform">
        
      </a>
    </div>
    <p>Running code on every server in our network was a natural consequence of controlling the full stack. If we already ran eBPF programs on every machine to drop attack traffic, we could run customer application code there too. That insight became <a href="https://blog.cloudflare.com/code-everywhere-cloudflare-workers/"><u>Workers</u></a>, and later <a href="https://blog.cloudflare.com/introducing-workers-kv/"><u>KV</u></a> and <a href="https://blog.cloudflare.com/introducing-workers-durable-objects/"><u>Durable Objects</u></a>.</p><p>Our developer platform runs in every city we operate in, not in a handful of cloud regions. In 2025, we added <a href="https://blog.cloudflare.com/cloudflare-containers-coming-2025/"><u>Containers</u></a> to Workers, so heavier workloads can run at the edge too. V8 isolates and custom filesystem layers minimize cold starts. Your code runs where your users are, on the same servers that drop attack traffic at line rate via l4drop. Attack traffic is dropped before it reaches the network stack. Your application never sees it.</p>
    <div>
      <h3>Forward-looking protocols: IPv6, RPKI, ASPA</h3>
      <a href="#forward-looking-protocols-ipv6-rpki-aspa">
        
      </a>
    </div>
    <p>We were early adopters of <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>IPv6</u></a> and Resource Public Key Infrastructure (<a href="https://blog.cloudflare.com/rpki/"><u>RPKI</u></a>). <a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024/"><u>BGP hijacks</u></a> cause real outages and security breaches. RPKI allows us to drop invalid routes from peers, ensuring traffic goes where it is supposed to. We sign Route Origin Authorizations (ROAs) for our prefixes and enforce Route Origin Validation on ingress. We reject RPKI-invalid routes, even when that occasionally breaks reachability to networks with misconfigured ROAs.</p><p>Autonomous System Provider Authorization (<a href="https://blog.cloudflare.com/aspa-secure-internet/"><u>ASPA</u></a>) is next. RPKI validates who owns a prefix. ASPA validates the path it took to get here. RPKI is a passport check at the destination, confirming the right owner, while ASPA is a flight manifest check: it verifies every network the traffic passed through. A route leak is like a passenger who boarded in the wrong city; RPKI would not catch it, but ASPA will.</p><p>Current ecosystem adoption for ASPA looks like RPKI did in 2015. We were one of the first networks to deploy RPKI at scale, and today, <a href="https://radar.cloudflare.com/routing"><u>867,000 prefixes</u></a> in the global routing table have valid RPKI certificates, up from near zero a decade ago. At our scale, the protocols we choose have real consequences for the broader Internet. We push for adoption early because waiting means more hijacks and more leaks in the meantime.</p>
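    <p>For intuition into what enforcing Route Origin Validation actually involves, here's a small TypeScript sketch of RFC 6811-style origin validation (IPv4 only; the type and function names are ours, and this is not the code our edge runs):</p>
            <pre><code>// Illustrative sketch of RFC 6811 route origin validation.
// IPv4 only; type and function names are ours.
interface Roa   { prefix: string; maxLength: number; asn: number }
interface Route { prefix: string; length: number; originAsn: number }

type Validity = "valid" | "invalid" | "not-found";

// Convert a dotted-quad IPv4 address to a 32-character bit string.
function toBits(ip: string): string {
  return ip.split(".")
    .map(o =&gt; parseInt(o, 10).toString(2).padStart(8, "0"))
    .join("");
}

// True if the ROA's prefix contains the announced route.
function covers(roa: Roa, route: Route): boolean {
  const [roaIp, lenStr] = roa.prefix.split("/");
  const len = parseInt(lenStr, 10);
  return route.length &gt;= len &amp;&amp;
    toBits(route.prefix).slice(0, len) === toBits(roaIp).slice(0, len);
}

function validate(route: Route, roas: Roa[]): Validity {
  const covering = roas.filter(r =&gt; covers(r, route));
  if (covering.length === 0) return "not-found"; // no ROA covers this prefix
  const authorized = covering.some(
    r =&gt; r.asn === route.originAsn &amp;&amp; route.length &lt;= r.maxLength
  );
  return authorized ? "valid" : "invalid"; // "invalid" routes get dropped
}
</code></pre>
            <p>A route whose state is <code>not-found</code> is still accepted, since much of the routing table remains unsigned; only <code>invalid</code> routes are dropped.</p>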
    <div>
      <h3>AI agents and the evolving Internet</h3>
      <a href="#ai-agents-and-the-evolving-internet">
        
      </a>
    </div>
    <p>AI has changed what it means to have a presence on the web. For most of the Internet’s history, traffic was human-generated, by people clicking links in browsers. Today, AI crawlers, model training pipelines, and autonomous agents <a href="https://blog.cloudflare.com/radar-2025-year-in-review/"><u>now account</u></a> for more than 4% of all HTML requests across our network, comparable to Googlebot itself. "User action" crawling, where an AI visits a page because a human asked it a question, grew over 15x in 2025 alone.</p><p>AI crawlers behave differently than browsers at the infrastructure level. Browsers load a page and stop. Crawlers instead fetch every linked resource at maximum throughput with no pause between requests. At our scale, distinguishing legitimate AI crawling from actual attacks is a real engineering problem. Our <a href="https://blog.cloudflare.com/introducing-ai-crawl-control/"><u>detection systems</u></a> use a combination of verified bot IP ranges, TLS fingerprinting, behavioral analysis, and robots.txt compliance signals to make that distinction, and to give site owners the data they need to <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>decide which crawlers to allow</u></a>.</p><p>At the TLS layer, for example, a legitimate browser presents a ClientHello with a predictable set of cipher suites, extensions, and ordering that matches its declared User-Agent. A crawler spoofing that User-Agent but using a stripped-down TLS library will present a different fingerprint, and that mismatch is one of the signals our systems use to classify the request before it reaches the origin.</p>
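    <p>Here's a sketch of how that mismatch check can look from a Worker's perspective. On zones with Bot Management enabled, the request exposes a JA3 fingerprint that can be compared against the fingerprints a declared browser actually produces. The allowlist and hash values below are placeholders, not real fingerprints:</p>
            <pre><code>// Hypothetical allowlist mapping browser families to JA3 hashes
// observed from real browsers. The hash values are placeholders.
const EXPECTED_JA3: Record&lt;string, string[]&gt; = {
  Chrome: ["placeholder-chrome-ja3-hash"],
  Firefox: ["placeholder-firefox-ja3-hash"],
};

export default {
  async fetch(request: Request): Promise&lt;Response&gt; {
    const ua = request.headers.get("User-Agent") ?? "";
    // ja3Hash is populated on zones with Bot Management enabled.
    const ja3 = (request.cf as any)?.botManagement?.ja3Hash as string | undefined;

    const family = ua.includes("Firefox") ? "Firefox"
                 : ua.includes("Chrome")  ? "Chrome"
                 : null;

    // A declared browser User-Agent with a TLS fingerprint that no
    // real copy of that browser produces is a strong bot signal.
    if (family &amp;&amp; ja3 &amp;&amp; !EXPECTED_JA3[family].includes(ja3)) {
      return new Response("TLS fingerprint mismatch", { status: 403 });
    }
    return fetch(request);
  },
};
</code></pre>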
    <div>
      <h3>Help us build the next 500 Tbps</h3>
      <a href="#help-us-build-the-next-500-tbps">
        
      </a>
    </div>
    <p>What started above a nail salon in Palo Alto is now a 500 Tbps network in 330+ cities across 125+ countries, where every server runs our developer platform and security services, not just cache. That is sixteen years of architectural decisions compounding, and we owe it to the 13,000+ networks and partners who peer with us. We are not done.</p><p>If you are a network operator, peer with us. Our peering policy and interconnection details are on <a href="https://peeringdb.com/asn/13335"><u>PeeringDB</u></a>. If you are interested in embedding Cloudflare infrastructure directly within your network, reach out to our team at <a href="mailto:epp@cloudflare.com"><u>epp@cloudflare.com</u></a> to join the Edge Partner Program.</p> ]]></content:encoded>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Peering]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">14NmmkLAEg9hFIbB9bosqh</guid>
            <dc:creator>Tanner Ryan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Sandboxing AI agents, 100x faster]]></title>
            <link>https://blog.cloudflare.com/dynamic-workers/</link>
            <pubDate>Tue, 24 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’re introducing Dynamic Workers, which allow you to execute AI-generated code in secure, lightweight isolates. This approach is 100 times faster than traditional containers, enabling millisecond startup times for AI agent sandboxing. ]]></description>
            <content:encoded><![CDATA[ <p>Last September we introduced <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a>, the idea that agents should perform tasks not by making tool calls, but instead by writing code that calls APIs. We've shown that simply converting an MCP server into a TypeScript API can <a href="https://www.youtube.com/watch?v=L2j3tYTtJwk"><u>cut token usage by 81%</u></a>. We demonstrated that Code Mode can also operate <i>behind</i> an MCP server instead of in front of it, creating the new <a href="https://blog.cloudflare.com/code-mode-mcp/"><u>Cloudflare MCP server that exposes the entire Cloudflare API with just two tools and under 1,000 tokens</u></a>.</p><p>But if an agent (or an MCP server) is going to execute code generated on-the-fly by AI to perform tasks, that code needs to run somewhere, and that somewhere needs to be secure. You can't just <code>eval()</code> AI-generated code directly in your app: a malicious user could trivially prompt the AI to inject vulnerabilities.</p><p>You need a <b>sandbox</b>: a place to execute code that is isolated from your application and from the rest of the world, except for the specific capabilities the code is meant to access.</p><p>Sandboxing is a hot topic in the AI industry. For this task, most people are reaching for containers. Using a Linux-based container, you can start up any sort of code execution environment you want. Cloudflare even offers <a href="https://developers.cloudflare.com/containers/"><u>our container runtime</u></a> and <a href="https://developers.cloudflare.com/sandbox/"><u>our Sandbox SDK</u></a> for this purpose.</p><p>But containers are expensive and slow to start, taking hundreds of milliseconds to boot and hundreds of megabytes of memory to run. You probably need to keep them warm to avoid delays, and you may be tempted to reuse existing containers for multiple tasks, compromising security.</p><p><b>If we want to support consumer-scale agents, where every end user has an agent (or many!) and every agent writes code, containers are not enough. We need something lighter.</b></p><h6>And we have it.</h6>
    <div>
      <h2>Dynamic Worker Loader: a lean sandbox</h2>
      <a href="#dynamic-worker-loader-a-lean-sandbox">
        
      </a>
    </div>
    <p>Tucked into our Code Mode post in September was the announcement of a new, experimental feature: the Dynamic Worker Loader API. This API allows a Cloudflare Worker to instantiate a new Worker, in its own sandbox, with code specified at runtime, all on the fly.</p><p><b>Dynamic Worker Loader is now in open beta, available to all paid Workers users.</b></p><p><a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Read the docs for full details</u></a>, but here's what it looks like:</p>
            <pre><code>// Have your LLM generate code like this.
let agentCode: string = `
  export default {
    async myAgent(param, env, ctx) {
      // ...
    }
  }
`;

// Get RPC stubs representing APIs the agent should be able
// to access. (This can be any Workers RPC API you define.)
let chatRoomRpcStub = ...;

// Load a worker to run the code, using the worker loader
// binding.
let worker = env.LOADER.load({
  // Specify the code.
  compatibilityDate: "2026-03-01",
  mainModule: "agent.js",
  modules: { "agent.js": agentCode },

  // Give agent access to the chat room API.
  env: { CHAT_ROOM: chatRoomRpcStub },

  // Block internet access. (You can also intercept it.)
  globalOutbound: null,
});

// Call RPC methods exported by the agent code.
await worker.getEntrypoint().myAgent(param);
</code></pre>
            <p>That's it.</p>
    <div>
      <h3>100x faster</h3>
      <a href="#100x-faster">
        
      </a>
    </div>
    <p>Dynamic Workers use the same underlying sandboxing mechanism that the entire Cloudflare Workers platform has been built on since its launch, eight years ago: isolates. An isolate is an instance of the V8 JavaScript execution engine, the same engine used by Google Chrome. Isolates are <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/"><u>how Workers work</u></a>.</p><p>An isolate takes a few milliseconds to start and uses a few megabytes of memory. That's around 100x faster and 10x-100x more memory efficient than a typical container.</p><p><b>That means that if you want to start a new isolate for every user request, on-demand, to run one snippet of code, then throw it away, you can.</b></p>
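    <p>To make that concrete, here's a minimal sketch of a Worker that builds a fresh sandbox for every single request, using the loader API shown above. The assumption that the submitted snippet exports a <code>run()</code> method is ours:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env): Promise&lt;Response&gt; {
    // Illustrative: the snippet to execute arrives in the request
    // body and is assumed to export a default `run()` method.
    const snippet = await request.text();

    // A fresh isolate for this request alone: no warm pool, no reuse.
    const worker = env.LOADER.load({
      compatibilityDate: "2026-03-01",
      mainModule: "snippet.js",
      modules: { "snippet.js": snippet },
      env: {},              // grant no capabilities
      globalOutbound: null, // block all Internet access
    });

    const result = await worker.getEntrypoint().run();
    return Response.json({ result });
    // The isolate is thrown away when the request completes.
  },
};
</code></pre>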
    <div>
      <h3>Unlimited scalability</h3>
      <a href="#unlimited-scalability">
        
      </a>
    </div>
    <p>Many container-based sandbox providers impose limits on global concurrent sandboxes and rate of sandbox creation. Dynamic Worker Loader has no such limits. It doesn't need to, because it is simply an API to the same technology that has powered our platform all along, which has always allowed Workers to seamlessly scale to millions of requests per second.</p><p>Want to handle a million requests per second, where <i>every single request</i> loads a separate Dynamic Worker sandbox, all running concurrently? No problem!</p>
    <div>
      <h3>Zero latency</h3>
      <a href="#zero-latency">
        
      </a>
    </div>
    <p>One-off Dynamic Workers usually run on the same machine — the same thread, even — as the Worker that created them. No need to communicate around the world to find a warm sandbox. Isolates are so lightweight that we can just run them wherever the request landed. Dynamic Workers are supported in every one of Cloudflare's hundreds of locations around the world.</p>
    <div>
      <h3>It's all JavaScript</h3>
      <a href="#its-all-javascript">
        
      </a>
    </div>
    <p>The only catch, compared to containers, is that your agent needs to write JavaScript.</p><p>Technically, Workers (including dynamic ones) can use Python and WebAssembly, but for small snippets of code — like that written on-demand by an agent — JavaScript will load and run much faster.</p><p>We humans tend to have strong preferences on programming languages, and while many love JavaScript, others might prefer Python, Rust, or countless other languages.</p><p>But we aren't talking about humans here. We're talking about AI. AI will write any language you want it to. LLMs are experts in every major language. Their training data in JavaScript is immense.</p><p>JavaScript, by its nature on the web, is designed to be sandboxed. It is the correct language for the job.</p>
    <div>
      <h3>Tools defined in TypeScript</h3>
      <a href="#tools-defined-in-typescript">
        
      </a>
    </div>
    <p>If we want our agent to be able to do anything useful, it needs to talk to external APIs. How do we tell it about the APIs it has access to?</p><p>MCP defines schemas for flat tool calls, but not programming APIs. OpenAPI offers a way to express REST APIs, but it is verbose, both in the schema itself and the code you'd have to write to call it.</p><p>For APIs exposed to JavaScript, there is a single, obvious answer: TypeScript.</p><p>Agents know TypeScript. TypeScript is designed to be concise. With very few tokens, you can give your agent a precise understanding of your API.</p>
            <pre><code>// Interface to interact with a chat room.
interface ChatRoom {
  // Get the last `limit` messages of the chat log.
  getHistory(limit: number): Promise&lt;Message[]&gt;;

  // Subscribe to new messages. Dispose the returned object
  // to unsubscribe.
  subscribe(callback: (msg: Message) =&gt; void): Promise&lt;Disposable&gt;;

  // Post a message to chat.
  post(text: string): Promise&lt;void&gt;;
}

type Message = {
  author: string;
  time: Date;
  text: string;
}
</code></pre>
            <p>Compare this with the equivalent OpenAPI spec (which is so long you have to scroll to see it all):</p><pre>
openapi: 3.1.0
info:
  title: ChatRoom API
  description: &gt;
    Interface to interact with a chat room.
  version: 1.0.0

paths:
  /messages:
    get:
      operationId: getHistory
      summary: Get recent chat history
      description: Returns the last `limit` messages from the chat log, newest first.
      parameters:
        - name: limit
          in: query
          required: true
          schema:
            type: integer
            minimum: 1
      responses:
        "200":
          description: A list of messages.
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Message"

    post:
      operationId: postMessage
      summary: Post a message to the chat room
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required:
                - text
              properties:
                text:
                  type: string
      responses:
        "204":
          description: Message posted successfully.

  /messages/stream:
    get:
      operationId: subscribeMessages
      summary: Subscribe to new messages via SSE
      description: &gt;
        Opens a Server-Sent Events stream. Each event carries a JSON-encoded
        Message object. The client unsubscribes by closing the connection.
      responses:
        "200":
          description: An SSE stream of new messages.
          content:
            text/event-stream:
              schema:
                description: &gt;
                  Each SSE `data` field contains a JSON-encoded Message object.
                $ref: "#/components/schemas/Message"

components:
  schemas:
    Message:
      type: object
      required:
        - author
        - time
        - text
      properties:
        author:
          type: string
        time:
          type: string
          format: date-time
        text:
          type: string
</pre><p>We think the TypeScript API is better. It's fewer tokens and much easier to understand (for both agents and humans).  </p><p>Dynamic Worker Loader makes it easy to implement a TypeScript API like this in your own Worker and then pass it in to the Dynamic Worker either as a method parameter or in the env object. The Workers Runtime will automatically set up a <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><u>Cap'n Web RPC</u></a> bridge between the sandbox and your harness code, so that the agent can invoke your API across the security boundary without ever realizing that it isn't using a local library.</p><p>That means your agent can write code like this:</p>
            <pre><code>// Thinking: The user asked me to summarize recent chat messages from Alice.
// I will filter the recent message history in code so that I only have to
// read the relevant messages.
let history = await env.CHAT_ROOM.getHistory(1000);
return history.filter(msg =&gt; msg.author == "alice");
</code></pre>
            
    <div>
      <h3>HTTP filtering and credential injection</h3>
      <a href="#http-filtering-and-credential-injection">
        
      </a>
    </div>
    <p>If you prefer to give your agents HTTP APIs, that's fully supported. Using the <code>globalOutbound</code> option to the worker loader API, you can register a callback to be invoked on every HTTP request, in which you can inspect the request, rewrite it, inject auth keys, respond to it directly, block it, or anything else you might like.</p><p>For example, you can use this to implement <b>credential injection</b> (token injection): When the agent makes an HTTP request to a service that requires authorization, you add credentials to the request on the way out. This way, the agent itself never knows the secret credentials, and therefore cannot leak them. (A sketch of this pattern follows the list below.)</p><p>Using a plain HTTP interface may be desirable when an agent is talking to a well-known API that is in its training set, or when you want your agent to use a library that is built on a REST API (the library can run inside the agent's sandbox).</p><p>With that said, <b>in the absence of a compatibility requirement, TypeScript RPC interfaces are better than HTTP:</b></p><ul><li><p>As shown above, a TypeScript interface requires far fewer tokens to describe than an HTTP interface.</p></li><li><p>The agent can write code to call TypeScript interfaces using far fewer tokens than equivalent HTTP.</p></li><li><p>With TypeScript interfaces, since you are defining your own wrapper interface anyway, it is easier to narrow the interface to expose exactly the capabilities that you want to provide to your agent, both for simplicity and security. With HTTP, you are more likely to be implementing <i>filtering</i> of requests made against some existing API. This is hard, because your proxy must fully interpret the meaning of every API call in order to properly decide whether to allow it, and HTTP requests are complicated, with many headers and other parameters that could all be meaningful. It ends up being easier to just write a TypeScript wrapper that only implements the functions you want to allow.</p></li></ul>
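            <p>Here's a minimal sketch of credential injection, assuming <code>globalOutbound</code> accepts a Fetcher such as a <code>WorkerEntrypoint</code> (the entrypoint wiring, <code>API_TOKEN</code> secret, and hostname are illustrative; check the Worker Loader docs for the exact binding type):</p>
            <pre><code>import { WorkerEntrypoint } from "cloudflare:workers";

// Every fetch() the agent makes is routed through this entrypoint.
export class AgentOutbound extends WorkerEntrypoint {
  async fetch(request) {
    let url = new URL(request.url);
    if (url.hostname === "api.example.com") {
      // Inject the credential on the way out; the agent never sees it.
      let authed = new Request(request);
      authed.headers.set("Authorization", `Bearer ${this.env.API_TOKEN}`);
      return fetch(authed);
    }
    // Block every other destination.
    return new Response("Blocked by harness", { status: 403 });
  }
}

// When loading the Dynamic Worker, pass the entrypoint as the outbound
// (assuming ctx.exports loopback bindings; see the Worker Loader docs):
//   globalOutbound: ctx.exports.AgentOutbound(),
</code></pre>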
    <div>
      <h3>Battle-hardened security</h3>
      <a href="#battle-hardened-security">
        
      </a>
    </div>
    <p>Hardening an isolate-based sandbox is tricky, as it presents a more complicated attack surface than a hardware virtual machine. Although all sandboxing mechanisms have bugs, security bugs in V8 are more common than security bugs in typical hypervisors. When using isolates to sandbox possibly-malicious code, it's important to have additional layers of defense-in-depth. Google Chrome, for example, implemented strict process isolation for this reason, but it is not the only possible solution.</p><p>We have nearly a decade of experience securing our isolate-based platform. Our systems automatically deploy V8 security patches to production within hours — faster than Chrome itself. Our <a href="https://blog.cloudflare.com/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model/"><u>security architecture</u></a> features a custom second-layer sandbox with dynamic cordoning of tenants based on risk assessments. <a href="https://blog.cloudflare.com/safe-in-the-sandbox-security-hardening-for-cloudflare-workers/"><u>We've extended the V8 sandbox itself</u></a> to leverage hardware features like MPK. We've teamed up with (and hired) leading researchers to develop <a href="https://blog.cloudflare.com/spectre-research-with-tu-graz/"><u>novel defenses against Spectre</u></a>. We also have systems that scan code for malicious patterns and automatically block them or apply additional layers of sandboxing. And much more.</p><p>When you use Dynamic Workers on Cloudflare, you get all of this automatically.</p>
    <div>
      <h2>Helper libraries</h2>
      <a href="#helper-libraries">
        
      </a>
    </div>
    <p>We've built a number of libraries that you might find useful when working with Dynamic Workers: </p>
    <div>
      <h3>Code Mode</h3>
      <a href="#code-mode">
        
      </a>
    </div>
    <p><a href="https://www.npmjs.com/package/@cloudflare/codemode"><code>@cloudflare/codemode</code></a> simplifies running model-generated code against AI tools using Dynamic Workers. At its core is <code>DynamicWorkerExecutor()</code>, which constructs a purpose-built sandbox with code normalisation to handle common formatting errors, and direct access to a <code>globalOutbound</code> fetcher for controlling <code>fetch()</code> behaviour inside the sandbox — set it to <code>null</code> for full isolation, or pass a <code>Fetcher</code> binding to route, intercept or enrich outbound requests from the sandbox.</p>
            <pre><code>import { DynamicWorkerExecutor, createCodeTool } from "@cloudflare/codemode";
import { generateText } from "ai";

// Sandboxed executor for model-generated code; no outbound network.
const executor = new DynamicWorkerExecutor({
  loader: env.LOADER,
  globalOutbound: null, // fully isolated
});

// Wrap your tool definitions in a single code-execution tool.
const codemode = createCodeTool({
  tools: myTools, // your existing tool definitions
  executor,
});

// Hand the code tool to the model.
return generateText({
  model,
  messages,
  tools: { codemode },
});
</code></pre>
            <p>The Code Mode SDK also provides two server-side utility functions. <code>codeMcpServer({ server, executor })</code> wraps an existing MCP Server, replacing its tool surface with a single <code>code()</code> tool. <code>openApiMcpServer({ spec, executor, request })</code> goes further: given an OpenAPI spec and an executor, it builds a complete MCP Server with <code>search()</code> and <code>execute()</code> tools, the same pattern used by the Cloudflare MCP Server, which is better suited to larger APIs.</p><p>In both cases, the code generated by the model runs inside Dynamic Workers, with calls to external services made over RPC bindings passed to the executor.</p><p><a href="https://www.npmjs.com/package/@cloudflare/codemode"><u>Learn more about the library and how to use it.</u></a> </p>
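            <p>As a quick sketch of both helpers using the signatures above (<code>myMcpServer</code> and <code>openApiSpec</code> are placeholders for your own MCP server and spec):</p>
            <pre><code>import {
  DynamicWorkerExecutor,
  codeMcpServer,
  openApiMcpServer,
} from "@cloudflare/codemode";

const executor = new DynamicWorkerExecutor({ loader: env.LOADER });

// Replace an existing server's tool surface with a single code() tool.
const codeServer = codeMcpServer({ server: myMcpServer, executor });

// Or build a search()/execute() server straight from an OpenAPI spec.
const apiServer = openApiMcpServer({ spec: openApiSpec, executor, request });
</code></pre>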
    <div>
      <h3>Bundling</h3>
      <a href="#bundling">
        
      </a>
    </div>
    <p>Dynamic Workers expect pre-bundled modules. <a href="https://www.npmjs.com/package/@cloudflare/worker-bundler"><code>@cloudflare/worker-bundler</code></a> handles that for you: give it source files and a <code>package.json</code>, and it resolves npm dependencies from the registry, bundles everything with <code>esbuild</code>, and returns the module map the Worker Loader expects.</p>
            <pre><code>import { createWorker } from "@cloudflare/worker-bundler";

const worker = env.LOADER.get("my-worker", async () =&gt; {
  const { mainModule, modules } = await createWorker({
    files: {
      "src/index.ts": `
        import { Hono } from 'hono';
        import { cors } from 'hono/cors';

        const app = new Hono();
        app.use('*', cors());
        app.get('/', (c) =&gt; c.text('Hello from Hono!'));
        app.get('/json', (c) =&gt; c.json({ message: 'It works!' }));

        export default app;
      `,
      "package.json": JSON.stringify({
        dependencies: { hono: "^4.0.0" }
      })
    }
  });

  return { mainModule, modules, compatibilityDate: "2026-01-01" };
});

await worker.getEntrypoint().fetch(request);
</code></pre>
            <p>It also supports full-stack apps via <code>createApp</code> — bundle a server Worker, client-side JavaScript, and static assets together, with built-in asset serving that handles content types, ETags, and SPA routing.</p><p><a href="https://www.npmjs.com/package/@cloudflare/worker-bundler"><u>Learn more about the library and how to use it.</u></a></p>
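            <p>A hypothetical sketch of <code>createApp</code>, assuming it follows the same files-based shape as <code>createWorker</code> above (the option names here are guesses; see the package docs for the real signature):</p>
            <pre><code>import { createApp } from "@cloudflare/worker-bundler";

// Hypothetical: server entry, client entry, and a static asset in one call.
const { mainModule, modules } = await createApp({
  files: {
    "src/server.ts": `export default { fetch: () =&gt; new Response("ok") };`,
    "src/client.ts": `document.body.textContent = "hydrated";`,
    "public/index.html": "&lt;!doctype html&gt;&lt;body&gt;&lt;/body&gt;",
    "package.json": JSON.stringify({ dependencies: {} })
  }
});
</code></pre>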
    <div>
      <h3>File manipulation</h3>
      <a href="#file-manipulation">
        
      </a>
    </div>
    <p><a href="https://www.npmjs.com/package/@cloudflare/shell"><code>@cloudflare/shell</code></a> gives your agent a virtual filesystem inside a Dynamic Worker. Agent code calls typed methods on a <code>state</code> object — read, write, search, replace, diff, glob, JSON query/update, archive — with structured inputs and outputs instead of string parsing.</p><p>Storage is backed by a durable <code>Workspace</code> (SQLite + R2), so files persist across executions. Coarse operations like <code>searchFiles</code>, <code>replaceInFiles</code>, and <code>planEdits</code> minimize RPC round-trips — the agent issues one call instead of looping over individual files. Batch writes are transactional by default: if any write fails, earlier writes roll back automatically.</p>
            <pre><code>import { Workspace } from "@cloudflare/shell";
import { stateTools } from "@cloudflare/shell/workers";
import { DynamicWorkerExecutor, resolveProvider } from "@cloudflare/codemode";

const workspace = new Workspace({
  sql: this.ctx.storage.sql, // Works with any DO's SqlStorage, D1, or custom SQL backend
  r2: this.env.MY_BUCKET, // large files spill to R2 automatically
  name: () =&gt; this.name   // lazy — resolved when needed, not at construction
});

// Code runs in an isolated Worker sandbox with no network access
const executor = new DynamicWorkerExecutor({ loader: env.LOADER });

// The LLM writes this code; `state.*` calls dispatch back to the host via RPC
const result = await executor.execute(
  `async () =&gt; {
    // Search across all TypeScript files for a pattern
    const hits = await state.searchFiles("src/**/*.ts", "answer");
    // Plan multiple edits as a single transaction
    const plan = await state.planEdits([
      { kind: "replace", path: "/src/app.ts",
        search: "42", replacement: "43" },
      { kind: "writeJson", path: "/src/config.json",
        value: { version: 2 } }
    ]);
    // Apply atomically — rolls back on failure
    return await state.applyEditPlan(plan);
  }`,
  [resolveProvider(stateTools(workspace))]
);</code></pre>
            <p>The package also ships prebuilt TypeScript type declarations and a system prompt template, so you can drop the full <code>state</code> API into your LLM context in a handful of tokens.</p><p><a href="https://www.npmjs.com/package/@cloudflare/shell"><u>Learn more about the library and how to use it.</u></a></p>
    <div>
      <h2>How are people using it?</h2>
      <a href="#how-are-people-using-it">
        
      </a>
    </div>
    
    <div>
      <h4>Code Mode</h4>
      <a href="#code-mode">
        
      </a>
    </div>
    <p>Developers want their agents to write and execute code against tool APIs, rather than making sequential tool calls one at a time. With Dynamic Workers, the LLM generates a single TypeScript function that chains multiple API calls together, runs it in a Dynamic Worker, and returns the final result back to the agent. As a result, only the output, and not every intermediate step, ends up in the context window. This cuts both latency and token usage, and produces better results, especially when the tool surface is large.</p><p>Our own <a href="https://github.com/cloudflare/mcp-server-cloudflare">Cloudflare MCP server</a> is built exactly this way: it exposes the entire Cloudflare API through just two tools — search and execute — in under 1,000 tokens, because the agent writes code against a typed API instead of navigating hundreds of individual tool definitions.</p>
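            <p>For illustration, here's the kind of function an agent might generate against a hypothetical typed <code>env.CRM</code> binding (all names here are invented for the sketch); only the final return value lands in the context window:</p>
            <pre><code>// Agent-generated code: several chained calls, one sandbox run.
let customers = await env.CRM.listCustomers({ plan: "enterprise" });
let tickets = await Promise.all(
  customers.map(c =&gt; env.CRM.getOpenTickets(c.id))
);

// Aggregate in code; the intermediate results never reach the model.
return customers.map((c, i) =&gt; ({
  name: c.name,
  openTickets: tickets[i].length,
}));
</code></pre>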
    <div>
      <h4>Building custom automations </h4>
      <a href="#building-custom-automations">
        
      </a>
    </div>
    <p>Developers are using Dynamic Workers to let agents build custom automations on the fly. <a href="https://www.zite.com/"><u>Zite</u></a>, for example, is building an app platform where users interact through a chat interface — the LLM writes TypeScript behind the scenes to build CRUD apps, connect to services like Stripe, Airtable, and Google Calendar, and run backend logic, all without the user ever seeing a line of code. Every automation runs in its own Dynamic Worker, with access to only the specific services and libraries that the endpoint needs.</p><blockquote><p><i>“To enable server-side code for Zite’s LLM-generated apps, we needed an execution layer that was instant, isolated, and secure. Cloudflare’s Dynamic Workers hit the mark on all three, and out-performed all of the other platforms we benchmarked for speed and library support. The NodeJS compatible runtime supported all of Zite’s workflows, allowing hundreds of third party integrations, without sacrificing on startup time. Zite now services millions of execution requests daily thanks to Dynamic Workers.” </i></p><p><i>— </i><b><i>Antony Toron</i></b><i>, CTO and Co-Founder, Zite </i></p></blockquote>
    <div>
      <h4>Running AI-generated applications</h4>
      <a href="#running-ai-generated-applications">
        
      </a>
    </div>
    <p>Developers are building platforms that generate full applications from AI — either for their customers or for internal teams building prototypes. With Dynamic Workers, each app can be spun up on demand, then put back into cold storage until it's invoked again. Fast startup times make it easy to preview changes during active development. Platforms can also block or intercept any network requests the generated code makes, keeping AI-generated apps safe to run.</p>
    <div>
      <h2>Pricing</h2>
      <a href="#pricing">
        
      </a>
    </div>
    <p>Dynamically-loaded Workers are priced at $0.002 per unique Worker loaded per day (as of this post’s publication), in addition to the usual CPU time and invocation pricing of regular Workers.</p><p>For AI-generated "code mode" use cases, where every Worker is a unique one-off, this means the price is $0.002 per Worker loaded (plus CPU and invocations). This cost is typically negligible compared to the inference costs to generate the code.</p><p>During the beta period, the $0.002 charge is waived. As pricing is subject to change, please always check our Dynamic Workers <a href="https://developers.cloudflare.com/dynamic-workers/pricing/"><u>pricing</u></a> for the most current information. </p>
    <div>
      <h2>Get Started</h2>
      <a href="#get-started">
        
      </a>
    </div>
    <p>If you’re on the Workers Paid plan, you can start using <a href="https://developers.cloudflare.com/dynamic-workers/">Dynamic Workers</a> today. </p>
    <div>
      <h4>Dynamic Workers Starter</h4>
      <a href="#dynamic-workers-starter">
        
      </a>
    </div>
    <a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
<p>Use this “hello world” <a href="https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers">starter</a> to get a Worker deployed that can load and execute Dynamic Workers. </p>
    <div>
      <h4>Dynamic Workers Playground</h4>
      <a href="#dynamic-workers-playground">
        
      </a>
    </div>
    <a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-playground"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>You can also deploy the <a href="https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-playground">Dynamic Workers Playground</a>, where you’ll be able to write or import code, bundle it at runtime with <code>@cloudflare/worker-bundler</code>, execute it through a Dynamic Worker, and see real-time responses and execution logs. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32d0ficYALnSneKc4jZPja/0d4d07d747fc14936f16071714b7a8e5/BLOG-3243_2.png" />
          </figure><p>Dynamic Workers are fast, scalable, and lightweight. <a href="https://discord.com/channels/595317990191398933/1460655307255578695"><u>Find us on Discord</u></a> if you have any questions. We’d love to see what you build!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/mQOJLnMtXULmj6l3DgKZg/ef2ee4cef616bc2d9a7caf35df5834f5/BLOG-3243_3.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">1tc7f8AggVLw5D8OmaZri5</guid>
            <dc:creator>Kenton Varda</dc:creator>
            <dc:creator>Sunil Pai</dc:creator>
            <dc:creator>Ketan Gupta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Powering the agents: Workers AI now runs large models, starting with Kimi K2.5]]></title>
            <link>https://blog.cloudflare.com/workers-ai-large-models/</link>
            <pubDate>Thu, 19 Mar 2026 19:53:16 GMT</pubDate>
            <description><![CDATA[ Kimi K2.5 is now on Workers AI, helping you power agents entirely on Cloudflare’s Developer Platform. Learn how we optimized our inference stack and reduced inference costs for internal agent use cases.  ]]></description>
            <content:encoded><![CDATA[ <p>We're making Cloudflare the best place for building and deploying agents. But reliable agents aren't built on prompts alone; they require a robust, coordinated infrastructure of underlying primitives. </p><p>At Cloudflare, we have been building these primitives for years: <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> for state persistence, <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a> for long running tasks, and <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Dynamic Workers</u></a> or <a href="https://developers.cloudflare.com/sandbox/"><u>Sandbox</u></a> containers for secure execution. Powerful abstractions like the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> are designed to help you build agents on top of Cloudflare’s Developer Platform.</p><p>But these primitives only provided the execution environment. The agent still needed a model capable of powering it. </p><p>Starting today, Workers AI is officially in the big models game. We now offer frontier open-source models on our AI inference platform. We’re starting by releasing <a href="https://www.kimi.com/blog/kimi-k2-5"><u>Moonshot AI’s Kimi K2.5</u></a> model <a href="https://developers.cloudflare.com/workers-ai/models/kimi-k2.5"><u>on Workers AI</u></a>. With a full 256k context window and support for multi-turn tool calling, vision inputs, and structured outputs, the Kimi K2.5 model is excellent for all kinds of agentic tasks. By bringing a frontier-scale model directly into the Cloudflare Developer Platform, we’re making it possible to run the entire agent lifecycle on a single, unified platform.</p><p>The heart of an agent is the AI model that powers it, and that model needs to be smart, with high reasoning capabilities and a large context window. Workers AI now runs those models.</p>
    <div>
      <h2>The price-performance sweet spot</h2>
      <a href="#the-price-performance-sweet-spot">
        
      </a>
    </div>
    <p>We spent the last few weeks testing Kimi K2.5 as the engine for our internal development tools. Within our <a href="https://opencode.ai/"><u>OpenCode</u></a> environment, Cloudflare engineers use Kimi as a daily driver for agentic coding tasks. We have also integrated the model into our automated code review pipeline; you can see this in action via our public code review agent, <a href="https://github.com/ask-bonk/ask-bonk"><u>Bonk</u></a>, on Cloudflare GitHub repos. In production, the model has proven to be a fast, efficient alternative to larger proprietary models without sacrificing quality.</p><p>Serving Kimi K2.5 began as an experiment, but it quickly became critical once we saw how the model performs and how cost-efficient it is. As an illustrative example: we have an agent that does security reviews of Cloudflare’s codebases. This agent processes over 7B tokens per day (roughly 2.6 trillion tokens a year), and using Kimi, it has caught more than 15 confirmed issues in a single codebase. Doing some rough math, if we had run this agent on a mid-tier proprietary model, we would have spent $2.4M a year for this single use case, on a single codebase. Running this agent with Kimi K2.5 cost just a fraction of that: we cut costs by 77% simply by making the switch to Workers AI.</p><p>As AI adoption increases, we are seeing a fundamental shift not only in how engineering teams are operating, but in how individuals are operating. It is becoming increasingly common for people to have a personal agent like <a href="https://openclaw.ai/"><u>OpenClaw</u></a> running 24/7. The volume of inference is skyrocketing.</p><p>This new rise in personal and coding agents means that cost is no longer a secondary concern; it is the primary blocker to scaling. When every employee has multiple agents processing hundreds of thousands of tokens per hour, the math for proprietary models stops working. Enterprises will look to transition to open-source models that offer frontier-level reasoning without the proprietary price tag. Workers AI is here to facilitate this shift, providing everything from serverless endpoints for a personal agent to dedicated instances powering autonomous agents across an entire organization.</p>
    <div>
      <h2>The large model inference stack</h2>
      <a href="#the-large-model-inference-stack">
        
      </a>
    </div>
    <p>Workers AI has served models, including LLMs, since its launch two years ago, but we’ve historically prioritized smaller models. Part of the reason was that for some time, open-source LLMs fell far behind the models from frontier model labs. This changed with models like Kimi K2.5, but to serve this type of very large LLM, we had to make changes to our inference stack. We wanted to share with you some of what goes on behind the scenes to support a model like Kimi.</p><p>We’ve been working on custom kernels for Kimi K2.5, built on top of our proprietary <a href="https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/"><u>Infire inference engine</u></a>, to optimize how we serve the model. Custom kernels improve the model’s performance and GPU utilization, unlocking gains that would otherwise go unclaimed if you were just running the model out of the box. There are also multiple techniques and hardware configurations that can be leveraged to serve a large model. Developers typically use a combination of data, tensor, and expert parallelization techniques to optimize model performance. Strategies like disaggregated prefill, in which the prefill and generation stages run on separate machines for better throughput or higher GPU utilization, are also important. Implementing these techniques and incorporating them into the inference stack takes a lot of dedicated experience to get right. </p><p>Workers AI has already done the experimentation with serving techniques to yield excellent throughput on Kimi K2.5. A lot of this does not come out of the box when you self-host an open-source model. The benefit of using a platform like Workers AI is that you don’t need to be a Machine Learning Engineer, a DevOps expert, or a Site Reliability Engineer to do the optimizations required to host it: we’ve already done the hard part, you just need to call an API.</p>
    <div>
      <h2>Beyond the model — platform improvements for agentic workloads</h2>
      <a href="#beyond-the-model-platform-improvements-for-agentic-workloads">
        
      </a>
    </div>
    <p>In concert with this launch, we’ve also improved our platform and are releasing several new features to help you build better agents.</p>
    <div>
      <h3>Prefix caching and surfacing cached tokens</h3>
      <a href="#prefix-caching-and-surfacing-cached-tokens">
        
      </a>
    </div>
    <p>When you work with agents, you are likely sending a large number of input tokens as part of the context: this could be detailed system prompts, tool definitions, MCP server tools, or entire codebases. Inputs can be as large as the model context window, so in theory, you could be sending requests with almost 256k input tokens. That’s a lot of tokens.</p><p>When an LLM processes a request, the request is broken down into two stages: the prefill stage processes input tokens and the output stage generates output tokens. These stages are usually sequential, where input tokens have to be fully processed before you can generate output tokens. This means that sometimes the GPU is not fully utilized while the model is doing prefill.</p><p>With multi-turn conversations, when you send a new prompt, the client sends all the previous prompts, tools, and context from the session to the model as well. The delta between consecutive requests is usually just a few new lines of input; all the other context has already gone through the prefill stage during a previous request. This is where prefix caching helps. Instead of doing prefill on the entire request, we can cache the input tensors from a previous request, and only do prefill on the new input tokens. This saves a lot of time and compute from the prefill stage, which means a faster Time to First Token (TTFT) and a higher Tokens Per Second (TPS) throughput as you’re not blocked on prefill.</p><p>Workers AI has always done prefix caching, but we are now surfacing cached tokens as a usage metric and offering a discount on cached tokens compared to input tokens. (Pricing can be found on the <a href="https://developers.cloudflare.com/workers-ai/models/kimi-k2.5/"><u>model page</u></a>.) We also have new techniques for you to leverage in order to get a higher prefix cache hit rate, reducing your costs.</p>
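            <p>To make the delta concrete, here's an illustrative sketch of two consecutive turns (the message variables are invented for this example):</p>
            <pre><code>// Turn 1: the model prefills the system prompt, tools, and first message.
const turn1 = { messages: [systemPrompt, toolDefinitions, userMsg1] };

// Turn 2: the client re-sends everything, plus the new exchange.
const turn2 = {
  messages: [systemPrompt, toolDefinitions, userMsg1, assistantMsg1, userMsg2],
};

// With prefix caching, only [assistantMsg1, userMsg2] needs prefill on
// turn 2; the long shared prefix is served from cache.
</code></pre>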
    <div>
      <h3>New session affinity header for higher cache hit rates</h3>
      <a href="#new-session-affinity-header-for-higher-cache-hit-rates">
        
      </a>
    </div>
    <p>In order to route to the same model instance and take advantage of prefix caching, we use a new <code>x-session-affinity</code> header. When you send this header, you’ll improve your cache hit ratio, leading to more cached tokens and subsequently, faster TTFT, TPS, and lower inference costs.</p><p>You can pass the new header like below, with a unique string per session or per agent. Some clients like OpenCode implement this automatically out of the box. Our <a href="https://github.com/cloudflare/agents-starter"><u>Agents SDK starter</u></a> has already set up the wiring to do this for you, too.</p>
            <pre><code>curl -X POST \
"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/moonshotai/kimi-k2.5" \
  -H "Authorization: Bearer {API_TOKEN}" \
  -H "Content-Type: application/json" \
  -H "x-session-affinity: ses_12345678" \
  -d '{
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is prefix caching and why does it matter?"
      }
    ],
    "max_tokens": 2400,
    "stream": true
  }'
</code></pre>
            
    <div>
      <h3>Redesigned async APIs</h3>
      <a href="#redesigned-async-apis">
        
      </a>
    </div>
    <p>Serverless inference is really hard. With a pay-per-token business model, it’s cheaper on a per-request basis because you don’t need to pay for entire GPUs to service your requests. But there’s a trade-off: you have to contend with other people’s traffic and capacity constraints, and there’s no strict guarantee that your request will be processed. This is not unique to Workers AI — it’s evidently the case across serverless model providers, given the frequent news reports of overloaded providers and service disruptions. While we always strive to serve your request and have built-in autoscaling and rebalancing, there are hard limitations (like hardware) that make this a challenge.</p><p>For volumes of requests that would exceed synchronous rate limits, you can submit batches of inferences to be completed asynchronously. We’re introducing a revamped Asynchronous API, which means that for asynchronous use cases, you won’t run into Out of Capacity errors and inference will execute durably at some point. Our async API looks more like flex processing than a batch API, where we process requests in the async queue as long as we have headroom in our model instances. In internal testing, our async requests usually execute within 5 minutes, but this will depend on what live traffic looks like. As we bring Kimi to the public, we will tune our scaling accordingly, but the async API is the best way to make sure you don’t run into capacity errors in durable workflows. This is perfect for use cases that are not real-time, such as code scanning agents or research agents.</p><p>Workers AI previously had an asynchronous API, but we’ve recently revamped the systems under the hood. We now rely on a pull-based system versus the historical push-based system, allowing us to pull in queued requests as soon as we have capacity. We’ve also added better controls to tune the throughput of async requests, monitoring GPU utilization in real-time and pulling in async requests when utilization is low, so that critical synchronous requests get priority while still processing asynchronous requests efficiently.</p><p>To use the asynchronous API, you would send your requests as seen below. We also have a way to <a href="https://developers.cloudflare.com/workers-ai/platform/event-subscriptions/"><u>set up event notifications</u></a> so that you know when the inference is complete instead of polling for the result. </p>
            <pre><code>// (1.) Queue a batch of requests by passing queueRequest: true
let res = await env.AI.run("@cf/moonshotai/kimi-k2.5", {
  "requests": [{
    "messages": [{
      "role": "user",
      "content": "Tell me a joke"
    }]
  }, {
    "messages": [{
      "role": "user",
      "content": "Explain the Pythagoras theorem"
    }]
  } /* ...add more requests in a batch */ ]
}, {
  queueRequest: true,
});

// (2.) Grab the request ID
let request_id;
if (res &amp;&amp; res.request_id) {
  request_id = res.request_id;
}

// (3.) Poll the status using the request ID
res = await env.AI.run("@cf/moonshotai/kimi-k2.5", {
  request_id: request_id
});

if (res &amp;&amp; (res.status === "queued" || res.status === "running")) {
  // Still in the queue: retry by polling again (or use the event
  // notifications described above instead of polling)
} else {
  return Response.json(res); // This will contain the final completed response
}
</code></pre>
            
    <div>
      <h2>Try it out today</h2>
      <a href="#try-it-out-today">
        
      </a>
    </div>
    <p>Get started with Kimi K2.5 on Workers AI today. You can read our developer docs to find out <a href="https://developers.cloudflare.com/workers-ai/models/kimi-k2.5/"><u>model information and pricing</u></a>, and how to take advantage of <a href="https://developers.cloudflare.com/workers-ai/features/prompt-caching/"><u>prompt caching via session affinity headers</u></a> and the <a href="https://developers.cloudflare.com/workers-ai/features/batch-api/"><u>asynchronous API</u></a>. The <a href="https://github.com/cloudflare/agents-starter"><u>Agents SDK starter</u></a> also now uses Kimi K2.5 as its default model. You can also <a href="https://opencode.ai/docs/providers/"><u>connect to Kimi K2.5 on Workers AI via Opencode</u></a>. For a live demo, try it in our <a href="https://playground.ai.cloudflare.com/"><u>playground</u></a>.</p><p>And if this set of problems around serverless inference, ML optimizations, and GPU infrastructure sounds interesting to you — <a href="https://job-boards.greenhouse.io/cloudflare/jobs/6297179?gh_jid=6297179"><u>we’re hiring</u></a>!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36JzF0zePj2z7kZQK8Q2fg/73b0a7206d46f0eef170ffd1494dc4b3/BLOG-3247_2.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents]]></category>
            <guid isPermaLink="false">1wSO33KRdd5aUPAlSVDiqU</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Kevin Flansburg</dc:creator>
            <dc:creator>Ashish Datta</dc:creator>
            <dc:creator>Kevin Jain</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we rebuilt Next.js with AI in one week]]></title>
            <link>https://blog.cloudflare.com/vinext/</link>
            <pubDate>Tue, 24 Feb 2026 20:00:00 GMT</pubDate>
            <description><![CDATA[ One engineer used AI to rebuild Next.js on Vite in a week. vinext builds up to 4x faster, produces 57% smaller bundles, and deploys to Cloudflare Workers with a single command. ]]></description>
            <content:encoded><![CDATA[ <p><sub><i>*This post was updated at 12:35 pm PT to fix a typo in the build time benchmarks.</i></sub></p><p>Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, <a href="https://github.com/cloudflare/vinext"><u>vinext</u></a> (pronounced "vee-next"), is a drop-in replacement for Next.js, built on <a href="https://vite.dev/"><u>Vite</u></a>, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production. </p><p>The whole thing cost about $1,100 in tokens.</p>
    <div>
      <h2>The Next.js deployment problem</h2>
      <a href="#the-next-js-deployment-problem">
        
      </a>
    </div>
    <p><a href="https://nextjs.org/"><u>Next.js</u></a> is the most popular React framework. Millions of developers use it. It powers a huge chunk of the production web, and for good reason. The developer experience is top-notch.</p><p>But Next.js has a deployment problem when used in the broader serverless ecosystem. The tooling is entirely bespoke: Next.js has invested heavily in Turbopack but if you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run.</p><p>If you’re thinking: “Isn’t that what OpenNext does?”, you are correct. </p><p>That is indeed the problem <a href="https://opennext.js.org/"><u>OpenNext</u></a> was built to solve. And a lot of engineering effort has gone into OpenNext from multiple providers, including us at Cloudflare. It works, but quickly runs into limitations and becomes a game of whack-a-mole. </p><p>Building on top of Next.js output as a foundation has proven to be a difficult and fragile approach. Because OpenNext has to reverse-engineer Next.js's build output, this results in unpredictable changes between versions that take a lot of work to correct. </p><p>Next.js has been working on a first-class adapters API, and we've been collaborating with them on it. It's still an early effort but even with adapters, you're still building on the bespoke Turbopack toolchain. And adapters only cover build and deploy. During development, next dev runs exclusively in Node.js with no way to plug in a different runtime. If your application uses platform-specific APIs like Durable Objects, KV, or AI bindings, you can't test that code in dev without workarounds.</p>
    <div>
      <h2>Introducing vinext </h2>
      <a href="#introducing-vinext">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BCYnb6nCnc9oRBPQnuES5/d217b3582f4fe30597a3b4bf000d9bd7/BLOG-3194_2.png" />
          </figure><p>What if instead of adapting Next.js output, we reimplemented the Next.js API surface on <a href="https://vite.dev/"><u>Vite</u></a> directly? Vite is the build tool used by most of the front-end ecosystem outside of Next.js, powering frameworks like Astro, SvelteKit, Nuxt, and Remix. A clean reimplementation, not merely a wrapper or adapter. We honestly didn't think it would work. But it’s 2026, and the cost of building software has completely changed.</p><p>We got a lot further than we expected.</p>
            <pre><code>npm install vinext</code></pre>
            <p>Replace <code>next</code> with <code>vinext</code> in your scripts and everything else stays the same. Your existing <code>app/</code>, <code>pages/</code>, and <code>next.config.js</code> work as-is.</p>
            <pre><code>vinext dev          # Development server with HMR
vinext build        # Production build
vinext deploy       # Build and deploy to Cloudflare Workers</code></pre>
            <p>This is not a wrapper around Next.js and Turbopack output. It's an alternative implementation of the API surface: routing, server rendering, React Server Components, server actions, caching, middleware. All of it built on top of Vite as a plugin. Most importantly, Vite output runs on any platform thanks to the <a href="https://vite.dev/guide/api-environment"><u>Vite Environment API</u></a>.</p>
    <div>
      <h2>The numbers</h2>
      <a href="#the-numbers">
        
      </a>
    </div>
    <p>Early benchmarks are promising. We compared vinext against Next.js 16 using a shared 33-route App Router application. Both frameworks are doing the same work: compiling, bundling, and preparing server-rendered routes. We disabled TypeScript type checking and ESLint in Next.js's build (Vite doesn't run these during builds), and used <code>force-dynamic</code> so Next.js doesn't spend extra time pre-rendering static routes, which would unfairly slow down its numbers. The goal was to measure only bundler and compilation speed, nothing else. Benchmarks run on GitHub CI on every merge to main. </p><p><b>Production build time:</b></p>
<div><table><colgroup>
<col></col>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Framework</span></th>
    <th><span>Mean</span></th>
    <th><span>vs Next.js</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Next.js 16.1.6 (Turbopack)</span></td>
    <td><span>7.38s</span></td>
    <td><span>baseline</span></td>
  </tr>
  <tr>
    <td><span>vinext (Vite 7 / Rollup)</span></td>
    <td>4.64s</td>
    <td>1.6x faster</td>
  </tr>
  <tr>
    <td><span>vinext (Vite 8 / Rolldown)</span></td>
    <td>1.67s</td>
    <td>4.4x faster</td>
  </tr>
</tbody></table></div><p><b>Client bundle size (gzipped):</b></p>
<div><table><colgroup>
<col></col>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Framework</span></th>
    <th><span>Gzipped</span></th>
    <th><span>vs Next.js</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Next.js 16.1.6</span></td>
    <td><span>168.9 KB</span></td>
    <td><span>baseline</span></td>
  </tr>
  <tr>
    <td><span>vinext (Rollup)</span></td>
    <td><span>74.0 KB</span></td>
    <td><span>56% smaller</span></td>
  </tr>
  <tr>
    <td><span>vinext (Rolldown)</span></td>
    <td><span>72.9 KB</span></td>
    <td><span>57% smaller</span></td>
  </tr>
</tbody></table></div><p>These benchmarks measure compilation and bundling speed, not production serving performance. The test fixture is a single 33-route app, not a representative sample of all production applications. We expect these numbers to evolve as all three projects continue to develop. The <a href="https://benchmarks.vinext.workers.dev"><u>full methodology and historical results</u></a> are public. Take them as directional, not definitive.</p><p>The direction is encouraging, though. Vite's architecture, and especially <a href="https://rolldown.rs/"><u>Rolldown</u></a> (the Rust-based bundler coming in Vite 8), has structural advantages for build performance that show up clearly here.</p>
    <div>
      <h2>Deploying to Cloudflare Workers</h2>
      <a href="#deploying-to-cloudflare-workers">
        
      </a>
    </div>
    <p>vinext is built with Cloudflare Workers as the first deployment target. A single command takes you from source code to a running Worker:</p>
            <pre><code>vinext deploy</code></pre>
            <p>This handles everything: builds the application, auto-generates the Worker configuration, and deploys. Both the App Router and Pages Router work on Workers, with full client-side hydration, interactive components, client-side navigation, and React state.</p><p>For production caching, vinext includes a Cloudflare KV cache handler that gives you ISR (Incremental Static Regeneration) out of the box:</p>
            <pre><code>import { KVCacheHandler } from "vinext/cloudflare";
import { setCacheHandler } from "next/cache";

setCacheHandler(new KVCacheHandler(env.MY_KV_NAMESPACE));</code></pre>
            <p><a href="https://developers.cloudflare.com/kv/"><u>KV</u></a> is a good default for most applications, but the caching layer is designed to be pluggable. That setCacheHandler call means you can swap in whatever backend makes sense. <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> might be a better fit for apps with large cached payloads or different access patterns. We're also working on improvements to our Cache API that should provide a strong caching layer with less configuration. The goal is flexibility: pick the caching strategy that fits your app.</p><p>Live examples running right now:</p><ul><li><p><a href="https://app-router-playground.vinext.workers.dev"><u>App Router Playground</u></a></p></li><li><p><a href="https://hackernews.vinext.workers.dev"><u>Hacker News clone</u></a></p></li><li><p><a href="https://app-router-cloudflare.vinext.workers.dev"><u>App Router minimal</u></a></p></li><li><p><a href="https://pages-router-cloudflare.vinext.workers.dev"><u>Pages Router minimal</u></a></p></li></ul><p>We also have <a href="https://next-agents.threepointone.workers.dev/"><u>a live example</u></a> of Cloudflare Agents running in a Next.js app, without the need for workarounds like <a href="https://developers.cloudflare.com/workers/wrangler/api/#getplatformproxy"><u>getPlatformProxy</u></a>, since the entire app now runs in workerd, during both dev and deploy phases. This means being able to use Durable Objects, AI bindings, and every other Cloudflare-specific service without compromise. <a href="https://github.com/cloudflare/vinext-agents-example"><u>Have a look here.</u></a>   </p>
    <div>
      <h2>Frameworks are a team sport</h2>
      <a href="#frameworks-are-a-team-sport">
        
      </a>
    </div>
    <p>The current deployment target is Cloudflare Workers, but that's a small part of the picture. Something like 95% of vinext is pure Vite. The routing, the module shims, the SSR pipeline, the RSC integration: none of it is Cloudflare-specific.</p><p>Cloudflare is looking to work with other hosting providers on adopting this toolchain for their customers (the lift is minimal — we got a proof-of-concept working on <a href="https://vinext-on-vercel.vercel.app/"><u>Vercel</u></a> in less than 30 minutes!). This is an open-source project, and for its long-term success, we believe it’s important we work with partners across the ecosystem to ensure ongoing investment. PRs from other platforms are welcome. If you're interested in adding a deployment target, <a href="https://github.com/cloudflare/vinext/issues"><u>open an issue</u></a> or reach out.</p>
    <div>
      <h2>Status: Experimental</h2>
      <a href="#status-experimental">
        
      </a>
    </div>
    <p>We want to be clear: vinext is experimental. It's not even one week old, and it has not yet been battle-tested with any meaningful traffic at scale. If you're evaluating it for a production application, proceed with appropriate caution.</p><p>That said, the test suite is extensive: over 1,700 Vitest tests and 380 Playwright E2E tests, including tests ported directly from the Next.js test suite and OpenNext's Cloudflare conformance suite. We’ve verified it against the Next.js App Router Playground. Coverage sits at 94% of the Next.js 16 API surface.</p><p>Early results from real-world customers are encouraging. We've been working with <a href="https://ndstudio.gov/"><u>National Design Studio</u></a>, a team that's aiming to modernize every government interface, on one of their beta sites, <a href="https://www.cio.gov/"><u>CIO.gov</u></a>. They're already running vinext in production, with meaningful improvements in build times and bundle sizes.</p><p>The README is honest about <a href="https://github.com/cloudflare/vinext#whats-not-supported-and-wont-be"><u>what's not supported and won't be</u></a>, and about <a href="https://github.com/cloudflare/vinext#known-limitations"><u>known limitations</u></a>. We want to be upfront rather than overpromise.</p>
    <div>
      <h2>What about pre-rendering?</h2>
      <a href="#what-about-pre-rendering">
        
      </a>
    </div>
    <p>vinext already supports Incremental Static Regeneration (ISR) out of the box. After the first request to any page, it's cached and revalidated in the background, just like Next.js. That part works today.</p><p>vinext does not yet support static pre-rendering at build time. In Next.js, pages without dynamic data get rendered during <code>next build</code> and served as static HTML. If you have dynamic routes, you use <code>generateStaticParams()</code> to enumerate which pages to build ahead of time. vinext doesn't do that… yet.</p><p>This was an intentional design decision for launch. It's  <a href="https://github.com/cloudflare/vinext/issues/9">on the roadmap</a>, but if your site is 100% prebuilt HTML with static content, you probably won't see much benefit from vinext today. That said, if one engineer can spend <span>$</span>1,100 in tokens and rebuild Next.js, you can probably spend $10 and migrate to a Vite-based framework designed specifically for static content, like <a href="https://astro.build/">Astro</a> (which <a href="https://blog.cloudflare.com/astro-joins-cloudflare/">also deploys to Cloudflare Workers</a>).</p><p>For sites that aren't purely static, though, we think we can do something better than pre-rendering everything at build time.</p>
    <div>
      <h2>Introducing Traffic-aware Pre-Rendering</h2>
      <a href="#introducing-traffic-aware-pre-rendering">
        
      </a>
    </div>
    <p>Next.js pre-renders every page listed in <code>generateStaticParams()</code> during the build. A site with 10,000 product pages means 10,000 renders at build time, even though 99% of those pages may never receive a request. Builds scale linearly with page count. This is why large Next.js sites end up with 30-minute builds.</p><p>So we built <b>Traffic-aware Pre-Rendering</b> (TPR). It's experimental today, and we plan to make it the default once we have more real-world testing behind it.</p><p>The idea is simple. Cloudflare is already the reverse proxy for your site. We have your traffic data. We know which pages actually get visited. So instead of pre-rendering everything or pre-rendering nothing, vinext queries Cloudflare's zone analytics at deploy time and pre-renders only the pages that matter.</p>
            <pre><code>vinext deploy --experimental-tpr

  Building...
  Build complete (4.2s)

  TPR (experimental): Analyzing traffic for my-store.com (last 24h)
  TPR: 12,847 unique paths — 184 pages cover 90% of traffic
  TPR: Pre-rendering 184 pages...
  TPR: Pre-rendered 184 pages in 8.3s → KV cache

  Deploying to Cloudflare Workers...
</code></pre>
            <p>For a site with 100,000 product pages, the power law means 90% of traffic usually goes to 50 to 200 pages. Those get pre-rendered in seconds. Everything else falls back to on-demand SSR and gets cached via ISR after the first request. Every new deploy refreshes the set based on current traffic patterns. Pages that go viral get picked up automatically. All of this works without <code>generateStaticParams()</code> and without coupling your build to your production database.</p>
    <div>
      <h2>Taking on the Next.js challenge, but this time with AI</h2>
      <a href="#taking-on-the-next-js-challenge-but-this-time-with-ai">
        
      </a>
    </div>
    <p>A project like this would normally take a team of engineers months, if not years. Several teams at various companies have attempted it, and the scope is just enormous. We tried once at Cloudflare! Two routers, 33+ module shims, server rendering pipelines, RSC streaming, file-system routing, middleware, caching, static export. There's a reason nobody has pulled it off.</p><p>This time we did it in under a week. One engineer (technically an engineering manager) directing AI.</p><p>The first commit landed on February 13. By the end of that same evening, both the Pages Router and App Router had basic SSR working, along with middleware, server actions, and streaming. By the next afternoon, <a href="https://app-router-playground.vinext.workers.dev"><u>App Router Playground</u></a> was rendering 10 of 11 routes. By day three, <code>vinext deploy</code> was shipping apps to Cloudflare Workers with full client hydration. The rest of the week was hardening: fixing edge cases, expanding the test suite, bringing API coverage to 94%.</p><p>What changed from those earlier attempts? AI got better. Way better.</p>
    <div>
      <h2>Why this problem is made for AI</h2>
      <a href="#why-this-problem-is-made-for-ai">
        
      </a>
    </div>
    <p>Not every project would go this way. This one did because a few things happened to line up at the right time.</p><p><b>Next.js is well-specified.</b> It has extensive documentation, a massive user base, and years of Stack Overflow answers and tutorials. The API surface is all over the training data. When you ask Claude to implement <code>getServerSideProps</code> or explain how <code>useRouter</code> works, it doesn't hallucinate. It knows how Next works.</p><p><b>Next.js has an elaborate test suite.</b> The <a href="https://github.com/vercel/next.js"><u>Next.js repo</u></a> contains thousands of E2E tests covering every feature and edge case. We ported tests directly from their suite (you can see the attribution in the code). This gave us a specification we could verify against mechanically.</p><p><b>Vite is an excellent foundation.</b> <a href="https://vite.dev/"><u>Vite</u></a> handles the hard parts of front-end tooling: fast HMR, native ESM, a clean plugin API, production bundling. We didn't have to build a bundler. We just had to teach it to speak Next.js. <a href="https://github.com/vitejs/vite-plugin-rsc"><code><u>@vitejs/plugin-rsc</u></code></a> is still early, but it gave us React Server Components support without having to build an RSC implementation from scratch.</p><p><b>The models caught up.</b> We don't think this would have been possible even a few months ago. Earlier models couldn't sustain coherence across a codebase this size. New models can hold the full architecture in context, reason about how modules interact, and produce correct code often enough to keep momentum going. At times, I saw it go into Next, Vite, and React internals to figure out a bug. The state-of-the-art models are impressive, and they seem to keep getting better.</p><p>All of those things had to be true at the same time. Well-documented target API, comprehensive test suite, solid build tool underneath, and a model that could actually handle the complexity. Take any one of them away and this doesn't work nearly as well.</p>
    <div>
      <h2>How we actually built it</h2>
      <a href="#how-we-actually-built-it">
        
      </a>
    </div>
    <p>Almost every line of code in vinext was written by AI. But here's the thing that matters more: every line passes the same quality gates you'd expect from human-written code. The project has 1,700+ Vitest tests, 380 Playwright E2E tests, full TypeScript type checking via tsgo, and linting via oxlint. Continuous integration runs all of it on every pull request. Establishing a set of good guardrails is critical to making AI productive in a codebase.</p><p>The process started with a plan. I spent a couple of hours going back and forth with Claude in <a href="https://opencode.ai"><u>OpenCode</u></a> to define the architecture: what to build, in what order, which abstractions to use. That plan became the north star. From there, the workflow was straightforward:</p><ol><li><p>Define a task ("implement the <code>next/navigation</code> shim with usePathname, <code>useSearchParams</code>, <code>useRouter</code>").</p></li><li><p>Let the AI write the implementation and tests.</p></li><li><p>Run the test suite.</p></li><li><p>If tests pass, merge. If not, give the AI the error output and let it iterate.</p></li><li><p>Repeat.</p></li></ol><p>We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated. </p><p>It didn't work perfectly every time. There were PRs that were just wrong. The AI would confidently implement something that seemed right but didn't match actual Next.js behavior. I had to course-correct regularly. Architecture decisions, prioritization, knowing when the AI was headed down a dead end: that was all me. When you give AI good direction, good context, and good guardrails, it can be very productive. But the human still has to steer.</p><p>For browser-level testing, I used <a href="https://github.com/vercel-labs/agent-browser"><u>agent-browser</u></a> to verify actual rendered output, client-side navigation, and hydration behavior. Unit tests miss a lot of subtle browser issues. This caught them.</p><p>Over the course of the project, we ran over 800 sessions in OpenCode. Total cost: roughly $1,100 in Claude API tokens.</p>
    <div>
      <h2>What this means for software</h2>
      <a href="#what-this-means-for-software">
        
      </a>
    </div>
    <p>Why do we have so many layers in the stack? This project forced me to think deeply about this question. And to consider how AI impacts the answer.</p><p>Most abstractions in software exist because humans need help. We couldn't hold the whole system in our heads, so we built layers to manage the complexity for us. Each layer made the next person's job easier. That's how you end up with frameworks on top of frameworks, wrapper libraries, thousands of lines of glue code.</p><p>AI doesn't have the same limitation. It can hold the whole system in context and just write the code. It doesn't need an intermediate framework to stay organized. It just needs a spec and a foundation to build on.</p><p>It's not clear yet which abstractions are truly foundational and which ones were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point. We took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we've built up over the years aren't all going to make it.</p>
    <div>
      <h2>Acknowledgments</h2>
      <a href="#acknowledgments">
        
      </a>
    </div>
    <p>Thanks to the Vite team. <a href="https://vite.dev/"><u>Vite</u></a> is the foundation this whole thing stands on. <a href="https://github.com/vitejs/vite-plugin-rsc"><code><u>@vitejs/plugin-rsc</u></code></a> is still early days, but it gave me RSC support without having to build that from scratch, which would have been a dealbreaker. The Vite maintainers were responsive and helpful as I pushed the plugin into territory it hadn't been tested in before.</p><p>We also want to acknowledge the <a href="https://nextjs.org/"><u>Next.js</u></a> team. They've spent years building a framework that raised the bar for what React development could look like. The fact that their API surface is so well-documented and their test suite so comprehensive is a big part of what made this project possible. vinext wouldn't exist without the standard they set.</p>
    <div>
      <h2>Try it</h2>
      <a href="#try-it">
        
      </a>
    </div>
    <p>vinext includes an <a href="https://agentskills.io"><u>Agent Skill</u></a> that handles migration for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI coding tools. Install it, open your Next.js project, and tell the AI to migrate:</p>
            <pre><code>npx skills add cloudflare/vinext</code></pre>
            <p>Then open your Next.js project in any supported tool and say:</p>
            <pre><code>migrate this project to vinext</code></pre>
            <p>The skill handles compatibility checking, dependency installation, config generation, and dev server startup. It knows what vinext supports and will flag anything that needs manual attention.</p><p>Or if you prefer doing it by hand:</p>
            <pre><code>npx vinext init    # Migrate an existing Next.js project
npx vinext dev     # Start the dev server
npx vinext deploy  # Ship to Cloudflare Workers</code></pre>
            <p>The source is at <a href="https://github.com/cloudflare/vinext"><u>github.com/cloudflare/vinext</u></a>. Issues, PRs, and feedback are welcome.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">2w61xT0J7H7ECzhiABytS</guid>
            <dc:creator>Steve Faulkner</dc:creator>
        </item>
        <item>
            <title><![CDATA[Code Mode: give agents an entire API in 1,000 tokens]]></title>
            <link>https://blog.cloudflare.com/code-mode-mcp/</link>
            <pubDate>Fri, 20 Feb 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare API has over 2,500 endpoints. Exposing each one as an MCP tool would consume over 2 million tokens. With Code Mode, we collapsed all of it into two tools and roughly 1,000 tokens of context. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>Model Context Protocol (MCP)</u></a> has become the standard way for AI agents to use external tools. But there is a tension at its core: agents need many tools to do useful work, yet every tool added fills the model's context window, leaving less room for the actual task. </p><p><a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a> is a technique we first introduced for reducing context window usage during agent tool use. Instead of describing every operation as a separate tool, let the model write code against a typed SDK and execute the code safely in a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Dynamic Worker Loader</u></a>. The code acts as a compact plan. The model can explore tool operations, compose multiple calls, and return just the data it needs. Anthropic independently explored the same pattern in their <a href="https://www.anthropic.com/engineering/code-execution-with-mcp"><u>Code Execution with MCP</u></a> post.</p><p>Today we are introducing <a href="https://github.com/cloudflare/mcp"><u>a new MCP server</u></a> for the <a href="https://developers.cloudflare.com/api/"><u>entire Cloudflare API</u></a> — from <a href="https://developers.cloudflare.com/dns/"><u>DNS</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Zero Trust</u></a> to <a href="https://workers.cloudflare.com/product/workers/"><u>Workers</u></a> and <a href="https://workers.cloudflare.com/product/r2/"><u>R2</u></a> — that uses Code Mode. With just two tools, search() and execute(), the server is able to provide access to the entire Cloudflare API over MCP, while consuming only around 1,000 tokens. The footprint stays fixed, no matter how many API endpoints exist.</p><p>For a large API like the Cloudflare API, Code Mode reduces the number of input tokens used by 99.9%. An equivalent MCP server without Code Mode would consume 1.17 million tokens — more than the entire context window of the most advanced foundation models.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KqjQiI09KubtUSe9Dgf0N/6f37896084c7f34abca7dc36ab18d8e0/image2.png" />
          </figure><p><sup><i>Code Mode savings vs native MCP, measured with </i></sup><a href="https://github.com/openai/tiktoken"><sup><i><u>tiktoken</u></i></sup></a></p><p>You can start using this new Cloudflare MCP server today. And we are also open-sourcing a new <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><u>Code Mode SDK</u></a> in the <a href="https://github.com/cloudflare/agents"><u>Cloudflare Agents SDK</u></a>, so you can use the same approach in your own MCP servers and AI Agents.</p>
    <div>
      <h3>Server‑side Code Mode</h3>
      <a href="#server-side-code-mode">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ir1KOZHIjVNyqdC9FSuZs/334456a711fb2b5fa612b3fc0b4adc48/images_BLOG-3184_2.png" />
          </figure><p>This new MCP server applies Code Mode server-side. Instead of thousands of tools, the server exports just two: <code>search()</code> and <code>execute()</code>. Both are powered by Code Mode. Here is the full tool surface area that gets loaded into the model context:</p>
            <pre><code>[
  {
    "name": "search",
    "description": "Search the Cloudflare OpenAPI spec. All $refs are pre-resolved inline.",
    "inputSchema": {
      "type": "object",
      "properties": {
        "code": {
          "type": "string",
          "description": "JavaScript async arrow function to search the OpenAPI spec"
        }
      },
      "required": ["code"]
    }
  },
  {
    "name": "execute",
    "description": "Execute JavaScript code against the Cloudflare API.",
    "inputSchema": {
      "type": "object",
      "properties": {
        "code": {
          "type": "string",
          "description": "JavaScript async arrow function to execute"
        }
      },
      "required": ["code"]
    }
  }
]
</code></pre>
            <p>To discover what it can do, the agent calls <code>search()</code>. It writes JavaScript against a typed representation of the OpenAPI spec. The agent can filter endpoints by product, path, tags, or any other metadata and narrow thousands of endpoints to the handful it needs. The full OpenAPI spec never enters the model context. The agent only interacts with it through code.</p><p>When the agent is ready to act, it calls <code>execute()</code>. The agent writes code that can make Cloudflare API requests, handle pagination, check responses, and chain operations together in a single execution.</p><p>Both tools run the generated code inside a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Dynamic Worker</u></a> isolate — a lightweight V8 sandbox with no file system, no environment variables to leak through prompt injection, and external fetches disabled by default. Outbound requests can be explicitly controlled with outbound fetch handlers when needed.</p>
    <div>
      <h4>Example: Protecting an origin from DDoS attacks</h4>
      <a href="#example-protecting-an-origin-from-ddos-attacks">
        
      </a>
    </div>
    <p>Suppose a user tells their agent: "protect my origin from DDoS attacks." The agent's first step is to consult documentation. It might call the <a href="https://developers.cloudflare.com/agents/model-context-protocol/mcp-servers-for-cloudflare/"><u>Cloudflare Docs MCP Server</u></a>, use a <a href="https://github.com/cloudflare/skills"><u>Cloudflare Skill</u></a>, or search the web directly. From the docs it learns: put <a href="https://www.cloudflare.com/application-services/products/waf/"><u>Cloudflare WAF</u></a> and <a href="https://www.cloudflare.com/ddos/"><u>DDoS protection</u></a> rules in front of the origin.</p><p><b>Step 1: Search for the right endpoints
</b>The <code>search</code> tool gives the model a <code>spec</code> object: the full Cloudflare OpenAPI spec with all <code>$refs</code> pre-resolved. The model writes JavaScript against it. Here the agent looks for WAF and ruleset endpoints on a zone:</p>
            <pre><code>async () =&gt; {
  const results = [];
  for (const [path, methods] of Object.entries(spec.paths)) {
    if (path.includes('/zones/') &amp;&amp;
        (path.includes('firewall/waf') || path.includes('rulesets'))) {
      for (const [method, op] of Object.entries(methods)) {
        results.push({ method: method.toUpperCase(), path, summary: op.summary });
      }
    }
  }
  return results;
}
</code></pre>
            <p>The server runs this code in a Workers isolate and returns:</p>
            <pre><code>[
  { "method": "GET",    "path": "/zones/{zone_id}/firewall/waf/packages",              "summary": "List WAF packages" },
  { "method": "PATCH",  "path": "/zones/{zone_id}/firewall/waf/packages/{package_id}", "summary": "Update a WAF package" },
  { "method": "GET",    "path": "/zones/{zone_id}/firewall/waf/packages/{package_id}/rules", "summary": "List WAF rules" },
  { "method": "PATCH",  "path": "/zones/{zone_id}/firewall/waf/packages/{package_id}/rules/{rule_id}", "summary": "Update a WAF rule" },
  { "method": "GET",    "path": "/zones/{zone_id}/rulesets",                           "summary": "List zone rulesets" },
  { "method": "POST",   "path": "/zones/{zone_id}/rulesets",                           "summary": "Create a zone ruleset" },
  { "method": "GET",    "path": "/zones/{zone_id}/rulesets/phases/{ruleset_phase}/entrypoint", "summary": "Get a zone entry point ruleset" },
  { "method": "PUT",    "path": "/zones/{zone_id}/rulesets/phases/{ruleset_phase}/entrypoint", "summary": "Update a zone entry point ruleset" },
  { "method": "POST",   "path": "/zones/{zone_id}/rulesets/{ruleset_id}/rules",        "summary": "Create a zone ruleset rule" },
  { "method": "PATCH",  "path": "/zones/{zone_id}/rulesets/{ruleset_id}/rules/{rule_id}", "summary": "Update a zone ruleset rule" }
]
</code></pre>
            <p>The full Cloudflare API spec has over 2,500 endpoints. The model narrowed that to the WAF and ruleset endpoints it needs, without any of the spec entering the context window. </p><p>The model can also drill into a specific endpoint's schema before calling it. Here it inspects what phases are available on zone rulesets:</p>
            <pre><code>async () =&gt; {
  const op = spec.paths['/zones/{zone_id}/rulesets']?.get;
  const items = op?.responses?.['200']?.content?.['application/json']?.schema;
  // Walk the schema to find the phase enum
  const props = items?.allOf?.[1]?.properties?.result?.items?.allOf?.[1]?.properties;
  return { phases: props?.phase?.enum };
}

{
  "phases": [
    "ddos_l4", "ddos_l7",
    "http_request_firewall_custom", "http_request_firewall_managed",
    "http_response_firewall_managed", "http_ratelimit",
    "http_request_redirect", "http_request_transform",
    "magic_transit", "magic_transit_managed"
  ]
}
</code></pre>
            <p>The agent now knows the exact phases it needs: <code>ddos_l7</code> for DDoS protection and <code>http_request_firewall_managed</code> for WAF.</p><p><b>Step 2: Act on the API
</b>The agent switches to using <code>execute</code>. The sandbox gets a <code>cloudflare.request()</code> client that can make authenticated calls to the Cloudflare API. First the agent checks what rulesets already exist on the zone:</p>
            <pre><code>async () =&gt; {
  const response = await cloudflare.request({
    method: "GET",
    path: `/zones/${zoneId}/rulesets`
  });
  return response.result.map(rs =&gt; ({
    name: rs.name, phase: rs.phase, kind: rs.kind
  }));
}

[
  { "name": "DDoS L7",          "phase": "ddos_l7",                        "kind": "managed" },
  { "name": "Cloudflare Managed","phase": "http_request_firewall_managed", "kind": "managed" },
  { "name": "Custom rules",     "phase": "http_request_firewall_custom",   "kind": "zone" }
]
</code></pre>
            <p>The agent sees that managed DDoS and WAF rulesets already exist. It can now chain calls to inspect their rules and update sensitivity levels in a single execution:</p>
            <pre><code>async () =&gt; {
  // Get the current DDoS L7 entrypoint ruleset
  const ddos = await cloudflare.request({
    method: "GET",
    path: `/zones/${zoneId}/rulesets/phases/ddos_l7/entrypoint`
  });

  // Get the WAF managed ruleset
  const waf = await cloudflare.request({
    method: "GET",
    path: `/zones/${zoneId}/rulesets/phases/http_request_firewall_managed/entrypoint`
  });
  // Return a compact summary so only the relevant fields re-enter model context
  return {
    ddos: { rules: ddos.result?.rules?.length },
    waf: { rules: waf.result?.rules?.length }
  };
}
</code></pre>
            <p>This entire operation, from searching the spec and inspecting a schema to listing rulesets and fetching DDoS and WAF configurations, took four tool calls.</p>
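            <p>Because <code>execute()</code> bodies are ordinary JavaScript, pagination folds into a single call as well. Here is a minimal sketch under two assumptions not shown above: that <code>cloudflare.request()</code> passes a query string through in <code>path</code>, and that responses carry the Cloudflare API's standard <code>result_info</code> paging fields:</p>
            <pre><code>async () =&gt; {
  // Collect every zone on the account, one page at a time
  const zones = [];
  let page = 1;
  while (true) {
    const resp = await cloudflare.request({
      method: "GET",
      path: `/zones?page=${page}&amp;per_page=50`
    });
    zones.push(...resp.result.map(z =&gt; ({ id: z.id, name: z.name })));
    if (page &gt;= (resp.result_info?.total_pages ?? 1)) break;
    page++;
  }
  return { count: zones.length, zones };
}</code></pre>
            <p>Without Code Mode, the same walk would be one tool round-trip per page, with each full response landing in the context window.</p>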
    <div>
      <h3>The Cloudflare MCP server</h3>
      <a href="#the-cloudflare-mcp-server">
        
      </a>
    </div>
    <p>We started with MCP servers for individual products. Want an agent that manages DNS? Add the <a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/dns-analytics"><u>DNS MCP server</u></a>. Want Workers logs? Add the <a href="https://developers.cloudflare.com/agents/model-context-protocol/mcp-servers-for-cloudflare/"><u>Workers Observability MCP server</u></a>. Each server exported a fixed set of tools that mapped to API operations. This worked when the tool set was small, but the Cloudflare API has over 2,500 endpoints. No collection of hand-maintained servers could keep up.</p><p>The Cloudflare MCP server simplifies this. Two tools, roughly 1,000 tokens, and coverage of every endpoint in the API. When we add new products, the same <code>search()</code> and <code>execute()</code> code paths discover and call them — no new tool definitions, no new MCP servers. It even has support for the <a href="https://developers.cloudflare.com/analytics/graphql-api/"><u>GraphQL Analytics API</u></a>.</p><p>Our MCP server is built on the latest MCP specifications. It is OAuth 2.1 compliant, using <a href="https://github.com/cloudflare/workers-oauth-provider"><u>Workers OAuth Provider</u></a> to downscope the token to selected permissions approved by the user when connecting. The agent  only gets the capabilities the user explicitly granted. </p><p>For developers, this means you can use a simple agent loop and still give your agent access to the full Cloudflare API with built-in progressive capability discovery.</p>
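    <p>As a sketch of how small that loop can be, the snippet below drives the two tools with the Anthropic Messages API. It is illustrative, not a prescribed integration: <code>callMcpTool()</code> is a stub for whatever MCP client your stack uses, and the <code>tools</code> argument is the two definitions shown earlier.</p>
            <pre><code>import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Stub: forward a tool call to the Cloudflare MCP server via your MCP client.
async function callMcpTool(name: string, input: unknown): Promise&lt;string&gt; {
  throw new Error("wire this up to your MCP client");
}

async function agentLoop(task: string, tools: Anthropic.Messages.Tool[]) {
  const messages: Anthropic.Messages.MessageParam[] = [{ role: "user", content: task }];
  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-5", max_tokens: 4096, tools, messages,
    });
    messages.push({ role: "assistant", content: response.content });
    if (response.stop_reason !== "tool_use") return response; // task finished

    // Run each requested tool call (search or execute) and feed results back
    const results = await Promise.all(
      response.content
        .filter((b): b is Anthropic.Messages.ToolUseBlock =&gt; b.type === "tool_use")
        .map(async (b) =&gt; ({
          type: "tool_result" as const,
          tool_use_id: b.id,
          content: await callMcpTool(b.name, b.input),
        }))
    );
    messages.push({ role: "user", content: results });
  }
}</code></pre>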
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/60ZoSFdK6t6hR6DpAn6Bub/93b86239cedb06d7fb265859be7590e8/images_BLOG-3184_4.png" />
          </figure>
    <div>
      <h3>Comparing approaches to context reduction</h3>
      <a href="#comparing-approaches-to-context-reduction">
        
      </a>
    </div>
    <p>Several approaches have emerged to reduce how many tokens MCP tools consume:</p><p><b>Client-side Code Mode</b> was our first experiment. The model writes TypeScript against typed SDKs and runs it in a Dynamic Worker Loader on the client. The tradeoff is that it requires the agent to ship with secure sandbox access. Code Mode is implemented in <a href="https://block.github.io/goose/blog/2025/12/15/code-mode-mcp/"><u>Goose</u></a> and in Anthropic's Claude SDK as <a href="https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling"><u>Programmatic Tool Calling</u></a>.</p><p><b>Command-line interfaces</b> are another path. CLIs are self-documenting and reveal capabilities as the agent explores. Tools like <a href="https://openclaw.ai/"><u>OpenClaw</u></a> and <a href="https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/"><u>Moltworker</u></a> convert MCP servers into CLIs using <a href="https://github.com/steipete/mcporter"><u>MCPorter</u></a> to give agents progressive disclosure. The limitation is obvious: the agent needs a shell, which not every environment provides and which introduces a much broader attack surface than a sandboxed isolate.</p><p><b>Dynamic tool search</b>, as used by <a href="https://x.com/trq212/status/2011523109871108570"><u>Anthropic in Claude Code</u></a>, surfaces a smaller set of tools that are hopefully relevant to the current task. It shrinks context use but now requires a search function that must be maintained and evaluated, and each matched tool still uses tokens.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FPxVAuJggv7A08DbPsksb/aacb9087a79d08a1430ea87bb6960ad3/images_BLOG-3184_5.png" />
          </figure><p>Each approach solves a real problem. But for MCP servers specifically, server-side Code Mode combines their strengths: fixed token cost regardless of API size, no modifications needed on the agent side, progressive discovery built in, and safe execution inside a sandboxed isolate. The agent just calls two tools with code. Everything else happens on the server.</p>
    <div>
      <h3>Get started today</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>The Cloudflare MCP server is available now. Point your MCP client at the server URL and you'll be redirected to Cloudflare to authorize and select the permissions to grant to your agent. Add this config to your MCP client: </p>
            <pre><code>{
  "mcpServers": {
    "cloudflare-api": {
      "url": "https://mcp.cloudflare.com/mcp"
    }
  }
}
</code></pre>
            <p>For CI/CD, automation, or if you prefer managing tokens yourself, create a Cloudflare API token with the permissions you need. Both user tokens and account tokens are supported and can be passed as bearer tokens in the <code>Authorization</code> header.</p><p>More information on different MCP setup configurations can be found at the <a href="https://github.com/cloudflare/mcp"><u>Cloudflare MCP repository</u></a>.</p>
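            <p>To sanity-check a token before wiring up a client, you can speak the protocol by hand. MCP's Streamable HTTP transport is JSON-RPC over POST, so a request like the following should get an <code>initialize</code> response back. This is a rough sketch: it skips the rest of the handshake a real client performs, and the request fields follow the MCP spec rather than anything Cloudflare-specific.</p>
            <pre><code>curl https://mcp.cloudflare.com/mcp \
  -X POST \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.1"}}}'</code></pre>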
    <div>
      <h3>Looking forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>Code Mode solves context costs for a single API. But agents rarely talk to one service. A developer's agent might need the Cloudflare API alongside GitHub, a database, and an internal docs server. Each additional MCP server brings the same context window pressure we started with.</p><p><a href="https://blog.cloudflare.com/zero-trust-mcp-server-portals/"><u>Cloudflare MCP Server Portals</u></a> let you compose multiple MCP servers behind a single gateway with unified auth and access control. We are building a first-class Code Mode integration that covers all your MCP servers, exposing them to agents with built-in progressive discovery and the same fixed-token footprint, regardless of how many services sit behind the gateway.</p>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Optimization]]></category>
            <category><![CDATA[Open Source]]></category>
            <guid isPermaLink="false">2lWwgP33VT0NJjZ3pWShsw</guid>
            <dc:creator>Matt Carey</dc:creator>
        </item>
        <item>
            <title><![CDATA[Astro is joining Cloudflare]]></title>
            <link>https://blog.cloudflare.com/astro-joins-cloudflare/</link>
            <pubDate>Fri, 16 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ The Astro Technology Company team — the creators of the Astro web framework — is joining Cloudflare. We’re doubling down on making Astro the best framework for content-driven websites, today and in the years to come. ]]></description>
            <content:encoded><![CDATA[ <p>The Astro Technology Company, creators of the Astro web framework, is joining Cloudflare.</p><p><a href="https://astro.build/"><u>Astro</u></a> is the web framework for building fast, content-driven websites. Over the past few years, we’ve seen an incredibly diverse range of developers and companies use Astro to build for the web. This ranges from established brands like Porsche and IKEA, to fast-growing AI companies like Opencode and OpenAI. Platforms that are built on Cloudflare, like <a href="https://webflow.com/feature/cloud"><u>Webflow Cloud</u></a> and <a href="https://vibe.wix.com/"><u>Wix Vibe</u></a>, have chosen Astro to power the websites their customers build and deploy to their own platforms. At Cloudflare, we use Astro, too — for our <a href="https://developers.cloudflare.com/"><u>developer docs</u></a>, <a href="https://workers.cloudflare.com/"><u>website</u></a>, <a href="https://sandbox.cloudflare.com/"><u>landing pages</u></a>, <a href="https://blog.cloudflare.com/"><u>blog</u></a>, and more. Astro is used almost everywhere there is content on the Internet. </p><p>By joining forces with the Astro team, we are doubling down on making Astro the best framework for content-driven websites for many years to come. The best version of Astro — <a href="https://github.com/withastro/astro/milestone/37"><u>Astro 6</u></a> —  is just around the corner, bringing a redesigned development server powered by Vite. The first public beta release of Astro 6 is <a href="https://github.com/withastro/astro/releases/tag/astro%406.0.0-beta.0"><u>now available</u></a>, with GA coming in the weeks ahead.</p><p>We are excited to share this news and even more thrilled for what it means for developers building with Astro. If you haven’t yet tried Astro — give it a spin and run <a href="https://docs.astro.build/en/getting-started/"><u>npm create astro@latest</u></a>.</p>
    <div>
      <h3>What this means for Astro</h3>
      <a href="#what-this-means-for-astro">
        
      </a>
    </div>
    <p>Astro will remain open source, MIT-licensed, and open to contributions, with a public roadmap and open governance. All full-time employees of The Astro Technology Company are now employees of Cloudflare, and will continue to work on Astro. We’re committed to Astro’s long-term success and eager to keep building.</p><p>Astro wouldn’t be what it is today without an incredibly strong community of open-source contributors. Cloudflare is also committed to continuing to support open-source contributions, via the <a href="https://astro.build/blog/astro-ecosystem-fund-update/"><u>Astro Ecosystem Fund</u></a>, alongside industry partners including Webflow, Netlify, Wix, Sentry, Stainless and many more.</p><p>From day one, Astro has been a bet on the web and portability: Astro is built to run anywhere, across clouds and platforms. Nothing changes about that. You can deploy Astro to any platform or cloud, and we’re committed to supporting Astro developers everywhere.</p>
    <div>
      <h3>There are many web frameworks out there — so why are developers choosing Astro?</h3>
      <a href="#there-are-many-web-frameworks-out-there-so-why-are-developers-choosing-astro">
        
      </a>
    </div>
    <p>Astro has been growing rapidly:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SiPDolNqvmfQmHftQAr2W/b0b0b0c6725203b945d83da9b190c443/BLOG-3112_2.png" />
          </figure><p>Why? Many web frameworks have come and gone trying to be everything to everyone, aiming to serve the needs of both content-driven websites and web applications.</p><p>The key to Astro’s success: Instead of trying to serve every use case, Astro has stayed focused on <a href="https://docs.astro.build/en/concepts/why-astro/#design-principles"><u>five design principles</u></a>. Astro is…</p><ul><li><p><b>Content-driven:</b> Astro was designed to showcase your content.</p></li><li><p><b>Server-first:</b> Websites run faster when they render HTML on the server.</p></li><li><p><b>Fast by default:</b> It should be impossible to build a slow website in Astro.</p></li><li><p><b>Easy to use:</b> You don’t need to be an expert to build something with Astro.</p></li><li><p><b>Developer-focused:</b> You should have the resources you need to be successful.</p></li></ul><p>Astro’s <a href="https://docs.astro.build/en/concepts/islands/"><u>Islands Architecture</u></a> is a core part of what makes all of this possible. The majority of each page can be fast, static HTML — fast and simple to build by default, oriented around rendering content. And when you need it, you can render a specific part of a page as a client island, using any client UI framework. You can even mix and match multiple frameworks on the same page, whether that’s React.js, Vue, Svelte, Solid, or anything else:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SjrMUpO9xZb0wxlATkrQo/16afe1efdb57da6b8b17cd804d94cfb2/BLOG-3112_3.png" />
          </figure>
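    <p>To make the pattern concrete, here is a minimal, hypothetical Astro page. Everything renders to static HTML on the server except the one component marked as a client island (<code>Counter.jsx</code> here stands in for any ordinary React component):</p>
            <pre><code>---
// src/pages/index.astro: static by default
import Counter from "../components/Counter.jsx";
---
&lt;html&gt;
  &lt;body&gt;
    &lt;h1&gt;My store&lt;/h1&gt;
    &lt;p&gt;This markup ships as plain HTML, with zero JavaScript.&lt;/p&gt;
    &lt;!-- Only this island hydrates in the browser --&gt;
    &lt;Counter client:load /&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre>
            <p>Swap <code>client:load</code> for <code>client:visible</code> or <code>client:idle</code> to defer hydration further; the rest of the page stays inert HTML either way.</p>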
    <div>
      <h3>Bringing back the joy in building websites</h3>
      <a href="#bringing-back-the-joy-in-building-websites">
        
      </a>
    </div>
    <p>The more Astro and Cloudflare started talking, the clearer it became how much we have in common. Cloudflare’s mission is to help build a better Internet — and part of that is to help build a <i>faster</i> Internet. Almost all of us grew up building websites, and we want a world where people have fun building things on the Internet, where anyone can publish to a site that is truly their own.</p><p>When Astro first <a href="https://astro.build/blog/introducing-astro/"><u>launched</u></a> in 2021, it had become painful to build great websites — it felt like a fight with build tools and frameworks. It sounds strange to say it, with the coding agents and powerful LLMs of 2026, but in 2021 it was very hard to build an excellent and fast website without being a domain expert in JavaScript build tooling. So much has gotten better, both because of Astro and in the broader frontend ecosystem, that we take this almost for granted today.</p><p>The Astro project has spent the past five years working to simplify web development. So as LLMs, then vibe coding, and now true coding agents have come along and made it possible for truly anyone to build — Astro provided a foundation that was simple and fast by default. We’ve all seen how much better and faster agents get when building off the right foundation, in a well-structured codebase. More and more, we’ve seen both builders and platforms choose Astro as that foundation.</p><p>We’ve seen this most clearly through the platforms that both Cloudflare and Astro serve, that extend Cloudflare to their own customers in creative ways using <a href="https://developers.cloudflare.com/cloudflare-for-platforms/"><u>Cloudflare for Platforms</u></a>, and have chosen Astro as the framework that their customers build on. </p><p>When you deploy to <a href="https://webflow.com/feature/cloud"><u>Webflow Cloud</u></a>, your Astro site just works and is deployed across Cloudflare’s network. When you start a new project with <a href="https://vibe.wix.com/"><u>Wix Vibe</u></a>, behind the scenes you’re creating an Astro site, running on Cloudflare. And when you generate a developer docs site using <a href="https://www.stainless.com/"><u>Stainless</u></a>, that generates an Astro project, running on Cloudflare, powered by <a href="https://astro.build/blog/stainless-astro-launch/"><u>Starlight</u></a> — a framework built on Astro.</p><p>Each of these platforms is built for a different audience. But what they have in common — beyond their use of Cloudflare and Astro — is they make it <i>fun</i> to create and publish content to the Internet. In a world where everyone can be both a builder and content creator, we think there are still so many more platforms to build and people to reach.</p>
    <div>
      <h3>Astro 6 — new local dev server, powered by Vite</h3>
      <a href="#astro-6-new-local-dev-server-powered-by-vite">
        
      </a>
    </div>
    <p>Astro 6 is coming, and the first open beta release is <a href="https://astro.build/blog/astro-6-beta/"><u>now available</u></a>. To be one of the first to try it out, run:</p><p><code>npm create astro@latest -- --ref next</code></p><p>Or to upgrade your existing Astro app, run:</p><p><code>npx @astrojs/upgrade beta</code></p><p>Astro 6 brings a brand new development server, built on the <a href="https://vite.dev/guide/api-environment"><u>Vite Environments API</u></a>, that runs your code locally using the same runtime that you deploy to. This means that when you run <code>astro dev</code> with the <a href="https://developers.cloudflare.com/workers/vite-plugin/"><u>Cloudflare Vite plugin</u></a>, your code runs in <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>, the open-source Cloudflare Workers runtime, and can use <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>, <a href="https://developers.cloudflare.com/kv/"><u>KV</u></a>, <a href="https://developers.cloudflare.com/agents/"><u>Agents</u></a> and <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>more</u></a>. This isn’t just a Cloudflare feature: Any JavaScript runtime with a plugin that uses the Vite Environments API can benefit from this new support, and ensure local dev runs in the same environment, with the same runtime APIs as production.</p>
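    <p>If you want to try that wiring today, the setup is roughly the following <code>astro.config.mjs</code>. This is a sketch based on the current <code>@astrojs/cloudflare</code> adapter; exact options may shift during the Astro 6 beta:</p>
            <pre><code>import { defineConfig } from "astro/config";
import cloudflare from "@astrojs/cloudflare";

export default defineConfig({
  // On-demand pages render in workerd locally and on Cloudflare in production
  adapter: cloudflare(),
});</code></pre>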
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YAgzaSkgUr3gxK5Mkh62V/09847d3f15744b6f049864a6e898a343/BLOG-3112_4.png" />
          </figure><p><a href="https://docs.astro.build/en/reference/experimental-flags/live-content-collections/"><u>Live Content Collections</u></a> in Astro are also stable in Astro 6 and out of beta. These content collections let you update data in real time, without requiring a rebuild of your site. This makes it easy to bring in content that changes often, such as the current inventory in a storefront, while still benefitting from the built-in validation and caching that come with Astro’s existing support for <a href="https://v6.docs.astro.build/en/guides/content-collections"><u>content collections</u></a>.</p><p>There’s more to Astro 6, including Astro’s most upvoted feature request — first-class support for Content Security Policy (CSP) — as well as simpler APIs, an upgrade to <a href="https://zod.dev/?id=introduction"><u>Zod</u></a> 4, and more.</p>
    <div>
      <h3>Doubling down on Astro</h3>
      <a href="#doubling-down-on-astro">
        
      </a>
    </div>
    <p>We're thrilled to welcome the Astro team to Cloudflare. We’re excited to keep building, keep shipping, and keep making Astro the best way to build content-driven sites. We’re already thinking about what comes next beyond V6, and we’d love to hear from you.</p><p>To keep up with the latest, follow the <a href="https://astro.build/blog/"><u>Astro blog</u></a> and join the <a href="https://astro.build/chat"><u>Astro Discord</u></a>. Tell us what you’re building!</p><p></p> ]]></content:encoded>
            <category><![CDATA[Acquisitions]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">6snDEFT5jgryV5wPhY4HEj</guid>
            <dc:creator>Fred Schott</dc:creator>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Replicate is joining Cloudflare]]></title>
            <link>https://blog.cloudflare.com/why-replicate-joining-cloudflare/</link>
            <pubDate>Mon, 01 Dec 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce that Replicate is officially part of Cloudflare. We wanted to share a bit about our journey and why we made this decision.  ]]></description>
            <content:encoded><![CDATA[ <p>We're happy to announce that as of today Replicate is officially part of Cloudflare.</p><p>When we started Replicate in 2019, OpenAI had just open-sourced GPT-2, and few people outside of the machine learning community paid much attention to AI. But for those of us in the field, it felt like something big was about to happen. Remarkable models were being created in academic labs, but you needed a metaphorical lab coat to be able to run them.</p><p>We made it our mission to get research models out of the lab and into the hands of developers. We wanted programmers to creatively bend and twist these models into products that the researchers would never have thought of.</p><p>We approached this as a tooling problem. Just as tools like Heroku made it possible to run websites without managing web servers, we wanted to build tools for running models without having to understand backpropagation or deal with CUDA errors.</p><p>The first tool we built was <a href="https://github.com/replicate/cog"><u>Cog</u></a>: a standard packaging format for machine learning models. Then we built <a href="https://replicate.com/"><u>Replicate</u></a> as the platform to run Cog models as API endpoints in the cloud. We abstracted away both the low-level machine learning and the complicated GPU cluster management you need to run inference at scale.</p><p>It turns out the timing was just right. When <a href="https://replicate.com/stability-ai/stable-diffusion"><u>Stable Diffusion</u></a> was released in 2022, we had mature infrastructure that could handle the massive developer interest in running these models. A ton of fantastic apps and products were built on Replicate, apps that often ran a single model packaged in a slick UI to solve a particular use case.</p><p>Since then, <a href="https://www.latent.space/p/ai-engineer"><i><u>AI Engineering</u></i></a> has matured into a serious craft. AI apps are no longer just about running models. The modern AI stack has model inference, but also microservices, content delivery, <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>, caching, databases, telemetry, etc. We see many of our customers building complex heterogeneous stacks where the Replicate models are one part of a higher-order system across several platforms.</p><p><i>This is why we’re joining Cloudflare</i>. Replicate has the tools and primitives for running models. Cloudflare has the best network, Workers, <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a>, Durable Objects, and all the other primitives you need to build a full AI stack.</p><p>The AI stack lives entirely on the network. Models run on data center GPUs and are glued together by small cloud functions that call out to vector databases, fetch objects from blob storage, call MCP servers, etc. “<a href="https://blog.cloudflare.com/the-network-is-the-computer/"><u>The network is the computer</u></a>” has never been more true.</p><p>At Cloudflare, we’ll now be able to build the AI infrastructure layer we have dreamed of since we started. We’ll be able to do things like run fast models on the edge, run model pipelines on instantly-booting Workers, stream model inputs and outputs with WebRTC, etc.</p><p>We’re proud of what we’ve built at Replicate. We were the first generative AI serving platform, and we defined the abstractions and design patterns that most of our peers have adopted. We’ve grown a wonderful community of builders and researchers around our product.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Acquisitions]]></category>
            <guid isPermaLink="false">3KkAemgmCHkd0D9QW5qTnz</guid>
            <dc:creator>Andreas Jansson</dc:creator>
            <dc:creator>Ben Firshman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Partnering with Black Forest Labs to bring FLUX.2 [dev] to Workers AI]]></title>
            <link>https://blog.cloudflare.com/flux-2-workers-ai/</link>
            <pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[ FLUX.2 [dev] by Black Forest Labs is now on Workers AI! This advanced open-weight image model offers superior photorealism, multi-reference inputs, and granular control with JSON prompting. ]]></description>
            <content:encoded><![CDATA[ <p>In recent months, we’ve seen a leap forward for closed-source image generation models with the rise of <a href="https://gemini.google/overview/image-generation/"><u>Google’s Nano Banana</u></a> and <a href="https://openai.com/index/image-generation-api/"><u>OpenAI image generation models</u></a>. Today, we’re happy to share that a new open-weight contender has arrived with the launch of Black Forest Labs’ FLUX.2 [dev], now available to run on Cloudflare’s inference platform, Workers AI. You can read more about the model in BFL’s launch announcement <a href="https://bfl.ai/blog/flux-2"><u>here</u></a>. </p><p>We have been huge fans of Black Forest Labs’ FLUX image models since their earliest versions. Our hosted version of FLUX.1 [schnell] is one of the most popular models in our catalog for its photorealistic outputs and high-fidelity generations. When the time came to host the licensed version of their new model, we jumped at the opportunity. The FLUX.2 model takes all the best features of FLUX.1 and amps them up, generating even more realistic, grounded images with added customization support like JSON prompting.</p><p>Our Workers AI hosted version of FLUX.2 has some specific patterns: it uses multipart form data to support input images (up to four 512x512 images) and can output images up to 4 megapixels. The multipart form data format allows users to send us multiple image inputs alongside the typical model parameters. Check out our <a href="https://developers.cloudflare.com/changelog/2025-11-25-flux-2-dev-workers-ai/"><u>developer docs changelog announcement</u></a> to understand how to use the FLUX.2 model.</p>
    <div>
      <h2>What makes FLUX.2 special? Physical world grounding, digital world assets, and multi-language support</h2>
      <a href="#what-makes-flux-2-special-physical-world-grounding-digital-world-assets-and-multi-language-support">
        
      </a>
    </div>
    <p>The FLUX.2 model has a more robust understanding of the physical world, allowing you to turn abstract concepts into photorealistic reality. It excels at generating realistic image details and consistently delivers accurate hands, faces, fabrics, logos, and small objects that are often missed by other models. Its knowledge of the physical world also generates life-like lighting, angles and depth perception.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3tOCj8UT98MbcvlXFe8EMl/ad6d94d8b4713453dcd2455a9a3ad331/image3.png" />
          </figure><p><sup>Figure 1. Image generated with FLUX.2 featuring accurate lighting, shadows, reflections and depth perception at a café in Paris.</sup></p><p>This high-fidelity output makes it ideal for applications requiring superior image quality, such as creative photography, e-commerce product shots, marketing visuals, and interior design. Because it can understand context, tone, and trends, the model allows you to create engaging and editorial-quality digital assets from short prompts.</p><p>Aside from the physical world, the model is also able to generate high-quality digital assets such as landing page designs or detailed infographics (see below for an example). It’s also able to understand multiple languages naturally, so by combining these two features we can get a beautiful landing page in French from a French prompt.</p>
            <pre><code>Générer une page web visuellement immersive pour un service de promenade de chiens. L'image principale doit dominer l'écran, montrant un chien exubérant courant dans un parc ensoleillé, avec des touches de vert vif (#2ECC71) intégrées subtilement dans le feuillage ou les accessoires du chien. Minimiser le texte pour un impact visuel maximal.</code></pre>
            <p><sup><i>In English: “Generate a visually immersive web page for a dog-walking service. The main image should dominate the screen, showing an exuberant dog running through a sunny park, with touches of bright green (#2ECC71) subtly integrated into the foliage or the dog’s accessories. Minimize text for maximum visual impact.”</i></sup></p>
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3C9EEp5jsISYMrOsC4NKb/12e0630b51334feb02a5be805e767d08/image8.png" />
          </figure>
    <div>
      <h2>Character consistency – solving for stochastic drift</h2>
      <a href="#character-consistency-solving-for-stochastic-drift">
        
      </a>
    </div>
    <p>FLUX.2 offers multi-reference editing with state-of-the-art character consistency, ensuring identities, products, and styles remain consistent across tasks. In the world of generative AI, getting a high-quality image is easy. However, getting the <i>exact same</i> character or product twice has always been the hard part. The culprit is a phenomenon known as "stochastic drift", where generated images drift away from the original source material.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58T8pavXCsWneWxEDfKgte/a821df53fa3a86d577dd72e3c285fe8a/image9.png" />
          </figure><p><sup>Figure 2. Stochastic drift infographic (generated on FLUX.2)</sup></p><p>One of FLUX.2’s breakthroughs is support for multi-reference image inputs, designed to solve this consistency challenge. You’ll have the ability to change the background, lighting, or pose of an image without accidentally changing the face of your model or the design of your product. You can also reference other images or combine multiple images together to create something new.</p><p>In code, Workers AI supports multi-reference images (up to 4) with a multipart form-data upload. The image inputs are binary files, and the output is a base64-encoded image:</p>
            <pre><code>curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-dev' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: multipart/form-data' \
  --form 'prompt=take the subject of image 2 and style it like image 1' \
  --form input_image_0=@/Users/johndoe/Desktop/icedoutkeanu.png \
  --form input_image_1=@/Users/johndoe/Desktop/me.png \
  --form steps=25 \
  --form width=1024 \
  --form height=1024</code></pre>
            <p>We also support this through the Workers AI Binding:</p>
            <pre><code>const image = await fetch("http://image-url");
const form = new FormData();
 
// Buffer the fetched bytes into a typed Blob for the multipart upload
const image_blob = new Blob([await image.arrayBuffer()], { type: "image/png" });
form.append('input_image_0', image_blob)
form.append('prompt', 'a sunset with the dog in the original image')
 
const resp = await env.AI.run("@cf/black-forest-labs/flux-2-dev", {
    multipart: {
        body: form,
        contentType: "multipart/form-data"
    }
})</code></pre>
            
    <div>
      <h3>Built for real world use cases</h3>
      <a href="#built-for-real-world-use-cases">
        
      </a>
    </div>
    <p>The newest image model signifies a shift towards functional business use cases, moving beyond simple image quality improvements. FLUX.2 enables you to:</p><ul><li><p><b>Create Ad Variations:</b> Generate 50 different advertisements using the exact same actor, without their face morphing between frames.</p></li><li><p><b>Trust Your Product Shots:</b> Drop your product on a model, or into a beach scene, a city street, or a studio table. The environment changes, but your product stays accurate.</p></li><li><p><b>Build Dynamic Editorials:</b> Produce a full fashion spread where the model looks identical in every single shot, regardless of the angle.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Me4jErrBPSW7kAx8qecId/22b2c98a8661489ebe45188cb7947381/image6.png" />
          </figure><p><sup>Figure 3. Combining the oversized hoodie and sweatpants ad photo (generated with FLUX.2) with Cloudflare’s logo to create product renderings with consistent faces, fabrics, and scenery. </sup><sup><i>Note: we also prompted for white Cloudflare font instead of the original black font. </i></sup></p>
    <div>
      <h2>Granular controls — JSON prompting, HEX codes and more!</h2>
      <a href="#granular-controls-json-prompting-hex-codes-and-more">
        
      </a>
    </div>
    <p>The FLUX.2 model makes another advancement by giving users granular control over small details in images through JSON prompting and exact hex color codes.</p><p>For example, you could send this JSON as a prompt (as part of the multipart form input) and the resulting image follows the prompt exactly:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hnd2lQHC8kqKGbEFqdsgb/de72f15c9e3ede2fcfa717c013a4cba4/image4.jpg" />
          </figure>
            <pre><code>{
  "scene": "A bustling, neon-lit futuristic street market on an alien planet, rain slicking the metal ground",
  "subjects": [
    {
      "type": "Cyberpunk bounty hunter",
      "description": "Female, wearing black matte armor with glowing blue trim, holding a deactivated energy rifle, helmet under her arm, rain dripping off her synthetic hair",
      "pose": "Standing with a casual but watchful stance, leaning slightly against a glowing vendor stall",
      "position": "foreground"
    },
    {
      "type": "Merchant bot",
      "description": "Small, rusted, three-legged drone with multiple blinking red optical sensors, selling glowing synthetic fruit from a tray attached to its chassis",
      "pose": "Hovering slightly, offering an item to the viewer",
      "position": "midground"
    }
  ],
  "style": "noir sci-fi digital painting",
  "color_palette": [
    "deep indigo",
    "electric blue",
    "acid green"
  ],
  "lighting": "Low-key, dramatic, with primary light sources coming from neon signs and street lamps reflecting off wet surfaces",
  "mood": "Gritty, tense, and atmospheric",
  "background": "Towering, dark skyscrapers disappearing into the fog, with advertisements scrolling across their surfaces, flying vehicles (spinners) visible in the distance",
  "composition": "dynamic off-center",
  "camera": {
    "angle": "eye level",
    "distance": "medium close-up",
    "focus": "sharp on subject",
    "lens": "35mm",
    "f-number": "f/1.4",
    "ISO": 400
  },
  "effects": [
    "heavy rain effect",
    "subtle film grain",
    "neon light reflections",
    "mild chromatic aberration"
  ]
}</code></pre>
            <p>To take it further, we can ask the model to recolor the accent lighting to a Cloudflare orange by giving it a specific hex code like #F48120.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/79EL6Y3YGu8PqvWauHyqzh/29684aa0f4bb9b4306059e1634b5b94c/image1.jpg" />
          </figure>
    <div>
      <h2>Try it out today!</h2>
      <a href="#try-it-out-today">
        
      </a>
    </div>
    <p>The newest FLUX.2 [dev] model is now available on Workers AI — you can get started with the model through our <a href="http://developers.cloudflare.com/workers-ai/models/flux-2-dev"><u>developer docs</u></a> or test it out on our <a href="https://multi-modal.ai.cloudflare.com/"><u>multimodal playground.</u></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49KKiYwNbkrRaiDRruKCck/66cdcb3b41f8a87fd44a240e05bd851a/image2.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5lE1GkcjJWDeQq5696TdSs</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>David Liu</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Week 2025: Recap]]></title>
            <link>https://blog.cloudflare.com/ai-week-2025-wrapup/</link>
            <pubDate>Wed, 03 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ How do we embrace the power of AI without losing control? That was one of our big themes for AI Week 2025. Check out all of the products, partnerships, and features we announced. ]]></description>
            <content:encoded><![CDATA[ <p>How do we embrace the power of AI without losing control? </p><p>That was one of our big themes for AI Week 2025, which has now come to a close. We announced products, partnerships, and features to help companies successfully navigate this new era.</p><p>Everything we built was based on feedback from customers like you who want to get the most out of AI without sacrificing control and safety. Over the next year, we will double down on our efforts to deliver world-class features that augment and secure AI. Please keep an eye on our Blog, AI Avenue, Product Change Log, and Cloudflare TV for more announcements.</p><p>This week we focused on four core areas to help companies secure and deliver AI experiences:</p><ul><li><p><b>Securing AI environments and workflows</b></p></li><li><p><b>Protecting original content from misuse by AI</b></p></li><li><p><b>Helping developers build world-class, secure AI experiences</b></p></li><li><p><b>Making Cloudflare better for you with AI</b></p></li></ul><p>Thank you for following along with our first-ever AI Week at Cloudflare. This recap blog summarizes each announcement across these four core areas. For more information, check out our “<a href="http://thisweekinnet.com"><u>This Week in NET</u></a>” recap episode, also featured at the end of this blog.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JQHvkcThqyE3f21FjM59I/20e41ab0d3c4aaecbedc6d51b5c1f9f8/BLOG-2933_2.png" />
          </figure>
    <div>
      <h2>Securing AI environments and workflows</h2>
      <a href="#securing-ai-environments-and-workflows">
        
      </a>
    </div>
    <p>These posts and features focused on helping companies control and understand their employees’ usage of AI tools.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-prompt-protection/">Beyond the ban: A better way to secure generative AI applications</a></p></td><td><p>Generative AI tools present a trade-off of productivity and data risk. Cloudflare One’s new AI prompt protection feature provides the visibility and control needed to govern these tools, allowing organizations to confidently embrace AI.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/shadow-AI-analytics/">Unmasking the Unseen: Your Guide to Taming Shadow AI with Cloudflare One</a></p></td><td><p>Don't let "Shadow AI" silently leak your data to unsanctioned AI. This new threat requires a new defense. Learn how to gain visibility and control without sacrificing innovation.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/confidence-score-rubric/">Introducing Cloudflare Application Confidence Score For AI Applications</a></p></td><td><p>Cloudflare will provide confidence scores within our application library for Gen AI applications, allowing customers to assess their risk for employees using shadow IT.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/casb-ai-integrations/">ChatGPT, Claude, &amp; Gemini security scanning with Cloudflare CASB</a></p></td><td><p>Cloudflare CASB now scans ChatGPT, Claude, and Gemini for misconfigurations, sensitive data exposure, and compliance issues, helping organizations adopt AI with confidence.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/zero-trust-mcp-server-portals/">Securing the AI Revolution: Introducing Cloudflare MCP Server Portals</a></p></td><td><p>Cloudflare MCP Server Portals are now available in Open Beta. MCP Server Portals are a new capability that enables you to centralize, secure, and observe every MCP connection in your organization.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/best-practices-sase-for-ai/">Best Practices for Securing Generative AI with SASE</a></p></td><td><p>This guide provides best practices for Security and IT leaders to securely adopt generative AI using Cloudflare’s SASE architecture as part of a strategy for AI Security Posture Management (AI-SPM).</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3q82P48XrTFDEWKBiIWlVC/d9c1bfa96d7b170df2f66577767d1ecc/BLOG-2933_3.png" />
          </figure>
    <div>
      <h2>Protecting original content from misuse by AI</h2>
      <a href="#protecting-original-content-from-misuse-by-ai">
        
      </a>
    </div>
    <p>Cloudflare is committed to helping content creators control access to their original work. These announcements focused on analysis of what we’re currently seeing on the Internet with respect to AI bots and crawlers, along with significant improvements to our existing control features.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-crawler-traffic-by-purpose-and-industry/">A deeper look at AI crawlers: breaking down traffic by purpose and industry</a></p></td><td><p>We are extending AI-related insights on Cloudflare Radar with new industry-focused data and a breakdown of bot traffic by purpose, such as training or user action.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/signed-agents/">The age of agents: cryptographically recognizing agent traffic</a></p></td><td><p>Cloudflare now lets websites and bot creators use Web Bot Auth to segment agents from verified bots, making it easier for customers to allow or disallow the many types of user- and partner-directed agents.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/conversational-search-with-nlweb-and-autorag/">Make Your Website Conversational for People and Agents with NLWeb and AutoRAG</a></p></td><td><p>With NLWeb, an open project by Microsoft, and Cloudflare AutoRAG, conversational search is now a one-click setup for your website.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-ai-crawl-control/">The next step for content creators in working with AI bots: Introducing AI Crawl Control</a></p></td><td><p>Cloudflare launches AI Crawl Control (formerly AI Audit) and introduces easily customizable 402 HTTP responses.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/crawlers-click-ai-bots-training/">The crawl-to-click gap: Cloudflare data on AI bots, training, and referrals</a></p></td><td><p>By mid-2025, training drives nearly 80% of AI crawling, while referrals to publishers (especially from Google) are falling and crawl-to-refer ratios show AI consumes far more than it sends back.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2XxME3f6wr64laagnl7fMR/d6929874d74637eec7d0227de0c33211/BLOG-2933_4.png" />
          </figure>
    <div>
      <h2>Helping developers build world-class, secure, AI experiences</h2>
      <a href="#helping-developers-build-world-class-secure-ai-experiences">
        
      </a>
    </div>
    <p>At Cloudflare, we are committed to building the best platform for AI experiences, all with security by default.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-gateway-aug-2025-refresh/">AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint</a></p></td><td><p>AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/">How we built the most efficient inference engine for Cloudflare’s network</a></p></td><td><p>Infire is an LLM inference engine that employs a range of techniques to maximize resource utilization, allowing us to serve AI models more efficiently with better performance for Cloudflare workloads.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workers-ai-partner-models/">State-of-the-art image generation Leonardo models and text-to-speech Deepgram models now available in Workers AI</a></p></td><td><p>We're expanding Workers AI with new partner models from Leonardo.Ai and Deepgram. Start using state-of-the-art image generation models from Leonardo and real-time TTS and STT models from Deepgram.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/how-cloudflare-runs-more-ai-models-on-fewer-gpus/">How Cloudflare runs more AI models on fewer GPUs: A technical deep-dive</a></p></td><td><p>Cloudflare built an internal platform called Omni. This platform uses lightweight isolation and memory over-commitment to run multiple AI models on a single GPU.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/welcome-to-ai-avenue/">Cloudflare Launching AI Miniseries for Developers (and Everyone Else They Know)</a></p></td><td><p>In AI Avenue, we address people’s fears, show them the art of the possible, and highlight the positive human stories where AI is augmenting — not replacing — what people can do. And yes, we even let people touch AI themselves.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/">Block unsafe prompts targeting your LLM endpoints with Firewall for AI</a></p></td><td><p>Cloudflare's AI security suite now includes unsafe content moderation, integrated into the Application Security Suite via Firewall for AI.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflare-realtime-voice-ai/">Cloudflare is the best place to build realtime voice agents</a></p></td><td><p>Today, we're excited to announce new capabilities that make it easier than ever to build real-time, voice-enabled AI applications on Cloudflare's global network.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69qL26BPP68czkSiBGVkuM/2e916e61473354bff2806ac0d8a2517a/BLOG-2933_5.png" />
          </figure>
    <div>
      <h2>Making Cloudflare better for you with AI</h2>
      <a href="#making-cloudflare-better-for-you-with-ai">
        
      </a>
    </div>
    <p>Cloudflare logs and analytics can often present a needle-in-a-haystack challenge; AI helps surface and alert on issues that need attention or review. Instead of a human having to spend hours sifting and searching for an issue, they can focus on action and remediation while AI does the sifting.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/background-removal/">Evaluating image segmentation models for background removal for Images</a></p></td><td><p>An inside look at how the Images team compared dichotomous image segmentation models to identify and isolate subjects in an image from the background.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/automating-threat-analysis-and-response-with-cloudy/">Automating threat analysis and response with Cloudy</a></p></td><td><p>Cloudy now supercharges analytics investigations and Cloudforce One threat intelligence! Get instant insights from threat events and APIs on APTs, DDoS, cybercrime &amp; more - powered by Workers AI!</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudy-driven-email-security-summaries/">Cloudy Summarizations of Email Detections: Beta Announcement</a></p></td><td><p>We're now leveraging our internal LLM, Cloudy, to generate automated summaries within our Email Security product, helping SOC teams better understand what's happening within flagged messages.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/AI-troubleshoot-warp-and-network-connectivity-issues/">Troubleshooting network connectivity and performance with Cloudflare AI</a></p></td><td><p>Troubleshoot network connectivity issues by using Cloudflare AI to quickly self-diagnose and resolve WARP client and network issues.</p></td></tr></table><p>We thank you for following along this week — and please stay tuned for exciting announcements coming during Cloudflare’s 15th birthday week in September!</p><p>Check out the full video recap, featuring insights from Kenny Johnson and host João Tomé, in our special This Week in NET episode (<a href="https://thisweekinnet.com">ThisWeekinNET.com</a>) covering everything announced during AI Week 2025.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI WAF]]></category>
            <category><![CDATA[AI Bots]]></category>
            <guid isPermaLink="false">6l0AjZFdEn4hrKgQlWOYiB</guid>
            <dc:creator>Kenny Johnson</dc:creator>
            <dc:creator>James Allworth</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automating threat analysis and response with Cloudy ]]></title>
            <link>https://blog.cloudflare.com/automating-threat-analysis-and-response-with-cloudy/</link>
            <pubDate>Fri, 29 Aug 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ Cloudy now supercharges analytics investigations and Cloudforce One threat intelligence! Get instant insights from threat events and APIs on APTs, DDoS, cybercrime & more - powered by Workers AI. ]]></description>
            <content:encoded><![CDATA[ <p>Security professionals everywhere face a paradox: while more data provides the visibility needed to catch threats, it also makes it harder for humans to process it all and find what's important. When there’s a sudden spike in suspicious traffic, every second counts. But for many security teams — especially lean ones — it’s hard to quickly figure out what’s going on. Finding a root cause means diving into dashboards, filtering logs, and cross-referencing threat feeds. All the data tracking that has happened can be the very thing that slows you down — or worse yet, what buries the threat that you’re looking for. </p><p>Today, we’re excited to announce that we’ve solved that problem. We’ve integrated <a href="https://blog.cloudflare.com/introducing-ai-agent/"><u>Cloudy</u></a> — Cloudflare’s first <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agent</u></a> — with our security analytics functionality, and we’ve also built a new, conversational interface that Cloudflare users can use to ask questions, refine investigations, and get answers.  With these changes, Cloudy can now help Cloudflare users find the needle in the digital haystack, making security analysis faster and more accessible than ever before.  </p><p>Since Cloudy’s launch in March of this year, its adoption has been exciting to watch. Over <b>54,000</b> users have tried Cloudy for <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>custom rule</u></a> creation, and <b>31%</b> of them have deployed a rule suggested by the agent. For our log explainers in <a href="https://www.cloudflare.com/zero-trust/products/gateway/"><u>Cloudflare Gateway</u></a>, Cloudy has been loaded over <b>30,000 </b> times in just the last month, with <b>80%</b> of the feedback we received confirming the summaries were insightful. We are excited to empower our users to do even more.</p>
    <div>
      <h2>Talk to your traffic: a new conversational interface for faster RCA and mitigation</h2>
      <a href="#talk-to-your-traffic-a-new-conversational-interface-for-faster-rca-and-mitigation">
        
      </a>
    </div>
    <p>Security analytics dashboards are powerful, but they often require you to know exactly what you're looking for — and the right queries to get there. The new Cloudy chat interface changes this. It is designed for faster root cause analysis (RCA) of traffic anomalies, helping you get from “something’s wrong” to “here’s the fix” in minutes. You can now start with a broad question and narrow it down, just like you would with a human analyst.</p><p>For example, you can start an investigation by asking Cloudy to look into a recommendation from Security Analytics.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1P7YDzX9JoHmmKLPwGw0z8/aa3675b36492ea13e2cba4d1ba13dce4/image4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Nort6ZEZUUkYQc8PTiLgo/33a92121c4c161290f50e792d77c1e16/image1.png" />
          </figure><p>From there, you can ask follow-up questions to dig deeper:</p><ul><li><p>"Focus on login endpoints only."</p></li><li><p>"What are the top 5 IP addresses involved?"</p></li><li><p>"Are any of these IPs known to be malicious?"</p></li></ul><p>This is just the beginning of how Cloudy is transforming security. You can <a href="http://blog.cloudflare.com/cloudy-driven-email-security-summaries/"><u>read more</u></a> about how we’re using Cloudy to bring clarity to another critical security challenge: automating summaries of email detections. This is the same core mission — translating complex security data into clear, actionable insights — but applied to the constant stream of email threats that security teams face every day.</p>
    <div>
      <h2>Use Cloudy to understand, prioritize, and act on threats</h2>
      <a href="#use-cloudy-to-understand-prioritize-and-act-on-threats">
        
      </a>
    </div>
    <p>Analyzing your own logs is powerful — but it only shows part of the picture. What if Cloudy could look beyond your own data and into Cloudflare’s global network to identify emerging threats? This is where Cloudforce One's <a href="https://blog.cloudflare.com/threat-events-platform/"><u>Threat Events platform</u></a> comes in.</p><p>Cloudforce One translates the high-volume attack data observed on the Cloudflare network into real-time, attacker-attributed events relevant to your organization. This platform helps you track adversary activity at scale — including APT infrastructure, cybercrime groups, compromised devices, and volumetric DDoS activity. Threat events are detailed and context-rich, including interactive timelines and mappings to attacker TTPs, regions, and targeted verticals. </p><p>We have spent the last few months making Cloudy more powerful by integrating it with the Cloudforce One Threat Events platform. Cloudy can now offer contextual data about the threats we observe and mitigate across Cloudflare's global network, spanning everything from APT activity and residential proxies to ACH fraud, DDoS attacks, WAF exploits, cybercrime, and compromised devices. This integration empowers our users to quickly understand, prioritize, and act on <a href="https://www.cloudflare.com/learning/security/what-are-indicators-of-compromise/"><u>indicators of compromise (IOCs)</u></a> based on a vast ocean of real-time threat data. </p><p>Cloudy lets you query this global dataset in natural language and receive clear, concise answers. For example, imagine asking these questions and getting immediate, actionable answers:</p><ul><li><p>Who is targeting my industry vertical or country?</p></li><li><p>What are the most relevant indicators (IPs, JA3/4 hashes, ASNs, domains, URLs, SHA fingerprints) to block right now?</p></li><li><p>How has a specific adversary progressed across the cyber kill chain over time?</p></li><li><p>What novel threats might be used against my network next, and what insights do Cloudflare analysts have about them?</p></li></ul><p>Simply interact with Cloudy in the Cloudflare Dashboard &gt; Security Center &gt; Threat Intelligence, providing your queries in natural language. It can walk you from a single indicator (like an IP address or domain) to the specific threat event Cloudflare observed, and then pivot to other related data — other attacks, related threats, or even other activity from the same actor. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WE42KXmWzejXpk8CsG05h/2fe63d5f86fe78642a341d645844ab56/image2.png" />
          </figure><p>This cuts through the noise, so you can quickly understand an adversary's actions across the cyber kill chain and MITRE ATT&amp;CK framework, and then block attacks with precise, actionable intelligence. The threat events platform is like an evidence board on the wall that helps you understand threats; Cloudy is like your sidekick that will run down every lead.</p>
    <div>
      <h2>How it works: Agents SDK and Workers AI</h2>
      <a href="#how-it-works-agents-sdk-and-workers-ai">
        
      </a>
    </div>
    <p>Developing this advanced capability for Cloudy was a testament to the agility of Cloudflare's AI ecosystem. We leveraged our <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a> running on <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>. This allowed for rapid iteration and deployment, ensuring Cloudy could quickly grasp the nuances of threat intelligence and provide highly accurate, contextualized insights. The combination of our massive network telemetry, purpose-built LLM prompts, and the flexibility of Workers AI means Cloudy is not just fast, but also remarkably precise.</p><p>And a quick word on what we didn’t do when developing Cloudy: We did not train Cloudy on any Cloudflare customer data. Instead, Cloudy relies on models made publicly available through <a href="https://developers.cloudflare.com/workers-ai/models/"><u>Workers AI</u></a>. For more information on Cloudflare’s approach to responsible AI, please see <a href="https://www.cloudflare.com/trust-hub/responsible-ai/"><u>these FAQs</u></a>.</p>
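<p>To give a flavor of what building on this stack looks like, here is a minimal, hypothetical sketch of an agent built with the Agents SDK that calls a publicly available model from the Workers AI catalog. The class name, prompt, and request shape are illustrative assumptions, not Cloudy’s actual implementation:</p>
            <pre><code>import { Agent } from "agents";

// Hypothetical sketch: a bare-bones agent that answers a question with a
// Workers AI model. Cloudy's real prompts, tools, and data sources are
// far more sophisticated.
export class ThreatAnalystAgent extends Agent {
  async onRequest(request) {
    const { question } = await request.json();

    // Run inference on a model from the Workers AI catalog.
    const answer = await this.env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt: `You are a security analyst. Answer concisely: ${question}`,
    });

    return Response.json(answer);
  }
}
</code></pre>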
    <div>
      <h2>What's next for Cloudy</h2>
      <a href="#whats-next-for-cloudy">
        
      </a>
    </div>
    <p>This is just the next step in Cloudy’s journey. We're working on expanding Cloudy's abilities across the board. This includes intelligent debugging for WAF rules and deeper integrations with Alerts to give you more actionable, contextual notifications. At the same time, we are continuously enriching our threat events datasets and exploring ways for Cloudy to help you visualize complex attacker timelines, campaign overviews, and intricate attack graphs. Our goal remains the same: make Cloudy an indispensable partner in understanding and reacting to the security landscape.</p><p>The new chat interface is now available on all plans, and the threat intelligence capabilities are live for Cloudforce One customers. Learn more about Cloudforce One <a href="https://www.cloudflare.com/application-services/products/cloudforceone/"><u>here</u></a> and reach out for a <a href="https://www.cloudflare.com/plans/enterprise/contact/?utm_medium=referral&amp;utm_source=blog&amp;utm_campaign=2025-q3-acq-gbl-connectivity-ge-ge-general-ai_week_blog"><u>consultation</u></a> if you want to go deeper with our experts.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Cloudy]]></category>
            <category><![CDATA[Cloudforce One]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">26RGd07uODP8AQ5WaxcjnF</guid>
            <dc:creator>Alexandra Moraru</dc:creator>
            <dc:creator>Harsh Saxena</dc:creator>
            <dc:creator>Steve James</dc:creator>
            <dc:creator>Nick Downie</dc:creator>
            <dc:creator>Levi Kipke</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built the most efficient inference engine for Cloudflare’s network ]]></title>
            <link>https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/</link>
            <pubDate>Wed, 27 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Infire is an LLM inference engine that employs a range of techniques to maximize resource utilization, allowing us to serve AI models more efficiently with better performance for Cloudflare workloads. ]]></description>
            <content:encoded><![CDATA[ <p>Inference powers some of today’s most powerful AI products: chat bot replies, <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agents</u></a>, autonomous vehicle decisions, and fraud detection. The problem is, if you’re building one of these products on top of a hyperscaler, you’ll likely need to rent expensive GPUs from large centralized data centers to run your inference tasks. That model doesn’t work for Cloudflare — there’s a mismatch between Cloudflare’s globally-distributed network and a typical centralized AI deployment using large multi-GPU nodes. As a company that operates our own compute on a lean, fast, and widely distributed network within 50ms of 95% of the world’s Internet-connected population, we need to be running inference tasks more efficiently than anywhere else.</p><p>This is further compounded by the fact that AI models are getting larger and more complex. As we started to support these models, like the Llama 4 herd and gpt-oss, we realized that we couldn’t just throw money at the scaling problems by buying more GPUs. We needed to utilize every bit of idle capacity and be agile with where each model is deployed. </p><p>After running most of our models on the widely used open source inference and serving engine <a href="https://github.com/vllm-project/vllm"><u>vLLM</u></a>, we figured out it didn’t allow us to fully utilize the GPUs at the edge. Although it can run on a very wide range of hardware, from personal devices to data centers, it is best optimized for large data centers. When run as a dedicated inference server on powerful hardware serving a specific model, vLLM truly shines. However, it is much less optimized for dynamic workloads, distributed networks, and for the unique security constraints of running inference at the edge alongside other services.</p><p>That’s why we decided to build something that will be able to meet the needs of Cloudflare inference workloads for years to come. Infire is an LLM inference engine, written in Rust, that employs a range of techniques to maximize memory, network I/O, and GPU utilization. It can serve more requests with fewer GPUs and significantly lower CPU overhead, saving time, resources, and energy across our network. </p><p>Our initial benchmarking has shown that Infire completes inference tasks up to 7% faster than vLLM 0.10.0 on unloaded machines equipped with an H100 NVL GPU. On infrastructure under real load, it performs significantly better. </p><p>Currently, Infire is powering the Llama 3.1 8B model for <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, and you can test it out today at <a href="https://developers.cloudflare.com/workers-ai/models/llama-3.1-8b-instruct-fast/"><u>@cf/meta/llama-3.1-8b-instruct</u></a>!</p>
    <div>
      <h2>The Architectural Challenge of LLM Inference at Cloudflare </h2>
      <a href="#the-architectural-challenge-of-llm-inference-at-cloudflare">
        
      </a>
    </div>
    <p>Thanks to industry efforts, inference has improved a lot over the past few years. vLLM has led the way here with the recent release of the vLLM V1 engine with features like an optimized KV cache, improved batching, and the implementation of Flash Attention 3. vLLM is great for most inference workloads — we’re currently using it for several of the models in our <a href="https://developers.cloudflare.com/workers-ai/models/"><u>Workers AI catalog</u></a> — but as our AI workloads and catalog have grown, so has our need to optimize inference for the exact hardware and performance requirements we have. </p><p>Cloudflare is writing much of our <a href="https://blog.cloudflare.com/rust-nginx-module/"><u>new infrastructure in Rust</u></a>, and vLLM is written in Python. Although Python has proven to be a great language for prototyping ML workloads, to maximize efficiency we need to control the low-level implementation details. Implementing low-level optimizations through multiple abstraction layers and Python libraries adds unnecessary complexity and leaves a lot of CPU performance on the table, simply due to the inefficiencies of Python as an interpreted language.</p><p>We love to contribute to open-source projects that we use, but in this case our priorities may not fit the goals of the vLLM project, so we chose to write a server for our needs. For example, vLLM does not support co-hosting multiple models on the same GPU without using Multi-Instance GPU (MIG), and we need to be able to dynamically schedule multiple models on the same GPU to minimize downtime. We also have an in-house AI Research team exploring unique features that are difficult, if not impossible, to upstream to vLLM. </p><p>Finally, running code securely is our top priority across our platform and <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> is no exception. We simply can’t trust a third-party Python process to run on our edge nodes alongside the rest of our services without strong sandboxing. We are therefore forced to run vLLM via <a href="https://gvisor.dev"><u>gvisor</u></a>. Having an extra virtualization layer adds another performance overhead to vLLM. More importantly, it also increases the startup and teardown times for vLLM instances — which are already pretty long. Under full load on our edge nodes, vLLM running via gvisor consumes as much as 2.5 CPU cores, and is forced to compete for CPU time with other crucial services, which in turn slows vLLM down and lowers GPU utilization.</p><p>While developing Infire, we’ve been incorporating the latest research in inference efficiency — let’s take a deeper look at what we actually built.</p>
    <div>
      <h2>How Infire works under the hood </h2>
      <a href="#how-infire-works-under-the-hood">
        
      </a>
    </div>
    <p>Infire is composed of three major components: an OpenAI-compatible HTTP server, a batcher, and the Infire engine itself.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BypYSG9QFsPjPFhjlOEsa/6ef5d4ccaabcd96da03116b7a14e8439/image2.png" />
          </figure><p><i><sup>An overview of Infire’s architecture </sup></i></p>
    <div>
      <h2>Platform startup</h2>
      <a href="#platform-startup">
        
      </a>
    </div>
    <p>When a model is first scheduled to run on a specific node in one of our data centers by our auto-scaling service, the first thing that has to happen is for the model weights to be fetched from our <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2 object storage</u></a>. Once the weights are downloaded, they are cached on the edge node for future reuse.</p><p>As the weights become available either from cache or from R2, Infire can begin loading the model onto the GPU. </p><p>Model sizes vary greatly, but most of them are <b>large</b>, so transferring them into GPU memory can be a time-consuming part of Infire’s startup process. For example, most non-quantized models store their weights in the BF16 floating point format. This format has the same dynamic range as the 32-bit floating point format, but with reduced accuracy. It is perfectly suited for inference, providing the sweet spot of size, performance, and accuracy. As the name suggests, the BF16 format requires 16 bits, or 2 bytes per weight. The approximate in-memory size of a given model is therefore two bytes per parameter. For example, Llama 3.1 8B has approximately 8B parameters, and its memory footprint is about 16 GB. A larger model, like Llama 4 Scout, has 109B parameters, and requires around 218 GB of memory. Infire utilizes a combination of <a href="https://developer.nvidia.com/blog/how-optimize-data-transfers-cuda-cc/#pinned_host_memory"><u>page-locked</u></a> memory with the CUDA asynchronous copy mechanism over multiple streams to speed up model transfer into GPU memory.</p><p>While loading the model weights, Infire begins just-in-time compiling the required kernels based on the model's parameters, and loads them onto the device. Parallelizing the compilation with model loading amortizes the latency of both processes. The startup time of Infire when loading the Llama-3-8B-Instruct model from disk is just under 4 seconds. </p>
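<p>As a back-of-the-envelope check on the sizing math above, the footprint estimate is simply the parameter count times two bytes. This snippet is illustrative only, not Infire code:</p>
            <pre><code>// Approximate in-memory size of a model stored as BF16 weights
// (16 bits, i.e. 2 bytes, per parameter).
function bf16FootprintGB(parameterCount) {
  const bytesPerWeight = 2;
  return (parameterCount * bytesPerWeight) / 1e9;
}

console.log(bf16FootprintGB(8e9));   // Llama 3.1 8B  -> ~16 GB
console.log(bf16FootprintGB(109e9)); // Llama 4 Scout -> ~218 GB
</code></pre>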
    <div>
      <h3>The HTTP server</h3>
      <a href="#the-http-server">
        
      </a>
    </div>
    <p>The Infire server is built on top of <a href="https://docs.rs/hyper/latest/hyper/"><u>hyper</u></a>, a high-performance HTTP crate, which makes it possible to handle hundreds of connections in parallel while consuming a modest amount of CPU time. Because of ChatGPT’s ubiquity, vLLM and many other services offer OpenAI-compatible endpoints out of the box. Infire is no different in that regard. The server is responsible for handling communication with the client: accepting connections, handling prompts, and returning responses. A prompt will usually consist of some text, or a "transcript" of a chat session, along with extra parameters that affect how the response is generated, such as the temperature, which controls the randomness of the response, and limits on its length.</p><p>After a request is deemed valid, Infire will pass it to the tokenizer, which transforms the raw text into a series of tokens, or numbers that the model can consume. Different models use different kinds of tokenizers, but the most popular ones use byte-pair encoding. For tokenization, we use HuggingFace's tokenizers crate. The tokenized prompts and parameters are then sent to the batcher, and scheduled for processing on the GPU, where they will be processed as vectors of numbers, called <a href="https://www.cloudflare.com/learning/ai/what-are-embeddings/"><u>embeddings</u></a>.</p>
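<p>To illustrate the core idea behind byte-pair encoding, here is a toy merge step. Real tokenizers, like the HuggingFace crate Infire uses, apply a merge table learned during training rather than computing pair frequencies on the fly, so treat this strictly as a sketch of the concept:</p>
            <pre><code>// Toy BPE step: find the most frequent adjacent pair of tokens and
// merge every occurrence of it into a single new token.
function mergeMostFrequentPair(tokens) {
  // Count adjacent pairs; "\u0000" is a separator assumed never to
  // appear inside a token.
  const counts = new Map();
  tokens.forEach((tok, i) => {
    if (i === 0) return;
    const pair = tokens[i - 1] + "\u0000" + tok;
    counts.set(pair, (counts.get(pair) ?? 0) + 1);
  });

  // Pick the most frequent pair, requiring at least two occurrences.
  let best = null;
  let bestCount = 1;
  for (const [pair, count] of counts) {
    if (count > bestCount) {
      best = pair;
      bestCount = count;
    }
  }
  if (best === null) return tokens;

  // Greedily merge that pair, scanning left to right.
  const [a, b] = best.split("\u0000");
  const merged = [];
  let i = 0;
  while (i !== tokens.length) {
    if (tokens[i] === a) {
      if (tokens[i + 1] === b) {
        merged.push(a + b);
        i += 2;
        continue;
      }
    }
    merged.push(tokens[i]);
    i += 1;
  }
  return merged;
}

console.log(mergeMostFrequentPair("aaabdaaabac".split("")));
// -> ["aa", "a", "b", "d", "aa", "a", "b", "a", "c"]
</code></pre>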
    <div>
      <h2>The batcher</h2>
      <a href="#the-batcher">
        
      </a>
    </div>
    <p>The most important part of Infire is how it does batching: executing multiple requests in parallel. This makes it possible to better utilize memory bandwidth and caches. </p><p>In order to understand why batching is so important, we need to understand how the inference algorithm works. The weights of a model are essentially a bunch of two-dimensional matrices (also called tensors). The prompt, represented as vectors, is passed through a series of transformations that are largely dominated by one operation: vector-by-matrix multiplication. The model weights are so large that the cost of the multiplication is dominated by the time it takes to fetch them from memory. In addition, modern GPUs have hardware units dedicated to matrix-by-matrix multiplications (called Tensor Cores on Nvidia GPUs). In order to amortize the cost of memory access and take advantage of the Tensor Cores, it is necessary to aggregate multiple operations into a larger matrix multiplication.</p><p>Infire utilizes two techniques to increase the size of those matrix operations. The first one is called prefill: this technique is applied to the prompt tokens. Because all the prompt tokens are available in advance and do not require decoding, they can all be processed in parallel. This is one reason why input tokens are often cheaper (and faster) than output tokens.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1pqyNSzgWLcgrV3urpCvA0/e204ac477992d591a7368632c36e97eb/image1.png" />
          </figure><p><sup><i>How Infire enables larger matrix multiplications via batching</i></sup></p><p>The other technique is called batching: aggregating multiple prompts into a single decode operation.</p><p>Infire mixes both techniques. It attempts to process as many prompts as possible in parallel, and fills the remaining slots in a batch with prefill tokens from incoming prompts. This is also known as continuous batching with chunked prefill.</p><p>As tokens get decoded by the Infire engine, the batcher is also responsible for retiring prompts that reach an End of Stream token, and sending tokens back to the decoder to be converted into text. </p><p>Another job the batcher has is handling the KV cache. One demanding operation in the inference process is called <i>attention</i>. Attention requires going over the KV values computed for all the tokens up to the current one. If we had to recompute those previously encountered KV values for every new token we decode, the runtime of the process would explode for longer context sizes. However, using a cache, we can store all the previous values and re-read them for each consecutive token. The KV cache for a prompt can potentially store KV values for as many tokens as the context window allows. In Llama 3, the maximal context window is 128K tokens. If we pre-allocated the full KV cache for each prompt, we would only have enough memory available to execute 4 prompts in parallel on H100 GPUs! The solution is a paged KV cache. With paged KV caching, the cache is split into smaller chunks called pages. When the batcher detects that a prompt would exceed its KV cache, it simply assigns another page to that prompt. Since most prompts rarely hit the maximum context window, this technique allows for essentially unlimited parallelism under typical load.</p><p>Finally, the batcher drives the Infire forward pass by scheduling the needed kernels to run on the GPU.</p>
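<p>To make the paging idea concrete, here is an illustrative sketch of the batcher’s bookkeeping. Infire implements this in Rust; the JavaScript below and its page size are assumptions for illustration only:</p>
            <pre><code>// Pages are fixed-size chunks of KV slots handed out on demand,
// instead of reserving the full context window for every prompt.
const PAGE_SIZE = 16; // tokens per page -- an assumed value

class PagedKVCache {
  constructor(totalPages) {
    this.freePages = Array.from({ length: totalPages }, (_, i) => i);
    this.pagesByPrompt = new Map();
  }

  // Called as a prompt decodes tokens; grows its page list lazily.
  ensureCapacity(promptId, tokenCount) {
    const pages = this.pagesByPrompt.get(promptId) ?? [];
    while (tokenCount > pages.length * PAGE_SIZE) {
      if (this.freePages.length === 0) throw new Error("KV cache exhausted");
      pages.push(this.freePages.pop());
    }
    this.pagesByPrompt.set(promptId, pages);
  }

  // Called when a prompt reaches an End of Stream token: recycle pages.
  retire(promptId) {
    this.freePages.push(...(this.pagesByPrompt.get(promptId) ?? []));
    this.pagesByPrompt.delete(promptId);
  }
}
</code></pre>
<p>A real engine would track pages of GPU memory holding KV tensors rather than integer IDs, but the allocation pattern (grow lazily, recycle on End of Stream) matches the description above.</p>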
    <div>
      <h2>CUDA kernels</h2>
      <a href="#cuda-kernels">
        
      </a>
    </div>
    <p>Developing Infire gives us the luxury of focusing on the exact hardware we use, which is currently Nvidia Hopper GPUs. This allowed us to improve the performance of specific compute kernels using low-level PTX instructions for this architecture.</p><p>Infire just-in-time compiles its kernels for the specific model it is running, optimizing for the model’s parameters, such as the hidden state size and dictionary size, and for the GPU it is running on. For some operations, such as large matrix multiplications, Infire will utilize the high-performance cuBLASLt library when it deems it faster.</p><p>Infire also makes use of very fine-grained CUDA graphs, essentially creating a dedicated CUDA graph for every possible batch size on demand, then storing it for future launches. Conceptually, a CUDA graph is another form of just-in-time compilation: the CUDA driver replaces a series of kernel launches with a single construct (the graph) that has a significantly lower amortized kernel launch cost, so kernels executed back to back run faster when launched as a single graph than as individual launches.</p>
    <div>
      <h2>How Infire performs in the wild </h2>
      <a href="#how-infire-performs-in-the-wild">
        
      </a>
    </div>
    <p>We ran synthetic benchmarks on one of our edge nodes with an H100 NVL GPU.</p><p>The benchmark used the widely used ShareGPT v3 dataset: a set of 4,000 prompts with a concurrency of 200. We then compared Infire and vLLM running on bare metal, as well as vLLM running under gvisor, which is how we currently run in production. In a production traffic scenario, an edge node would be competing for resources with other traffic. To simulate this, we also benchmarked vLLM running in gvisor with only one CPU available.</p><table><tr><td><p></p></td><td><p>requests/s</p></td><td><p>tokens/s</p></td><td><p>CPU load</p></td></tr><tr><td><p>Infire</p></td><td><p>40.91</p></td><td><p>17224.21</p></td><td><p>25%</p></td></tr><tr><td><p>vLLM 0.10.0</p></td><td><p>38.38</p></td><td><p>16164.41</p></td><td><p>140%</p></td></tr><tr><td><p>vLLM under gvisor</p></td><td><p>37.13</p></td><td><p>15637.32</p></td><td><p>250%</p></td></tr><tr><td><p>vLLM under gvisor with CPU constraints</p></td><td><p>22.04</p></td><td><p>9279.25</p></td><td><p>100%</p></td></tr></table><p>As evident from the benchmarks, we achieved our initial goal of matching and even slightly surpassing vLLM performance. More importantly, we’ve done so at significantly lower CPU usage, in large part because we can run Infire as a trusted bare-metal process. Inference no longer takes away precious resources from our other services, and we see GPU utilization upward of 80%, reducing our operational costs.</p><p>This is just the beginning. There are still multiple proven performance optimizations yet to be implemented in Infire – for example, we’re integrating Flash Attention 3, and most of our kernels don’t utilize kernel fusion. Those and other optimizations will allow us to unlock even faster inference in the near future.</p>
    <div>
      <h2>What’s next </h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Running AI inference presents novel challenges and demands for our infrastructure. Infire is how we’re running AI efficiently — close to users around the world. By building upon techniques like continuous batching, a paged KV-cache, and low-level optimizations tailored to our hardware, Infire maximizes GPU utilization while minimizing overhead. Infire completes inference tasks faster and with a fraction of the CPU load of our previous vLLM-based setup, especially under the strict security constraints we require. This allows us to serve more requests with fewer resources, making requests served via Workers AI faster and more efficient.</p><p>However, this is just our first iteration — we’re excited to build multi-GPU support for larger models, quantization, and true multi-tenancy into the next version of Infire. This is part of our goal to make Cloudflare the best possible platform for developers to build AI applications.</p><p>Want to see if your AI workloads are faster on Cloudflare? <a href="https://developers.cloudflare.com/workers-ai/"><u>Get started</u></a> with Workers AI today. </p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[LLM]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">7Li4fkq9b4B8QlgwSmZrqE</guid>
            <dc:creator>Vlad Krasnov</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
        <item>
            <title><![CDATA[State-of-the-art image generation Leonardo models and text-to-speech Deepgram models now available in Workers AI]]></title>
            <link>https://blog.cloudflare.com/workers-ai-partner-models/</link>
            <pubDate>Wed, 27 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're expanding Workers AI with new partner models from Leonardo.Ai and Deepgram. Start using state-of-the-art image generation models from Leonardo and real-time TTS and STT models from Deepgram.  ]]></description>
            <content:encoded><![CDATA[ <p>When we first launched <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a>, we made a bet that AI models would get faster and smaller. We built our infrastructure around this hypothesis, adding specialized GPUs to our datacenters around the world that can serve inference to users as fast as possible. We created our platform to be as general as possible, but we also identified niche use cases that fit our infrastructure well, such as low-latency image generation or real-time audio voice agents. To lean in on those use cases, we’re bringing on some new models that will help make it easier to develop for these applications.</p><p>Today, we’re excited to announce that we are expanding our model catalog to include closed-source partner models that fit this use case. We’ve partnered with <a href="http://leonardo.ai"><u>Leonardo.Ai</u></a> and <a href="https://deepgram.com/"><u>Deepgram</u></a> to bring their latest and greatest models to Workers AI, hosted on Cloudflare’s infrastructure. Leonardo and Deepgram both have models with a great speed-to-performance ratio that suit the infrastructure of Workers AI. We’re starting off with these great partners — but expect to expand our catalog to other partner models as well.</p><p>The benefit of using these models on Workers AI is that we don’t only have a standalone inference service; we also have an entire suite of developer products that allow you to build whole applications around AI. If you’re building an image generation platform, you could use Workers to <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">host the application logic</a>, Workers AI to generate the images, R2 for storage, and Images for serving and transforming media. If you’re building Realtime voice agents, we offer WebRTC and WebSocket support via Workers, speech-to-text, text-to-speech, and turn detection models via Workers AI, and an orchestration layer via Cloudflare Realtime. All in all, we want to lean into use cases that we think Cloudflare has a unique advantage in, with developer tools to back it up, and make it all available so that you can build the best AI applications on top of our holistic Developer Platform.</p>
    <div>
      <h2>Leonardo Models</h2>
      <a href="#leonardo-models">
        
      </a>
    </div>
    <p><a href="https://www.leonardo.ai"><u>Leonardo.Ai</u></a> is a generative AI media lab that trains their own models and hosts a platform for customers to create generative media. The Workers AI team has been working with Leonardo for a while now and have experienced the magic of their image generation models firsthand. We’re excited to bring on two image generation models from Leonardo: @cf/leonardo/phoenix-1.0 and @cf/leonardo/lucid-origin.</p><blockquote><p><i>“We’re excited to enable Cloudflare customers a new avenue to extend and use our image generation technology in creative ways such as creating character images for gaming, generating personalized images for websites, and a host of other uses... all through the Workers AI and the Cloudflare Developer Platform.” - </i><b><i>Peter Runham</i></b><i>, CTO, </i><a href="http://leonardo.ai"><i><u>Leonardo.Ai </u></i></a></p></blockquote><p>The Phoenix model is trained from the ground up by Leonardo, excelling at things like text rendering and prompt coherence. The full image generation request took 4.89s end-to-end for a 25 step, 1024x1024 image.</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/leonardo/phoenix-1.0 \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: application/json' \
  --data '{
    "prompt": "A 1950s-style neon diner sign glowing at night that reads '\''OPEN 24 HOURS'\'' with chrome details and vintage typography.",
    "width":1024,
    "height":1024,
    "steps": 25,
    "seed":1,
    "guidance": 4,
    "negative_prompt": "bad image, low quality, signature, overexposed, jpeg artifacts, undefined, unclear, Noisy, grainy, oversaturated, overcontrasted"
}'
</code></pre>
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1q7ndHYrwLQqqAdX6kGEkl/96ece588cf82691fa8e8d11ece382672/BLOG-2903_2.png" />
          </figure><p>The Lucid Origin model is a recent addition to Leonardo’s family of models and is great at generating photorealistic images. The image below took 4.38s to generate end-to-end at 25 steps and a 1024x1024 image size.</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/leonardo/lucid-origin \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: application/json' \
  --data '{
    "prompt": "A 1950s-style neon diner sign glowing at night that reads '\''OPEN 24 HOURS'\'' with chrome details and vintage typography.",
    "width":1024,
    "height":1024,
    "steps": 25,
    "seed":1,
    "guidance": 4,
    "negative_prompt": "bad image, low quality, signature, overexposed, jpeg artifacts, undefined, unclear, Noisy, grainy, oversaturated, overcontrasted"
}'
</code></pre>
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26VKWD8ua6Pe2awQWRnF7n/bb42c9612b08269af4ef38df39a2ed30/BLOG-2903_3.png" />
          </figure>
    <div>
      <h2>Deepgram Models</h2>
      <a href="#deepgram-models">
        
      </a>
    </div>
    <p>Deepgram is a voice AI company that develops their own audio models, allowing users to interact with AI through a natural interface for humans: voice. Voice is an exciting interface because it carries higher bandwidth than text, with additional speech signals like pacing, intonation, and more. The Deepgram models that we’re bringing to our platform perform extremely fast speech-to-text and text-to-speech inference. Combined with Workers AI, they showcase our unique infrastructure so customers can build low-latency voice agents and more.</p><blockquote><p><i>"By hosting our voice models on Cloudflare's Workers AI, we're enabling developers to create real-time, expressive voice agents with ultra-low latency. Cloudflare's global network brings AI compute closer to users everywhere, so customers can now deliver lightning-fast conversational AI experiences without worrying about complex infrastructure." - </i><i><b>Adam Sypniewski</b></i><i>, CTO, Deepgram</i></p></blockquote><p><a href="https://developers.cloudflare.com/workers-ai/models/nova-3"><u>@cf/deepgram/nova-3</u></a> is a speech-to-text model that can quickly transcribe audio with high accuracy. <a href="https://developers.cloudflare.com/workers-ai/models/aura-1"><u>@cf/deepgram/aura-1</u></a> is a text-to-speech model that is context-aware and can apply natural pacing and expressiveness based on the input text. The newer Aura 2 model will be available on Workers AI soon. We’ve also improved the experience of sending binary mp3 files to Workers AI, so you don’t have to convert them into a Uint8Array like you had to previously. Along with our Realtime announcements (coming soon!), these audio models are the key to enabling customers to build voice agents directly on Cloudflare.</p><p>With the AI binding, a call to the Nova 3 speech-to-text model would look like this:</p>
            <pre><code>const URL = "https://www.some-website.com/audio.mp3";
const mp3 = await fetch(URL);
 
const res = await env.AI.run("@cf/deepgram/nova-3", {
    "audio": {
      body: mp3.body,
      contentType: "audio/mpeg"
    },
    "detect_language": true
  });
</code></pre>
            <p>With the REST API, it would look like this:</p>
            <pre><code>curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/deepgram/nova-3?detect_language=true' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: audio/mpeg' \
  --data-binary @/path/to/audio.mp3</code></pre>
            <p>We’ve also added WebSocket support to the Deepgram models, which you can use to keep a connection to the inference server alive for bi-directional input and output. To use the Nova model with WebSocket support, check out our <a href="https://developers.cloudflare.com/workers-ai/models/nova-3"><u>Developer Docs</u></a>.</p><p>All the pieces work together, as the sketch after this list shows, so that you can:</p><ol><li><p><b>Capture audio</b> with Cloudflare Realtime from any WebRTC source</p></li><li><p><b>Pipe it</b> via WebSocket to your processing pipeline</p></li><li><p><b>Transcribe</b> with Deepgram audio models running on Workers AI</p></li><li><p><b>Process</b> with your LLM of choice through a model hosted on Workers AI or proxied via <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a></p></li><li><p><b>Orchestrate</b> everything with Realtime Agents</p></li></ol>
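<p>To make steps 3 and 4 concrete, here is a minimal, hypothetical Worker that chains the Nova 3 call from the binding example above into an LLM hosted on Workers AI. The audio URL, LLM model choice, and prompt are illustrative assumptions:</p>
            <pre><code>export default {
  async fetch(request, env) {
    // Step 3: speech-to-text with Deepgram Nova 3, using the same
    // request shape as the binding example above.
    const mp3 = await fetch("https://www.some-website.com/audio.mp3");
    const transcript = await env.AI.run("@cf/deepgram/nova-3", {
      audio: { body: mp3.body, contentType: "audio/mpeg" },
      detect_language: true
    });

    // Step 4: process the transcript with an LLM of your choice.
    const summary = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt: "Summarize this call: " + JSON.stringify(transcript)
    });

    return Response.json(summary);
  }
};
</code></pre>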
    <div>
      <h2>Try these models out today</h2>
      <a href="#try-these-models-out-today">
        
      </a>
    </div>
    <p>Check out our <a href="https://developers.cloudflare.com/workers-ai/"><u>developer docs</u></a> for more details, pricing, and how to get started with the newest partner models available on Workers AI.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">35N861jwJHF4GEiRCDxWP</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
    </channel>
</rss>