
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 14:51:41 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Announcing Workers automatic tracing, now in open beta]]></title>
            <link>https://blog.cloudflare.com/workers-tracing-now-in-open-beta/</link>
            <pubDate>Tue, 28 Oct 2025 12:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workers' support for automatic tracing is now in open beta! Export traces to any OpenTelemetry-compatible provider for deeper application observability -- no code changes required ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When your Worker slows down or starts throwing errors, finding the root cause shouldn't require hours of log analysis and trial-and-error debugging. You should have clear visibility into what's happening at every step of your application's request flow. This is feedback we’ve heard loud and clear from developers using Workers, and today we’re excited to announce an Open Beta for tracing on <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a>! You can now:  </p><ul><li><p><b>Get automatic instrumentation for applications on the Workers platform: </b>No manual setup, complex instrumentation, or code changes. It works out of the box. </p></li><li><p><b>Explore and investigate traces in the Cloudflare dashboard:</b> Your traces are processed and available in the Workers Observability dashboard alongside your existing logs.</p></li><li><p><b>Export logs and traces to OpenTelemetry-compatible providers:</b> Send OpenTelemetry traces (and correlated logs) to your observability provider of choice. </p></li></ul><p>In 2024, <a href="https://blog.cloudflare.com/cloudflare-acquires-baselime-expands-observability-capabilities/"><u>we set out to build</u></a> the best first-party <a href="https://www.cloudflare.com/developer-platform/products/workers-observability/"><u>observability</u></a> of any cloud platform. 
We launched a new metrics dashboard to give better insights into how your Worker is performing, <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/#enable-workers-logs"><u>Workers Logs</u></a> to automatically ingest and store logs for your Workers, a <a href="https://developers.cloudflare.com/workers/observability/query-builder/"><u>query builder</u></a> to explore your data across any dimension and <a href="https://developers.cloudflare.com/workers/observability/logs/real-time-logs/"><u>real-time logs</u></a> to stream your logs in real time with advanced filtering capabilities. Starting today, you can get an even deeper understanding of your Workers applications by enabling automatic <b>tracing</b>!</p>
    <div>
      <h3>What is Workers Tracing? </h3>
      <a href="#what-is-workers-tracing">
        
      </a>
    </div>
    <p>Workers traces capture and emit OpenTelemetry-compliant spans to show you detailed metadata and timing information on every operation your Worker performs.<b> </b>It helps you identify performance bottlenecks, resolve errors, and understand how your Worker interacts with other services on the Workers platform. You can now answer questions like:</p><ul><li><p>Which calls are slowing down my application?</p></li><li><p>Which queries to my database take the longest? </p></li><li><p>What happened within a request that resulted in an error?</p></li></ul><p>Tracing provides a visualization of each invocation's journey through various operations. Each operation is captured as a span, a timed segment that shows what happened and how long it took. Child spans nest within parent spans to show sub-operations and dependencies, creating a hierarchical view of your invocation’s execution flow. Each span can include contextual metadata or attributes that provide details for debugging and filtering events.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4F7l3WSJ2hY0eu6kX47Rdp/c7c3934e9abbbb1f01ec979941d35b54/unnamed.png" />
          </figure>
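<p>As a rough sketch of this structure (the span names, timings, and attribute keys below are illustrative only, not the exact schema Workers emits), a traced invocation might look like:</p>
<pre><code>// Hypothetical trace: a parent span for the fetch handler, with child
// spans for the I/O operations it performed.
const trace = {
  name: "fetch_handler",
  durationMs: 42,
  attributes: { "http.method": "GET" },
  children: [
    { name: "kv_get", durationMs: 8, attributes: { key: "user:123" }, children: [] },
    { name: "fetch", durationMs: 27, attributes: { "http.status_code": 200 }, children: [] },
  ],
};

// Child spans are timed sub-operations nested under their parent; together
// they account for part of the parent's 42 ms.
let childTimeMs = 0;
for (const span of trace.children) {
  childTimeMs += span.durationMs;
}</code></pre>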
    <div>
      <h3>Full automatic instrumentation, no code changes </h3>
      <a href="#full-automatic-instrumentation-no-code-changes">
        
      </a>
    </div>
    <p>Previously, instrumenting your application typically required an understanding of the <a href="https://opentelemetry.io/docs/specs/"><u>OpenTelemetry spec</u></a>, multiple <a href="https://opentelemetry.io/docs/concepts/instrumentation/libraries/"><u>OTel libraries</u></a>, and how they related to each other. Implementation was tedious and bloated your codebase with instrumentation code that obscured your application logic.</p><p>Setting up tracing typically meant spending hours integrating third-party SDKs, wrapping every database call and API request with instrumentation code, and debugging complex config files before you saw a single trace. This overhead often made observability an afterthought, leaving you without full visibility in production when issues arose.</p><p>What makes Workers Tracing truly magical is that it’s <b>completely automatic – no setup, no code changes, no wasted time. </b>We automatically instrument every I/O operation in your Workers through a deep integration in <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>, our runtime, which lets us capture how data flows through every invocation of your Workers.</p><p>You focus on your application logic. We take care of the instrumentation.</p>
    <div>
      <h4>What you can trace today</h4>
      <a href="#what-you-can-trace-today">
        
      </a>
    </div>
    <p>The operations covered today are: </p><ul><li><p><b>Binding calls:</b> Interactions with various <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Worker bindings</u></a>. KV reads and writes, <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2 object storage</a> operations, Durable Object invocations, and many more binding calls are automatically traced. This gives you complete visibility into how your Worker uses other services.</p></li><li><p><b>Fetch calls:</b> All outbound HTTP requests are automatically instrumented, capturing timing, status codes, and request metadata. This enables you to quickly identify which external dependencies are affecting your application's performance.</p></li><li><p><b>Handler calls:</b> Methods on a Worker that can receive and process external inputs, such as <a href="https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/"><u>fetch handlers</u></a>, <a href="https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/"><u>scheduled handlers</u></a>, and <a href="https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer"><u>queue handlers</u></a>. This gives you visibility into how your Worker is invoked and how each entry point performs.</p></li></ul>
    <div>
      <h4>Automatic attributes on every span </h4>
      <a href="#automatic-attributes-on-every-span">
        
      </a>
    </div>
    <p>Our automated instrumentation captures each operation as a span. For example, a span generated by an R2 binding call (like a <code>get</code> or <code>put</code> operation) will automatically contain any available attributes, such as the operation type, the error if applicable, the object key, and duration. These detailed attributes provide the context you need to answer precise questions about your application without needing to manually log every detail.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6n5j0AdsgouMuHacgrTW9Q/60e4f73dd66a591ee666d6acdbaafc03/unnamed2.png" />
          </figure><p>We will continue to add more detailed attributes to spans and add the ability to trace an invocation across multiple Workers or external services. Our <a href="http://developers.cloudflare.com/workers/observability/traces/spans-and-attributes/"><u>documentation</u></a> contains a complete list of all instrumented spans and their attributes.</p>
    <div>
      <h3>Investigate traces in the Workers dashboard</h3>
      <a href="#investigate-traces-in-the-workers-dashboard">
        
      </a>
    </div>
    <p>You can easily <a href="http://developers.cloudflare.com/workers/observability/traces/"><u>view traces directly within a specific Worker application</u></a> in the Cloudflare dashboard, giving you immediate visibility into your application's performance. You’ll find a list of all trace events within your desired time frame and a trace visualization of each invocation, including the duration of each call and any available attributes. You can also query across all Workers on your account, letting you pinpoint issues that span multiple applications. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HU4vktx7zr3aflWK9cdsl/73dcd1f1f9702faa65430fec9c9da8c5/unnamed_3.png" />
          </figure><p>To get started viewing traces on your Workers application, set the following in your Worker’s configuration: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5e7hIAgVnjxvZSshwBTpXf/a0571af6e43be414c85e25dba106848c/1.png" />
          </figure>
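<p>For reference, the setting shown above is a <code>wrangler.jsonc</code> snippet along these lines. This is a sketch: confirm the exact field names against the tracing documentation, as the schema may evolve during the beta.</p>
<pre><code>{
  "name": "my-worker",
  "observability": {
    "traces": {
      "enabled": true
    }
  }
}</code></pre>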
    <div>
      <h3>Export traces to OpenTelemetry compatible providers </h3>
      <a href="#export-traces-to-opentelemetry-compatible-providers">
        
      </a>
    </div>
    <p>However, we realize that some development teams need Workers data to live alongside other telemetry data in the <b>tools they are already using</b>. That’s why we’re also adding tracing exports, letting your team send, visualize and query data with your existing observability stack! Starting today, you can export traces directly to providers like <a href="https://www.honeycomb.io/"><u>Honeycomb</u></a>, <a href="https://grafana.com/"><u>Grafana</u></a>, <a href="https://sentry.io/welcome/"><u>Sentry</u></a> or any other <a href="http://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/#available-opentelemetry-destinations"><u>OpenTelemetry Protocol (OTLP) provider with an available endpoint</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GdiHZEL1bIpGn5BRhbyyT/0fbb6610a488f5f2c44f78ba3ecaf576/Export_traces_to_OpenTelemetry_compatible_providers_.png" />
          </figure>
    <div>
      <h4>Correlated logs and traces </h4>
      <a href="#correlated-logs-and-traces">
        
      </a>
    </div>
    <p>We also support exporting OTLP-formatted logs that share the same trace ID, enabling third-party platforms to automatically correlate log entries with their corresponding traces. This lets you easily jump between spans and related log messages.</p>
    <div>
      <h4>Set up your destination, enable exports, and go! </h4>
      <a href="#set-up-your-destination-enable-exports-and-go">
        
      </a>
    </div>
    <p>To start sending events to your destination of choice, first configure your OTLP endpoint destination in the Cloudflare dashboard. For every destination, you can specify a custom name and set custom headers to include API keys or app configuration. </p><p>Once you have your destination set up (e.g. <code>honeycomb-tracing</code>), set the following in your <code>wrangler.jsonc</code> and deploy: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2xUbPsz307PvkZ6i15iUKe/452efaad0afdbeb2722f42726d0a849b/3.png" />
          </figure>
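<p>For illustration, referencing a destination from <code>wrangler.jsonc</code> might look roughly like the snippet below. The <code>destinations</code> key and its shape are our assumption here, not confirmed syntax; check the export documentation for the exact fields:</p>
<pre><code>{
  "observability": {
    "traces": {
      "enabled": true,
      "destinations": ["honeycomb-tracing"]
    }
  }
}</code></pre>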
    <div>
      <h3>Coming up for Workers observability</h3>
      <a href="#coming-up-for-workers-observability">
        
      </a>
    </div>
    <p>This is just the beginning of Workers providing the workflows and tools to get you the telemetry data you want, where you want it. We’re improving our support both for native tracing in the dashboard and for exporting other types of telemetry to third parties. In the coming months we’ll be launching: </p><ul><li><p><b>Support for more spans and attributes: </b>We are adding more automatic traces for every part of the Workers platform. While our first goal is to give you visibility into the duration of every operation within your request, we also want to add detailed attributes. Your feedback on what’s missing will be extremely valuable here. </p></li><li><p><b>Trace context propagation: </b>When building <b><i>distributed</i></b> applications, it’s critical that traces connect across all of your services (even those outside of Cloudflare), automatically linking spans together for complete, end-to-end visibility. For example, a trace from Workers could be nested under a parent service, or vice versa. When fully implemented, our automatic trace context propagation will follow <a href="https://www.w3.org/TR/trace-context/"><u>W3C standards</u></a> to ensure compatibility across your existing tools and services. </p></li><li><p><b>Support for custom spans and attributes</b>: While automatic instrumentation gives you visibility into what’s happening within the Workers platform, we know you need visibility into your own application logic too. So, we’ll give you the ability to manually add your own spans as well.</p></li><li><p><b>Ability to export metrics: </b>Today, metrics, logs and traces are available for you to monitor and view within the Workers dashboard. But the final missing piece is giving you the ability to export both infrastructure metrics (like request volume, error rates, and execution duration) and custom application metrics to your preferred observability provider.</p></li></ul>
    <div>
      <h3>What you can expect from tracing pricing  </h3>
      <a href="#what-you-can-expect-from-tracing-pricing">
        
      </a>
    </div>
    <p>Today, at the start of the beta, viewing traces in the Cloudflare dashboard and exporting traces to a third-party provider are both free. Starting <b>January 15, 2026</b>, tracing and log events will be billed at the following rates:</p><p><b>Viewing Workers traces in the Cloudflare dashboard</b></p><p>You can view traces in the Cloudflare dashboard on both the <a href="https://www.cloudflare.com/plans/developer-platform/"><u>Workers Free and Paid plans</u></a> at the pricing shown below:</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Included Volume </span></td>
    <td><span>200K events per day</span></td>
    <td><span>20M events per month </span></td>
  </tr>
  <tr>
    <td><span>Additional Events </span></td>
    <td><span>N/A</span></td>
    <td><span>$0.60 per million events</span></td>
  </tr>
  <tr>
    <td><span>Retention </span></td>
    <td><span>3 days </span></td>
    <td><span>7 days </span></td>
  </tr>
</tbody></table></div><p><b>Exporting traces and logs </b></p><p>To export traces to a 3rd-party OTLP-compatible destination, you will need a <b>Workers Paid </b>subscription. Pricing is based on total span or log events with the following inclusions:</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Included Volume</span></td>
    <td><span>Not available</span></td>
    <td><span>10 million events per month</span></td>
  </tr>
  <tr>
    <td><span>Additional events</span></td>
    <td><span>Not available</span></td>
    <td><span>$0.05 per million batched events</span></td>
  </tr>
</tbody></table></div>
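<p>As a rough illustration of how the rates above combine, assuming overage is billed linearly on events beyond the included volume (the monthly volumes below are made-up examples):</p>
<pre><code>// Overage cost in USD for a month, given total events, the included
// volume, and the per-million rate from the tables above.
function overageUSD(events, includedEvents, ratePerMillion) {
  const extra = Math.max(0, events - includedEvents);
  return (extra / 1_000_000) * ratePerMillion;
}

// Dashboard (Workers Paid): 25M events against 20M included at $0.60/M
const dashboardCost = overageUSD(25_000_000, 20_000_000, 0.60); // 3.00
// Export (Workers Paid): 15M events against 10M included at $0.05/M
const exportCost = overageUSD(15_000_000, 10_000_000, 0.05); // 0.25</code></pre>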
    <div>
      <h3>Enable tracing today</h3>
      <a href="#enable-tracing-today">
        
      </a>
    </div>
    <p>Ready to get started with tracing on your Workers application? </p><ul><li><p><b>Check out our </b><a href="http://developers.cloudflare.com/workers/observability/traces/"><b><u>documentation</u></b></a><b>: </b>Learn how to get set up, read about current limitations and discover more about what’s coming up. </p></li><li><p><b>Join the chatter in our </b><a href="https://github.com/cloudflare/workers-sdk/discussions/11062"><b>GitHub discussion</b></a><b>:</b> Your feedback will be extremely valuable in our beta period on our automatic instrumentation, tracing dashboard, and OpenTelemetry export flow. Head to our GitHub discussion to raise issues, put in feature requests and get in touch with us!</p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[Tracing]]></category>
            <category><![CDATA[OpenTelemetry]]></category>
            <guid isPermaLink="false">2Np8UAH0AuW7KjeIwym0NY</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Boris Tane</dc:creator>
            <dc:creator>Jeremy Morrell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing streamable HTTP transport and Python language support to MCP servers]]></title>
            <link>https://blog.cloudflare.com/streamable-http-mcp-servers-python/</link>
            <pubDate>Wed, 30 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're continuing to make it easier for developers to bring their services into the AI ecosystem with the Model Context Protocol (MCP) with two new updates. ]]></description>
            <content:encoded><![CDATA[ <p>We’re <a href="https://blog.cloudflare.com/building-ai-agents-with-mcp-authn-authz-and-durable-objects/"><u>continuing</u></a> to make it easier for developers to <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>bring their services into the AI ecosystem</u></a> with the <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/">Model Context Protocol</a> (MCP). Today, we’re announcing two new capabilities:</p><ul><li><p><b>Streamable HTTP Transport</b>: The <a href="https://agents.cloudflare.com/"><u>Agents SDK</u></a> now supports the <a href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http"><u>new Streamable HTTP transport</u></a>, allowing you to future-proof your MCP server. <a href="https://developers.cloudflare.com/agents/model-context-protocol/transport/"><u>Our implementation</u></a> allows your MCP server to simultaneously handle both the new Streamable HTTP transport and the existing SSE transport, maintaining backward compatibility with all remote MCP clients.</p></li><li><p><b>Deploy MCP servers written in Python</b>: In 2024, we <a href="https://blog.cloudflare.com/python-workers/"><u>introduced first-class Python language support</u></a> in <a href="https://www.cloudflare.com/developer-platform/products/workers/">Cloudflare Workers</a>, and now you can build MCP servers on Cloudflare that are entirely written in Python.</p></li></ul><p>Click “Deploy to Cloudflare” to <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>get started</u></a> with a <a href="https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-authless"><u>remote MCP server</u></a> that supports the new Streamable HTTP transport method, with backwards compatibility with the SSE transport. 
</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-authless"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h3>Streamable HTTP: A simpler way for AI agents to communicate with services via MCP</h3>
      <a href="#streamable-http-a-simpler-way-for-ai-agents-to-communicate-with-services-via-mcp">
        
      </a>
    </div>
    <p><a href="https://spec.modelcontextprotocol.io/specification/2025-03-26/"><u>The MCP spec</u></a> was <a href="https://spec.modelcontextprotocol.io/specification/2025-03-26/basic/transports/"><u>updated</u></a> on March 26 to introduce a new transport mechanism for remote MCP, called <a href="https://spec.modelcontextprotocol.io/specification/2025-03-26/basic/transports/#streamable-http"><u>Streamable HTTP</u></a>. The new transport simplifies how <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">AI agents</a> can interact with services by using a single HTTP endpoint for sending and receiving responses between the client and the server, replacing the need to implement separate endpoints for initializing the connection and for sending messages. </p>
    <div>
      <h4>Upgrading your MCP server to use the new transport method</h4>
      <a href="#upgrading-your-mcp-server-to-use-the-new-transport-method">
        
      </a>
    </div>
    <p>If you've already built a remote MCP server on Cloudflare using the Cloudflare Agents SDK, then <a href="https://developers.cloudflare.com/agents/model-context-protocol/transport/"><u>adding support for Streamable HTTP</u></a> is straightforward. The SDK has been updated to support both the existing Server-Sent Events (SSE) transport and the new Streamable HTTP transport concurrently. </p><p>Here's how you can configure your server to handle both transports:</p>
            <pre><code>export default {
  fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const { pathname } = new URL(request.url);
    if (pathname.startsWith('/sse')) {
      return MyMcpAgent.serveSSE('/sse').fetch(request, env, ctx);
    }
    if (pathname.startsWith('/mcp')) {
      return MyMcpAgent.serve('/mcp').fetch(request, env, ctx);
    }
    return new Response('Not found', { status: 404 });
  },
};</code></pre>
            <p>Or, if you’re using Hono:</p>
            <pre><code>const app = new Hono()
app.mount('/sse', MyMCP.serveSSE('/sse').fetch, { replaceRequest: false })
app.mount('/mcp', MyMCP.serve('/mcp').fetch, { replaceRequest: false })
export default app</code></pre>
            <p>Or if your MCP server implements <a href="https://developers.cloudflare.com/agents/model-context-protocol/authorization/"><u>authentication &amp; authorization</u></a> using the Workers <a href="https://github.com/cloudflare/workers-oauth-provider"><u>OAuth Provider Library</u></a>: </p>
            <pre><code>export default new OAuthProvider({
 apiHandlers: {
   '/sse': MyMCP.serveSSE('/sse'),
   '/mcp': MyMCP.serve('/mcp'),
 },
 // ...
})</code></pre>
            <p>The key changes are: </p><ul><li><p>Use <code>MyMcpAgent.serveSSE('/sse')</code> for the existing SSE transport. Previously, this would have been <code>MyMcpAgent.mount('/sse')</code>, which has been kept as an alias.</p></li><li><p>Add a new path with <code>MyMcpAgent.serve('/mcp')</code> to support the new Streamable HTTP transport</p></li></ul><p>That's it! With these few lines of code, your MCP server will support both transport methods, making it compatible with both existing and new clients.</p>
    <div>
      <h4>Using Streamable HTTP from an MCP client</h4>
      <a href="#using-streamable-http-from-an-mcp-client">
        
      </a>
    </div>
    <p>While most MCP clients haven’t yet adopted the new Streamable HTTP transport, you can start testing it today using <a href="https://www.npmjs.com/package/mcp-remote">mcp-remote</a>, an adapter that lets MCP clients that otherwise only support local connections, like Claude Desktop, work with remote MCP servers. This tool allows any MCP client to connect to remote MCP servers via either SSE or Streamable HTTP, even if the client doesn't natively support remote connections or the new transport method. </p>
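<p>For example, a client entry in Claude Desktop’s <code>claude_desktop_config.json</code> that proxies a remote server through mcp-remote could look like this (the server name and URL are placeholders):</p>
<pre><code>{
  "mcpServers": {
    "my-remote-server": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-worker.example.com/mcp"]
    }
  }
}</code></pre>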
    <div>
      <h4>So, what’s new with Streamable HTTP? </h4>
      <a href="#so-whats-new-with-streamable-http">
        
      </a>
    </div>
    <p>Initially, remote MCP communication between AI agents and services used a single connection but required interactions with two different endpoints: one endpoint (<code>/sse</code>) to establish a persistent Server-Sent Events (SSE) connection that the client keeps open for receiving responses and updates from the server, and another endpoint (<code>/sse/messages</code>) where the client sends requests for tool calls. </p><p>While this works, it's like having a conversation with two phones, one for listening and one for speaking. This adds complexity to the setup, makes it harder to scale, and requires connections to be kept open for long periods of time. This is because SSE operates as a persistent one-way channel where servers push updates to clients. If this connection closes prematurely, clients will miss responses or updates sent from the MCP server during long-running operations. </p><p>The new Streamable HTTP transport addresses these challenges by enabling: </p><ul><li><p><b>Communication through a single endpoint: </b>All MCP interactions now flow through one endpoint, eliminating the need to manage separate endpoints for requests and responses, reducing complexity.</p></li><li><p><b>Bi-directional communication: </b>Servers can send notifications and requests back to clients on the same connection, enabling the server to prompt for additional information or provide real-time updates. </p></li><li><p><b>Automatic connection upgrades: </b>Connections start as standard HTTP requests, but can dynamically upgrade to SSE (Server-Sent Events) to stream responses during long-running tasks.</p></li></ul><p>Now, when an AI agent wants to call a tool on a remote MCP server, it can do so with a single <code>POST</code> request to one endpoint (<code>/mcp</code>). 
Depending on the tool call, the server will either respond immediately or decide to upgrade the connection to use SSE to stream responses or notifications as they become available — all over the same request.</p><p>Our current implementation of Streamable HTTP provides feature parity with the previous SSE transport. We're actively working to implement the full capabilities defined in the specification, including <a href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#resumability-and-redelivery"><u>resumability</u></a>, cancellability, and <a href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#session-management"><u>session management</u></a> to enable more complex, reliable, and scalable agent-to-agent interactions. </p>
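<p>Concretely, the single-endpoint flow can be sketched like this (the endpoint URL and tool name are placeholders, and this is a simplified client sketch, not the Agents SDK’s own implementation):</p>
<pre><code>// One JSON-RPC message POSTed to the single /mcp endpoint.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "my_tool", arguments: { query: "example" } },
};

async function callTool(endpoint) {
  return fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The client accepts either a plain JSON reply or an SSE stream,
      // letting the server upgrade the same response for long-running work.
      "Accept": "application/json, text/event-stream",
    },
    body: JSON.stringify(toolCall),
  });
}</code></pre>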
    <div>
      <h4>What’s coming next? </h4>
      <a href="#whats-coming-next">
        
      </a>
    </div>
    <p>The <a href="https://modelcontextprotocol.io/specification/2025-03-26"><u>MCP specification</u></a> is rapidly evolving, and we're committed to bringing these changes to the Agents SDK to keep your MCP server compatible with all clients. We're actively tracking developments across both transport and authorization, adding support as they land, and maintaining backward compatibility to prevent breaking changes as adoption grows. Our goal is to handle the complexity behind the scenes, so you can stay focused on building great agent experiences.</p><p>On the transport side, here are some of the improvements coming soon to the Agents SDK:</p><ul><li><p><b>Resumability:</b> If a connection drops during a long-running operation, clients will be able to resume exactly where they left off without missing any responses. This eliminates the need to keep connections open continuously, making it ideal for AI agents that run for hours.</p></li><li><p><b>Cancellability</b>: Clients will have explicit mechanisms to cancel operations, enabling cleaner termination of long-running processes.</p></li><li><p><b>Session management</b>: We're implementing secure session handling with unique session IDs that maintain state across multiple connections, helping build more sophisticated agent-to-agent communication patterns.</p></li></ul>
    <div>
      <h3>Deploying Python MCP Servers on Cloudflare</h3>
      <a href="#deploying-python-mcp-servers-on-cloudflare">
        
      </a>
    </div>
    <p>In 2024, we <a href="https://blog.cloudflare.com/python-workers/"><u>introduced Python Workers</u></a>, which lets you write Cloudflare Workers entirely in Python. Now, you can use them to build and deploy remote MCP servers powered by the <a href="https://github.com/modelcontextprotocol/python-sdk"><u>Python MCP SDK</u></a> — a library for defining tools and resources using regular Python functions.</p><p>You can deploy a Python MCP server to your Cloudflare account with the button below, or read the code <a href="https://github.com/cloudflare/ai/tree/main/demos/python-workers-mcp"><u>here</u></a>. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/python-workers-mcp"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Here’s how you can define tools and resources in the MCP server:</p>
            <pre><code>class FastMCPServer(DurableObject):
    def __init__(self, ctx, env):
        self.ctx = ctx
        self.env = env
        from mcp.server.fastmcp import FastMCP
        mcp = FastMCP("Demo")
        self.mcp = mcp

        @mcp.tool()
        def calculate_bmi(weight_kg: float, height_m: float) -&gt; float:
            """Calculate BMI given weight in kg and height in meters"""
            return weight_kg / (height_m**2)

        @mcp.resource("greeting://{name}")
        def get_greeting(name: str) -&gt; str:
            """Get a personalized greeting"""
            return f"Hello, {name}!"

        self.app = mcp.sse_app()

    async def call(self, request):
        import asgi
        return await asgi.fetch(self.app, request, self.env, self.ctx)



async def on_fetch(request, env):
    id = env.ns.idFromName("example")
    obj = env.ns.get(id)
    return await obj.call(request)</code></pre>
            <p>If you're already building APIs with <a href="https://fastapi.tiangolo.com/"><u>FastAPI</u></a>, a popular Python package for quickly building high-performance API servers, you can use <a href="https://github.com/cloudflare/ai/tree/main/packages/fastapi-mcp"><u>FastAPI-MCP</u></a> to expose your existing endpoints as MCP tools. It handles the protocol boilerplate for you, making it easy to bring FastAPI-based services into the agent ecosystem.</p><p>With recent updates like <a href="https://blog.cloudflare.com/python-workers/"><u>support for Durable Objects</u></a> and <a href="https://developers.cloudflare.com/changelog/2025-04-22-python-worker-cron-triggers/"><u>Cron Triggers in Python Workers</u></a>, it’s now easier to run stateful logic and scheduled tasks directly in your MCP server. </p>
    <div>
      <h3>Start building a remote MCP server today! </h3>
      <a href="#start-building-a-remote-mcp-server-today">
        
      </a>
    </div>
    <p>On Cloudflare, <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>you can start building today</u></a>. We’re ready for you, and ready to help build with you. Email us at <a href="mailto:1800-mcp@cloudflare.com"><u>1800-mcp@cloudflare.com</u></a>, and we’ll help get you going. There’s lots more to come with MCP, and we’re excited to see what you build.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/77k853sJHhvZ1UQwrQWyy2/22264b8bda63bc40b6568f88ae99804c/image2.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Python]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <guid isPermaLink="false">5BMzZem6hjKhNsSnI5l3BZ</guid>
            <dc:creator>Jeremy Morrell</dc:creator>
            <dc:creator>Dan Lapid</dc:creator>
        </item>
    </channel>
</rss>