
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 02:33:26 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building a serverless, post-quantum Matrix homeserver]]></title>
            <link>https://blog.cloudflare.com/serverless-matrix-homeserver-workers/</link>
            <pubDate>Tue, 27 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ As a proof of concept, we ported a Matrix homeserver to Cloudflare Workers — delivering encrypted messaging at the edge with automatic post-quantum cryptography. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.</i></sup></p><p>Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide. </p><p>For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.</p><p><b>But there is a "tax" to running it. </b>Traditionally, operating a Matrix <a href="https://matrix.org/homeserver/about/"><u>homeserver</u></a> has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxies</u></a>, and handle rotation for <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot or a little.</p><p>We wanted to see if we could eliminate that tax entirely.</p><p><b>Spoiler: We could.</b> In this post, we’ll explain how we ported a Matrix homeserver to <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a>. 
The resulting proof of concept is a serverless architecture where operations disappear, costs scale to zero when idle, and every connection is protected by <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> by default. You can view the source code and <a href="https://github.com/nkuntz1934/matrix-workers"><u>deploy your own instance directly from GitHub</u></a>.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/nkuntz1934/matrix-workers"><img src="https://deploy.workers.cloudflare.com/button" /></a>
    <div>
      <h2>From Synapse to Workers</h2>
      <a href="#from-synapse-to-workers">
        
      </a>
    </div>
    <p>Our starting point was <a href="https://github.com/matrix-org/synapse"><u>Synapse</u></a>, the Python-based reference Matrix homeserver designed for traditional deployments. PostgreSQL for persistence, Redis for caching, filesystem for media.</p><p>Porting it to Workers meant questioning every storage assumption we’d taken for granted.</p><p>The challenge was storage. Traditional homeservers assume strong consistency via a central SQL database. Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> offers a powerful alternative. This primitive gives us the strong consistency and atomicity required for Matrix state resolution, while still allowing the application to run at the edge.</p><p>We ported the core Matrix protocol logic — event authorization, room state resolution, cryptographic verification — to TypeScript using the Hono framework. D1 replaces PostgreSQL, KV replaces Redis, R2 replaces the filesystem, and Durable Objects handle real-time coordination.</p><p>Here’s how the mapping worked out:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JTja38UZRbFygluawrnz1/9bce290e3070155c734e874c17051551/BLOG-3101_2.png" />
          </figure>
    <div>
      <h2>From monolith to serverless</h2>
      <a href="#from-monolith-to-serverless">
        
      </a>
    </div>
    <p>Moving to Cloudflare Workers brings several advantages for a developer: simple deployment, lower costs, low latency, and built-in security.</p><p><b>Easy deployment: </b>A traditional Matrix deployment requires server provisioning, PostgreSQL administration, Redis cluster management, <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">TLS certificate renewal</a>, load balancer configuration, monitoring infrastructure, and on-call rotations.</p><p>With Workers, deployment is simply <code>wrangler deploy</code>. Workers handles TLS, load balancing, DDoS protection, and global distribution. </p><p><b>Usage-based costs: </b>Traditional homeservers cost money whether anyone is using them or not. Workers pricing is request-based, so you pay when you’re using it, but costs drop to near zero when everyone’s asleep. </p><p><b>Lower latency globally:</b> A traditional Matrix homeserver in us-east-1 adds 200ms+ latency for users in Asia or Europe. Workers, meanwhile, run in 300+ locations worldwide. When a user in Tokyo sends a message, the Worker executes in Tokyo. </p><p><b>Built-in security: </b>Matrix homeservers can be high-value targets: They handle encrypted communications, store message history, and authenticate users. Traditional deployments require careful hardening: firewall configuration, rate limiting, DDoS mitigation, WAF rules, IP reputation filtering.</p><p>Workers provide all of this by default. </p>
    <div>
      <h3>Post-quantum protection </h3>
      <a href="#post-quantum-protection">
        
      </a>
    </div>
    <p>Cloudflare deployed post-quantum hybrid key agreement across all <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> connections in <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>October 2022</u></a>. Every connection to our Worker automatically negotiates X25519MLKEM768 — a hybrid combining classical X25519 with ML-KEM, the post-quantum algorithm standardized by NIST.</p><p>Classical cryptography relies on mathematical problems that are hard for traditional computers but trivial for quantum computers running Shor’s algorithm. ML-KEM is based on lattice problems that remain hard even for quantum computers. The hybrid approach means both algorithms must fail for the connection to be compromised.</p>
    <div>
      <h3>Following a message through the system</h3>
      <a href="#following-a-message-through-the-system">
        
      </a>
    </div>
    <p>Understanding where encryption happens matters for security architecture. When someone sends a message through our homeserver, here’s the actual path:</p><p>The sender’s client takes the plaintext message and encrypts it with Megolm — Matrix’s end-to-end encryption. This encrypted payload then gets wrapped in TLS for transport. On Cloudflare, that TLS connection uses X25519MLKEM768, making it quantum-resistant.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wGGYZ4LYspufH1c4psmL1/28acad8ab8e6535525dda413669c2d74/BLOG-3101_3.png" />
          </figure><p>The Worker terminates TLS, but what it receives is still encrypted — the Megolm ciphertext. We store that ciphertext in D1, index it by room and timestamp, and deliver it to recipients. But we never see the plaintext. The message “Hello, world” exists only on the sender’s device and the recipient’s device.</p><p>When the recipient syncs, the process reverses. They receive the encrypted payload over another quantum-resistant TLS connection, then decrypt locally with their Megolm session keys.</p>
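    <p>To make the layering concrete, here’s a toy model in Node.js with AES-GCM standing in for Megolm. This illustrates the layering only — it is not the Megolm protocol — and the “server” in the middle only ever handles ciphertext:</p>
            <pre><code>// Toy model of the two layers: AES-256-GCM stands in for Megolm purely
// for illustration. The room key exists only on the two devices.
const crypto = require("node:crypto");

const roomKey = crypto.randomBytes(32); // never leaves the devices

function clientEncrypt(plaintext) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", roomKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function clientDecrypt(event) {
  const decipher = crypto.createDecipheriv("aes-256-gcm", roomKey, event.iv);
  decipher.setAuthTag(event.tag);
  return Buffer.concat([decipher.update(event.ciphertext), decipher.final()]).toString("utf8");
}

// The homeserver's role: store and forward the payload. All it holds is
// ciphertext plus metadata (room, timestamp, sender).
const stored = clientEncrypt("Hello, world");
console.log(clientDecrypt(stored)); // "Hello, world" -- recovered only on a device</code></pre>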
    <div>
      <h3>Two layers, independent protection</h3>
      <a href="#two-layers-independent-protection">
        
      </a>
    </div>
    <p>Messages are protected by two encryption layers that operate independently:</p><p>The <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>transport layer (TLS)</u></a> protects data in transit. It’s encrypted at the client and decrypted at the Cloudflare edge. With X25519MLKEM768, this layer is now post-quantum.</p><p>The <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/"><u>application layer</u></a> (Megolm E2EE) protects message content. It’s encrypted on the sender’s device and decrypted only on recipient devices. This uses classical Curve25519 cryptography.</p>
    <div>
      <h3>Who sees what</h3>
      <a href="#who-sees-what">
        
      </a>
    </div>
    <p>Any Matrix homeserver operator — whether running Synapse on a VPS or this implementation on Workers — can see metadata: which rooms exist, who’s in them, when messages were sent. But no one in the infrastructure chain can see the message content, because the E2EE payload is encrypted on sender devices before it ever hits the network. Cloudflare terminates TLS and passes requests to your Worker, but both see only Megolm ciphertext. Media in encrypted rooms is encrypted client-side before upload, and private keys never leave user devices.</p>
    <div>
      <h3>What traditional deployments would need</h3>
      <a href="#what-traditional-deployments-would-need">
        
      </a>
    </div>
    <p>Achieving post-quantum TLS on a traditional Matrix deployment would require upgrading OpenSSL or BoringSSL to a version supporting ML-KEM, configuring cipher suite preferences correctly, testing client compatibility across all Matrix apps, monitoring for TLS negotiation failures, staying current as PQC standards evolve, and handling clients that don’t support PQC gracefully.</p><p>With Workers, it’s automatic. Chrome, Firefox, and Edge all support X25519MLKEM768. Mobile apps using platform TLS stacks inherit this support. The security posture improves as Cloudflare’s <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>PQC</u></a> deployment expands — no action required on our part.</p>
    <div>
      <h2>The storage architecture that made it work</h2>
      <a href="#the-storage-architecture-that-made-it-work">
        
      </a>
    </div>
    <p>The key insight from porting Tuwunel was that different data needs different consistency guarantees. We use each Cloudflare primitive for what it does best.</p>
    <div>
      <h3>D1 for the data model</h3>
      <a href="#d1-for-the-data-model">
        
      </a>
    </div>
    <p>D1 stores everything that needs to survive restarts and support queries: users, rooms, events, device keys. Over 25 tables covering the full Matrix data model. </p>
            <pre><code>CREATE TABLE events (
	event_id TEXT PRIMARY KEY,
	room_id TEXT NOT NULL,
	sender TEXT NOT NULL,
	event_type TEXT NOT NULL,
	state_key TEXT,
	content TEXT NOT NULL,
	origin_server_ts INTEGER NOT NULL,
	depth INTEGER NOT NULL
);
</code></pre>
            <p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1’s SQLite foundation</a> meant we could port Tuwunel’s queries with minimal changes. Joins, indexes, and aggregations work as expected.</p><p>We learned one hard lesson: D1’s eventual consistency breaks foreign key constraints. A write to rooms might not be visible when a subsequent write to events checks the foreign key. We removed all foreign keys and enforce referential integrity in application code.</p>
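    <p>Concretely, the guard a foreign key used to provide moves into the write path. A minimal sketch, with in-memory Maps standing in for D1 tables and illustrative names rather than the actual implementation:</p>
            <pre><code>// Sketch: enforce the rooms/events relationship in application code
// instead of a SQL FOREIGN KEY. Maps stand in for D1 tables; the names
// are illustrative, not the actual implementation.
const rooms = new Map();  // keyed by room_id
const events = new Map(); // keyed by event_id

function insertEvent(event) {
  // The check the FOREIGN KEY used to perform, now explicit in the
  // write path, where the application controls read timing and retries.
  if (!rooms.has(event.room_id)) {
    throw new Error("unknown room: " + event.room_id);
  }
  events.set(event.event_id, event);
}

rooms.set("!room:example.org", { room_id: "!room:example.org" });
insertEvent({ event_id: "$1", room_id: "!room:example.org", sender: "@a:example.org" });
console.log(events.size); // 1</code></pre>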
    <div>
      <h3>KV for ephemeral state</h3>
      <a href="#kv-for-ephemeral-state">
        
      </a>
    </div>
    <p>KV handles state that expires on its own: OAuth authorization codes live for 10 minutes, while refresh tokens last for a session.</p>
            <pre><code>// Store OAuth code with 10-minute TTL
kv.put(&amp;format!("oauth_code:{}", code), &amp;token_data)?
	.expiration_ttl(600)
	.execute()
	.await?;</code></pre>
            <p>KV’s global distribution means OAuth flows work fast regardless of where users are located.</p>
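    <p>The semantics the OAuth flow relies on are easy to model: an authorization code is valid once, and only within its TTL. A toy in-memory version (the real store is Workers KV; the names here are illustrative):</p>
            <pre><code>// Toy model of the TTL-plus-single-use semantics. The real store is
// Workers KV with expiration_ttl; this Map-based stand-in only shows
// the behavior the OAuth flow depends on.
const store = new Map(); // key: { value, expiresAt }

function putWithTtl(key, value, ttlSeconds) {
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
}

function consumeOnce(key) {
  const entry = store.get(key);
  store.delete(key); // authorization codes are single-use
  if (!entry || Date.now() &gt; entry.expiresAt) return null;
  return entry.value;
}

putWithTtl("oauth_code:abc123", { user: "@alice:example.org" }, 600);
console.log(consumeOnce("oauth_code:abc123")); // token data on first use
console.log(consumeOnce("oauth_code:abc123")); // null: already consumed</code></pre>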
    <div>
      <h3>R2 for media</h3>
      <a href="#r2-for-media">
        
      </a>
    </div>
    <p>Matrix media maps directly to R2, so you can upload an image, get back a content-addressed URL – and egress is free.</p>
    <div>
      <h3>Durable Objects for atomicity</h3>
      <a href="#durable-objects-for-atomicity">
        
      </a>
    </div>
    <p>Some operations can’t tolerate eventual consistency. When a client claims a one-time encryption key, that key must be atomically removed. If two clients claim the same key, encrypted session establishment fails.</p><p>Durable Objects provide single-threaded, strongly consistent storage:</p>
            <pre><code>#[durable_object]
pub struct UserKeysObject {
	state: State,
	env: Env,
}

impl UserKeysObject {
	async fn claim_otk(&amp;self, algorithm: &amp;str) -&gt; Result&lt;Option&lt;Key&gt;&gt; {
    	// Atomic within single DO - no race conditions possible
    	let mut keys: Vec&lt;Key&gt; = self.state.storage()
        	.get("one_time_keys")
        	.await
        	.ok()
        	.flatten()
        	.unwrap_or_default();

    	if let Some(idx) = keys.iter().position(|k| k.algorithm == algorithm) {
        	let key = keys.remove(idx);
        	self.state.storage().put("one_time_keys", &amp;keys).await?;
        	return Ok(Some(key));
    	}
    	Ok(None)
	}
}</code></pre>
            <p>We use UserKeysObject for E2EE key management, RoomObject for real-time room events like typing indicators and read receipts, and UserSyncObject for to-device message queues. The rest flows through D1.</p>
    <div>
      <h3>Complete end-to-end encryption, complete OAuth</h3>
      <a href="#complete-end-to-end-encryption-complete-oauth">
        
      </a>
    </div>
    <p>Our implementation supports the full Matrix E2EE stack: device keys, cross-signing keys, one-time keys, fallback keys, key backup, and dehydrated devices.</p><p>Modern Matrix clients use OAuth 2.0/OIDC instead of legacy password flows. We implemented a complete OAuth provider, with dynamic client registration, PKCE authorization, RS256-signed JWT tokens, token refresh with rotation, and standard OIDC discovery endpoints.
</p>
            <pre><code>curl https://matrix.example.com/.well-known/openid-configuration
{
  "issuer": "https://matrix.example.com",
  "authorization_endpoint": "https://matrix.example.com/oauth/authorize",
  "token_endpoint": "https://matrix.example.com/oauth/token",
  "jwks_uri": "https://matrix.example.com/.well-known/jwks.json"
}
</code></pre>
            <p>Point Element or any Matrix client at the domain, and it discovers everything automatically.</p>
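    <p>The PKCE piece of that flow is small enough to sketch end to end: the client derives a one-way challenge from a random verifier, and the token endpoint later checks that the presented verifier matches. A sketch using Node’s crypto module (not the production code):</p>
            <pre><code>// PKCE with the S256 method (RFC 7636): the client sends the challenge
// alongside the authorization request and proves knowledge of the
// verifier at the token endpoint.
const crypto = require("node:crypto");

function base64url(buf) {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// Client side: a random verifier and its SHA-256 challenge.
const verifier = base64url(crypto.randomBytes(32));
const challenge = base64url(crypto.createHash("sha256").update(verifier).digest());

// Server side, at the token endpoint: recompute and compare.
function checkPkce(storedChallenge, presentedVerifier) {
  const digest = crypto.createHash("sha256").update(presentedVerifier).digest();
  return base64url(digest) === storedChallenge;
}

console.log(checkPkce(challenge, verifier)); // true</code></pre>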
    <div>
      <h2>Sliding Sync for mobile</h2>
      <a href="#sliding-sync-for-mobile">
        
      </a>
    </div>
    <p>Traditional Matrix sync transfers megabytes of data on initial connection, draining mobile battery and data plans.</p><p>Sliding Sync lets clients request exactly what they need. Instead of downloading everything, clients get the 20 most recent rooms with minimal state. As users scroll, they request more ranges. The server tracks position and sends only deltas.</p><p>Combined with edge execution, mobile clients can connect and render their room list in under 500ms, even on slow networks.</p>
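    <p>The core idea can be sketched as a windowed query plus a delta against what the client already has. This is a heavy simplification of the real protocol, but it shows the shape:</p>
            <pre><code>// Simplified sliding-sync window: the client asks for a range of rooms
// ordered by recency, and the server returns only entries that differ
// from what the client last saw. Illustrative only; the real protocol
// carries far more state.
function slidingWindow(rooms, start, end, known) {
  const ordered = rooms.slice().sort(function (a, b) {
    return b.lastActivity - a.lastActivity;
  });
  const window = ordered.slice(start, end + 1);
  // Delta: skip rooms whose state the client already has.
  return window.filter(function (room) {
    return known.get(room.id) !== room.lastActivity;
  });
}

const rooms = [
  { id: "!a", lastActivity: 300 },
  { id: "!b", lastActivity: 100 },
  { id: "!c", lastActivity: 200 },
];
// A fresh client asks for the top two rooms.
console.log(slidingWindow(rooms, 0, 1, new Map())); // rooms !a and !c</code></pre>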
    <div>
      <h2>The comparison</h2>
      <a href="#the-comparison">
        
      </a>
    </div>
    <p>For a homeserver serving a small team:</p><table><tr><th><p> </p></th><th><p><b>Traditional (VPS)</b></p></th><th><p><b>Workers</b></p></th></tr><tr><td><p>Monthly cost (idle)</p></td><td><p>$20-50</p></td><td><p>&lt;$1</p></td></tr><tr><td><p>Monthly cost (active)</p></td><td><p>$20-50</p></td><td><p>$3-10</p></td></tr><tr><td><p>Global latency</p></td><td><p>100-300ms</p></td><td><p>20-50ms</p></td></tr><tr><td><p>Time to deploy</p></td><td><p>Hours</p></td><td><p>Seconds</p></td></tr><tr><td><p>Maintenance</p></td><td><p>Weekly</p></td><td><p>None</p></td></tr><tr><td><p>DDoS protection</p></td><td><p>Additional cost</p></td><td><p>Included</p></td></tr><tr><td><p>Post-quantum TLS</p></td><td><p>Complex setup</p></td><td><p>Automatic</p></td></tr></table><p><sup>*</sup><sup><i>Based on public rates and metrics published by DigitalOcean, AWS Lightsail, and Linode as of January 15, 2026.</i></sup></p><p>The economics improve further at scale. Traditional deployments require capacity planning and over-provisioning. Workers scale automatically.</p>
    <div>
      <h2>The future of decentralized protocols</h2>
      <a href="#the-future-of-decentralized-protocols">
        
      </a>
    </div>
    <p>We started this as an experiment: could Matrix run on Workers? It can — and the approach can work for other stateful protocols, too.</p><p>By mapping traditional stateful components to Cloudflare’s primitives — PostgreSQL to D1, Redis to KV, mutexes to Durable Objects — we can see that complex applications don’t need complex infrastructure. We stripped away the operating system, the database management, and the network configuration, leaving only the application logic and the data itself.</p><p>Workers offers the sovereignty of owning your data, without the burden of owning the infrastructure.</p><p>I have been experimenting with the implementation and am excited for any contributions from others interested in this kind of service. </p><p>Ready to build powerful, real-time applications on Workers? Get started with<a href="https://developers.cloudflare.com/workers/"> <u>Cloudflare Workers</u></a> and explore<a href="https://developers.cloudflare.com/durable-objects/"> <u>Durable Objects</u></a> for your own stateful edge applications. Join our<a href="https://discord.cloudflare.com"> <u>Discord community</u></a> to connect with other developers building at the edge.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Encryption]]></category>
            <guid isPermaLink="false">6VOVAMNwIZ18hMaUlC6aqp</guid>
            <dc:creator>Nick Kuntz</dc:creator>
        </item>
        <item>
            <title><![CDATA[We shipped FinalizationRegistry in Workers: why you should never use it]]></title>
            <link>https://blog.cloudflare.com/we-shipped-finalizationregistry-in-workers-why-you-should-never-use-it/</link>
            <pubDate>Wed, 11 Jun 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workers now support FinalizationRegistry, but just because you can use it doesn’t mean you should. ]]></description>
            <content:encoded><![CDATA[ <p>We’ve recently added support for the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry"><u>FinalizationRegistry API</u></a> in Cloudflare Workers. This API allows developers to request a callback when a JavaScript object is garbage-collected, a feature that can be particularly relevant for managing external resources, such as memory allocated by <a href="https://developer.mozilla.org/en-US/docs/WebAssembly"><u>WebAssembly</u></a> (Wasm). However, despite its availability, our general advice is: <b>avoid using it directly in most scenarios.</b></p><p>Our decision to add <code>FinalizationRegistry</code> — while still cautioning against using it — opens up a bigger conversation: how memory management works when JavaScript and WebAssembly share the same runtime. This is becoming more common in high-performance web apps, and getting it wrong can lead to memory leaks, out-of-memory errors, and performance issues, especially in resource-constrained environments like Cloudflare Workers.</p><p>In this post, we’ll look at how JavaScript and Wasm handle memory differently, why that difference matters, and what <code>FinalizationRegistry</code> is actually useful for. We’ll also explain its limitations, particularly around timing and predictability, walk through why we decided to support it, and how we’ve made it safer to use. Finally, we’ll talk about how newer JavaScript language features offer a more reliable and structured approach to solving these problems.</p>
    <div>
      <h2>Memory management 101</h2>
      <a href="#memory-management-101">
        
      </a>
    </div>
    
    <div>
      <h3>JavaScript</h3>
      <a href="#javascript">
        
      </a>
    </div>
    <p>JavaScript relies on automatic memory management through a process called <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Memory_management#garbage_collection"><u>garbage collection</u></a>. This means developers do not need to worry about freeing allocated memory or tracking object lifetimes. The garbage collector identifies and reclaims memory occupied by objects that are no longer needed by the program (that is, garbage). This helps prevent memory leaks and simplifies memory management for developers.</p>
            <pre><code>function greet() {
  let name = "Alice";         // String is allocated in memory
  console.log("Hello, " + name);
}                             // 'name' goes out of scope

greet();
// JavaScript automatically frees allocated memory at some point in future</code></pre>
            
    <div>
      <h3>WebAssembly</h3>
      <a href="#webassembly">
        
      </a>
    </div>
    <p>WebAssembly (Wasm) is an assembly-like instruction format designed to run high-performance applications on the web. While it initially gained prominence in web browsers, Wasm is also highly effective on the server side. At Cloudflare, we leverage Wasm to enable users to run code written in a variety of programming languages, such as <a href="https://developers.cloudflare.com/workers/languages/rust/"><u>Rust</u></a> and <a href="https://developers.cloudflare.com/workers/languages/python/"><u>Python</u></a>, directly within our <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/"><u>V8 isolates</u></a>, offering both performance and versatility.</p><p>Wasm runtimes are designed to be simple stack machines, and lack built-in garbage collectors. This necessitates manual memory management (allocation and deallocation of memory used by Wasm code), making it an ideal compilation target for languages like Rust and C++ that handle their own memory.</p><p>Wasm modules operate on <a href="https://webassembly.github.io/spec/core/exec/runtime.html#memory-instances"><u>linear memory</u></a>: a resizable block of raw bytes, which JavaScript views as an <a href="https://developer.mozilla.org/en-US/docs/WebAssembly/Reference/JavaScript_interface/Memory"><u>ArrayBuffer</u></a>. This memory is organized in 64 KB pages, and its initial size is defined when the module is compiled or loaded. Wasm code interacts with this memory using 32-bit offsets — integer values functioning as direct pointers that specify a byte offset from the start of its linear memory. This direct memory access model is crucial for Wasm's high performance. The host environment (which in Cloudflare Workers is JavaScript) also shares this ArrayBuffer, reading and writing (often via <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray"><u>TypedArrays</u></a>) to enable vital data exchange between Wasm and JavaScript.</p>
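    <p>You can poke at the linear-memory model from JavaScript alone, without compiling any Wasm, because <code>WebAssembly.Memory</code> exposes the same page-sized ArrayBuffer a module would see:</p>
            <pre><code>// Linear memory from the JavaScript side: a resizable block of raw
// bytes, organized in 64 KB pages and exposed as an ArrayBuffer.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
console.log(memory.buffer.byteLength); // 65536

// JS reads and writes through typed-array views at byte offsets --
// the same integers Wasm code treats as pointers.
const view = new Uint8Array(memory.buffer);
view[0] = 42;

// Growing adds whole pages and detaches the old ArrayBuffer, so any
// existing views must be recreated afterwards.
memory.grow(1);
console.log(memory.buffer.byteLength); // 131072
console.log(new Uint8Array(memory.buffer)[0]); // 42: contents survive the grow</code></pre>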
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3nWyMj5xginVWLxCnPpUdY/f939bb0b4f0007999aff876854645f16/image3.png" />
          </figure><p>A core element of Wasm’s design is its <a href="https://webassembly.org/docs/security/"><u>secure sandbox</u></a>. This confines Wasm code strictly to its own linear memory and explicitly declared imports from the host, preventing unauthorized memory access or system calls. Direct interaction with JavaScript objects is blocked; communication occurs through numeric values, function references, or operations on the shared ArrayBuffer. This strong isolation is vital for security, ensuring Wasm modules don’t interfere with the host or other application components, which is especially important in multi-tenant environments like Cloudflare Workers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ck9v7opkB5Mg1agKNnQm7/c328cc0a2d1c3145a527e1267594fe1d/image2.png" />
          </figure><p>Bridging WebAssembly memory with JavaScript often involves writing low-level "glue" code to convert raw byte arrays from Wasm into usable JavaScript types. Doing this manually for every function or data structure is both tedious and error-prone. Fortunately, tools like <a href="https://rustwasm.github.io/wasm-bindgen/"><u>wasm-bindgen</u></a> and <a href="https://emscripten.org/"><u>Emscripten (Embind)</u></a> handle this interop automatically, generating the binding code needed to pass data cleanly between the two environments. We use these same tools under the hood — wasm-bindgen for Rust-based <a href="https://github.com/cloudflare/workers-rs"><u>workers-rs</u></a> projects, and Emscripten for <a href="https://developers.cloudflare.com/workers/languages/python/how-python-workers-work/"><u>Python Workers</u></a> — to simplify integration and let developers focus on application logic rather than memory translation.</p>
    <div>
      <h2>Interoperability</h2>
      <a href="#interoperability">
        
      </a>
    </div>
    <p>High-performance web apps often use JavaScript for interactive UIs and data fetching, while WebAssembly handles demanding operations like media processing and complex calculations for significant performance gains, allowing developers to maximize efficiency. Given the difference in memory management models, developers need to be careful when using WebAssembly memory from JavaScript.</p><p>For this example, we’ll use Rust to compile a WebAssembly module manually. Rust is a popular choice for WebAssembly because it offers precise control over memory and easy Wasm compilation using standard toolchains.</p>
    <div>
      <h3>Rust</h3>
      <a href="#rust">
        
      </a>
    </div>
    <p>Here we have two simple functions. <code>make_buffer</code> creates a string and returns a raw pointer to JavaScript. The function intentionally “forgets” the allocated memory so that it isn’t cleaned up when the function returns. <code>free_buffer</code>, on the other hand, takes the pointer and length back and frees the memory.</p>
            <pre><code>// Allocate a fresh byte buffer and hand the raw pointer + length to JS.
// We intentionally “forget” the Vec so Rust will not free it right away;
// JS now owns it and must call `free_buffer` later.
#[no_mangle]
pub extern "C" fn make_buffer(out_len: *mut usize) -&gt; *mut u8 {
    let mut data = b"Hello from Rust".to_vec();
    let ptr = data.as_mut_ptr();
    let len = data.len();

    unsafe { *out_len = len };

    std::mem::forget(data);
    ptr
}

// Counterpart that must be called by JS to avoid a leak.
#[no_mangle]
pub unsafe extern "C" fn free_buffer(ptr: *mut u8, len: usize) {
    let _ = Vec::from_raw_parts(ptr, len, len);
}</code></pre>
            
    <div>
      <h3>JavaScript</h3>
      <a href="#javascript">
        
      </a>
    </div>
    <p>Back in JavaScript land, we’ll call these Wasm functions and print the result with console.log. This is a common pattern in Wasm-based applications: WebAssembly doesn’t have direct access to Web APIs, and relies on JavaScript “glue” to interface with the outside world in order to do anything useful.</p>
            <pre><code>const { instance } = await WebAssembly.instantiate(WasmBytes, {});

const { memory, make_buffer, free_buffer } = instance.exports;

//  Use the Rust functions
const lenPtr = 0;                 // scratch word in Wasm memory
const ptr = make_buffer(lenPtr);

const len = new DataView(memory.buffer).getUint32(lenPtr, true);
const data = new Uint8Array(memory.buffer, ptr, len);

console.log(new TextDecoder().decode(data)); // “Hello from Rust”

free_buffer(ptr, len); // free_buffer must be called to prevent memory leaks</code></pre>
            <p>You can find all code samples along with setup instructions <a href="https://github.com/cloudflare/cloudflare-blog/tree/master/2025-06-finalization-registry"><u>here</u></a>.</p><p>As you can see, working with Wasm memory from JavaScript requires care, as it introduces the risk of memory leaks if allocated memory isn’t properly released. JavaScript developers are often unfamiliar with manual memory management, and it’s easy to forget to return memory to WebAssembly after use. This can become especially tricky when Wasm-allocated data is passed into JavaScript libraries, making ownership and lifetime harder to track.</p><p>While occasional leaks may not cause immediate issues, over time they can lead to increased memory usage and degrade performance, particularly in memory-constrained environments like Cloudflare Workers.</p>
    <div>
      <h2>FinalizationRegistry</h2>
      <a href="#finalizationregistry">
        
      </a>
    </div>
    <p><code>FinalizationRegistry</code>, introduced as part of the <a href="https://tc39.es/proposal-weakrefs/"><u>TC39 WeakRefs proposal</u></a>, is a JavaScript API that lets you run “finalizers” (aka cleanup callbacks) when an object gets garbage-collected. Let’s look at a simple example to demonstrate the API:</p>
            <pre><code>const my_registry = new FinalizationRegistry((obj) =&gt; { console.log("Cleaned up: " + obj); });

{
  let temporary = { key: "value" };
  // Register this object in our FinalizationRegistry -- the second argument,
  // "temporary", will be passed to our callback as its obj parameter
  my_registry.register(temporary, "temporary");
}

// At some point in the future, when the temporary object gets garbage collected, we'll see "Cleaned up: temporary" in our logs.</code></pre>
            <p>Let’s see how we can use this API in our Wasm-based application:</p>
            <pre><code>const { instance } = await WebAssembly.instantiate(WasmBytes, {});

const { memory, make_buffer, free_buffer } = instance.exports;

// FinalizationRegistry would be responsible for returning memory back to Wasm
const cleanupFr = new FinalizationRegistry(({ ptr, len }) =&gt; {
  free_buffer(ptr, len);
});

// Use the exported Rust functions
const lenPtr = 0;                 // scratch word in Wasm memory
const ptr = make_buffer(lenPtr);

const len = new DataView(memory.buffer).getUint32(lenPtr, true);
const data = new Uint8Array(memory.buffer, ptr, len);

// Register the data buffer in our FinalizationRegistry so that it gets cleaned up automatically
cleanupFr.register(data, { ptr, len });

console.log(new TextDecoder().decode(data));   // → "Hello from Rust"

// No need to manually call free_buffer, FinalizationRegistry will do this for us</code></pre>
            <p>We can use a <code>FinalizationRegistry</code> to manage any object borrowed from WebAssembly by registering it with a finalizer that calls the appropriate free function. This is the same approach used by <a href="https://rustwasm.github.io/docs/wasm-bindgen/reference/weak-references.html"><u>wasm-bindgen</u></a>. It shifts the burden of manual cleanup away from the JavaScript developer and delegates it to the JavaScript garbage collector. However, in practice, things aren’t quite that simple.</p>
    <div>
      <h2>Inherent issues with FinalizationRegistry</h2>
      <a href="#inherent-issues-with-finalizationregistry">
        
      </a>
    </div>
    <p>There is a fundamental issue with <code>FinalizationRegistry</code>: garbage collection is non-deterministic, and may clean up your unused memory at some arbitrary point in the future. In some cases, garbage collection might not even run and your “finalizers” will never be triggered.</p><p>This is part of its <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry#notes_on_cleanup_callbacks"><u>documentation</u></a> as well:</p><blockquote><p><i>“A conforming JavaScript implementation, even one that does garbage collection, is not required to call cleanup callbacks. When and whether it does so is entirely down to the implementation of the JavaScript engine. When a registered object is reclaimed, any cleanup callbacks for it may be called then, or some time later, or not at all.”</i></p></blockquote><p>Even Emscripten mentions this in their <a href="https://emscripten.org/docs/porting/connecting_cpp_and_javascript/embind.html#automatic-memory-management"><u>documentation</u></a>: “<i>... finalizers are not guaranteed to be called, and even if they are, there are no guarantees about their timing or order of execution, which makes them unsuitable for general RAII-style resource management.</i>”</p><p>Given their non-deterministic nature, developers seldom use finalizers for any essential program logic. Treat them as a last-ditch safety net, not as a primary cleanup mechanism — explicit, deterministic teardown logic is almost always safer, faster, and easier to reason about.</p>
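<p>To make that concrete, here is a minimal sketch (our illustration, not from the spec) of the recommended shape: an explicit, deterministic <code>close()</code> as the primary path, with the registry as a backstop only. Note that the held value passed to <code>register()</code> must not be the resource itself; that would keep the resource reachable, and it could never be collected:</p>

```javascript
// Sketch: explicit teardown first, finalizer as a safety net only.
// `state` is a small record separate from the resource, so holding it
// in the registry does not keep the resource itself alive.
const registry = new FinalizationRegistry((state) => {
  if (!state.released) {
    state.released = true;
    console.log("backstop: released by the GC finalizer");
  }
});

function makeResource() {
  const state = { released: false };
  const resource = {
    close() {
      if (state.released) return;   // idempotent: safe to call twice
      state.released = true;
      console.log("released deterministically");
    },
  };
  registry.register(resource, state);
  return resource;
}

const r = makeResource();
r.close();   // → "released deterministically"; the finalizer later has nothing to do
```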
    <div>
      <h2>Enabling FinalizationRegistry in Workers</h2>
      <a href="#enabling-finalizationregistry-in-workers">
        
      </a>
    </div>
    <p>Given its non-deterministic nature and limited early adoption, we initially disabled the <code>FinalizationRegistry</code> API in our runtime. However, as usage of Wasm-based Workers grew — particularly among high-traffic customers — we began to see new demands emerge. One such customer was running an extremely high requests per second (RPS) workload using WebAssembly, and needed tight control over memory to sustain massive traffic spikes without degradation. This highlighted a gap in our memory management capabilities, especially in cases where manual cleanup wasn’t always feasible or reliable. As a result, we re-evaluated our stance and began exploring the challenges and trade-offs of enabling <code>FinalizationRegistry</code> within the Workers environment, despite its known limitations.</p>
    <div>
      <h3>Preventing footguns with safe defaults</h3>
      <a href="#preventing-footguns-with-safe-defaults">
        
      </a>
    </div>
    <p>Because this API could be misused and cause unpredictable results for our customers, we’ve added a few safeguards. Most importantly, cleanup callbacks are run without an active async context, which means they cannot perform any I/O. This includes sending events to a tail Worker, logging metrics, or making fetch requests.</p><p>While this might sound limiting, it’s very intentional. Finalization callbacks are meant for cleanup — especially for releasing WebAssembly memory — not for triggering side effects. If we allowed I/O here, developers might (accidentally) rely on finalizers to perform critical logic that depends on when garbage collection happens. That timing is non-deterministic and outside your control, which could lead to flaky, hard-to-debug behavior.</p><p>We don’t have full control over when V8’s garbage collector performs cleanup, but V8 does let us nudge the timing of finalizer execution. Like Node and Deno, Workers queue <code>FinalizationRegistry</code> jobs only after the microtask queue has drained, so each cleanup batch slips into the quiet slots between I/O phases of the event loop.</p>
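<p>As a rough illustration of where those quiet slots fall (plain event-loop behavior, nothing Workers-specific): promise microtasks always drain completely before the next macrotask runs, and it is after such a drain that finalizer batches are queued:</p>

```javascript
// Microtasks (promise jobs) run to completion before the next macrotask;
// FinalizationRegistry callbacks are queued after a drain like this, between I/O turns.
const order = [];

setTimeout(() => order.push("macrotask: next turn of the event loop"), 0);
Promise.resolve().then(() => order.push("microtask 1"));
queueMicrotask(() => order.push("microtask 2"));

setTimeout(() => {
  console.log(order.join(" | "));
  // → "microtask 1 | microtask 2 | macrotask: next turn of the event loop"
}, 10);
```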
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69QSYwmiAueWyP2KlvBwGi/183345cbb043acbe4d76c0fdb2ebf4dd/image1.png" />
          </figure>
    <div>
      <h3>Security concerns</h3>
      <a href="#security-concerns">
        
      </a>
    </div>
    <p>The Cloudflare Workers runtime is <a href="https://blog.cloudflare.com/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model"><u>specifically engineered</u></a> to prevent side-channel attacks in a multi-tenant environment. Prior to enabling the <code>FinalizationRegistry</code> API, we did a thorough analysis to assess its impact on our security model and determine the necessity of additional safeguards. The non-deterministic nature of <code>FinalizationRegistry</code> raised concerns about potential information leaks leading to Spectre-like vulnerabilities, particularly regarding the possibility of exploiting the garbage collector (GC) as a confused deputy or using it to create a timer.</p>
    <div>
      <h4>GC as confused deputy</h4>
      <a href="#gc-as-confused-deputy">
        
      </a>
    </div>
    <p>One concern was whether the garbage collector (GC) could act as a confused deputy — a security antipattern where a privileged component is tricked into misusing its authority on behalf of untrusted code. In theory, a clever attacker could try to exploit the GC's ability to access internal object lifetimes and memory behavior in order to infer or manipulate sensitive information across isolation boundaries.</p><p>However, our analysis indicated that the V8 GC is effectively contained and not exposed to confused deputy risks within the runtime. This is attributed to our existing threat models and security measures, such as the isolation of user code, where the V8 Isolate serves as the primary security boundary. Furthermore, even though <code>FinalizationRegistry</code> involves some internal GC mechanics, the callbacks themselves execute in the same isolate that registered them — never across isolates — ensuring isolation remains intact.</p>
    <div>
      <h4>GC as timer</h4>
      <a href="#gc-as-timer">
        
      </a>
    </div>
    <p>We also evaluated the possibility of using <code>FinalizationRegistry</code> as a high-resolution timing mechanism — a common vector in side-channel attacks like Spectre. The concern here is that an attacker could schedule object finalization in a way that indirectly leaks information via the timing of callbacks.</p><p>In practice, though, the resolution of such a "GC timer" is low and highly variable, offering poor reliability for side-channel attacks. Additionally, we control when finalizer callbacks are scheduled — delaying them until after the microtask queue has drained — giving us an extra layer of control to limit timing precision and reduce risk.</p><p>Following a review with our security research team, we determined that our existing security model is sufficient to support this API.</p>
    <div>
      <h2>Predictable cleanups?</h2>
      <a href="#predictable-cleanups">
        
      </a>
    </div>
    <p>JavaScript's <a href="https://tc39.es/proposal-explicit-resource-management/"><u>Explicit Resource Management</u></a> proposal introduces a deterministic approach to handling resources that need manual cleanup, such as file handles, network connections, or database sessions. Drawing inspiration from constructs like C#'s <a href="https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using"><u>using</u></a> and Python's <a href="https://docs.python.org/3/reference/compound_stmts.html#with"><u>with</u></a>, it adds the <code>using</code> and <code>await using</code> syntax. This new syntax guarantees that objects adhering to a specific cleanup protocol are automatically disposed of when they go out of scope.</p><p>Let’s look at a simple example to understand it a bit better.</p>
            <pre><code>class MyResource {
  [Symbol.dispose]() {
    console.log("Resource cleaned up!");
  }

  use() {
    console.log("Using the resource...");
  }
}

{
  using res = new MyResource();
  res.use();
} // When this block ends, Symbol.dispose is called automatically (and deterministically).</code></pre>
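<p>For runtimes that don’t support the <code>using</code> keyword yet, the behavior is easy to approximate. Roughly (and simplifying several details of the proposal), the block above desugars to a <code>try</code>/<code>finally</code> that invokes <code>Symbol.dispose</code> at block exit, even when the block exits via an exception:</p>

```javascript
// Simplified sketch of what a `using` block desugars to. The Symbol.for
// fallback is only here so the sketch runs on engines without Symbol.dispose.
const disposeSym = Symbol.dispose ?? Symbol.for("Symbol.dispose");

class MyResource {
  [disposeSym]() {
    console.log("Resource cleaned up!");
  }

  use() {
    console.log("Using the resource...");
  }
}

{
  const res = new MyResource();
  try {
    res.use();
  } finally {
    res[disposeSym]();   // what `using` inserts for you at block exit
  }
}
// → "Using the resource..." then "Resource cleaned up!"
```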
            <p>The proposal also includes additional features that offer finer control over when dispose methods are called. But at a high level, it provides a much-needed, deterministic way to manage resource cleanup. Let’s now update our earlier WebAssembly-based example to take advantage of this new mechanism instead of relying on <code>FinalizationRegistry</code>:</p>
            <pre><code>const { instance } = await WebAssembly.instantiate(WasmBytes, {});
const { memory, make_buffer, free_buffer } = instance.exports;

class WasmBuffer {
  constructor(ptr, len) {
    this.ptr = ptr;
    this.len = len;
  }

  [Symbol.dispose]() {
    free_buffer(this.ptr, this.len);
  }
}

{
  const lenPtr = 0;
  const ptr = make_buffer(lenPtr);
  const len = new DataView(memory.buffer).getUint32(lenPtr, true);

  using buf = new WasmBuffer(ptr, len);

  const data = new Uint8Array(memory.buffer, ptr, len);
  console.log(new TextDecoder().decode(data));  // → "Hello from Rust"
} // Symbol.dispose (and with it free_buffer) is called deterministically here</code></pre>
            <p>Explicit Resource Management provides a more dependable way to clean up resources than <code>FinalizationRegistry</code>, as it runs cleanup logic — such as calling <code>free_buffer</code> in <code>WasmBuffer</code> via <code>[Symbol.dispose]()</code> and the <code>using</code> syntax — deterministically, rather than relying on the garbage collector’s unpredictable timing. This makes it a more reliable choice for managing critical resources, especially memory.</p>
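<p>The two mechanisms also compose: deterministic disposal as the primary path, with the registry as a leak backstop for buffers whose lifetime you can’t scope with a block. A hedged sketch follows; <code>free_buffer</code> is stubbed here for illustration, and the <code>Symbol.for</code> fallback only exists so the sketch runs on engines without <code>Symbol.dispose</code>:</p>

```javascript
// Hybrid pattern: Symbol.dispose frees eagerly and unregisters the backstop;
// if dispose is never called, the finalizer eventually frees the memory instead.
const disposeSym = Symbol.dispose ?? Symbol.for("Symbol.dispose");
const freeCalls = [];
const free_buffer = (ptr, len) => freeCalls.push([ptr, len]); // stub for the Wasm export

const registry = new FinalizationRegistry(({ ptr, len, state }) => {
  if (!state.freed) {
    state.freed = true;
    free_buffer(ptr, len);         // backstop path, non-deterministic timing
  }
});

class WasmBuffer {
  constructor(ptr, len) {
    this.ptr = ptr;
    this.len = len;
    this.state = { freed: false };
    // `this` also serves as the unregister token
    registry.register(this, { ptr, len, state: this.state }, this);
  }

  [disposeSym]() {
    if (this.state.freed) return;  // idempotent
    this.state.freed = true;
    registry.unregister(this);     // the backstop is no longer needed
    free_buffer(this.ptr, this.len);
  }
}

const buf = new WasmBuffer(8, 16);
buf[disposeSym]();
console.log(freeCalls.length);     // → 1 (freed exactly once, deterministically)
```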
    <div>
      <h2>Future</h2>
      <a href="#future">
        
      </a>
    </div>
    <p>Emscripten <a href="https://emscripten.org/docs/porting/connecting_cpp_and_javascript/embind.html#automatic-memory-management"><u>already makes use</u></a> of Explicit Resource Management for handling Wasm memory, using <code>FinalizationRegistry</code> as a last resort, while wasm-bindgen <a href="https://github.com/rustwasm/wasm-bindgen/pull/4118"><u>supports it in experimental mode</u></a>. The proposal has seen growing adoption across the ecosystem and was recently conditionally advanced to Stage 4 in the TC39 process, meaning it’ll soon officially be part of the JavaScript language standard. This reflects a broader shift toward more predictable and structured memory cleanup in WebAssembly applications.</p><p>We <a href="https://developers.cloudflare.com/workers/platform/changelog/#2025-05-22"><u>recently added support</u></a> for this feature in Cloudflare Workers as well, enabling developers to take advantage of deterministic resource cleanup in edge environments. As support for the feature matures, it's likely to become a standard practice for managing linear memory safely and reliably.</p>
    <div>
      <h3>FinalizationRegistry: still not dead yet?</h3>
      <a href="#finalizationregistry-still-not-dead-yet">
        
      </a>
    </div>
    <p>Explicit Resource Management brings much-needed structure and predictability to resource cleanup in WebAssembly and JavaScript interop applications, but it doesn’t make <code>FinalizationRegistry</code> obsolete. There are still important use cases, particularly when a Wasm-allocated object’s lifecycle is out of your hands or when explicit disposal isn’t practical. In scenarios involving third-party libraries, dynamic lifecycles, or integration layers that don’t follow <code>using</code> patterns, <code>FinalizationRegistry</code> remains a valuable fallback to prevent memory leaks.</p><p>Looking ahead, a hybrid approach will likely become the standard in Wasm-JavaScript applications. Developers can use ERM for deterministic cleanup of Wasm memory and other resources, while relying on <code>FinalizationRegistry</code> as a safety net when full control isn’t possible. Together, they offer a more reliable and flexible foundation for managing memory across the JavaScript and WebAssembly boundary.</p><p>Ready to try it yourself? Deploy a <a href="https://developers.cloudflare.com/workers/runtime-apis/webassembly/"><u>WebAssembly-powered Worker</u></a> and experiment with memory management — start building with <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a> today.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[JavaScript]]></category>
            <guid isPermaLink="false">3cuVG8K7iSqKV8O3fNrAYK</guid>
            <dc:creator>Ketan Gupta</dc:creator>
            <dc:creator>Harris Hancock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing Python to Workers using Pyodide and WebAssembly]]></title>
            <link>https://blog.cloudflare.com/python-workers/</link>
            <pubDate>Tue, 02 Apr 2024 13:00:45 GMT</pubDate>
            <description><![CDATA[ Introducing Cloudflare Workers in Python, now in open beta! We've revamped our systems to support Python, from the runtime to deployment. Learn about Python Worker's lifecycle, dynamic linking, and memory snapshots in this post ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BCvEmK49aK7qLQuTkUsI1/0aecc2333aabe7e94ee99cdf9f830ef6/pythonweba.png" />
            
            </figure><p>Starting today, in open beta, you can now <a href="https://developers.cloudflare.com/workers/languages/python/">write Cloudflare Workers in Python</a>.</p><p>This new support for Python is different from how Workers have historically supported languages beyond JavaScript — in this case, we have directly integrated a Python implementation into <a href="https://github.com/cloudflare/workerd">workerd</a>, the open-source Workers runtime. All <a href="https://developers.cloudflare.com/workers/configuration/bindings/">bindings</a>, including bindings to <a href="https://developers.cloudflare.com/vectorize/">Vectorize</a>, <a href="https://developers.cloudflare.com/workers-ai/">Workers AI</a>, <a href="https://developers.cloudflare.com/r2/">R2</a>, <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a>, and more are supported on day one. Python Workers can import a subset of popular Python <a href="https://developers.cloudflare.com/workers/languages/python/packages/">packages</a> including <a href="https://fastapi.tiangolo.com/">FastAPI</a>, <a href="https://python.langchain.com/docs/get_started/introduction">Langchain</a>, <a href="https://numpy.org/">Numpy</a> and more. There are no extra build steps or external toolchains.</p><p>To do this, we’ve had to push the bounds of all of our systems, from the runtime itself, to our deployment system, to the contents of the Worker bundle that is published across our <a href="https://www.cloudflare.com/network/">network</a>. You can <a href="https://developers.cloudflare.com/workers/languages/python/">read the docs</a>, and start using it today.</p><p>We want to use this post to pull back the curtain on the internal lifecycle of a Python Worker, share what we’ve learned in the process, and highlight where we’re going next.</p>
    <div>
      <h2>Beyond “Just compile to WebAssembly”</h2>
      <a href="#beyond-just-compile-to-webassembly">
        
      </a>
    </div>
    <p>Cloudflare Workers have supported WebAssembly <a href="/webassembly-on-cloudflare-workers">since 2018</a> — each Worker is a <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/">V8 isolate</a>, powered by the same JavaScript engine as the Chrome web browser. In principle, it’s been <a href="/webassembly-on-cloudflare-workers">possible</a> for years to write Workers in any language — including Python — so long as it first compiles to WebAssembly or to JavaScript.</p><p>In practice, just because something is possible doesn’t mean it’s simple. And just because “hello world” works doesn’t mean you can reliably build an application. Building full applications requires supporting an ecosystem of packages that developers are used to building with. For a platform to truly support a programming language, it’s necessary to go much further than showing how to compile code using external toolchains.</p><p>Python Workers are different from what we’ve done in the past. It’s early, and still in beta, but we think it shows what providing first-class support for programming languages beyond JavaScript can look like on Workers.</p>
    <div>
      <h2>The lifecycle of a Python Worker</h2>
      <a href="#the-lifecycle-of-a-python-worker">
        
      </a>
    </div>
    <p>With Pyodide now <a href="https://github.com/cloudflare/workerd/tree/main/src/pyodide">built into workerd</a>, you can write a Worker like this:</p>
            <pre><code>from js import Response

async def on_fetch(request, env):
    return Response.new("Hello world!")</code></pre>
            <p>...with a <code>wrangler.toml</code> file that points to a <code>.py</code> file:</p>
            <pre><code>name = "hello-world-python-worker"
main = "src/entry.py"
compatibility_date = "2024-03-18"
compatibility_flags = ["python_workers"]</code></pre>
            <p>…and when you run <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev">npx wrangler@latest dev</a>, the Workers runtime will:</p><ol><li><p>Determine which <a href="https://developers.cloudflare.com/workers/languages/python/packages/">version of Pyodide</a> is required, based on your <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/">compatibility date</a></p></li><li><p>Create an isolate for your Worker, and automatically inject Pyodide</p></li><li><p>Serve your Python code using Pyodide</p></li></ol><p>This all happens under the hood — no extra toolchain or precompilation steps needed. The Python execution environment is provided for you, mirroring how Workers written in JavaScript already work.</p>
    <div>
      <h2>A Python interpreter built into the Workers runtime</h2>
      <a href="#a-python-interpreter-built-into-the-workers-runtime">
        
      </a>
    </div>
    <p>Just as JavaScript has <a href="https://en.wikipedia.org/wiki/List_of_ECMAScript_engines">many engines</a>, Python has <a href="https://wiki.python.org/moin/PythonImplementations">many implementations</a> that can execute Python code. <a href="https://github.com/python/cpython">CPython</a> is the reference implementation of Python. If you’ve used Python before, this is almost certainly what you’ve used, and is commonly referred to as just “Python”.</p><p><a href="https://pyodide.org/en/stable/">Pyodide</a> is a port of CPython to WebAssembly. It interprets Python code, without any need to precompile the Python code itself to any other format. It runs in a web browser — check out this <a href="https://pyodide-console.pages.dev/">REPL</a>. It is true to the CPython that Python developers know and expect, providing <a href="https://developers.cloudflare.com/workers/languages/python/stdlib/">most of the Python Standard Library</a>. It provides a foreign function interface (FFI) to JavaScript, allowing you to call JavaScript APIs directly from Python — more on this below. It provides popular open-source <a href="https://developers.cloudflare.com/workers/languages/python/packages/">packages</a>, and can import pure Python packages directly from PyPI.</p><p>Pyodide struck us as the perfect fit for Workers. It is designed to allow the core interpreter and each native Python module to be built as separate WebAssembly modules, dynamically linked at runtime. This allows the code footprint for these modules to be shared among all Workers running on the same machine, rather than requiring each Worker to bring its own copy. This is essential to making WebAssembly work well in the Workers environment, where we often run <a href="https://www.infoq.com/presentations/cloudflare-v8/">thousands of Workers per machine</a> — we need Workers using the same programming language to share their runtime code footprint. 
Running thousands of Workers on every machine is what makes it possible for us to deploy every application in every location at a <a href="/workers-pricing-scale-to-zero">reasonable price</a>.</p><p>Just like with JavaScript Workers, with Python Workers we provide the runtime for you:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4eCllqbVYDOzao6swhou7t/c4731707e15b9b37b4198f05a88682ee/VMs--Containers--ans-Isolates-comparison.png" />
            
            </figure><p>Pyodide is currently the exception — most languages that target WebAssembly do not yet support dynamic linking, so each application ends up bringing its own copy of its language runtime. We hope to see more languages support dynamic linking in the future, so that we can more effectively bring them to Workers.</p>
    <div>
      <h3>How Pyodide works</h3>
      <a href="#how-pyodide-works">
        
      </a>
    </div>
    <p>Pyodide executes Python code in WebAssembly, which is a sandboxed environment, separated from the host runtime. Unlike running native code, all operations outside of pure computation (such as file reads) must be provided by a runtime environment, then <i>imported</i> by the WebAssembly module.</p><p><a href="https://llvm.org/">LLVM</a> provides three target triples for WebAssembly:</p><ol><li><p><b>wasm32-unknown-unknown</b> – this backend provides no C standard library or system call interface; to support this backend, we would need to manually rewrite every system or library call to make use of imports we would define ourselves in the runtime.</p></li><li><p><b>wasm32-wasi</b> – WASI is a standardized system interface, and defines a standard set of imports that are implemented in WASI runtimes such as <a href="https://github.com/bytecodealliance/wasmtime/">wasmtime</a>.</p></li><li><p><b>wasm32-unknown-emscripten</b> – Like WASI, Emscripten defines the imports that a WebAssembly program needs to execute, but also outputs an accompanying JavaScript library that implements these imported functions.</p></li></ol><p>Pyodide uses Emscripten, and provides three things:</p><ol><li><p>A distribution of the CPython interpreter, compiled using Emscripten</p></li><li><p>A foreign function interface (FFI) between Python and JavaScript</p></li><li><p>A set of third-party Python packages, compiled using Emscripten’s compiler to WebAssembly.</p></li></ol><p>Of these targets, only Emscripten currently supports dynamic linking, which, as we noted above, is essential to providing a shared language runtime for Python that is shared across isolates. Emscripten does this by <a href="https://emscripten.org/docs/compiling/Dynamic-Linking.html">providing implementations of dlopen and dlsym,</a> which use the accompanying JavaScript library to modify the WebAssembly program’s table to link additional WebAssembly-compiled modules at runtime. 
WASI <a href="https://github.com/WebAssembly/component-model/blob/main/design/mvp/examples/SharedEverythingDynamicLinking.md#runtime-dynamic-linking">does not yet support</a> the dlopen/dlsym dynamic linking abstractions used by CPython.</p>
    <div>
      <h2>Pyodide and the magic of foreign function interfaces (FFI)</h2>
      <a href="#pyodide-and-the-magic-of-foreign-function-interfaces-ffi">
        
      </a>
    </div>
    <p>You might have noticed that in our Hello World Python Worker, we import Response from the js module:</p>
            <pre><code>from js import Response

async def on_fetch(request, env):
    return Response.new("Hello world!")</code></pre>
            <p>Why is that?</p><p>Most Workers are written in JavaScript, and most of our engineering effort on the Workers runtime goes into improving JavaScript Workers. There is a risk in adding a second language that it might never reach feature parity with the first language and always be a second class citizen. Pyodide’s foreign function interface (FFI) is critical to avoiding this by providing access to all JavaScript functionality from Python. This can be used by the Worker author directly, and it is also used to make packages like <a href="https://developers.cloudflare.com/workers/languages/python/packages/fastapi/">FastAPI</a> and <a href="https://developers.cloudflare.com/workers/languages/python/packages/langchain/">Langchain</a> work out-of-the-box, as we’ll show later in this post.</p><p>An FFI is a system for calling functions in one language that are implemented in another language. In most cases, an FFI is defined by a "higher-level" language in order to call functions implemented in a systems language, often C. Python’s <a href="https://docs.python.org/3/library/ctypes.html#module-ctypes">ctypes module</a> is such a system. These sorts of foreign function interfaces are often difficult to use because of the nature of C APIs.</p><p>Pyodide’s foreign function interface is an interface between Python and JavaScript, which are two high level object-oriented languages with a lot of design similarities. When passed from one language to another, immutable types such as strings and numbers are transparently translated. All mutable objects are wrapped in an appropriate proxy.</p><p>When a JavaScript object is passed into Python, Pyodide determines which JavaScript protocols the object supports and <a href="https://github.com/pyodide/pyodide/blob/main/src/core/jsproxy.c#L3781-L3791">dynamically constructs</a> an appropriate Python class that implements the corresponding Python protocols. 
For example, if the JavaScript object supports the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols">JavaScript iteration protocol</a> then the proxy will support the <a href="https://docs.python.org/3/library/stdtypes.html#iterator-types">Python iteration protocol</a>. If the JavaScript object is a Promise or other <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise#thenables">thenable</a>, the Python object will be an <a href="https://docs.python.org/3/reference/datamodel.html#awaitable-objects">awaitable</a>.</p>
            <pre><code>from js import JSON

js_array = JSON.parse("[1,2,3]")

for entry in js_array:
   print(entry)</code></pre>
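<p>On the JavaScript side, the iteration protocol Pyodide probes for is the standard <code>Symbol.iterator</code> contract, which arrays implement natively. That is why the proxy above can drive Python’s <code>for</code> loop:</p>

```javascript
// What Pyodide detects: an object whose [Symbol.iterator]() returns an
// iterator with next() → { value, done }. Arrays have this built in.
const js_array = JSON.parse("[1,2,3]");

console.log(typeof js_array[Symbol.iterator]);  // → "function"

const iter = js_array[Symbol.iterator]();
console.log(iter.next());                       // → { value: 1, done: false }
```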
            <p>The lifecycle of a request to a Python Worker makes use of Pyodide’s FFI, wrapping the incoming JavaScript <a href="https://developers.cloudflare.com/workers/runtime-apis/request/">Request</a> object in a <a href="https://pyodide.org/en/stable/usage/api/python-api/ffi.html#pyodide.ffi.JsProxy">JsProxy</a> object that is accessible in your Python code. It then converts the value returned by the Python Worker’s <a href="https://developers.cloudflare.com/workers/runtime-apis/handlers/">handler</a> into a JavaScript <a href="https://developers.cloudflare.com/workers/runtime-apis/response/">Response</a> object that can be delivered back to the client:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/736nXjbNS1gxb4dm8xaptB/9f4dd0232d22e7d7070b1f716813a7e6/Python-Worker-Request-Lifecycle.png" />
            
            </figure>
    <div>
      <h2>Why dynamic linking is essential, and static linking isn’t enough</h2>
      <a href="#why-dynamic-linking-is-essential-and-static-linking-isnt-enough">
        
      </a>
    </div>
    <p>Python comes with <a href="https://cffi.readthedocs.io/en/stable/">a C FFI</a>, and many Python packages use this FFI to import native libraries. These libraries are typically written in C, so they must first be compiled down to WebAssembly in order to work on the Workers runtime. As we noted above, Pyodide is built with Emscripten, which overrides Python’s C FFI — any time a package tries to load a native library, it is instead loaded from a WebAssembly module that is provided by the Workers runtime. Dynamic linking is what makes this possible — it is what lets us override Python’s C FFI, allowing Pyodide to support many <a href="https://developers.cloudflare.com/workers/languages/python/packages/">Python packages</a> that have native library dependencies.</p><p>Dynamic linking is “pay as you go”, while static linking is “pay upfront” — if code is statically linked into your binary, it must be loaded upfront in order for the binary to run, even if this code is never used.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JiF8v11hINyO6CpNnuP6a/52fad68dedc7a4d1c6beba46eb13f964/Python-Workers---Runtime.png" />
            
            </figure><p>Dynamic linking enables the Workers runtime to share the underlying WebAssembly modules of packages across different Workers that are running on the same machine.</p><p>We won’t go too much into detail on <a href="https://emscripten.org/docs/compiling/Dynamic-Linking.html#runtime-dynamic-linking-with-dlopen">how dynamic linking works in Emscripten</a>, but the main takeaway is that the Emscripten runtime fetches WebAssembly modules from a filesystem abstraction provided in JavaScript. For each Worker, we generate a filesystem at runtime, whose structure mimics a Python distribution that has the Worker’s dependencies installed, but whose underlying files are shared between Workers. This makes it possible to share Python and WebAssembly files between multiple Workers that import the same dependency. Today, we’re able to share these files across Workers, but copy them into each new isolate. We think we can go even further, by employing <a href="https://en.wikipedia.org/wiki/Copy-on-write">copy-on-write</a> techniques to share the underlying resource across many Workers.</p>
    <div>
      <h2>Supporting Server and Client libraries</h2>
      <a href="#supporting-server-and-client-libraries">
        
      </a>
    </div>
    <p>Python has a wide variety of popular HTTP client libraries, including <a href="https://www.python-httpx.org/">httpx</a>, <a href="https://pypi.org/project/urllib3/">urllib3</a>, <a href="https://pypi.org/project/requests/">requests</a> and more. Unfortunately, none of them work out of the box in Pyodide. Adding support for these has been one of the longest running user requests for the Pyodide project. The Python HTTP client libraries all work with raw sockets, and the browser security model and CORS do not allow this, so we needed another way to make them work in the Workers runtime.</p>
    <div>
      <h3>Async Client libraries</h3>
      <a href="#async-client-libraries">
        
      </a>
    </div>
    <p>For libraries that can make requests asynchronously, including <a href="https://docs.aiohttp.org/en/stable/index.html">aiohttp</a> and <a href="https://www.python-httpx.org/">httpx</a>, we can use the <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/">Fetch API</a> to make requests. We do this by patching the library, instructing it to use the Fetch API from JavaScript — taking advantage of Pyodide’s FFI. <a href="https://github.com/cloudflare/pyodide/blob/main/packages/httpx/httpx_patch.py">The httpx patch</a> ends up quite simple — fewer than 100 lines of code. Simplified even further, it looks like this:</p>
            <pre><code>from js import Headers, Request, fetch

def py_request_to_js_request(py_request):
    js_headers = Headers.new(py_request.headers)
    return Request.new(py_request.url, method=py_request.method, headers=js_headers)

def js_response_to_py_response(js_response):
    ... # omitted

async def do_request(py_request):
    js_request = py_request_to_js_request(py_request)
    js_response = await fetch(js_request)
    py_response = js_response_to_py_response(js_response)
    return py_response</code></pre>
            
    <div>
      <h3>Synchronous Client libraries</h3>
      <a href="#synchronous-client-libraries">
        
      </a>
    </div>
    <p>Another challenge in supporting Python HTTP client libraries is that many Python APIs are synchronous. For these libraries, we cannot use the <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/">fetch API</a> directly because it is asynchronous.</p><p>Thankfully, Joe Marshall recently landed <a href="https://urllib3.readthedocs.io/en/stable/reference/contrib/emscripten.html">a contribution to urllib3</a> that adds Pyodide support in web browsers by:</p><ol><li><p>Checking if blocking with <code>Atomics.wait()</code> is possible</p><ol><li><p>If so, start a fetch worker thread</p></li><li><p>Delegate the fetch operation to the worker thread and serialize the response into a SharedArrayBuffer</p></li><li><p>In the Python thread, use Atomics.wait to block for the response in the SharedArrayBuffer</p></li></ol></li><li><p>If <code>Atomics.wait()</code> doesn’t work, fall back to a synchronous XMLHttpRequest</p></li></ol><p>Despite this, today Cloudflare Workers do not support <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers">worker threads</a> or synchronous <a href="https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest">XMLHttpRequest</a>, so neither of these two approaches will work in Python Workers. We do not support synchronous requests today, but there is a way forward…</p>
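    <p>The delegation trick can be modeled in plain Python — a helper thread stands in for the fetch worker, and a blocking queue stands in for <code>Atomics.wait()</code> on the SharedArrayBuffer (a sketch of the pattern only, not urllib3's actual code):</p>
            <pre><code>import threading, queue

def delegate_sync(operation, *args):
    """Run `operation` on a helper thread and block the caller for the
    result, mirroring how the urllib3 backend delegates a fetch to a
    worker thread and blocks via Atomics.wait for the serialized response."""
    result = queue.Queue(maxsize=1)
    threading.Thread(target=lambda: result.put(operation(*args))).start()
    return result.get()  # blocks, like Atomics.wait on the shared buffer

assert delegate_sync(lambda a, b: a + b, 20, 22) == 42</code></pre>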
    <div>
      <h3>WebAssembly Stack Switching</h3>
      <a href="#webassembly-stack-switching">
        
      </a>
    </div>
    <p>There is an approach that will allow us to support synchronous requests. WebAssembly has <a href="https://github.com/WebAssembly/js-promise-integration">a stage 3 proposal adding support for stack switching</a>, which <a href="https://v8.dev/blog/jspi">V8 has implemented</a>. Pyodide contributors have been working on adding stack switching support to Pyodide since September 2022, and it is almost ready.</p><p>With this support, Pyodide exposes a function called <code>run_sync</code> that can block until an awaitable completes:</p>
            <pre><code>from pyodide.ffi import run_sync

def sync_fetch(py_request):
    js_request = py_request_to_js_request(py_request)
    js_response = run_sync(fetch(js_request))
    return js_response_to_py_response(js_response)</code></pre>
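            <p>To see the shape of the API from the caller's side, here is a pure-Python stand-in for <code>run_sync</code> (in Workers it blocks via stack switching; this sketch simply drives a fresh event loop, and <code>fake_fetch</code> is a made-up placeholder):</p>
            <pre><code>import asyncio

async def fake_fetch(url):
    await asyncio.sleep(0)  # stands in for network latency
    return {"url": url, "status": 200}

def run_sync(coro):
    # Toy equivalent of pyodide.ffi.run_sync for ordinary CPython.
    return asyncio.run(coro)

def sync_fetch(url):
    # A synchronous call site, as requests-style code expects.
    return run_sync(fake_fetch(url))

assert sync_fetch("https://example.com")["status"] == 200</code></pre>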
            
    <div>
      <h3>FastAPI and Python’s Asynchronous Server Gateway Interface</h3>
      <a href="#fastapi-and-pythons-asynchronous-server-gateway-interface">
        
      </a>
    </div>
    <p><a href="https://fastapi.tiangolo.com/">FastAPI</a> is one of the most popular libraries for defining Python servers. FastAPI applications use a protocol called the <a href="https://asgi.readthedocs.io/en/latest/">Asynchronous Server Gateway Interface</a> (ASGI). This means that FastAPI never reads from or writes to a socket itself. An ASGI application expects to be hooked up to an ASGI server, typically <a href="https://www.uvicorn.org/">uvicorn</a>. The ASGI server handles all of the raw sockets on the application’s behalf.</p><p>Conveniently for us, this means that FastAPI works in Cloudflare Workers without any patches or changes to FastAPI itself. We simply need to replace <a href="https://www.uvicorn.org/">uvicorn</a> with an appropriate ASGI server that can run within a Worker. Our initial implementation lives <a href="https://github.com/cloudflare/workerd/blob/main/src/pyodide/internal/asgi.py">here</a>, in <a href="https://github.com/cloudflare/pyodide">the fork of Pyodide</a> that we maintain. We hope to add a more comprehensive feature set, add test coverage, and then upstream this implementation into Pyodide.</p><p>You can try this yourself by cloning <a href="https://github.com/cloudflare/python-workers-examples">cloudflare/python-workers-examples</a>, and running <code>npx wrangler@latest dev</code> in the directory of the FastAPI example.</p>
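    <p>To make the ASGI contract concrete, here is a toy ASGI application and a minimal in-process driver standing in for the server side (illustrative only — the real Worker-side adapter lives in the Pyodide fork linked above):</p>
            <pre><code>import asyncio

async def app(scope, receive, send):
    # A minimal ASGI application — the interface FastAPI ultimately speaks.
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello from ASGI"})

async def drive(app, path="/"):
    # Stands in for the ASGI server (uvicorn, or the Worker adapter):
    # it owns the transport and exchanges events with the application.
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(event):
        sent.append(event)
    await app({"type": "http", "method": "GET", "path": path}, receive, send)
    return sent

events = asyncio.run(drive(app))
assert events[0]["status"] == 200
assert events[1]["body"] == b"hello from ASGI"</code></pre>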
    <div>
      <h2>Importing Python Packages</h2>
      <a href="#importing-python-packages">
        
      </a>
    </div>
    <p>Python Workers support <a href="https://developers.cloudflare.com/workers/languages/python/packages/">a subset of Python packages</a>, which are <a href="https://github.com/cloudflare/pyodide/tree/main/packages">provided directly by Pyodide</a>, including <a href="https://numpy.org/">numpy</a>, <a href="https://www.python-httpx.org/">httpx</a>, <a href="https://developers.cloudflare.com/workers/languages/python/packages/fastapi/">FastAPI</a>, <a href="https://developers.cloudflare.com/workers/languages/python/packages/langchain/">Langchain</a>, and more. This ensures compatibility with the Pyodide runtime by pinning package versions to Pyodide versions, and allows Pyodide to patch internal implementations, as we showed above in the case of httpx.</p><p>To import a package, simply add it to your <code>requirements.txt</code> file, without adding a version number. A specific version of the package is provided directly by Pyodide. Today, you can use packages in local development, and in the coming weeks, you will be able to deploy Workers that define dependencies in a <code>requirements.txt</code> file. Later in this post, we’ll show how we’re thinking about managing new versions of Pyodide and packages.</p><p>We maintain our own fork of Pyodide, which allows us to provide patches specific to the Workers runtime, and to quickly expand our support for packages in Python Workers, while also committing to upstreaming our changes back to Pyodide, so that the whole ecosystem of developers can benefit.</p><p>Python packages are often big and memory hungry though, and they can do a lot of work at import time. How can we ensure that you can bring in the packages you need, while mitigating long cold start times?</p>
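    <p>For example, a Worker that uses FastAPI and httpx would declare them by name only — the Pyodide-pinned versions are supplied for you:</p>
            <pre><code># requirements.txt — names only, no version pins
fastapi
httpx</code></pre>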
    <div>
      <h2>Making cold starts faster with memory snapshots</h2>
      <a href="#making-cold-starts-faster-with-memory-snapshots">
        
      </a>
    </div>
    <p>In the example at the start of this post, we mentioned that in local development, Pyodide is injected into your Worker. Pyodide itself is 6.4 MB — and Python packages can also be quite large.</p><p>If we simply shoved Pyodide into your Worker and uploaded it to Cloudflare, that’d be quite a large Worker to load into a new isolate — cold starts would be slow. On a fast computer with a good network connection, Pyodide takes about two seconds to initialize in a web browser: one second of network time and one second of CPU time. It wouldn’t be acceptable to initialize it every time you update your code for every isolate your Worker runs in across <a href="https://www.cloudflare.com/network/">Cloudflare’s network</a>.</p><p>Instead, when you run <a href="https://developers.cloudflare.com/workers/wrangler/commands/#deploy">npx wrangler@latest deploy</a>, the following happens:</p><ol><li><p>Wrangler uploads your Python code and your <code>requirements.txt</code> file to the Workers API</p></li><li><p>We send your Python code, and your <code>requirements.txt</code> file to the Workers runtime to be validated</p></li><li><p>We create a new isolate for your Worker, and automatically inject Pyodide plus any <a href="https://developers.cloudflare.com/workers/languages/python/packages/">packages</a> you’ve specified in your <code>requirements.txt</code> file.</p></li><li><p>We scan the Worker’s code for import statements, execute them, and then take a snapshot of the Worker’s WebAssembly linear memory.
Effectively, we perform the expensive work of importing packages at deploy time, rather than at runtime.</p></li><li><p>We deploy this snapshot alongside your Worker’s Python code to Cloudflare’s network.</p></li><li><p>Just like a JavaScript Worker, we execute the Worker’s <a href="https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time">top-level scope</a>.</p></li></ol><p>When a request comes in to your Worker, we load this snapshot and use it to bootstrap your Worker in an isolate, avoiding expensive initialization time:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2q0ztvdb60NUqlAsOWWFX4/203e421e31e25c5794f5fada1ad94c40/apipyth.png" />
            
            </figure><p>This takes cold starts for a basic Python Worker down to below 1 second. We’re not yet satisfied with this though. We’re confident that we can drive this down much, much further. How? By reusing memory snapshots.</p>
    <div>
      <h3>Reusing Memory Snapshots</h3>
      <a href="#reusing-memory-snapshots">
        
      </a>
    </div>
    <p>When you upload a Python Worker, we generate a single memory snapshot of the Worker’s top-level imports, including both Pyodide and any dependencies. This snapshot is specific to your Worker. It can’t be shared, even though most of its contents are the same as other Python Workers.</p><p>Instead, we can create a single, shared snapshot ahead of time, and preload it into a pool of “pre-warmed” isolates. These isolates would already have the Pyodide runtime loaded and ready — making a Python Worker work just like a JavaScript Worker. In both cases, the underlying interpreter and execution environment is provided by the Workers runtime, and available on-demand without delay. The only difference is that with Python, the interpreter runs in WebAssembly, within the Worker.</p><p>Snapshots are a common pattern across runtimes and execution environments. Node.js <a href="https://docs.google.com/document/d/1YEIBdH7ocJfm6PWISKw03szNAgnstA2B3e8PZr_-Gp4/edit#heading=h.1v0pvnoifuah">uses V8 snapshots to speed up startup time</a>. You can take <a href="https://github.com/firecracker-microvm/firecracker/blob/main/docs/snapshotting/snapshot-support.md">snapshots of Firecracker microVMs</a> and resume execution in a different process. There’s lots more we can do here — not just for Python Workers, but for Workers written in JavaScript as well, caching snapshots of compiled code from top-level scope and the state of the isolate itself. Workers are so fast and efficient that to-date we haven’t had to take snapshots in this way, but we think there are still big performance gains to be had.</p><p>This is our biggest lever towards driving cold start times down over the rest of 2024.</p>
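    <p>As a toy model of that direction (illustrative Python, not the runtime's implementation), the expensive baseline initialization is paid once and shared, with only a thin per-Worker layer on top:</p>
            <pre><code>import functools

@functools.lru_cache(maxsize=None)
def baseline_snapshot():
    # Expensive: initialize the Pyodide runtime's linear memory.
    # With a shared snapshot this happens once per machine.
    return b"pyodide-runtime-image"

def worker_snapshot(deps):
    # Cheap per-Worker layer on top of the shared baseline.
    return baseline_snapshot() + b"|" + b",".join(deps)

a = worker_snapshot([b"numpy"])
b = worker_snapshot([b"fastapi"])
assert baseline_snapshot() is baseline_snapshot()  # baseline built once
assert a.startswith(b"pyodide-runtime-image")
assert b.startswith(b"pyodide-runtime-image")</code></pre>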
    <div>
      <h2>Future proofing compatibility with Pyodide versions and Compatibility Dates</h2>
      <a href="#future-proofing-compatibility-with-pyodide-versions-and-compatibility-dates">
        
      </a>
    </div>
    <p>When you deploy a Worker to Cloudflare, you expect it to keep running indefinitely, even if you never update it again. There are Workers deployed in 2018 that are still running just fine in production.</p><p>We achieve this using <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/">Compatibility Dates</a> and <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/#compatibility-flags">Compatibility Flags</a>, which provide explicit opt-in mechanisms for new behavior and potentially backwards-incompatible changes, without impacting existing Workers.</p><p>This works in part because it mirrors how the Internet and web browsers work. You publish a web page with some JavaScript, and rightly expect it to work forever. Web browsers and Cloudflare Workers have the same type of commitment of stability to developers.</p><p>There is a challenge with Python though — both Pyodide and CPython are <a href="https://devguide.python.org/versions/">versioned</a>. Updated versions are published regularly and can contain breaking changes. And Pyodide provides a set of <a href="https://developers.cloudflare.com/workers/languages/python/packages/">built-in packages</a>, each with a pinned version number. This presents a question — how should we allow you to update your Worker to a newer version of Pyodide?</p><p>The answer is <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/">Compatibility Dates</a> and <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/#compatibility-flags">Compatibility Flags</a>.</p><p>A new version of Python is released every year in August, and a new version of Pyodide is released six (6) months later. When this new version of Pyodide is published, we will add it to Workers by gating it behind a Compatibility Flag, which is only enabled after a specified Compatibility Date. 
This lets us continually provide updates, without risk of breaking changes, extending the commitment we’ve made for JavaScript to Python.</p><p>Each Python release has a <a href="https://devguide.python.org/versions/">five (5) year support window</a>. Once this support window has passed for a given version of Python, security patches are no longer applied, making this version unsafe to rely on. To mitigate this risk, while still trying to hold as true as possible to our commitment of stability and long-term support, after five years any Python Worker still on a Python release that is outside of the support window will be automatically moved forward to the next oldest Python release. Python is a mature and stable language, so we expect that in most cases, your Python Worker will continue running without issue. But we recommend updating the compatibility date of your Worker regularly, to stay within the support window.</p><p>In between Python releases, we also expect to update and add additional <a href="https://developers.cloudflare.com/workers/languages/python/packages/">Python packages</a>, using the same opt-in mechanism. A Compatibility Flag will be a combination of the Python version and the release date of a set of packages. For example, <b>python_3.17_packages_2025_03_01</b>.</p>
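    <p>Concretely, a Worker would opt in through its configuration — a sketch, reusing the hypothetical flag name from above:</p>
            <pre><code># wrangler.toml (sketch)
name = "my-python-worker"
main = "src/entry.py"
compatibility_date = "2025-03-01"
compatibility_flags = ["python_workers", "python_3.17_packages_2025_03_01"]</code></pre>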
    <div>
      <h2>How bindings work in Python Workers</h2>
      <a href="#how-bindings-work-in-python-workers">
        
      </a>
    </div>
    <p>We mentioned earlier that Pyodide provides a foreign function interface (FFI) to JavaScript — meaning that you can directly use JavaScript objects, methods, functions and more, directly from Python.</p><p>This means that from day one, all <a href="https://developers.cloudflare.com/workers/configuration/bindings/">binding</a> APIs to other Cloudflare resources are supported in Cloudflare Workers. The env object that is provided by handlers in Python Workers is a JavaScript object that Pyodide provides a proxy API to, handling <a href="https://pyodide.org/en/stable/usage/type-conversions.html">type translations</a> across languages automatically.</p><p>For example, to write to and read from a <a href="https://developers.cloudflare.com/kv/">KV</a> namespace from a Python Worker, you would write:</p>
            <pre><code>from js import Response

async def on_fetch(request, env):
    await env.FOO.put("bar", "baz")
    bar = await env.FOO.get("bar")
    return Response.new(bar) # returns "baz"</code></pre>
            <p>This works for Web APIs too — see how Response is imported from the js module? You can import any global from JavaScript this way.</p>
    <div>
      <h2>Get this JavaScript out of my Python!</h2>
      <a href="#get-this-javascript-out-of-my-python">
        
      </a>
    </div>
    <p>You’re probably reading this post because you want to write Python <i>instead</i> of JavaScript. <code>from js import Response</code> just isn’t Pythonic. We know — and we have actually tackled this challenge before for another language (<a href="/workers-rust-sdk">Rust</a>). And we think we can do this even better for Python.</p><p>We launched <a href="https://github.com/cloudflare/workers-rs">workers-rs</a> in 2021 to make it possible to write Workers in <a href="https://www.rust-lang.org/">Rust</a>. For each JavaScript API in Workers, we, alongside open-source contributors, have written bindings that expose a more idiomatic Rust API.</p><p>We plan to do the same for Python Workers — starting with the bindings to <a href="https://developers.cloudflare.com/workers-ai/">Workers AI</a> and <a href="https://developers.cloudflare.com/vectorize/">Vectorize</a>. But while workers-rs requires that you use and update an external dependency, the APIs we provide with Python Workers will be built into the Workers runtime directly. Just update your compatibility date, and get the latest, most Pythonic APIs.</p><p>This is about more than just making bindings to resources on Cloudflare more Pythonic though — it’s about compatibility with the ecosystem.</p><p>Similar to how we <a href="https://github.com/cloudflare/workers-rs/pull/477">recently converted</a> workers-rs to use types from the <a href="https://crates.io/crates/http">http</a> crate, which makes it easy to use the <a href="https://docs.rs/axum/latest/axum/">axum</a> crate for routing, we aim to do the same for Python Workers. For example, the Python standard library provides a <a href="https://docs.python.org/3/library/socket.html">raw socket API</a>, which many Python packages depend on. Workers already provides <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/">connect()</a>, a JavaScript API for working with raw sockets. 
We see ways to provide at least a subset of the Python standard library’s socket API in Workers, enabling a broader set of Python packages to work on Workers, with less of a need for patches.</p><p>But ultimately, we hope to kick start an effort to create a standardized serverless API for Python. One that is easy to use for any Python developer and offers the same capabilities as JavaScript.</p>
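    <p>As a purely hypothetical sketch of the kind of socket shim described above, here is a class exposing a subset of Python's socket API over an injected <code>connect()</code>-style transport — every name below is invented for illustration, and a fake in-memory echo transport stands in for the real one:</p>
            <pre><code>class WorkerSocket:
    """Hypothetical adapter exposing a subset of Python's socket API
    on top of a connect()-style transport (names are illustrative)."""
    def __init__(self, transport_factory):
        self._connect = transport_factory
        self._conn = None

    def connect(self, address):
        host, port = address
        self._conn = self._connect(f"{host}:{port}")

    def sendall(self, data):
        self._conn.write(data)

    def recv(self, bufsize):
        return self._conn.read(bufsize)

# A fake loopback transport standing in for the Workers connect() API.
class FakeConn:
    def __init__(self):
        self.buf = b""
    def write(self, data):
        self.buf += data
    def read(self, n):
        out, self.buf = self.buf[:n], self.buf[n:]
        return out

sock = WorkerSocket(lambda addr: FakeConn())
sock.connect(("example.com", 80))
sock.sendall(b"ping")
assert sock.recv(4) == b"ping"</code></pre>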
    <div>
      <h2>We’re just getting started with Python Workers</h2>
      <a href="#were-just-getting-started-with-python-workers">
        
      </a>
    </div>
    <p>Providing true support for a new programming language is a big investment that goes far beyond making “hello world” work. We chose Python very intentionally — it’s the <a href="https://survey.stackoverflow.co/2023/#technology-most-popular-technologies">second most popular programming language after JavaScript</a> — and we are committed to continuing to improve performance and widen our support for Python packages.</p><p>We’re grateful to the Pyodide maintainers and the broader Python community — and we’d love to hear from you. Drop into the Python Workers channel in the <a href="https://discord.cloudflare.com/">Cloudflare Developers Discord</a>, or <a href="https://github.com/cloudflare/workerd/discussions/categories/python-packages">start a discussion on Github</a> about what you’d like to see next and which Python packages you’d like us to support.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nmt4B6AYocqmJCw21v5pL/112dcd395906643cbf8a67de22470e13/Workers-and-Python.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Python]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Developer Week]]></category>
            <guid isPermaLink="false">3Gqu0zcjgdix3M03fXEu8V</guid>
            <dc:creator>Hood Chatham</dc:creator>
            <dc:creator>Garrett Gu</dc:creator>
            <dc:creator>Dominik Picheta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Wasm core dumps and debugging Rust in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/wasm-coredumps/</link>
            <pubDate>Mon, 14 Aug 2023 13:00:33 GMT</pubDate>
            <description><![CDATA[ Debugging Rust and Wasm with Cloudflare Workers involves a lot of the good old time-consuming and nerve-wracking printf'ing strategy. What if there’s a better way? This blog is about enabling and using Wasm core dumps and how you can easily debug Rust in Cloudflare Workers ]]></description>
            <content:encoded><![CDATA[ <p></p><p>A clear sign of maturing for any new programming language or environment is how easy and efficient debugging them is. Programming, like any other complex task, involves various challenges and potential pitfalls. Logic errors, off-by-ones, null pointer dereferences, and memory leaks are some examples of things that can make software developers desperate if they can't pinpoint and fix these issues quickly as part of their workflows and tools.</p><p><a href="https://webassembly.org/">WebAssembly</a> (Wasm) is a binary instruction format designed to be a portable and efficient target for the compilation of high-level languages like <a href="https://www.rust-lang.org/">Rust</a>, C, C++, and others. In recent years, it has gained significant traction for building high-performance applications in web and serverless environments.</p><p>Cloudflare Workers has had <a href="https://github.com/cloudflare/workers-rs">first-party support for Rust and Wasm</a> for quite some time. We've been using this powerful combination to bootstrap and build some of our most recent services, like <a href="/introducing-d1/">D1</a>, <a href="/introducing-constellation/">Constellation</a>, and <a href="/automatic-signed-exchanges/">Signed Exchanges</a>, to name a few.</p><p>Using tools like <a href="https://github.com/cloudflare/workers-sdk">Wrangler</a>, our command-line tool for building with Cloudflare developer products, makes streaming real-time logs from our applications running remotely easy. Still, to be honest, debugging Rust and Wasm with Cloudflare Workers involves a lot of the good old time-consuming and nerve-wracking <a href="https://news.ycombinator.com/item?id=26925570">printf'ing</a> strategy.</p><p>What if there’s a better way? This blog is about enabling and using Wasm core dumps and how you can easily debug Rust in Cloudflare Workers.</p>
    <div>
      <h3>What are core dumps?</h3>
      <a href="#what-are-core-dumps">
        
      </a>
    </div>
    <p>In computing, a <a href="https://en.wikipedia.org/wiki/Core_dump">core dump</a> consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally. They also include things like the processor registers, stack pointer, program counter, and other information that may be relevant to fully understanding why the program crashed.</p><p>Depending on the system’s configuration, core dumps are usually initiated by the operating system in response to a program crash. You can then use a debugger like <a href="https://linux.die.net/man/1/gdb">gdb</a> to examine what happened and hopefully determine the cause of a crash. <a href="https://linux.die.net/man/1/gdb">gdb</a> allows you to run the executable to try to replicate the crash in a more controlled environment, inspecting the variables, and much more. The Windows equivalent of a core dump is a <a href="https://learn.microsoft.com/en-us/troubleshoot/windows-client/performance/read-small-memory-dump-file">minidump</a>. Other mature languages that are interpreted, like Python, or languages that run inside a virtual machine, like <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/visualvm/coredumps.html">Java</a>, also have their ways of generating core dumps for post-mortem analysis.</p><p>Core dumps are particularly useful for post-mortem debugging, determining the conditions that lead to a failure after it has occurred.</p>
    <div>
      <h3>WebAssembly core dumps</h3>
      <a href="#webassembly-core-dumps">
        
      </a>
    </div>
    <p>WebAssembly has had a <a href="https://github.com/WebAssembly/tool-conventions/blob/main/Coredump.md">proposal for implementing core dumps</a> in discussion for a while. It's a work-in-progress experimental specification, but it provides basic support for the main ideas of post-mortem debugging, including using the <a href="https://yurydelendik.github.io/webassembly-dwarf/">DWARF</a> (debugging with attributed record formats) debug format, the same that Linux and gdb use. Some of the most popular Wasm runtimes, like <a href="https://github.com/bytecodealliance/wasmtime/pull/5868">Wasmtime</a> and <a href="https://github.com/wasmerio/wasmer/pull/3626">Wasmer</a>, have experimental flags that you can enable and start playing with Wasm core dumps today.</p><p>If you run Wasmtime or Wasmer with the flag:</p>
            <pre><code>--coredump-on-trap=/path/to/coredump/file</code></pre>
            <p>The core dump file will be emitted at that location path if a crash happens. You can then use tools like <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/wasmgdb">wasmgdb</a> to inspect the file and debug the crash.</p><p>But let's dig into how the core dumps are generated in WebAssembly, and what’s inside them.</p>
    <div>
      <h3>How are Wasm core dumps generated</h3>
      <a href="#how-are-wasm-core-dumps-generated">
        
      </a>
    </div>
    <p>(and what’s inside them)</p><p>When WebAssembly terminates execution due to abnormal behavior, we say that it entered a trap. With Rust, examples of operations that can trap are accessing out-of-bounds addresses or a division by zero. You can read about the <a href="https://webassembly.org/docs/security/">security model of WebAssembly</a> to learn more about traps.</p><p>The core dump specification plugs into the trap workflow. When WebAssembly crashes and enters a trap, core dumping support kicks in and starts unwinding the call <a href="https://webassembly.github.io/spec/core/exec/runtime.html#stack">stack</a>, gathering debugging information. For each frame in the stack, it collects the <a href="https://webassembly.github.io/spec/core/syntax/modules.html#syntax-func">function</a> parameters and the values stored in locals and in the stack, along with binary offsets that help us map to exact locations in the source code. Finally, it snapshots the <a href="https://webassembly.github.io/spec/core/syntax/modules.html#syntax-mem">memory</a> and captures information like the <a href="https://webassembly.github.io/spec/core/syntax/modules.html#syntax-table">tables</a> and the <a href="https://webassembly.github.io/spec/core/syntax/modules.html#syntax-global">global variables</a>.</p><p><a href="https://dwarfstd.org/">DWARF</a> is used by many mature languages like C, C++, Rust, Java, or Go. By emitting DWARF information into the binary at compile time, a debugger can provide information such as the source name and the line number where the exception occurred, function and argument names, and more.
Without DWARF, the core dumps would be just pure assembly code without any contextual information or metadata related to the source code that generated it before compilation, and they would be much harder to debug.</p><p>WebAssembly <a href="https://webassembly.github.io/spec/core/appendix/custom.html#name-section">uses a (lighter) version of DWARF</a> that maps functions, or a module and local variables, to their names in the source code (you can read about the <a href="https://webassembly.github.io/spec/core/appendix/custom.html#name-section">WebAssembly name section</a> for more information), and naturally core dumps use this information.</p><p>All this information for debugging is then bundled together and saved to the file, the core dump file.</p><p>The <a href="https://github.com/WebAssembly/tool-conventions/blob/main/Coredump.md#coredump-file-format">core dump structure</a> has multiple sections, but the most important are:</p><ul><li><p>General information about the process;</p></li><li><p>The <a href="https://webassembly.github.io/threads/core/">threads</a> and their stack frames (note that WebAssembly is <a href="https://developers.cloudflare.com/workers/runtime-apis/webassembly/#threading">single threaded</a> in Cloudflare Workers);</p></li><li><p>A snapshot of the WebAssembly linear memory or only the relevant regions;</p></li><li><p>Optionally, other sections like globals, data, or table.</p></li></ul><p>Here’s the thread definition from the core dump specification:</p>
            <pre><code>corestack   ::= customsec(thread-info vec(frame))
thread-info ::= 0x0 thread-name:name ...
frame       ::= 0x0 ... funcidx:u32 codeoffset:u32 locals:vec(value)
                stack:vec(value)</code></pre>
            <p>A thread is a custom section called <code>corestack</code>. A corestack section contains the thread name and a vector (or array) of frames. Each frame contains the function index in the WebAssembly module (<code>funcidx</code>), the code offset relative to the function's start (<code>codeoffset</code>), the list of locals, and the list of values in the stack.</p><p>Values are defined as follows:</p>
            <pre><code>value ::= 0x01       =&gt; ∅
        | 0x7F n:i32 =&gt; n
        | 0x7E n:i64 =&gt; n
        | 0x7D n:f32 =&gt; n
        | 0x7C n:f64 =&gt; n</code></pre>
    <p>At the time of this writing, these are the possible <a href="https://webassembly.github.io/spec/core/binary/types.html#binary-numtype">number types</a> in a value. Again, we wanted to describe the basics; you should track the full <a href="https://github.com/WebAssembly/tool-conventions/blob/main/Coredump.md#coredump-file-format">specification</a> to get more detail or find information about future changes. WebAssembly core dump support is in the early stages of specification and implementation; things will get better, and things might change.</p><p>This is all great news. Unfortunately, however, the Cloudflare Workers <a href="https://github.com/cloudflare/workerd">runtime</a> doesn’t support WebAssembly core dumps yet. There is no technical impediment to adding this feature to <a href="https://github.com/cloudflare/workerd">workerd</a>; after all, it's <a href="https://developers.cloudflare.com/workers/learning/how-workers-works/">based on V8</a>, but since it powers a critical part of our production infrastructure and products, we tend to be conservative when it comes to adding specifications or standards that are still considered experimental and are still going through the definition phase.</p><p>So, how do we get Wasm core dumps in Cloudflare Workers today?</p>
    <div>
      <h3>Polyfilling</h3>
      <a href="#polyfilling">
        
      </a>
    </div>
    <p>Polyfilling means using userland code to provide modern functionality in older environments that do not natively support it. <a href="https://developer.mozilla.org/en-US/docs/Glossary/Polyfill">Polyfills</a> are widely popular in the JavaScript community and the browser environment; they've been used extensively to address issues where browser vendors hadn’t yet caught up with the latest standards, where they implemented the same features in different ways, or where old browsers can never support a new standard.</p><p>Meet <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/rewriter">wasm-coredump-rewriter</a>, a tool that you can use to rewrite a Wasm module and inject the core dump runtime functionality into the binary. This runtime code will catch most traps (exceptions in host functions are not yet caught, and memory violations are not caught by default) and generate a standard core dump file. To some degree, this is similar to how Binaryen's <a href="https://github.com/WebAssembly/binaryen/blob/main/src/passes/Asyncify.cpp">Asyncify</a> <a href="https://kripken.github.io/blog/wasm/2019/07/16/asyncify.html">works</a>.</p><p>Let’s look at some code to see how this works. Here’s some simple pseudocode:</p>
            <pre><code>export function entry(v1, v2) {
    return addTwo(v1, v2)
}

function addTwo(v1, v2) {
  const res = v1 + v2;
  throw "something went wrong";

  return res
}</code></pre>
            <p>An imaginary compiler could take that source and generate the following Wasm code (shown here in text format):</p>
            <pre><code>  (func $entry (param i32 i32) (result i32)
    (local.get 0)
    (local.get 1)
    (call $addTwo)
  )

  (func $addTwo (param i32 i32) (result i32)
    (local.get 0)
    (local.get 1)
    (i32.add)
    (unreachable) ;; something went wrong
  )

  (export "entry" (func $entry))</code></pre>
            <p><i>“;;” is used to denote a comment.</i></p><p><code>entry()</code> is the Wasm function <a href="https://webassembly.github.io/spec/core/exec/runtime.html#syntax-hostfunc">exported to the host</a>. In an environment like the browser, JavaScript (being the host) can call entry().</p><p>Irrelevant parts of the code have been snipped for brevity, but this is what the Wasm code will look like after <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/rewriter">wasm-coredump-rewriter</a> rewrites it:</p>
            <pre><code>  (func $entry (type 0) (param i32 i32) (result i32)
    ...
    local.get 0
    local.get 1
    call $addTwo ;; see the addTwo function below
    global.get 2 ;; is unwinding?
    if  ;; label = @1
      i32.const x ;; code offset
      i32.const 0 ;; function index
      i32.const 2 ;; local_count
      call $coredump/start_frame
      local.get 0
      call $coredump/add_i32_local
      local.get 1
      call $coredump/add_i32_local
      ...
      call $coredump/write_coredump
      unreachable
    end)

  (func $addTwo (type 0) (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add
    ;; the unreachable instruction was here before
    call $coredump/unreachable_shim
    i32.const 1 ;; funcidx
    i32.const 2 ;; local_count
    call $coredump/start_frame
    local.get 0
    call $coredump/add_i32_local
    local.get 1
    call $coredump/add_i32_local
    ...
    return)

  (export "entry" (func $entry))</code></pre>
            <p>As you can see, a few things changed:</p><ol><li><p>The (unreachable) instruction in <code>addTwo()</code> was replaced by a call to <code>$coredump/unreachable_shim</code>, which starts the unwinding process. Then the location and debugging data are captured, and the function returns normally to the <code>entry()</code> caller.</p></li><li><p>Code has been added after the <code>addTwo()</code> call instruction in <code>entry()</code> that detects whether an unwinding process is in progress. If one is, it also captures the local debugging data, writes the core dump file, and finally traps with the unconditional <code>unreachable</code> instruction.</p></li></ol><p>In short, we unwind until the host function <code>entry()</code> gets destroyed by calling unreachable.</p><p>For clarity, let’s go over the runtime functions that we inject:</p><ul><li><p><code>$coredump/start_frame(funcidx, local_count)</code> starts a new frame in the core dump.</p></li><li><p><code>$coredump/add_*_local(value)</code> captures the values of function arguments and locals (capturing values from the operand stack isn’t implemented yet).</p></li><li><p><code>$coredump/write_coredump</code> is called at the end and writes the core dump into memory. We take advantage of the first 1 KiB of the Wasm linear memory, which is unused, to store our core dump.</p></li></ul><p>A diagram is worth a thousand words:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/27DxZQioAhBsBiltjwIiyL/2dc57b370b6741120a5bb263c2795652/image2-7.png" />
            
            </figure><p>Wait, what’s this about the first 1 KiB of the memory being unused, you ask? Well, it turns out that most WebAssembly toolchains, including <a href="https://github.com/emscripten-core/emscripten/issues/5775#issuecomment-344049528">Emscripten</a> and <a href="https://github.com/llvm/llvm-project/blob/121e15f96ce401c875e717992a4d054e308ba775/lld/wasm/Writer.cpp#L366">LLVM’s WebAssembly backend</a>, don’t use the first 1 KiB of memory. <a href="https://github.com/rust-lang/rust/issues/50543">Rust</a> and <a href="https://github.com/ziglang/zig/issues/4496">Zig</a> also use LLVM, but they changed the default. This isn’t pretty, but the hugely popular Asyncify polyfill relies on the same trick, so there’s reasonable support until we find a better way.</p><p>But we digress; let’s continue. After the crash, the host, typically JavaScript in the browser, can now catch the exception and extract the core dump from the Wasm instance’s memory:</p>
            <pre><code>try {
    wasmInstance.exports.someExportedFunction();
} catch(err) {
    const image = new Uint8Array(wasmInstance.exports.memory.buffer);
    writeFile("coredump." + Date.now(), image);
}</code></pre>
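            <p>In a Workers environment there is no local filesystem to write to, so the same extraction step would persist the dump elsewhere. Here is a minimal sketch that stores it in an R2 bucket; the <code>DUMPS</code> binding name and the helper function are our own illustration, not part of any library:</p>

```javascript
// Sketch: persist the captured Wasm memory from a Worker's catch block.
// Assumes an R2 bucket binding named DUMPS is configured in wrangler.toml.
async function saveCoredump(env, wasmInstance) {
  // The core dump lives in the instance's linear memory after the trap.
  const image = new Uint8Array(wasmInstance.exports.memory.buffer);
  // R2 object keys are arbitrary strings; a timestamp keeps dumps distinct.
  await env.DUMPS.put(`coredump.${Date.now()}`, image);
}
```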
            <p>If you're curious about the actual details of the core dump implementation, you can find the <a href="https://github.com/xtuc/wasm-coredump/blob/main/lib/asc-coredump/assembly/coredump.ts">source code here</a>. It was written in <a href="https://www.assemblyscript.org/">AssemblyScript</a>, a TypeScript-like language for WebAssembly.</p><p>This is how we use the polyfilling technique to implement Wasm core dumps when the runtime doesn’t support them yet. Interestingly, some Wasm runtimes, being optimizing compilers, are likely to make debugging more difficult because function arguments, locals, or entire functions can be optimized away. Polyfilling, or rewriting the binary, can actually preserve more source-level information for debugging.</p><p>You might be asking: what about performance? We did some testing and found that the <a href="https://github.com/xtuc/wasm-coredump-bench/blob/main/results.md">impact is negligible</a>; the cost-benefit of being able to debug our crashes is positive. Also, you can easily turn Wasm core dumps on or off for specific builds or environments; deciding when you need them is up to you.</p>
    <div>
      <h3>Debugging from a core dump</h3>
      <a href="#debugging-from-a-core-dump">
        
      </a>
    </div>
    <p>We now know how to generate a core dump, but how do we use it to diagnose and debug a software crash?</p><p>Similarly to <a href="https://en.wikipedia.org/wiki/GNU_Debugger">gdb</a> (GNU Project Debugger) on Linux, <a href="https://github.com/xtuc/wasm-coredump/blob/main/bin/wasmgdb/README.md">wasmgdb</a> is the tool you can use to parse and make sense of core dumps in WebAssembly; it understands the file structure, uses DWARF to provide naming and contextual information, and offers interactive commands to navigate the data. To exemplify how it works, <a href="https://github.com/xtuc/wasm-coredump/blob/main/bin/wasmgdb/demo.md">wasmgdb has a demo</a> of a Rust application that deliberately crashes; we will use it.</p><p>Let's imagine that our Wasm program crashed, wrote a core dump file, and we want to debug it.</p>
            <pre><code>$ wasmgdb source-program.wasm /path/to/coredump
wasmgdb&gt;</code></pre>
            <p>When you fire up wasmgdb, you enter a <a href="https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop">REPL (Read-Eval-Print Loop)</a> interface, and you can start typing commands. The tool tries to mimic the gdb command syntax; you can find the <a href="https://github.com/xtuc/wasm-coredump/blob/main/bin/wasmgdb/README.md#commands">list here</a>.</p><p>Let's examine the backtrace using the <code>bt</code> command:</p>
            <pre><code>wasmgdb&gt; bt
#18     000137 as __rust_start_panic () at library/panic_abort/src/lib.rs
#17     000129 as rust_panic () at library/std/src/panicking.rs
#16     000128 as rust_panic_with_hook () at library/std/src/panicking.rs
#15     000117 as {closure#0} () at library/std/src/panicking.rs
#14     000116 as __rust_end_short_backtrace&lt;std::panicking::begin_panic_handler::{closure_env#0}, !&gt; () at library/std/src/sys_common/backtrace.rs
#13     000123 as begin_panic_handler () at library/std/src/panicking.rs
#12     000194 as panic_fmt () at library/core/src/panicking.rs
#11     000198 as panic () at library/core/src/panicking.rs
#10     000012 as calculate (value=0x03000000) at src/main.rs
#9      000011 as process_thing (thing=0x2cff0f00) at src/main.rs
#8      000010 as main () at src/main.rs
#7      000008 as call_once&lt;fn(), ()&gt; (???=0x01000000, ???=0x00000000) at /rustc/b833ad56f46a0bbe0e8729512812a161e7dae28a/library/core/src/ops/function.rs
#6      000020 as __rust_begin_short_backtrace&lt;fn(), ()&gt; (f=0x01000000) at /rustc/b833ad56f46a0bbe0e8729512812a161e7dae28a/library/std/src/sys_common/backtrace.rs
#5      000016 as {closure#0}&lt;()&gt; () at /rustc/b833ad56f46a0bbe0e8729512812a161e7dae28a/library/std/src/rt.rs
#4      000077 as lang_start_internal () at library/std/src/rt.rs
#3      000015 as lang_start&lt;()&gt; (main=0x01000000, argc=0x00000000, argv=0x00000000, sigpipe=0x00620000) at /rustc/b833ad56f46a0bbe0e8729512812a161e7dae28a/library/std/src/rt.rs
#2      000013 as __original_main () at &lt;directory not found&gt;/&lt;file not found&gt;
#1      000005 as _start () at &lt;directory not found&gt;/&lt;file not found&gt;
#0      000264 as _start.command_export at &lt;no location&gt;</code></pre>
            <p>Each line represents a frame from the program's call <a href="https://webassembly.github.io/spec/core/exec/runtime.html#stack">stack</a>; see frame #3:</p>
            <pre><code>#3      000015 as lang_start&lt;()&gt; (main=0x01000000, argc=0x00000000, argv=0x00000000, sigpipe=0x00620000) at /rustc/b833ad56f46a0bbe0e8729512812a161e7dae28a/library/std/src/rt.rs</code></pre>
            <p>The funcidx, function name, argument names and values, and source location are all present. Let's select frame #9 now and inspect the locals, which include the function arguments:</p>
            <pre><code>wasmgdb&gt; f 9
000011 as process_thing (thing=0x2cff0f00) at src/main.rs
wasmgdb&gt; info locals
thing: *MyThing = 0xfff1c</code></pre>
            <p>Let’s use the <code>p</code> command to inspect the content of the thing argument:</p>
            <pre><code>wasmgdb&gt; p (*thing)
thing (0xfff2c): MyThing = {
    value (0xfff2c): usize = 0x00000003
}</code></pre>
            <p>You can also use the <code>p</code> command to inspect the value of the variable, which can be useful for nested structures:</p>
            <pre><code>wasmgdb&gt; p (*thing)-&gt;value
value (0xfff2c): usize = 0x00000003</code></pre>
            <p>And you can use p to inspect memory addresses. Let’s point at <code>0xfff2c</code>, the start of the <code>MyThing</code> structure, and inspect:</p>
            <pre><code>wasmgdb&gt; p (MyThing) 0xfff2c
0xfff2c (0xfff2c): MyThing = {
    value (0xfff2c): usize = 0x00000003
}</code></pre>
            <p>All this information in every step of the stack is very helpful to determine the cause of a crash. In our test case, if you look at frame #10, we triggered an integer overflow. Once you get comfortable walking through wasmgdb and using its commands to inspect the data, debugging core dumps will be another powerful skill under your belt.</p>
    <div>
      <h3>Tidying up everything in Cloudflare Workers</h3>
      <a href="#tidying-up-everything-in-cloudflare-workers">
        
      </a>
    </div>
    <p>We learned about core dumps and how they work, and we know how to make Cloudflare Workers generate them using the wasm-coredump-rewriter polyfill, but how does all this work in practice end to end?</p><p>We've been dogfooding the technique described in this blog at Cloudflare for a while now. Wasm core dumps have been invaluable in helping us debug Rust-based services running on top of Cloudflare Workers like <a href="/introducing-d1/">D1</a>, <a href="/privacy-edge-making-building-privacy-first-apps-easier/">Privacy Edge</a>, <a href="/announcing-amp-real-url/">AMP</a>, or <a href="/introducing-constellation/">Constellation</a>.</p><p>Today we're open-sourcing the <a href="https://github.com/cloudflare/wasm-coredump">Wasm Coredump Service</a> and enabling anyone to deploy it. This service collects the Wasm core dumps originating from your projects and applications when they crash, parses them, prints an exception with the stack information in the logs, and can optionally store the full core dump in a file in an <a href="https://developers.cloudflare.com/r2/">R2 bucket</a> (which you can then use with wasmgdb) or send the exception to <a href="https://sentry.io/">Sentry</a>.</p><p>We use a <a href="https://developers.cloudflare.com/workers/configuration/bindings/about-service-bindings/">service binding</a> to facilitate the communication between your application Worker and the Coredump service Worker. A Service binding allows you to send HTTP requests to another Worker without those requests going over the Internet, thus avoiding network latency or having to deal with authentication. Here’s a diagram of how it works:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gntGbV7rjDOncMZhFP7x1/3429a64f297c0edbf3327c677d56e0d3/image1-12.png" />
            
            </figure><p>Using it is as simple as npm/yarn installing <code>@cloudflare/wasm-coredump</code>, configuring a few options, and then adding a few lines of code to your other applications running in Cloudflare Workers, in the exception handling logic:</p>
            <pre><code>import shim, { getMemory, wasmModule } from "../build/worker/shim.mjs"

const timeoutSecs = 20;

async function fetch(request, env, ctx) {
    try {
        // see https://github.com/rustwasm/wasm-bindgen/issues/2724.
        return await Promise.race([
            shim.fetch(request, env, ctx),
            new Promise((r, e) =&gt; setTimeout(() =&gt; e("timeout"), timeoutSecs * 1000))
        ]);
    } catch (err) {
      const memory = getMemory();
      const coredumpService = env.COREDUMP_SERVICE;
      await recordCoredump({ memory, wasmModule, request, coredumpService });
      throw err;
    }
}</code></pre>
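            <p>The race-with-timeout pattern above can be reduced to a small standalone helper. This is just a sketch of the same idea, independent of Workers:</p>

```javascript
// Reject a promise if it doesn't settle within timeoutSecs.
// Same pattern as the shim.fetch() race above, in isolation.
function withTimeout(promise, timeoutSecs) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), timeoutSecs * 1000)
    ),
  ]);
}
```

Note that the losing branch of the race is not cancelled; the pending timer simply fires into an already-settled promise.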
            <p>The <code>../build/worker/shim.mjs</code> import comes from the <a href="https://github.com/cloudflare/workers-rs/tree/main/worker-build">worker-build</a> tool in the <a href="https://github.com/cloudflare/workers-rs/tree/main">workers-rs</a> packages and is automatically generated when <a href="https://developers.cloudflare.com/workers/wrangler/install-and-update/">wrangler</a> builds your Rust-based Cloudflare Workers project. If the Wasm code throws an exception, we catch it, extract the core dump from memory, and send it to our Coredump service.</p><p>You might have noticed that we race the <a href="https://github.com/cloudflare/workers-rs">workers-rs</a> <code>shim.fetch()</code> entry point against another Promise to generate a timeout exception if the Rust code doesn't respond earlier. This is because currently, <a href="https://github.com/rustwasm/wasm-bindgen/">wasm-bindgen</a>, which generates the glue between JavaScript and Rust used by workers-rs, has <a href="https://github.com/rustwasm/wasm-bindgen/issues/2724">an issue</a> where a Promise might not be rejected if Rust panics asynchronously (leading to the Workers runtime killing the Worker with “Error: The script will never generate a response”). This can block the wasm-coredump code and make the core dump generation flaky.</p><p>We are working to improve this, but in the meantime, make sure to set <code>timeoutSecs</code> to something slightly bigger than the typical response time of your application.</p><p>Here’s an example of a Wasm core dump exception in Sentry:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gqZyPFslc9uqCV7jEgaqW/9425701b4209952518e3aef155d9b572/image3-4.png" />
            
            </figure><p>You can find a <a href="https://github.com/cloudflare/wasm-coredump/tree/main/example">working example</a>, the Sentry and R2 configuration options, and more details in the <a href="https://github.com/cloudflare/wasm-coredump">@cloudflare/wasm-coredump</a> GitHub repository.</p>
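            <p>For reference, wiring up the service binding described earlier might look like this in your application Worker's <code>wrangler.toml</code>. This is a sketch: the binding name matches the <code>env.COREDUMP_SERVICE</code> used in the code above, but the deployed service name is hypothetical.</p>

```toml
# Bind the deployed Coredump Service Worker to the application Worker.
# "binding" is the name exposed on env; "service" is the deployed Worker's name.
services = [
  { binding = "COREDUMP_SERVICE", service = "wasm-coredump" }
]
```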
    <div>
      <h3>Too big to fail</h3>
      <a href="#too-big-to-fail">
        
      </a>
    </div>
    <p>It's worth mentioning one corner case of this debugging technique and the solution: sometimes your codebase is so big that adding core dump and DWARF debugging information might result in a Wasm binary that is too big to fit in a Cloudflare Worker. Well, worry not; we have a solution for that too.</p><p>Fortunately the DWARF for WebAssembly specification also supports <a href="https://yurydelendik.github.io/webassembly-dwarf/#external-DWARF">external DWARF files</a>. To make this work, we have a tool called <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/debuginfo-split">debuginfo-split</a> that you can add to the build command in the <code>wrangler.toml</code> configuration:</p>
            <pre><code>command = "... &amp;&amp; debuginfo-split ./build/worker/index.wasm"</code></pre>
            <p>This strips the debugging information from the Wasm binary and writes it to a separate file called <code>debug-{UUID}.wasm</code>. You then need to upload this file to the same R2 bucket used by the Wasm Coredump Service (you can automate this as part of your CI or build scripts). The same UUID is also injected into the main Wasm binary, which allows us to correlate the Wasm binary with its corresponding DWARF debugging information. Problem solved.</p><p>Binaries without DWARF information can be significantly smaller. Here’s our example:</p>
<table>
<thead>
  <tr>
    <th><span>Size</span></th>
    <th><span>File</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>4.5 MiB</span></td>
    <td><span>debug-63372dbe-41e6-447d-9c2e-e37b98e4c656.wasm</span></td>
  </tr>
  <tr>
    <td><span>313 KiB</span></td>
    <td><span>build/worker/index.wasm</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Final words</h3>
      <a href="#final-words">
        
      </a>
    </div>
    <p>We hope you enjoyed reading this blog as much as we did writing it, and that it helps you take your Wasm debugging journeys, whether you use Cloudflare Workers or not, to another level.</p><p>Note that while the examples used here were built around Rust and WebAssembly, because that's a common pattern, you can use the same techniques if you're compiling WebAssembly from other languages like C or C++.</p><p>Also, note that the WebAssembly core dump standard is a hot topic, and its implementations and adoption are evolving quickly. We will continue improving the <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/rewriter">wasm-coredump-rewriter</a>, <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/debuginfo-split">debuginfo-split</a>, and <a href="https://github.com/xtuc/wasm-coredump/tree/main/bin/wasmgdb">wasmgdb</a> tools and the <a href="https://github.com/cloudflare/wasm-coredump">wasm-coredump service</a>. More and more runtimes, including V8, will eventually support core dumps natively, eliminating the need for polyfills, and the tooling in general will get better; that's a certainty. For now, we present you with a solution that works today, and we have strong incentives to keep supporting it.</p><p>As usual, you can talk to us on our <a href="https://discord.cloudflare.com/">Developers Discord</a> or the <a href="https://community.cloudflare.com/c/developers/constellation/97">Community forum</a>, or open issues or PRs in our GitHub repositories; the team will be listening.</p>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7xtevgzV4ycZa3fIFTQOP5</guid>
            <dc:creator>Sven Sauleau</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
        </item>
        <item>
            <title><![CDATA[Use the language of your choice with Pages Functions via WebAssembly]]></title>
            <link>https://blog.cloudflare.com/pages-functions-with-webassembly/</link>
            <pubDate>Fri, 24 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today Pages is excited to offer support for WebAssembly when writing a Pages Function ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On the Cloudflare Developer Platform, we understand that building any application is a unique experience for every developer. We know that in the developer ecosystem there are a plethora of tools to choose from and as a developer you have preferences and needs. We don’t believe there are “right” or “wrong” tools to use in development and want to ensure a good developer experience no matter your choices. We believe in meeting you where you are.</p><p>When Pages Functions moved to <a href="/pages-function-goes-ga/">Generally Available in November of last year</a>, we knew it was the key that unlocks a variety of use cases – namely full-stack applications! However, we still felt we could do more to provide the flexibility for you to build what you want and how you want.</p><p>That’s why today we’re opening the doors to developers who want to build their server side applications with something other than JavaScript. We’re excited to announce WebAssembly support for Pages Functions projects!</p><p><a href="https://webassembly.org/"><b>WebAssembly</b></a> <b>(or Wasm)</b> is a low-level assembly-like language that can run with near-native performance. It provides programming languages such as C/C++, C# or Rust with a compilation target, enabling them to run alongside JavaScript. Primarily designed to run on the <a href="https://webassembly.org/docs/web/">web</a> (though <a href="https://webassembly.org/docs/non-web/">not exclusively</a>), WebAssembly opens up exciting opportunities for applications to run on the web platform, both on the client and the server, that up until now couldn't have done so.</p><p>With Pages Functions being Workers “under the hood” and Workers having <a href="/workers-javascript-modules/">Wasm module support</a> for quite <a href="/webassembly-on-cloudflare-workers/">some time</a>, it is only natural that Pages provides a similar experience for our users as well. 
While not all use cases are a good fit for Wasm, there are <a href="https://webassembly.org/docs/use-cases/">many</a> that are. Our goal with adding Wasm support is enabling those use cases and expanding the boundaries of what Functions can build.</p>
    <div>
      <h3>Using WebAssembly in Pages Functions</h3>
      <a href="#using-webassembly-in-pages-functions">
        
      </a>
    </div>
    <p>WebAssembly in Pages Functions works very similar to how it does today in Workers — we read <code>wasm</code> files as WebAssembly modules, ready for you to import and use directly from within your Functions. In short, like this:</p>
            <pre><code>// functions/api/distance-between.js

import wasmModule from "../../pkg/distance.wasm";

export async function onRequest({ request }) {
  const moduleInstance = await WebAssembly.instantiate(wasmModule);
  const distance = await moduleInstance.exports.distance_between();

  return new Response(distance);
}</code></pre>
            <p>Let’s briefly unpack the code snippet above to highlight some things that are important to understand.</p>
            <pre><code>import wasmModule from "../../pkg/distance.wasm";</code></pre>
            <p>Pages makes no assumptions as to how the binary <code>.wasm</code> files you want to import were compiled. In our example above, <code>distance.wasm</code> can be a file you compiled yourself out of code you wrote, or equally, a file provided in a third-party library’s distribution. The only thing Pages cares about is that <code>distance.wasm</code> is a compiled <a href="https://webassembly.github.io/spec/core/binary/conventions.html">binary</a> Wasm <a href="https://webassembly.github.io/spec/core/binary/modules.html">module</a> file.</p><p>The result of that import is a <a href="https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/Module"><code>WebAssembly.Module</code></a> object, which you can then <a href="https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/instantiate">instantiate</a>:</p>
            <pre><code>const moduleInstance = await WebAssembly.instantiate(wasmModule);</code></pre>
            <p>Once the <a href="https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/Instance"><code>WebAssembly.Instance</code></a> object is created, you can start using whatever features your Wasm module <a href="https://webassembly.github.io/spec/core/syntax/modules.html#syntax-export"><code>exports</code></a>, inside your Functions code:</p>
            <pre><code>const distance = await moduleInstance.exports.distance_between();</code></pre>
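            <p>If you want to experiment with the compile/instantiate steps without any toolchain, a tiny hand-assembled module works. This self-contained sketch (plain JavaScript, not Pages-specific) builds an <code>add</code> function from raw bytes:</p>

```javascript
// A minimal hand-encoded Wasm binary equivalent to:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);      // compile
const instance = new WebAssembly.Instance(module); // instantiate
console.log(instance.exports.add(2, 3)); // 5
```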
            
    <div>
      <h3>More modules, more fun!</h3>
      <a href="#more-modules-more-fun">
        
      </a>
    </div>
    <p>Apart from Wasm modules, this work unlocks support for two other module types that you can import within your Functions code: <b>text</b> and <b>binary</b>. These are not standardized modules, but can be very handy if you need to import raw text blobs (such as HTML files) as a <code>string</code>:</p>
            <pre><code>// functions/my-function.js
import html from "404.html";

export async function onRequest() {
  return new Response(html,{
    headers: { "Content-Type": "text/html" }
  });
}</code></pre>
            <p>or raw data blobs (such as images) as an <code>ArrayBuffer</code>.</p>
            <pre><code>// functions/my-function.js
import image from "../hearts.png.bin";

export async function onRequest() {
  return new Response(image,{
    headers: { "Content-Type": "image/png" }
  });
}</code></pre>
            
    <div>
      <h3>The distance between us on the surface of Earth</h3>
      <a href="#the-distance-between-us-on-the-surface-of-earth">
        
      </a>
    </div>
    <p>Let’s take a look at a live example to see it all in action! We’ve built a small <a href="https://pages-with-wasm-demo.pages.dev/">demo app</a> that walks you through an example of Functions with WebAssembly end-to-end. You can check out the code of our demo application available on <a href="https://github.com/cloudflare/pages-fns-with-wasm-demo">GitHub</a>.</p><p>The application computes the distance in kilometers on the surface of Earth between your current location (based on the geo coordinates of the incoming <a href="https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties">request</a>) and any other point on the globe, each time you click on the globe's surface.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZkZSyM2pJghHYzcwgfw3T/3c3609ec59bb6589d7a575177cbf602e/image3-31.png" />
            
            </figure><p>The code that performs the actual high-performance distance calculation is written in Rust, and is a slight adaptation of the <a href="https://rust-lang-nursery.github.io/rust-cookbook/science/mathematics/trigonometry.html#distance-between-two-points-on-the-earth">example</a> provided in the <a href="https://rust-lang-nursery.github.io/rust-cookbook/">Rust cookbook</a>:</p>
            <pre><code>fn distance_between(from_latitude_degrees: f64, from_longitude_degrees: f64, to_latitude_degrees: f64, to_longitude_degrees: f64) -&gt; f64 {
    let earth_radius_kilometer = 6371.0_f64;

    let from_latitude = from_latitude_degrees.to_radians();
    let to_latitude = to_latitude_degrees.to_radians();

    let delta_latitude = (from_latitude_degrees - to_latitude_degrees).to_radians();
    let delta_longitude = (from_longitude_degrees - to_longitude_degrees).to_radians();

    let central_angle_inner = (delta_latitude / 2.0).sin().powi(2)
        + from_latitude.cos() * to_latitude.cos() * (delta_longitude / 2.0).sin().powi(2);
    let central_angle = 2.0 * central_angle_inner.sqrt().asin();

    let distance = earth_radius_kilometer * central_angle;
    
    return distance;
}</code></pre>
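            <p>For a quick sanity check of the formula itself, here is the same haversine computation ported to JavaScript (our own sketch, not part of the demo app):</p>

```javascript
// Great-circle distance in kilometers between two points on Earth,
// mirroring the Rust distance_between() above (haversine formula).
function distanceBetween(fromLat, fromLng, toLat, toLng) {
  const earthRadiusKm = 6371.0;
  const toRad = (deg) => (deg * Math.PI) / 180;

  const lat1 = toRad(fromLat);
  const lat2 = toRad(toLat);
  const deltaLat = toRad(fromLat - toLat);
  const deltaLng = toRad(fromLng - toLng);

  const a =
    Math.sin(deltaLat / 2) ** 2 +
    Math.cos(lat1) * Math.cos(lat2) * Math.sin(deltaLng / 2) ** 2;
  return earthRadiusKm * 2 * Math.asin(Math.sqrt(a));
}

// One degree of longitude along the equator is roughly 111.19 km.
console.log(distanceBetween(0, 0, 0, 1).toFixed(2));
```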
            <p>We have a Rust playground experiment available <a href="https://play.rust-lang.org/?version=stable&amp;mode=debug&amp;edition=2021&amp;gist=b60cdd8c60bed969c03bf5b87914c196">here</a>, in case you want to play around with this code snippet in particular.</p><p>To use the <code>distance_between()</code> Rust function in Pages Functions, we first compile the code to WebAssembly using <a href="https://github.com/rustwasm/wasm-pack"><code>wasm-pack</code></a>:</p>
            <pre><code>##
# generate the `pkg` folder which will contain the wasm binary
##
wasm-pack build</code></pre>
            <p>Then, we import the generated <code>.wasm</code> artifact from inside our <code>distance-between.js</code> Pages Function. Now, each time you click on the globe surface, a request to <code>/api/distance-between</code> is made, which will trigger the <code>distance_between()</code> function to execute. Once computed, the distance value is returned by our Function, back to the client, which proceeds to display the value to the user.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nWcWHd4MNnPVEZQ8A3MwE/868b091b3b4849d2c6d9848a737eb476/image2-23.png" />
            
            </figure><p>We want to point out that this application could have been built entirely in JavaScript; however, we equally wanted to show just how simple it is to build it with Rust. The decision to use Rust was motivated by a few factors. First, the tooling ecosystem for building and working with Rust-generated WebAssembly is quite mature, well documented, and easy to get started with. Second, the Rust <a href="https://www.rust-lang.org/learn">docs</a> are a fantastic resource if you are new to Rust or to Rust with WebAssembly! If you are looking for a step-by-step tutorial on how to generate and set up a Rust and WebAssembly project, we highly recommend checking out Rust’s official <a href="https://rustwasm.github.io/docs/book/introduction.html">WebAssembly Book</a>.</p><p>We hope it gives you a solid starting point in exploring what is possible with Wasm on Pages Functions, and inspires you to create some powerful applications of your own. Head over to our <a href="https://developers.cloudflare.com/pages/platform/functions/module-support/#module-support">docs</a> to get started today!</p>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">55qq7fR9xOWSZtN9TZgrik</guid>
            <dc:creator>Carmen Popoviciu</dc:creator>
        </item>
        <item>
            <title><![CDATA[Running Zig with WASI on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/running-zig-with-wasi-on-cloudflare-workers/</link>
            <pubDate>Mon, 01 Aug 2022 14:12:26 GMT</pubDate>
            <description><![CDATA[ After the recent announcement regarding WASI support in Workers I decided to see what it would take to get code written in Zig to run as a Worker, and it turned out to be pretty trivial. This post documents the process I followed ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6IYT1Gc6y5cogG4X4H9skK/583baf4700006967596b41b2dbff7908/image1-4.png" />
            
            </figure><p>After the recent announcement regarding <a href="/announcing-wasi-on-workers/">WASI support in Workers</a>, I decided to see what it would take to get code written in <a href="https://ziglang.org/">Zig</a> to run as a Worker, and it turned out to be trivial. This post documents the process I followed as a new user of Zig. It’s so exciting to see how Cloudflare Workers is a polyglot platform allowing you to write programs in the language you love, or the language you’re learning!</p>
    <div>
      <h3>Hello, World!</h3>
    </div>
    <p>I’m not a Zig expert by any means; to keep things entirely honest, I’ve only just started looking into the language. But we all have to start somewhere, so if my Zig code isn’t perfect, please bear with me. My goal was to build a real, small program using Zig, deploy it on Cloudflare Workers, and see how fast I could go from a blank screen to production code.</p><p>The program itself isn’t ambitious: it just reads some text from stdin and prints it to stdout with line numbers, like running <code>cat -n</code>. But it does show just how easy the Workers paradigm is. This Zig program works identically on the command line on my laptop and as an HTTP API deployed on Cloudflare Workers.</p><p>Here’s my code. It reads a line from stdin and outputs the same line prefixed with a line number, terminating when there’s no more input.</p>
            <pre><code>const std = @import("std");

pub fn main() anyerror!void {
    // set up the allocator
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer std.debug.assert(!gpa.deinit());
    const allocator = gpa.allocator();

    // set up the input and output streams
    const stdout = std.io.getStdOut().writer();
    const in = std.io.getStdIn();
    var reader = std.io.bufferedReader(in.reader()).reader();

    var counter: u32 = 1;

    // read input line by line, printing each with its line number
    while (try reader.readUntilDelimiterOrEofAlloc(allocator, '\n', std.math.maxInt(usize))) |line| {
        defer allocator.free(line);
        try stdout.print("{d}\t{s}\n", .{ counter, line });
        counter += 1;
    }
}</code></pre>
            <p>To build Zig code, you create a <code>build.zig</code> file that defines how to build your project. For this trivial case, I just opted to build an executable from the sources:</p>
            <pre><code>const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const target = b.standardTargetOptions(.{});
    const mode = b.standardReleaseOptions();

    const exe = b.addExecutable("print-with-line-numbers", "src/main.zig");
    exe.setTarget(target);
    exe.setBuildMode(mode);
    exe.install();
}</code></pre>
            <p>Running <code>zig build</code> will compile the code and output a binary under <code>zig-out/bin</code>:</p>
            <pre><code>$ zig build

$ ls zig-out/bin
print-with-line-numbers

$ echo "Hello\nWorld" | ./zig-out/bin/print-with-line-numbers
1    Hello
2    World</code></pre>
            
    <div>
      <h3>WASI</h3>
    </div>
    <p>The next step is to get this running on Workers, but first I need to compile it into WASM with WASI support.</p><p>Thankfully, this comes out of the box with recent versions of Zig: you can just tell the compiler to build your executable using the <code>wasm32-wasi</code> target, which produces a file that can be run on any WASI-compatible WebAssembly runtime, such as <a href="https://wasmtime.dev/">wasmtime</a>.</p><p>This same .wasm file can be run locally in wasmtime and deployed directly to Cloudflare Workers, which makes building, testing, and deploying seamless.</p>
            <pre><code>$ zig build -Dtarget=wasm32-wasi

$ ls zig-out/bin
print-with-line-numbers.wasm

$ echo "Hello\nWorld" | wasmtime ./zig-out/bin/print-with-line-numbers.wasm
1    Hello
2    World</code></pre>
            
    <div>
      <h3>Zig on Workers</h3>
    </div>
    <p>With our binary ready to go, the last piece is to get it running on Cloudflare Workers using <a href="/wrangler-v2-beta/">wrangler2</a>. That is as simple as publishing the .wasm file on workers.dev. If you don’t have a <a href="https://workers.dev">workers.dev</a> account, you can follow the tutorial in our <a href="https://developers.cloudflare.com/workers/get-started/guide/">getting started guide</a> that will get you from code to deployment within minutes!</p><p>In fact, once I signed up for my account, all I needed to do was complete the first two steps: installing wrangler and logging in.</p>
            <pre><code>$ npx wrangler@wasm login
Attempting to login via OAuth...
Opening a link in your default browser: https://dash.cloudflare.com/oauth2/auth
Successfully logged in.</code></pre>
            <p>Then, I ran the following command to publish my worker:</p>
            <pre><code>$ npx wrangler@wasm publish --name print-with-line-numbers --compatibility-date=2022-07-07 zig-out/bin/print-with-line-numbers.wasm
Uploaded print-with-line-numbers (3.04 sec)
Published print-with-line-numbers (6.28 sec)
  print-with-line-numbers.workers.dev</code></pre>
            <p>With that step completed, the worker is ready to run and can be invoked by requesting the URL printed in the output above.</p>
            <pre><code>echo "Hello\nWorld" | curl https://print-with-line-numbers.workers.dev -X POST --data-binary @-
1    Hello
2    World</code></pre>
            <p>Success!</p>
    <div>
      <h3>Conclusion</h3>
    </div>
    <p>What impressed me the most here was just how easy this process was.</p><p>First, I had a binary compiled for the architecture of my laptop; then I compiled the code into WebAssembly by just passing a flag to the compiler; and finally I had it running on Workers <b>without having to change any code.</b></p><p>Granted, this program is not very complicated and does nothing other than read from STDIN and write to STDOUT, but it gives me confidence in what is possible, especially as technology like WASI matures.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5Xy5uTuM3ghJ0CV01dfsLi</guid>
            <dc:creator>Daniel Harper</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing support for WASI on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/announcing-wasi-on-workers/</link>
            <pubDate>Thu, 07 Jul 2022 16:09:43 GMT</pubDate>
            <description><![CDATA[ Today, we are announcing experimental support for WASI (the WebAssembly System Interface) on Cloudflare Workers and support within wrangler2 to make it a joy to work with ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38dw1f6v1MK3BDS3rYr7mE/c986b7fcb97d2ae3b88688cdce832f36/image1-OG-2.png" />
            
            </figure><p>Today, we are announcing experimental support for WASI (the WebAssembly System Interface) on Cloudflare Workers and support within wrangler2 to make it a joy to work with. We continue to be incredibly excited about the entire WebAssembly ecosystem and are eager to adopt the standards as they are developed.</p>
    <div>
      <h3>A Quick Primer on WebAssembly</h3>
    </div>
    <p>So what is WASI anyway? To understand WASI, and why we’re excited about it, it’s worth a quick recap of WebAssembly, and the ecosystem around it.</p><p>WebAssembly promised us a future in which code written in compiled languages could be compiled to a common binary format and run in a secure sandbox, at near native speeds. While WebAssembly was designed with the browser in mind, the model rapidly extended to server-side platforms such as Cloudflare Workers (which <a href="/webassembly-on-cloudflare-workers/">has supported WebAssembly</a> since 2017).</p><p>WebAssembly was originally designed to run <i>alongside</i> JavaScript, and requires developers to interface directly with JavaScript in order to access the world outside the sandbox. To put it another way, WebAssembly does not provide any standard interface for I/O tasks such as interacting with files, accessing the network, or reading the system clock. This means if you want to respond to an event from the outside world, it's up to the developer to handle that event in JavaScript, and directly call functions exported from the WebAssembly module. Similarly, if you want to perform I/O from within WebAssembly, you need to implement that logic in JavaScript and import it into the WebAssembly module.</p><p>Custom toolchains such as Emscripten or libraries such as wasm-bindgen have emerged to make this easier, but they are language specific and add a tremendous amount of complexity and bloat. We've even built our own library, workers-rs, using wasm-bindgen, which attempts to make writing applications in Rust feel native within a Worker – but this has proven not only difficult to maintain, but requires developers to write code that is Workers specific, and is not portable outside the Workers ecosystem.</p><p>We need more.</p>
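To make that JavaScript-to-WebAssembly boundary concrete, here is a minimal Node.js sketch of our own (not from the original post). The module is hand-encoded as raw WebAssembly bytes so the example is self-contained; the names `env` and `log` are ours, chosen purely for illustration.

```javascript
// A hand-encoded WebAssembly module: it IMPORTS env.log from JavaScript
// (the route Wasm must take for any I/O) and EXPORTS run() for JavaScript
// to call. "env" and "log" are names we chose for this illustration.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,        // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00,              // type 0: (i32) -> ()
              0x60, 0x00, 0x00,                           // type 1: () -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,              // import "env"
              0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,        //   "log", func, type 0
  0x03, 0x02, 0x01, 0x01,                                 // func 1 uses type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,  // export "run" = func 1
  0x0a, 0x08, 0x01, 0x06, 0x00,                           // code: no locals;
              0x41, 0x2a, 0x10, 0x00, 0x0b,               //   i32.const 42; call 0
]);

const logged = [];
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {
  env: { log: (n) => logged.push(n) }, // JavaScript supplies the import
});

instance.exports.run();  // JavaScript calls into Wasm...
console.log(logged);     // ...which called back out: [ 42 ]
```

With WASI, this hand-wired import object is what gets standardized away: the runtime supplies a well-known set of system-interface imports instead of every project inventing its own.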
    <div>
      <h3>The WebAssembly System Interface (WASI)</h3>
    </div>
    <p>WASI aims to provide a standard interface that any language compiling to WebAssembly can target. You can read the original post by Lin Clark <a href="https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/">here</a>, which gives an excellent introduction – code cartoons and all. In a nutshell, Lin describes WebAssembly as an <i>assembly language</i> for a ‘conceptual machine’, whereas WASI is a <i>systems interface</i> for a ‘conceptual operating system’.</p><p>This standardization of the system interface has paved the way for existing toolchains to cross-compile existing codebases to the wasm32-wasi target. A tremendous amount of progress has already been made, specifically within Clang/LLVM via the <a href="https://github.com/WebAssembly/wasi-sdk">wasi-sdk</a> and <a href="https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html">Rust</a> toolchains. These toolchains leverage a version of <a href="https://github.com/WebAssembly/wasi-libc">libc</a>, built on top of WASI ‘system calls’, which provides POSIX-standard API calls. There are even basic implementations in more fringe toolchains such as <a href="https://tinygo.org/docs/guides/webassembly/">TinyGo</a> and <a href="https://swiftwasm.org/">SwiftWasm</a>.</p><p>Practically speaking, this means that you can now write applications that interoperate not only with any WebAssembly runtime implementing the standard, but also with any POSIX-compliant system: the exact same ‘Hello World!’ that runs on your local Linux/Mac/Windows WSL machine runs, unchanged, on a WASI runtime.</p>
    <div>
      <h3>Show me the code</h3>
    </div>
    <p>WASI sounds great, but does it actually make my life easier? You tell us. Let’s run through an example of how this would work in practice.</p><p>First, let’s generate a basic Rust “Hello, world!” application, compile it, and run it.</p>
            <pre><code>$ cargo new hello_world
$ cd ./hello_world
$ cargo build --release
   Compiling hello_world v0.1.0 (/Users/benyule/hello_world)
    Finished release [optimized] target(s) in 0.28s
$ ./target/release/hello_world
Hello, world!</code></pre>
            <p>It doesn’t get much simpler than this. You’ll notice we only define a <code>main()</code> function followed by a <code>println!</code> to stdout.</p>
            <pre><code>fn main() {
    println!("Hello, world!");
}</code></pre>
            <p>Now, let’s take the exact same program, compile it against the wasm32-wasi target, and run it in an off-the-shelf wasm runtime such as <a href="https://wasmtime.dev/">Wasmtime</a>.</p>
            <pre><code>$ cargo build --target wasm32-wasi --release
$ wasmtime target/wasm32-wasi/release/hello_world.wasm

Hello, world!</code></pre>
            <p>Neat! The same code compiles and runs in multiple POSIX environments.</p><p>Finally, let’s take the binary we just generated for Wasmtime, but instead publish it to Workers using Wrangler2.</p>
            <pre><code>$ npx wrangler@wasm dev target/wasm32-wasi/release/hello_world.wasm
$ curl http://localhost:8787/

Hello, world!</code></pre>
            <p>Unsurprisingly, it works! The same code is compatible with multiple POSIX environments, and the same binary is compatible across multiple WASM runtimes.</p>
    <div>
      <h3>Running your CLI apps in the cloud</h3>
    </div>
    <p>The attentive reader may notice that we played a small trick with the HTTP request made via cURL. In this example, we actually stream stdin and stdout to/from the Worker using the HTTP request and response body respectively. This pattern enables some really interesting use cases; specifically, programs designed to run on the command line can be deployed as 'services' to the cloud.</p><p><a href="https://github.com/sharkdp/hexyl">Hexyl</a>, a command-line hex viewer, is an example that works completely out of the box. Here, we 'cat' a binary file on our local machine and 'pipe' the output to curl, which will then POST that output to our service and stream the result back. Following the steps we used to compile our 'Hello World!', we can compile hexyl.</p>
            <pre><code>$ git clone git@github.com:sharkdp/hexyl.git
$ cd ./hexyl
$ cargo build --target wasm32-wasi --release</code></pre>
            <p>And without further modification we were able to take a real-world program and create something we can now run or deploy. Again, let's tell wrangler2 to preview hexyl, but this time give it some input.</p>
            <pre><code>$ npx wrangler@wasm dev target/wasm32-wasi/release/hexyl.wasm
$ echo "Hello, world\!" | curl -X POST --data-binary @- http://localhost:8787

┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐
│00000000│ 48 65 6c 6c 6f 20 77 6f ┊ 72 6c 64 21 0a          │Hello wo┊rld!_   │
└────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘
</code></pre>
            <p>Give it a try yourself by hitting <a href="https://hexyl.examples.workers.dev">https://hexyl.examples.workers.dev</a>.</p>
            <pre><code>echo "Hello world\!" | curl https://hexyl.examples.workers.dev/ -X POST --data-binary @- --output -</code></pre>
            <p>A more useful example, though one that requires a bit more work, would be to deploy a utility such as swc (swc.rs) to the cloud and use it as an on-demand JavaScript/TypeScript transpilation service. Here, we have a few extra steps to ensure that the compiled output is as small as possible, but it otherwise runs out of the box. Those steps are detailed in <a href="https://github.com/zebp/wasi-example-swc">https://github.com/zebp/wasi-example-swc</a>, but for now let’s gloss over that and interact with the hosted example.</p>
            <pre><code>$ echo "const x = (x, y) =&gt; x * y;" | curl -X POST --data-binary @- https://swc-wasi.examples.workers.dev/ --output -

var x=function(a,b){return a*b}</code></pre>
            <p>Finally, we can do the same with C/C++, though it requires a little more lifting to get our Makefile right. Here we show an example of compiling zstd and deploying it as a streaming compression service.</p><p><a href="https://github.com/zebp/wasi-example-zstd">https://github.com/zebp/wasi-example-zstd</a></p>
            <pre><code>$ echo "Hello world\!" | curl https://zstd.examples.workers.dev/ -s -X POST --data-binary @- | file -</code></pre>
            
    <div>
      <h3>What if I want to use WASI from within a JavaScript Worker?</h3>
    </div>
    <p>Wrangler can make it really easy to deploy code without having to worry about the Workers ecosystem, but in some cases you may actually want to invoke your WASI-based WASM module from JavaScript. This can be achieved with the following simple boilerplate. An updated README will be kept at <a href="https://github.com/cloudflare/workers-wasi">https://github.com/cloudflare/workers-wasi</a>.</p>
            <pre><code>import { WASI } from "@cloudflare/workers-wasi";
import demoWasm from "./demo.wasm";

export default {
  async fetch(request, _env, ctx) {
    // Creates a TransformStream we can use to pipe our stdout to our response body.
    const stdout = new TransformStream();
    const wasi = new WASI({
      args: [],
      stdin: request.body,
      stdout: stdout.writable,
    });

    // Instantiate our WASM with our demo module and our configured WASI import.
    const instance = new WebAssembly.Instance(demoWasm, {
      wasi_snapshot_preview1: wasi.wasiImport,
    });

    // Keep our worker alive until the WASM has finished executing.
    ctx.waitUntil(wasi.start(instance));

    // Finally, let's reply with the WASM's output.
    return new Response(stdout.readable);
  },
};</code></pre>
            <p>Now with our JavaScript boilerplate and wasm, we can easily deploy our worker with Wrangler’s WASM feature.</p>
            <pre><code>$ npx wrangler publish
Total Upload: 473.89 KiB / gzip: 163.79 KiB
Uploaded wasi-javascript (2.75 sec)
Published wasi-javascript (0.30 sec)
  wasi-javascript.zeb.workers.dev</code></pre>
            
    <div>
      <h2>Back to the future</h2>
    </div>
    <p>Those of you who have been around for the better part of the past couple of decades may notice that this looks very similar to RFC 3875, better known as CGI (the Common Gateway Interface). While our example here certainly does not conform to the specification, you can imagine how this pattern could be extended to turn the stdin of a basic 'command line' application into a full-blown HTTP handler.</p><p>We are thrilled to learn where developers take this from here. Share what you build with us on <a href="https://discord.com/invite/cloudflaredev">Discord</a> or <a href="https://twitter.com/CloudflareDev">Twitter</a>!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4zx8IwlpdBe2PLRTAA7xje</guid>
            <dc:creator>Ben Yule</dc:creator>
            <dc:creator>Zebulon Piasecki</dc:creator>
        </item>
        <item>
            <title><![CDATA[Native Rust support on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/workers-rust-sdk/</link>
            <pubDate>Thu, 09 Sep 2021 13:00:02 GMT</pubDate>
            <description><![CDATA[ Introducing the Cloudflare Workers Rust SDK: write your entire Worker in Rust, no JavaScript required.
 ]]></description>
            <content:encoded><![CDATA[ <p>You can now write Cloudflare Workers in 100% Rust, no JavaScript required. Try it out: <a href="https://github.com/cloudflare/workers-rs">https://github.com/cloudflare/workers-rs</a></p><p>Cloudflare Workers has long supported the building blocks to run many languages using WebAssembly. However, there has always been a challenging “trampoline” step required to allow languages like Rust to talk to JavaScript APIs such as <code><i>fetch()</i></code>.</p><p>In addition to the sizable amount of boilerplate needed, lots of “off the shelf” bindings between languages don’t include support for Cloudflare APIs such as KV and Durable Objects. What we wanted was a way to write a Worker in idiomatic Rust, quickly, and without needing knowledge of the host JavaScript environment. While we had a nice “starter” template that made it easy enough to pull in some Rust libraries and use them from JavaScript, the barrier was still too high if your goal was to write a full program in Rust and ship it to our edge.</p><p>Not anymore!</p><p>Introducing the <b><code>worker</code></b> crate, available on <a href="https://github.com/cloudflare/workers-rs">GitHub</a> and <a href="https://crates.io/crates/worker">crates.io</a>, which makes Rust developers feel right at home on the Workers platform by running code inside the V8 WebAssembly engine. In the snippet below, you can see how the worker crate does all the heavy lifting by providing Rustacean-friendly Workers APIs.</p>
            <pre><code>use worker::*;

#[event(fetch)]
pub async fn main(req: Request, env: Env) -&gt; Result&lt;Response&gt; {
    console_log!(
        "{} {}, located at: {:?}, within: {}",
        req.method().to_string(),
        req.path(),
        req.cf().coordinates().unwrap_or_default(),
        req.cf().region().unwrap_or("unknown region".into())
    );

    if !matches!(req.method(), Method::Post) {
        return Response::error("Method Not Allowed", 405);
    }

    if let Some(file) = req.form_data().await?.get("file") {
        return match file {
            FormEntry::File(buf) =&gt; {
                Response::ok(&amp;format!("size = {}", buf.bytes().await?.len()))
            }
            _ =&gt; Response::error("`file` part of POST form must be a file", 400),
        };
    }

    Response::error("Bad Request", 400)
}</code></pre>
            <p>Get your own Worker in Rust started with a single command:</p>
            <pre><code># see installation instructions for our `wrangler` CLI at https://github.com/cloudflare/wrangler
# (requires v1.19.2 or higher)
$ wrangler generate --type=rust my-project</code></pre>
            <p>We’ve stripped away all the glue code, provided an ergonomic HTTP framework, and baked in what you need to build small scripts or full-fledged Workers apps in Rust. You’ll find <i>fetch</i>, a router, easy-to-use HTTP functionality, Workers <a href="https://developers.cloudflare.com/workers/runtime-apis/kv">KV stores</a> and <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects">Durable Objects</a>, secrets, and environment variables too. It’s all <a href="https://github.com/cloudflare/workers-rs">open source</a>, and we’d love <a href="https://github.com/cloudflare/workers-rs/issues">your feedback</a>!</p>
    <div>
      <h3>Why are we doing this?</h3>
    </div>
    <p>Cloudflare Workers is on a mission to simplify the developer experience. When we took a hard look at the previous experience writing non-JavaScript Workers, we knew we could do better. Rust happens to be a great language for us to kick-start our mission: it has first-class support for WebAssembly, and a wonderful, growing ecosystem. Tools like <a href="https://github.com/rustwasm/wasm-bindgen"><code><i>wasm-bindgen</i></code></a>, libraries like <a href="https://crates.io/crates/web-sys"><code><i>web-sys</i></code></a>, and Rust’s powerful macro system gave us a significant head start. Plus, Rust’s popularity is growing rapidly, and if our own <a href="https://github.com/orgs/cloudflare/repositories?q=&amp;type=source&amp;language=rust&amp;sort=">use of Rust at Cloudflare</a> is any indication, there is no question that Rust is staking its claim as a must-have in the developer toolbox.</p><p>So give it a try, leave some feedback, even <a href="https://github.com/cloudflare/workers-rs">open a PR</a>! By the way, we’re always on the lookout for great people to join us, and we are hiring for many open roles (including Rust engineers!) — <a href="https://www.cloudflare.com/careers/">take a look</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6OsZbm2qTDYQut1ezEzb8p</guid>
            <dc:creator>Steve Manuel</dc:creator>
        </item>
        <item>
            <title><![CDATA[Let's build a Cloudflare Worker with WebAssembly and Haskell]]></title>
            <link>https://blog.cloudflare.com/cloudflare-worker-with-webassembly-and-haskell/</link>
            <pubDate>Tue, 06 Oct 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Let's combine the power of Haskell and WebAssembly in a Cloudflare Worker! ]]></description>
            <content:encoded><![CDATA[ <p><i>This is a guest post by Cristhian Motoche of </i><a href="https://www.stackbuilders.com/"><i>Stack Builders</i></a><i>.</i></p><p>At <a href="https://www.stackbuilders.com/">Stack Builders</a>, we believe that Haskell’s system of expressive static types offers many benefits to the software industry and the world-wide community that depends on our services. In order to fully realize these benefits, it is necessary to have proper training and access to an ecosystem that allows for reliable deployment of services. In exploring the tools that help us run our systems based on Haskell, our developer Cristhian Motoche has created a tutorial that shows how to compile Haskell to WebAssembly using Asterius for deployment on Cloudflare.</p>
    <div>
      <h3>What is a Cloudflare Worker?</h3>
    </div>
    <p><a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a> is a serverless platform that allows us to run our code on the edge of the Cloudflare infrastructure. It's built on Google V8, so it’s possible to write functionality in JavaScript or any other language that targets WebAssembly.</p><p><a href="https://webassembly.org/">WebAssembly</a> is a portable binary instruction format that can be executed fast in a memory-safe sandboxed environment. For this reason, it’s especially useful for tasks that need to perform resource-demanding and self-contained operations.</p>
    <div>
      <h3>Why use Haskell to target WebAssembly?</h3>
    </div>
    <p>Haskell is a pure functional <a href="https://github.com/appcypher/awesome-wasm-langs">language</a> that can target WebAssembly. As such, it helps developers break down complex problems into small functions that can later be composed to perform complex tasks. Additionally, it’s statically typed and has type inference, so it will complain about type errors at compile time. Because of that and <a href="https://wiki.haskell.org/Why_Haskell_matters">much more</a>, Haskell is a good source language for targeting WebAssembly.</p>
    <div>
      <h3>From Haskell to WebAssembly</h3>
    </div>
    <p>We’ll use <a href="https://asterius.netlify.app/">Asterius</a> to target WebAssembly from Haskell. It’s a well-documented tool that is updated often and supports a lot of Haskell features.</p><p>First, as suggested in the <a href="https://asterius.netlify.app/images.html#using-prebuilt-container-images">documentation</a>, we’ll use podman to pull the Asterius prebuilt container from Docker Hub. In this tutorial, we will use Asterius version <a href="https://hub.docker.com/layers/terrorjack/asterius/200617/images/sha256-8e6009198c3940d1b4938b0ab6c0b119d4fce3d7ac0f6336214e758f66887b80?context=explore">200617</a>, which works with GHC 8.8.</p>
            <pre><code>podman run -it --rm -v $(pwd):/workspace -w /workspace terrorjack/asterius:200617</code></pre>
            <p>Now we’ll create a Haskell module in a file called <code>fact.hs</code> that will export a pure function:</p>
            <pre><code>module Factorial (fact) where

fact :: Int -&gt; Int
fact n = go n 1
  where
    go 0 acc = acc
    go n acc = go (n - 1) (n*acc)

foreign export javascript "fact" fact :: Int -&gt; Int</code></pre>
            <p>In this module, we define a pure function called <code>fact</code>, optimized with tail recursion and exported using the <a href="https://asterius.netlify.app/jsffi.html">Asterius JavaScript FFI</a>, so that it can be called when a WebAssembly module is instantiated in JavaScript.</p><p>Next, we’ll create a JavaScript file called <code>fact_node.mjs</code> that contains the following code:</p>
            <pre><code>import * as rts from "./rts.mjs";
import module from "./fact.wasm.mjs";
import req from "./fact.req.mjs";

async function handleModule(m) {
  const i = await rts.newAsteriusInstance(Object.assign(req, {module: m}));
  const result = await i.exports.fact(5);
  console.log(result);
}

module.then(handleModule);</code></pre>
            <p>This code imports <code>rts.mjs</code> (the common runtime), the WebAssembly loaders, and the required parameters for the Asterius instance. It creates a new Asterius instance, calls the exported function <code>fact</code> with the input <code>5</code>, and prints out the result.</p><p>You have probably noticed that <code>fact</code> is called <b>asynchronously</b>. This happens with any function exported by Asterius, even if it’s a pure function.</p><p>Next, we’ll compile this code using the Asterius command line interface (CLI) <code>ahc-link</code>, and we’ll run the JavaScript code in Node:</p>
            <pre><code>ahc-link \
  --input-hs fact.hs \
  --no-main \
  --export-function=fact \
  --run \
  --input-mjs fact_node.mjs \
  --output-dir=node</code></pre>
            <p>This command takes <code>fact.hs</code> as the Haskell input file, specifies that no <code>main</code> function is exported, and exports the <code>fact</code> function. Additionally, it takes <code>fact_node.mjs</code> as the entry JavaScript file, replacing the one generated by default, and it places the generated code in a directory called <code>node</code>.</p>
            <pre><code>[INFO] Compiling fact.hs to WebAssembly
...
[INFO] Running node/fact.mjs
120</code></pre>
            <p>As you can see, the code is executed in <code>node</code> and the result of <code>fact</code> is printed in the console.</p>
    <div>
      <h3>Push your code to Cloudflare Workers</h3>
    </div>
    <p>Now we’ll set everything up for deploying our code to Cloudflare Workers.</p><p>First, let’s add a <code>metadata.json</code> file with the following content:</p>
            <pre><code>{
  "body_part": "script",
  "bindings": [
    {
      "type": "wasm_module",
      "name": "WASM",
      "part": "wasm"
    }
  ]
}</code></pre>
            <p>This file is needed to specify the <code>wasm_module</code> binding. The <code>name</code> value corresponds to the global variable used to access the WebAssembly module from your Worker code. In our example, it’s going to have the name <code>WASM</code>.</p><p>Our next step is to define the entry point of the Workers script.</p>
            <pre><code>import * as rts from "./rts.mjs";
import fact from "./fact.req.mjs";

async function handleFact(param) {
  const i = await rts.newAsteriusInstance(Object.assign(fact, { module: WASM }));
  return await i.exports.fact(param);
}

async function handleRequest(req) {
  if (req.method == "POST") {
    const data = await req.formData();
    const param = parseInt(data.get("param"), 10);
    if (!Number.isNaN(param)) {
      const resp = await handleFact(param);
      return new Response(resp, {status: 200});
    } else {
      return new Response(
        "Expecting 'param' in request to be an integer",
        {status: 400},
      );
    }
  }
  return new Response("Method not allowed", {status: 405});
}

addEventListener("fetch", event =&gt; {
  event.respondWith(handleRequest(event.request))
})</code></pre>
            <p>There are a few interesting things to point out in this code:</p><ol><li><p>We import <code>rts.mjs</code> and <code>fact.req.mjs</code> to load the exported functions from our WebAssembly module.</p></li><li><p><code>handleFact</code> is an asynchronous function that creates an Asterius instance from the <code>WASM</code> module (exposed as a Workers global variable) and calls the exported function <code>fact</code> with some input.</p></li><li><p><code>handleRequest</code> handles the Worker’s request. It expects a <code>POST</code> request with a parameter called <code>param</code> in the request body. If <code>param</code> is a number, it calls <code>handleFact</code> and responds with the result of <code>fact</code>.</p></li><li><p>Using the Service Workers API, we listen for the <code>fetch</code> event and respond with the result of <code>handleRequest</code>.</p></li></ol><p>We need to build and bundle our code into a single JavaScript file, because Workers accepts only one script per Worker. Fortunately, Asterius comes with Parcel.js, which bundles all the necessary code into a single JavaScript file.</p>
            <pre><code>ahc-link \
  --input-hs fact.hs \
  --no-main \
  --export-function=fact \
  --input-mjs fact_cfw.mjs \
  --bundle \
  --browser \
  --output-dir worker</code></pre>
            <p><code>ahc-link</code> will generate some files inside a directory called <code>worker</code>. For our Worker, we’re only interested in the JavaScript file (<code>fact.js</code>) and the WebAssembly module (<code>fact.wasm</code>). Now it’s time to upload both of them to Workers, which we can do with the REST API.</p><p>Make sure you have an account ID (<code>$CF_ACCOUNT_ID</code>), a name for your script (<code>$SCRIPT_NAME</code>), and an API token (<code>$CF_API_TOKEN</code>):</p>
            <pre><code>cd worker
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/workers/scripts/$SCRIPT_NAME" \
     -H  "Authorization: Bearer $CF_API_TOKEN" \
     -F "metadata=@metadata.json;type=application/json" \
     -F "script=@fact.js;type=application/javascript" \
     -F "wasm=@fact.wasm;type=application/wasm"</code></pre>
            <p>Now, visit the Workers UI, where you can use the editor to view, edit, and test the script. You can also deploy it to a <code>workers.dev</code> subdomain (<code>$CFW_SUBDOMAIN</code>); in that case, you can test it with a simple request:</p>
            <pre><code>curl -X POST $CFW_SUBDOMAIN \
       -H 'Content-Type: application/x-www-form-urlencoded' \
       --data 'param=5'</code></pre>
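            <p>Before deploying, the request-handling path can also be exercised locally by stubbing out the wasm-backed <code>handleFact</code>. This is a hypothetical sketch, not part of the generated output; it assumes the standard <code>Request</code>/<code>Response</code> classes available globally in recent Node.js:</p>

```javascript
// Stub for the wasm-backed factorial, so only the routing logic is exercised.
async function handleFact(param) {
  let acc = 1;
  for (let i = 2; i <= param; i++) acc *= i;
  return String(acc);
}

// Request handling mirroring the Worker script above (with an explicit NaN check).
async function handleRequest(req) {
  if (req.method === "POST") {
    const data = await req.formData();
    const param = parseInt(data.get("param"), 10);
    if (!Number.isNaN(param)) {
      return new Response(await handleFact(param), { status: 200 });
    }
    return new Response("Expecting 'param' in request to be an integer", { status: 400 });
  }
  return new Response("Method not allowed", { status: 405 });
}

// Drive it with a synthetic POST, mirroring the curl command above.
const resp = await handleRequest(new Request("https://example.com/", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "param=5",
}));
console.log(resp.status, await resp.text()); // 200 120
```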
            
    <div>
      <h3>Beyond a simple Haskell file</h3>
      <a href="#beyond-a-simple-haskell-file">
        
      </a>
    </div>
    <p>So far, we’ve created a WebAssembly module that exports a pure Haskell function, and we’ve run it on Workers. However, we can also create and build a Cabal project using the Asterius <code>ahc-cabal</code> CLI, and then use <code>ahc-dist</code> to compile it to WebAssembly.</p><p>First, let’s create the project:</p>
            <pre><code>ahc-cabal init -m -p cabal-cfw-example</code></pre>
            <p>Then, let’s add some dependencies to our cabal project. The cabal file will look like this:</p>
            <pre><code>cabal-version:       2.4
name:                cabal-cfw-example
version:             0.1.0.0
license:             NONE

executable cabal-cfw-example
  ghc-options: -optl--export-function=handleReq
  main-is:             Main.hs
  build-depends:
    base,
    bytestring,
    aeson &gt;=1.5 &amp;&amp; &lt; 1.6,
    text
  default-language:    Haskell2010</code></pre>
            <p>It’s a simple cabal file, except for the <code>-optl--export-function=handleReq</code> GHC flag. This is <a href="https://github.com/tweag/asterius/issues/362#issuecomment-561576162">necessary</a> when exporting a function from a cabal project.</p><p>In this example, we’ll define a simple <code>User</code> record and derive its JSON instances automatically using Template Haskell!</p>
            <pre><code>{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell   #-}

module Main where

import           Asterius.Types
import           Control.Monad
import           Data.Aeson                 hiding (Object)
import qualified Data.Aeson                 as A
import           Data.Aeson.TH
import qualified Data.ByteString.Lazy.Char8 as B8
import           Data.Text


main :: IO ()
main = putStrLn "CFW Cabal"

data User =
  User
    { name :: Text
    , age  :: Int
    }

$(deriveJSON defaultOptions 'User)</code></pre>
            <p><b>NOTE:</b> It’s not necessary to create a Cabal project for this example, because the prebuilt container comes with a lot of <a href="https://github.com/tweag/asterius/issues/354">prebuilt packages</a> (<code>aeson</code> included). Nevertheless, it helps us show the potential of <code>ahc-cabal</code> and <code>ahc-dist</code>.</p><p>Next, we’ll define <code>handleReq</code>, which we’ll export using the JavaScript FFI just like we did before.</p>
            <pre><code>handleReq :: JSString -&gt; JSString -&gt; IO JSObject
handleReq method rawBody =
  case fromJSString method of
    "POST" -&gt;
      let eitherUser :: Either String User
          eitherUser = eitherDecode (B8.pack $ fromJSString rawBody)
       in case eitherUser of
            Right _  -&gt; js_new_response (toJSString "Success!") 200
            Left err -&gt; js_new_response (toJSString err) 400
    _ -&gt; js_new_response (toJSString "Not a valid method") 405

foreign export javascript "handleReq" handleReq :: JSString -&gt; JSString -&gt; IO JSObject

foreign import javascript "new Response($1, {\"status\": $2})"
  js_new_response :: JSString -&gt; Int -&gt; IO JSObject</code></pre>
            <p>This time, we define <code>js_new_response</code>, a foreign import that builds a JavaScript <code>Response</code> object from Haskell. <code>handleReq</code> takes two string parameters from JavaScript and uses them to prepare the response.</p><p>Now let’s build and install the binary in the current directory:</p>
            <pre><code>ahc-cabal new-install --installdir . --overwrite-policy=always</code></pre>
            <p>This will generate a binary for our executable, called <code>cabal-cfw-example</code>. We’re going to use <code>ahc-dist</code> to take that binary and target WebAssembly:</p>
            <pre><code>ahc-dist --input-exe cabal-cfw-example --export-function=handleReq --no-main --input-mjs cabal_cfw_example.mjs --bundle --browser</code></pre>
            <p><code>cabal_cfw_example.mjs</code> contains the following code:</p>
            <pre><code>import * as rts from "./rts.mjs";
import cabal_cfw_example from "./cabal_cfw_example.req.mjs";

async function handleRequest(req) {
  const i = await rts.newAsteriusInstance(Object.assign(cabal_cfw_example, { module: WASM }));
  const body = await req.text();
  return await i.exports.handleReq(req.method, body);
}

addEventListener("fetch", event =&gt; {
  event.respondWith(handleRequest(event.request))
});</code></pre>
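            <p>For illustration, the validation performed by <code>handleReq</code> can be mirrored in plain JavaScript (a hypothetical sketch of the same logic, using plain objects in place of <code>Response</code>; it is not the generated FFI code):</p>

```javascript
// Plain-JavaScript mirror of the Haskell handleReq:
// a POST whose body decodes to a valid User ({name, age}) gets a 200.
function handleReq(method, rawBody) {
  if (method !== "POST") {
    return { body: "Not a valid method", status: 405 };
  }
  try {
    const user = JSON.parse(rawBody);
    if (user && typeof user.name === "string" && Number.isInteger(user.age)) {
      return { body: "Success!", status: 200 };
    }
    return { body: "not a valid User", status: 400 };
  } catch (err) {
    // Parse failures map to a 400, like aeson's Left branch.
    return { body: String(err), status: 400 };
  }
}

console.log(handleReq("POST", '{"name":"Alice","age":30}').status); // 200
console.log(handleReq("GET", "").status);                           // 405
```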
            <p>Finally, we can deploy our code to Workers by defining a <code>metadata.json</code> file and uploading the script and the WebAssembly module through the Workers API, as we did before.</p>
    <div>
      <h3>Caveats</h3>
      <a href="#caveats">
        
      </a>
    </div>
    <p>Workers <a href="https://developers.cloudflare.com/workers/about/limits/#script-size">limits</a> the file size of your JavaScript and WebAssembly. Therefore, you need to be careful with any dependencies you add.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Stack Builders builds better software for better living through technologies like expressive static types. We used Asterius to compile Haskell to WebAssembly and deployed it to Cloudflare Workers using the Workers API. Asterius supports a lot of Haskell features (e.g. Template Haskell) and provides an easy-to-use JavaScript FFI to interact with JavaScript. Additionally, it offers prebuilt containers with many Haskell packages, so you can start writing a script right away.</p><p>Following this approach, we can write functional, type-safe code in Haskell, compile it to WebAssembly, and publish it to Workers, which runs at the edge of Cloudflare’s network.</p><p>For more content, check out our <a href="https://www.stackbuilders.com/news/page/1">blogs</a> and <a href="https://www.stackbuilders.com/tutorials/">tutorials</a>!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">5AVLtoBd5aIK5zHzElyM9s</guid>
            <dc:creator>Cristhian Motoche</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Wrangler CLI: Deploying Rust with WASM on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/introducing-wrangler-cli/</link>
            <pubDate>Thu, 28 Mar 2019 16:08:45 GMT</pubDate>
            <description><![CDATA[ Today, we're open sourcing and announcing wrangler, a CLI tool for building, previewing, and publishing Rust and WebAssembly Cloudflare Workers. If that sounds like some word salad to you, that's a reasonable reaction. ]]></description>
            <content:encoded><![CDATA[ <p>Wrangler is a CLI tool for building Rust WebAssembly Workers</p><p>Today, we're open sourcing and announcing <code>wrangler</code>, a CLI tool for building, previewing, and publishing Rust and WebAssembly Cloudflare Workers.</p><p>If that sounds like some word salad to you, that's a reasonable reaction. All three of the technologies involved are relatively new and upcoming: WebAssembly, Rust, and Cloudflare Workers.</p>
    <div>
      <h3>Why WebAssembly?</h3>
      <a href="#why-webassembly">
        
      </a>
    </div>
    <p>Cloudflare's mission is to help build a better Internet. We see Workers as an extension of the already incredibly powerful Web Platform, where JavaScript has allowed users to go from building small bits of interactivity to building full applications. Node.js first extended this from the client to the server, unifying web application development around a single language: JavaScript. By choosing to use V8 isolates (the technology that powers both Node.js and the most popular browser, Chrome), we sought to make Workers a fully compatible, new platform for the Web, eliding the distinction between server and client. By leveraging our large global network of servers, Workers allows users to run code as close as possible to end users, eliminating the latency associated with server-side logic or large client-side bundles.</p><p>But not everyone wants to write JavaScript, and JavaScript is not well suited to every application. WebAssembly emerged in 2017 as a way to further extend the Web Platform to applications, such as games and other resource-intensive programs, that were previously excluded by the limitations of JavaScript.</p><p>V8 isolates give us both JavaScript and WebAssembly. This means that you can leverage the prototyping power and extensive ecosystem of JavaScript, alongside the power of WebAssembly, which in addition to fast, predictable performance also opens up the wealth of libraries written in languages that can target WebAssembly, like C, C++, and Rust.</p><p>WebAssembly on Workers eliminates a trade-off that was once considered irresolvable: low latency, high performance, and Web Platform compatibility; pick three.</p>
    <div>
      <h3>Why Rust?</h3>
      <a href="#why-rust">
        
      </a>
    </div>
    <p>Rust is a relatively new programming language with the goal of "empowering everyone to build reliable and efficient software". It's a systems-level language that offers its users a high degree of control, while still seeking to offer an ergonomic, friendly, and modern development experience.</p><p>The Rust-WebAssembly Working Group made incredible efforts last year to build out a suite of developer tools for WebAssembly. At Cloudflare, we're excited to support those efforts with paid developer hours and to leverage them to empower our users to start harnessing the power of WebAssembly on Workers now.</p><p>There are several other toolchains, including Emscripten (C, C++) and AssemblyScript (TypeScript), that we're eager to support in the future. Rust is just the beginning (but we think it's a pretty great place to start!).</p>
    <div>
      <h3>Why now?</h3>
      <a href="#why-now">
        
      </a>
    </div>
    <p>When developing new, highly technical products, it's easy to get caught up in the promise and vision, often to the detriment of getting the technology into the hands of the folks who will be using it every day in the future.</p><p>We want to broaden the community that has access at the early stages of this technology, and who can bring their valuable perspectives and experience to help us shape the future of these tools.</p><p>The first step to accomplishing that is building the tools that can <i>enable</i> folks to engage with the new platform. <code>wrangler</code> is that enabling tool. It's just enough to unblock users who were previously unable to interact with the platform because there was no paved path.</p><p>We don't plan to stop here. Folks will rightly note that there are some critical developer workflow steps missing from <code>wrangler</code>: linting, testing, benchmarking, and size profiling are a few that come to mind. We've got some big plans and we're excited to build out more, but we're eager to release this now to enable <i>more</i> folks to participate in the process. The best way to know what developers need is to ask and listen: by creating and open sourcing <code>wrangler</code> at such an early phase, we're hoping to shorten the feedback cycle between product and user, and build the <i>right</i> thing, <i>faster</i>.</p><p>You can install <code>wrangler</code> using <code>cargo</code>:</p>
            <pre><code>cargo install wrangler</code></pre>
            <p>To get started, head on over to the <a href="https://developers.cloudflare.com/workers/cli-wrangler">Cloudflare docs</a> and follow the <a href="https://developers.cloudflare.com/workers/cli-wrangler/install-update">tutorial</a>. You'll build and preview a Cloudflare Worker that uses Rust compiled to WebAssembly to parse <a href="https://blog.cloudflare.com/markdown-for-agents/">Markdown</a>.</p><p>Don't stop there though. Please check out <a href="https://github.com/cloudflare/wrangler">the repo</a>, <a href="https://github.com/cloudflare/wrangler/issues">file some issues</a>, build project templates, and write about your experience- we want to hear from you.</p><p>We're really excited to see what y'all build!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[CLI]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">36wNfaZBn3VBQ13FPMMsJW</guid>
            <dc:creator>Ashley Williams</dc:creator>
        </item>
    </channel>
</rss>