
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 08 Apr 2026 23:30:07 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building a serverless, post-quantum Matrix homeserver]]></title>
            <link>https://blog.cloudflare.com/serverless-matrix-homeserver-workers/</link>
            <pubDate>Tue, 27 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ As a proof of concept, we ported a Matrix homeserver to Cloudflare Workers — delivering encrypted messaging at the edge with automatic post-quantum cryptography. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.</i></sup></p><p>Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide. </p><p>For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.</p><p><b>But there is a "tax" to running it. </b>Traditionally, operating a Matrix <a href="https://matrix.org/homeserver/about/"><u>homeserver</u></a> has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxies</u></a>, and handle rotation for <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot or a little.</p><p>We wanted to see if we could eliminate that tax entirely.</p><p><b>Spoiler: We could.</b> In this post, we’ll explain how we ported a Matrix homeserver to <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a>. 
The resulting proof of concept is a serverless architecture where operations disappear, costs scale to zero when idle, and every connection is protected by <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> by default. You can view the source code and <a href="https://github.com/nkuntz1934/matrix-workers"><u>deploy your own instance directly from GitHub</u></a>.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/nkuntz1934/matrix-workers"><img src="https://deploy.workers.cloudflare.com/button" /></a>
    <div>
      <h2>From Synapse to Workers</h2>
      <a href="#from-synapse-to-workers">
        
      </a>
    </div>
    <p>Our starting point was <a href="https://github.com/matrix-org/synapse"><u>Synapse</u></a>, the Python-based reference Matrix homeserver designed for traditional deployments: PostgreSQL for persistence, Redis for caching, filesystem for media.</p><p>Porting it to Workers meant questioning every storage assumption we’d taken for granted.</p><p>The challenge was storage. Traditional homeservers assume strong consistency via a central SQL database. Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> offers a powerful alternative. This primitive gives us the strong consistency and atomicity required for Matrix state resolution, while still allowing the application to run at the edge.</p><p>We ported the core Matrix protocol logic — event authorization, room state resolution, cryptographic verification — to TypeScript using the Hono framework. D1 replaces PostgreSQL, KV replaces Redis, R2 replaces the filesystem, and Durable Objects handle real-time coordination.</p><p>Here’s how the mapping worked out:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JTja38UZRbFygluawrnz1/9bce290e3070155c734e874c17051551/BLOG-3101_2.png" />
          </figure>
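<p>As a rough sketch (the binding names and placeholder IDs here are illustrative, not taken from the project), this mapping can be expressed directly in a Wrangler configuration:</p>
            <pre><code>{
  "name": "matrix-homeserver",
  "compatibility_date": "2026-01-01",
  "d1_databases": [
    { "binding": "DB", "database_name": "matrix", "database_id": "&lt;id&gt;" }
  ],
  "kv_namespaces": [
    { "binding": "CACHE", "id": "&lt;id&gt;" }
  ],
  "r2_buckets": [
    { "binding": "MEDIA", "bucket_name": "matrix-media" }
  ],
  "durable_objects": {
    "bindings": [{ "name": "ROOM", "class_name": "RoomObject" }]
  }
}</code></pre>
            <p>Each binding appears on <code>env</code> at runtime, so the application swaps a database driver, a Redis client, and filesystem calls for <code>env.DB</code>, <code>env.CACHE</code>, and <code>env.MEDIA</code>.</p>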
    <div>
      <h2>From monolith to serverless</h2>
      <a href="#from-monolith-to-serverless">
        
      </a>
    </div>
    <p>Moving to Cloudflare Workers brings several advantages for a developer: simple deployment, lower costs, low latency, and built-in security.</p><p><b>Easy deployment: </b>A traditional Matrix deployment requires server provisioning, PostgreSQL administration, Redis cluster management, <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">TLS certificate renewal</a>, load balancer configuration, monitoring infrastructure, and on-call rotations.</p><p>With Workers, deployment is simply <code>wrangler deploy</code>. Workers handles TLS, load balancing, DDoS protection, and global distribution. </p><p><b>Usage-based costs: </b>Traditional homeservers cost money whether anyone is using them or not. Workers pricing is request-based, so you pay when you’re using it, but costs drop to near zero when everyone’s asleep. </p><p><b>Lower latency globally:</b> A traditional Matrix homeserver in us-east-1 adds 200ms+ latency for users in Asia or Europe. Workers, meanwhile, run in 300+ locations worldwide. When a user in Tokyo sends a message, the Worker executes in Tokyo. </p><p><b>Built-in security: </b>Matrix homeservers can be high-value targets: They handle encrypted communications, store message history, and authenticate users. Traditional deployments require careful hardening: firewall configuration, rate limiting, DDoS mitigation, WAF rules, IP reputation filtering.</p><p>Workers provide all of this by default. </p>
    <div>
      <h3>Post-quantum protection </h3>
      <a href="#post-quantum-protection">
        
      </a>
    </div>
    <p>Cloudflare deployed post-quantum hybrid key agreement across all <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> connections in <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>October 2022</u></a>. Every connection to our Worker automatically negotiates X25519MLKEM768 — a hybrid combining classical X25519 with ML-KEM, the post-quantum algorithm standardized by NIST.</p><p>Classical cryptography relies on mathematical problems that are hard for traditional computers but trivial for quantum computers running Shor’s algorithm. ML-KEM is based on lattice problems that remain hard even for quantum computers. The hybrid approach means both algorithms must fail for the connection to be compromised.</p>
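<p>The “both must fail” property is easy to see in miniature. The toy sketch below (not the actual TLS 1.3 key schedule) derives a session secret from both shared secrets, so recovering it requires breaking X25519 <i>and</i> ML-KEM:</p>
            <pre><code>import { createHash } from "node:crypto";

// Toy model of hybrid key agreement: hash the concatenation of the two
// shared secrets. An attacker who recovers only one of them still cannot
// reconstruct the combined session secret.
function combineSecrets(x25519Secret: Buffer, mlkemSecret: Buffer): Buffer {
  return createHash("sha256")
    .update(Buffer.concat([x25519Secret, mlkemSecret]))
    .digest();
}</code></pre>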
    <div>
      <h3>Following a message through the system</h3>
      <a href="#following-a-message-through-the-system">
        
      </a>
    </div>
    <p>Understanding where encryption happens matters for security architecture. When someone sends a message through our homeserver, here’s the actual path:</p><p>The sender’s client takes the plaintext message and encrypts it with Megolm — Matrix’s end-to-end encryption. This encrypted payload then gets wrapped in TLS for transport. On Cloudflare, that TLS connection uses X25519MLKEM768, making it quantum-resistant.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wGGYZ4LYspufH1c4psmL1/28acad8ab8e6535525dda413669c2d74/BLOG-3101_3.png" />
          </figure><p>The Worker terminates TLS, but what it receives is still encrypted — the Megolm ciphertext. We store that ciphertext in D1, index it by room and timestamp, and deliver it to recipients. But we never see the plaintext. The message “Hello, world” exists only on the sender’s device and the recipient’s device.</p><p>When the recipient syncs, the process reverses. They receive the encrypted payload over another quantum-resistant TLS connection, then decrypt locally with their Megolm session keys.</p>
    <div>
      <h3>Two layers, independent protection</h3>
      <a href="#two-layers-independent-protection">
        
      </a>
    </div>
    <p>This protects via two encryption layers that operate independently:</p><p>The <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>transport layer (TLS)</u></a> protects data in transit. It’s encrypted at the client and decrypted at the Cloudflare edge. With X25519MLKEM768, this layer is now post-quantum.</p><p>The <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/"><u>application layer</u></a> (Megolm E2EE) protects message content. It’s encrypted on the sender’s device and decrypted only on recipient devices. This uses classical Curve25519 cryptography.</p>
    <div>
      <h3>Who sees what</h3>
      <a href="#who-sees-what">
        
      </a>
    </div>
    <p>Any Matrix homeserver operator — whether running Synapse on a VPS or this implementation on Workers — can see metadata: which rooms exist, who’s in them, when messages were sent. But no one in the infrastructure chain can see the message content, because the E2EE payload is encrypted on sender devices before it ever hits the network. Cloudflare terminates TLS and passes requests to your Worker, but both see only Megolm ciphertext. Media in encrypted rooms is encrypted client-side before upload, and private keys never leave user devices.</p>
    <div>
      <h3>What traditional deployments would need</h3>
      <a href="#what-traditional-deployments-would-need">
        
      </a>
    </div>
    <p>Achieving post-quantum TLS on a traditional Matrix deployment would require upgrading OpenSSL or BoringSSL to a version supporting ML-KEM, configuring cipher suite preferences correctly, testing client compatibility across all Matrix apps, monitoring for TLS negotiation failures, staying current as PQC standards evolve, and handling clients that don’t support PQC gracefully.</p><p>With Workers, it’s automatic. Chrome, Firefox, and Edge all support X25519MLKEM768. Mobile apps using platform TLS stacks inherit this support. The security posture improves as Cloudflare’s <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>PQC</u></a> deployment expands — no action required on our part.</p>
    <div>
      <h2>The storage architecture that made it work</h2>
      <a href="#the-storage-architecture-that-made-it-work">
        
      </a>
    </div>
    <p>The key insight from porting Tuwunel was that different data needs different consistency guarantees. We use each Cloudflare primitive for what it does best.</p>
    <div>
      <h3>D1 for the data model</h3>
      <a href="#d1-for-the-data-model">
        
      </a>
    </div>
    <p>D1 stores everything that needs to survive restarts and support queries: users, rooms, events, device keys. Over 25 tables covering the full Matrix data model. </p>
            <pre><code>CREATE TABLE events (
	event_id TEXT PRIMARY KEY,
	room_id TEXT NOT NULL,
	sender TEXT NOT NULL,
	event_type TEXT NOT NULL,
	state_key TEXT,
	content TEXT NOT NULL,
	origin_server_ts INTEGER NOT NULL,
	depth INTEGER NOT NULL
);
</code></pre>
            <p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1’s SQLite foundation</a> meant we could port Tuwunel’s queries with minimal changes. Joins, indexes, and aggregations work as expected.</p><p>We learned one hard lesson: D1’s eventual consistency breaks foreign key constraints. A write to <code>rooms</code> might not be visible when a subsequent write to <code>events</code> checks the foreign key. We removed all foreign keys and enforce referential integrity in application code.</p>
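<p>In practice that means guarding writes explicitly. The sketch below (a simplified in-memory stand-in, not the project’s actual D1 code) shows the shape of the check: verify the referenced room exists before accepting the event.</p>
            <pre><code>// Simplified stand-in for the D1-backed store; the real code issues
// SELECT/INSERT statements instead of using in-memory collections.
interface EventRow {
  eventId: string;
  roomId: string;
}

class EventStore {
  private rooms = new Set&lt;string&gt;();
  private events = new Map&lt;string, EventRow&gt;();

  createRoom(roomId: string): void {
    this.rooms.add(roomId);
  }

  // The "foreign key" is enforced here, in application code, because a
  // SQL constraint may check against a not-yet-visible rooms row.
  insertEvent(row: EventRow): boolean {
    if (!this.rooms.has(row.roomId)) {
      return false; // reject the write: referenced room is unknown
    }
    this.events.set(row.eventId, row);
    return true;
  }
}</code></pre>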
    <div>
      <h3>KV for ephemeral state</h3>
      <a href="#kv-for-ephemeral-state">
        
      </a>
    </div>
    <p>KV handles ephemeral state with TTLs: OAuth authorization codes live for 10 minutes, while refresh tokens last for a session.</p>
            <pre><code>// Store OAuth code with 10-minute TTL
kv.put(&amp;format!("oauth_code:{}", code), &amp;token_data)?
	.expiration_ttl(600)
	.execute()
	.await?;</code></pre>
            <p>KV’s global distribution means OAuth flows work fast regardless of where users are located.</p>
    <div>
      <h3>R2 for media</h3>
      <a href="#r2-for-media">
        
      </a>
    </div>
    <p>Matrix media maps directly to R2, so you can upload an image, get back a content-addressed URL – and egress is free.</p>
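<p>Content addressing keeps the media store simple: the object key is derived from the bytes themselves, so identical uploads deduplicate for free. A minimal sketch of the idea (an in-memory stand-in rather than the real R2 binding):</p>
            <pre><code>import { createHash } from "node:crypto";

// In-memory stand-in for an R2 bucket; real code would call
// env.MEDIA.put(key, bytes) and env.MEDIA.get(key) instead.
const bucket = new Map&lt;string, Buffer&gt;();

// Store media under the SHA-256 of its content and return the key,
// which doubles as a stable, content-addressed identifier.
function uploadMedia(bytes: Buffer): string {
  const key = createHash("sha256").update(bytes).digest("hex");
  bucket.set(key, bytes);
  return key;
}</code></pre>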
    <div>
      <h3>Durable Objects for atomicity</h3>
      <a href="#durable-objects-for-atomicity">
        
      </a>
    </div>
    <p>Some operations can’t tolerate eventual consistency. When a client claims a one-time encryption key, that key must be atomically removed. If two clients claim the same key, encrypted session establishment fails.</p><p>Durable Objects provide single-threaded, strongly consistent storage:</p>
            <pre><code>#[durable_object]
pub struct UserKeysObject {
	state: State,
	env: Env,
}

impl UserKeysObject {
	async fn claim_otk(&amp;self, algorithm: &amp;str) -&gt; Result&lt;Option&lt;Key&gt;&gt; {
    	// Atomic within single DO - no race conditions possible
    	let mut keys: Vec&lt;Key&gt; = self.state.storage()
        	.get("one_time_keys")
        	.await
        	.ok()
        	.flatten()
        	.unwrap_or_default();

    	if let Some(idx) = keys.iter().position(|k| k.algorithm == algorithm) {
        	let key = keys.remove(idx);
        	self.state.storage().put("one_time_keys", &amp;keys).await?;
        	return Ok(Some(key));
    	}
    	Ok(None)
	}
}</code></pre>
            <p>We use UserKeysObject for E2EE key management, RoomObject for real-time room events like typing indicators and read receipts, and UserSyncObject for to-device message queues. The rest flows through D1.</p>
    <div>
      <h3>Complete end-to-end encryption, complete OAuth</h3>
      <a href="#complete-end-to-end-encryption-complete-oauth">
        
      </a>
    </div>
    <p>Our implementation supports the full Matrix E2EE stack: device keys, cross-signing keys, one-time keys, fallback keys, key backup, and dehydrated devices.</p><p>Modern Matrix clients use OAuth 2.0/OIDC instead of legacy password flows. We implemented a complete OAuth provider, with dynamic client registration, PKCE authorization, RS256-signed JWT tokens, token refresh with rotation, and standard OIDC discovery endpoints.
</p>
            <pre><code>curl https://matrix.example.com/.well-known/openid-configuration
{
  "issuer": "https://matrix.example.com",
  "authorization_endpoint": "https://matrix.example.com/oauth/authorize",
  "token_endpoint": "https://matrix.example.com/oauth/token",
  "jwks_uri": "https://matrix.example.com/.well-known/jwks.json"
}
</code></pre>
            <p>Point Element or any Matrix client at the domain, and it discovers everything automatically.</p>
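<p>Of those pieces, PKCE is the easiest to show end to end: the client hashes a random verifier into a challenge at authorization time, then proves possession of the verifier at token exchange. A sketch of the S256 method (RFC 7636):</p>
            <pre><code>import { createHash, randomBytes } from "node:crypto";

// PKCE, S256 method: challenge = BASE64URL(SHA256(verifier)).
function generateVerifier(): string {
  return randomBytes(32).toString("base64url");
}

function computeChallenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}

// The server stores the challenge alongside the authorization code, and
// on token exchange recomputes it from the presented verifier.
function verifyPkce(storedChallenge: string, presentedVerifier: string): boolean {
  return computeChallenge(presentedVerifier) === storedChallenge;
}</code></pre>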
    <div>
      <h2>Sliding Sync for mobile</h2>
      <a href="#sliding-sync-for-mobile">
        
      </a>
    </div>
    <p>Traditional Matrix sync transfers megabytes of data on initial connection, draining mobile battery and data plans.</p><p>Sliding Sync lets clients request exactly what they need. Instead of downloading everything, clients get the 20 most recent rooms with minimal state. As users scroll, they request more ranges. The server tracks position and sends only deltas.</p><p>Combined with edge execution, mobile clients can connect and render their room list in under 500ms, even on slow networks.</p>
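<p>The windowing itself is simple. A sketch of the core idea (heavily simplified; the real protocol also tracks per-connection state and sends deltas): the client names a range over the recency-ordered room list, and the server returns only that slice.</p>
            <pre><code>interface RoomSummary {
  roomId: string;
  lastActivityTs: number;
}

// Serve a sliding sync range like { ranges: [[0, 19]] }: order rooms by
// most recent activity and return only the requested window.
function slidingWindow(rooms: RoomSummary[], start: number, end: number): RoomSummary[] {
  const ordered = rooms.slice().sort(function (a, b) {
    return b.lastActivityTs - a.lastActivityTs;
  });
  return ordered.slice(start, end + 1);
}</code></pre>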
    <div>
      <h2>The comparison</h2>
      <a href="#the-comparison">
        
      </a>
    </div>
    <p>For a homeserver serving a small team:</p><table><tr><th><p> </p></th><th><p><b>Traditional (VPS)</b></p></th><th><p><b>Workers</b></p></th></tr><tr><td><p>Monthly cost (idle)</p></td><td><p>$20-50</p></td><td><p>&lt;$1</p></td></tr><tr><td><p>Monthly cost (active)</p></td><td><p>$20-50</p></td><td><p>$3-10</p></td></tr><tr><td><p>Global latency</p></td><td><p>100-300ms</p></td><td><p>20-50ms</p></td></tr><tr><td><p>Time to deploy</p></td><td><p>Hours</p></td><td><p>Seconds</p></td></tr><tr><td><p>Maintenance</p></td><td><p>Weekly</p></td><td><p>None</p></td></tr><tr><td><p>DDoS protection</p></td><td><p>Additional cost</p></td><td><p>Included</p></td></tr><tr><td><p>Post-quantum TLS</p></td><td><p>Complex setup</p></td><td><p>Automatic</p></td></tr></table><p><sup>*</sup><sup><i>Based on public rates and metrics published by DigitalOcean, AWS Lightsail, and Linode as of January 15, 2026.</i></sup></p><p>The economics improve further at scale. Traditional deployments require capacity planning and over-provisioning. Workers scale automatically.</p>
    <div>
      <h2>The future of decentralized protocols</h2>
      <a href="#the-future-of-decentralized-protocols">
        
      </a>
    </div>
    <p>We started this as an experiment: could Matrix run on Workers? It can—and the approach can work for other stateful protocols, too.</p><p>By mapping traditional stateful components to Cloudflare’s primitives — Postgres to D1, Redis to KV, mutexes to Durable Objects — we can see that complex applications don’t need complex infrastructure. We stripped away the operating system, the database management, and the network configuration, leaving only the application logic and the data itself.</p><p>Workers offers the sovereignty of owning your data, without the burden of owning the infrastructure.</p><p>I have been experimenting with the implementation and am excited for any contributions from others interested in this kind of service. </p><p>Ready to build powerful, real-time applications on Workers? Get started with<a href="https://developers.cloudflare.com/workers/"> <u>Cloudflare Workers</u></a> and explore<a href="https://developers.cloudflare.com/durable-objects/"> <u>Durable Objects</u></a> for your own stateful edge applications. Join our<a href="https://discord.cloudflare.com"> <u>Discord community</u></a> to connect with other developers building at the edge.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Encryption]]></category>
            <guid isPermaLink="false">6VOVAMNwIZ18hMaUlC6aqp</guid>
            <dc:creator>Nick Kuntz</dc:creator>
        </item>
        <item>
            <title><![CDATA[Connecting to production: the architecture of remote bindings]]></title>
            <link>https://blog.cloudflare.com/connecting-to-production-the-architecture-of-remote-bindings/</link>
            <pubDate>Wed, 12 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Remote bindings allow you to connect your local Worker code to deployed Cloudflare resources like R2 and D1. Come along on the technical journey of how we built this feature to create a seamless local development experience. ]]></description>
            <content:encoded><![CDATA[ <p>Remote bindings are bindings that connect to a deployed resource on your Cloudflare account <i>instead </i>of a locally simulated resource – and recently, we announced that <a href="https://blog.cloudflare.com/cloudflare-developer-platform-keeps-getting-better-faster-and-more-powerful/#connect-to-production-services-and-resources-from-local-development-with-remote-bindings-now-ga"><u>remote bindings are now generally available</u></a>. </p><p>With this launch, you can now connect to deployed resources like <a href="https://developers.cloudflare.com/r2/"><u>R2 buckets</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1 databases</u></a> while running Worker code on your local machine. This means you can test your local code changes against real data and services, without the overhead of deploying for each iteration. </p><p>In this blog post, we’ll dig into the technical details of how we built it, creating a seamless local development experience.</p>
    <div>
      <h3>Developing on the Workers platform</h3>
      <a href="#developing-on-the-workers-platform">
        
      </a>
    </div>
    <p>A key part of the <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers platform</u></a> has been the ability to develop your code locally without having to deploy it every time you wanted to test something – though the way we’ve supported this has changed greatly over the years. </p><p>We started with <code>wrangler</code> dev running in remote mode. This works by deploying and connecting to a preview version of your Worker that runs on Cloudflare’s network every time you make a change to your code, allowing you to test things out as you develop. However, remote mode isn’t perfect — it’s complex and hard to maintain. And the developer experience leaves a lot to be desired: slow iteration speed, unstable debugging connections, and lack of support for multi-worker scenarios. </p><p>Those issues and others motivated a significant investment in a fully local development environment for Workers, which was released in mid-2023 and became the <a href="https://blog.cloudflare.com/wrangler3/"><u>default experience for wrangler dev</u></a>. Since then, we've put a huge amount of work into the local dev experience with <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>, the <a href="https://developers.cloudflare.com/workers/vite-plugin/"><u>Cloudflare Vite plugin</u></a> (alongside <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/"><u>@cloudflare/vitest-pool-workers</u></a>) &amp; <a href="https://developers.cloudflare.com/workers/testing/miniflare/"><u>Miniflare</u></a>.</p><p>Still, the original remote mode remained accessible via a flag: <code>wrangler dev --remote</code>. When using remote mode, all the DX benefits of a fully local experience and the improvements we’ve made over the last few years are bypassed. So why do people still use it? It enables one key unique feature: binding to remote resources while locally developing. 
When you use local mode to develop a Worker locally, all of your <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>bindings</u></a> are simulated locally using local (initially empty) data. This is fantastic for iterating on your app’s logic with test data – but sometimes that’s not enough, whether you want to share resources across your team, reproduce bugs tied to real data, or just be confident that your app will work in production with real resources.</p><p>Given this, we saw an opportunity: If we could bring the best parts of remote mode (i.e. access to remote resources) to <code>wrangler dev</code>, there’d be one single flow for developing Workers that would enable many use cases, while not locking people out of the advancements we’ve made to local development. And that’s what we did! </p><p>As of Wrangler v4.37.0 you can pick on a per-binding basis whether a binding should use remote or local resources, simply by specifying the <code>remote</code> option. It’s important to re-emphasise this—you only need to add <code>remote: true</code>! There’s no complex management of API keys and credentials involved, it all just works using Wrangler’s existing OAuth connection to the Cloudflare API.</p>
            <pre><code>{
  "name": "my-worker",
  "compatibility_date": "2025-01-01",
  "kv_namespaces": [{
    "binding": "KV",
    "id": "my-kv-id"
  },{
    "binding": "KV_2",
    "id": "other-kv-id",
    "remote": true
  }],
  "r2_buckets": [{
    "bucket_name": "my-r2-name",
    "binding": "R2"
  }]
}</code></pre>
            <p>The eagle-eyed among you might have realised that some bindings already worked like this, accessing remote resources from local dev. Most prominently, the <a href="https://developers.cloudflare.com/workers-ai/configuration/bindings/"><u>AI binding</u></a> was a trailblazer for what a general remote bindings solution could look like. From its introduction, the AI binding always connected to a remote resource, since a true local experience that supports all the different models you can use with Workers AI would be impractical and require a huge upfront download of AI models. </p><p>As we realised different products within Workers needed something similar to remote bindings (Images and Hyperdrive, for instance), we ended up with a bit of a patchwork of different solutions. We’ve now unified under a single remote bindings solution that works for all binding types.</p>
    <div>
      <h3>How we built it</h3>
      <a href="#how-we-built-it">
        
      </a>
    </div>
    <p>We wanted to make it really easy for developers to access remote resources without having to change their production Workers code, and so we landed on a solution that required us to fetch data from the remote resource at the point of use in your Worker.</p>
            <pre><code>const value = await env.KV.get("some-key")</code></pre>
            <p><sup><i>The above code snippet shows accessing the “some-key” value in the env.KV </i></sup><a href="https://developers.cloudflare.com/kv/api/read-key-value-pairs/"><sup><i><u>KV namespace</u></i></sup></a><sup><i>, which is not available locally and needs to be fetched over the network.</i></sup></p><p>So if that was our requirement, how would we get there? For instance, how would we get from a user calling <code>env.KV.put("key", "value")</code> in their Worker to actually storing that in a remote KV store? The obvious solution was perhaps to use the <a href="https://developers.cloudflare.com/api/resources/kv/subresources/namespaces/subresources/values/methods/update/"><u>Cloudflare API</u></a>. We could have just replaced the entire <code>env</code> locally with stub objects that made API calls, transforming <code>env.KV.put()</code> into <code>PUT https://api.cloudflare.com/client/v4/accounts/{account_id}/storage/kv/namespaces/{namespace_id}/values/{key_name}</code>. </p><p>This would’ve worked great for KV, R2, D1, and other bindings with mature HTTP APIs, but it would have been a pretty complex solution to implement and maintain. We would have had to replicate the entire bindings API surface and transform every possible operation on a binding to an equivalent API call. Additionally, some binding operations don’t have an equivalent API call, and wouldn’t be supportable using this strategy.</p><p>Instead, we realised that we already had a ready-made API waiting for us — the one we use in production! </p>
    <div>
      <h3>How bindings work under the hood in production</h3>
      <a href="#how-bindings-work-under-the-hood-in-production">
        
      </a>
    </div>
    <p>Most bindings on the Workers platform boil down to essentially a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>service binding</u></a>. A service binding is a link between two Workers that allows them to communicate over HTTP or <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JSRPC</u></a> (we’ll come back to JSRPC later). </p><p>For example, the KV binding is implemented as a service binding between your authored Worker and a platform Worker, speaking HTTP. The JS API for the KV binding is implemented in the Workers runtime, and translates calls like <code>env.KV.get()</code> to HTTP calls to the Worker that implements the KV service. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PwugWAyVPMq086BKt9cjh/cfea7dbb50b9b2983bd21fcb7b80334d/image2.png" />
          </figure><p><i><sup>Diagram showing a simplified model of how a KV binding works in production</sup></i></p><p>You may notice that there’s a natural async network boundary here — between the runtime translating the <code>env.KV.get()</code> call and the Worker that implements the KV service. We realised that we could use that natural network boundary to implement remote bindings. Instead of the <i>production</i> runtime translating <code>env.KV.get()</code> to an HTTP call, we could have the <i>local</i> runtime (<a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>) translate <code>env.KV.get()</code> to an HTTP call, and then send it directly to the KV service, bypassing the production runtime. And so that’s what we did!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7JNPZ2MM5IZ4TSXfN0qywV/804b7997d115405bc3f4d335cb21e626/image3.png" />
          </figure><p><sup><i>Diagram showing a locally run worker with a single KV binding, with a single remote proxy client that communicates to the remote proxy server, which in turn communicates with the remote KV</i></sup></p><p>The above diagram shows a local Worker running with a remote KV binding. Instead of being handled by the local KV simulation, it’s now being handled by a remote proxy client. This Worker then communicates with a remote proxy server connected to the real remote KV resource, ultimately allowing the local Worker to communicate with the remote KV data seamlessly.</p><p>Each binding can independently either be handled by a remote proxy client (all connected to the same remote proxy server) or by a local simulation, allowing for very dynamic workflows where some bindings are locally simulated while others connect to the real remote resource, as illustrated in the example below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yUYvFjpxGv7GcC12l7zr1/a7d46f33869448e1f3f13540d741ff40/image1.png" />
          </figure><p><sup><i>The above diagram and config shows a Worker (running on your computer) bound to 3 different resources—two local (KV &amp; R2), and one remote (KV_2)</i></sup></p>
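<p>To make the translation step concrete, here is an illustrative sketch (the URL shape and service interface are hypothetical, not Cloudflare’s internal protocol) of how a call like <code>env.KV.get()</code> can become an HTTP request to the service that implements KV:</p>
            <pre><code>// Hypothetical service interface: anything that can answer HTTP requests,
// whether a production service binding or a remote proxy client.
interface HttpService {
  fetch(url: string): Promise&lt;Response&gt;;
}

// Translate a KV read into an HTTP call. In production the runtime does
// this; with remote bindings, workerd does it locally and the request is
// tunnelled to the remote proxy server instead.
async function kvGet(service: HttpService, namespaceId: string, key: string): Promise&lt;string | null&gt; {
  const url = "https://kv-service.internal/" + namespaceId + "/" + encodeURIComponent(key);
  const response = await service.fetch(url);
  if (response.status === 404) {
    return null; // key not found
  }
  return response.text();
}</code></pre>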
    <div>
      <h3>How JSRPC fits in</h3>
      <a href="#how-jsrpc-fits-in">
        
      </a>
    </div>
    <p>The above section deals with bindings that are backed by HTTP connections (like KV and R2), but modern bindings use <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JSRPC</u></a>. That means we needed a way for the locally running <code>workerd</code> to speak JSRPC to a production runtime instance. </p><p>In a stroke of good luck, a parallel project was going on to make this possible, as detailed in the <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><u>Cap’n Web blog</u></a>. We integrated that by making the connection between the local <code>workerd</code> instance and the remote runtime instance communicate over <a href="https://github.com/cloudflare/capnweb"><u>websockets using Cap’n Web</u></a>, enabling bindings backed by JSRPC to work. This includes newer bindings like <a href="https://developers.cloudflare.com/images/transform-images/transform-via-workers/"><u>Images</u></a>, as well as JSRPC service bindings to your own Workers.</p>
    <div>
      <h3>Remote bindings with Vite, Vitest and the JavaScript ecosystem</h3>
      <a href="#remote-bindings-with-vite-vitest-and-the-javascript-ecosystem">
        
      </a>
    </div>
    <p>We didn't want to limit this exciting new feature to only <code>wrangler dev</code>. We wanted to support it in our Cloudflare Vite Plugin and vitest-pool-workers packages, and to let other tools and use cases from the JavaScript ecosystem benefit from it as well.</p><p>To achieve this, the wrangler package now exports utilities such as <code>startRemoteProxySession</code> that allow tools not built on <code>wrangler dev</code> to support remote bindings too. You can find more details in the <a href="https://developers.cloudflare.com/workers/development-testing/#remote-bindings"><u>official remote bindings documentation</u></a>.</p>
    <div>
      <h3>How do I try this out?</h3>
      <a href="#how-do-i-try-this-out">
        
      </a>
    </div>
    <p>Just use <code>wrangler dev</code>! As of Wrangler v4.37.0 (<code>@cloudflare/vite-plugin</code> v1.13.0, <code>@cloudflare/vitest-pool-workers</code> v0.9.0), remote bindings are available in all projects, and can be enabled on a per-binding basis by adding <code>remote: true</code> to the binding definition in your <a href="https://developers.cloudflare.com/workers/wrangler/configuration/"><u>Wrangler config file</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[D1]]></category>
            <guid isPermaLink="false">60miu6u2Fk0E5CPdKOCYWX</guid>
            <dc:creator>Samuel Macleod</dc:creator>
            <dc:creator>Dario Piotrowicz</dc:creator>
        </item>
        <item>
            <title><![CDATA[Sequential consistency without borders: how D1 implements global read replication]]></title>
            <link>https://blog.cloudflare.com/d1-read-replication-beta/</link>
            <pubDate>Thu, 10 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ D1, Cloudflare’s managed SQL database, announces read replication beta. Here's a deep dive of the read replication implementation and how your queries can remain consistent across all regions. ]]></description>
            <content:encoded><![CDATA[ <p>Read replication of <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1 databases</a> is in public beta!</p><p>D1 read replication makes read-only copies of your database available in multiple regions across Cloudflare’s network.  For busy, read-heavy applications like e-commerce websites, content management tools, and mobile apps:</p><ul><li><p>D1 read replication lowers average latency by routing user requests to read replicas in nearby regions.</p></li><li><p>D1 read replication increases overall throughput by offloading read queries to read replicas, allowing the primary database to handle more write queries.</p></li></ul><p>The main copy of your database is called the primary database and the read-only copies are called read replicas.  When you enable replication for a D1 database, the D1 service automatically creates and maintains read replicas of your primary database.  As your users make requests, D1 routes those requests to an appropriate copy of the database (either the primary or a replica) based on performance heuristics, the type of queries made in those requests, and the query consistency needs as expressed by your application.</p><p>All of this global replica creation and request routing is handled by Cloudflare at no additional cost.</p><p>To take advantage of read replication, your Worker needs to use the new D1 <a href="https://developers.cloudflare.com/d1/best-practices/read-replication/"><u>Sessions API</u></a>. Click the button below to run a Worker using D1 read replication with this <a href="https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template"><u>code example</u></a> to see for yourself!</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>D1 Sessions API</h2>
      <a href="#d1-sessions-api">
        
      </a>
    </div>
    <p>D1’s read replication feature is built around the concept of database <i>sessions</i>.  A session encapsulates all the queries representing one logical session for your application. For example, a session might represent all requests coming from a particular web browser or all requests coming from a mobile app used by one of your users. If you use sessions, your queries will use the appropriate copy of the D1 database that makes the most sense for your request, be that the primary database or a nearby replica.</p><p>The sessions implementation ensures <a href="https://jepsen.io/consistency/models/sequential"><u>sequential consistency</u></a> for all queries in the session, no matter what copy of the database each query is routed to.  The sequential consistency model has important properties like "<a href="https://jepsen.io/consistency/models/read-your-writes"><u>read my own writes</u></a>" and "<a href="https://jepsen.io/consistency/models/writes-follow-reads"><u>writes follow reads</u></a>," as well as a total ordering of writes. The total ordering of writes means that every replica will see transactions committed in the same order, which is exactly the behavior we want in a transactional system.  Said another way, sequential consistency guarantees that the reads and writes are executed in the order in which you write them in your code.</p><p>Some examples of consistency implications in real-world applications:</p><ul><li><p>You are using an online store and just placed an order (write query), followed by a visit to the account page to list all your orders (read query handled by a replica). 
You want the newly placed order to be listed there as well.</p></li><li><p>You are using your bank’s web application and make a transfer to your electricity provider (write query), and then immediately navigate to the account balance page (read query handled by a replica) to check the latest balance of your account, including that last payment.</p></li></ul><p>Why do we need the Sessions API? Why can we not just query replicas directly?</p><p>Applications using D1 read replication need the Sessions API because D1 runs on Cloudflare’s global network and there’s no way to ensure that requests from the same client get routed to the same replica for every request. For example, the client may switch from WiFi to a mobile network in a way that changes how their requests are routed to Cloudflare. Or the data center that handled previous requests could be down because of an outage or maintenance.</p><p>D1’s read replication is asynchronous, so it’s possible that when you switch between replicas, the replica you switch to lags behind the replica you were using. This could mean that, for example, the new replica hasn’t learned of the writes you just completed.  We could no longer guarantee useful properties like “read your own writes”.  In fact, in the presence of shifty routing, the only consistency property we could guarantee is that what you read had been committed at some point in the past (<a href="https://jepsen.io/consistency/models/read-committed"><u>read committed</u></a> consistency), which isn’t very useful at all!</p><p>Since we can’t guarantee routing to the same replica, we flip the script and use the information we get from the Sessions API to make sure whatever replica we land on can handle the request in a sequentially-consistent manner.</p><p>Here’s what the Sessions API looks like in a Worker:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    // A. Create the session.
    // When we create a D1 session, we can continue where we left off from a previous    
    // session if we have that session's last bookmark or use a constraint.
    const bookmark = request.headers.get('x-d1-bookmark') ?? 'first-unconstrained'
    const session = env.DB.withSession(bookmark)

    // Use this session for all our Workers' routes.
    const response = await handleRequest(request, session)

    // B. Return the bookmark so we can continue the session in another request.
    response.headers.set('x-d1-bookmark', session.getBookmark())

    return response
  }
}

async function handleRequest(request: Request, session: D1DatabaseSession) {
  const { pathname } = new URL(request.url)

  if (request.method === "GET" &amp;&amp; pathname === '/api/orders') {
    // C. Session read query.
    const { results } = await session.prepare('SELECT * FROM Orders').all()
    return Response.json(results)

  } else if (request.method === "POST" &amp;&amp; pathname === '/api/orders') {
    const order = await request.json&lt;Order&gt;()

    // D. Session write query.
    // Since this is a write query, D1 will transparently forward it to the primary.
    await session
      .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
      .bind(order.orderId, order.customerId, order.quantity)
      .run()

    // E. Session read-after-write query.
    // In order for the application to be correct, this SELECT statement must see
    // the results of the INSERT statement above.
    const { results } = await session
      .prepare('SELECT * FROM Orders')
      .all()

    return Response.json(results)
  }

  return new Response('Not found', { status: 404 })
}</code></pre>
            <p>To use the Sessions API, you first need to create a session using the <code>withSession</code> method (<b><i>step A</i></b>).  The <code>withSession</code> method takes either a bookmark or a constraint as a parameter.  The provided constraint instructs D1 where to forward the first query of the session. Using <code>first-unconstrained</code> allows the first query to be processed by any replica without any restriction on how up-to-date it is. Using <code>first-primary</code> ensures that the first query of the session will be forwarded to the primary.</p>
            <pre><code>// A. Create the session.
const bookmark = request.headers.get('x-d1-bookmark') ?? 'first-unconstrained'
const session = env.DB.withSession(bookmark)</code></pre>
            <p>Providing an explicit bookmark instructs D1 that whichever database instance processes the query has to be at least as up-to-date as the provided bookmark (in case of a replica; the primary database is always up-to-date by definition).  Explicit bookmarks are how we can continue from previously-created sessions and maintain sequential consistency across user requests.</p><p>Once you’ve created the session, make queries like you normally would with D1.  The session object ensures that the queries you make are sequentially consistent with regards to each other.</p>
            <pre><code>// C. Session read query.
const { results } = await session.prepare('SELECT * FROM Orders').all()</code></pre>
            <p>For example, in the code example above, the session read query for listing the orders (<b><i>step C</i></b>) will return results that are at least as up-to-date as the bookmark used to create the session (<b><i>step A</i></b><i>)</i>.</p><p>More interesting is the write query to add a new order (<b><i>step D</i></b>) followed by the read query to list all orders (<b><i>step E</i></b>). Because both queries are executed on the same session, it is guaranteed that the read query will observe a database copy that includes the write query, thus maintaining sequential consistency.</p>
            <pre><code>// D. Session write query.
await session
  .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
  .bind(order.orderId, order.customerId, order.quantity)
  .run()

// E. Session read-after-write query.
const { results } = await session
  .prepare('SELECT * FROM Orders')
  .all()</code></pre>
            <p>Note that we could make a single batch query to the primary including both the write and the list, but the benefit of using the new Sessions API is that you can use the extra read replica databases for your read queries and allow the primary database to handle more write queries.</p><p>The session object does the necessary bookkeeping to maintain the latest bookmark observed across all queries executed using that specific session, and always includes that latest bookmark in requests to D1. Note that any query executed without using the session object is not guaranteed to be sequentially consistent with the queries executed in the session.</p><p>When possible, we suggest continuing sessions across requests by including bookmarks in your responses to clients (<b><i>step B</i></b>), and having clients pass previously received bookmarks in their future requests.</p>
            <pre><code>// B. Return the bookmark so we can continue the session in another request.
response.headers.set('x-d1-bookmark', session.getBookmark())</code></pre>
            <p>This allows <i>all</i> of a client’s requests to be in the same session. You can do this by grabbing the session’s current bookmark at the end of the request (<code>session.getBookmark()</code>) and sending the bookmark in the response back to the client in HTTP headers, in HTTP cookies, or in the response body itself.</p>
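<p>The whole round trip can be sketched with the Worker and the client collapsed into plain functions (a toy model; the header plumbing is elided and the bookmark values are illustrative):</p>

```typescript
// Toy model of continuing a session across requests: the client
// remembers the bookmark from each response and sends it with the
// next request, so all of its requests share one logical session.
type Resp = { body: string; bookmark: string };

// Stand-in for the Worker: starts a session from the received
// bookmark (or unconstrained) and returns the session's new bookmark.
function workerHandle(bookmark: string | null): Resp {
  const start = bookmark ?? "first-unconstrained";
  // ...queries would run here via env.DB.withSession(start)...
  const newBookmark = start === "first-unconstrained" ? "bm-100" : "bm-104";
  return { body: "ok", bookmark: newBookmark };
}

// Client side: persist the bookmark between requests (in a real
// client this would live in a cookie or local storage).
let saved: string | null = null;
function clientRequest(): Resp {
  const resp = workerHandle(saved);
  saved = resp.bookmark; // remembered for the next request
  return resp;
}
```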
    <div>
      <h3>Consistency with and without Sessions API</h3>
      <a href="#consistency-with-and-without-sessions-api">
        
      </a>
    </div>
    <p>In this section, we will explore the classic scenario of a read-after-write query to showcase how using the new D1 Sessions API ensures that we get sequential consistency and avoid any issues with inconsistent results in our application.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zIBf3V1YIogYJKeWm1kDn/f484faf38cc0f8d7227f9db1fa386354/1.png" />
          </figure><p>The Client, a user Worker, sends a D1 write query that gets processed by the database primary and gets the results back. However, the subsequent read query ends up being processed by a database replica. If the database replica is lagging far enough behind the database primary, such that it does not yet include the first write query, then the returned results will be inconsistent, and likely incorrect for your application’s business logic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1w81ec5tNGWJ7sQyFBZQ6l/d487ccf225a097a0e48054d88df0ba1f/2.png" />
          </figure><p>Using the Sessions API fixes the inconsistency issue. The first write query is again processed by the database primary, and this time the response includes “<b>Bookmark 100</b>”. The session object will store this bookmark for you transparently.</p><p>The subsequent read query is processed by a database replica as before, but since the query now includes the previously received “<b>Bookmark 100</b>”, the database replica will wait until its database copy is at least as up-to-date as “<b>Bookmark 100</b>”. Only once it’s up-to-date will the read query be processed and the results returned, including the replica’s latest bookmark “<b>Bookmark 104</b>”.</p><p>Notice that the returned bookmark for the read query is “<b>Bookmark 104</b>”, which is different from the one passed in the query request. This can happen if other writes from other client requests were also replicated to the database replica between the two queries our client executed.</p>
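<p>The replica’s “wait until caught up” behavior can be sketched as follows (a toy model; bookmarks here are plain integers for illustration, while real D1 bookmarks are opaque tokens):</p>

```typescript
// Toy replica: applies replicated writes in order and refuses to
// serve a read until it has caught up to the session's bookmark.
class ToyReplica {
  private applied = 0; // highest bookmark applied so far

  // Called as WAL entries stream in from the primary.
  applyUpTo(bookmark: number) {
    this.applied = Math.max(this.applied, bookmark);
  }

  // Returns the replica's own bookmark once it is at least as
  // up-to-date as the session demands; otherwise signals that the
  // real system would hold the query until replication catches up.
  read(sessionBookmark: number): number | "must-wait" {
    if (this.applied < sessionBookmark) return "must-wait";
    return this.applied;
  }
}
```

<p>A read carrying bookmark 100 is held while the replica is at 99; once replication has advanced (possibly past 100, to 104 say), the read is served and the replica’s own, possibly newer, bookmark is returned.</p>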
    <div>
      <h2>Enabling read replication</h2>
      <a href="#enabling-read-replication">
        
      </a>
    </div>
    <p>To start using D1 read replication:</p><ol><li><p>Update your Worker to use the D1 Sessions API to tell D1 what queries are part of the same database session. The Sessions API works with databases that do not have read replication enabled as well, so it’s safe to ship this code even before you enable replicas. Here’s <a href="http://developers.cloudflare.com/d1/best-practices/read-replication/"><u>an example</u></a>.</p></li><li><p><a href="https://developers.cloudflare.com/d1/best-practices/read-replication/#enable-read-replication"><u>Enable replicas</u></a> for your database via <a href="https://dash.cloudflare.com/?to=/:account/workers/d1"><u>Cloudflare dashboard</u></a> &gt; Select D1 database &gt; Settings.</p></li></ol><p>D1 read replication is built into D1, and you don’t pay extra storage or compute costs for replicas. You incur the exact same D1 usage with or without replicas, based on <code>rows_read</code> and <code>rows_written</code> by your queries. Unlike traditional database systems with replication, you don’t have to manually create replicas, choose where they run, or decide how to route requests between the primary database and read replicas. Cloudflare handles this when using the Sessions API while ensuring sequential consistency.</p><p>Since D1 read replication is in beta, we recommend trying it on a non-production database first, and migrating your production workloads only after validating that read replication works for your use case.</p><p>If you don’t have a D1 database and want to try out D1 read replication, <a href="https://dash.cloudflare.com/?to=/:account/workers/d1/create"><u>create a test database</u></a> in the Cloudflare dashboard.</p>
    <div>
      <h3>Observing your replicas</h3>
      <a href="#observing-your-replicas">
        
      </a>
    </div>
    <p>Once you’ve enabled D1 read replication, read queries will start to be processed by replica database instances. The response of each query includes information in the nested <code>meta</code> object relevant to read replication, like <code>served_by_region</code> and <code>served_by_primary</code>. The former denotes the region of the database instance that processed the query, and the latter is <code>true</code> if and only if your query was processed by the primary database instance.</p><p>In addition, the <a href="https://dash.cloudflare.com/?to=/:account/workers/d1/"><u>D1 dashboard overview</u></a> for a database now includes information about the database instances handling your queries. You can see how many queries are handled by the primary instance or by a replica, and a breakdown of the queries processed by region. The example screenshots below show graphs displaying the number of queries executed and the number of rows read by each region.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ChIlqQ5xgJfiftOHw9Egg/b583d00d22dcea60e7439dfbfa1761df/image10.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Zze5y22759fOIYPOqrK1Y/6cd3c684006ca8234db20924cae8b960/image1.png" />
          </figure>
    <div>
      <h2>Under the hood: how D1 read replication is implemented</h2>
      <a href="#under-the-hood-how-d1-read-replication-is-implemented">
        
      </a>
    </div>
    <p>D1 is implemented on top of SQLite-backed Durable Objects running on top of Cloudflare’s <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/#under-the-hood-storage-relay-service"><u>Storage Relay Service</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GWWL8goIzrGTmkH54O416/aabd47fcd94bfc73492556b19ac6069f/5.png" />
          </figure><p>D1 is structured with a 3-layer architecture.  First is the binding API layer that runs in the customer’s Worker.  Next is a stateless Worker layer that routes requests based on database ID to a layer of Durable Objects that handle the actual SQL operations behind D1.  This is similar to how <a href="https://developers.cloudflare.com/durable-objects/what-are-durable-objects/#durable-objects-in-cloudflare"><u>most applications using Cloudflare Workers and Durable Objects are structured</u></a>.</p><p>For a non-replicated database, there is exactly one Durable Object per database.  When a user’s Worker makes a request with the D1 binding for the database, that request is first routed to a D1 Worker running in the same location as the user’s Worker.  The D1 Worker figures out which D1 Durable Object backs the user’s D1 database and fetches an RPC stub to that Durable Object.  The Durable Objects routing layer figures out where the Durable Object is located, and opens an RPC connection to it.  Finally, the D1 Durable Object then handles the query on behalf of the user’s Worker using the Durable Objects SQL API.</p><p>In the Durable Objects SQL API, all queries go to a SQLite database on the local disk of the server where the Durable Object is running.  Durable Objects run <a href="https://www.sqlite.org/wal.html"><u>SQLite in WAL mode</u></a>.  In WAL mode, every write query appends to a write-ahead log (the WAL).  As SQLite appends entries to the end of the WAL file, a database-specific component called the Storage Relay Service <i>leader</i> synchronously replicates the entries to 5 <i>durability followers</i> on servers in different datacenters.  
When a quorum (at least 3 out of 5) of the durability followers acknowledge that they have safely stored the data, the leader allows SQLite’s write queries to commit and opens the Durable Object’s output gate, so that the Durable Object can respond to requests.</p><p>Our implementation of WAL mode allows us to have a complete log of all of the committed changes to the database. This enables several important features in SQLite-backed Durable Objects and D1:</p><ul><li><p>We identify each write with a <a href="https://en.wikipedia.org/wiki/Lamport_timestamp"><u>Lamport timestamp</u></a> we call a <a href="https://developers.cloudflare.com/d1/reference/time-travel/#bookmarks"><u>bookmark</u></a>.</p></li><li><p>We construct databases anywhere in the world by downloading all of the WAL entries from cold storage and replaying each WAL entry in order.</p></li><li><p>We implement <a href="https://developers.cloudflare.com/d1/reference/time-travel/"><u>Point-in-time recovery (PITR)</u></a> by replaying WAL entries up to a specific bookmark rather than to the end of the log.</p></li></ul><p>Unfortunately, having the main data structure of the database be a log is not ideal.  WAL entries are in write order, which is often neither convenient nor fast.  In order to cut down on the overheads of the log, SQLite <i>checkpoints</i> the log by copying the WAL entries back into the main database file.  Read queries are serviced directly by SQLite using files on disk — either the main database file for checkpointed data, or the WAL file for writes more recent than the last checkpoint.  
Similarly, the Storage Relay Service snapshots the database to cold storage so that we can replay a database by downloading the most recent snapshot and replaying the WAL from there, rather than having to download an enormous number of individual WAL entries.</p><p>WAL mode is the foundation for implementing read replication, since we can stream writes to locations other than cold storage in real time.</p>
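<p>The commit rule above can be modeled in a few lines (a toy sketch, not the actual Storage Relay Service code): a WAL entry commits only once a majority of the five durability followers acknowledge storing it.</p>

```typescript
// Toy model of the leader's quorum commit rule: an appended WAL
// entry commits once 3 of the 5 durability followers have
// acknowledged that they durably stored it.
const QUORUM = 3; // majority of 5 followers

class ToyWalLeader {
  private acks = new Map<number, Set<number>>(); // bookmark -> acking follower ids
  committed: number[] = []; // bookmarks committed so far, in order

  append(bookmark: number) {
    this.acks.set(bookmark, new Set());
    // In the real system the entry is now streamed to the five
    // durability followers (and to any read replicas).
  }

  // Follower `id` confirms it durably stored entry `bookmark`.
  ack(bookmark: number, id: number) {
    const got = this.acks.get(bookmark);
    if (!got) return; // already committed or unknown entry
    got.add(id);
    if (got.size >= QUORUM) {
      this.committed.push(bookmark); // safe to open the output gate
      this.acks.delete(bookmark);
    }
  }
}
```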
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ezp8gcf3gXqkvumzufGfP/1a54fc6f434290968c7e695c2e5bb0c9/6.png" />
          </figure><p>We implemented read replication in 5 major steps.</p><p>First, we made it possible to make replica Durable Objects with a read-only copy of the database.  These replica objects boot by fetching the latest snapshot and replaying the log from cold storage to whatever bookmark the primary database’s leader last committed. This basically gave us point-in-time replicas, since without continuous updates, the replicas only updated when the Durable Object restarted.</p><p>Second, we registered the replica leader with the primary’s leader so that the primary leader sends the replicas every entry written to the WAL at the same time that it sends the WAL entries to the durability followers.  Each of the WAL entries is marked with a bookmark that uniquely identifies the WAL entry in the sequence of WAL entries.  We’ll use the bookmark later.</p><p>Note that since these writes are sent to the replicas <i>before</i> a quorum of durability followers have confirmed them, the writes are actually unconfirmed writes, and the replica leader must be careful to keep the writes hidden from the replica Durable Object until they are confirmed.  The replica leader in the Storage Relay Service does this by implementing enough of SQLite’s <a href="https://www.sqlite.org/walformat.html#the_wal_index_file_format"><u>WAL-index protocol</u></a>, so that the unconfirmed writes coming from the primary leader look to SQLite as though it’s just another SQLite client doing unconfirmed writes.  SQLite knows to ignore the writes until they are confirmed in the log.  The upshot of this is that the replica leader can write WAL entries to the SQLite WAL <i>immediately,</i> and then “commit” them when the primary leader tells the replica that the entries have been confirmed by durability followers.</p><p>One neat thing about this approach is that writes are sent from the primary to the replica as quickly as they are generated by the primary, helping to minimize replica lag.  
In theory, if the write query was proxied through a replica to the primary, the response back to the replica will arrive at almost the same time as the message that updates the replica.  In such a case, it looks like there’s no replica lag at all!</p><p>In practice, we find that replication is really fast.  Internally, we measure <i>confirm lag</i>, defined as the time from when a primary confirms a change to when the replica confirms a change.  The table below shows the confirm lag for two D1 databases whose primaries are in different regions.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><br /><span><span>Replica Region</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Database A</span></span></p>
                        <p><span><span>(Primary region: ENAM)</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Database B</span></span><br /><span><span>(Primary region: WNAM)</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>ENAM</span></span></p>
                    </td>
                    <td>
                        <p><span><span>N/A</span></span></p>
                    </td>
                    <td>
                        <p><span><span>30 ms</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>WNAM</span></span></p>
                    </td>
                    <td>
                        <p><span><span>45 ms</span></span></p>
                    </td>
                    <td>
                        <p><span><span>N/A</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>WEUR</span></span></p>
                    </td>
                    <td>
                        <p><span><span>55 ms</span></span></p>
                    </td>
                    <td>
                        <p><span><span>75 ms</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>EEUR</span></span></p>
                    </td>
                    <td>
                        <p><span><span>67 ms</span></span></p>
                    </td>
                    <td>
                        <p><span><span>75 ms</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p><sup><i>Confirm lag for 2 replicated databases.  N/A means that we have no data for this combination.  The region abbreviations are the same ones used for </i></sup><a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#supported-locations-1"><sup><i><u>Durable Object location hints</u></i></sup></a><sup><i>.</i></sup></p><p>The table shows that confirm lag is correlated with the network round-trip time between the data centers hosting the primary databases and their replicas.  This is clearly visible in the difference between the confirm lag for the European replicas of the two databases.  As airline route planners know, EEUR is <a href="http://www.gcmap.com/mapui?P=ewr-lhr,+ewr-waw"><u>appreciably further away</u></a> from ENAM than WEUR is, but from WNAM, both European regions (WEUR and EEUR) are <a href="http://www.gcmap.com/mapui?P=sjc-lhr,+sjc-waw"><u>about equally as far away</u></a>.  We see that in our replication numbers.</p><p>The exact placement of the D1 database in the region matters too.  Regions like ENAM and WNAM are quite large in themselves.  Database A’s placement in ENAM happens to be further away from most data centers in WNAM compared to database B’s placement in WNAM relative to the ENAM data centers.  As such, database B sees slightly lower confirm lag.</p><p>Try as we might, we can’t beat the speed of light!</p><p>Third, we updated the Durable Object routing system to be aware of Durable Object replicas.  When read replication is enabled on a Durable Object, two things happen.  First, we create a set of replicas according to a replication policy.  The current replication policy that D1 uses is simple: a static set of replicas in <a href="https://developers.cloudflare.com/d1/configuration/data-location/#available-location-hints"><u>every region that D1 supports</u></a>.  Second, we turn on a routing policy for the Durable Object.  
The current policy that D1 uses is also simple: route to the Durable Object replica in the region closest to where the user request is.  With this step, we have updateable read-only replicas, and can route requests to them!</p><p>Fourth, we updated D1’s Durable Object code to handle write queries on replicas. D1 uses SQLite to figure out whether a request is a write query or a read query.  This means that the determination of whether something is a read or write query happens <i>after</i> the request is routed.  Read replicas will have to handle write requests!  We solve this by instantiating each replica D1 Durable Object with a reference to its primary.  If the D1 Durable Object determines that the query is a write query, it forwards the request to the primary for the primary to handle. This happens transparently, keeping the user code simple.</p><p>As of this fourth step, we can handle read and write queries at every copy of the D1 Durable Object, whether it's a primary or not.  Unfortunately, as outlined above, if a user's requests get routed to different read replicas, they may see different views of the database, leading to a very weak consistency model.  So the last step is to implement the Sessions API across the D1 Worker and D1 Durable Object.  Recall that every WAL entry is marked with a bookmark.  These bookmarks uniquely identify a point in (logical) time in the database.  Our bookmarks are strictly monotonically increasing; every write to a database makes a new bookmark with a value greater than every earlier bookmark for that database.</p><p>Using bookmarks, we implement the Sessions API with the following algorithm split across the D1 binding implementation, the D1 Worker, and D1 Durable Object.</p><p>First up in the D1 binding, we have code that creates the <code>D1DatabaseSession</code> object and code within the <code>D1DatabaseSession</code> object to keep track of the latest bookmark.</p>
            <pre><code>// D1Binding is the binding code running within the user's Worker
// that provides the existing D1 Workers API and the new withSession method.
class D1Binding {
  // Injected by the runtime into the D1 binding.
  d1Service: D1ServiceBinding

  withSession(initialBookmark) {
    return new D1DatabaseSession(this.d1Service, this.databaseId, initialBookmark);
  }
}

// D1DatabaseSession holds metadata about the session, most importantly the
// latest bookmark we know about for this session.
class D1DatabaseSession {
  constructor(d1Service, databaseId, initialBookmark) {
    this.d1Service = d1Service;
    this.databaseId = databaseId;
    this.bookmark = initialBookmark;
  }

  async exec(query) {
    // The exec method in the binding sends the query to the D1 Worker
    // and waits for the response, updating the bookmark as
    // necessary so that future calls to exec use the updated bookmark.
    var resp = await this.d1Service.handleUserQuery(this.databaseId, query, this.bookmark);
    if (isNewerBookmark(this.bookmark, resp.bookmark)) {
      this.bookmark = resp.bookmark;
    }
    return resp;
  }

  // batch and other SQL APIs are implemented similarly.
}</code></pre>
            <p>The binding code calls into the D1 stateless Worker (<code>d1Service</code> in the snippet above), which figures out which Durable Object to use, and proxies the request to the Durable Object.</p>
            <pre><code>class D1Worker {
  async handleUserQuery(databaseId, query, bookmark) {
    var doId = /* look up Durable Object for databaseId */;
    return await this.D1_DO.get(doId).handleWorkerQuery(query, bookmark);
  }
}</code></pre>
            <p>Finally, we reach the Durable Objects layer, which figures out how to actually handle the request.</p>
            <pre><code>class D1DurableObject {
  async handleWorkerQuery(query, bookmark) {
    bookmark = bookmark ?? "first-primary";
    var results = {};

    if (this.isPrimaryDatabase()) {
      // The primary always has the latest data, so we can run the
      // query without checking the bookmark.
      var result = /* execute query directly */;
      bookmark = getCurrentBookmark();
      results = result;
    } else {
      // This is running on a replica.
      if (bookmark === "first-primary" || isWriteQuery(query)) {
        // The primary must handle this request, so we'll proxy the
        // request to the primary.
        var resp = await this.primary.handleWorkerQuery(query, bookmark);
        bookmark = resp.bookmark;
        results = resp.results;
      } else {
        // The replica can handle this request, but only after the
        // database is up-to-date with the bookmark.
        if (bookmark !== "first-unconstrained") {
          await waitForBookmark(bookmark);
        }
        var result = /* execute query locally */;
        bookmark = getCurrentBookmark();
        results = result;
      }
    }
    return { results: results, bookmark: bookmark };
  }
}</code></pre>
            <p>The D1 Durable Object first figures out if this instance can handle the query, or if the query needs to be sent to the primary.  If the Durable Object can execute the query, it ensures that we execute the query with a bookmark at least as up-to-date as the bookmark requested by the binding.</p><p>The upshot is that the three pieces of code work together to ensure that all of the queries in the session see the database in a sequentially consistent order, because each new query will be blocked until it has seen the results of previous queries within the same session.</p>
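To make the interplay concrete, here is a minimal, self-contained simulation of the bookmark handshake. It is illustrative only: bookmarks are modeled as integers (D1's real bookmarks are opaque, ordered strings), and `PrimarySim`, `ReplicaSim`, and `SessionSim` are stand-ins for the real Durable Object, replica, and binding code.

```javascript
// PrimarySim: the primary database, one WAL entry (and bookmark) per write.
class PrimarySim {
  constructor() { this.wal = []; }
  handleQuery(query, bookmark) {
    if (query.write) this.wal.push(query.value);
    // The primary is always current: return its latest bookmark.
    return { results: this.wal.slice(), bookmark: this.wal.length };
  }
}

// ReplicaSim: forwards writes to the primary; for reads, "waitForBookmark"
// is modeled as replaying the primary's WAL up to the requested point.
class ReplicaSim {
  constructor(primary) { this.primary = primary; this.applied = 0; }
  handleQuery(query, bookmark) {
    if (query.write) return this.primary.handleQuery(query, bookmark);
    this.applied = Math.max(this.applied, bookmark);
    return {
      results: this.primary.wal.slice(0, this.applied),
      bookmark: this.applied,
    };
  }
}

// SessionSim: pins each query to the newest bookmark seen so far, so a read
// routed to a lagging replica can never go "back in time" within the session.
class SessionSim {
  constructor(node) { this.node = node; this.bookmark = 0; }
  exec(query) {
    const resp = this.node.handleQuery(query, this.bookmark);
    if (resp.bookmark > this.bookmark) this.bookmark = resp.bookmark;
    return resp;
  }
}
```

Running a write followed by a read through the same session shows read-your-writes behavior: the write bumps the session bookmark, and the subsequent read on the replica must catch up to that bookmark before answering.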
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>D1’s new read replication feature is a significant step towards making globally distributed databases easier to use without sacrificing consistency. With automatically provisioned replicas in every region, your applications can now serve read queries faster while maintaining strong sequential consistency across requests, and keeping your application Worker code simple.</p><p>We’re excited for developers to explore this feature and see how it improves the performance of your applications. The public beta is just the beginning—we’re actively refining and expanding D1’s capabilities, including evolving replica placement policies, and your feedback will help shape what’s next.</p><p>Note that the Sessions API is only available through the <a href="https://developers.cloudflare.com/d1/worker-api/"><u>D1 Worker Binding</u></a> for now, and support for the HTTP REST API will follow soon.</p><p>Try out D1 read replication today by clicking the “Deploy to Cloudflare” button, check out <a href="http://developers.cloudflare.com/d1/best-practices/read-replication/"><u>documentation and examples</u></a>, and let us know what you build in the <a href="https://discord.com/channels/595317990191398933/992060581832032316"><u>D1 Discord channel</u></a>!</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Edge Database]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SQL]]></category>
            <guid isPermaLink="false">2qUAO70BqnRBomg83fCRPe</guid>
            <dc:creator>Justin Mazzola Paluska</dc:creator>
            <dc:creator>Lambros Petrou</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare acquires Outerbase to expand database and agent developer experience capabilities]]></title>
            <link>https://blog.cloudflare.com/cloudflare-acquires-outerbase-database-dx/</link>
            <pubDate>Mon, 07 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare has acquired Outerbase, expanding our database and agent developer experience capabilities. ]]></description>
            <content:encoded><![CDATA[ <p>I’m thrilled to share that Cloudflare has acquired <a href="https://www.outerbase.com/"><u>Outerbase</u></a>. This is such an amazing opportunity for us, and I want to explain how we got here, what we’ve built so far, and why we are so excited about becoming part of the Cloudflare team.</p><p>Databases are key to building almost any production application: you need to persist state for your users (or agents), be able to query it from a number of different clients, and you want it to be fast. But databases aren’t always easy to use: designing a good schema, writing performant queries, creating indexes, and optimizing your access patterns tends to require a lot of experience. Add that to exposing your data through easy-to-grok APIs that make the ‘right’ way to do things obvious, a great developer experience (from dashboard to CLI), and well… there’s a lot of work involved.</p><p>The Outerbase team is already getting to work on some big changes to how databases (and your data) are viewed, edited, and visualized from within <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>, and we’re excited to give you a few sneak peeks into what we’ll be landing as we get to work.</p>
    <div>
      <h3>Database DX</h3>
      <a href="#database-dx">
        
      </a>
    </div>
    <p>When we first started Outerbase, we saw how complicated databases could be. Even experienced developers struggled with writing queries, indexing data, and locking down their data. Meanwhile, non-developers often felt locked out, unable to access the data they needed. We believed there had to be a better way. From day one, our goal was to make data accessible to everyone, no matter their skill level. While we started out by simply building a better database interface, Outerbase quickly evolved into something much more special.</p><p>Outerbase became a platform that helps you manage data in a way that feels natural. You can browse tables, edit rows, and run queries without having to memorize SQL syntax. Even if you do know SQL, you can use Outerbase to dive in deeper and share your knowledge with your team. We also added visualization features so entire teams, both technical and not, could see what’s happening with their data at a glance. Then, with the growth of AI, we realized we could use it to handle many of the more complicated tasks.</p><p>One of our more exciting offerings is Starbase, a <a href="https://www.cloudflare.com/developer-platform/products/d1/">SQLite-compatible database</a> built on top of Cloudflare’s <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>. Our goal was never to simply wrap a legacy system in a shiny interface; we wanted it to be easy to start from nothing on day one, and Cloudflare’s Durable Objects gave us a way to easily manage and spin up databases for anyone who needed one. On top of them, we provided automatic REST APIs, row-level security, WebSocket support for streaming queries, and much more.</p>
    <div>
      <h3>1 + 1 = 3</h3>
      <a href="#1-1-3">
        
      </a>
    </div>
    <p>Our collaboration with Cloudflare first started last year, when we introduced a way for developers to import and manage their <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a> databases inside Outerbase. We were impressed with how powerful Cloudflare’s tools are for deploying and scaling applications. As we worked together, we quickly saw how well our missions aligned. Cloudflare was building the infrastructure we wished we’d had when we first started, and we were building the data experience that many Cloudflare developers were asking for. This eventually led to the seemingly obvious decision of Outerbase joining Cloudflare — it just made so much sense.</p><p>Going forward, we’ll integrate Outerbase’s core features into Cloudflare’s platform. If you’re a developer using D1 or Durable Objects, you’ll start seeing features from Outerbase show up in the Cloudflare dashboard. Expect to see our data explorer for browsing and editing tables, new REST APIs, query editor with type-ahead functionality, real-time data capture, and more of the other tooling we’ve been refining over the last couple of years show up inside the Cloudflare dashboard.</p><p>As part of this transition, the hosted Outerbase cloud will shut down on October 15, 2025, which is about six months from now. We know some of you rely on Outerbase as it stands today, so we’re leaving the open-source repositories as they are.</p><p>You will still be able to self-host Outerbase if you prefer, and we’ll provide guidance on how to do that within your own Cloudflare account. Our main goal will be to ensure that the best parts of Outerbase become part of the Cloudflare developer experience, so you no longer have to make a choice (it’ll be obvious!).</p>
    <div>
      <h3>Sneak peek</h3>
      <a href="#sneak-peek">
        
      </a>
    </div>
    <p>We’ve already done a lot of thinking about how we’re going to bring the best parts of Outerbase into D1, Durable Objects, Workflows, and Agents, and we’re going to share a little about what will be landing over the course of Q2 2025 as the Outerbase team gets to work.</p><p>Specifically, we’ll be heads-down focusing on:</p><ul><li><p>Adapting the powerful table viewer and query runner experiences to D1 and Durable Objects (amongst many other things!)</p></li><li><p>Making it easier to get started with Durable Objects: improving the experience in Wrangler (our CLI tooling), the Cloudflare dashboard, and how you plug into them from your client applications</p></li><li><p>Improvements to how you visualize the state of a Workflow and the thousands (to millions!) of Workflow instances you might have at any point in time</p></li><li><p>Pre- and post-query hooks for D1 that allow you to automatically register handlers that can act on your data</p></li><li><p>Bringing the <a href="https://starbasedb.com/"><u>Starbase</u></a> API to D1, expanding D1’s existing REST API, and adding WebSockets support — making it easier to use D1, even for applications hosted outside of Workers.</p></li></ul><p>We have already started laying the groundwork for these changes. In the coming weeks, we’ll release a unified data explorer for D1 and Durable Objects that borrows heavily from the Outerbase interface you know. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/FHinAqMr5I8ukmIZLln3a/a34734a3ed680556b01794c6de5e1f63/image2.png" />
          </figure><p><i><sup>Bringing Outerbase’s Data Explorer into the Cloudflare Dashboard</sup></i></p><p>We’ll also tie some of Starbase’s features directly into Cloudflare’s platform, so you can tap into its unique offerings like pre- and post-query hooks or row-level security right from your existing D1 databases and Durable Objects:</p>
            <pre><code>const beforeQuery = ({ sql, params }) =&gt; {
    // Prevent unauthorized queries
    if (!isAllowedQuery(sql)) throw new Error('Query not allowed');
};

const afterQuery = ({ sql, result }) =&gt; {
    // Basic PII masking example
    for (const row of result) {
        if ('email' in row) row.email = '[redacted]';
    }
};

// Execute the query with pre- and post-query hooks
const { results } = await env.DB.prepare("SELECT * FROM users;", beforeQuery, afterQuery).all();</code></pre>
            <p><i><sup>Define hooks on your D1 queries that can be re-used, shared and automatically executed before or after your queries run.</sup></i></p><p>This should give you more clarity and control over your data, as well as new ways to secure and optimize it.</p>
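As a rough sketch of the mechanics only (not the final D1 API), a wrapper around a query function could apply such hooks like this; `withHooks` and `fakeExecute` below are hypothetical names for illustration:

```javascript
// Illustrative hook mechanics: run beforeQuery to veto a statement,
// execute it, then let afterQuery rewrite rows before they reach the caller.
function withHooks(execute, { beforeQuery, afterQuery } = {}) {
  return (sql, params = []) => {
    if (beforeQuery) beforeQuery({ sql, params });   // may throw to block the query
    const result = execute(sql, params);
    if (afterQuery) afterQuery({ sql, result });     // may mask fields in place
    return result;
  };
}

// fakeExecute stands in for a real query runner.
const fakeExecute = () => [{ id: 1, email: "a@example.com" }];
const guarded = withHooks(fakeExecute, {
  beforeQuery: ({ sql }) => {
    if (/DROP/i.test(sql)) throw new Error("Query not allowed");
  },
  afterQuery: ({ result }) => {
    for (const row of result) if ("email" in row) row.email = "[redacted]";
  },
});
```

With this wrapper, `guarded("SELECT * FROM users;")` returns rows with the `email` field masked, while `guarded("DROP TABLE users;")` is rejected before it ever executes.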
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6W2C3WRqP13ghnHYnZsJHl/fecc1a6f8e92b6cac9499716ab5d7bc4/image1.png" />
          </figure><p><sup><i>Rethinking the Durable Objects getting started experience</i></sup></p><p>We have even begun optimizing the Cloudflare dashboard experience around Durable Objects and D1 to improve the empty state, provide more Getting Started resources, and overall, make managing and tracking your database resources even easier.</p><p>For those of you who’ve supported us, given us feedback, and stuck with us as we grew: thank you. You have helped shape Outerbase into what it is today. This acquisition means we can pour even more resources and attention into building the data experience we’ve always wanted to deliver. Our hope is that, by working as part of Cloudflare, we can help reach even more developers by building intuitive experiences, accelerating the speed of innovation, and creating tools that naturally fit into your workflows.</p><p>This is a big step for Outerbase, and we couldn’t be more excited. Thank you for being part of our journey so far. We can’t wait to show you what we’ve got in store as we continue to make data more accessible, intuitive, and powerful — together with Cloudflare.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re planning to get to work on some of the big changes to how you interact with your data on Cloudflare, starting with D1 and Durable Objects.</p><p>We’ll also be ensuring we bring a great developer experience to the broader database &amp; storage platform on Cloudflare, including how you access data in <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>, <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a> and even your <a href="https://developers.cloudflare.com/agents/"><u>AI Agents</u></a> (just to name a few).</p><p>To keep up, follow the new <a href="https://developers.cloudflare.com/changelog/"><u>Cloudflare Changelog</u></a> and join our <a href="http://discord.cloudflare.com/"><u>Developer Discord</u></a> to chat with the team and see early previews before they land.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4Epls86yTVhCR1tmlP4u67</guid>
            <dc:creator>Brandon Strittmatter</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Billions and billions (of logs): scaling AI Gateway with the Cloudflare Developer Platform]]></title>
            <link>https://blog.cloudflare.com/billions-and-billions-of-logs-scaling-ai-gateway-with-the-cloudflare/</link>
            <pubDate>Thu, 24 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ How we scaled AI Gateway to handle and store billions of requests, using Cloudflare Workers, D1, Durable Objects, and R2. ]]></description>
            <content:encoded><![CDATA[ <p>With the rapid advancements occurring in the AI space, developers face significant challenges in keeping up with the ever-changing landscape. New models and providers are continuously emerging, and understandably, developers want to experiment and test these options to find the best fit for their use cases. This creates the need for a streamlined approach to managing multiple models and providers, as well as a centralized platform to efficiently monitor usage, implement controls, and gather data for optimization.</p><p><a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> is specifically designed to address these pain points. Since its launch in <a href="https://blog.cloudflare.com/announcing-ai-gateway"><u>September 2023</u></a>, AI Gateway has empowered developers and organizations by successfully proxying over 2 billion requests in just one year, as we <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/#optimizing-ai-workflows-with-ai-gateway"><u>highlighted during September’s Birthday Week</u></a>. With AI Gateway, developers can easily store, analyze, and optimize their AI <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>inference</u></a> requests and responses in real time.</p><p>With our initial architecture, AI Gateway faced a significant challenge: the logs, those critical trails of data interactions between applications and AI models, could only be retained for 30 minutes. This limitation was not just a minor inconvenience; it posed a substantial barrier for developers and businesses needing to analyze long-term patterns, ensure compliance, or simply debug over more extended periods.</p><p>In this post, we'll explore the technical challenges and strategic decisions behind extending our log storage capabilities from 30 minutes to being able to store billions of logs indefinitely. 
We'll discuss the challenges of scale, the intricacies of data management, and how we've engineered a system that not only meets the demands of today, but is also scalable for the future of AI development.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>AI Gateway is built on <a href="https://workers.cloudflare.com"><u>Cloudflare Workers</u></a>, a serverless platform that runs on the Cloudflare network, allowing developers to write small JavaScript functions that can execute at the point of need, near the user, on Cloudflare's vast network of data centers, without worrying about platform scalability.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jV3iKCN771ixU21Hixfpz/18086a52cfe05cd20f1c94bbba21e293/_BLOG-2593_2.png" />
          </figure><p>Our customers use multiple providers and models and are always looking to optimize the way they do inference. And, of course, in order to evaluate their prompts, performance, cost, and to troubleshoot what’s going on, AI Gateway’s customers need to store requests and responses. New requests show up within 15 seconds and customers can check a request’s cost, duration, number of tokens, and provide their feedback (thumbs up or down).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RBqZXnLJNCaQPbtbzjQmj/70aa2598f9b9294b67db8cd5712a6345/_BLOG-2593_3.png" />
          </figure><p>This design scales: an account can have multiple gateways, and each gateway has its own settings. In our first implementation, a backend worker was responsible for storing real-time logs and for other background tasks. However, in the rapidly evolving domain of artificial intelligence, where real-time data is as precious as the insights it provides, <a href="https://www.cloudflare.com/learning/performance/log-retention-best-practices/">managing log data efficiently</a> becomes paramount. We recognized that to truly empower our users, we needed to offer a solution where logs weren't just transient records but could be stored permanently. Permanent log storage means developers can now track the performance, security, and operational insights of their AI applications over time, enabling not only immediate troubleshooting but also longitudinal studies of AI behavior, usage trends, and system health.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1TcC1ZdyNzT0xwFwme2oBt/a9202691a0a983fa3eafdf6c0ee92f2c/_BLOG-2593_4.png" />
          </figure><p>The diagram above describes our old architecture, which could only store 30 minutes of data.</p><p>Tracing the path of a request through the AI Gateway, as depicted in the sequence above:</p><ol><li><p>A developer sends a new inference request, which is first received by our Gateway Worker.</p></li><li><p>The Gateway Worker then performs several checks: it looks for cached results, enforces rate limits, and verifies any other configurations set by the user for their gateway. Provided all conditions are met, it forwards the request to the selected inference provider (in this diagram, OpenAI).</p></li><li><p>The inference provider processes the request and sends back the response.</p></li><li><p>Simultaneously, as the response is relayed back to the developer, the request and response details are also dispatched to our Backend Worker. This worker's role is to manage and store the log of this transaction.</p></li></ol>
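The four steps above can be condensed into a small simulation. The cache, rate limiter, provider, and log store here are stand-ins, not AI Gateway's actual components; in a real Worker, step 4 would be dispatched in the background (for example via `ctx.waitUntil`) rather than inline.

```javascript
// Condensed sketch of the request path: check gateway settings, forward to
// the provider, and record a log of the transaction.
function handleGatewayRequest(req, { cache, rateLimiter, provider, logStore }) {
  // Step 2: look for cached results and enforce rate limits first.
  const cached = cache.get(req.key);
  if (cached) return cached;
  if (!rateLimiter.allow(req.key)) throw new Error("rate limited");

  // Step 3: the inference provider processes the request.
  const response = provider(req.body);

  // Step 4: the request/response details are dispatched for log storage
  // alongside returning the response to the developer.
  logStore.push({ request: req.body, response });

  cache.set(req.key, response);
  return response;
}
```

A cached request short-circuits at step 2, so it produces no new provider call and no new log entry.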
    <div>
      <h2>The challenge: Store two billion logs</h2>
      <a href="#the-challenge-store-two-billion-logs">
        
      </a>
    </div>
    
    <div>
      <h3>First step: real-time logs</h3>
      <a href="#first-step-real-time-logs">
        
      </a>
    </div>
    <p>Initially, the AI Gateway project stored both request metadata and the actual request bodies in a <a href="https://developers.cloudflare.com/d1/"><u>D1 database</u></a>. This approach facilitated rapid development in the project's infancy. However, as customer engagement grew, the <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1 database</a> began to fill at an accelerating rate, eventually retaining logs for only 30 minutes at a time.</p><p>To mitigate this, we first optimized the database schema, which extended the log retention to one hour. However, we soon encountered diminishing returns due to the sheer volume of byte data from the request bodies. Post-launch, it became clear that a more scalable solution was necessary. We decided to migrate the request bodies to R2 storage, significantly alleviating the data load on D1. This adjustment allowed us to incrementally extend log retention to 24 hours.</p><p>Consequently, D1 functioned primarily as a log index, enabling users to search and filter logs efficiently. When users needed to view details or download a log, these actions were seamlessly proxied through to R2.</p><p>This dual-system approach provided us with the breathing room to contemplate and develop more sophisticated storage solutions for the future.</p>
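A minimal sketch of that split, with in-memory Maps standing in for the real services (D1 as the queryable index, R2 as the body store); the class and field names are illustrative:

```javascript
// LogStore models the dual-system design: small metadata rows go to an
// index (D1 in production), heavy request/response bodies to object
// storage (R2 in production).
class LogStore {
  constructor() {
    this.index = [];          // D1 stand-in: cheap, filterable metadata rows
    this.bodies = new Map();  // R2 stand-in: keyed blobs of arbitrary size
  }
  insert(id, metadata, body) {
    this.index.push({ id, ...metadata });   // e.g. model, tokens, duration
    this.bodies.set(id, body);              // the heavy payload lives elsewhere
  }
  search(predicate) {
    return this.index.filter(predicate);    // search/filter touches only the index
  }
  fetchBody(id) {
    return this.bodies.get(id);             // proxied through to storage on demand
  }
}
```

Searching stays fast because it only scans the lightweight index; the body is fetched only when a user opens or downloads a specific log.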
    <div>
      <h3>Second step: persistent logs and Durable Object transactional storage</h3>
      <a href="#second-step-persistent-logs-and-durable-object-transactional-storage">
        
      </a>
    </div>
    <p>As our traffic surged, we encountered a growing number of requests from customers wanting to access and compare older logs.</p><p>Upon learning that the Durable Objects team was seeking beta testers for their new <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>Durable Objects with SQLite</u></a>, we eagerly signed up.</p><p>Originally, we considered Durable Objects as the ideal solution for expanding our log storage capacity, which required us to shard the logs by a unique string. Initially, this string was the account ID, but during a mid-development load test, we hit a cap at 10 million logs per Durable Object. This limitation meant that each account could only support up to this number of logs.</p><p>Given our commitment to the DO migration, we saw an opportunity rather than a constraint. To overcome the 10 million log limit per account, we refined our approach to shard by both account ID and gateway name. This adjustment effectively raised the storage ceiling from 10 million logs per account to 10 million per gateway. With the default setting allowing each account up to 10 gateways, the potential storage for each account skyrocketed to 100 million logs.</p><p>This strategic pivot not only enabled us to store a significantly larger number of logs, but also enhanced our flexibility in gateway management. Now, when a gateway is deleted, we can simply remove the corresponding Durable Object.</p><p>Additionally, this sharding method isolates high-volume request scenarios. If one customer's heavy usage slows down log insertion, it only impacts their specific Durable Object, thereby preserving performance for other customers.</p>
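In Durable Objects terms, the change amounts to deriving the object's name from a different shard key (the name string would feed something like `namespace.idFromName()` in a real Worker). A hypothetical sketch of before and after:

```javascript
// Each Durable Object holds at most this many logs (the cap we hit).
const MAX_LOGS_PER_DO = 10_000_000;

// Before: one Durable Object per account, so the cap applies per account.
function shardByAccount(accountId) {
  return accountId;
}

// After: one Durable Object per (account, gateway) pair, so the cap
// applies per gateway instead.
function shardByAccountAndGateway(accountId, gatewayName) {
  return `${accountId}:${gatewayName}`;
}

// With up to 10 gateways per account, the per-account ceiling becomes
// 10 gateways x 10M logs = 100M logs.
function accountCapacity(gatewayCount) {
  return gatewayCount * MAX_LOGS_PER_DO;
}
```

Deleting a gateway then maps cleanly to deleting one shard key's Durable Object, without touching any other gateway's data.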
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Q6degDA3V02dZFVugW2LO/ae121890a3d4493e5c01459c477f32d9/_BLOG-2593_5.png" />
          </figure><p>Taking a glance at the revised architecture diagram, we replaced the Backend Worker with our newly integrated Durable Object. The rest of the request flow remains unchanged, including the concurrent response to the user and the interaction with the Durable Object, which occurs in the fourth step.</p><p>Leveraging Cloudflare’s network, our Gateway Worker operates near the user's location, which in turn positions the user's Durable Object close by. This proximity significantly enhances the speed of log insertion and query operations.</p>
    <div>
      <h3>Third step: managing thousands of Durable Objects</h3>
      <a href="#third-step-managing-thousands-of-durable-objects">
        
      </a>
    </div>
    <p>As the number of users and requests on AI Gateway grows, managing each unique Durable Object (DO) becomes increasingly complex. New customers join continuously, and we needed an efficient method to track each DO, ensure users stay within their 10 gateway limit, and manage the storage capacity for free users.</p><p>To address these challenges, we introduced another layer of control with a new Durable Object we've named the Account Manager. The primary function of the Account Manager is straightforward yet crucial: it keeps user activities in check.</p><p>Here's how it works: before any Gateway commits a new log to permanent storage, it consults the Account Manager. This check determines whether the gateway is allowed to insert the log based on the user's current usage and entitlements. The Account Manager uses its own SQLite database to verify the total number of rows a user has and their service level. If all checks pass, it signals the Gateway that the log can be inserted. It was paramount to guarantee that this entire validation process occurred in the background, ensuring that the user experience remains seamless and uninterrupted.</p><p>The Account Manager stays updated by periodically receiving data from each Gateway’s Durable Object. Specifically, after every 1000 inference requests, the Gateway sends an update on its total rows to the Account Manager, which then updates its local records. This system ensures that the Account Manager has the most current data when making its decisions.</p><p>Additionally, the Account Manager is responsible for monitoring customer entitlements. It tracks whether an account is on a free or paid plan, how many gateways a user is permitted to create, and the log storage capacity allocated to each gateway. </p><p>Through these mechanisms, the Account Manager not only helps in maintaining system integrity but also ensures fair usage across all users of AI Gateway.</p>
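Here is a simplified simulation of that flow; the limits and class names are illustrative, not the production values:

```javascript
// AccountManagerSim: tracks per-gateway row counts (its SQLite database in
// production) and answers "may this gateway insert another log?".
class AccountManagerSim {
  constructor(maxRowsPerGateway) {
    this.maxRows = maxRowsPerGateway;
    this.rowCounts = new Map();
  }
  canInsert(gatewayId) {
    return (this.rowCounts.get(gatewayId) ?? 0) < this.maxRows;
  }
  reportRows(gatewayId, totalRows) {   // periodic update from a gateway DO
    this.rowCounts.set(gatewayId, totalRows);
  }
}

// GatewaySim: consults the Account Manager before committing a log, and
// reports its total row count after every 1000 inserts.
class GatewaySim {
  constructor(id, accountManager) {
    this.id = id;
    this.manager = accountManager;
    this.rows = 0;
  }
  storeLog() {
    if (!this.manager.canInsert(this.id)) return false;  // over entitlement
    this.rows++;
    if (this.rows % 1000 === 0) this.manager.reportRows(this.id, this.rows);
    return true;
  }
}
```

Because the Account Manager only hears from each gateway every 1000 requests, its view can briefly lag; the trade-off keeps the check cheap enough to run in the background without slowing down log insertion.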
    <div>
      <h2>AI evaluations and Durable Objects sharding</h2>
      <a href="#ai-evaluations-and-durable-objects-sharding">
        
      </a>
    </div>
    <p>As we work towards making evaluations fully automatic and, in the future, using Large Language Models (LLMs), we are now taking the first step towards this goal and launching the open beta phase of comprehensive <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/#optimizing-ai-workflows-with-ai-gateway"><u>AI evaluations</u></a>, centered on Human-in-the-Loop feedback.</p><p>This feature empowers users to create bespoke datasets from their application logs and use them to score and evaluate the performance, speed, and cost-effectiveness of their models. The primary focus is on LLMs and automated scoring, providing developers with objective, data-driven insights to refine their models.</p><p>To do this, developers require a reliable logging mechanism that persists logs from multiple gateways, storing up to 100 million logs in total (10 million logs per gateway, across 10 gateways). This represents a significant volume of data, as each request made through the AI Gateway generates a log entry, with some log entries potentially exceeding 50 MB in size.</p><p>This necessity leads us to work on expanding log storage capabilities. Since log storage is limited to 10 million logs per gateway, in future iterations, we aim to scale this capacity by implementing sharded Durable Objects (DO), allowing multiple Durable Objects per gateway to handle and store logs. This scaling strategy will enable us to store significantly larger volumes of logs, providing richer data for evaluations (using LLMs as a judge or from user input), all through AI Gateway.</p>
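One plausible sketch of such per-gateway sharding routes each log to one of a fixed set of shards via a stable hash; the hash function and shard count below are our own illustration, not the shipped design:

```javascript
// Number of Durable Object shards per gateway (illustrative).
const SHARDS_PER_GATEWAY = 8;

// Map a log to a shard name. The hash (a djb2 variant) is stable, so the
// same log always routes to the same Durable Object shard.
function shardFor(gatewayId, logId, shardCount = SHARDS_PER_GATEWAY) {
  let h = 5381;
  for (const ch of `${logId}`) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return `${gatewayId}:shard-${h % shardCount}`;
}
```

With N shards per gateway, the effective per-gateway log ceiling grows N-fold, while queries fan out across the shards and merge their results.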
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FLy2JEfvGFo8P7PCVBZYT/a4d6367341e9fc224dedaad3aa0f02e2/_BLOG-2593_6.png" />
          </figure>
    <div>
      <h2>Coming Soon</h2>
      <a href="#coming-soon">
        
      </a>
    </div>
    <p>We are working on improving our existing <a href="https://developers.cloudflare.com/ai-gateway/providers/universal/"><u>Universal Endpoint</u></a>: the next step is an enhanced solution that builds on existing fallback mechanisms to offer greater resilience, flexibility, and intelligence in request management.</p><p>Currently, when a provider encounters an error or is unavailable, our system <a href="https://developers.cloudflare.com/ai-gateway/configuration/fallbacks/"><u>falls back</u></a> to an alternative provider to ensure continuity. The improved Universal Endpoint takes this a step further by introducing automatic retry capabilities, allowing failed requests to be reattempted before fallback is triggered. This significantly improves reliability by handling transient errors and increasing the likelihood of successful request fulfillment. It will look something like this:</p>
            <pre><code>curl --location 'https://aig.example.com/' \
--header 'CF-AIG-TOKEN: Bearer XXXX' \
--header 'Content-Type: application/json' \
--data-raw '[
    {
        "id": "0001",
        "provider": "openai",
        "endpoint": "chat/completions",
        "headers": {
            "Authorization": "Bearer XXXX",
            "Content-Type": "application/json"
        },
        "query": {
            "model": "gpt-3.5-turbo",
            "messages": [
                {
                    "role": "user",
                    "content": "generate a prompt to create cloudflare random images"
                }
            ]
        },
        "option": {
            "retry": 2,
            "delay": 200,
            "onComplete": {
                "provider": "workers-ai",
                "endpoint": "@cf/stabilityai/stable-diffusion-xl-base-1.0",
                "headers": {
                    "Authorization": "Bearer XXXXXX",
                    "Content-Type": "application/json"
                },
                "query": {
                    "messages": [
                        {
                            "role": "user",
                            "content": "&lt;prompt-response id='\''0001'\'' /&gt;"
                        }
                    ]
                }
            }
        }
    },
    {
        "provider": "workers-ai",
        "endpoint": "@cf/stabilityai/stable-diffusion-xl-base-1.0",
        "headers": {
            "Authorization": "Bearer XXXXXX",
            "Content-Type": "application/json"
        },
        "query": {
            "messages": [
                {
                    "role": "user",
                    "content": "create a image of a missing cat"
                }
            ]
        }
    }
]'</code></pre>
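The retry-then-fallback behavior encoded in that payload can be sketched as follows (a minimal illustration under assumed semantics for the `retry` and `delay` options, not the actual gateway code):

```typescript
// Minimal sketch of retry-with-delay followed by fallback (illustrative only).
interface Step {
  run(): Promise<string>; // calls one provider
  retry?: number;         // extra attempts before falling back (assumed meaning)
  delay?: number;         // milliseconds between attempts (assumed meaning)
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Try each provider step in order; within a step, retry transient failures
// before moving on to the next (fallback) step.
async function execute(steps: Step[]): Promise<string> {
  let lastError: unknown;
  for (const step of steps) {
    const attempts = 1 + (step.retry ?? 0);
    for (let i = 0; i < attempts; i++) {
      try {
        return await step.run();
      } catch (err) {
        lastError = err;
        if (i < attempts - 1) await sleep(step.delay ?? 0);
      }
    }
  }
  throw lastError;
}
```

Under this reading, the first entry in the example would be attempted up to three times (one initial try plus `"retry": 2`) with a 200 ms pause between attempts before the second entry is used as the fallback.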
            <p>The request above demonstrates how the improved Universal Endpoint handles multiple providers with integrated retry mechanisms and fallback logic. In this example, the first request is sent to a provider like OpenAI, asking it to generate a text-to-image prompt. The “retry” option ensures that transient issues don’t result in immediate failure.</p><p>The system’s ability to seamlessly switch between providers while applying retry strategies ensures higher reliability and robustness in managing requests. By leveraging fallback logic, the improved Universal Endpoint can dynamically adapt to provider failures, ensuring that tasks are completed successfully even in complex, multi-step workflows.</p><p>In addition to retry logic, the system will be able to inspect requests and responses and make dynamic decisions based on the content of the result. This enables developers to create conditional workflows where the system adapts its behavior depending on the nature of the response, creating a highly flexible and intelligent decision-making process.</p><p>If you haven’t yet used AI Gateway, check out our <a href="https://developers.cloudflare.com/ai-gateway/"><u>developer documentation</u></a> on how to get started. If you have any questions, reach out on our <a href="http://discord.cloudflare.com/"><u>Discord channel</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">2LUyKREpCJjJ5qGqwZyoAx</guid>
            <dc:creator>Catarina Pires Mota</dc:creator>
            <dc:creator>Gabriel Massadas</dc:creator>
            <dc:creator>Nelson Duarte</dc:creator>
        </item>
        <item>
            <title><![CDATA[Developer Week 2024 wrap-up]]></title>
            <link>https://blog.cloudflare.com/developer-week-2024-wrap-up/</link>
            <pubDate>Mon, 08 Apr 2024 13:00:02 GMT</pubDate>
            <description><![CDATA[ Developer Week 2024 has officially come to a close. Here’s a quick recap of the announcements and in-depth technical explorations that went out last week ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fwPu75tSubJgSS8nJ5gOt/6e2fd9b7cc6f9dcd7b86d73988a6e5fb/Dev-week-wrap-up-1.jpg" />
            
            </figure><p>Developer Week 2024 has officially come to a close. Each day last week, we shipped new products and functionality geared towards giving developers the components they need to build full-stack applications on Cloudflare.</p><p>Even though Developer Week is now over, we are continuing to innovate with the over two million developers who build on our platform. Building a platform is only as exciting as seeing what developers build on it. Before we dive into a recap of the announcements, to send off the week, we wanted to share how a couple of companies are using Cloudflare to power their applications:</p><blockquote><p><i>We have been using Workers for image delivery using R2 and have been able to maintain stable operations for a year after implementation. The speed of deployment and the flexibility of detailed configurations have greatly reduced the time and effort required for traditional server management. In particular, we have seen a noticeable cost savings and are deeply appreciative of the support we have received from Cloudflare Workers.</i>- <a href="http://www.fancs.com/">FAN Communications</a></p></blockquote><blockquote><p><i>Milkshake helps creators, influencers, and business owners create engaging web pages directly from their phone, to simply and creatively promote their projects and passions. Cloudflare has helped us migrate data quickly and affordably with R2. We use Workers as a routing layer between our users' websites and their images and assets, and to build a personalized analytics offering affordably. 
Cloudflare’s innovations have consistently allowed us to run infrastructure at a fraction of the cost of other developer platforms and we have been eagerly awaiting updates to D1 and Queues to sustainably scale Milkshake as the product continues to grow.</i>- <a href="https://milkshake.app/">Milkshake</a></p></blockquote><p>In case you missed anything, here’s a quick recap of the announcements and in-depth technical explorations that went out last week:</p>
    <div>
      <h2>Summary of announcements</h2>
      <a href="#summary-of-announcements">
        
      </a>
    </div>
    
    <div>
      <h3>Monday</h3>
      <a href="#monday">
        
      </a>
    </div>
    
<table>
<thead>
  <tr>
    <th><span>Announcement</span></th>
    <th><span>Summary</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/making-full-stack-easier-d1-ga-hyperdrive-queues/"><span>Making state easy with D1 GA, Hyperdrive, Queues and Workers Analytics Engine updates</span></a></td>
    <td><span>A core part of any full-stack application is storing and persisting data! We kicked off the week with announcements that help developers build stateful applications on top of Cloudflare, including making D1, Cloudflare’s SQL database and Hyperdrive, our database accelerating service, generally available.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/building-d1-a-global-database/"><span>Building D1: a Global Database</span></a></td>
    <td><span>D1, Cloudflare’s SQL database, is now generally available. With new support for 10GB databases, data export, and enhanced query debugging, we empower developers to build production-ready applications with D1 to meet all their relational SQL needs. To support Workers in global applications, we’re sharing a sneak peek of our design and API for D1 global read replication to demonstrate how developers scale their workloads with D1.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><span>Why Workers environment variables contain live objects</span></a></td>
    <td><span>Bindings don't just reduce boilerplate. They are a core design feature of the Workers platform which simultaneously improve developer experience and application security in several ways. Usually these two goals are in opposition to each other, but bindings elegantly solve for both at the same time.</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Tuesday</h3>
      <a href="#tuesday">
        
      </a>
    </div>
    
<table>
<thead>
  <tr>
    <th><span>Announcement</span></th>
    <th><span>Summary</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/workers-ai-ga-huggingface-loras-python-support/"><span>Leveling up Workers AI: General Availability and more new capabilities</span></a></td>
    <td><span>We made a series of AI-related announcements, including Workers AI, Cloudflare’s inference platform becoming GA, support for fine-tuned models with LoRAs, one-click deploys from HuggingFace, Python support for Cloudflare Workers, and more.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/fine-tuned-inference-with-loras/"><span>Running fine-tuned models on Workers AI with LoRAs</span></a></td>
    <td><span>Workers AI now supports fine-tuned models using LoRAs. But what is a LoRA and how does it work? In this post, we dive into fine-tuning, LoRAs and even some math to share the details of how it all works under the hood.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/python-workers/"><span>Bringing Python to Workers using Pyodide and WebAssembly</span></a></td>
    <td><span>We introduced Python support for Cloudflare Workers, now in open beta. We've revamped our systems to support Python, from the Workers runtime itself to the way Workers are deployed to Cloudflare’s network. Learn about a Python Worker's lifecycle, Pyodide, dynamic linking, and memory snapshots in this post.</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Wednesday</h3>
      <a href="#wednesday">
        
      </a>
    </div>
    
<table>
<thead>
  <tr>
    <th><span>Announcement</span></th>
    <th><span>Summary</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/"><span>R2 adds event notifications, support for migrations from Google Cloud Storage, and an infrequent access storage tier</span></a></td>
    <td><span>We announced three new features for Cloudflare R2: event notifications, support for migrations from Google Cloud Storage, and an infrequent access storage tier.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/data-anywhere-events-pipelines-durable-execution-workflows/"><span>Data Anywhere with Pipelines, Event Notifications, and Workflows</span></a></td>
    <td><span>We’re making it easier to build scalable, reliable, data-driven applications on top of our global network, and so we announced a new Event Notifications framework; our take on durable execution, Workflows; and an upcoming streaming ingestion service, Pipelines.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/prisma-orm-and-d1/"><span>Improving Cloudflare Workers and D1 developer experience with Prisma ORM</span></a></td>
    <td><span>Together, Cloudflare and Prisma make it easier than ever to deploy globally available apps with a focus on developer experience. To further that goal, Prisma ORM now natively supports Cloudflare Workers and D1 in Preview. With version 5.12.0 of Prisma ORM you can now interact with your data stored in D1 from your Cloudflare Workers with the convenience of the Prisma Client API. Learn more and try it out now.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/picsart-move-to-workers-huge-performance-gains/"><span>How Picsart leverages Cloudflare's Developer Platform to build globally performant services</span></a></td>
    <td><span>Picsart, one of the world’s largest digital creation platforms, encountered performance challenges in catering to its global audience. Adopting Cloudflare's global-by-default Developer Platform emerged as the optimal solution, empowering Picsart to enhance performance and scalability substantially.</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Thursday</h3>
      <a href="#thursday">
        
      </a>
    </div>
    
<table>
<thead>
  <tr>
    <th><span>Announcement</span></th>
    <th><span>Summary</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/pages-workers-integrations-monorepos-nextjs-wrangler/"><span>Announcing Pages support for monorepos, wrangler.toml, database integrations and more!</span></a></td>
    <td><span>We launched four improvements to Pages that bring functionality previously restricted to Workers, with the goal of unifying the development experience between the two. Support for monorepos, wrangler.toml, new additions to Next.js support and database integrations!</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/workers-production-safety/"><span>New tools for production safety — Gradual Deployments, Stack Traces, Rate Limiting, and API SDKs</span></a></td>
    <td><span>Production readiness isn’t just about scale and reliability of the services you build with. We announced five updates that put more power in your hands – Gradual Deployments, Source mapped stack traces in Tail Workers, a new Rate Limiting API, brand-new API SDKs, and updates to Durable Objects – each built with mission-critical production services in mind.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/whats-next-for-cloudflare-media/"><span>What’s new with Cloudflare Media: updates for Calls, Stream, and Images</span></a></td>
    <td><span>With Cloudflare Calls in open beta, you can build real-time, serverless video and audio applications. Cloudflare Stream lets your viewers instantly clip from ongoing streams. Finally, Cloudflare Images now supports automatic face cropping and has an upload widget that lets you easily integrate into your application.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/"><span>Cloudflare Calls: millions of cascading trees all the way down</span></a></td>
    <td><span>Cloudflare Calls is a serverless SFU and TURN service running at Cloudflare’s edge. It’s now in open beta and costs $0.05 per real-time GB. It’s 100% anycast WebRTC.</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Friday</h3>
      <a href="#friday">
        
      </a>
    </div>
    
<table>
<thead>
  <tr>
    <th><span>Announcement</span></th>
    <th><span>Summary</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/browser-rendering-api-ga-rolling-out-cloudflare-snippets-swr-and-bringing-workers-for-platforms-to-our-paygo-plans/"><span>Browser Rendering API GA, rolling out Cloudflare Snippets, SWR, and bringing Workers for Platforms to all users</span></a></td>
    <td><span>Browser Rendering API is now available to all paid Workers customers with improved session management.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-acquires-baselime-expands-observability-capabilities/"><span>Cloudflare acquires Baselime to expand serverless application observability capabilities</span></a></td>
    <td><span>We announced that Cloudflare has acquired Baselime, a serverless observability company.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-acquires-partykit/"><span>Cloudflare acquires PartyKit to allow developers to build real-time multi-user applications</span></a></td>
    <td><span>We announced that PartyKit, a trailblazer in enabling developers to craft ambitious real-time, collaborative, multiplayer applications, is now a part of Cloudflare. This acquisition marks a significant milestone in our journey to redefine the boundaries of serverless computing, making it more dynamic, interactive, and, importantly, stateful.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/blazing-fast-development-with-full-stack-frameworks-and-cloudflare/"><span>Blazing fast development with full-stack frameworks and Cloudflare</span></a></td>
    <td><span>Full-stack web development with Cloudflare is now faster and easier! You can now use your framework’s development server while accessing D1 databases, R2 object stores, AI models, and more. Iterate locally in milliseconds to build sophisticated web apps that run on Cloudflare. Let’s dev together!</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/javascript-native-rpc/"><span>We've added JavaScript-native RPC to Cloudflare Workers</span></a></td>
    <td><span>Cloudflare Workers now features a built-in RPC (Remote Procedure Call) system for use in Worker-to-Worker and Worker-to-Durable Object communication, with absolutely minimal boilerplate. We've designed an RPC system so expressive that calling a remote service can feel like using a library.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/2024-community-update/"><span>Community Update: empowering startups building on Cloudflare and creating an inclusive community</span></a></td>
    <td><span>We closed out Developer Week by sharing updates on our Workers Launchpad program, our latest Developer Challenge, and the work we’re doing to ensure our community spaces – like our Discord and Community forums – are safe and inclusive for all developers.</span></td>
  </tr>
</tbody>
</table><p>Here's a video summary, by Craig Dennis, Developer Educator, AI:</p><blockquote><p>🏃<a href="https://twitter.com/CloudflareDev?ref_src=twsrc%5Etfw">@CloudflareDev</a> Developer Week 2024 🧡 ICYMI 🧡 Speed run <a href="https://t.co/0uzPJshC93">pic.twitter.com/0uzPJshC93</a></p>— Craig Dennis (@craigsdennis) <a href="https://twitter.com/craigsdennis/status/1778875721575989734?ref_src=twsrc%5Etfw">April 12, 2024</a></blockquote> 
    <div>
      <h3>Continue the conversation</h3>
      <a href="#continue-the-conversation">
        
      </a>
    </div>
    <p>Thank you for being a part of Developer Week! Want to continue the conversation and share what you’re building? Join us on <a href="https://discord.com/invite/cloudflaredev">Discord</a>. To get started building on Workers, check out our <a href="https://developers.cloudflare.com/workers/">developer documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Rate Limiting]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">VNnYecAmN7CpST4nBbas0</guid>
            <dc:creator>Phillip Jones</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving Cloudflare Workers and D1 developer experience with Prisma ORM]]></title>
            <link>https://blog.cloudflare.com/prisma-orm-and-d1/</link>
            <pubDate>Wed, 03 Apr 2024 13:03:09 GMT</pubDate>
            <description><![CDATA[ Together, Cloudflare and Prisma make it easier than ever to deploy globally available apps with a focus on developer experience. To further that goal, Prisma ORM now natively supports Cloudflare Workers and D1 in Preview ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Working with <a href="https://www.cloudflare.com/developer-platform/products/d1/">databases</a> can be difficult. Developers face increasing data complexity and needs beyond simple create, read, update, and delete (CRUD) operations. Unfortunately, these issues also compound on themselves: developers have a harder time iterating in an increasingly complex environment. Cloudflare Workers and D1 help by reducing time spent managing infrastructure and deploying applications, and <a href="https://prisma.io">Prisma</a> provides a great experience for your team to work and interact with data.  </p><p>Together, Cloudflare and Prisma make it easier than ever to deploy globally available apps with a focus on developer experience. To further that goal, Prisma Object Relational Mapper (ORM) now natively supports Cloudflare Workers and D1 in Preview. With version <code>5.12.0</code> of Prisma ORM you can now interact with your data stored in D1 from your Cloudflare Workers with the convenience of the Prisma Client API. Learn more and <a href="https://pris.ly/d/d1">try it out now</a>.</p>
    <div>
      <h3>What is Prisma?</h3>
      <a href="#what-is-prisma">
        
      </a>
    </div>
    <p>From writing to debugging, SQL queries take a long time and slow developer productivity. Even before writing queries, modeling tables can quickly become unwieldy, and migrating data is a nerve-wracking process. Prisma ORM looks to resolve all of these issues by providing an intuitive data modeling language, an automated migration workflow, and a developer-friendly and type-safe client for JavaScript and TypeScript, allowing developers to focus on what they enjoy: developing!</p><p><a href="https://prisma.io">Prisma</a> is focused on making working with data easy. Alongside an ORM, Prisma offers <a href="https://www.prisma.io/data-platform/accelerate">Accelerate</a> and <a href="https://www.prisma.io/data-platform/pulse">Pulse</a>, products built on Cloudflare that cover needs from connection pooling, to query caching, to real-time type-safe database subscriptions.</p>
    <div>
      <h2><b>How to get started with Prisma ORM, Cloudflare Workers, and D1</b></h2>
      <a href="#how-to-get-started-with-prisma-orm-cloudflare-workers-and-d1">
        
      </a>
    </div>
    <p>To get started with Prisma ORM and D1, first create a basic Cloudflare Workers app. This guide will start with the <code>"Hello World"</code> Worker example app, but any Workers example app will work. If you don’t have a project yet, start by creating a new one. Name your project something memorable, like <code>my-d1-prisma-app</code>, and select the “Hello World” Worker and TypeScript. For now, we will choose not to deploy and will wait until after we have set up D1 and Prisma ORM.</p><p><code>npm create cloudflare@latest</code></p><p>Next, move into your newly created project and make sure that dependencies are installed:</p><p><code>cd my-d1-prisma-app &amp;&amp; npm install</code></p><p>After dependencies are installed, we can move on to the D1 setup.</p><p>First, create a new D1 database for your app.</p>
            <pre><code>npx wrangler d1 create prod-prisma-d1-app
.
.
.

[[d1_databases]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "prod-prisma-d1-app"
database_id = "&lt;unique-ID-for-your-database&gt;"</code></pre>
            <p>The section starting with <code>[[d1_databases]]</code> is the binding configuration needed in your <code>wrangler.toml</code> for your Worker to communicate with D1. Add that now:</p>
            <pre><code>// wrangler.toml
name="my-d1-prisma-app"
main = "src/index.ts"
compatibility_date = "2024-03-20"
compatibility_flags = ["nodejs_compat"]

[[d1_databases]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "prod-prisma-d1-app"
database_id = "&lt;unique-ID-for-your-database&gt;"</code></pre>
            <p>Your application now has D1 available! Next, add Prisma ORM to manage your queries, schema and migrations! To add Prisma ORM, first make sure the latest version is installed. Prisma ORM versions 5.12.0 and up support Cloudflare Workers and D1.</p><p><code>npm install prisma@latest @prisma/client@latest @prisma/adapter-d1</code></p><p>Now run <code>npx prisma init</code> in order to create the necessary files to start with. Since D1 uses SQLite’s SQL dialect, we set the provider to be <code>sqlite</code>.</p><p><code>npx prisma init --datasource-provider sqlite</code></p><p>This will create a few files, but the one to look at first is your <a href="https://www.prisma.io/docs/orm/prisma-schema/overview">Prisma schema file</a>, available at <code>prisma/schema.prisma</code></p>
            <pre><code>// schema.prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "sqlite"
  url  = env("DATABASE_URL")
}</code></pre>
            <p>Before you can create any models, first enable the <code>driverAdapters</code> Preview feature. This will allow the Prisma Client to use an adapter to communicate with D1.</p>
            <pre><code>// schema.prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
+ previewFeatures = ["driverAdapters"]
}

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}</code></pre>
            <p>Now you are ready to create your first model! In this app, you will be creating a “ticker”, a mainstay of many classic Internet sites.</p><p>Add a new model to your schema, <code>Visit</code>, which will track that an individual visited your site. A <code>Visit</code> is a simple model that will have a unique ID and the time at which an individual visited your site.</p>
            <pre><code>// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["driverAdapters"]
}

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}

+ model Visit {
+   id        Int      @id @default(autoincrement())
+   visitTime DateTime @default(now())
+ }
</code></pre>
            <p>Now that you have a schema and a model, let’s create a migration. First use <code>wrangler</code> to generate an empty migration file and <code>prisma migrate</code> to fill it. If prompted, select “yes” to create a migrations folder at the root of your project.</p>
            <pre><code>npx wrangler d1 migrations create prod-prisma-d1-app init
 ⛅️ wrangler 3.36.0
-------------------
✔ No migrations folder found. Set `migrations_dir` in wrangler.toml to choose a different path.
Ok to create /path/to/your/project/my-d1-prisma-app/migrations? … yes
✅ Successfully created Migration '0001_init.sql'!

The migration is available for editing here
/path/to/your/project/my-d1-prisma-app/migrations/0001_init.sql</code></pre>
            
            <pre><code>npx prisma migrate diff --script --from-empty --to-schema-datamodel ./prisma/schema.prisma &gt;&gt; migrations/0001_init.sql</code></pre>
            <p>The <code>npx prisma migrate diff</code> command takes the difference between your database (which is currently empty) and the Prisma schema. It then saves this difference to a new file in the migrations directory.</p>
            <pre><code>// 0001_init.sql
-- Migration number: 0001 	 2024-03-21T22:15:50.184Z
-- CreateTable
CREATE TABLE "Visit" (
    "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    "visitTime" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);</code></pre>
            <p>Now you can migrate your local and remote D1 database instances using <code>wrangler</code> and re-generate your Prisma Client to begin making queries.</p>
            <pre><code>npx wrangler d1 migrations apply prod-prisma-d1-app --local
npx wrangler d1 migrations apply prod-prisma-d1-app --remote
npx prisma generate</code></pre>
            <p>Make sure to import <code>PrismaClient</code> and <code>PrismaD1</code>, define the binding for your D1 database, and you’re ready to use Prisma in your application.</p>
            <pre><code>// src/index.ts
import { PrismaClient } from "@prisma/client";
import { PrismaD1 } from "@prisma/adapter-d1";

export interface Env {
  DB: D1Database,
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise&lt;Response&gt; {
    const adapter = new PrismaD1(env.DB);
    const prisma = new PrismaClient({ adapter });
    const { pathname } = new URL(request.url);

    if (pathname === '/') {
      const numVisitors = await prisma.visit.count();
      return new Response(
        `You have had ${numVisitors} visitors!`
      );
    }

    return new Response('');
  },
};</code></pre>
            <p>You may notice that there are always 0 visitors. Add another route to create a new visitor whenever someone visits the <code>/visit</code> route.</p>
            <pre><code>// src/index.ts
import { PrismaClient } from "@prisma/client";
import { PrismaD1 } from "@prisma/adapter-d1";

export interface Env {
  DB: D1Database,
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise&lt;Response&gt; {
    const adapter = new PrismaD1(env.DB);
    const prisma = new PrismaClient({ adapter });
    const { pathname } = new URL(request.url);

    if (pathname === '/') {
      const numVisitors = await prisma.visit.count();
      return new Response(
        `You have had ${numVisitors} visitors!`
      );
    } else if (pathname === '/visit') {
      const newVisitor = await prisma.visit.create({ data: {} });
      return new Response(
        `You visited at ${newVisitor.visitTime}. Thanks!`
      );
    }

    return new Response('');
  },
};</code></pre>
            <p>Your app is now set up to record visits and report how many visitors you have had!</p>
    <div>
      <h2>Summary and further reading</h2>
      <a href="#summary-and-further-reading">
        
      </a>
    </div>
    <p>We were able to build a simple app easily with Cloudflare Workers, D1 and Prisma ORM, but the benefits don’t stop there! Check the <a href="https://www.prisma.io/docs">official documentation</a> for information on using Prisma ORM with <a href="https://www.prisma.io/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare#d1">D1</a> along with workflows for <a href="https://www.prisma.io/docs/orm/prisma-migrate/getting-started">migrating your data</a>, and even <a href="https://www.prisma.io/docs/orm/prisma-client/client-extensions">extending the Prisma Client</a> for your specific needs.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Prisma]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">57u9f36ao6LNU9KGRJIV7m</guid>
            <dc:creator>Jon Harrell (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building D1: a Global Database]]></title>
            <link>https://blog.cloudflare.com/building-d1-a-global-database/</link>
            <pubDate>Mon, 01 Apr 2024 13:00:41 GMT</pubDate>
            <description><![CDATA[ D1, Cloudflare’s SQL database, is now generally available.  ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76hMKeBHewbCLm4XlVS4zL/92271c25576185cad1ab5e70e29ede58/image2-33.png" />
            
            </figure><p>Developers who build Worker applications focus on what they're creating, not the infrastructure required, and benefit from the global reach of <a href="https://www.cloudflare.com/network/">Cloudflare's network</a>. Many applications require persistent data, from personal projects to business-critical workloads. Workers offer various <a href="https://developers.cloudflare.com/workers/platform/storage-options/">database and storage options</a> tailored to developer needs, such as key-value and <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>.</p><p>Relational databases are the backbone of many applications today. <a href="https://developers.cloudflare.com/d1/">D1</a>, Cloudflare's relational database complement, is now generally available. Our journey from alpha in late 2022 to GA in April 2024 focused on enabling developers to build production workloads with the familiarity of relational data and SQL.</p>
    <div>
      <h3>What’s D1?</h3>
      <a href="#whats-d1">
        
      </a>
    </div>
    <p>D1 is Cloudflare's built-in, serverless relational database. For Worker applications, D1 offers SQL's expressiveness, leveraging SQLite's SQL dialect, and developer tooling integrations, including object-relational mappers (ORMs) like <a href="https://orm.drizzle.team/docs/connect-cloudflare-d1">Drizzle ORM</a>. D1 is accessible via <a href="https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/">Workers</a> or an <a href="https://developers.cloudflare.com/api/operations/cloudflare-d1-create-database">HTTP API</a>.</p><p>Serverless means no provisioning, default disaster recovery with <a href="https://developers.cloudflare.com/d1/reference/time-travel/">Time Travel</a>, and <a href="https://developers.cloudflare.com/d1/platform/pricing/">usage-based pricing</a>. D1 includes a generous free tier that allows developers to experiment with D1 and then graduate those trials to production.</p>
    <div>
      <h3>How to make data global?</h3>
      <a href="#how-to-make-data-global">
        
      </a>
    </div>
    <p>D1 GA has focused on reliability and developer experience. Now, we plan on extending D1 to better support globally-distributed applications.</p><p>In the Workers model, an incoming request invokes serverless execution in the closest data center. A Worker application can scale globally with user requests. Application data, however, remains stored in centralized databases, and global user traffic must account for access round trips to data locations. For example, a D1 database today resides in a single location.</p><p>Workers support <a href="https://developers.cloudflare.com/workers/configuration/smart-placement">Smart Placement</a> to account for frequently accessed data locality. Smart Placement invokes a Worker closer to centralized backend services like databases to lower latency and improve application performance. We’ve addressed Workers placement in global applications, but need to solve data placement.</p><p>The question, then, is how can D1, as Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/d1/">built-in database solution</a>, better support data placement for global applications? The answer is asynchronous read replication.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1I58tQFeSOcIyqrGfv9UnB/c1bc267b8cd2eb09332ae909429aeb5b/image4-30.png" />
            
            </figure>
    <div>
      <h3>What is asynchronous read replication?</h3>
      <a href="#what-is-asynchronous-read-replication">
        
      </a>
    </div>
    <p>In a server-based database management system, like Postgres, MySQL, SQL Server, or Oracle, a <b><i>read replica</i></b> is a separate database server that serves as a read-only, almost up-to-date copy of the primary database server. An administrator creates a read replica by starting a new server from a snapshot of the primary server and configuring the primary server to send updates asynchronously to the replica server. Since the updates are asynchronous, the read replica may be behind the current state of the primary server. The difference between the primary server and a replica is called <b><i>replica lag</i></b>. It's possible to have more than one read replica.</p><p>Asynchronous read replication is a time-proven solution for improving the performance of databases:</p><ul><li><p>It's possible to increase throughput by distributing load across multiple replicas.</p></li><li><p>It's possible to lower query latency when the replicas are close to the users making queries.</p></li></ul><p>Note that some database systems also offer synchronous replication. In a synchronous replicated system, writes must wait until all replicas have confirmed the write. Synchronous replicated systems can run only as fast as the slowest replica and come to a halt when a replica fails. If we’re trying to improve performance on a global scale, we want to avoid synchronous replication as much as possible!</p>
    <div>
      <h3>Consistency models &amp; read replicas</h3>
      <a href="#consistency-models-read-replicas">
        
      </a>
    </div>
    <p>Most database systems provide <a href="https://jepsen.io/consistency/models/read-committed">read committed</a>, <a href="https://jepsen.io/consistency/models/snapshot-isolation">snapshot isolation</a>, or <a href="https://jepsen.io/consistency/models/serializable">serializable</a> consistency models, depending on their configuration. For example, Postgres <a href="https://jepsen.io/consistency/models/read-committed">defaults to read committed</a> but can be configured to use stronger modes. SQLite provides <a href="https://www.sqlite.org/draft/isolation.html">snapshot isolation in WAL mode</a>. Stronger modes like snapshot isolation or serializable are easier to program against because they limit the permitted system concurrency scenarios and the kinds of concurrency race conditions the programmer has to worry about.</p><p>Read replicas are updated independently, so each replica's contents may differ at any moment. If all of your queries go to the same server, whether the primary or a read replica, your results should be consistent according to whatever <a href="https://jepsen.io/consistency">consistency model</a> your underlying database provides. If you're using a read replica, the results may just be a little old.</p><p>In a server-based database with read replicas, it's important to stick with the same server for all of the queries in a session. If you switch among different read replicas in the same session, you compromise the consistency model provided to your application, which may violate your assumptions about how the database acts and cause your application to return incorrect results!</p><p><b>Example:</b> There are two replicas, A and B. Replica A lags the primary database by 100ms, and replica B lags the primary database by 2s. Suppose a user wishes to:</p><ol><li><p>Execute query 1</p><p>1a. Do some computation based on query 1 results</p></li><li><p>Execute query 2 based on the results of the computation in (1a)</p></li></ol><p>At time t=10s, query 1 goes to replica A and returns. Query 1 sees what the primary database looked like at t=9.9s. Suppose it takes 500ms to do the computation, so at t=10.5s, query 2 goes to replica B. Remember, replica B lags the primary database by 2s, so at t=10.5s, query 2 sees what the database looks like at t=8.5s. As far as the application is concerned, the results of query 2 look like the database has gone backwards in time!</p>
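<p>The numbers in this example can be checked with a toy simulation. This is plain TypeScript, not D1 code; the once-per-second counter standing in for the primary's state is an illustrative assumption.</p>

```typescript
// Toy model of the anomaly above: each replica serves the primary's
// state as of (now - lag). Assume the primary increments a counter
// once per second, so its state at time t is floor(t).
function primaryStateAt(t: number): number {
  return Math.floor(t);
}

// A read against a replica that lags the primary by `lagSeconds`.
function replicaRead(t: number, lagSeconds: number): number {
  return primaryStateAt(t - lagSeconds);
}

// Query 1 at t=10s on replica A (100ms lag): sees the primary at t=9.9s.
const q1 = replicaRead(10.0, 0.1);
// Query 2 at t=10.5s on replica B (2s lag): sees the primary at t=8.5s.
const q2 = replicaRead(10.5, 2.0);

// Although query 2 ran *later*, it observes an *earlier* database state.
console.log(q1, q2, q2 < q1); // 9 8 true
```

<p>Even though both reads individually return committed data, the pair of reads together violates the order the application expects.</p>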
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2R1p29j20c7szuRmY2Sjlp/52e4982c6c45e18c4d0c18835931b016/image3-34.png" />
            
            </figure><p>Formally, this is <a href="https://jepsen.io/consistency/models/read-committed">read committed</a> consistency since your queries will only see committed data, but there’s no other guarantee, not even that you can read your own writes. While read committed is a valid consistency model, it’s hard to reason about all of the possible race conditions the read committed model allows, making it difficult to write applications correctly.</p>
    <div>
      <h3>D1’s consistency model &amp; read replicas</h3>
      <a href="#d1s-consistency-model-read-replicas">
        
      </a>
    </div>
    <p>By default, D1 provides the <a href="https://jepsen.io/consistency/models/snapshot-isolation">snapshot isolation</a> that SQLite provides.</p><p>Snapshot isolation is a familiar consistency model that most developers find easy to use. We implement this consistency model in D1 by ensuring at most one active copy of the D1 database and routing all HTTP requests to that single database. While ensuring that there's at most one active copy of the D1 database is a gnarly distributed systems problem, it's one that we’ve solved by building D1 using <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a>. Durable Objects guarantee global uniqueness, so once we depend on Durable Objects, routing HTTP requests is easy: just send them to the D1 Durable Object.</p><p>This trick doesn't work if you have multiple active copies of the database since there's no 100% reliable way to look at a generic incoming HTTP request and route it to the same replica 100% of the time. Unfortunately, as we saw in the previous section's example, if we don't route related requests to the same replica 100% of the time, the best consistency model we can provide is read committed.</p><p>Given that it's impossible to route to a particular replica consistently, another approach is to route requests to any replica and ensure that the chosen replica responds to requests according to a consistency model that "makes sense" to the programmer. If we're willing to include a <a href="https://en.wikipedia.org/wiki/Lamport_timestamp">Lamport timestamp</a> in our requests, we can implement <a href="https://jepsen.io/consistency/models/sequential">sequential consistency</a> using any replica. The sequential consistency model has important properties like "<a href="https://jepsen.io/consistency/models/read-your-writes">read my own writes</a>" and "<a href="https://jepsen.io/consistency/models/writes-follow-reads">writes follow reads</a>," as well as a total ordering of writes. 
The total ordering of writes means that every replica will see transactions commit in the same order, which is exactly the behavior we want in a transactional system. Sequential consistency comes with the caveat that any individual entity in the system may be arbitrarily out of date, but that caveat is a feature for us because it allows us to consider replica lag when designing our APIs.</p><p>The idea is that if D1 gives applications a Lamport timestamp for every database query and those applications tell D1 the last Lamport timestamp they've seen, we can have each replica determine how to make queries work according to the sequential consistency model.</p><p>A robust, yet simple, way to implement sequential consistency with replicas is to:</p><ul><li><p>Associate a Lamport timestamp with every single request to the database. A monotonically increasing commit token works well for this.</p></li><li><p>Send all write queries to the primary database to ensure the total ordering of writes.</p></li><li><p>Send read queries to any replica, but have the replica delay servicing the query until the replica receives updates from the primary database that are later than the Lamport timestamp in the query.</p></li></ul><p>What's nice about this implementation is that it's fast in the common case where a read-heavy workload always goes to the same replica and will work even if requests get routed to different replicas.</p>
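<p>The three steps above can be sketched in a few lines. This is an illustrative in-memory model, not D1's actual implementation: the <code>Replica</code> class, numeric commit tokens, and the <code>applyCommit</code> replication hook are all assumptions made for the sketch.</p>

```typescript
// A replica that delays reads until it has applied commits at least as
// new as the Lamport timestamp (commit token) carried by the query.
class Replica {
  appliedToken = 0; // newest commit token this replica has applied
  private waiters: Array<{ token: number; resolve: () => void }> = [];

  // Called when asynchronous replication delivers a commit from the primary.
  applyCommit(token: number): void {
    this.appliedToken = token;
    this.waiters = this.waiters.filter((w) => {
      if (w.token <= token) {
        w.resolve(); // replica has caught up; release this read
        return false;
      }
      return true; // still waiting for a later commit
    });
  }

  // A read carrying the session's last-seen commit token: service it
  // only once the replica has caught up to that token.
  async read(sessionToken: number): Promise<number> {
    if (this.appliedToken < sessionToken) {
      await new Promise<void>((resolve) =>
        this.waiters.push({ token: sessionToken, resolve }),
      );
    }
    return this.appliedToken; // stand-in for the actual query result
  }
}

// Usage: a read at token 5 blocks until the replica applies commit 5.
const replica = new Replica();
const pending = replica.read(5); // session has already seen commit 5
replica.applyCommit(3); // replica still behind; the read keeps waiting
replica.applyCommit(5); // caught up; the read can now be serviced
pending.then((seen) => console.log(seen)); // 5
```

<p>The common case stays fast: a caught-up replica answers immediately, and only a lagging replica makes the read wait for replication to deliver the missing commits.</p>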
    <div>
      <h3><b><i>Sneak Preview:</i></b> bringing read replication to D1 with Sessions</h3>
      <a href="#sneak-preview-bringing-read-replication-to-d1-with-sessions">
        
      </a>
    </div>
    <p>To bring read replication to D1, we will expand the D1 API with a new concept: <b>Sessions</b>. A Session encapsulates all the queries representing one logical session for your application. For example, a Session might represent all requests coming from a particular web browser or all requests coming from a mobile app. If you use Sessions, your queries will use whatever copy of the D1 database makes the most sense for your request, be that the primary database or a nearby replica. D1's Sessions implementation will ensure sequential consistency for all queries in the Session.</p><p>Since the Sessions API changes D1's consistency model, developers must opt in to the new API. Existing D1 API methods are unchanged and will still have the same snapshot isolation consistency model as before. However, only queries made using the new Sessions API will use replicas.</p><p>Here’s an example of the D1 Sessions API:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    // When we create a D1 Session, we can continue where we left off
    // from a previous Session if we have that Session's last commit
    // token.  This Worker will return the commit token back to the
    // browser, so that it can send it back on the next request to
    // continue the Session.
    //
    // If we don't have a commit token, make the first query in this
    // session an "unconditional" query that will use the state of the
    // database at whatever replica we land on.
    const token = request.headers.get('x-d1-token') ?? 'first-unconditional'
    const session = env.DB.withSession(token)


    // Use this Session for all our Workers' routes.
    const response = await handleRequest(request, session)


    if (response.status === 200) {
      // Set the token so we can continue the Session in another request.
      response.headers.set('x-d1-token', session.latestCommitToken)
    }
    return response
  }
}


async function handleRequest(request: Request, session: D1DatabaseSession) {
  const { pathname } = new URL(request.url)


  if (pathname === '/api/orders/list') {
    // This statement is a read query, so it will execute on any
    // replica that has a commit equal or later than `token` we used
    // to create the Session.
    const { results } = await session.prepare('SELECT * FROM Orders').all()


    return Response.json(results)
  } else if (pathname === '/api/orders/add') {
    const order = await request.json&lt;Order&gt;()


    // This statement is a write query, so D1 will send the query to
    // the primary, which always has the latest commit token.
    await session
      .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
      .bind(order.orderName, order.customer, order.value)
      .run()


    // In order for the application to be correct, this SELECT
    // statement must see the results of the INSERT statement above.
    // The Session API keeps track of commit tokens for queries
    // within the session and will ensure that we won't execute this
    // query until whatever replica we're using has seen the results
    // of the INSERT.
    const { results } = await session
      .prepare('SELECT COUNT(*) FROM Orders')
      .all()


    return Response.json(results)
  }


  return new Response('Not found', { status: 404 })
}</code></pre>
            <p>D1’s implementation of Sessions makes use of commit tokens.  Commit tokens identify a particular committed query to the database.  Within a session, D1 will use commit tokens to ensure that queries are sequentially ordered.  In the example above, the D1 session ensures that the “SELECT COUNT(*)” query happens <i>after</i> the “INSERT” of the new order, <i>even if</i> we switch replicas between the awaits.  </p><p>There are several options for how to start a session in a Workers fetch handler.  <code>db.withSession(&lt;condition&gt;)</code> accepts these arguments:</p><table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span><b><code>condition</code> argument</b></span></p></td><td><p><span><b>Behavior</b></span></p></td></tr><tr><td><p><span><code>&lt;commit_token&gt;</code></span></p></td><td><p><span>(1) starts Session as of given commit token</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>first-unconditional</code></span></p></td><td><p><span>(1) if the first query is a read, read whatever the current replica has and use the commit token of that read as the basis for subsequent queries.  If the first query is a write, forward the query to the primary and use the commit token of the write as the basis for subsequent queries.</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>first-primary</code></span></p></td><td><p><span>(1) runs first query, read or write, against the primary</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>null</code> or missing argument</span></p></td><td><p><span>treated like <code>first-unconditional</code> </span></p></td></tr></tbody></table><p>It’s possible to have a session span multiple requests by “round-tripping” the commit token from the last query of the session and using it to start a new session.  
This enables individual user agents, like a web app or a mobile app, to make sure that all of the queries the user sees are sequentially consistent.</p><p>D1’s read replication will be built-in, will not incur extra usage or storage costs, and will require no replica configuration. Cloudflare will <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitor</a> an application’s D1 traffic and automatically create database replicas to spread user traffic across multiple servers in locations closer to users. Aligned with our serverless model, D1 developers shouldn’t worry about replica provisioning and management. Instead, developers should focus on designing applications for replication and data consistency tradeoffs.</p><p>We’re actively working on global read replication and realizing the above proposal (share feedback in the <a href="https://discord.cloudflare.com/">#d1 channel</a> on our Developer Discord). Until then, D1 GA includes several exciting new additions.</p>
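<p>On the client side, this “round-tripping” can be as small as a fetch wrapper that remembers the last commit token and replays it on the next request. A sketch: the <code>x-d1-token</code> header name follows the Worker example above, and the injectable <code>fetchImpl</code> parameter is just for illustration and testability.</p>

```typescript
// Sketch of a session-aware fetch wrapper for a web app. The last
// commit token returned by the Worker is replayed on the next request,
// so a new D1 Session can continue where the previous one left off.
let lastCommitToken: string | null = null;

async function sessionFetch(
  path: string,
  init: RequestInit = {},
  fetchImpl: typeof fetch = fetch, // injectable for testing
): Promise<Response> {
  const headers = new Headers(init.headers);
  if (lastCommitToken !== null) {
    headers.set("x-d1-token", lastCommitToken);
  }
  const response = await fetchImpl(path, { ...init, headers });

  // Remember the newest commit token for the next request in this session.
  const token = response.headers.get("x-d1-token");
  if (token !== null) {
    lastCommitToken = token;
  }
  return response;
}
```

<p>The first request carries no token (the Worker falls back to <code>first-unconditional</code>); every later request carries the newest token the browser has seen, preserving sequential consistency across requests.</p>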
    <div>
      <h3>Check out D1 GA</h3>
      <a href="#check-out-d1-ga">
        
      </a>
    </div>
    <p>Since D1’s open beta in October 2023, we’ve focused on the reliability, scalability, and developer experience demanded of critical services. We’ve invested in several new features that allow developers to build and debug applications faster with D1.</p><p><b>Build bigger with larger databases</b>We’ve listened to developers who requested larger databases. D1 now supports up to 10 GB databases, with 50K databases on the Workers Paid plan. With D1’s horizontal scaleout, applications can model database-per-business-entity use cases. Since beta, new D1 databases process 40x more requests than D1 alpha databases in a given period.</p><p><b>Import &amp; export bulk data</b>Developers import and export data for multiple reasons:</p><ul><li><p>Database migration testing to/from different database systems</p></li><li><p>Data copies for local development or testing</p></li><li><p>Manual backups for custom requirements like compliance</p></li></ul><p>While you could execute SQL files against D1 before, we’re improving <code>wrangler d1 execute --file=&lt;filename&gt;</code> to ensure large imports are atomic operations, never leaving your database in a halfway state. <code>wrangler d1 execute</code> also now defaults to local-first to protect your remote production database.</p><p>To import our <a href="https://github.com/cloudflare/d1-northwind/tree/main">Northwind Traders</a> demo database, you can download the <a href="https://github.com/cloudflare/d1-northwind/blob/main/db/schema.sql">schema</a> &amp; <a href="https://github.com/cloudflare/d1-northwind/blob/main/db/data.sql">data</a> and execute the SQL files.</p>
            <pre><code>npx wrangler d1 create northwind-traders

# omit --remote to run on a local database for development
npx wrangler d1 execute northwind-traders --remote --file=./schema.sql

npx wrangler d1 execute northwind-traders --remote --file=./data.sql</code></pre>
            <p>D1 database data &amp; schema, schema-only, or data-only can be exported to a SQL file using:</p>
            <pre><code># database schema &amp; data
npx wrangler d1 export northwind-traders --remote --output=./database.sql

# single table schema &amp; data
npx wrangler d1 export northwind-traders --remote --table='Employee' --output=./table.sql

# database schema only
npx wrangler d1 export &lt;database_name&gt; --remote --output=./database-schema.sql --no-data=true</code></pre>
            <p><b>Debug query performance</b>Understanding SQL query performance and debugging slow queries is a crucial step for production workloads. We’ve added the experimental <a href="https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights"><code>wrangler d1 insights</code></a> to help developers analyze query performance metrics also available via <a href="https://developers.cloudflare.com/d1/observability/metrics-analytics/">GraphQL API</a>.</p>
            <pre><code># To find top 10 queries by average execution time:
npx wrangler d1 insights &lt;database_name&gt; --sort-type=avg --sort-by=time --count=10</code></pre>
            <p><b>Developer tooling</b>Various <a href="https://developers.cloudflare.com/d1/reference/community-projects">community developer projects</a> support D1. New additions include <a href="https://developers.cloudflare.com/d1/tutorials/d1-and-prisma-orm">Prisma ORM</a>, which, as of version 5.12.0, supports Workers and D1.</p>
    <div>
      <h3>Next steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>The features available now with GA and our global read replication design are just the start of delivering the SQL database needs for developer applications. If you haven’t yet used D1, you can <a href="https://developers.cloudflare.com/d1/get-started/">get started</a> right now, visit D1’s <a href="https://developers.cloudflare.com/d1/">developer documentation</a> to spark some ideas, or <a href="https://discord.cloudflare.com/">join the #d1 channel</a> on our Developer Discord to talk to other D1 developers and our product engineering team.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dTCMeWMaQjhBd1SM8hM6O/2cbe9ec1a7a4fb0c061afe0e1c0bf666/image1-35.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Database]]></category>
            <guid isPermaLink="false">6y8LbpExPriYEVMgzCDp4B</guid>
            <dc:creator>Vy Ton</dc:creator>
            <dc:creator>Justin Mazzola Paluska</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making state easy with D1 GA, Hyperdrive, Queues and Workers Analytics Engine updates]]></title>
            <link>https://blog.cloudflare.com/making-full-stack-easier-d1-ga-hyperdrive-queues/</link>
            <pubDate>Mon, 01 Apr 2024 13:00:06 GMT</pubDate>
            <description><![CDATA[ We kick off the week with announcements that help developers build stateful applications on top of Cloudflare, including making D1, our SQL database and Hyperdrive, our database accelerating service, generally available ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BKrpfqvHnl6yaHdXXsCoc/70280206c43fc4ecfa026968440f52f0/image4-31.png" />
            
            </figure>
    <div>
      <h3>Making full-stack easier</h3>
      <a href="#making-full-stack-easier">
        
      </a>
    </div>
    <p>Today might be April Fools, and while we like to have fun as much as anyone else, we like to use this day for serious announcements. In fact, as of today, there are over 2 million developers building on top of Cloudflare’s platform — that’s no joke!</p><p>To kick off this Developer Week, we’re flipping the big “production ready” switch on three products: <a href="https://developers.cloudflare.com/d1/">D1, our serverless SQL database</a>; <a href="https://developers.cloudflare.com/hyperdrive/">Hyperdrive</a>, which makes your <i>existing</i> databases feel like they’re distributed (and faster!); and <a href="https://developers.cloudflare.com/analytics/analytics-engine/">Workers Analytics Engine</a>, our time-series database.</p><p>We’ve been on a mission to allow developers to bring their entire stack to Cloudflare for some time, but what might an application built on Cloudflare look like?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5D3F21rYXhLv0bI6FID3Kc/4b0ca6dfc52e168a852599345e111a02/image6-11.png" />
            
            </figure><p>The diagram itself shouldn’t look too different from the tools you’re already familiar with: you want a <a href="https://developers.cloudflare.com/d1/">database</a> for your core user data. <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">Object storage</a> for assets and user content. Maybe a <a href="https://developers.cloudflare.com/queues/">queue</a> for background tasks, like email or upload processing. A <a href="https://developers.cloudflare.com/kv/">fast key-value store</a> for runtime configuration. Maybe even a <a href="https://developers.cloudflare.com/analytics/analytics-engine/">time-series database</a> for aggregating user events and/or performance data. And that’s before we get to <a href="https://developers.cloudflare.com/workers-ai/">AI</a>, which is increasingly becoming a core part of many applications in search, recommendation and/or image analysis tasks (at the very least!).</p><p>Yet, without having to think about it, this architecture runs on Region: Earth, which means it’s scalable, reliable and fast — all out of the box.</p>
    <div>
      <h3>D1 GA: Production Ready</h3>
      <a href="#d1-ga-production-ready">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FBwcKFjSHTCL2LcJRtNCo/46c6a403e7f8c743ac8a4dff252d85e4/image2-35.png" />
            
            </figure><p>Your core database is one of the most critical pieces of your infrastructure. It needs to be ultra-reliable. It can’t lose data. It needs to scale. And so we’ve been heads down over the last year getting the pieces into place to make sure D1 is production-ready, and we’re extremely excited to say that D1 — our <a href="https://www.cloudflare.com/developer-platform/products/d1/">global, serverless SQL database</a> — is now Generally Available.</p><p>The GA for D1 lands some of the most asked-for features, including:</p><ul><li><p>Support for 10GB databases — and 50,000 databases per account;</p></li><li><p>New data export capabilities; and</p></li><li><p>Enhanced query debugging (we call it “D1 Insights”) that allows you to understand which queries are consuming the most time or cost, or are just plain inefficient…  </p></li></ul><p>… to empower developers to build production-ready applications with D1 to meet all their relational SQL needs. And importantly, in an era where the concept of a “<a href="https://www.cloudflare.com/plans/free/">free plan</a>” or “hobby plan” is seemingly at risk, we have no intention of removing the free tier for D1 or reducing the <i>25 billion row reads</i> included in the $5/mo Workers Paid plan:</p><table><colgroup><col></col><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Rows Read</span></p></td><td><p><span>Rows Written</span></p></td><td><p><span>Storage</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>First 25 billion / month included</span><span><br /></span><span><br /></span><span>+ $0.001 / million rows</span></p></td><td><p><span>First 50 million / month included</span><span><br /></span><span><br /></span><span>+ $1.00 / million rows</span></p></td><td><p><span>First 5 GB included</span></p><br /><p><span>+ $0.75 / GB-mo</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Free</span></p></td><td><p><span>5 million / day</span></p></td><td><p><span>100,000 / day</span></p></td><td><p><span>5 GB (total)</span></p></td></tr></tbody></table><p><i>For those who’ve been following D1 since the start: this is the same pricing we announced at </i><a href="/d1-open-beta-is-here"><i>open beta</i></a>.</p><p>But things don’t just stop at GA: we have some major new features lined up for D1, including global read replication, even larger databases, more <a href="https://developers.cloudflare.com/d1/reference/time-travel/">Time Travel</a> capabilities that will allow you to branch your database, and new APIs for dynamically querying and/or creating new databases-on-the-fly from within a Worker.</p><p>D1’s read replication will automatically deploy read replicas as needed to get data closer to your users, without you having to spin up replicas, manage scaling, or run into consistency (replication lag) issues. Here’s a sneak preview of what D1’s upcoming Replication API looks like:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    const {pathname} = new URL(request.url);
    let resp = null;

    // Continue the session using the commit token round-tripped from a
    // previous request, or start fresh with the "first-unconditional" mode.
    const token = request.headers.get("x-d1-token") ?? "first-unconditional";
    const session = env.DB.withSession(token);

    // Handle requests within the session.
    if (pathname === "/api/orders/list") {
      // This statement is a read query, so it will work against any
      // replica that has a commit equal or later than `token`.
      const { results } = await session.prepare("SELECT * FROM Orders").all();
      resp = Response.json(results);
    } else if (pathname === "/api/orders/add") {
      const order = await request.json();

      // This statement is a write query, so D1 will send the query to
      // the primary, which always has the latest commit token.
      await session.prepare("INSERT INTO Orders VALUES (?, ?, ?)")
        .bind(order.orderName, order.customer, order.value)
        .run();

      // In order for the application to be correct, this SELECT
      // statement must see the results of the INSERT statement above.
      //
      // D1's new Session API keeps track of commit tokens for queries
      // within the session and will ensure that we won't execute this
      // query until whatever replica we're using has seen the results
      // of the INSERT.
      const { results } = await session.prepare("SELECT COUNT(*) FROM Orders")
        .all();
      resp = Response.json(results);
    } else {
      return new Response("Not found", { status: 404 });
    }

    // Set the token so we can continue the session in another request.
    resp.headers.set("x-d1-token", session.latestCommitToken);
    return resp;
  }
}</code></pre>
            <p>Importantly, we will give developers the ability to maintain session-based consistency, so that users still see their own changes reflected, whilst still benefiting from the performance and latency gains that replication can bring.</p><p>You can learn more about how D1’s read replication works under the hood <a href="/building-d1-a-global-database/">in our deep-dive post</a>, and if you want to start building on D1 today, <a href="https://developers.cloudflare.com/d1/">head to our developer docs</a> to create your first database.</p>
    <div>
      <h3>Hyperdrive: GA</h3>
      <a href="#hyperdrive-ga">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47WBGHvqFpRkza2ldA5RBi/7f7f47055e1f4f066e213b88e9e98737/image1-37.png" />
            
            </figure><p>We launched Hyperdrive into open beta <a href="/hyperdrive-making-regional-databases-feel-distributed">last September during Birthday Week</a>, and it’s now Generally Available — or in other words, battle-tested and production-ready.</p><p>If you’re not caught up on what Hyperdrive is, it’s designed to make the centralized databases you already have feel like they’re global. We use our <a href="https://www.cloudflare.com/network/">global network</a> to get faster routes to your database, keep connection pools primed, and cache your most frequently run queries as close to users as possible.</p><p>Importantly, Hyperdrive supports the most popular drivers and ORM (Object Relational Mapper) libraries out of the box, so you don’t have to re-learn or re-write your queries:</p>
            <pre><code>// Use the popular 'pg' driver? Easy. Hyperdrive just exposes a connection string
// to your Worker.
const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
await client.connect();

// Prefer using an ORM like Drizzle? Use it with Hyperdrive too. (Shown with a
// separately named client so the two alternatives don't redeclare the same const.)
// https://orm.drizzle.team/docs/get-started-postgresql#node-postgres
const pgClient = new Client({ connectionString: env.HYPERDRIVE.connectionString });
await pgClient.connect();
const db = drizzle(pgClient);</code></pre>
            <p>But the work on Hyperdrive doesn’t stop just because it’s now “GA”. Over the next few months, we’ll be bringing support for the <i>other</i> most widely deployed database engine there is: MySQL. We’ll also be bringing support for connecting to databases inside private networks (including cloud VPC networks) via <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/">Cloudflare Tunnel</a> and <a href="https://developers.cloudflare.com/magic-wan/">Magic WAN</a>. On top of that, we plan to bring more configurability around invalidation and caching strategies, so that you can make more fine-grained decisions around performance vs. data freshness.</p><p>As we thought about how we wanted to price Hyperdrive, we realized that it just didn’t seem right to charge for it. After all, the performance benefits from Hyperdrive are not only significant, but essential to connecting to traditional database engines. Without Hyperdrive, every request would pay the latency overhead of 6+ round-trips just to connect to &amp; query your database.</p><p>And so we’re happy to announce that <b>for any developer on a Workers Paid plan, Hyperdrive is free</b>. That includes both query caching and connection pooling, as well as the ability to create multiple Hyperdrives — to separate different applications, prod vs. staging, or to provide different configurations (cached vs. 
uncached, for example).</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Price per query</span></p></td><td><p><span>Connection Pooling</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>$0 </span></p></td><td><p><span>$0</span></p></td></tr></tbody></table><p>To get started with Hyperdrive, <a href="https://developers.cloudflare.com/hyperdrive/">head over to the docs</a> to learn how to connect your existing database and start querying it from your Workers.</p>
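<p>If you’re using Wrangler, the binding that exposes <code>env.HYPERDRIVE</code> in the driver examples above comes from your Worker’s configuration. A minimal sketch (the <code>id</code> is a placeholder: use the ID that <code>wrangler hyperdrive create</code> returns):</p>

```toml
# Hypothetical wrangler.toml fragment: bind a Hyperdrive config as env.HYPERDRIVE
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"  # placeholder: the ID from `wrangler hyperdrive create`
```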
    <div>
      <h3>Queues: Pull From Anywhere</h3>
      <a href="#queues-pull-from-anywhere">
        
      </a>
    </div>
    <p>The task queue is an increasingly critical part of building a modern, full-stack application, and this is what we had in mind when we <a href="/cloudflare-queues-open-beta">originally announced</a> the open beta of <a href="https://developers.cloudflare.com/queues/">Queues</a>. We’ve since been working on several major Queues features, and we’re launching two of them this week: pull-based consumers and new message delivery controls.</p><p>Any HTTP-speaking client <a href="https://developers.cloudflare.com/queues/reference/pull-consumers/">can now pull messages from a queue</a>: call the new /pull endpoint on a queue to request a batch of messages, and call the /ack endpoint to acknowledge each message (or batch of messages) as you successfully process them:</p>
            <pre><code># Pull and acknowledge messages from a Queue using any HTTP client
$ curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull" -X POST --data '{"visibilityTimeout":10000,"batchSize":100}' \
     -H "Authorization: Bearer ${QUEUES_TOKEN}" \
     -H "Content-Type:application/json"

# Ack the messages you processed successfully; mark others to be retried.
$ curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack" -X POST --data '{"acks":["lease-id-1", "lease-id-2"],"retries":["lease-id-100"]}' \
     -H "Authorization: Bearer ${QUEUES_TOKEN}" \
     -H "Content-Type:application/json"</code></pre>
            <p>A pull-based consumer can run anywhere, allowing you to run queue consumers alongside your existing legacy cloud infrastructure. Teams inside Cloudflare adopted this early on, with one use case focused on writing device telemetry to a queue from our <a href="https://www.cloudflare.com/network/">310+ data centers</a> and consuming it within some of our back-of-house infrastructure running on Kubernetes. Importantly, our globally distributed queue infrastructure means that messages are retained within the queue until the consumer is ready to process them.</p>
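<p>Once a batch is pulled, each message carries a lease ID that you either acknowledge or mark for retry. Purely as an illustration of the <code>/ack</code> payload shape shown above, a hypothetical helper that splits processing results into acks and retries:</p>

```javascript
// Hypothetical helper: after processing a pulled batch, split lease IDs into
// acks (succeeded) and retries (failed) for the body of a POST to /ack.
function buildAckBody(results) {
  return {
    acks: results.filter((r) => r.ok).map((r) => r.leaseId),
    retries: results.filter((r) => !r.ok).map((r) => r.leaseId),
  };
}
```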
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UUkrE3bqqIdQiemV49Hal/496c2d539b366a794d58479c99b1c9ec/image5-19.png" />
            
            </figure><p>Queues also <a href="https://developers.cloudflare.com/queues/reference/batching-retries/#delay-messages">now supports delaying messages</a>, both when sending to a queue and when marking a message for retry. This can be useful to queue (pun intended) tasks for the future, as well as to apply a backoff when an upstream API or piece of infrastructure has rate limits that require you to pace how quickly you process messages.</p>
            <pre><code>// Apply a delay to a message when sending it
await env.YOUR_QUEUE.send(msg, { delaySeconds: 3600 })

// Delay a message (or a batch of messages) when marking it for retry
for (const msg of batch.messages) {
	msg.retry({delaySeconds: 300})
} </code></pre>
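<p>One common use for per-message delays is exponential backoff against a rate-limited upstream. A minimal sketch (the base delay and cap here are arbitrary choices for illustration, not Queues defaults):</p>

```javascript
// Exponential backoff with a cap: 30s, 60s, 120s, ... up to one hour.
// The base and cap are illustrative values, not Queues defaults.
function backoffSeconds(attempts, base = 30, cap = 3600) {
  return Math.min(cap, base * 2 ** attempts);
}

// Inside a consumer this might look like:
//   msg.retry({ delaySeconds: backoffSeconds(attemptCount) });
```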
            <p>We’ll also be bringing substantially increased per-queue throughput over the coming months on the path to getting Queues to GA. It’s important to us that Queues is <i>extremely</i> reliable: lost or dropped messages mean that a user doesn’t get their order confirmation email, their password reset notification, or their uploads processed — each of those is user-impacting and hard to recover from.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RhxWjKGRmoJtgQ4toybvY/57469d1ee721096a3c2b7551bbd277a4/image3-35.png" />
            
            </figure>
    <div>
      <h3>Workers Analytics Engine is GA</h3>
      <a href="#workers-analytics-engine-is-ga">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/analytics/analytics-engine/">Workers Analytics Engine</a> provides unlimited-cardinality analytics at scale, via a built-in API to write data points from Workers, and a SQL API to query that data.</p><p>Workers Analytics Engine is backed by the same ClickHouse-based system we have depended on for years at Cloudflare. We use it ourselves to observe the health of our own services, to capture product usage data for billing, and to answer questions about specific customers’ usage patterns. At least one data point is written to this system on nearly every request to Cloudflare’s network. Workers Analytics Engine lets you build your own custom analytics using this same infrastructure, while we manage the hard parts for you.</p><p>Since <a href="/workers-analytics-engine">launching in beta</a>, developers have started depending on Workers Analytics Engine for these same use cases and more, from large enterprises to open-source projects like <a href="https://github.com/benvinegar/counterscale/">Counterscale</a>. Workers Analytics Engine has been operating at production scale with mission-critical workloads for years — but we hadn’t shared anything about pricing, until today.</p><p>We are keeping Workers Analytics Engine pricing simple, and based on two metrics:</p><ol><li><p><b>Data points written</b> — every time you call <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/#3-write-data-from-your-worker">writeDataPoint()</a> in a Worker, this counts as one data point written. Every data point costs the same amount — unlike other platforms, there is no penalty for adding dimensions or cardinality, and no need to predict what the size and cost of a compressed data point might be.</p></li><li><p><b>Read queries</b> — every time you post to the Workers Analytics Engine <a href="https://developers.cloudflare.com/analytics/analytics-engine/sql-api/">SQL API</a>, this counts as one read query. 
Every query costs the same amount — unlike other platforms, there is no penalty for query complexity, and no need to reason about the number of rows of data that will be read by each query.</p></li></ol><p>Both the Workers Free and Workers Paid plans will include an allocation of data points written and read queries, with pricing for additional usage as follows:</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Data points written</span></p></td><td><p><span>Read queries</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>10 million included per month</span></p><p><span><br /></span><span>+$0.25 per additional million</span></p></td><td><p><span>1 million included per month</span></p><p><span><br /></span><span>+$1.00 per additional million</span></p></td></tr><tr><td><p><span>Workers Free</span></p></td><td><p><span>100,000 included per day</span></p></td><td><p><span>10,000 included per day</span></p></td></tr></tbody></table><p>With this pricing, you can answer, “how much will Workers Analytics Engine cost me?” by counting the number of times you call a function in your Worker, and how many times you make a request to an HTTP API endpoint. Napkin math, rather than spreadsheet math.</p><p>This pricing will be made available to everyone in the coming months. Between now and then, Workers Analytics Engine continues to be available at no cost. You can <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/#limits">start writing data points from your Worker today</a> — it takes just a few minutes and less than 10 lines of code to start capturing data. We’d love to hear what you think.</p>
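<p>That napkin math really is a few lines. A sketch using the Workers Paid numbers from the table above:</p>

```javascript
// Estimate monthly Workers Analytics Engine overage cost on Workers Paid,
// using the included allocations and rates from the pricing table above:
// 10M data points and 1M read queries included, then $0.25/M and $1.00/M.
function monthlyCostUSD(dataPointsWritten, readQueries) {
  const M = 1_000_000;
  const writeOverage = (Math.max(0, dataPointsWritten - 10 * M) / M) * 0.25;
  const readOverage = (Math.max(0, readQueries - 1 * M) / M) * 1.0;
  return writeOverage + readOverage;
}

// e.g. 14M data points and 3M read queries → $1.00 + $2.00 = $3.00
```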
    <div>
      <h3>The week is just getting started</h3>
      <a href="#the-week-is-just-getting-started">
        
      </a>
    </div>
    <p>Tune in to what we have in store for you tomorrow on our second day of Developer Week. If you have questions or want to show off something cool you already built, please join our developer <a href="https://discord.cloudflare.com/"><i>Discord</i></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">5O8kPvrc2dyHIwmf2c0shv</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
        </item>
        <item>
            <title><![CDATA[LangChain Support for Workers AI, Vectorize and D1]]></title>
            <link>https://blog.cloudflare.com/langchain-support-for-workers-ai-vectorize-and-d1/</link>
            <pubDate>Wed, 31 Jan 2024 14:00:12 GMT</pubDate>
            <description><![CDATA[ During Developer Week, we announced LangChain support for Cloudflare Workers. Since then, we’ve been working with the LangChain team on deeper integration of many tools across Cloudflare’s developer platform and are excited to share what we’ve been up to ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vfmv2IUCSWhFxb0iIhRwp/d0ac73a938febe9bc24538979871acba/X2uuTU5jqOf4kskV9IUy6-EFJtW1QL7NCTeaIK1Ezs29fHv5rxii32xZ_eAu-9IHMQQhzevrxEeUR4Zq5_C0Y_HmIciI-GZaj-RbEnRI4vnshmYAV6jymeq1KXQr.png" />
            
            </figure><p>During Developer Week, we announced <a href="/langchain-and-cloudflare/">LangChain support for Cloudflare Workers</a>. LangChain is an open-source framework that allows developers to create powerful AI workflows by combining different models, providers, and plugins using a declarative API — and it dovetails perfectly with Workers for creating full stack, AI-powered applications.</p><p>Since then, we’ve been working with the LangChain team on deeper integration of many tools across Cloudflare’s developer platform and are excited to share what we’ve been up to.</p><p>Today, we’re announcing five new key integrations with LangChain:</p><ol><li><p><a href="https://js.langchain.com/docs/integrations/chat/cloudflare_workersai">Workers AI Chat Models</a>: This allows you to use <a href="https://developers.cloudflare.com/workers-ai/models/text-generation/">Workers AI text generation</a> to power your chat model within your LangChain.js application.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/llms/cloudflare_workersai">Workers AI Instruct Models</a>: This allows you to use Workers AI models fine-tuned for instruct use cases, such as Mistral and CodeLlama, inside your LangChain.js application.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/text_embedding/cloudflare_ai">Text Embeddings Models</a>: If you’re working with text embeddings, you can now use <a href="https://developers.cloudflare.com/workers-ai/models/text-embeddings/">Workers AI text embeddings</a> with LangChain.js.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/vectorstores/cloudflare_vectorize">Vectorize Vector Store</a>: When working with a vector database and LangChain.js, you now have the option of using <a href="https://developers.cloudflare.com/vectorize/">Vectorize</a>, Cloudflare’s powerful vector database.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/chat_memory/cloudflare_d1">Cloudflare 
D1-Backed Chat Memory</a>: For longer-term persistence across chat sessions, you can swap out LangChain’s default in-memory chatHistory that backs chat memory classes like BufferMemory for a <a href="https://developers.cloudflare.com/d1/">Cloudflare D1 instance</a>.</p></li></ol><p>With the addition of these five Cloudflare AI tools into LangChain, developers have powerful new primitives to integrate into new and existing AI applications. With LangChain’s expressive tooling for mixing and matching AI tools and models, you can use Vectorize, Cloudflare AI’s text embedding and generation models, and <a href="https://www.cloudflare.com/developer-platform/products/d1/">Cloudflare D1 </a>to build a fully-featured AI application in just a few lines of code.</p><blockquote><p>This is a full persistent chat app powered by an LLM in 10 lines of code–deployed to <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@Cloudflare</a> Workers, powered by <a href="https://twitter.com/LangChainAI?ref_src=twsrc%5Etfw">@LangChainAI</a> and <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@Cloudflare</a> D1.</p><p>You can even pass in a unique sessionId and have completely user/session-specific conversations 🤯 <a href="https://t.co/le9vbMZ7Mc">https://t.co/le9vbMZ7Mc</a> <a href="https://t.co/jngG3Z7NQ6">pic.twitter.com/jngG3Z7NQ6</a></p><p>— Kristian Freeman (@kristianf_) <a href="https://twitter.com/kristianf_/status/1704592544099631243?ref_src=twsrc%5Etfw">September 20, 2023</a></p></blockquote>
    <div>
      <h3>Getting started with a Cloudflare + LangChain + Nuxt Multi-source Chatbot template</h3>
      <a href="#getting-started-with-a-cloudflare-langchain-nuxt-multi-source-chatbot-template">
        
      </a>
    </div>
    <p>You can get started by using LangChain’s Cloudflare Chatbot template: <a href="https://github.com/langchain-ai/langchain-cloudflare-nuxt-template">https://github.com/langchain-ai/langchain-cloudflare-nuxt-template</a></p><p>This application shows how various pieces of Cloudflare Workers AI fit together and expands on the concept of <a href="https://developers.cloudflare.com/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/">retrieval augmented generation (RAG)</a> to build a conversational retrieval system that can route between multiple data sources, choosing the one most relevant to the incoming question. This method helps cut down on distraction from off-topic documents getting pulled in by a vector store’s similarity search, which could occur if only a single database were used.</p><p>The base version runs entirely on the Cloudflare Workers AI stack with the Llama 2-7B model. It uses:</p><ul><li><p>A chat variant of Llama 2-7B run on Cloudflare Workers AI</p></li><li><p>A Cloudflare Workers AI embeddings model</p></li><li><p>Two different Cloudflare Vectorize DBs (though you could add more!)</p></li><li><p>Cloudflare Pages for hosting</p></li><li><p>LangChain.js for orchestration</p></li><li><p>Nuxt + Vue for the frontend</p></li></ul><p>The two default data sources are <a href="https://www.cloudflare.com/resources/assets/slt3lc6tev37/3HWObubm6fybC0FWUdFYAJ/5d5e3b0a4d9c5a7619984ed6076f01fe/Cloudflare_for_Campaigns_Security_Guide.pdf">a PDF detailing some of Cloudflare's features</a> and <a href="https://lilianweng.github.io/posts/2023-06-23-agent/">a blog post by Lilian Weng at OpenAI</a> that talks about autonomous agents.</p><p>The bot will classify incoming questions as being about Cloudflare, AI, or neither, and draw on the corresponding data source for more targeted results. Everything is fully customizable: you can change the content of the ingested data, the models used, and all prompts!</p><p>And if you have access to the <a href="https://smith.langchain.com/">LangSmith</a> beta, the app also has tracing set up so that you can easily <a href="https://smith.langchain.com/public/24807f4a-4335-497e-bfbf-3a1de019b22e/r">see and debug each step</a> in the application.</p>
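<p>In the template, that routing step is itself a model call. Purely to illustrate the branching, here is a toy keyword-based stand-in (the data source names are hypothetical, and the real template asks the LLM to classify the question):</p>

```javascript
// Toy stand-in for the template's LLM-based router: pick which data source to
// query based on the incoming question. The real template has the model
// classify; this keyword heuristic just illustrates the control flow.
function routeQuestion(question) {
  const q = question.toLowerCase();
  if (/(cloudflare|workers|cdn)/.test(q)) return "cloudflare-docs";
  if (/(\bai\b|agent|embedding|llm)/.test(q)) return "ai-knowledge";
  return "no-retrieval";
}
```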
    <div>
      <h3>We can’t wait to see what you build</h3>
      <a href="#we-cant-wait-to-see-what-you-build">
        
      </a>
    </div>
    <p>We can't wait to see what you all build with LangChain and Cloudflare. Come tell us about it in <a href="https://discord.cloudflare.com/">Discord</a> or on our <a href="https://community.cloudflare.com/">community forums</a>.</p> ]]></content:encoded>
            <category><![CDATA[LangChain]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5JnxNQ7W8w3O5d2wAroT18</guid>
            <dc:creator>Ricky Robinett</dc:creator>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Jacob Lee (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Birthday Week recap: everything we announced — plus an AI-powered opportunity for startups]]></title>
            <link>https://blog.cloudflare.com/birthday-week-2023-wrap-up/</link>
            <pubDate>Mon, 02 Oct 2023 13:00:24 GMT</pubDate>
            <description><![CDATA[ Need a recap or refresher on all the big Birthday Week news this week? This recap has you covered ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xrCvUVi0SgWrjDPrwi77i/2097296d3db2c219c5e0e904fe19adfb/image1-45.png" />
            
            </figure><p>This year, Cloudflare officially became a teenager, turning 13 years old. We celebrated this milestone with a series of announcements that benefit both our customers and the Internet community.</p><p>From developing applications in the age of AI to securing against the most advanced attacks that are yet to come, Cloudflare is proud to provide the tools that help our customers stay one step ahead.</p><p>We hope you’ve had a great time following along and for anyone looking for a recap of everything we launched this week, here it is:</p>
    <div>
      <h3>Monday</h3>
      <a href="#monday">
        
      </a>
    </div>
    <table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span>What</span></p></td><td><p><span>In a sentence…</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/switching-cloudflare-cut-your-network-carbon-emissions-sbti/"><span>Switching to Cloudflare can cut emissions by up to 96%</span></a></p></td><td><p><span>Switching enterprise network services from on-prem to Cloudflare can cut related carbon emissions by up to 96%. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/traffic-transparency-unleashing-cloudflare-trace/"><span>Cloudflare Trace</span></a></p></td><td><p><span>Use Cloudflare Trace to see which rules and settings are invoked when an HTTP request for your site goes through our network. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/cloudflare-fonts-enhancing-website-privacy-speed/"><span>Cloudflare Fonts</span></a></p></td><td><p><span>Introducing Cloudflare Fonts. Enhance privacy and performance for websites using Google Fonts by loading fonts directly from the Cloudflare network. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/meet-traffic-manager/"><span>How Cloudflare intelligently routes traffic</span></a></p></td><td><p><span>Technical deep dive that explains how Cloudflare uses machine learning to intelligently route traffic through our vast network. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/cloudflare-stream-low-latency-hls-open-beta/"><span>Low Latency Live Streaming</span></a></p></td><td><p><span>Cloudflare Stream’s LL-HLS support is now in open beta. You can deliver video to your audience faster, reducing the latency a viewer may experience on their player to as little as 3 seconds. 
</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/permissions-best-practices/"><span>Account permissions for all</span></a></p></td><td><p><span>Cloudflare account permissions are now available to all customers, not just Enterprise. In addition, we’ll show you how you can use them and best practices. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/incident-alerts/"><span>Incident Alerts</span></a></p></td><td><p><span>Customers can subscribe to Cloudflare Incident Alerts and choose when to get notified based on affected products and level of impact. </span></p></td></tr></tbody></table>
    <div>
      <h3>Tuesday</h3>
      <a href="#tuesday">
        
      </a>
    </div>
    <table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span>What</span></p></td><td><p><span>In a sentence…</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/welcome-to-connectivity-cloud/"><span>Welcome to the connectivity cloud</span></a></p></td><td><p><span>Cloudflare is the world’s first connectivity cloud — the modern way to connect and protect your cloud, networks, applications and users. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/amazon-2bn-ipv4-tax-how-avoid-paying/"><span>Amazon’s $2bn IPv4 tax — and how you can avoid paying it</span></a><span> </span></p></td><td><p><span>Amazon will begin taxing their customers $43 for IPv4 addresses, so Cloudflare will give those $43 back in the form of credits to bypass that tax. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/sippy-incremental-migration-s3-r2/"><span>Sippy</span></a></p><br /></td><td><p><span>Minimize egress fees by using Sippy to incrementally migrate your data from AWS to R2. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/merging-images-and-image-resizing/"><span>Cloudflare Images</span></a></p></td><td><p><span>All Image Resizing features will be available under Cloudflare Images and we’re simplifying pricing to make it more predictable and reliable.  </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/traffic-anomalies-notifications-radar/"><span>Traffic anomalies and notifications with Cloudflare Radar</span></a></p></td><td><p><span>Cloudflare Radar will be publishing anomalous traffic events for countries and Autonomous Systems (ASes).</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/detecting-internet-outages/"><span>Detecting Internet outages</span></a></p></td><td><p><span>Deep dive into how Cloudflare detects Internet outages, the challenges that come with it, and our approach to overcome these problems. 
</span></p></td></tr></tbody></table>
    <div>
      <h3>Wednesday</h3>
      <a href="#wednesday">
        
      </a>
    </div>
    <table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span>What</span></p></td><td><p><span>In a sentence…</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/best-place-region-earth-inference/"><span>The best place on Region: Earth for inference</span></a></p></td><td><p><span>Now available: Workers AI, a serverless GPU cloud for AI, Vectorize so you can build your own vector databases, and AI Gateway to help manage costs and observability of your AI applications. </span></p><br /><p><span>Cloudflare delivers the best infrastructure for next-gen AI applications, supported by partnerships with NVIDIA, Microsoft, Hugging Face, Databricks, and Meta.</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/workers-ai/"><span>Workers AI </span></a></p></td><td><p><span>Launching Workers AI — AI inference as a service platform, empowering developers to run AI models with just a few lines of code, all powered by our global network of GPUs. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/partnering-with-hugging-face-deploying-ai-easier-affordable/"><span>Partnering with Hugging Face </span></a></p></td><td><p><span>Cloudflare is partnering with Hugging Face to make AI models more accessible and affordable to users. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/vectorize-vector-database-open-beta/"><span>Vectorize</span></a></p></td><td><p><span>Cloudflare’s vector database, designed to allow engineers to build full-stack, AI-powered applications entirely on Cloudflare's global network — available in Beta. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/announcing-ai-gateway/"><span>AI Gateway</span></a></p></td><td><p><span>AI Gateway helps developers have greater control and visibility in their AI apps, so that you can focus on building without worrying about observability, reliability, and scaling. 
AI Gateway handles the things that nearly all AI applications need, saving you engineering time so you can focus on what you're building.</span></p><br /><p><span> </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/webgpu-in-workers/"><span>You can now use WebGPU in Cloudflare Workers</span></a></p></td><td><p><span>Developers can now use WebGPU in Cloudflare Workers. Learn more about why WebGPUs are important, why we’re offering them to customers, and what’s next. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/ai-companies-building-cloudflare/"><span>What AI companies are building with Cloudflare</span></a></p></td><td><p><span>Many AI companies are using Cloudflare to build next generation applications. Learn more about what they’re building and how Cloudflare is helping them on their journey. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/writing-poems-using-llama-2-on-workers-ai/"><span>Writing poems using LLama 2 on Workers AI</span></a></p></td><td><p><span>Want to write a poem using AI? Learn how to run your own AI chatbot in 14 lines of code, running on Cloudflare’s global network. </span></p></td></tr></tbody></table>
    <div>
      <h3>Thursday</h3>
      <a href="#thursday">
        
      </a>
    </div>
    <table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span>What</span></p></td><td><p><span>In a sentence…</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/hyperdrive-making-regional-databases-feel-distributed/"><span>Hyperdrive</span></a></p></td><td><p><span>Cloudflare launches a new product, Hyperdrive, that makes existing regional databases much faster by dramatically speeding up queries that are made from Cloudflare Workers.</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/d1-open-beta-is-here/"><span>D1 Open Beta</span></a></p></td><td><p><span>D1 is now in open beta, and the theme is “scale”: with higher per-database storage limits and the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1.</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/race-ahead-with-build-caching/"><span>Pages Build Caching</span></a></p></td><td><p><span>Build cache is a feature designed to reduce your build times by caching and reusing previously computed project components — now available in Beta. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/running-serverless-puppeteer-workers-durable-objects/"><span>Running serverless Puppeteer with Workers and Durable Objects</span></a></p></td><td><p><span>Introducing the Browser Rendering API, which enables developers to utilize the Puppeteer browser automation library within Workers, eliminating the need for serverless browser automation system setup and maintenance</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/cloudflare-now-powering-microsoft-edge-secure-network/"><span>Cloudflare partners with Microsoft to power their Edge Secure Network</span></a></p></td><td><p><span>We partnered with Microsoft Edge to provide a fast and secure VPN, right in the browser. 
Users don’t have to install anything new or understand complex concepts to get the latest in network-level privacy: Edge Secure Network VPN is available on the latest consumer version of Microsoft Edge in most markets, and automatically comes with 5GB of data. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/workers-playground/"><span>Re-introducing the Cloudflare Workers playground</span></a></p></td><td><p><span>We are revamping the playground that demonstrates the power of Workers, along with new development tooling, and the ability to share your playground code and deploy instantly to Cloudflare’s global network</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/cloudflare-integrations-marketplace-new-partners-sentry-momento-turso/"><span>Cloudflare integrations marketplace expands</span></a></p></td><td><p><span>Introducing the newest additions to Cloudflare’s Integration Marketplace. Now available: Sentry, Momento and Turso. </span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/socket-api-works-javascript-runtimes-wintercg-polyfill-connect/"><span>A Socket API that works across Javascript runtimes — announcing WinterCG spec and polyfill for connect()</span></a></p></td><td><p><span>Engineers from Cloudflare and Vercel have published a draft specification of the connect() sockets API for review by the community, along with a Node.js compatible polyfill for the connect() API that developers can start using.</span></p></td></tr><tr><td><p><a href="http://staging.blog.mrk.cfdata.org/workers-pricing-scale-to-zero/"><span>New Workers pricing</span></a></p></td><td><p><span>Announcing new pricing for Cloudflare Workers, where you are billed based on CPU time, and never for the idle time that your Worker spends waiting on network requests and other I/O.</span></p></td></tr></tbody></table>
    <div>
      <h3>Friday</h3>
      <a href="#friday">
        
      </a>
    </div>
    <table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span>What</span></p></td><td><p><span>In a sentence…</span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/post-quantum-cryptography-ga/"><span>Post Quantum Cryptography goes GA</span></a><span> </span></p></td><td><p><span>Cloudflare is rolling out post-quantum cryptography support to customers, services, and internal systems to proactively protect against advanced attacks. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><span>Encrypted Client Hello</span></a></p></td><td><p><span>Announcing a contribution that helps improve privacy for everyone on the Internet. Encrypted Client Hello, a new standard that prevents networks from snooping on which websites a user is visiting, is now available on all Cloudflare plans. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/threats-lurking-office-365-cloudflare-email-retro-scan/"><span>Email Retro Scan</span></a><span> </span></p></td><td><p><span>Cloudflare customers can now scan messages within their Office 365 Inboxes for threats. The Retro Scan will let you look back seven days to see what threats your current email security tool has missed. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/turnstile-ga/"><span>Turnstile is Generally Available</span></a></p></td><td><p><span>Turnstile, Cloudflare’s CAPTCHA replacement, is now generally available, free to everyone, with unlimited use. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-bots/"><span>AI crawler bots</span></a></p></td><td><p><span>Any Cloudflare user, on any plan, can choose specific categories of bots that they want to allow or block, including AI crawlers. We are also recommending a new standard for robots.txt that will make it easier for websites to clearly direct how AI bots can and can’t crawl.</span></p><br /></td></tr><tr><td><p><a href="https://blog.cloudflare.com/detecting-zero-days-before-zero-day/"><span>Detecting zero-days before zero-day</span></a></p></td><td><p><span>Deep dive into Cloudflare’s approach and ongoing research into detecting novel web attack vectors in our WAF before they are seen by a security researcher. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/deep-dive-privacy-preserving-measurement/"><span>Privacy Preserving Metrics</span></a></p></td><td><p><span>Deep dive into the fundamental concepts behind the Distributed Aggregation Protocol (DAP) with examples on how we’ve implemented it into Daphne, our open source aggregator server. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/post-quantum-to-origins/"><span>Post-quantum cryptography to origin</span></a></p></td><td><p><span>We are rolling out post-quantum cryptography support for outbound connections to origins and Cloudflare Workers fetch() calls. Learn more about what we enabled, how we rolled it out in a safe manner, and how you can add support to your origin server today. </span></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/network-performance-update-birthday-week-2023/"><span>Network performance update</span></a></p></td><td><p><span>Cloudflare’s updated benchmark results regarding network performance plus a dive into the tools and processes that we use to monitor and improve our network performance. </span></p></td></tr></tbody></table>
    <div>
      <h3>One More Thing</h3>
      <a href="#one-more-thing">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1e1X3l01L7kc5DgRUgAoU5/76d3d1955e15044bbbd14c57ee05877c/image1-51.png" />
            
            </figure><p>When Cloudflare turned 12 last year, we announced the <a href="https://www.cloudflare.com/lp/workers-launchpad/">Workers Launchpad Funding Program</a> - you can think of it like a startup accelerator program for companies building on Cloudflare’s Developer Platform, with no restrictions on your size, stage, or geography.</p><p><b>A refresher on how the Launchpad works:</b> Each quarter, we admit a group of startups who then get access to a wide range of technical advice, mentorship, and fundraising opportunities. That includes our Founders Bootcamp, Open Office Hours with our Solution Architects, and Demo Day. Those who are ready to fundraise will also be connected to our community of 40+ leading global Venture Capital firms.</p><p>In exchange, we just ask for your honest feedback. We want to know what works, what doesn’t, and what you need us to build for you. We don’t ask for a stake in your company, and we don’t ask you to pay to be a part of the program.</p><blockquote><p>Targum (my startup) was one of the first AI companies (w/ <a href="https://twitter.com/jamdotdev?ref_src=twsrc%5Etfw">@jamdotdev</a> ) in the Cloudflare workers launchpad!</p><p>In return to tons of stuff we got from CF 🙏 they asked for feedback, and my main one was, let me do everything end to end on CF, I don't want to rent GPU servers… <a href="https://t.co/0j2ZymXpsL">https://t.co/0j2ZymXpsL</a></p><p>— Alex Volkov (@altryne) <a href="https://twitter.com/altryne/status/1707034613699547433?ref_src=twsrc%5Etfw">September 27, 2023</a></p></blockquote><p>Over the past year, we’ve received applications from nearly 60 different countries. 
We’ve had a chance to work closely with 50 amazing early and growth-stage startups admitted into the first two cohorts, and have grown our VC partner community to 40+ firms and more than $2 billion in potential investments in startups building on Cloudflare.</p><p><b>Next up: Cohort #3!</b> Between recently wrapping up Cohort #2 (check out their <a href="https://cloudflare.tv/shows/workers-launchpad-demo-day/workers-launchpad-demo-day-cohort-2/3vVqLOgq">Demo Day</a>!), celebrating the Launchpad’s 1st birthday, and the heaps of announcements we made last week, we thought that everyone could use a little extra time to catch up on all the news - which is why we are extending the deadline for Cohort #3 a few weeks to <b>October 13, 2023. AND</b> we’re <b>reserving 5 spots in the class for those who are already using any of last Wednesday’s AI announcements.</b> Just be sure to mention what you’re using in your application.</p><p>So once you’ve had a chance to check out the announcements and pour yourself a cup of coffee, check out the <a href="https://www.cloudflare.com/lp/workers-launchpad/"><b>Workers Launchpad</b></a>. Applying is a breeze — you’ll be done long before your coffee gets cold.</p>
    <div>
      <h3>Until next time</h3>
      <a href="#until-next-time">
        
      </a>
    </div>
    <p>That’s all for Birthday Week 2023. We hope you enjoyed the ride, and we’ll see you at our next innovation week!</p><blockquote><p>i hate <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@Cloudflare</a> launch week</p><p>most launch weeks are underwhelming</p><p>cloudflare always makes me rethink everything i’m doing</p><p>— Dax (@thdxr) <a href="https://twitter.com/thdxr/status/1707575791234560377?ref_src=twsrc%5Etfw">September 29, 2023</a></p></blockquote><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Turnstile]]></category>
            <category><![CDATA[CAPTCHA]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Beta]]></category>
            <guid isPermaLink="false">7umrrlVzTKNvArNCqoQUSI</guid>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Mia Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[D1: open beta is here]]></title>
            <link>https://blog.cloudflare.com/d1-open-beta-is-here/</link>
            <pubDate>Thu, 28 Sep 2023 13:00:14 GMT</pubDate>
            <description><![CDATA[ D1 is now in open beta, and the theme is “scale”: with higher per-database storage limits and the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1 ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sioTCCEWQ0hiLg5ZSCD46/c53658e14bc379bea56cd0f3fed1d42b/image1-37.png" />
            
            </figure><p><b>D1 is now in open beta</b>, and the theme is “scale”: with higher per-database storage limits <i>and</i> the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1. Developers with an existing paid Workers plan don’t need to lift a finger to benefit: we’ve retroactively applied this to all existing D1 databases.</p><p>If you missed the <a href="/d1-turning-it-up-to-11/">last D1 update</a> back during Developer Week, the <a href="https://developers.cloudflare.com/d1/changelog/">multitude of updates in the changelog</a>, or are just new to D1 in general: read on.</p>
    <div>
      <h3>Remind me: D1? Databases?</h3>
      <a href="#remind-me-d1-databases">
        
      </a>
    </div>
    <p>D1 is our <a href="https://www.cloudflare.com/developer-platform/products/d1/">native serverless database</a>, which we launched into alpha in November last year: the queryable database complement to <a href="https://developers.cloudflare.com/kv/">Workers KV</a>, <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a> and <a href="https://developers.cloudflare.com/r2/">R2</a>.</p><p>When we set out to build D1, we knew a few things for certain: it needed to be fast, it needed to be incredibly easy to create a database, and it needed to be SQL-based.</p><p>That last one was critical: it meant developers could a) avoid learning another custom query language, and b) existing query builders, ORM (object relational mapper) libraries and other tools could connect to D1 with minimal effort. From this, we’ve seen a huge number of projects build in support for D1: from the <a href="https://github.com/drizzle-team/drizzle-orm/blob/main/examples/cloudflare-d1/README.md">Drizzle ORM</a> and <a href="https://developers.cloudflare.com/d1/platform/community-projects/#d1-adapter-for-kysely-orm">Kysely</a>, to the <a href="https://t4stack.com/">T4 App</a>, a full-stack toolkit that uses D1 as its database.</p><p>We also knew that D1 couldn’t be the only way to query a database from Workers: for teams with existing databases and thousands of lines of SQL or existing ORM code, migrating across to D1 isn’t going to be an afternoon’s work. For those teams, we built <a href="/hyperdrive-making-regional-databases-feel-distributed/">Hyperdrive</a>, allowing you to connect to your existing databases and make them feel global. We think this gives teams flexibility: combine D1 and Workers for globally distributed apps, and use Hyperdrive for querying the databases you have in legacy clouds and just can’t get rid of overnight.</p>
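    <p>To make that concrete, here is a minimal sketch, written for this post rather than taken from the D1 docs, of querying D1 from a Worker. The binding name <code>DB</code>, the <code>Order</code> table, and the order ID are illustrative: you configure the real binding in <code>wrangler.toml</code>.</p>
            <pre><code>// Sketch: a Worker querying a D1 database through its binding.
// "DB" is an assumed binding name; the table and ID are illustrative.
const worker = {
  async fetch(request, env) {
    const { results, meta } = await env.DB
      .prepare("SELECT ShipAddress FROM [Order] WHERE Id = ?")
      .bind(10248)
      .all();
    // meta.rows_read and meta.rows_written are the same counters D1
    // reports for pricing and query tuning.
    return Response.json({ results, rowsRead: meta.rows_read });
  },
};
// In a deployed Worker, this object is the module's default export.</code></pre>
            <p>There is no driver or connection pool involved: the binding talks to D1 directly, and results come back as plain JavaScript objects.</p>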
    <div>
      <h3>Larger databases, and more of them</h3>
      <a href="#larger-databases-and-more-of-them">
        
      </a>
    </div>
    <p>This has been the biggest ask from the thousands of D1 users throughout the alpha: not just more databases, but also <i>bigger</i> databases.</p><p><b>Developers on the Workers paid plan will now be able to grow each database up to 2GB and create 50,000 databases (up from 500MB and 10). Yes, you read that right: 50,000 databases per account. This unlocks a whole raft of database-per-user use-cases and enables true isolation between customers, something that traditional relational database deployments can’t easily offer.</b></p><p>We’ll be continuing to work on unlocking even larger databases over the coming weeks and months: developers using the D1 beta will see automatic increases to these limits published on <a href="https://developers.cloudflare.com/d1/changelog/">D1’s public changelog</a>.</p><p>One of the biggest impediments to double-digit-gigabyte databases is performance: we want to ensure that a database can load in and be ready <i>really</i> quickly — cold starts of seconds (or more) just aren’t acceptable. A 10GB or 20GB database that takes 15 seconds before it can answer a query ends up being pretty frustrating to use.</p><p>Users on the <a href="https://www.cloudflare.com/plans/free/">Workers free plan</a> will keep the ten 500MB databases (<a href="https://developers.cloudflare.com/d1/changelog/#per-database-limit-now-500-mb">changelog</a>) forever: we want to give more developers the room to experiment with D1 and Workers before jumping in.</p>
    <div>
      <h3>Time Travel is here</h3>
      <a href="#time-travel-is-here">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/d1/learning/time-travel/">Time Travel</a> allows you to roll your database back to a specific point in time: specifically, any minute in the last 30 days. And it’s enabled by default for every D1 database, doesn’t cost any more, and doesn’t count against your storage limit.</p><p>For those who have been keeping tabs: we originally announced Time Travel earlier this year, and made it <a href="https://developers.cloudflare.com/d1/changelog/#time-travel">available to all D1 users in July</a>. At its core, it’s deceptively simple: Time Travel introduces the concept of a “bookmark” to D1. A bookmark represents the state of a database at a specific point in its (effectively append-only) log of changes. Time Travel can turn a timestamp into a bookmark, or accept a bookmark directly, allowing you to restore the database to that point. Even better: restoring doesn’t prevent you from going back further.</p><p>We think Time Travel works best with an example, so let’s make a change to a database: one with an Order table that stores every order made against our e-commerce store:</p>
            <pre><code># To illustrate: we have 89,185 unique addresses in our order database. 
➜  wrangler d1 execute northwind --command "SELECT count(distinct ShipAddress) FROM [Order]" 
┌─────────────────────────────┐
│ count(distinct ShipAddress) │
├─────────────────────────────┤
│ 89185                       │
└─────────────────────────────┘</code></pre>
            <p>OK, great. Now what if we wanted to make a change to a specific set of orders: an address change or freight company change?</p>
            <pre><code># I think we might be forgetting something here...
➜  wrangler d1 execute northwind --command "UPDATE [Order] SET ShipAddress = 'Av. Veracruz 38, Roma Nte., Cuauhtémoc, 06700 Ciudad de México, CDMX, Mexico'"</code></pre>
            <p>Wait: we’ve made a mistake that many, many folks have before: we forgot the WHERE clause on our UPDATE query. Instead of updating a specific order Id, we’ve instead updated the ShipAddress for every order in our table.</p>
            <pre><code># Every order is now going to a wine bar in Mexico City. 
➜  wrangler d1 execute northwind --command "SELECT count(distinct ShipAddress) FROM [Order]" 
┌─────────────────────────────┐
│ count(distinct ShipAddress) │
├─────────────────────────────┤
│ 1                           │
└─────────────────────────────┘</code></pre>
            <p>Panic sets in. Did we remember to make a backup before we did this? How long ago was it? Did we turn on point-in-time recovery? It seemed potentially expensive at the time…</p><p>It’s OK. We’re using D1. We can Time Travel. It’s on by default: let’s fix this and travel back a few minutes.</p>
            <pre><code># Let's go back in time.
➜  wrangler d1 time-travel restore northwind --timestamp="2023-09-23T14:20:00Z"

🚧 Restoring database northwind from bookmark 0000000b-00000002-00004ca7-9f3dba64bda132e1c1706a4b9d44c3c9
✔ OK to proceed (y/N) … yes

⚡️ Time travel in progress...
✅ Database northwind restored back to bookmark 00000000-00000004-00004ca7-97a8857d35583887de16219c766c0785
↩️ To undo this operation, you can restore to the previous bookmark: 00000013-ffffffff-00004ca7-90b029f26ab5bd88843c55c87b26f497</code></pre>
            <p>Let's check if it worked:</p>
            <pre><code># Phew. We're good. 
➜  wrangler d1 execute northwind --command "SELECT count(distinct ShipAddress) FROM [Order]" 
┌─────────────────────────────┐
│ count(distinct ShipAddress) │
├─────────────────────────────┤
│ 89185                       │
└─────────────────────────────┘</code></pre>
            <p>We think that Time Travel becomes even more powerful when you have many smaller databases, too: the downsides of any restore operation are reduced further and scoped to a single user or tenant.</p><p>This is also just the beginning for Time Travel: we’re working to support not just restoring a database, but also the ability to fork from and overwrite existing databases. If you can fork a database with a single command and/or test migrations and schema changes against real data, you can de-risk a lot of the traditional challenges that working with databases has historically implied.</p>
    <div>
      <h3>Row-based pricing</h3>
      <a href="#row-based-pricing">
        
      </a>
    </div>
    <p><a href="/d1-turning-it-up-to-11/#not-going-to-burn-a-hole-in-your-wallet">Back in May</a> we announced pricing for D1, to a lot of positive feedback around how much we’d included in our Free and Paid plans. In August, we published a new row-based model, replacing the prior byte-units, that makes it easier to predict and quantify your usage. Specifically, we moved to rows as it’s easier to reason about: if you’re writing a row, it doesn’t matter if it’s 1KB or 1MB. If your read query uses an indexed column to filter on, you’ll see not only performance benefits, but cost savings too.</p><p>Here’s D1’s pricing — almost everything has stayed the same, with the added benefit of charging based on rows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4053N3dvxuEp46TQG6xec9/74244f620374666d3b8fcbcf5d0016bb/Screenshot-2023-09-29-at-09.33.51.png" />
            
            </figure><p>D1’s pricing — you can find more details in <a href="https://developers.cloudflare.com/d1/platform/pricing/">D1’s public documentation</a>.</p><p>As before, D1 does not charge you for “database hours”, the number of databases, or point-in-time recovery (<a href="https://developers.cloudflare.com/d1/learning/time-travel/">Time Travel</a>) — just query D1 and pay for your reads, writes, and storage — that’s it.</p><p>We believe this makes D1 not only far more cost-efficient, but also makes it easier to manage multiple databases to isolate customer data or prod vs. staging: we don’t care <i>which</i> database you query. Manage your data how you like, separate your customer data, and avoid falling into the trap of “Billing Based Architecture”, where you build solely around how you’re charged, even if it’s not intuitive or what makes sense for your team.</p><p>To make it easier to both see how much a given query charges <i>and</i> when to <a href="https://developers.cloudflare.com/d1/learning/using-indexes/">optimize your queries with indexes</a>, D1 also returns the number of rows a query read or wrote (or both) so that you can understand what it’s costing you in both cents and speed.</p><p>For example, the following query filters over orders based on date:</p>
            <pre><code>SELECT * FROM [Order] WHERE ShippedDate &gt; '2016-01-22'

[
  {
    "results": [],
    "success": true,
    "meta": {
      "duration": 5.032,
      "size_after": 33067008,
      "rows_read": 16818,
      "rows_written": 0
    }
  }
]</code></pre>
            <p>The unindexed query above reads 16,818 rows. Even if we don’t optimize it, the Workers paid plan includes 25 billion rows read per month, meaning we could run this query 1.4 million times in a month before having to worry about extra costs.</p><p>But we can do better with an index:</p>
            <pre><code>CREATE INDEX IF NOT EXISTS idx_orders_date ON [Order](ShippedDate)</code></pre>
            <p>With the index created, let’s see how many rows our query needs to read now:</p>
            <pre><code>SELECT * FROM [Order] WHERE ShippedDate &gt; '2016-01-22'

[
  {
    "results": [],
    "success": true,
    "meta": {
      "duration": 3.793,
      "size_after": 33067008,
      "rows_read": 417,
      "rows_written": 0
    }
  }
]</code></pre>
            <p>The same query with an index on the ShippedDate column reads just 417 rows: not only is it faster (duration is in milliseconds!), but it costs us less: we could run this query 59 million times per month before we’d have to pay any more than what the $5 Workers plan gives us.</p><p>D1 also <a href="https://developers.cloudflare.com/d1/platform/metrics-analytics/#metrics">exposes row counts</a> via both the Cloudflare dashboard and our GraphQL analytics API: so not only can you look at this per-query when you’re tuning performance, but you can also break down query patterns across all of your databases.</p>
    <div>
      <h3>D1 for Platforms</h3>
      <a href="#d1-for-platforms">
        
      </a>
    </div>
    <p>Throughout D1’s alpha period, we’ve both heard from and worked with teams who are excited about D1’s ability to scale out horizontally: the ability to deploy a database-per-customer (or user!) in order to keep data closer to where teams access it <i>and</i> more strongly isolate that data from their other users.</p><p>Teams building the next big thing on <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/">Workers for Platforms</a> — think of it as “Functions as a Service, as a Service” — can use D1 to deploy a <b>database per user</b> — keeping customer data strongly separated from each other.</p><p>For example, and as one of the early adopters of D1, <a href="https://twitter.com/roninapp">RONIN</a> is building an edge-first content &amp; data platform backed by a dedicated D1 database per customer, which allows customers to place data closer to users and provides each customer isolation from the queries of others.</p><p>Instead of spinning up and managing countless traditional database instances, RONIN uses D1 for Platforms to offer automatic infinite scalability at the edge. This allows RONIN to focus on providing an intuitive editing experience for your content.</p><p>When it comes to enabling “D1 for Platforms”, we’ve thought about this in a few ways from the very beginning:</p><ul><li><p><b>Support for more than 100,000 databases for Workers for Platforms users — there’s no limit, but if we said “unlimited” you might not believe us — on top of the 50,000 databases per account that D1 already enables.</b></p></li><li><p>D1’s pricing - you don’t pay per-database or for “idle databases”. 
If you have a range of users, from thousands of QPS down to 1-2 every 10 minutes — you aren’t paying more for “database hours” on the less trafficked databases, or having to plan around spiky workloads across your user-base.</p></li><li><p>The ability to programmatically configure more databases via <a href="https://developers.cloudflare.com/api/operations/cloudflare-d1-create-database">D1’s HTTP API</a> <i>and</i> <a href="https://developers.cloudflare.com/api/operations/worker-script-patch-settings">attach them to your Worker</a> without re-deploying. There’s no “provisioning” delay, either: you create the database, and it’s immediately ready to query by you or your users.</p></li><li><p>Detailed <a href="https://developers.cloudflare.com/d1/platform/metrics-analytics/">per-database analytics</a>, so you can understand which databases are being used and how they’re being queried via D1’s GraphQL analytics API.</p></li></ul><p>If you’re building the next big platform on top of Workers &amp; want to use D1 at scale — whether you’re part of the <a href="https://www.cloudflare.com/lp/workers-launchpad/">Workers Launchpad program</a> or not — reach out.</p>
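    <p>As a sketch of that programmatic flow (our example, not code from the D1 documentation), creating a tenant database is a single authenticated POST to the REST API; the account ID, token, and naming below are placeholders:</p>
            <pre><code>// Sketch: creating a per-tenant D1 database via Cloudflare's REST API.
// accountId and apiToken are placeholders for your own credentials.
async function createTenantDatabase(accountId, apiToken, name) {
  const res = await fetch(
    "https://api.cloudflare.com/client/v4/accounts/" + accountId + "/d1/database",
    {
      method: "POST",
      headers: {
        "Authorization": "Bearer " + apiToken,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: name }),
    }
  );
  const data = await res.json();
  // data.result describes the new database, including the UUID used
  // to attach it to a Worker without re-deploying.
  return data.result;
}</code></pre>
            <p>Because there is no provisioning delay, the returned database is immediately queryable, which is what makes creating a database at user-signup time practical.</p>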
    <div>
      <h3>What’s next for D1?</h3>
      <a href="#whats-next-for-d1">
        
      </a>
    </div>
    <p><b>We’re setting a clear goal: we want to make D1 “generally available” (GA) for production use-cases by early next year</b> <b>(Q1 2024)</b>. Although you can already use D1 without a waitlist or approval process, we understand that the GA label is an important one for many when it comes to a database (as it is to us).</p><p>Between now and GA, we’re working on some really key parts of the D1 vision, with a continued focus on reliability and performance.</p><p>One of the biggest remaining pieces of that vision is global read replication, which we <a href="/d1-turning-it-up-to-11/">wrote about earlier this year</a>. Importantly, replication will be free, won’t multiply your storage consumption, and will still enable session consistency (read-your-writes). Part of D1’s mission is about getting data closer to where users are, and we’re excited to land it.</p><p>We’re also working to expand <a href="https://developers.cloudflare.com/d1/learning/time-travel/">Time Travel</a>, D1’s built-in point-in-time recovery capabilities, so that you can branch and/or clone a database from a specific point-in-time on the fly.</p><p>We’ll also <b>be progressively opening up our limits around per-database storage, unlocking more storage per account, and the number of databases you can create over the rest of this year</b>, so keep an eye on the D1 <a href="https://developers.cloudflare.com/d1/changelog/">changelog</a> (or your inbox).</p><p>In the meantime, if you haven’t yet used D1, you can <a href="https://developers.cloudflare.com/d1/get-started/">get started</a> right now, visit D1’s <a href="https://developers.cloudflare.com/d1/">developer documentation</a> to spark some ideas, or <a href="https://discord.cloudflare.com/">join the #d1-beta channel</a> on our Developer Discord to talk to other D1 developers and our product-engineering team.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[D1]]></category>
            <guid isPermaLink="false">5I0knbF5YIn2PbvvOTa1q2</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Ben Yule</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built an open-source SEO tool using Workers, D1, and Queues]]></title>
            <link>https://blog.cloudflare.com/how-we-built-an-open-source-seo-tool-using-workers-d1-and-queues/</link>
            <pubDate>Thu, 02 Mar 2023 15:03:54 GMT</pubDate>
            <description><![CDATA[ In this blog post, I’m excited to show off some of the new tools in Cloudflare’s developer arsenal, D1 and Queues, to prototype and ship an internal tool for our SEO experts at Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Building applications on Cloudflare Workers has always been fun. Workers applications have low latency response times by default, and easy developer ergonomics thanks to Wrangler. It's no surprise that for years now, developers have been going from idea to production with Workers in just a few minutes.</p><p>Internally, we're no different. When a member of our team has a project idea, we often reach for Workers first, and not just for the MVP stage, but in production, too. Workers have been a secret ingredient to Cloudflare’s innovation for some time now, allowing us to build products like Access, Stream and Workers KV. Even better, when we have new ideas <i>and</i> we can use new Cloudflare products to build them, it's a great way to give feedback on those products.</p><p>We've discussed this in the past on the Cloudflare blog - in May last year, <a href="/new-dev-docs/">I wrote how we rebuilt Cloudflare's developer documentation</a> using many of the tools that had recently been released in the Workers ecosystem: Cloudflare Pages for hosting, and Bulk Redirects for the redirect rules. In November, <a href="/building-a-better-developer-experience-through-api-documentation/">we released a new version of our API documentation</a>, which again used Pages for hosting, and Pages functions for intelligent caching and transformation of our API schema.</p><p>In this blog post, I’m excited to show off some of the new tools in Cloudflare’s developer arsenal, <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1</a> and <a href="/introducing-cloudflare-queues/">Queues</a>, to prototype and ship an internal tool for our SEO experts at Cloudflare. We've made this project, which we're calling Prospector, open-source too - check it out in our <a href="https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-prospector"><code>cloudflare/templates</code></a> repo on GitHub. 
Whether you're a developer looking to understand how to use multiple parts of Cloudflare's developer stack together, or an SEO specialist who may want to deploy the tool in production, we've made it incredibly easy to get up and running.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AbvyVpEkBfnOITizlfkhT/73156db0a0fe274ead622a677b6eb959/image1.png" />
            
            </figure>
    <div>
      <h2>What we're building</h2>
      <a href="#what-were-building">
        
      </a>
    </div>
    <p>Prospector is a tool that allows Cloudflare's SEO experts to monitor our blog and marketing site for specific keywords. When a keyword is matched on a page, Prospector sends a notification to a configured email address. This allows our SEO experts to stay informed of any changes to our website, and take action accordingly.</p><p><a href="/sending-email-from-workers-with-mailchannels/">Using MailChannels' integration with Workers</a>, we can quickly and easily send emails from our application using a single API call. This allows us to focus on the core functionality of the application, and not worry about the details of sending emails.</p><p>Prospector uses Cloudflare Workers as the user-facing API for the application. It uses D1 to store and retrieve data in real-time, and Queues to handle the fetching of all URLs and the notification process. We've also included an intuitive user interface for the application, which is built with HTML, CSS, and JavaScript.</p>
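    <p>That single API call looks roughly like this sketch (our own helper, not code from the Prospector repo); the sender address and message shape are illustrative:</p>
            <pre><code>// Sketch: sending a notification email from a Worker via the
// MailChannels send API. The helper name and addresses are illustrative.
async function notifyMatch(email, keyword, url) {
  return fetch("https://api.mailchannels.net/tx/v1/send", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      personalizations: [{ to: [{ email: email }] }],
      from: { email: "prospector@example.com", name: "Prospector" },
      subject: "New keyword match: " + keyword,
      content: [{
        type: "text/plain",
        value: "The keyword \"" + keyword + "\" was found on " + url,
      }],
    }),
  });
}</code></pre>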
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/339I5LA7CAciIdEOyfKkl3/fc05a5f4b5d41ef794d638df5e41d1fb/image3-1.png" />
            
            </figure>
    <div>
      <h2>Why we built it</h2>
      <a href="#why-we-built-it">
        
      </a>
    </div>
    <p>It is widely known in SEO that both internal and external links help Google and other search engines understand what a website is about, which impacts keyword rankings. Not only do these links guide readers to additional helpful information, they also allow <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/">web crawlers</a> for search engines to discover and index content on the site.</p><p>Acquiring external links is often a time-consuming process and at the discretion of third parties, whereas website owners typically have much more control over internal links. As a result, internal linking is one of the most useful levers available in SEO.</p><p>In an ideal world, every piece of content would be fully formed upon publication, replete with helpful internal links throughout the piece. However, this is often not the case. Many times, content is edited after the fact, or additional pieces of relevant content come along after initial publication. These situations result in missed opportunities for internal linking.</p><p>Like other large organizations, Cloudflare has published thousands of blogs and web pages over the years. We share new content every time a product or technology is introduced or improved. Ultimately, that also means it's become more challenging to identify opportunities for internal linking in a timely, automated fashion. We needed a tool that would surface internal linking opportunities as they appear, rather than waiting on a manual audit.</p><p>Although we tested several tools that might solve this problem, we found that they were limited in several ways. First, some tools only scanned the first 2,000 characters of a web page. Any opportunities found beyond that limit would not be detected. Next, some tools did not allow us to limit searches to certain areas of the site and resulted in many false positives. Finally, other potential solutions required manual operation, leaving the process at the mercy of human memory.</p><p>To solve our problem (and ultimately, improve our SEO), we needed an automated tool that could discover and notify us of new instances of targeted phrases on a specified range of pages.</p>
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    
    <div>
      <h3>Data model</h3>
      <a href="#data-model">
        
      </a>
    </div>
    <p>First, let's explore the data model for Prospector. We have two main tables: <code>notifiers</code> and <code>urls</code>. The <code>notifiers</code> table stores the email address and keyword that we want to monitor. The <code>urls</code> table stores the URL and sitemap that we want to scrape. The <code>notifiers</code> table has a one-to-many relationship with the <code>urls</code> table, meaning that each notifier can have many URLs associated with it.</p><p>In addition, we have a <code>sitemaps</code> table that stores the sitemap URLs that we've scraped. Many larger websites don't just have a single sitemap: the Cloudflare blog, for instance, has a primary sitemap that contains four sub-sitemaps. When the application is deployed, a primary sitemap is provided as configuration, and Prospector will parse it to find all of the sub-sitemaps.</p><p>Finally, <code>notifier_matches</code> is a table that stores the matches between a notifier and a URL. This allows us to keep track of which URLs have already been matched, and which ones still need to be processed. When a match has been found, the <code>notifier_matches</code> table is updated to reflect that, and "matches" for a keyword are no longer processed. This saves our SEO experts from a crowded inbox, and allows them to focus and act on new matches.</p><p><b>Connecting the pieces with Cloudflare Queues</b></p><p>Cloudflare Queues acts as the work queue for Prospector. When a new notifier is added, a new job is created for it and added to the queue. Behind the scenes, Queues will distribute the work across multiple Workers, allowing us to scale the application as needed. When a job is processed, Prospector will scrape the URL and check for matches. If a match is found, Prospector will send an email to the notifier's email address.</p><p>Using the Cron Triggers functionality in Workers, we can schedule the scraping process to run at a regular interval - by default, once a day. 
This allows us to keep our data up-to-date, and ensures that we're always notified of any changes to our website. It also allows the end-user to configure when they receive emails in case they want to receive them more or less frequently, or at the beginning of their workday.</p><p>The Module Workers syntax for Workers makes accessing the application bindings - the constants available in the application for querying D1, Queues, and other services - incredibly easy. <code>src/index.ts</code>, the entrypoint for the application, looks like this:</p>
            <pre><code>import { DBUrl, Env } from './types'

import {
  handleQueuedUrl,
  scheduled,
} from './functions';

import h from './api'

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext
  ): Promise&lt;Response&gt; {
    return h.fetch(request, env, ctx)
  },

  async queue(
    batch: MessageBatch&lt;string&gt;,
    env: Env
  ): Promise&lt;void&gt; {
    for (const message of batch.messages) {
      const url: DBUrl = JSON.parse(message.body)
      await handleQueuedUrl(url, env.DB)
    }
  },

  async scheduled(
    controller: ScheduledController,
    env: Env
  ): Promise&lt;void&gt; {
    await scheduled({
      authToken: env.AUTH_TOKEN,
      db: env.DB,
      queue: env.QUEUE,
      sitemapUrl: env.SITEMAP_URL,
    })
  }
};</code></pre>
            <p>With this syntax, we can see where the various events incoming to the application - the <code>fetch</code> event, the <code>queue</code> event, and the <code>scheduled</code> event - are handled. The <code>fetch</code> event is the main entrypoint for the application, and is where we handle all of the API routes. The <code>queue</code> event is where we handle the work that's been added to the queue, and the <code>scheduled</code> event is where we handle the scheduled scraping process.</p><p>Central to the application, of course, is Workers - acting as the API gateway and coordinator. We've elected to use the popular open-source framework <a href="https://honojs.dev/">Hono</a>, an Express-style API for Workers, in Prospector. With Hono, we can quickly map out a REST API in just a few lines of code. Here's an example of a few API routes and how they're defined with Hono:</p>
            <pre><code>const app = new Hono()

app.get("/", (context) =&gt; {
  return context.html(index)
})

app.post("/notifiers", async context =&gt; {
  try {
	const { keyword, email } = await context.req.parseBody()
	await context.env.DB.prepare(
  	"insert into notifiers (keyword, email) values (?, ?)"
	).bind(keyword, email).run()
	return context.redirect('/')
  } catch (err) {
	context.status(500)
	return context.text("Something went wrong")
  }
})

app.get('/sitemaps', async (context) =&gt; {
  const query = await context.env.DB.prepare(
	"select * from sitemaps"
  ).all();
  const sitemaps: Array&lt;DBSitemap&gt; = query.results
  return context.json(sitemaps)
})</code></pre>
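<p>The <code>POST /notifiers</code> route above inserts the form body directly. As a sketch of what input checking in front of that D1 insert could look like, here is a small standalone validation helper. This is illustrative only; the helper name and its rules are our own, not part of Prospector:</p>

```typescript
// Hypothetical validation for the POST /notifiers form body.
// Rejects empty keywords and obviously malformed email addresses
// before anything is written to the database.
interface NotifierInput {
  keyword: string
  email: string
}

function parseNotifier(body: Record<string, unknown>): NotifierInput {
  const keyword = typeof body.keyword === 'string' ? body.keyword.trim() : ''
  const email = typeof body.email === 'string' ? body.email.trim() : ''
  if (keyword.length === 0) {
    throw new Error('keyword is required')
  }
  // Deliberately loose check: one "@", no spaces, a dot in the domain.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error('email is invalid')
  }
  return { keyword, email }
}
```

<p>A handler would call the helper inside its <code>try</code> block, so a bad submission returns an error before the <code>insert</code> runs.</p>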
            <p>Crucial to the development of Prospector are the improved TypeScript bindings for Workers. <a href="/improving-workers-types/">As announced in November of last year</a>, TypeScript bindings for Workers are now automatically generated based on <a href="/workerd-open-source-workers-runtime/">our open source runtime, <code>workerd</code></a>. This means that whenever we use the types provided from the <a href="https://github.com/cloudflare/workers-types"><code>@cloudflare/workers-types</code> package</a> in our application, we can be sure that the types are always up-to-date.</p><p>With these bindings, we can define the types for our environment variables, and use them in our application. Here's an example of the <code>Env</code> type, which defines the environment variables that we use in the application:</p>
            <pre><code>export interface Env {
  AUTH_TOKEN: string
  DB: D1Database
  QUEUE: Queue
  SITEMAP_URL: string
}</code></pre>
            <p>Notice the types of the <code>DB</code> and <code>QUEUE</code> bindings - <code>D1Database</code> and <code>Queue</code>, respectively. These types are automatically generated, complete with type signatures for each method inside of the D1 and Queue APIs. This means that we can be sure that we're using the correct methods, and that we're passing the correct arguments to them, directly from our text editor - without having to refer to the documentation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uqW1v9MEpdsEMieihxKFk/e865da1397167301f051616d61f83a1a/image4.png" />
            
            </figure>
    <div>
      <h2>How to use it</h2>
      <a href="#how-to-use-it">
        
      </a>
    </div>
    <p>One of my favorite things about Workers is that deploying applications is quick and easy. Using <code>wrangler.toml</code> and some simple build scripts, we can deploy a fully-functional application in just a few minutes. Prospector is no different. With just a few commands, we can create the necessary D1 database and Queues instance, and deploy the application to our account.</p><p>First, you'll need to clone the repository from our cloudflare/templates repository:</p><p><code>git clone $URL</code></p><p>If you haven't installed wrangler yet, you can do so by running:</p><p><code>npm install wrangler -g</code></p><p>With Wrangler installed, you can log in to your account by running:</p><p><code>wrangler login</code></p><p>After you've done that, you'll need to create a new D1 database, as well as a Queues instance. You can do this by running the following commands:</p><p><code>wrangler d1 create $DATABASE_NAME</code></p><p><code>wrangler queues create $QUEUE_NAME</code></p><p>Configure your <code>wrangler.toml</code> with the appropriate bindings (see the README in the repository for an example):</p>
            <pre><code>[[d1_databases]]
binding = "DB"
database_name = "keyword-tracker-db"
database_id = "ab4828aa-723b-4a77-a3f2-a2e6a21c4f87"
preview_database_id = "8a77a074-8631-48ca-ba41-a00d0206de32"

[[queues.producers]]
  queue = "queue"
  binding = "QUEUE"

[[queues.consumers]]
  queue = "queue"
  max_batch_size = 10
  max_batch_timeout = 30
  max_retries = 10
  dead_letter_queue = "queue-dlq"</code></pre>
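<p>Because the scraping process runs on a Cron Trigger (once a day by default, as described above), <code>wrangler.toml</code> also needs a <code>crons</code> entry. The schedule below is illustrative; pick whatever cadence suits your team:</p>

```toml
# Illustrative Cron Trigger: run the scheduled scrape daily at 09:00 UTC.
# The expression controls when the scrape runs, and thus when
# notification emails go out.
[triggers]
crons = ["0 9 * * *"]
```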
            <p>Next, you can run the <code>bin/migrate</code> script to create the tables in your database:</p><p><code>bin/migrate</code></p><p>This will create all the needed tables in your database, both in development (locally) and in production. Note that you'll even see the creation of an honest-to-goodness <code>.sqlite3</code> file in your project directory - this is the local development database, which you can connect to directly using the same SQLite CLI that you're used to:</p>
            <pre><code>$ sqlite3 .wrangler/state/d1/DB.sqlite3
sqlite&gt; .tables
notifier_matches  notifiers  sitemaps  urls</code></pre>
            <p>Finally, you can deploy the application to your account:</p><p><code>npm run deploy</code></p><p>With a deployed application, you can visit your Workers URL to see the user interface. From there, you can add new notifiers and URLs, and see the results of your scraping process. When a new keyword match is found, you’ll receive an email with the details of the match instantly:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2P59U5wQRoysgE8nWLbwLD/b1e7240ddd90dd36676163b201998cb3/image2-1.png" />
            
            </figure>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>For some time, many applications were hard to build on Workers for lack of relational data storage and background task tooling. Now, with D1 and Queues, we can build applications that seamlessly integrate real-time user interfaces, geographically distributed data, background processing, and more, all using the same developer ergonomics and low latency that Workers is known for.</p><p>D1 has been crucial for building this application. On larger sites, the number of URLs that need to be scraped can be quite large. If we were to use Workers KV, our key-value store, for storing this data, we would quickly struggle with how to model, retrieve, and update the data needed for this use-case. With D1, we can build relational data models and quickly query <i>just</i> the data we need for each queued processing task.</p><p>Using these tools, developers can build internal tools and applications for their companies that are more powerful and more scalable than ever before. With the integration of Cloudflare's Zero Trust suite, developers can make these applications secure by default, and deploy them to Cloudflare's global network. This allows developers to build applications that are fast, secure, and reliable, all without having to worry about the underlying infrastructure.</p><p>Prospector is a great example of how easy it is to build applications on Cloudflare Workers. With the recent addition of D1 and Queues, we've been able to build fully-functional applications that require real-time data and background processing in just a few hours. 
We're excited to share the open-source code for Prospector, and we'd love to hear your feedback on the project.</p><p>If you have any questions, feel free to reach out to us on Twitter at <a href="https://twitter.com/cloudflaredev">@cloudflaredev</a>, or join us in the Cloudflare Workers Discord community, which recently hit 20k members and is a great place to ask questions and get help from other developers.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3Ye7OiZdwDby0AGqA7LQAh</guid>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Neal Kindschi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Wildebeest: the Fediverse on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/welcome-to-wildebeest-the-fediverse-on-cloudflare/</link>
            <pubDate>Wed, 08 Feb 2023 19:00:00 GMT</pubDate>
            <description><![CDATA[ Today we're announcing Wildebeest, an open-source, easy-to-deploy ActivityPub and Mastodon-compatible server built entirely on top of Cloudflare's Supercloud. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BbRRixkMxcIiNYgdA9go/f2d1e27e932958951271d36ccffa2c16/Wildebeest.png" />
            
            </figure><p><a href="https://en.wikipedia.org/wiki/Fediverse">The Fediverse</a> has been a hot topic of discussion lately, with thousands, if not <a href="https://bitcoinhackers.org/@mastodonusercount">millions</a>, of new users creating accounts on platforms like <a href="https://joinmastodon.org/">Mastodon</a> to either move entirely to "the other side" or experiment and learn about this new social network.</p><p>Today we're introducing <a href="https://github.com/cloudflare/wildebeest">Wildebeest</a>, an open-source, easy-to-deploy ActivityPub and Mastodon-compatible server built entirely on top of Cloudflare's Supercloud. If you want to run your own spot in the Fediverse you can now do it entirely on Cloudflare.</p>
    <div>
      <h2>The Fediverse, built on Cloudflare</h2>
      <a href="#the-fediverse-built-on-cloudflare">
        
      </a>
    </div>
    <p>Today you're left with two options if you want to join the Mastodon federated network: either you join one of the <a href="https://joinmastodon.org/servers">existing servers</a> (servers are also called communities, and each one has its own infrastructure and rules), or you can run your own self-hosted server.</p><p>There are a few reasons why you'd want to run your own server:</p><ul><li><p>You want to create a new community and attract other users over a common theme and usage rules.</p></li><li><p>You don't want to have to trust third-party servers or abide by their policies and want your server, under your domain, for your personal account.</p></li><li><p>You want complete control over your data, personal information, and content and visibility over what happens with your instance.</p></li></ul><p>The Mastodon gGmbH non-profit organization provides a server implementation using Ruby, Node.js, PostgreSQL and Redis. Running the <a href="https://github.com/mastodon/mastodon">official server</a> can be challenging, though. You need to own or rent a server or VPS somewhere; you have to install and configure the software, set up the database and public-facing web server, and configure and protect your network against attacks or abuse. And then you have to maintain all of that and deal with constant updates. It's a lot of scripting and technical work before you can get it up and running; definitely not something for the less technical enthusiasts.</p><p>Wildebeest serves two purposes: you can quickly deploy your Mastodon-compatible server on top of Cloudflare and connect it to the Fediverse in minutes, and you don't need to worry about maintaining or protecting it from abuse or attacks; Cloudflare will do it for you automatically.</p><p>Wildebeest is not a managed service. It's your instance, data, and code running in our cloud under your Cloudflare account. 
Furthermore, it's <a href="https://github.com/cloudflare/wildebeest">open-sourced</a>, which means it keeps evolving with more features, and anyone can <a href="https://github.com/cloudflare/wildebeest/pulls">extend</a> and improve it.</p><p>Here's what we support today:</p><ul><li><p><a href="https://www.w3.org/TR/activitypub/">ActivityPub</a>, <a href="https://www.rfc-editor.org/rfc/rfc7033">WebFinger</a>, <a href="https://github.com/cloudflare/wildebeest/tree/main/functions/nodeinfo">NodeInfo</a>, <a href="https://datatracker.ietf.org/doc/html/rfc8030">WebPush</a> and <a href="https://docs.joinmastodon.org/api/">Mastodon-compatible</a> APIs. Wildebeest can connect to or receive connections from other Fediverse servers.</p></li><li><p>Compatible with the most popular Mastodon web (like <a href="https://github.com/nolanlawson/pinafore">Pinafore</a>), desktop, and <a href="https://joinmastodon.org/apps">mobile clients</a>. We also provide a simple read-only web interface to explore the timelines and user profiles.</p></li><li><p>You can publish, edit, boost, or delete posts, sorry, toots. We support text, images, and (soon) video.</p></li><li><p>Anyone can follow you; you can follow anyone.</p></li><li><p>You can search for content.</p></li><li><p>You can register one or multiple accounts under your instance. Authentication can be email-based or use any Cloudflare Access-compatible IdP, like GitHub or Google.</p></li><li><p>You can edit your profile information, avatar, and header image.</p></li></ul>
    <div>
      <h2>How we built it</h2>
      <a href="#how-we-built-it">
        
      </a>
    </div>
    <p>Our implementation is built entirely on top of our <a href="https://www.cloudflare.com/cloudflare-product-portfolio/">products</a> and <a href="https://developers.cloudflare.com/">APIs</a>. Building Wildebeest was another excellent opportunity to showcase our technology stack's power and versatility and prove how anyone can also use Cloudflare to build larger applications that involve multiple systems and complex requirements.</p><p>Here's a birds-eye diagram of Wildebeest's architecture:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33R5UHXYSBDBUsoFLMkoC8/0304880c93af0a41d168616da4c73b90/Screenshot-2023-02-08-at-10.58.01-AM.png" />
            
            </figure><p>Let's get into the details and get technical now.</p>
    <div>
      <h3>Cloudflare Pages</h3>
      <a href="#cloudflare-pages">
        
      </a>
    </div>
    <p>At the core, Wildebeest is a <a href="https://pages.cloudflare.com/">Cloudflare Pages</a> project running its code using <a href="https://developers.cloudflare.com/pages/platform/functions/">Pages Functions</a>. Cloudflare Pages provides an excellent foundation for building and deploying your application and serving your bundled assets, Functions gives you full access to the Workers ecosystem, where you can run any code.</p><p>Functions has a built-in <a href="https://developers.cloudflare.com/pages/platform/functions/routing/">file-based router</a>. The <a href="https://github.com/cloudflare/wildebeest/tree/main/functions">/functions</a> directory structure, which is uploaded by Wildebeest’s continuous deployment builds, defines your application routes and what files and code will process each HTTP endpoint request. This routing technique is similar to what other frameworks like Next.js <a href="https://nextjs.org/docs/routing/introduction">use</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NsNlFYtyqKVzhFFBuGeRW/767c9b102b9d97ad067c343df387c5db/2b.png" />
            
            </figure><p>For example, Mastodon’s <a href="https://docs.joinmastodon.org/methods/timelines/#public">/api/v1/timelines/public</a> API endpoint is handled by <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/api/v1/timelines/public.ts">/functions/api/v1/timelines/public.ts</a> with the onRequest method.</p>
            <pre><code>export const onRequest = async ({ request, env }) =&gt; {
	const { searchParams } = new URL(request.url)
	const domain = new URL(request.url).hostname
...
	return handleRequest(domain, env.DATABASE, {})
}

export async function handleRequest(
    …
): Promise&lt;Response&gt; {
    …
}
</code></pre>
            <p>Unit testing these endpoints becomes easier too, since we only have to call the handleRequest() function from the testing framework. Check one of our <a href="https://jestjs.io/">Jest</a> tests, <a href="https://github.com/cloudflare/wildebeest/blob/main/backend/test/mastodon.spec.ts">mastodon.spec.ts</a>:</p>
            <pre><code>import * as assert from 'node:assert'
import * as v1_instance from 'wildebeest/functions/api/v1/instance'

describe('Mastodon APIs', () =&gt; {
	describe('instance', () =&gt; {
		test('return the instance infos v1', async () =&gt; {
			const res = await v1_instance.handleRequest(domain, env)
			assert.equal(res.status, 200)
			assertCORS(res)

			const data = await res.json&lt;Data&gt;()
			assert.equal(data.rules.length, 0)
			assert(data.version.includes('Wildebeest'))
		})
       })
})
</code></pre>
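<p>The routing/logic split that makes this testable isn't specific to Wildebeest. Here's a generic sketch of the pattern, runnable anywhere <code>Request</code> and <code>Response</code> are available (the names and response body below are ours, not Wildebeest's):</p>

```typescript
// Generic sketch of the pattern above: the framework-facing wrapper stays
// thin, and the logic function can be called directly from tests.
async function handleRequest(domain: string): Promise<Response> {
  const body = JSON.stringify({ uri: domain, version: 'sketch' })
  return new Response(body, {
    status: 200,
    headers: { 'content-type': 'application/json' },
  })
}

// The wrapper only extracts what the logic needs from the incoming Request.
async function onRequest(request: Request): Promise<Response> {
  const domain = new URL(request.url).hostname
  return handleRequest(domain)
}
```

<p>Tests can call <code>handleRequest()</code> with plain values; no HTTP server or framework harness is required.</p>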
            <p>As with any other regular Worker, Functions also lets you set up <a href="https://developers.cloudflare.com/pages/platform/functions/bindings/">bindings</a> to interact with other Cloudflare products and features like <a href="https://developers.cloudflare.com/workers/runtime-apis/kv/">KV</a>, <a href="https://developers.cloudflare.com/r2/data-access/workers-api/workers-api-reference/">R2</a>, <a href="https://developers.cloudflare.com/d1/">D1</a>, <a href="https://developers.cloudflare.com/workers/runtime-apis/durable-objects/">Durable Objects</a>, and more. The list keeps growing.</p><p>We use Functions to implement a large portion of the official <a href="https://docs.joinmastodon.org/api/">Mastodon API</a> specification, making Wildebeest compatible with the existing ecosystem of other servers and client applications, and also to run our own read-only web frontend under the same project codebase.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Wz8EZKQyMvyEfDvH7cOV9/02183c976fe7c619c2fc4f8e99795463/3b.png" />
            
            </figure><p>Wildebeest’s web frontend uses <a href="https://qwik.builder.io/">Qwik</a>, a general-purpose web framework that is optimized for speed, uses modern concepts like the JSX JavaScript syntax extension and supports server-side-rendering (SSR) and static site generation (SSG).</p><p>Qwik provides a <a href="https://qwik.builder.io/integrations/deployments/cloudflare-pages/">Cloudflare Pages Adaptor</a> out of the box, so we use that (check our <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-qwik-site/">framework guide</a> to know more about how to deploy a Qwik site on Cloudflare Pages). For styling we use the <a href="https://tailwindcss.com/">Tailwind CSS</a> framework, which Qwik supports natively.</p><p>Our frontend website code and static assets can be found under the <a href="https://github.com/cloudflare/wildebeest/tree/main/frontend">/frontend</a> directory. The application is handled by the <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/%5B%5Bpath%5D%5D.ts">/functions/[[path]].ts</a> dynamic route, which basically catches all the non-API requests, and then <a href="https://github.com/cloudflare/wildebeest/blob/main/frontend/src/entry.cloudflare-pages.tsx">invokes</a> Qwik’s own internal router, <a href="https://qwik.builder.io/qwikcity/routing/overview/">Qwik City</a>, which takes over everything else after that.</p><p>The power and versatility of Pages and Functions routes make it possible to run both the backend APIs and a server-side-rendered dynamic client, effectively a full-stack app, under the same project.</p><p>Let's dig even deeper now, and understand how the server interacts with the other components in our architecture.</p>
    <div>
      <h3>D1</h3>
      <a href="#d1">
        
      </a>
    </div>
    <p>Wildebeest uses <a href="https://developers.cloudflare.com/d1/">D1</a>, <a href="https://www.cloudflare.com/developer-platform/products/d1/">Cloudflare’s first SQL database</a> for the Workers platform built on top of SQLite, now open to everyone in <a href="/d1-open-alpha/">alpha</a>, to store and query data. Here’s our schema:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67Xq9kbn6qh2XgRveGSdHt/3a020d6c71a89f0020b8fb7e87433601/4b.png" />
            
            </figure><p>The schema will probably change in the future, as we add more features. That’s fine, D1 supports <a href="https://developers.cloudflare.com/d1/platform/migrations/">migrations</a> which are great when you need to update your database schema without losing your data. With each new Wildebeest version, we can create a <a href="https://github.com/cloudflare/wildebeest/blob/main/migrations/0001_add-unique-following.sql">new migration</a> file if it requires database schema changes.</p>
            <pre><code>-- Migration number: 0001 	 2023-01-16T13:09:04.033Z

CREATE UNIQUE INDEX unique_actor_following ON actor_following (actor_id, target_actor_id);
</code></pre>
            <p>D1 exposes a powerful <a href="https://developers.cloudflare.com/d1/platform/client-api/">client API</a> that developers can use to manipulate and query data from Worker scripts, or in our case, Pages Functions.</p><p>Here’s a simplified example of how we interact with D1 when you start following someone on the Fediverse:</p>
            <pre><code>export async function addFollowing(db, actor, target, targetAcct): Promise&lt;UUID&gt; {
	// the row id is a UUID generated in the Worker (see the note on crypto.randomUUID() below)
	const id = crypto.randomUUID()
	const query = `INSERT OR IGNORE INTO actor_following (id, actor_id, target_actor_id, state, target_actor_acct) VALUES (?, ?, ?, ?, ?)`
	const out = await db
		.prepare(query)
		.bind(id, actor.id.toString(), target.id.toString(), STATE_PENDING, targetAcct)
		.run()
	return id
}
</code></pre>
            <p>Cloudflare’s culture of dogfooding and building on top of our own products means that we sometimes experience their shortcomings before our users. We did face a few challenges using D1, which is built on SQLite, to store our data. Here are two examples.</p><p><a href="https://www.w3.org/TR/activitypub/">ActivityPub</a> uses <a href="https://www.rfc-editor.org/rfc/rfc4122.txt">UUIDs</a> to identify objects and reference them in URIs extensively. These objects need to be stored in the database. Other databases like PostgreSQL provide built-in functions to <a href="https://www.postgresql.org/docs/current/functions-uuid.html">generate unique identifiers</a>. SQLite and D1 don't have that yet; it’s on our roadmap.</p><p>Worry not though, the Workers runtime supports <a href="https://developers.cloudflare.com/workers/runtime-apis/web-crypto/">Web Crypto</a>, so we use crypto.randomUUID() to get our unique identifiers. Check the <a href="https://github.com/cloudflare/wildebeest/blob/main/backend/src/activitypub/actors/inbox.ts">/backend/src/activitypub/actors/inbox.ts</a>:</p>
            <pre><code>export async function addObjectInInbox(db, actor, obj) {
	const id = crypto.randomUUID()
	const out = await db
		.prepare('INSERT INTO inbox_objects(id, actor_id, object_id) VALUES(?, ?, ?)')
		.bind(id, actor.id.toString(), obj.id.toString())
		.run()
}</code></pre>
            <p>Problem solved.</p><p>The other example is that we need to store dates with sub-second resolution. Again, databases like PostgreSQL have that:</p>
            <pre><code>psql&gt; select now();
2023-02-01 11:45:17.425563+00</code></pre>
            <p>However SQLite falls short with:</p>
            <pre><code>sqlite&gt; select datetime();
2023-02-01 11:44:02</code></pre>
            <p>We worked around this problem with a small hack using <a href="https://www.sqlite.org/lang_datefunc.html">strftime()</a>:</p>
            <pre><code>sqlite&gt; select strftime('%Y-%m-%d %H:%M:%f', 'NOW');
2023-02-01 11:49:35.624</code></pre>
            <p>See our <a href="https://github.com/cloudflare/wildebeest/blob/main/migrations/0000_initial.sql">initial SQL schema</a>, look for the <i>cdate</i> defaults.</p>
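<p>Equivalently, the timestamp can be produced in the Worker itself before binding it, instead of relying on the column default. A small illustrative helper (our own sketch, not Wildebeest code) that matches the <code>strftime()</code> format above:</p>

```typescript
// Format a Date the way SQLite's strftime('%Y-%m-%d %H:%M:%f') does:
// 'YYYY-MM-DD HH:MM:SS.mmm' in UTC, with millisecond resolution.
function toSqliteTimestamp(d: Date): string {
  // toISOString() gives e.g. '2023-02-01T11:49:35.624Z';
  // keep the first 23 characters and swap the 'T' separator for a space.
  return d.toISOString().slice(0, 23).replace('T', ' ')
}
```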
    <div>
      <h3>Images</h3>
      <a href="#images">
        
      </a>
    </div>
    <p>Mastodon content has a lot of rich media. We don't need to reinvent the wheel and build an image pipeline; Cloudflare Images <a href="https://developers.cloudflare.com/images/">provides APIs</a> to upload, transform, and serve optimized images from our global CDN, so it's the perfect fit for Wildebeest's requirements.</p><p>Things like posting content images, the profile avatar, or headers, all use the Images APIs. See <a href="https://github.com/cloudflare/wildebeest/blob/main/backend/src/media/image.ts">/backend/src/media/image.ts</a> to understand how we interface with Images.</p>
            <pre><code>async function upload(file: File, config: Config): Promise&lt;UploadResult&gt; {
	const formData = new FormData()
	const url = `https://api.cloudflare.com/client/v4/accounts/${config.accountId}/images/v1`

	formData.set('file', file)

	const res = await fetch(url, {
		method: 'POST',
		body: formData,
		headers: {
			authorization: 'Bearer ' + config.apiToken,
		},
	})

	const data = await res.json()
	return data.result
}</code></pre>
            <p>If you're curious about Images for your next project, here's a tutorial on <a href="https://developers.cloudflare.com/images/cloudflare-images/tutorials/integrate-cloudflare-images/">how to integrate Cloudflare Images</a> on your website.</p><p>Cloudflare Images is also available from the dashboard. You can use it to browse or manage your catalog quickly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1A4gwBFdbSGDvS4DAJRyhR/95849178c4b10c82d5f619ffc1153ba0/5b.png" />
            
            </figure>
    <div>
      <h3>Queues</h3>
      <a href="#queues">
        
      </a>
    </div>
    <p>The <a href="https://www.w3.org/TR/activitypub/">ActivityPub</a> protocol is chatty by design. Depending on the size of your social graph, there might be a lot of back-and-forth HTTP traffic. We can’t have the clients blocked waiting for hundreds of Fediverse message deliveries every time someone posts something.</p><p>We needed a way to work asynchronously and launch background jobs to offload data processing away from the main app and keep the clients snappy. The official Mastodon server has a similar strategy using <a href="https://docs.joinmastodon.org/admin/scaling/#sidekiq">Sidekiq</a> to do background processing.</p><p>Fortunately, we don't need to worry about any of this complexity either. <a href="https://developers.cloudflare.com/queues/">Cloudflare Queues</a> allows developers to send and receive messages with guaranteed delivery, and offload work from your Workers' requests, effectively providing you with asynchronous batch job capabilities.</p><p>To put it simply: you have a queue, essentially a buffered list that scales automatically; one or more producers that create structured messages (JSON objects in our case; you define their schema) and put them in the queue; and one or more consumers that subscribe to that queue, receive its messages, and process them at their own speed.</p>
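<p>Stripped of the platform specifics, that producer/consumer flow can be sketched with a plain in-memory stand-in. This is illustrative TypeScript, not the Queues API; the real service adds durable storage, batching timeouts, and retries:</p>

```typescript
// In-memory stand-in for the producer/consumer flow described above.
// Producers append structured messages; a consumer drains them in batches,
// at its own pace, independently of the producers.
interface Message {
  body: unknown
}

class InMemoryQueue {
  private buffer: Message[] = []

  // Producer side: enqueue one message (the schema of `body` is up to you).
  send(body: unknown): void {
    this.buffer.push({ body })
  }

  // Consumer side: take up to maxBatchSize messages off the front.
  receiveBatch(maxBatchSize: number): Message[] {
    return this.buffer.splice(0, maxBatchSize)
  }

  get size(): number {
    return this.buffer.length
  }
}
```

<p>The buffer decouples the two sides: producers never wait for consumers, which is exactly the property that keeps the client-facing APIs snappy.</p>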
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5m1TSSTZesMX1jt7K7YpHS/c192aa543426e12c03b2c753f4e4b8c4/6b.png" />
            
            </figure><p>Here’s the <a href="https://developers.cloudflare.com/queues/learning/how-queues-works/">How Queues works</a> page for more information.</p><p>In our case, the main application produces queue jobs whenever any incoming API call requires long, expensive operations. For example, when someone posts, sorry, <i>toots</i> something, we need to broadcast that to their followers' inboxes, potentially triggering many requests to remote servers. <a href="https://github.com/cloudflare/wildebeest/blob/main/backend/src/activitypub/deliver.ts">Here we are</a> queueing a job for that, thus freeing the APIs to keep responding:</p>
            <pre><code>export async function deliverFollowers(
	db: D1Database,
	from: Actor,
	activity: Activity,
	queue: Queue
) {
	const followers = await getFollowers(db, from)

	const messages = followers.map((id) =&gt; {
		const body = {
			activity: JSON.parse(JSON.stringify(activity)),
			actorId: from.id.toString(),
			toActorId: id,
		}
		return { body }
	})

	await queue.sendBatch(messages)
}</code></pre>
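<p>One practical note: batch send APIs usually cap how many messages a single call accepts, so a very large follower list may need to be split first. Here is a minimal sketch in plain TypeScript (the <code>chunk</code> helper and the limit of 100 are illustrative assumptions, not Wildebeest code; check the Queues limits documentation for the real number):</p>

```typescript
// Split an array into slices of at most `size` elements, so each slice
// fits within a batch API's per-call limit (e.g. queue.sendBatch()).
function chunk<T>(items: T[], size: number): T[][] {
	const out: T[][] = []
	for (let i = 0; i < items.length; i += size) {
		out.push(items.slice(i, i + size))
	}
	return out
}

// Hypothetical usage inside deliverFollowers:
// for (const part of chunk(messages, 100)) {
// 	await queue.sendBatch(part)
// }
```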
            <p>Similarly, we don't want to stop the main APIs when remote servers deliver messages to our instance inboxes. Here's Wildebeest creating asynchronous jobs when it <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/ap/users/%5Bid%5D/inbox.ts">receives messages</a> in the inbox:</p>
            <pre><code>export async function handleRequest(
	domain: string,
	db: D1Database,
	id: string,
	activity: Activity,
	queue: Queue,
): Promise&lt;Response&gt; {
	const handle = parseHandle(id)

	const actorId = actorURL(domain, handle.localPart)
	const actor = await actors.getPersonById(db, actorId)

	// creates job
	await queue.send({
		type: MessageType.Inbox,
		actorId: actor.id.toString(),
		activity,
	})

	// frees the API
	return new Response('', { status: 200 })
}</code></pre>
            <p>And the final piece of the puzzle: our <a href="https://github.com/cloudflare/wildebeest/tree/main/consumer">queue consumer</a> runs in a separate Worker, independently from the Pages project. The consumer listens for new messages and processes them sequentially, at its own pace, freeing everyone else from blocking. When things get busy, the queue grows its buffer, things keep running, and the jobs eventually get dispatched, keeping the main APIs free for the critical stuff: responding to remote servers and clients as quickly as possible.</p>
            <pre><code>export default {
	async queue(batch, env, ctx) {
		for (const message of batch.messages) {
			…

			switch (message.body.type) {
				case MessageType.Inbox: {
					await handleInboxMessage(...)
					break
				}
				case MessageType.Deliver: {
					await handleDeliverMessage(...)
					break
				}
			}
		}
	},
}</code></pre>
            <p>If you want to get your hands dirty with Queues, here’s a simple example on <a href="https://developers.cloudflare.com/queues/examples/send-errors-to-r2/">Using Queues to store data in R2</a>.</p>
    <div>
      <h3>Caching and Durable Objects</h3>
      <a href="#caching-and-durable-objects">
        
      </a>
    </div>
    <p>Caching repetitive operations is yet another strategy for improving performance in complex applications that require data processing. A famous Netscape developer, Phil Karlton, once said: "There are only two hard things in Computer Science: <b>cache invalidation</b> and naming things."</p><p>Cloudflare obviously knows a lot about caching since <a href="https://developers.cloudflare.com/cache/">it's a core feature</a> of our global CDN. We also provide <a href="https://developers.cloudflare.com/workers/learning/how-kv-works/">Workers KV</a> to our customers, a global, low-latency, key-value data store that anyone can use to cache data objects in our data centers and build fast websites and applications.</p><p>However, KV achieves its performance by being eventually consistent. While this is fine for many applications and use cases, it's not ideal for others.</p><p>The ActivityPub protocol is highly transactional and can't afford eventual consistency. Here's an example: generating complete timelines is expensive, so we cache that operation. However, when you post something, we need to invalidate that cache before we reply to the client. Otherwise, the new post won't be in the timeline, and the client can fail with an error because it doesn't see it. This actually happened to us with one of the most popular clients.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mBZfs5UZumkHzh9ITpUSn/1f9d5e53e7d61417d962a9fd566df9e6/7b.png" />
            
            </figure><p>We needed to get clever. The team discussed a few options. Fortunately, our API catalog has plenty to choose from. Meet <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects/">Durable Objects</a>.</p><p>Durable Objects are single-instance Workers that provide a transactional storage API. They're ideal when you need central coordination, strong consistency, and state persistence. You can use Durable Objects in cases like handling the state of <a href="https://developers.cloudflare.com/workers/learning/using-websockets/#durable-objects-and-websocket-state">multiple WebSocket</a> connections, coordinating and routing messages in a <a href="https://github.com/cloudflare/workers-chat-demo">chatroom</a>, or even <a href="/doom-multiplayer-workers/">running a multiplayer game like Doom</a>.</p><p>You know where this is going now. Yes, we implemented our key-value caching subsystem for Wildebeest <a href="https://github.com/cloudflare/wildebeest/tree/main/do">on top of a Durable Object</a>. By taking advantage of the DO's native transactional storage API, we can have strong guarantees that whenever we create or change a key, the next read will always return the latest version.</p><p>The idea is so simple and effective that it took us literally a <a href="https://github.com/cloudflare/wildebeest/blob/main/do/src/index.ts">few lines of code</a> to implement a key-value cache with two primitives: HTTP PUT and GET.</p>
            <pre><code>export class WildebeestCache {
	async fetch(request: Request) {
		if (request.method === 'GET') {
			const { pathname } = new URL(request.url)
			const key = pathname.slice(1)
			const value = await this.storage.get(key)
			return new Response(JSON.stringify(value))
		}

		if (request.method === 'PUT') {
			const { key, value } = await request.json()
			await this.storage.put(key, value)
			return new Response('', { status: 201 })
		}
	}
}</code></pre>
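<p>To see those two primitives in action outside a Worker, here is the same GET/PUT contract driven through standard Request/Response objects, with an in-memory Map standing in for the Durable Object's transactional storage (a sketch for illustration; it needs a runtime with the fetch classes, such as Node 18+):</p>

```typescript
// Same contract as the cache above, with a Map in place of the Durable
// Object's transactional storage API so it can run anywhere.
class InMemoryCache {
	private storage = new Map<string, unknown>()

	async fetch(request: Request): Promise<Response> {
		if (request.method === 'GET') {
			const { pathname } = new URL(request.url)
			const key = pathname.slice(1) // strip the leading '/'
			const value = this.storage.get(key)
			return new Response(JSON.stringify(value ?? null)) // null on a miss
		}
		if (request.method === 'PUT') {
			const { key, value } = (await request.json()) as { key: string; value: unknown }
			this.storage.set(key, value)
			return new Response('', { status: 201 })
		}
		return new Response('method not allowed', { status: 405 })
	}
}
```

<p>In the real deployment the class runs inside a Durable Object, and Workers reach it through the binding's stub rather than by constructing it directly.</p>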
            <p>Strong consistency it is. Let's move to user registration and authentication now.</p>
    <div>
      <h3>Zero Trust Access</h3>
      <a href="#zero-trust-access">
        
      </a>
    </div>
    <p>The official Mastodon server <a href="https://docs.joinmastodon.org/user/signup/">handles user registrations</a>, typically using email, before you can choose your local username and start using the service. Building user registration and authentication from scratch, though, can be daunting and time-consuming.</p><p>Furthermore, people don't want to create new credentials for every new service they want to use and instead want more convenient OAuth-like authorization and authentication methods so that they can reuse their existing Apple, Google, or GitHub accounts.</p><p>We wanted to simplify things using Cloudflare’s built-in features. Needless to say, we have a product that handles user onboarding, authentication, and <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/policy-management/">access policies</a> to any application behind Cloudflare; it's called <a href="https://developers.cloudflare.com/cloudflare-one/">Zero Trust</a>. So we put Wildebeest behind it.</p><p>Zero Trust Access can either do one-time PIN (<a href="https://en.wikipedia.org/wiki/One-time_password">OTP</a>) authentication using email or single sign-on (SSO) with many identity providers (examples: Google, Facebook, GitHub, LinkedIn), including any generic one supporting <a href="https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/generic-saml/">SAML 2.0</a>.</p><p>When you start using Wildebeest with a client, you don't need to register at all. Instead, you go straight to log in, which will redirect you to the Access page and handle the authentication according to the policy that you, the owner of your instance, configured.</p><p>The policy defines who can authenticate, and how.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zDpfgueYrKRmhmNvHCBGX/68b6b579fcb33110566b07ea6e5a3d3e/8b.png" />
            
            </figure><p>Once authenticated, Access will redirect you back to Wildebeest. The first time this happens, we will detect that we don't have any information about you and ask for your Username and Display Name. You will be asked only once, and this information will be used to create your public Mastodon profile.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76J7DmTtShD7slpawYXNAE/ccc908ed0dffb75a7ce6afc7b0b55510/9b.png" />
            
            </figure><p>Technically, Wildebeest implements the <a href="https://docs.joinmastodon.org/spec/oauth/#implementation">OAuth 2 specification</a>. <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> protects the <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/oauth/authorize.ts">/oauth/authorize</a> endpoint and issues a valid <a href="https://developers.cloudflare.com/cloudflare-one/identity/authorization-cookie/validating-json/">JWT token</a> in the request headers when the user is authenticated. Wildebeest then reads and verifies the JWT and returns an authorization code in the URL redirect.</p><p>Once the client has an authorization code, it can use the <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/oauth/token.ts">/oauth/token</a> endpoint to obtain an API access token. Subsequent API calls inject a bearer token in the Authorization header:</p><p><code>Authorization: Bearer access_token</code></p>
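<p>Decoding the payload of such a JWT is plain base64url handling. A minimal sketch (illustrative only: it deliberately skips signature verification, which a real deployment must perform against the Access public keys, as described in the linked validation docs):</p>

```typescript
// Decode (without verifying!) the payload segment of a JWT.
// A JWT is three base64url segments joined by dots: header.payload.signature.
function decodeJwtPayload(token: string): Record<string, unknown> {
	const payload = token.split('.')[1]
	// base64url -> base64, then decode and parse the JSON claims
	const b64 = payload.replace(/-/g, '+').replace(/_/g, '/')
	const json = Buffer.from(b64, 'base64').toString('utf8')
	return JSON.parse(json)
}
```

<p>Access injects the token in the <code>Cf-Access-Jwt-Assertion</code> request header; after verifying it, Wildebeest uses the identity it carries to issue the OAuth authorization code.</p>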
    <div>
      <h3>Deployment and Continuous Integration</h3>
      <a href="#deployment-and-continuous-integration">
        
      </a>
    </div>
    <p>We didn't want to run a managed service for Mastodon as it would somewhat diminish the concepts of federation and data ownership. Also, we recognize that ActivityPub and Mastodon are emerging, fast-paced technologies that will evolve quickly and in ways that are difficult to predict just yet.</p><p>For these reasons, we thought the best way to help the ecosystem right now would be to provide an open-source software package that anyone could use, customize, improve, and deploy on top of our cloud. Cloudflare will obviously keep improving Wildebeest and support the community, but we want to give our Fediverse maintainers complete control and ownership of their instances and data.</p><p>The remaining question was, how do we distribute the Wildebeest bundle and make it easy to deploy into someone's account when it requires configuring so many Cloudflare features, and how do we facilitate updating the software over time?</p><p>The solution ended up being a clever mix of using GitHub with <a href="https://github.com/features/actions">GitHub Actions</a>, <a href="https://developers.cloudflare.com/workers/platform/deploy-button/">Deploy with Workers</a>, and <a href="https://github.com/cloudflare/terraform-provider-cloudflare">Terraform</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5V8fRfu3U03n2ZVNtsh01L/404637763a8476b425a562ff5bbf8739/Screenshot-2023-02-08-at-11.13.05-AM-1.png" />
            
            </figure><p>The Deploy with Workers button is a specially crafted link that auto-generates a workflow page where the user gets asked some questions, and Cloudflare handles authorizing GitHub to deploy to Workers, automatically forks the Wildebeest repository into the user's account, and then configures and deploys the project using a <a href="https://github.com/marketplace/actions/deploy-to-cloudflare-workers-with-wrangler">GitHub Actions</a> workflow.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3MhyoOAbQEjlNnEhwl70Jm/5000c8c1dc1dfea549ee6ca62f8460b4/10b.png" />
            
            </figure><p>A GitHub Actions <a href="https://docs.github.com/en/actions/using-workflows/about-workflows">workflow</a> is a YAML file that declares what to do in every step. Here’s the <a href="https://github.com/cloudflare/wildebeest/blob/main/.github/workflows/deploy.yml">Wildebeest workflow</a> (simplified):</p>
            <pre><code>name: Deploy
on:
  push:
    branches:
      - main
  repository_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - name: Ensure CF_DEPLOY_DOMAIN and CF_ZONE_ID are defined
        ...
      - name: Create D1 database
        uses: cloudflare/wrangler-action@2.0.0
        with:
          command: d1 create wildebeest-${{ env.OWNER_LOWER }}
        ...
      - name: retrieve Zero Trust organization
        ...
      - name: retrieve Terraform state KV namespace
        ...
      - name: download VAPID keys
        ...
      - name: Publish DO
        ...
      - name: Configure
        run: terraform plan &amp;&amp; terraform apply -auto-approve
      - name: Create Queue
        ...
      - name: Publish consumer
        ...
      - name: Publish
        uses: cloudflare/wrangler-action@2.0.0
        with:
          command: pages publish --project-name=wildebeest-${{ env.OWNER_LOWER }} .</code></pre>
            
    <div>
      <h4>Updating Wildebeest</h4>
      <a href="#updating-wildebeest">
        
      </a>
    </div>
    <p>This workflow runs automatically every time the main branch changes, so updating Wildebeest is as easy as synchronizing the upstream official repository with the fork. You don't even need to use git commands for that; GitHub provides a convenient Sync button in the UI that you can simply click.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vkcs7XzLMZdihq7z5n2L5/b83e8499970012ebcf0b47e686b6518a/11b.png" />
            
            </figure><p>What's more, updates are incremental and non-destructive. When the GitHub Actions workflow redeploys Wildebeest, we only make the necessary changes to your configuration and nothing else. You don't lose your data; we don't need to delete your existing configurations. Here’s how we achieved this:</p><p>We use <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs">Terraform</a>, a declarative configuration language and tool that interacts with our APIs and can query and configure your Cloudflare features. Here's the trick: whenever we apply a new configuration, we keep a copy of the Terraform state for Wildebeest in a <a href="https://developers.cloudflare.com/workers/learning/how-kv-works/">Cloudflare KV</a> key. When a new deployment is triggered, we get that state from the KV copy, calculate the differences, then change only what's necessary.</p><p>Data loss is not a problem either because, as you read above, D1 supports <a href="https://developers.cloudflare.com/d1/platform/migrations/">migrations</a>. If we need to add a new column or a new table, we don't need to destroy the database and create it again; we just apply the SQL necessary for that change.</p>
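<p>The "change only what's necessary" idea applies to migrations too: it boils down to a set difference between the migrations already applied and the ordered list shipped with the new version. A sketch with hypothetical names (D1's migration tooling tracks this for you; the function just illustrates the delta computation):</p>

```typescript
// Given the migrations already recorded as applied and the full ordered
// list shipped with the current release, return only those still to run.
function pendingMigrations(applied: string[], available: string[]): string[] {
	const done = new Set(applied)
	return available.filter((name) => !done.has(name))
}
```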
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UW6Qm1KE662wiFVgrWFPZ/ba80730c09161abc81c85bb89fd758b3/12b.png" />
            
            </figure>
    <div>
      <h3>Protection, optimization and observability, naturally</h3>
      <a href="#protection-optimization-and-observability-naturally">
        
      </a>
    </div>
    <p>Once Wildebeest is up and running, you can protect it from bad traffic and malicious actors. Cloudflare offers you <a href="https://www.cloudflare.com/ddos/">DDoS</a>, <a href="https://www.cloudflare.com/waf/">WAF</a>, and <a href="https://www.cloudflare.com/products/bot-management/">Bot Management</a> protection out of the box at a click's distance.</p><p>Likewise, you'll get instant network and content delivery optimizations from our products and <a href="https://www.cloudflare.com/analytics/">analytics</a> on how your Wildebeest instance is performing and being used.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EYUh9pE5NNPnpj9mwVSfz/7d97cf99ad29cd9436b8e48c7918cd24/13b.png" />
            
            </figure>
    <div>
      <h3>ActivityPub, WebFinger, NodeInfo and Mastodon APIs</h3>
      <a href="#activitypub-webfinger-nodeinfo-and-mastodon-apis">
        
      </a>
    </div>
    <p>Mastodon popularized the Fediverse concept, but many of the underlying technologies used have been around for quite a while. This is one of those rare moments when everything finally comes together to create a working platform that answers an actual use case for Internet users. Let's quickly go through the protocols that Wildebeest had to implement:</p>
    <div>
      <h4>ActivityPub</h4>
      <a href="#activitypub">
        
      </a>
    </div>
    <p><a href="https://www.w3.org/TR/activitypub/">ActivityPub</a> is a decentralized social networking protocol and has been around as a W3C recommendation since at least 2018. It defines client APIs for creating and manipulating content and server-to-server APIs for content exchange and notifications, also known as federation. ActivityPub uses <a href="https://www.w3.org/TR/activitystreams-core/">ActivityStreams</a>, an even older W3C protocol, for its vocabulary.</p><p>The concepts of <a href="https://www.w3.org/TR/activitypub/#actors">Actors</a> (profiles), messages or <a href="https://www.w3.org/TR/activitypub/#obj">Objects</a> (the toots), <a href="https://www.w3.org/TR/activitypub/#inbox">inbox</a> (where you receive toots from people you follow), and <a href="https://www.w3.org/TR/activitypub/#outbox">outbox</a> (where you send your toots to the people who follow you), among many other actions and activities, are all defined in the ActivityPub specification.</p><p>Here’s our folder with the <a href="https://github.com/cloudflare/wildebeest/tree/main/backend/src/activitypub">ActivityPub implementation</a>.</p>
            <pre><code>import type { APObject } from 'wildebeest/backend/src/activitypub/objects'
import type { Actor } from 'wildebeest/backend/src/activitypub/actors'

export async function addObjectInInbox(db, actor, obj) {
	const id = crypto.randomUUID()
	const out = await db
		.prepare('INSERT INTO inbox_objects(id, actor_id, object_id) VALUES(?, ?, ?)')
		.bind(id, actor.id.toString(), obj.id.toString())
		.run()
}
</code></pre>
            
    <div>
      <h4>WebFinger</h4>
      <a href="#webfinger">
        
      </a>
    </div>
    <p>WebFinger is a simple HTTP protocol used to discover information about any entity, like a profile, a server, or a specific feature. It resolves URIs to resource objects.</p><p>Mastodon uses <a href="https://www.rfc-editor.org/rfc/rfc7033">WebFinger</a> lookups to discover information about remote users. For example, say you want to interact with @<a>user@example.com</a>. Your local server would <a href="https://github.com/cloudflare/wildebeest/blob/main/backend/src/webfinger/index.ts">request</a> <a href="https://example.com/.well-known/webfinger?resource=acct:user@example.com">https://example.com/.well-known/webfinger?resource=acct:user@example.com</a> (using the <a href="https://www.rfc-editor.org/rfc/rfc7565">acct scheme</a>) and get something like this:</p>
            <pre><code>{
    "subject": "acct:user@example.com",
    "aliases": [
        "https://example.com/ap/users/user"
    ],
    "links": [
        {
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://example.com/ap/users/user"
        }
    ]
}
</code></pre>
            <p>Now we know how to interact with <code>@user@example.com</code>, using the <code>https://example.com/ap/users/user</code> endpoint.</p><p>Here’s our WebFinger <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/.well-known/webfinger.ts">response</a>:</p>
            <pre><code>export async function handleRequest(request, db): Promise&lt;Response&gt; {
	…
	const jsonLink = /* … link to actor */

	const res: WebFingerResponse = {
		subject: `acct:...`,
		aliases: [jsonLink],
		links: [
			{
				rel: 'self',
				type: 'application/activity+json',
				href: jsonLink,
			},
		],
	}
	return new Response(JSON.stringify(res), { headers })
}</code></pre>
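<p>The client side of that exchange is mostly string handling: turn a handle into the well-known URL, then pick the <code>rel: "self"</code> link out of the JSON. A sketch with hypothetical helper names:</p>

```typescript
type WebFingerLink = { rel: string; type?: string; href?: string }
type WebFingerResponse = { subject: string; aliases?: string[]; links: WebFingerLink[] }

// Build the lookup URL for an @user@domain handle (acct: scheme, RFC 7565).
function webFingerURL(handle: string): string {
	const [user, domain] = handle.replace(/^@/, '').split('@')
	return `https://${domain}/.well-known/webfinger?resource=acct:${user}@${domain}`
}

// Extract the ActivityPub actor URL from a WebFinger response.
function actorHref(res: WebFingerResponse): string | undefined {
	return res.links.find((l) => l.rel === 'self' && l.type === 'application/activity+json')?.href
}
```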
            
    <div>
      <h4>Mastodon API</h4>
      <a href="#mastodon-api">
        
      </a>
    </div>
    <p>Finally, things like setting your server information, profile information, generating timelines, notifications, and searches, are all Mastodon-specific APIs. The Mastodon open-source project defines a catalog of REST APIs, and you can find all the documentation for them on <a href="https://docs.joinmastodon.org/api/">their website</a>.</p><p>Our Mastodon API implementation can be found <a href="https://github.com/cloudflare/wildebeest/tree/main/functions/api">here</a> (REST endpoints) and <a href="https://github.com/cloudflare/wildebeest/tree/main/backend/src/mastodon">here</a> (backend primitives). Here’s an example of Mastodon’s server information <a href="https://docs.joinmastodon.org/methods/instance/#v2">/api/v2/instance</a> implemented by <a href="https://github.com/cloudflare/wildebeest/blob/main/functions/api/v2/instance.ts">Wildebeest</a>:</p>
            <pre><code>export async function handleRequest(domain, db, env) {

	const res: InstanceConfigV2 = {
		domain,
		title: env.INSTANCE_TITLE,
		version: getVersion(),
		source_url: 'https://github.com/cloudflare/wildebeest',
		description: env.INSTANCE_DESCR,
		thumbnail: {
			url: DEFAULT_THUMBNAIL,
		},
		languages: ['en'],
		registrations: {
			enabled: false,
		},
		contact: {
			email: env.ADMIN_EMAIL,
		},
		rules: [],
	}

	return new Response(JSON.stringify(res), { headers })
}</code></pre>
            <p>Wildebeest also implements <a href="https://github.com/cloudflare/wildebeest/tree/main/backend/src/webpush">WebPush</a> for client notifications and <a href="https://github.com/cloudflare/wildebeest/tree/main/functions/nodeinfo">NodeInfo</a> for server information.</p><p>Other Mastodon-compatible servers had to implement all these protocols <a href="https://pleroma.social/">too</a>; Wildebeest is one of them. The community is very active in discussing future enhancements; we will keep improving our compatibility and adding support for more features over time, ensuring that Wildebeest plays well with the emerging Fediverse ecosystem of servers and clients.</p>
    <div>
      <h3>Get started now</h3>
      <a href="#get-started-now">
        
      </a>
    </div>
    <p>Enough about technology; let's get you into the Fediverse. We tried to detail all the steps to deploy your server. To start using Wildebeest, head to the public GitHub repository and check our <a href="https://github.com/cloudflare/wildebeest/blob/main/README.md">Get Started tutorial</a>.</p><p>Most of Wildebeest's dependencies offer a generous free plan that allows you to try them for personal or hobby projects that aren't business-critical; however, you will need to subscribe to an <a href="https://www.cloudflare.com/products/cloudflare-images/">Images</a> plan (the lowest tier should be enough for most needs) and, depending on your server load, <a href="https://developers.cloudflare.com/workers/platform/limits/#unbound-usage-model">Workers Unbound</a> (again, the minimum cost should be plenty for most use cases).</p><p>Following our dogfooding mantra, Cloudflare is also officially joining the Fediverse today. You can start following our Mastodon accounts and get the same experience of having regular updates from Cloudflare as you get from us on other social platforms, using your favorite Mastodon apps. These accounts are entirely running on top of a Wildebeest server:</p><ul><li><p><a href="https://cloudflare.social/@cloudflare">@cloudflare@cloudflare.social</a> - Our main account</p></li><li><p><a href="https://cloudflare.social/@radar">@radar@cloudflare.social</a> - Cloudflare Radar</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2QJsY2PkGqLfVQCDJc1AlH/d52823cfd8d2d9e7de9845686790a3bf/14b.png" />
            
            </figure><p>Wildebeest is compatible with most client apps; it is confirmed to work with the official Mastodon <a href="https://play.google.com/store/apps/details?id=org.joinmastodon.android">Android</a> and <a href="https://apps.apple.com/us/app/mastodon-for-iphone/id1571998974">iOS</a> apps, <a href="https://pinafore.social/">Pinafore</a>, <a href="https://mastodon.social/@JPEGuin/109315609418460036">Mammoth</a>, and <a href="https://tooot.app/">tooot</a>, and we are looking into others like <a href="https://tapbots.com/ivory/">Ivory</a>. If your favorite isn’t working, please submit an <a href="https://github.com/cloudflare/wildebeest/issues">issue here</a>; we’ll do our best to help support it.</p>
    <div>
      <h3>Final words</h3>
      <a href="#final-words">
        
      </a>
    </div>
    <p>Wildebeest was built entirely on top of our <a href="/welcome-to-the-supercloud-and-developer-week-2022/">Supercloud</a> stack. It is one of the most complete and complex projects we have created, drawing on a wide range of Cloudflare products and features.</p><p>We hope this write-up inspires you not only to try deploying Wildebeest and joining the Fediverse, but also to build your next application, however demanding it may be, on top of Cloudflare.</p><p>Wildebeest is a minimally viable Mastodon-compatible server right now, but we will keep improving it with more features and supporting it over time; after all, we're using it for our official accounts. It is also open source, meaning you are more than welcome to contribute with pull requests or feedback.</p><p>In the meantime, we opened a <a href="https://discord.com/channels/595317990191398933/1064925651464896552">Wildebeest room</a> on our <a href="https://discord.gg/cloudflaredev">Developers Discord Server</a> and are keeping an eye on the GitHub repo <a href="https://github.com/cloudflare/wildebeest/issues">issues</a> tab. Feel free to engage with us; the team is eager to know how you use Wildebeest and answer your questions.</p><p><i>PS: The code snippets in this blog were simplified for readability and space (the TypeScript types and error handling code were removed, for example). Please refer to the GitHub repo links for the complete versions.</i></p>
            <category><![CDATA[Wildebeest]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[SASE]]></category>
            <guid isPermaLink="false">5dmHcGVas7xv8tKbRbWLWN</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Sven Sauleau</dc:creator>
        </item>
        <item>
            <title><![CDATA[ICYMI: Developer Week 2022 announcements]]></title>
            <link>https://blog.cloudflare.com/icymi-developer-week-2022-announcements/</link>
            <pubDate>Fri, 18 Nov 2022 21:13:51 GMT</pubDate>
            <description><![CDATA[ This week we made over 30 announcements, in case you missed any here’s a quick round-up.  ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hJumg76O4azWrTEzO7r35/c611fec9c69576a134d3fa3ee61f714a/2022-Developer-Week-Hero-Dark_b-1.png" />
            
            </figure><p>Developer Week 2022 has come to a close. Over the last week we’ve shared with you 31 posts on what you can build on Cloudflare and our vision and roadmap on where we’re headed. We shared product announcements, customer and partner stories, and provided technical deep dives. In case you missed any of the posts here’s a handy recap.</p>
    <div>
      <h2>Product and feature announcements</h2>
      <a href="#product-and-feature-announcements">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Announcement</th>
    <th>Summary</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/welcome-to-the-supercloud-and-developer-week-2022/">Welcome to the Supercloud (and Developer Week 2022)</a></td>
    <td>Our vision of the cloud -- a model of cloud computing that promises to make developers highly productive at scaling from one to Internet-scale in the most flexible, efficient, and economical way.</td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-queues-open-beta">Build applications of any size on Cloudflare with the Queues open beta</a></td>
    <td>Build performant and resilient distributed applications with Queues. Available to all developers with a paid Workers plan. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-r2-super-slurper/">Migrate from S3 easily with the R2 Super Slurper</a></td>
    <td>A tool to easily and efficiently move objects from your existing storage provider to R2. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-workers-templates/">Get started with Cloudflare Workers with ready-made templates</a></td>
    <td>See what’s possible with Workers and get building faster with these starter templates. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/cache-reserve-open-beta/">Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve</a></td>
    <td>Cache Reserve is graduating to open beta – users can now test and integrate it into their content delivery strategy without any additional waiting. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/announcing-logs-engine/">Store and process your Cloudflare Logs... with Cloudflare</a></td>
    <td>Query Cloudflare logs stored on R2. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/d1-open-alpha/">UPDATE Supercloud SET status = 'open alpha' WHERE product = 'D1'</a></td>
    <td>D1, our first global relational database, is in open alpha. Start building and share your feedback with us. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/introducing-workers-browser-rendering-api/">Automate an isolated browser instance with just a few lines of code</a></td>
    <td>The Browser Rendering API is an out-of-the-box solution to run browser automation tasks with Puppeteer in Workers.
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/mutual-tls-for-workers/">Bringing authentication and identification to Workers through Mutual TLS</a></td>
    <td>Send outbound requests with Workers through a mutually authenticated channel. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/pages-function-goes-GA/">Spice up your sites on Cloudflare Pages with Pages Functions General Availability</a></td>
    <td>Easily add dynamic content to your Pages projects with Functions.  </td>    
  </tr>
     <tr>
    <td><a href="https://blog.cloudflare.com/launchpad-fall-22/">Announcing the first Workers Launchpad cohort and growth of the program to $2 billion</a></td>
    <td>We were blown away by the interest in the Workers Launchpad Funding Program and are proud to introduce the first cohort. 
</td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/snippets-announcement">The most programmable Supercloud with Cloudflare Snippets</a></td>
    <td>Modify traffic routed through the Cloudflare CDN without having to write a Worker. 
</td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/deployments-for-workers">Keep track of Workers’ code and configuration changes with Deployments</a></td>
    <td>Track changes to a Worker’s configuration, bindings, and code. 
</td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/workers-logpush-GA">Send Cloudflare Workers logs to a destination of your choice with Workers Trace Events Logpush</a></td>
    <td>Gain visibility into your Workers when logs are sent to your analytics platform or <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>. Available to all users on a Workers paid plan. </td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/improving-workers-types">Improved Workers TypeScript support</a></td>
    <td>Based on feedback from users, we’ve improved our types and are open-sourcing the automatic generation scripts. </td>    
  </tr>
    

</tbody>
</table>
    <div>
      <h3>Technical deep dives</h3>
      <a href="#technical-deep-dives">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Announcement</th>
    <th>Summary</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/standards-compliant-workers-api/">The road to a more standards-compliant Workers API</a></td>
      <td>An update on the work the <a href="https://github.com/wintercg">WinterCG</a> is doing on the creation of common API standards in JavaScript runtimes and how Workers is implementing them.   
</td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/r2-rayid-retrieval">Indexing millions of HTTP requests using Durable Objects</a></td>
    <td>Indexing and querying millions of logs stored in R2 using Workers, Durable Objects, and the Streams API.  </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/building-a-better-developer-experience-through-api-documentation">Iteration isn't just for code: here are our latest API docs</a></td>
    <td>We’ve revamped our API reference documentation to standardize our API content and improve the overall developer experience when using the Cloudflare APIs. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/making-static-sites-dynamic-with-cloudflare-d1">Making static sites dynamic with D1</a></td>
    <td>A template to build a D1-based comments API. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/open-api-transition">The Cloudflare API now uses OpenAPI schemas</a></td>
    <td>OpenAPI schemas are now available for the Cloudflare API. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/pages-full-stack-frameworks">Server-side render full stack applications with Pages Functions</a></td>
    <td>Run server-side rendering in a Function using a variety of frameworks including Qwik, Astro, and SolidStart.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/fragment-piercing">Incremental adoption of micro-frontends with Cloudflare Workers</a></td>
    <td>How to replace selected elements of a legacy client-side rendered application with server-side rendered fragments using Workers. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/technology-behind-radar2/">How we built it: the technology behind Cloudflare Radar 2.0</a></td>
    <td>Details on how we rebuilt Radar using Pages, Remix, Workers, and R2. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/terraforming-cloudflare-at-cloudflare">How Cloudflare uses Terraform to manage Cloudflare</a></td>
    <td>How we made it easier for our developers to make changes with the Cloudflare Terraform provider. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/network-performance-update-developer-week/">Network performance update: Developer Week 2022</a></td>
    <td>See how fast Cloudflare Workers are compared to other solutions.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/using-analytics-engine-to-improve-analytics-engine">How Cloudflare instruments services using Workers Analytics Engine</a></td>
    <td>Instrumentation with Analytics Engine provides data to find bugs and helps us prioritize new features. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/miniflare-and-workerd">Doubling down on local development with Workers: Miniflare meets workerd</a></td>
    <td>Improving local development using Miniflare 3, now powered by workerd.</td>
  </tr>
 
</tbody>
</table>
    <div>
      <h3>Customer and partner stories</h3>
      <a href="#customer-and-partner-stories">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Announcement</th>
    <th>Summary</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/devcycle-customer-story">Cloudflare Workers scale too well and broke our infrastructure, so we are rebuilding it on Workers</a></td>
    <td>How DevCycle re-architected their feature management tool using Workers. </td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/neon-postgres-database-from-workers">Easy Postgres integration with Workers and Neon.tech</a></td>
    <td>Neon.tech solves the challenges of connecting to Postgres from Workers.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/xata-customer-story">Xata Workers: client-side database access without client-side secrets</a></td>
    <td>Xata uses Workers for Platforms to reduce the security risks of running untrusted code.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/twilio-segment-sdk-powered-by-cloudflare-workers">Twilio Segment Edge SDK powered by Cloudflare Workers</a></td>
    <td>The Segment Edge SDK, built on Workers, helps applications collect and track events from the client, and get access to real-time user state to personalize experiences.</td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Next</h3>
      <a href="#next">
        
      </a>
    </div>
    <p>And that’s it for Developer Week 2022. But you can keep the conversation going by joining our <a href="https://discord.gg/cloudflaredev">Discord Community</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Supercloud]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[R2 Super Slurper]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5T3oI2mNgSH8cND974FYm5</guid>
            <dc:creator>Dawn Parzych</dc:creator>
        </item>
        <item>
            <title><![CDATA[UPDATE Supercloud SET status = 'open alpha' WHERE product = 'D1';]]></title>
            <link>https://blog.cloudflare.com/d1-open-alpha/</link>
            <pubDate>Wed, 16 Nov 2022 14:01:00 GMT</pubDate>
            <description><![CDATA[ As we continue down the road to making D1 production ready, it wouldn’t be “the Cloudflare way” unless we stopped for feedback first. D1 is now in Open Alpha! ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7w9UvQOVgrNbxPrz1tOWJz/611cdc1253d0c6971709f5dddacc0811/image1-48.png" />
            
            </figure><p>In May 2022, we <a href="/introducing-d1/">announced</a> our quest to simplify databases – building them, maintaining them, integrating them. Our goal is to empower you with the tools to run a database that is <a href="https://www.cloudflare.com/developer-platform/products/d1/">powerful and scalable, with world-beating performance</a>, without any hassle. And we first set our sights on reimagining the database development experience for every type of user – not just database experts.</p><p>Over the past couple of months, we’ve <a href="/whats-new-with-d1/">been working</a> to create just that, while learning some very important lessons along the way. As it turns out, building a global relational database product on top of Workers pushes the boundaries of the developer platform to their absolute limit, and often beyond them, but in a way that’s absolutely thrilling to us at Cloudflare. It means that while our progress might seem slow from the outside, every improvement, bug fix or stress test helps lay down a path for <i>all</i> of our customers to build the world’s most <a href="/welcome-to-the-supercloud-and-developer-week-2022/">ambitious serverless applications</a>.</p><p>However, as we continue down the road to making D1 production ready, it wouldn’t be “the Cloudflare way” unless we stopped for feedback first – even though it’s not <i>quite</i> finished yet. In the spirit of Developer Week, <b>there is no better time to introduce the D1 open alpha</b>!</p><p>An “open alpha” is a new concept for us. You'll likely hear the term “open beta” in various announcements at Cloudflare, and while it makes sense for many products here, it wasn’t quite right for D1. 
There are still some crucial pieces in active development and testing, so before we release the fully-formed D1 as a public beta for you to start building real-world apps, we want to make sure everybody can get a feel for the product on their hobby apps or side projects.</p>
    <div>
      <h2>What’s included in the alpha?</h2>
      <a href="#whats-included-in-the-alpha">
        
      </a>
    </div>
    <p>While a lot is still changing behind the scenes with D1, we’ve put a lot of thought into how you, as a developer, interact with it – even if you’re new to databases.</p>
    <div>
      <h3>Using the D1 dashboard</h3>
      <a href="#using-the-d1-dashboard">
        
      </a>
    </div>
    <p>In a few clicks you can get your D1 database up and running right from within your dashboard. In our D1 interface, you can create, maintain and view your database as you please. Changes made in the UI are instantly available to your Worker - no redeploy required!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vOzmnP9cvUYbJanSvprvl/b4a01d4edcc3dcada5a326e352b5f0e2/image2-30.png" />
            
            </figure>
    <div>
      <h3>Use Wrangler</h3>
      <a href="#use-wrangler">
        
      </a>
    </div>
    <p>If you’re looking to get your hands a little dirty, you can also work with your database using our Wrangler CLI. Create your database and begin adding your data manually, or bootstrap your database in one of two ways:</p><p><b>1. Execute an SQL file</b></p>
            <pre><code>$ wrangler d1 execute my-database-name --file ./customers.sql</code></pre>
            <p>where your <code>.sql</code> file looks something like this:</p><p>customers.sql</p>
            <pre><code>DROP TABLE IF EXISTS Customers;
CREATE TABLE Customers (CustomerID INT, CompanyName TEXT, ContactName TEXT, PRIMARY KEY (`CustomerID`));
INSERT INTO Customers (CustomerID, CompanyName, ContactName) 
VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'),(4, 'Around the Horn', 'Thomas Hardy'),(11, 'Bs Beverages', 'Victoria Ashworth'),(13, 'Bs Beverages', 'Random Name');</code></pre>
            <p><b>2. Create and run migrations</b></p><p>Migrations are a way to version your database changes. With D1, you can <a href="https://developers.cloudflare.com/d1/migrations/">create a migration</a> and then apply it to your database.</p><p>To create the migration, execute:</p>
            <pre><code>wrangler d1 migrations create &lt;my-database-name&gt; &lt;short description of migration&gt;</code></pre>
            <p>This will create an SQL file in a <code>migrations</code> folder where you can then go ahead and add your queries. Then apply the migrations to your database by executing:</p>
            <pre><code>wrangler d1 migrations apply &lt;my-database-name&gt;</code></pre>
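<p>Conceptually, migration tooling of this kind tracks which versioned files have already been applied and runs the remainder in order, each exactly once. A minimal sketch of that bookkeeping in JavaScript (illustrative only, with hypothetical file names; this is not Wrangler’s actual implementation):</p>

```javascript
// Illustrative sketch of migration bookkeeping: each versioned migration
// file is applied at most once, in lexicographic (i.e. numeric-prefix) order.
// File names below are hypothetical examples.
function pendingMigrations(allFiles, appliedFiles) {
  const applied = new Set(appliedFiles);
  return allFiles.filter((file) => !applied.has(file)).sort();
}

const files = ["0001_create_customers.sql", "0000_init.sql", "0002_add_index.sql"];
// 0000_init.sql was already applied, so only 0001 and 0002 remain, in order.
console.log(pendingMigrations(files, ["0000_init.sql"]));
```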
            
    <div>
      <h3>Access D1 from within your Worker</h3>
      <a href="#access-d1-from-within-your-worker">
        
      </a>
    </div>
    <p>You can attach your D1 to a Worker by adding the D1 binding to your <code>wrangler.toml</code> configuration file. Then interact with D1 by executing queries inside your Worker like so:</p>
            <pre><code>export default {
 async fetch(request, env) {
   const { pathname } = new URL(request.url);

   if (pathname === "/api/beverages") {
     const { results } = await env.DB.prepare(
       "SELECT * FROM Customers WHERE CompanyName = ?"
     )
       .bind("Bs Beverages")
       .all();
     return Response.json(results);
   }

   return new Response("Call /api/beverages to see Bs Beverages customers");
 },
};</code></pre>
            
    <div>
      <h3>Or access D1 from within your Pages Function</h3>
      <a href="#or-access-d1-from-within-your-pages-function">
        
      </a>
    </div>
    <p>In this Alpha launch, D1 also supports integration with <a href="https://pages.cloudflare.com/">Cloudflare Pages</a>! You can add a D1 binding inside the Pages dashboard, and write your queries inside a Pages Function to build a full-stack application! Check out the <a href="https://developers.cloudflare.com/pages/platform/functions/bindings/#d1-database">full documentation</a> to get started with Pages and D1.</p>
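<p>A Pages Function reading from a D1 binding might look like the sketch below. The handler name <code>onRequestGet</code> follows the Pages Functions routing convention; the in-memory <code>env.DB</code> stub is ours, standing in for the real binding so the logic can be exercised anywhere (in a real Pages project the handler would be exported from a file such as <code>functions/api/comments.js</code>):</p>

```javascript
// Sketch of a Pages Function handler backed by a D1 binding.
// In a real Pages project this would be `export async function onRequestGet(ctx)`.
async function onRequestGet({ env }) {
  const { results } = await env.DB
    .prepare("SELECT author, body FROM comments WHERE post_slug = ?")
    .bind("hello-world")
    .all();
  return results; // a real handler would return Response.json(results)
}

// In-memory stub mimicking D1's prepare/bind/all interface, for illustration only.
const env = {
  DB: {
    prepare(sql) {
      return {
        bind(...params) { return this; },
        async all() {
          return { results: [{ author: "Kristian", body: "Great post!" }] };
        },
      };
    },
  },
};

onRequestGet({ env }).then((rows) => console.log(rows[0].author)); // prints "Kristian"
```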
    <div>
      <h2>Community built tooling</h2>
      <a href="#community-built-tooling">
        
      </a>
    </div>
    <p>During our private alpha period, the excitement behind D1 led to some valuable contributions to the D1 ecosystem and developer experience by members of the community. Here are some of our favorite projects to date:</p>
    <div>
      <h3>d1-orm</h3>
      <a href="#d1-orm">
        
      </a>
    </div>
    <p>An Object Relational Mapping (ORM) is a way for you to query and manipulate data by using JavaScript. Created by a Cloudflare Discord Community Champion, the <code>d1-orm</code> seeks to provide a strictly typed experience while using D1:</p>
            <pre><code>const users = new Model(
    // table name, primary keys, indexes etc
    tableDefinition,
    // column types, default values, nullable etc
    columnDefinitions
)

// TS helper for typed queries
type User = Infer&lt;typeof users&gt;;

// ORM-style query builder
const user = await users.First({
    where: {
        id: 1,
    },
});</code></pre>
            <p>You can check out the <a href="https://docs.interactions.rest/d1-orm/">full documentation</a>, and provide feedback by making an issue on the <a href="https://github.com/Interactions-as-a-Service/d1-orm/issues">GitHub repository</a>.</p>
    <div>
      <h3>workers-qb</h3>
      <a href="#workers-qb">
        
      </a>
    </div>
    <p>This is a zero-dependency query builder that provides a simple standardized interface while keeping the benefits and speed of using raw queries over a traditional ORM. While not intended to provide ORM-like functionality, <code>workers-qb</code> makes it easier to interact with the database from code for direct SQL access:</p>
            <pre><code>const qb = new D1QB(env.DB)

const fetched = await qb.fetchOne({
  tableName: 'employees',
  fields: 'count(*) as count',
  where: {
    conditions: 'department = ?1',
    params: ['HQ'],
  },
})</code></pre>
            <p>You can read more about the query builder <a href="https://workers-qb.massadas.com/">here</a>.</p>
    <div>
      <h3>d1-console</h3>
      <a href="#d1-console">
        
      </a>
    </div>
    <p>Instead of running the <code>wrangler d1 execute</code> command in your terminal every time you want to interact with your database, you can interact with D1 from within the <code>d1-console</code>. Created by a Discord Community Champion, this gives the benefit of executing multi-line queries, obtaining command history, and viewing a cleanly formatted table output.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4QR9Tf5DXnp3brBVvlvgJq/7f5b5083198492190dfc9f24e4fb70e0/image3-23.png" />
            
            </figure><p>While this is a community project today, we plan to natively support a “D1 Console” in the future. For now, get started by checking out the <code>d1-console</code> package <a href="https://github.com/isaac-mcfadyen/d1-console">here</a>.</p>
    <div>
      <h3>D1 adapter for <a href="https://github.com/koskimas/kysely">Kysely</a></h3>
      <a href="#d1-adapter-for">
        
      </a>
    </div>
    <p>Kysely is a type-safe and autocompletion-friendly TypeScript SQL query builder. With this adapter you can interact with D1 through the familiar Kysely interface:</p>
            <pre><code>// Create Kysely instance with kysely-d1
const db = new Kysely&lt;Database&gt;({ 
  dialect: new D1Dialect({ database: env.DB })
});
    
// Read row from D1 table
const result = await db
  .selectFrom('kv')
  .selectAll()
  .where('key', '=', key)
  .executeTakeFirst();</code></pre>
            <p>Check out the project <a href="https://github.com/aidenwallis/kysely-d1">here</a>.</p>
    <div>
      <h2>What’s still in testing?</h2>
      <a href="#whats-still-in-testing">
        
      </a>
    </div>
    <p>The biggest pieces that have been disabled for this alpha release are replication and JavaScript transaction support. While we’ll be rolling out these changes gradually, we want to call out some limitations that exist today that we’re actively working on testing:</p><ul><li><p><b>Database location:</b> Each D1 database only runs a single instance. It’s created close to where you, as the developer, create the database, and does not currently move regions based on access patterns. Workers running elsewhere in the world will see higher latency as a result.</p></li><li><p><b>Concurrency limitations:</b> Under high load, read and write queries may be queued rather than triggering new replicas to be created. As a result, the performance &amp; throughput characteristics of the open alpha won’t be representative of the final product.</p></li><li><p><b>Availability limitations:</b> Backups will block access to the DB while they’re running. In most cases this should only be a second or two, and any requests that arrive during the backup will be queued.</p></li></ul><p>You can also check out a more detailed, up-to-date list on <a href="https://developers.cloudflare.com/d1/platform/limits/">D1 alpha Limitations</a>.</p>
    <div>
      <h2>Request for feedback</h2>
      <a href="#request-for-feedback">
        
      </a>
    </div>
    <p>While we can make all sorts of guesses and bets on the kind of databases you want to use D1 for, we are not the users – you are! We want developers from all backgrounds to preview the D1 tech at its early stages, and let us know where we need to improve to make it suitable for your production apps.</p><p>For general feedback about your experience and to interact with other folks in the alpha, join our <a href="https://discord.com/channels/595317990191398933/992060581832032316">#d1-open-alpha</a> channel in the <a href="https://discord.gg/cloudflaredev">Cloudflare Developers Discord</a>. We plan to make any important announcements and changes in this channel as well as on our <a href="https://discord.com/channels/595317990191398933/832698219824807956">monthly community calls</a>.</p><p>To file more specific feature requests (no matter how wacky) and report any bugs, create a thread in the <a href="https://community.cloudflare.com/c/developers/d1">Cloudflare Community forum</a> under the D1 category. We will be maintaining this forum as a way to plan for the months ahead!</p>
    <div>
      <h2>Get started</h2>
      <a href="#get-started">
        
      </a>
    </div>
    <p>Want to get started right away? Check out our <a href="https://developers.cloudflare.com/d1/">D1 documentation</a> to get started today. <a href="https://github.com/cloudflare/d1-northwind">Build</a> our classic <a href="https://northwind.d1sql.com/">Northwind Traders demo</a> to explore the D1 experience and deploy your first D1 database!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Supercloud]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1rFO7pAwS1HGnsa6rhrIXa</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
            <dc:creator>Sven Sauleau</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making static sites dynamic with Cloudflare D1]]></title>
            <link>https://blog.cloudflare.com/making-static-sites-dynamic-with-cloudflare-d1/</link>
            <pubDate>Wed, 16 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ In this blog post, I'll show you how to use D1 to add comments to a static blog site. To do this, we'll construct a new D1 database and build a simple JSON API that allows the creation and retrieval of comments. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EtdDhbUmqO6pJ0onYlD80/6c88d9a3b9189dac7ead2fb45c2450f0/image1-40.png" />
            
            </figure>
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    <p>There are many ways to store data in your applications. For example, in Cloudflare Workers applications, we have Workers KV for key-value storage and Durable Objects for real-time, coordinated storage without compromising on consistency. Outside the Cloudflare ecosystem, you can also plug in other tools like NoSQL and graph databases.</p><p>But sometimes, you want SQL. Indexes allow us to retrieve data quickly. Joins enable us to describe complex relationships between different tables. SQL declaratively describes how our application's data is validated, created, and performantly queried.</p><p><a href="/d1-open-alpha">D1 was released today in open alpha</a>, and to celebrate, I want to share my experience building apps with D1: specifically, how to get started, and why I’m excited about D1 joining the long list of tools you can use to build apps on Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1T6IVlTeQ59oZQ2M6auyey/ea995b570a3dfb93c2c1d466d75f4524/image3-24.png" />
            
            </figure><p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1</a> is remarkable because it's an instant value-add to applications without needing new tools or stepping out of the Cloudflare ecosystem. Using wrangler, we can do local development on our Workers applications, and with the addition of D1 in wrangler, we can now develop proper stateful applications locally as well. Then, when it's time to deploy the application, wrangler allows us to both run commands against our D1 database and deploy the API itself.</p>
    <div>
      <h3>What we’re building</h3>
      <a href="#what-were-building">
        
      </a>
    </div>
    <p>In this blog post, I'll show you how to use D1 to add comments to a static blog site. To do this, we'll construct a new D1 database and build a simple JSON API that allows the creation and retrieval of comments.</p><p>As I mentioned, separating D1 from the app itself - an API and database that remain separate from the static site - allows us to abstract the static and dynamic pieces of our website from each other. It also makes it easier to deploy our application: we will deploy the frontend to Cloudflare Pages, and the D1-powered API to Cloudflare Workers.</p>
    <div>
      <h3>Building a new application</h3>
      <a href="#building-a-new-application">
        
      </a>
    </div>
    <p>First, we'll add a basic <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a> in Workers. Create a new directory and initialize a new wrangler project inside it:</p>
            <pre><code>$ mkdir d1-example &amp;&amp; cd d1-example
$ wrangler init</code></pre>
            <p>In this example, we’ll use Hono, an Express.js-style framework, to rapidly build our API. To use Hono in this project, install it using NPM:</p>
            <pre><code>$ npm install hono</code></pre>
            <p>Then, in <code>src/index.ts</code>, we’ll initialize a new Hono app, and define a few endpoints - <code>GET /api/posts/:slug/comments</code> and <code>POST /api/posts/:slug/comments</code>.</p>
            <pre><code>import { Hono } from 'hono'
import { cors } from 'hono/cors'

const app = new Hono()

app.get('/api/posts/:slug/comments', async c =&gt; {
  // do something
})

app.post('/api/posts/:slug/comments', async c =&gt; {
  // do something
})

export default app</code></pre>
            <p>Now we'll create a D1 database. In Wrangler 2, there is support for the <code>wrangler d1</code> subcommand, which allows you to create and query your D1 databases directly from the command line. So, for example, we can create a new database with a single command:</p>
            <pre><code>$ wrangler d1 create d1-example</code></pre>
            <p>With our created database, we can take the database name and ID and associate them with a <b>binding</b> inside of wrangler.toml, wrangler's configuration file. Bindings allow us to access Cloudflare resources, like D1 databases, KV namespaces, and R2 buckets, using a simple variable name in our code. Below, we’ll create the binding <code>DB</code> and use it to represent our new database:</p>
            <pre><code>[[ d1_databases ]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "d1-example"
database_id = "4e1c28a9-90e4-41da-8b4b-6cf36e5abb29"</code></pre>
            <p>Note that this directive, the <code>[[d1_databases]]</code> field, currently requires a beta version of wrangler. You can install this for your project using the command <code>npm install -D wrangler@beta</code>.</p><p>With the database configured in our wrangler.toml, we can start interacting with it from the command line and inside our Workers function.</p><p>First, you can issue direct SQL commands using <code>wrangler d1 execute</code>:</p>
            <pre><code>$ wrangler d1 execute d1-example --command "SELECT name FROM sqlite_schema WHERE type ='table'"
Executing on d1-example:
┌─────────────────┐
│ name            │
├─────────────────┤
│ sqlite_sequence │
└─────────────────┘</code></pre>
            <p>You can also pass a SQL file - perfect for initial data seeding in a single command. Create <code>src/schema.sql</code>, which will create a new <code>comments</code> table for our project:</p>
            <pre><code>drop table if exists comments;
create table comments (
  id integer primary key autoincrement,
  author text not null,
  body text not null,
  post_slug text not null
);
create index idx_comments_post_id on comments (post_slug);

-- Optionally, uncomment the below query to create data

-- insert into comments (author, body, post_slug)
-- values ("Kristian", "Great post!", "hello-world");</code></pre>
            <p>With the file created, execute the schema file against the D1 database by passing it with the flag <code>--file</code>:</p>
            <pre><code>$ wrangler d1 execute d1-example --file src/schema.sql</code></pre>
            <p>We've created a SQL database with just a few commands and seeded it with initial data. Now we can add a route to our Workers function to retrieve data from that database. Based on our wrangler.toml config, the D1 database is now accessible via the <code>DB</code> binding. In our code, we can use the binding to prepare SQL statements and execute them, for instance, to retrieve comments:</p>
            <pre><code>app.get('/api/posts/:slug/comments', async c =&gt; {
  const { slug } = c.req.param()
  const { results } = await c.env.DB.prepare(`
    select * from comments where post_slug = ?
  `).bind(slug).all()
  return c.json(results)
})</code></pre>
            <p>In this function, we accept a <code>slug</code> URL parameter and set up a new SQL statement that selects all comments whose <code>post_slug</code> matches that parameter. We can then return the results as a simple JSON response.</p><p>So far, we've built read-only access to our data. But inserting values is, of course, possible as well. So let's define another function that allows POST-ing to an endpoint to create a new comment:</p>
            <pre><code>app.post('/api/posts/:slug/comments', async c =&gt; {
  const { slug } = c.req.param()
  const { author, body } = await c.req.json&lt;Comment&gt;()

  if (!author) return c.text("Missing author value for new comment")
  if (!body) return c.text("Missing body value for new comment")

  const { success } = await c.env.DB.prepare(`
    insert into comments (author, body, post_slug) values (?, ?, ?)
  `).bind(author, body, slug).run()

  if (success) {
    c.status(201)
    return c.text("Created")
  } else {
    c.status(500)
    return c.text("Something went wrong")
  }
})</code></pre>
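<p>The guard clauses in the handler above can be pulled out and exercised in isolation. A small sketch (the <code>validateComment</code> helper is hypothetical, not part of the template):</p>

```javascript
// Hypothetical helper mirroring the author/body checks in the POST handler above.
function validateComment({ author, body } = {}) {
  if (!author) return { ok: false, error: "Missing author value for new comment" };
  if (!body) return { ok: false, error: "Missing body value for new comment" };
  return { ok: true };
}

console.log(validateComment({ author: "Kristian", body: "Great post!" }).ok); // true
console.log(validateComment({ author: "Kristian" }).error); // the missing-body message
```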
            <p>In this example, we built a comments API for powering a blog. To see the source for this D1-powered comments API, you can visit <a href="https://github.com/cloudflare/templates/tree/main/worker-d1-api">cloudflare/templates/worker-d1-api</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Bbc7exdfzvFnV47Btu7Gn/362b947983416c62e0b9670417e1babb/image2-31.png" />
            
            </figure>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>One of the most exciting things about D1 is the opportunity to augment existing applications or websites with dynamic, relational data. As a former Ruby on Rails developer, one of the things I miss most about that framework in the world of JavaScript and serverless development tools is the ability to rapidly spin up full data-driven applications without needing to be an expert in managing database infrastructure. With D1 and its easy onramp to SQL-based data, we can build true data-driven applications without compromising on performance or developer experience.</p><p>This shift corresponds nicely with the advent of static sites in the last few years, using tools like Hugo or Gatsby. A blog built with a static site generator like Hugo is incredibly performant - it will build in seconds with small asset sizes.</p><p>But by trading a tool like WordPress for a static site generator, you lose the opportunity to add dynamic information to your site. Many developers have patched over this problem by adding more complexity to their build processes: fetching and retrieving data and generating pages using that data as part of the build.</p><p>This addition of complexity in the build process attempts to fix the lack of dynamism in applications, but it still isn't genuinely dynamic. Instead of being able to retrieve and display new data as it's created, the application rebuilds and redeploys whenever data changes so that it appears to be a live, dynamic representation of data. With D1, your application can remain static, and the dynamic data will live geographically close to the users of your site, accessible via a queryable and expressive API.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">42d4M0F5dhHm6ImRHKTL7Z</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[D1: our quest to simplify databases]]></title>
            <link>https://blog.cloudflare.com/whats-new-with-d1/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Get an inside look at the D1 experience today, what the team is currently working on, and what’s coming up! ]]></description>
            <content:encoded><![CDATA[ <p><i>This blog post references a feature which has updated documentation. For the latest reference content, visit </i><a href="https://developers.cloudflare.com/d1/best-practices/read-replication/"><i>D1 read replication documentation</i></a><i>.</i></p><p>When we announced D1 in May of this year, we knew it would be the start of something new – our first SQL database for Cloudflare Workers. Prior to D1, we announced storage options like KV (key-value store), Durable Objects (single-location, strongly consistent data storage) and <a href="https://www.cloudflare.com/learning/cloud/what-is-blob-storage/">R2 (blob storage)</a>. But one question always remained: “How can I store and query relational data with an easy API and without latency concerns?”</p><p>The long-awaited “Cloudflare Database” was the true missing piece to build your application <b>entirely</b> on Cloudflare’s global network, going from a blank canvas in VSCode to a full stack application in seconds. Compatible with the popular SQLite API, D1 empowers developers to build out their databases without getting bogged down by complexity and having to manage every underlying layer.</p><p>Since our launch announcement in May and private beta in June, we’ve made great strides in building out our vision of a <a href="https://www.cloudflare.com/developer-platform/products/d1/">serverless database</a>. With D1 still in <a href="https://www.cloudflare.com/lp/d1/">private beta</a> but an open beta on the horizon, we’re excited to show and tell our journey of building D1 and what’s to come.</p>
    <div>
      <h2>The D1 Experience</h2>
      <a href="#the-d1-experience">
        
      </a>
    </div>
    <p>We knew from Cloudflare Workers feedback that using Wrangler as the mechanism to create and deploy applications is loved and preferred by many. That’s why, when <a href="/10-things-i-love-about-wrangler/">Wrangler 2.0</a> was announced this past May alongside D1, we took advantage of the new and improved CLI for every part of the experience, from database creation to every update and iteration. Let’s take a quick look at how to get set up in a few easy steps.</p>
    <div>
      <h3>Create your database</h3>
      <a href="#create-your-database">
        
      </a>
    </div>
    <p>With the latest version of <a href="https://github.com/cloudflare/wrangler2">Wrangler</a> installed, you can create an initialized, empty database with a quick</p><p><code>npx wrangler d1 create my_database_name</code></p><p>to get your database up and running. Now it’s time to add your data.</p>
    <div>
      <h3>Bootstrap it</h3>
      <a href="#bootstrap-it">
        
      </a>
    </div>
    <p>It wouldn’t be the “Cloudflare way” if you had to sit through an agonizingly long process to get set up. So we made it easy and painless to bring your existing data from an old database and bootstrap your new D1 database. You can run</p><p><code>wrangler d1 execute my_database_name --file ./filename.sql</code></p><p>and pass through an existing SQLite .sql file of your choice. Your database is now ready for action.</p>
    <div>
      <h3>Develop &amp; Test Locally</h3>
      <a href="#develop-test-locally">
        
      </a>
    </div>
    <p>With all the improvements we’ve made to Wrangler since version 2 launched <a href="/wrangler-v2-beta/">a few months ago</a>, we’re pleased to report that D1 has full remote &amp; local wrangler dev support:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7JRGM62yWrL3h7BKLhj5Jf/6d324ce4a2b19691ef4ec39095d2e43b/image2-43.png" />
            
            </figure><p>When running <code>wrangler dev --local --persist</code>, an SQLite file will be created inside <code>.wrangler/state</code>. You can then use a local GUI program to manage it, like SQLiteFlow (<a href="https://www.sqliteflow.com/">https://www.sqliteflow.com/</a>) or Beekeeper (<a href="https://www.beekeeperstudio.io/">https://www.beekeeperstudio.io/</a>).</p><p>Or you can simply use SQLite directly from the SQLite command line by running <code>sqlite3 .wrangler/state/d1/DB.sqlite3</code>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wSVzsxnFxKpJDbF5pO4hs/89aab6231071b6cd8cc657a9fd2bd24b/image6-8.png" />
            
            </figure>
    <div>
      <h3>Automatic backups &amp; one-click restore</h3>
      <a href="#automatic-backups-one-click-restore">
        
      </a>
    </div>
    <p>No matter how much you test your changes, things don’t always go according to plan. But with Wrangler you can create a backup of your data, view your list of backups, or restore your database from an existing backup. In fact, during the beta, we’re taking backups of your data every hour automatically and storing them in R2, so you will have the option to roll back if needed.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BgC81NRBtxLJAl4Gf09oz/03084ab36894c484675f0ec7e58e9462/image1-53.png" />
            
            </figure><p>And the best part: if you want to use a production snapshot for local development or to reproduce a bug, simply copy it into the <code>.wrangler/state</code> directory and <code>wrangler dev --local --persist</code> will pick it up!</p><p>Let’s download a D1 backup to our local disk. It’s SQLite compatible.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4lGMre56aSosKozuHmITRD/51b282602897ed9af9d0813461f81732/image4-14.png" />
            
            </figure><p>Now let’s run our D1 worker locally, from the backup.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4t10c5k7VKcT2tF4CjW9Dw/eb5ba21817f6a38b1d1f450d6e2e2c3a/image5-16.png" />
            
            </figure>
    <div>
      <h3>Create and Manage from the dashboard</h3>
      <a href="#create-and-manage-from-the-dashboard">
        
      </a>
    </div>
    <p>However, we realize that CLIs are not everyone’s jam. In fact, we believe databases should be accessible to every kind of developer – even those without much database experience! D1 is available right from the Cloudflare dashboard, giving you near-total command parity with Wrangler in just a few clicks. Bootstrapping your database, creating tables, updating your database, viewing tables and triggering backups are all accessible right at your fingertips.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mXkO7uRDs4lVvgjm8VC4r/c32f96a738980294dfea5db7f2ea8794/image3-32.png" />
            
            </figure><p>Changes made in the UI are instantly available to your Worker — no deploy required!</p><p>We’ve told you about some of the improvements we’ve landed since we first announced D1, but as always, we also wanted to give you a small taste (with some technical details) of what’s ahead. One really important piece of database functionality is transactions — something D1 wouldn’t be complete without.</p>
    <div>
      <h2>Sneak peek: how we're bringing JavaScript transactions to D1</h2>
      <a href="#sneak-peek-how-were-bringing-javascript-transactions-to-d1">
        
      </a>
    </div>
    <p>With D1, we strive to present a dramatically simplified interface to creating and querying relational data, which for the most part is a good thing. But simplification occasionally introduces drawbacks, where a use-case is no longer easily supported without introducing some new concepts. D1 transactions are one example.</p>
    <div>
      <h3>Transactions are a unique challenge</h3>
      <a href="#transactions-are-a-unique-challenge">
        
      </a>
    </div>
    <p>You don't need to specify where a Cloudflare Worker or a D1 database runs—they simply run everywhere they need to. For Workers, that is as close as possible to the users hitting your site right this second. For D1 today, we don't try to run a copy in every location worldwide, but dynamically manage the number and location of read-only replicas based on how many queries your database is getting, and from where. However, queries that make changes to a database (which we generally call "writes" for short) all have to travel back to the single primary D1 instance to do their work, to ensure consistency.</p><p>But what if you need to do a series of updates at once? While you can send multiple SQL queries with <code>.batch()</code> (which does in fact use database transactions under the hood), it's likely that, at some point, you'll want to interleave database queries &amp; JS code in a single unit of work.</p><p>This is exactly what database transactions were invented for, but if you try running <code>BEGIN TRANSACTION</code> in D1 you'll get an error. Let's talk about why that is.</p><p><b>Why native transactions don't work</b></p><p>The problem arises from SQL statements and JavaScript code running in dramatically different places—your SQL executes inside your D1 database (primary for writes, nearest replica for reads), but your Worker is running near the user, which might be on the other side of the world. And because D1 is built on SQLite, only one write transaction can be open at once. That means that, if we permitted <code>BEGIN TRANSACTION</code>, any one Worker request, anywhere in the world, could effectively block your whole database! This is quite a dangerous thing to allow:</p><ul><li><p>A Worker could start a transaction, then crash due to a software bug without ever calling <code>ROLLBACK</code>.
The primary would be blocked, waiting for more commands from a Worker that would never come (until, probably, some timeout).</p></li><li><p>Even without bugs or crashes, transactions that require multiple round-trips between JavaScript and SQL could end up blocking your whole system for multiple seconds, dramatically limiting how high an application built with Workers &amp; D1 could scale.</p></li></ul><p>But allowing a developer to define transactions that mix both SQL and JavaScript makes building applications with Workers &amp; D1 so much more flexible and powerful. We need a new solution (or, in our case, a new version of an old solution).</p><p><b>A way forward: stored procedures</b></p><p>Stored procedures are snippets of code that are uploaded to the database, to be executed directly next to the data. That, at first blush, sounds exactly like what we want.</p><p>However, in practice, stored procedures in traditional databases are notoriously frustrating to work with, as anyone who's developed a system making heavy use of them will tell you:</p><ul><li><p>They're often written in a different language from the rest of your application—usually in (a specific dialect of) SQL, or an embedded language like Tcl/Perl/Python. And while it's technically possible to write them in JavaScript (using an embedded V8 engine), they run in such a different environment from your application code that maintaining them still requires significant context-switching.</p></li><li><p>Having both application code and in-database code affects every part of the development lifecycle, from authoring and testing to deployment, rollbacks and debugging. But because stored procedures are usually introduced to solve a specific problem, not as a general-purpose application layer, they're often managed completely manually.
You can end up with them being written once, added to the database, then never changed for fear of breaking something.</p></li></ul><p>With D1, we can do better.</p><p>The <i>point</i> of a stored procedure was to execute directly next to the data—uploading the code and executing it inside the database was simply a means to that end. But we're already using Workers, a global JavaScript execution platform. Can we use it to solve this problem?</p><p>It turns out, absolutely! But there are a few options for exactly how to make it work, and we're working with our private beta users to find the right <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a>. In this section, I'd like to share our current leading proposal with you, and invite you all to give us your feedback.</p><p>When you connect a Worker project to a D1 database, you add a section like the following to your <code>wrangler.toml</code>:</p>
            <pre><code>[[ d1_databases ]]
# What binding name to use (e.g. env.DB):
binding = "DB"
# The name of the DB (used for wrangler d1 commands):
database_name = "my-d1-database"
# The D1's ID for deployment:
database_id = "48a4224e-...3b09"
# Which D1 to use for `wrangler dev`:
# (can be the same as the previous line)
preview_database_id = "48a4224e-...3b09"

# NEW: adding "procedures", pointing to a new JS file:
procedures = "./src/db/procedures.js"</code></pre>
            <p>That D1 Procedures file would contain the following (note the new <code>db.transaction()</code> API, that is only available within a file like this):</p>
            <pre><code>export default class Procedures {
  constructor(db, env, ctx) {
    this.db = db
  }

  // any methods you define here are available on env.DB.Procedures
  // inside your Worker
  async Checkout(cartId: number) {
    // Inside a Procedure, we have a new db.transaction() API
    const result = await this.db.transaction(async (txn) =&gt; {
      
      // Transaction has begun: we know the user can't add anything to
      // their cart while these actions are in progress.
      const [cart, user] = Helpers.loadCartAndUser(cartId)

      // We can update the DB first, knowing that if any of the later steps
      // fail, all these changes will be undone.
      await this.db
        .prepare(`UPDATE cart SET status = ?1 WHERE cart_id = ?2`)
        .bind('purchased', cartId)
        .run()
      const newBalance = user.balance - cart.total_cost
      await this.db
        .prepare(`UPDATE user SET balance = ?1 WHERE user_id = ?2`)
        // Note: the DB may have a CHECK to guarantee 'user.balance' can not
        // be negative. In that case, this statement may fail, an exception
        // will be thrown, and the transaction will be rolled back.
        .bind(newBalance, cart.user_id)
        .run()

      // Once all the DB changes have been applied, attempt the payment:
      const { ok, details } = await PaymentAPI.processPayment(
        user.payment_method_id,
        cart.total_cost
      )
      if (!ok) {
        // If we throw an Exception, the transaction will be rolled back
        // and result.error will be populated:
        // throw new PaymentFailedError(details)
        
        // Alternatively, we can do both of those steps explicitly
        await txn.rollback()
        // The transaction is rolled back, our DB is now as it was when we
        // started. We can either move on and try something new, or just exit.
        return { error: new PaymentFailedError(details) }
      }

      // This is implicitly called when the .transaction() block finishes,
      // but you can explicitly call it too (potentially committing multiple
      // times in a single db.transaction() block).
      await txn.commit()

      // Anything we return here will be returned by the 
      // db.transaction() block
      return {
        amount_charged: cart.total_cost,
        remaining_balance: newBalance,
      }
    })

    if (result.error) {
      // Our db.transaction block returned an error or threw an exception.
    }

    // We're still in the Procedure, but the Transaction is complete and
    // the DB is available for other writes. We can either do more work
    // here (start another transaction?) or return a response to our Worker.
    return result
  }
}</code></pre>
            <p>And in your Worker, your DB binding now has a “Procedures” property with your function names available:</p>
            <pre><code>const { error, amount_charged, remaining_balance } =
  await env.DB.Procedures.Checkout(params.cartId)

if (error) {
  // Something went wrong, `error` has details
} else {
  // Display `amount_charged` and `remaining_balance` to the user.
}</code></pre>
            <p>Multiple Procedures can be triggered at one time, but only one <code>db.transaction()</code> function can be active at once: any other write queries or transaction blocks will be queued, while all read queries continue to hit local replicas and run as normal. This API gives you the ability to ensure consistency when it’s essential, with minimal impact on overall performance worldwide.</p>
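            <p>For purely SQL work, the <code>.batch()</code> API mentioned earlier already gives you an implicit transaction today, with no JavaScript running between the statements. Below is a minimal, hedged sketch of the call's shape; the binding object is a synchronous stand-in so the snippet runs anywhere, whereas in a real Worker you would use the async <code>env.DB</code> binding:</p>

```javascript
// Illustrative, synchronous stub of a D1-style binding. It exists only
// to show the shape of a .batch() call; the real D1 client is async
// and executes inside the database.
class FakeD1 {
  constructor() {
    this.rows = [];
  }
  prepare(sql) {
    return {
      sql,
      params: [],
      bind(...params) {
        this.params = params;
        return this;
      },
    };
  }
  batch(stmts) {
    // .batch() applies every statement in one implicit transaction:
    // stage all the changes, then commit them together.
    const staged = [...this.rows];
    for (const s of stmts) staged.push(s.params);
    this.rows = staged;
    return stmts.map(() => ({ success: true }));
  }
}

const db = new FakeD1();
const results = db.batch([
  db.prepare('INSERT INTO todos (todo) VALUES (?1)').bind('write post'),
  db.prepare('INSERT INTO todos (todo) VALUES (?1)').bind('publish it'),
]);
// Both inserts were applied as a single unit of work.
```

            <p>The limitation, as described above, is that no application logic can run between the statements in a batch; that gap is exactly what the proposed <code>db.transaction()</code> API fills.</p>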
    <div>
      <h3>Request for feedback</h3>
      <a href="#request-for-feedback">
        
      </a>
    </div>
    <p>As with all our products, feedback from our users drives the roadmap and development. While the D1 API is in beta testing today, we're still seeking feedback on the specifics. However, we’re pleased that it solves both the problems with transactions that are specific to D1 and the problems with stored procedures described earlier:</p><ul><li><p>Code is executing as close as possible to the database, removing network latency while a transaction is open.</p></li><li><p>Any exceptions or cancellations of a transaction cause an instant rollback—there is no way to accidentally leave one open and block the whole D1 instance.</p></li><li><p>The code is in the same language as the rest of your Worker code, in the exact same dialect (e.g. same TypeScript config as it's part of the same build).</p></li><li><p>It's deployed seamlessly as part of your Worker. If two Workers bind to the same D1 instance but define different procedures, they'll only see their own code. If you want to share code between projects or databases, extract a library as you would with any other shared code.</p></li><li><p>In local development and test, the procedure works just like it does in production, but without the network call, allowing seamless testing and debugging as if it was a local function.</p></li><li><p>Because procedures and the Worker that define them are treated as a single unit, rolling back to an earlier version never causes a skew between the code in the database and the code in the Worker.</p></li></ul>
    <div>
      <h2>The D1 ecosystem: contributions from the community</h2>
      <a href="#the-d1-ecosystem-contributions-from-the-community">
        
      </a>
    </div>
    <p>We've told you about what we've been up to and what's ahead, but one of the unique things about this project is all the contributions from our users. One of our favorite parts of private betas is not only getting feedback and feature requests, but also seeing what ideas and projects come to fruition. Sometimes this means personal projects, but with D1 we're also seeing some incredible contributions to the D1 ecosystem. Needless to say, the work on D1 hasn't just been coming from within the D1 team, but also from the wider community and other developers at Cloudflare. Users have been showing off their D1 additions within our Discord private beta channel, giving others the opportunity to use them as well. We wanted to take a moment to highlight them.</p>
    <div>
      <h3>workers-qb</h3>
      <a href="#workers-qb">
        
      </a>
    </div>
    <p>Dealing with raw SQL syntax is powerful (and, using the D1 <code>.bind()</code> API, safe against <a href="https://www.cloudflare.com/learning/security/threats/how-to-prevent-sql-injection/">SQL injections</a>), but it can be a little clumsy. On the other hand, most existing query builders assume direct access to the underlying DB, and so aren’t suitable for use with D1. So Cloudflare developer Gabriel Massadas designed a small, zero-dependency query builder called <code>workers-qb</code>:</p>
            <pre><code>import { D1QB } from 'workers-qb'
const qb = new D1QB(env.DB)

const fetched = await qb.fetchOne({
    tableName: "employees",
    fields: "count(*) as count",
    where: {
      conditions: "active = ?1",
      params: [true]
    },
})</code></pre>
            <p>Check out the project homepage for more information: <a href="https://workers-qb.massadas.com/">https://workers-qb.massadas.com/</a>.</p>
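            <p>For comparison, here is roughly what the same count query looks like against the raw D1 statement API, where <code>.bind()</code> keeps the parameter out of the SQL string entirely. The binding below is a stub with a canned result so the snippet is self-contained; in a Worker, <code>env.DB</code> is provided for you and <code>.first()</code> returns a Promise:</p>

```javascript
// Stub standing in for a Worker's env.DB binding, for illustration only.
const env = {
  DB: {
    prepare(sql) {
      return {
        bind(...params) {
          // Parameters travel separately from the SQL text, so a value
          // like "1; DROP TABLE employees" is treated as data, not SQL.
          this.params = params;
          return this;
        },
        first() {
          return { count: 42 }; // canned row (the real D1 .first() is async)
        },
      };
    },
  },
};

const row = env.DB
  .prepare('SELECT count(*) AS count FROM employees WHERE active = ?1')
  .bind(true)
  .first();
// row.count holds the result of the query.
```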
    <div>
      <h3>D1 console</h3>
      <a href="#d1-console">
        
      </a>
    </div>
    <p>While you can interact with D1 through both Wrangler and the dashboard, Cloudflare Community champion Isaac McFadyen created the very first D1 console, where you can quickly execute a series of queries right from your terminal. With the D1 console, you don’t need to spend time writing the various Wrangler commands we’ve created – just execute your queries.</p><p>This includes all the bells and whistles you would expect from a modern database console, including multiline input, command history, validation for things D1 may not yet support, and the ability to save your Cloudflare credentials for later use.</p><p>Check out the full project on <a href="https://github.com/isaac-mcfadyen/d1-console">GitHub</a> or <a href="https://www.npmjs.com/package/d1-console">NPM</a> for more information.</p>
    <div>
      <h3>Miniflare test Integration</h3>
      <a href="#miniflare-test-integration">
        
      </a>
    </div>
    <p>The <a href="https://miniflare.dev/">Miniflare project</a>, which powers Wrangler’s local development experience, also provides fully-fledged test environments for the popular JavaScript test runners <a href="https://miniflare.dev/testing/jest">Jest</a> and <a href="https://miniflare.dev/testing/vitest">Vitest</a>. With this comes the concept of <a href="https://miniflare.dev/testing/jest#isolated-storage"><i>Isolated Storage</i></a>, allowing each test to run independently, so that changes made in one don’t affect the others. Brendan Coll, creator of Miniflare, guided the D1 test implementation to give the same benefits:</p>
            <pre><code>import Worker from '../src/index.ts'
const { DB } = getMiniflareBindings();

beforeAll(async () =&gt; {
  // Your D1 starts completely empty, so first you must create tables
  // or restore from a schema.sql file.
  await DB.exec(`CREATE TABLE entries (id INTEGER PRIMARY KEY, value TEXT)`);
});

// Each describe block &amp; each test gets its own view of the data.
describe('with an empty DB', () =&gt; {
  it('should report 0 entries', async () =&gt; {
    await Worker.fetch(...)
  })
  it('should allow new entries', async () =&gt; {
    await Worker.fetch(...)
  })
})

// Use beforeAll &amp; beforeEach inside describe blocks to set up particular DB states for a set of tests
describe('with two entries in the DB', () =&gt; {
  beforeEach(async () =&gt; {
    await DB.prepare(`INSERT INTO entries (value) VALUES (?), (?)`)
            .bind('aaa', 'bbb')
            .run()
  })
  // Now, all tests will run with a DB with those two values
  it('should report 2 entries', async () =&gt; {
    await Worker.fetch(...)
  })
  it('should not allow duplicate entries', async () =&gt; {
    await Worker.fetch(...)
  })
})</code></pre>
            <p>All the databases for tests are run in-memory, so these are lightning fast. And fast, reliable testing is a big part of building maintainable real-world apps, so we’re thrilled to extend that to D1.</p>
    <div>
      <h2>Want access to the private beta?</h2>
      <a href="#want-access-to-the-private-beta">
        
      </a>
    </div>
    <p>Feeling inspired?</p><p>We love to see what our beta users build, or want to build, especially when our products are at an early stage. As we march toward an open beta, we’ll be looking specifically for your feedback. We are slowly letting more folks into the beta, but if you haven’t received your “golden ticket” with access yet, sign up <a href="https://www.cloudflare.com/lp/d1/">here</a>! Once you’ve been invited in, you’ll receive an official welcome email.</p><p>As always, happy building!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">nODp0eoC5szCr7aW59sde</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
        <item>
            <title><![CDATA[Going originless with Cloudflare Workers – Building a Todo app – Part 1: The API]]></title>
            <link>https://blog.cloudflare.com/workers-todo-part-1/</link>
            <pubDate>Wed, 21 Sep 2022 13:45:00 GMT</pubDate>
            <description><![CDATA[ Custom Domains are now Generally Available! Today we go through Part 1 in a series on building completely serverless applications on Cloudflare’s Developer Platform ]]></description>
            <content:encoded><![CDATA[ <p>A few months ago we launched Custom Domains into an <a href="/custom-domains-for-workers/">open beta</a>. Custom Domains allow you to hook up your Workers to the Internet without having to deal with DNS records or certificates – just enter a valid hostname and Cloudflare will do the rest! The beta’s over, and Custom Domains are now GA.</p><p>Custom Domains aren’t just about a seamless developer experience; they also allow you to build a globally distributed, instantly scalable application on Cloudflare’s Developer Platform. That’s because Workers leveraging Custom Domains have no concept of an ‘Origin Server’. There’s no ‘home’ to phone home to – and that means your application can use the power of Cloudflare’s global network to run, well, everywhere. It’s truly serverless.</p>
    <div>
      <h3>Let’s build “Todo”, but without the servers</h3>
      <a href="#lets-build-todo-but-without-the-servers">
        
      </a>
    </div>
    <p>Today we’ll start a series of posts outlining a simple todo list application. We’ll start with an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a> and hook it up to the Internet using Custom Domains.</p><p>With Custom Domains, you’re treating the whole network as the application server. Any time a request comes into a Cloudflare data center, Workers are triggered in that data center and connect to resources across the network as needed. As developers, we don’t need to think about regions, or replication, or spinning up the right number of instances to handle unforeseen load. Instead, just deploy your Workers and Cloudflare will handle the rest.</p><p>For our todo application, we begin by building an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">API Gateway</a> to perform routing, run any authorization checks, and drop invalid requests. We then fan out to each individual use case in a separate Worker, so our teams can independently make updates or add features to each endpoint without a full redeploy of the whole application. Finally, each Worker has a D1 binding to be able to create, read, update, and delete records from the database. All of this happens on Cloudflare’s global network, so your API is truly available everywhere. The architecture will look something like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qm175kv1nM4LmxTUhY48p/237e8c8839a15d139a8c6a689888eaa3/image2-24.png" />
            
            </figure>
    <div>
      <h3>Bootstrap the D1 Database</h3>
      <a href="#bootstrap-the-d1-database">
        
      </a>
    </div>
    <p>First off, we’re going to need a D1 database set up, with a schema for our todo application to run on. If you’re not familiar with D1, it’s <a href="https://www.cloudflare.com/developer-platform/products/d1/">Cloudflare’s serverless database offering</a> - explained in more detail <a href="/introducing-d1/">here</a>. To get started, we use the <code>wrangler d1</code> command to create a new database:</p>
            <pre><code>npx wrangler d1 create &lt;todo | custom-database-name&gt;</code></pre>
            <p>After executing this command, you will be asked to add a snippet of code to your <code><b>wrangler.toml</b></code> file that looks something like this:</p>
            <pre><code>[[ d1_databases ]]
binding = "db" # i.e. available in your Worker on env.db
database_name = "&lt;todo | custom-database-name&gt;"
database_id = "&lt;UUID&gt;"</code></pre>
            <p>Let’s save that for now, and we’ll put these into each of our private microservices in a few moments. Next, we’re going to create our database schema. It’s a simple todo application, so it’ll look something like this, with some seeded data:</p><p><b><i>db/schema.sql</i></b></p>
            <pre><code>DROP TABLE IF EXISTS todos;
CREATE TABLE todos (id INTEGER PRIMARY KEY, todo TEXT, todoStatus BOOLEAN NOT NULL CHECK (todoStatus IN (0, 1)));
INSERT INTO todos (todo, todoStatus) VALUES ("Fold my laundry", 0),("Get flowers for mum’s birthday", 0),("Find Nemo", 0),("Water the monstera", 1);</code></pre>
            <p>You can bootstrap your new D1 database by running:</p>
            <pre><code>npx wrangler d1 execute &lt;todo | custom-database-name&gt; --file=./schema.sql</code></pre>
            <p>Then validate your new data by running a query through Wrangler using the following command:</p>
            <pre><code>npx wrangler d1 execute &lt;todo | custom-database-name&gt; --command='SELECT * FROM todos';</code></pre>
            <p>Great! We’ve now got a database running entirely on Cloudflare’s global network.</p>
    <div>
      <h3>Build the endpoint Workers</h3>
      <a href="#build-the-endpoint-workers">
        
      </a>
    </div>
    <p>To talk to your database, we’ll spin up a series of private microservices for each endpoint in our application. We want to be able to create, read, update, delete, and list our todos. The full source code for each is available <a href="https://github.com/kabirsikand/worker-todos-api">here</a>. Below is code from a Worker that lists all our todos from D1.</p><p><b><i>list/src/list.js</i></b></p>
            <pre><code>export default {
   async fetch(request, env) {
     const { results } = await env.db.prepare(
       "SELECT * FROM todos"
     ).all();
     return Response.json(results);
   },
 };</code></pre>
            <p>The Worker ‘todo-list’ needs to be able to access D1 from the environment variable <code>db</code>. To do this, we’ll define the D1 binding in our <code>wrangler.toml</code> file. We also specify that workers_dev is false, preventing a preview from being generated via workers.dev (we want this to be a <i>private</i> microservice).</p><p><b><i>list/wrangler.toml</i></b></p>
            <pre><code>name = "todo-list"
main = "src/list.js"
compatibility_date = "2022-09-07"
workers_dev = false
usage_model = "unbound"

[[d1_databases]]
binding = "db" # i.e. available in your Worker on env.db
database_name = "&lt;todo | custom-database-name&gt;"
database_id = "UUID"</code></pre>
            <p>Finally, use <code>wrangler publish</code> to deploy this microservice.</p>
            <pre><code>todo/list on ∞main [!] 
› wrangler publish
 ⛅️ wrangler 0.0.0-893830aa
-----------------------------------------------------------------------
Retrieving cached values for account from ../../../node_modules/.cache/wrangler
Your worker has access to the following bindings:
- D1 Databases:
  - db: todo (UUID)
Total Upload: 4.71 KiB / gzip: 1.60 KiB
Uploaded todo-list (0.96 sec)
No publish targets for todo-list (0.00 sec)</code></pre>
            <p>Notice that wrangler mentions there are no ‘publish targets’ for todo-list. That’s because we haven’t hooked todo-list up to any HTTP endpoints. That’s fine! We’re going to use Service Bindings to route requests through a gateway worker, as described in the architecture diagram above.</p><p>Next, reuse these steps to create similar microservices for each of our create, read, update, and delete endpoints. The source code is available to <a href="https://github.com/kabirsikand/worker-todos-api">follow along</a>.</p>
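            <p>For illustration, here’s a sketch of what the create endpoint might look like. This is a hypothetical version, not the exact source from the repo linked above; it assumes the same <code>db</code> binding, and uses a prepared statement with <code>.bind()</code> so user input is never interpolated into the SQL string.</p><p><b><i>create/src/create.js</i></b></p>
            <pre><code>const worker = {
  async fetch(request, env) {
    // This microservice only handles POST /
    if (request.method !== 'POST') {
      return new Response('Method not allowed', { status: 405 })
    }
    const { todo } = await request.json()
    if (!todo) {
      return new Response('Missing "todo" field', { status: 400 })
    }
    // Bound parameters keep user input out of the SQL string
    await env.db
      .prepare('INSERT INTO todos (todo, todoStatus) VALUES (?, 0)')
      .bind(todo)
      .run()
    return new Response('Created', { status: 201 })
  },
}

export default worker</code></pre>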
    <div>
      <h3>Tying it all together with an API Gateway</h3>
      <a href="#tying-it-all-together-with-an-api-gateway">
        
      </a>
    </div>
    <p>Each of our Workers is able to talk to the D1 database, but how can our application talk to our API? We’ll build out a simple API gateway to route incoming requests to the appropriate microservice. For the purposes of our application, we’re using a combination of URL pathname and request method to determine which endpoint is appropriate.</p><p><b><i>gateway/src/gateway.js</i></b></p>
            <pre><code>export default {
  async fetch(request, env) {
    try {
      const url = new URL(request.url)
      const idPattern = new URLPattern({ pathname: '/:id' })
      if (idPattern.test(request.url)) {
        switch (request.method) {
          case 'GET':
            return await env.get.fetch(request.clone())
          case 'PATCH':
            return await env.update.fetch(request.clone())
          case 'DELETE':
            return await env.delete.fetch(request.clone())
          default:
            return new Response("Unsupported method for endpoint /:id", { status: 405 })
        }
      } else if (url.pathname == '/') {
        switch (request.method) {
          case 'GET':
            return await env.list.fetch(request.clone())
          case 'POST':
            return await env.create.fetch(request.clone())
          default:
            return new Response("Unsupported method for endpoint /", { status: 405 })
        }
      }
      return new Response("Not found. Supported endpoints are /:id and /", { status: 404 })
    } catch (e) {
      return new Response(e.message, { status: 500 })
    }
  },
};</code></pre>
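            <p>Because each service binding is just an object with a <code>fetch()</code> method, the dispatch logic above is easy to smoke-test locally by stubbing <code>env</code>. The sketch below is a local test harness, not production code: it swaps <code>URLPattern</code> for a regular expression so it runs under plain Node.js (the Workers runtime provides <code>URLPattern</code> natively).</p>
            <pre><code>// Stub service bindings: each returns a Response naming the service hit
const stub = (name) => ({ fetch: async () => new Response(name) })
const env = {
  get: stub('get'), update: stub('update'), delete: stub('delete'),
  list: stub('list'), create: stub('create'),
}

// Same dispatch shape as the gateway, with a regex standing in for
// URLPattern({ pathname: '/:id' })
async function route(request, env) {
  const url = new URL(request.url)
  if (/^\/[^/]+$/.test(url.pathname)) {
    const handlers = { GET: env.get, PATCH: env.update, DELETE: env.delete }
    const svc = handlers[request.method]
    return svc ? svc.fetch(request.clone()) : new Response('Unsupported method', { status: 405 })
  }
  if (url.pathname === '/') {
    const handlers = { GET: env.list, POST: env.create }
    const svc = handlers[request.method]
    return svc ? svc.fetch(request.clone()) : new Response('Unsupported method', { status: 405 })
  }
  return new Response('Not found', { status: 404 })
}</code></pre>
            <p>Calling <code>route(new Request('https://example.com/42', { method: 'DELETE' }), env)</code> hits the stubbed delete service, while an unknown path falls through to the 404.</p>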
            <p>With our API gateway all set, we just need to expose our application to the Internet using a Custom Domain, and hook up our Service Bindings, so the gateway Worker can access each appropriate microservice. We’ll set this up in a <code>wrangler.toml</code>.</p><p><b><i>gateway/wrangler.toml</i></b></p>
            <pre><code>name = "todo-gateway"
main = "src/gateway.js"
compatibility_date = "2022-09-07"
workers_dev = false
usage_model = "unbound"

routes = [
  { pattern = "todos.radiobox.tv", custom_domain = true, zone_name = "radiobox.tv" }
]

services = [
  { binding = "get", service = "todo-get" },
  { binding = "delete", service = "todo-delete" },
  { binding = "create", service = "todo-create" },
  { binding = "update", service = "todo-update" },
  { binding = "list", service = "todo-list" }
]</code></pre>
            <p>Next, use <code>wrangler publish</code> to deploy your application to the Cloudflare network. Seconds later, you’ll have a simple, functioning todo API built entirely on Cloudflare’s Developer Platform!</p>
            <pre><code>› wrangler publish
 ⛅️ wrangler 0.0.0-893830aa
-----------------------------------------------------------------------
Retrieving cached values for account from ../../../node_modules/.cache/wrangler
Your worker has access to the following bindings:
- Services:
  - get: todo-get
  - delete: todo-delete
  - create: todo-create
  - update: todo-update
  - list: todo-list
Total Upload: 1.21 KiB / gzip: 0.42 KiB
Uploaded todo-gateway (0.62 sec)
Published todo-gateway (0.51 sec)
  todos.radiobox.tv (custom domain - zone name: radiobox.tv)</code></pre>
            
    <div>
      <h3>Natively Global</h3>
      <a href="#natively-global">
        
      </a>
    </div>
    <p>Since it’s built natively on Cloudflare, you can also put Cloudflare’s security suite in front of the application. To help <a href="https://www.cloudflare.com/learning/security/threats/how-to-prevent-sql-injection/">prevent SQL injection attacks</a>, we can enable the appropriate Managed WAF rules on our todos API endpoint. Alternatively, if we wanted to restrict global access to our API (only allowing privileged clients to access the application), we can simply put Cloudflare Access in front, with custom Access Rules.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3i5ZY1TUMPyiUuvZ6Uggy9/4341c43b2022b5820c0eb002802e18be/image3-15.png" />
            
            </figure><p>With Custom Domains on Workers, you can instantly create applications that are native to Cloudflare’s global network. Best of all, your developers don’t need to worry about maintaining DNS records or certificate renewal; Cloudflare handles it all on their behalf. We’d like to give a huge shout out to the 5,000+ developers who used Custom Domains during the open beta period, and those who gave feedback along the way to make this possible. We can’t wait to see what you build next! As always, if you have any questions or would like to get involved, please join us on <a href="http://discord.gg/cloudflaredev">Discord</a>.</p><p>Tune in next time to see how we can build a frontend for our application. In the meantime, you can play around with the todos API we built today at <a href="https://todos.radiobox.tv"><code>todos.radiobox.tv</code></a>.</p>
    ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3pIn020UceWWDhjyEQnm08</guid>
            <dc:creator>Kabir Sikand</dc:creator>
        </item>
    </channel>
</rss>