Cloudflare Workers

Moving Baselime from AWS to Cloudflare: simpler architecture, improved performance, over 80% lower cloud costs

2024-10-31

Observability, Cloudflare Workers, Developer Platform, Performance

Post-acquisition, we migrated Baselime from AWS to the Cloudflare Developer Platform. In the process, we improved query times, simplified data ingestion, and can now handle far more events, all while cutting costs. Here’s how we built a modern, high-performing observability platform on Cloudflare’s network...

Build durable applications on Cloudflare Workers: you write the Workflows, we take care of the rest

2024-10-24

Developer Platform, Cloudflare Workers, Durable Objects, Workflows

Cloudflare Workflows is now in open beta! Workflows allows you to build reliable, repeatable, long-lived multi-step applications that can automatically retry, persist state, and scale out. Read on to learn how Workflows works, how we built it on top of Durable Objects, and how you can deploy your first Workflows application...
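To give a feel for the shape of the API before you read the post: a Workflow is a class extending WorkflowEntrypoint whose run method breaks work into named steps that the platform persists and retries independently. A minimal sketch in TypeScript; the class name, params, binding, and URLs below are hypothetical, not taken from the post:

```ts
// Hypothetical minimal Workflow. WorkflowEntrypoint, step.do, and step.sleep
// are the Workflows beta API; everything else is illustrative.
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = {};
type Params = { userId: string };

export class OnboardingWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // Each step is retried on failure; a completed step's result is persisted.
    const user = await step.do('fetch user', async () => {
      const resp = await fetch(`https://api.example.com/users/${event.payload.userId}`);
      return (await resp.json()) as { email: string };
    });

    // Durable sleep: the Workflow can pause for days without holding compute.
    await step.sleep('wait before follow-up', '1 day');

    await step.do('send follow-up email', async () => {
      await fetch('https://api.example.com/emails', {
        method: 'POST',
        body: JSON.stringify({ to: user.email }),
      });
    });
  }
}
```

Because each step.do result is persisted, a Workflow that fails and retries resumes after the last completed step instead of rerunning it from the start.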

MORE POSTS

October 24, 2024 1:00 PM

Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues

Learn how we built Cloudflare Queues using our own Developer Platform and how it evolved into a geographically distributed, horizontally scalable architecture built on Durable Objects. Our new architecture supports over 10x more throughput and over 3x lower latency compared to the ...
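The post covers the rewritten Durable Objects backend; for context, the public Queues API it serves is a producer/consumer pair like the sketch below, where the binding name MY_QUEUE and the message shape are made up for illustration:

```ts
// Hypothetical Worker using Cloudflare Queues. The send()/queue()/ack()
// calls are the public Queues API; the binding and message are illustrative.
export interface Env {
  MY_QUEUE: Queue;
}

export default {
  // Producer: enqueue a message from an HTTP handler.
  async fetch(req: Request, env: Env): Promise<Response> {
    await env.MY_QUEUE.send({ url: req.url, receivedAt: Date.now() });
    return new Response('enqueued', { status: 202 });
  },

  // Consumer: Queues delivers messages to this handler in batches.
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      console.log('processing', msg.body);
      msg.ack(); // acknowledge so the message is not redelivered
    }
  },
} satisfies ExportedHandler<Env>;
```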

September 26, 2024 9:00 PM

Builder Day 2024: 18 big updates to the Workers platform

To celebrate Builder Day 2024, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. This includes new capabilities, like running evals with AI Gateway, beta products like Queues and Vectorize graduating to GA, massive speed improvements to...

September 26, 2024 1:00 PM

Making Workers AI faster and more efficient: Performance optimization with KV cache compression and speculative decoding

With a new generation of data center accelerator hardware and optimization techniques such as KV cache compression and speculative decoding, we’ve made large language model (LLM) inference lightning-fast on the Cloudflare Workers AI platform...
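These optimizations land inside the platform's inference stack, so existing applications get faster without code changes. For reference, a Workers AI call from a Worker looks like the sketch below; env.AI.run() is the real binding API, while the model ID and prompt are just examples:

```ts
// Hypothetical Worker calling Workers AI via the AI binding.
export interface Env {
  AI: Ai;
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // KV cache compression and speculative decoding happen server-side;
    // callers only observe lower latency per generated token.
    const result = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
      prompt: 'Summarize speculative decoding in two sentences.',
    });
    return Response.json(result);
  },
} satisfies ExportedHandler<Env>;
```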