
We made Workers KV up to 3x faster — here’s the data

2024-09-26

7 min read

Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance they abandon it. The old adage remains as true as ever: faster websites result in higher conversion rates. And with such outcomes tied to Internet speed, we believe a faster Internet is a better Internet.

Customers often use Workers KV to provide Workers with key-value data for configuration, routing, personalization, experimentation, or serving assets. Many of Cloudflare’s own products rely on KV for just this purpose: Pages stores static assets, Access stores authentication credentials, AI Gateway stores routing configuration, and Images stores configuration and assets, among others. So KV’s speed affects the latency of every request to an application, throughout the entire lifecycle of a user session. 

Today, we’re announcing up to 3x faster KV hot reads, with all KV operations faster by up to 20ms. And we want to pull back the curtain and show you how we did it. 


Workers KV read latency (ms) by percentile measured from Pages

Optimizing Workers KV’s architecture to minimize latency

At a high level, Workers KV is itself a Worker that makes requests to central storage backends, with many layers in between to properly cache and route requests across Cloudflare’s network. You can rely on Workers KV to support operations made by your Workers at any scale, and KV’s architecture will seamlessly handle your required throughput. 


Sequence diagram of a Workers KV operation

When your Worker makes a read operation to Workers KV, your Worker establishes a network connection within its Cloudflare region to KV’s Worker. The KV Worker then accesses the Cache API, and in the event of a cache miss, retrieves the value from the storage backends. 
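For reference, such a read is a single call on the KV binding in your application code. Here is a minimal sketch of what that looks like from the Worker's side, using the standard KV binding API; the binding name `CONFIG_KV` and the key are placeholders:

```ts
// Minimal Worker reading a value through a KV binding.
// CONFIG_KV is a placeholder binding name, configured in wrangler.toml.
export interface Env {
  CONFIG_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // This single call follows the path described above:
    // binding -> KV Worker -> cache -> (on a miss) storage backends.
    const config = await env.CONFIG_KV.get("routing-config", "json");
    if (config === null) {
      return new Response("Not found", { status: 404 });
    }
    return Response.json(config);
  },
};
```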

Let’s look one level deeper at a simplified trace: 


Simplified trace of a Workers KV operation

From the top, here are the operations completed for a KV read operation from your Worker:

  1. Your Worker makes a connection to Cloudflare’s network in the same data center. This incurs ~5 ms of network latency.

  2. Upon entering Cloudflare’s network, a service called Front Line (FL) is used to process the request. This incurs ~10 ms of operational latency.

  3. FL proxies the request to the KV Worker. The KV Worker does a cache lookup for the key being accessed. This, once again, passes through the Front Line layer, incurring an additional ~10 ms of operational latency.

  4. Cache is stored in various backends within each region of Cloudflare’s network. A service built upon Pingora, our open-sourced Rust framework for proxying HTTP requests, routes the cache lookup to the proper cache backend.

  5. Finally, if the cache lookup is successful, the KV read operation is resolved. Otherwise, the request reaches our storage backends, where it gets its value.

Looking at this trace, a major opportunity became apparent: reducing the FL overhead (or eliminating it altogether) and reducing cache misses across the Cloudflare network would lower the latency of every KV operation.
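As a quick sanity check on those numbers, here is the latency budget they imply for a cache-hit read on the old path. This is illustrative arithmetic only, using the approximate per-hop figures above:

```ts
// Illustrative latency budget for a cache-hit read on the old path,
// using the approximate per-hop figures from the trace above.
const workerToNetworkMs = 5; // Worker -> Cloudflare network, same data center
const flInboundMs = 10;      // FL processing en route to the KV Worker
const flToCacheMs = 10;      // FL traversed again between KV Worker and cache

const flOverheadMs = flInboundMs + flToCacheMs;
console.log(`FL overhead per read: ~${flOverheadMs} ms`); // ~20 ms
// Eliminating both FL traversals is the ~20 ms saving described below.
```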

Bypassing FL layers between Workers and services to save ~20 ms

A request from your Worker to KV doesn’t need to go through FL. Much of FL’s responsibility is to process and route requests from outside of Cloudflare — that’s more than is needed to handle a request from the KV binding to the KV Worker. So we skipped the Front Line altogether in both layers.

Reducing latency in a Workers KV operation by removing FL layers

To bypass the FL layer from the KV binding in your Worker, we modified the KV binding to connect directly to the KV Worker within the same Cloudflare location. Within the Workers host, we configured a C++ subpipeline to allow code from bindings to establish a direct connection with the proper routing configuration and authorization loaded. 
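The C++ subpipeline itself is internal to the Workers host, but the effect is conceptually similar to what the public service bindings feature offers: one Worker calling another directly, without the request re-entering the edge proxy path. A hedged sketch of that public analogue follows; the `KV_WORKER` binding name and URL are placeholders, not the internal mechanism:

```ts
// Public analogue of the direct connection: a service binding lets one
// Worker invoke another directly, skipping the front-line proxy path.
// KV_WORKER and the URL are placeholders, not the internal mechanism.
export interface Env {
  KV_WORKER: Fetcher;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Direct Worker-to-Worker call within the same Cloudflare location:
    return env.KV_WORKER.fetch("https://kv.example.internal/get?key=routing-config");
  },
};
```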

The KV Worker also passes through the FL layer on its way to our internal Pingora service. In this case, we were able to use an internal Worker binding that allows Workers for Cloudflare services to bind directly to non-Worker services within Cloudflare’s network. With this fix, the KV Worker sets the proper cache control headers and establishes its connection to Pingora without leaving the network. 
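Setting cache control headers before writing to cache resembles the read-through pattern available in the public Workers Cache API, sketched below. This illustrates the general pattern, not KV's internal code; the 60-second max-age is an arbitrary example value:

```ts
// Read-through caching with explicit Cache-Control headers, in the style
// of the public Workers Cache API. A sketch of the pattern only, not
// KV's internal implementation; max-age=60 is an arbitrary example.
async function readThroughCache(request: Request): Promise<Response> {
  const cache = caches.default;

  let response = await cache.match(request);
  if (!response) {
    // Cache miss: fetch from the backing service, then store the result.
    const fetched = await fetch(request);
    response = new Response(fetched.body, fetched);
    response.headers.set("Cache-Control", "public, max-age=60");
    await cache.put(request, response.clone());
  }
  return response;
}
```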

Together, both of these changes reduced latency by ~20 ms for every KV operation. 

Implementing tiered cache to minimize requests to storage backends

We also optimized KV’s architecture to reduce the number of requests that need to reach our centralized storage backends. These storage backends are further away and incur network latency, so improving the cache hit rate in regions close to your Workers significantly improves read latency.


Workers KV uses Tiered Cache to resolve operations closer to your users

To accomplish this, we used Tiered Cache and implemented a cache topology fine-tuned to KV’s usage patterns. With a tiered cache, requests to KV’s storage backends are cached in regional (upper) tiers in addition to local (lower) tiers. With this architecture, KV operations that miss the cache locally may still be resolved regionally, which is especially significant if your traffic spans multiple Cloudflare data centers across a region.
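Conceptually, the lookup becomes a simple fall-through: try the local tier, then the regional tier, and only then the central storage backends, backfilling the tiers on the way out. The sketch below illustrates that flow; the `CacheTier` interface and the backfill details are illustrative assumptions, not KV's actual implementation:

```ts
// Conceptual fall-through for a tiered cache read. The CacheTier
// interface and tier names are hypothetical, for illustration only.
interface CacheTier {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

async function tieredRead(
  key: string,
  localTier: CacheTier,    // cache in the same data center
  regionalTier: CacheTier, // upper tier shared across the region
  storage: CacheTier,      // central storage backends (furthest away)
): Promise<string | null> {
  const local = await localTier.get(key);
  if (local !== null) return local;

  // Local miss: try the regional tier before going to central storage.
  const regional = await regionalTier.get(key);
  if (regional !== null) {
    await localTier.put(key, regional); // backfill the local tier
    return regional;
  }

  // Regional miss: read from storage and populate both tiers.
  const value = await storage.get(key);
  if (value !== null) {
    await regionalTier.put(key, value);
    await localTier.put(key, value);
  }
  return value;
}
```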

This significantly reduced the number of requests that needed to hit the storage backends, with ~30% of requests now resolved in the tiered cache instead of the storage backends.

KV’s new architecture

As a result of these optimizations, KV operations are now simplified:

  1. When you read from KV in your Worker, the KV binding binds directly to KV’s Worker, saving 10 ms. 

  2. The KV Worker binds directly to the Tiered Cache service, saving another 10 ms. 

  3. Tiered Cache is used in front of storage backends, to resolve local cache misses regionally, closer to your users.


Sequence diagram of KV operations with new architecture

In aggregate, these changes significantly reduced KV’s latency. The impact of the direct binding to cache is clearly visible in the wall time of the KV Worker, since this value measures the duration of retrieving a key-value pair from cache. The 90th percentile of all KV Worker invocations now resolves in less than 12 ms, down from 22 ms before the direct binding to cache. That’s a 10 ms decrease in latency.


Wall time (ms) within the KV Worker by percentile

These KV read operations resolve quickly because the data is cached locally in the same Cloudflare location. But what about reads that aren’t resolved locally? ~30% of these resolve regionally within the tiered cache. Reads from tiered cache are up to 100 ms faster than when resolved at central storage backends, once again contributing to making KV reads faster in aggregate.


Wall time (ms) within the KV Worker for tiered cache vs. storage backends reads

These graphs demonstrate the impact of the direct binding from the KV Worker to cache, together with tiered cache. To see the impact of the direct binding from your Worker to the KV Worker, we need to look at the latencies reported by Cloudflare products that use KV.

Cloudflare Pages, which serves static assets like HTML, CSS, and scripts from KV, saw load times for fetching assets improve by up to 68%. Workers asset hosting, which we also announced today as part of Builder Day, gets this improved performance from day one.


Workers KV read operation latency measured within Cloudflare Pages by percentile

Queues and Access also saw their latencies for KV operations drop, with their KV read operations now 2-5x faster. These services rely on Workers KV data for configuration and routing data, so KV’s performance improvement directly contributes to making them faster on each request. 


Workers KV read operation latency measured within Cloudflare Queues by percentile


Workers KV read operation latency measured within Cloudflare Access by percentile

These are just some of the direct effects that a faster KV has had on other services. Across the board, requests are resolving faster thanks to KV’s faster response times. 

And we have one more thing to make KV lightning fast. 

Optimizing KV’s hottest keys with an in-memory cache 

Less than 0.03% of keys account for nearly half of requests to the Workers KV service across all namespaces. These keys are read thousands of times per second, so making these faster has a disproportionate impact. Could these keys be resolved within the KV Worker without needing additional network hops?

Almost all of these keys hold values under 100 KB. At this size, it becomes possible to use the in-memory cache of the KV Worker: a limited amount of memory within the main runtime process of a Worker sandbox. And that’s exactly what we did. For the highest-throughput keys across Workers KV, reads resolve without even needing to leave the Worker runtime process.

Sequence diagram of KV operations with the hottest keys resolved within an in-memory cache
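This works because module-level state in a Worker persists across requests served by the same isolate. The sketch below shows the general technique as an application developer could apply it today; the TTL, size cap, and `CONFIG_KV` binding name are illustrative assumptions, not KV's actual policy:

```ts
// In-memory hot-key cache inside a Worker. Module-level state survives
// across requests handled by the same isolate, so repeat reads of a hot
// key can resolve without leaving the runtime process.
// The size cap echoes the 100 KB observation above; the 30 s TTL and
// CONFIG_KV binding name are illustrative assumptions.
const MAX_VALUE_LENGTH = 100 * 1024; // ~100 KB (UTF-16 code units, approximate)
const TTL_MS = 30_000;

const hotCache = new Map<string, { value: string; expiresAt: number }>();

export interface Env {
  CONFIG_KV: KVNamespace;
}

export async function getHot(env: Env, key: string): Promise<string | null> {
  const entry = hotCache.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    return entry.value; // served from memory, sub-millisecond
  }

  const value = await env.CONFIG_KV.get(key);
  if (value !== null && value.length <= MAX_VALUE_LENGTH) {
    hotCache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  }
  return value;
}
```

The change described here applies the same idea inside the KV Worker itself, so applications get the benefit without writing any caching code of their own.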

As a result of these changes, KV reads for these keys, which represent over 40% of Workers KV requests globally, resolve in under a millisecond. We’re actively testing these changes internally and expect to roll this out during October to speed up the hottest key-value pairs on Workers KV.

A faster KV for all

Most of these speed gains are already enabled with no additional action needed from customers. Your websites that are using KV are already responding to requests faster for your users, as are the other Cloudflare services using KV under the hood and the countless websites that depend upon them. 

And we’re not done: we’ll continue to chase performance throughout our stack to make your websites faster. That’s how we’re going to move the needle towards a faster Internet. 

To see Workers KV’s recent speed gains for your own KV namespaces, head over to your dashboard and check out the new KV analytics, with latency and cache status detailed per namespace.

Birthday Week, Cloudflare Workers KV, Developer Platform, Developers, Performance

Follow on X

Rob Sutter|@rts_rob
Cloudflare|@cloudflare
