
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 20:03:32 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How Cloudflare is powering the next generation of platforms with Workers]]></title>
            <link>https://blog.cloudflare.com/powering-platforms-on-workers/</link>
            <pubDate>Thu, 18 May 2023 13:00:01 GMT</pubDate>
            <description><![CDATA[ Workers for Platforms is our Workers offering for customers building new platforms on Cloudflare Workers. Let’s take a look back and recap why we built Workers for Platforms, show you some of the most interesting problems our customers have been solving and share new features that are now available! ]]></description>
<content:encoded><![CDATA[ <p>We launched Workers for Platforms, our Workers offering for SaaS businesses, almost exactly one year ago to the day! We’ve seen a wide array of customers using Workers for Platforms – from e-commerce to CMS, low-code/no-code platforms, and a new wave of AI businesses running tailored inference models for their end customers!</p><p>Let’s take a look back and recap why we built Workers for Platforms, show you some of the most interesting problems our customers have been solving and share new features that are now available!</p>
    <div>
      <h2>What is Workers for Platforms?</h2>
      <a href="#what-is-workers-for-platforms">
        
      </a>
    </div>
    <p>SaaS businesses are all too familiar with the never-ending need to keep up with their users' feature requests. Thinking back, Workers was introduced at Cloudflare to solve this very pain point. Workers gave our customers the power to program our network to meet their specific requirements!</p><p>Need to implement complex load balancing across many origins? <i>Write a Worker.</i> Want a custom set of WAF rules for each region your business operates in? <i>Go crazy, write a Worker.</i></p><p>We heard the same themes coming up with our customers – which is why we partnered with early customers to build Workers for Platforms. We worked with the Shopify Oxygen team early on in their journey to create a built-in hosting platform for Hydrogen, their Remix-based eCommerce framework. Shopify’s Hydrogen/Oxygen combination gives their merchants the flexibility to build out personalized shopping for buyers. It’s an experience that storefront developers can make their own, and it’s powered by Cloudflare Workers behind the scenes. For more details, check out Shopify’s “<a href="https://shopify.engineering/how-we-built-oxygen">How we Built Oxygen</a>” blog post.</p><blockquote><p><i>Oxygen is Shopify's built-in hosting platform for Hydrogen storefronts, designed to provide users with a seamless experience in deploying and managing their ecommerce sites. Our integration with Workers for Platforms has been instrumental to our success in providing fast, globally-available, and secure storefronts for our merchants. The flexibility of Cloudflare's platform has allowed us to build delightful merchant experiences that integrate effortlessly with the best that the Shopify ecosystem has to offer.</i> - <b>Lance Lafontaine</b>, Senior Developer, <a href="https://shopify.dev/docs/custom-storefronts/oxygen">Shopify Oxygen</a></p></blockquote><p>Another customer that we’ve been working very closely with is Grafbase. 
Grafbase started out on the <a href="https://www.cloudflare.com/forstartups/?ref=blog.cloudflare.com">Cloudflare for Startups</a> program, building their company from the ground up on Workers. Grafbase gives their customers the ability to deploy serverless GraphQL backends instantly. On top of that, their developers can build custom GraphQL resolvers to program their own business logic right at the edge. Using Workers and Workers for Platforms means that Grafbase can focus their team on building Grafbase, rather than having to focus on building and architecting at the infrastructure layer.</p><blockquote><p><i>Our mission at Grafbase is to enable developers to deploy globally fast GraphQL APIs without worrying about complex infrastructure. We provide a unified data layer at the edge that accelerates development by providing a single endpoint for all your data sources. We needed a way to deploy serverless GraphQL gateways for our customers with fast performance globally without cold starts. We experimented with container-based workloads and FaaS solutions, but turned our attention to WebAssembly (Wasm) in order to achieve our performance targets. We chose Rust to build the Grafbase platform for its performance, type system, and its Wasm tooling. Cloudflare Workers was a natural fit for us given our decision to go with Wasm. On top of using Workers to build our platform, we also wanted to give customers the control and flexibility to deploy their own logic. Workers for Platforms gave us the ability to deploy customer code written in JavaScript/TypeScript or Wasm straight to the edge.</i> - <b>Fredrik Björk</b>, Founder &amp; CEO at <a href="https://grafbase.com/">Grafbase</a></p></blockquote><p>Over the past year, it’s been incredible seeing the velocity that building on Workers allows companies both big and small to move at.</p>
    <div>
      <h2>New building blocks</h2>
      <a href="#new-building-blocks">
        
      </a>
    </div>
    <p>Workers for Platforms uses <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/learning/how-workers-for-platforms-works/#dynamic-dispatch-worker">Dynamic Dispatch</a> to give our customers, like Shopify and Grafbase, the ability to run their own Worker before the user code written by their developers is executed. With Dynamic Dispatch, Workers for Platforms customers (referred to as platform customers) can authenticate requests, add context to a request, or run any custom code before their developers’ Workers (referred to as user Workers) are called.</p><p>This is a key building block for Workers for Platforms, but we’ve also heard requests for even more levels of visibility and control from our platform customers. Delivering on this theme, we’re releasing three new highly requested features:</p>
    <div>
      <h3>Outbound Workers</h3>
      <a href="#outbound-workers">
        
      </a>
    </div>
    <p>Dynamic Dispatch gives platforms visibility into all incoming requests to their users’ Workers, but customers have also asked for visibility into all outgoing requests from their users’ Workers in order to do things like:</p><ul><li><p>Log all subrequests in order to identify malicious hosts or usage patterns</p></li><li><p>Create allow or block lists for hostnames requested by user Workers</p></li><li><p>Configure authentication to your APIs behind the scenes (without end developers needing to set credentials)</p></li></ul><p>Outbound Workers sit between user Workers and their fetch() requests out to the Internet. User Workers will trigger a <a href="https://maddy-outbound-workers.cloudflare-docs-7ou.pages.dev/workers/runtime-apis/fetch-event/">FetchEvent</a> on the Outbound Worker, and from there platform customers have full visibility over the request before it’s sent out.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74ISEbqaIX58uc6z88wV0s/28a040fdb9e044b8420dbf7930bf27aa/image1-48.png" />
            
            </figure><p>It’s also important to have context in the Outbound Worker to answer questions like “which user Worker is this request coming from?”. You can declare variables to pass through to the Outbound Worker in the dispatch namespaces binding:</p>
            <pre><code>[[dispatch_namespaces]]
binding = "dispatcher"
namespace = "&lt;NAMESPACE_NAME&gt;"
outbound = {service = "&lt;SERVICE_NAME&gt;", parameters = [customer_name,url]}</code></pre>
            <p>From there, the variables declared in the binding can be accessed in the Outbound Worker through <code>env.&lt;VAR_NAME&gt;</code>.</p>
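            <p>For illustration, a minimal Outbound Worker might look something like the sketch below. The allow list, hostnames, and logging behavior are assumptions for the example, not from the post; <code>env.customer_name</code> is one of the parameters declared in the binding above:</p>

```javascript
// Hypothetical Outbound Worker sketch: it receives each fetch() a user
// Worker makes. The allow list below is illustrative, not a real API.
const ALLOWED_HOSTS = ["api.example.com", "cdn.example.com"];

const worker = {
  async fetch(request, env) {
    const host = new URL(request.url).hostname;
    // Log every subrequest with the originating customer for auditing.
    console.log(`${env.customer_name} -> ${host}`);
    if (!ALLOWED_HOSTS.includes(host)) {
      // Block hostnames outside the allow list.
      return new Response("host not allowed", { status: 403 });
    }
    // Otherwise let the request continue out to the Internet.
    return fetch(request);
  },
};

export default worker;
```

            <p>The same shape could instead inject credentials onto the outgoing request before forwarding it, so end developers never handle your API keys.</p>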
    <div>
      <h3>Custom Limits</h3>
      <a href="#custom-limits">
        
      </a>
    </div>
    <p>Workers are really powerful but, as a platform, you may want guardrails around their capabilities to shape your pricing and packaging model. For example, if you run a freemium model on your platform, you may want to set a lower CPU time limit for customers on your free tier.</p><p>Custom Limits let you set usage caps for CPU time and the number of subrequests on your customers’ Workers. Custom Limits are set from within your Dynamic Dispatch Worker, allowing them to be dynamically scripted. They can also be combined to set limits based on <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/tags/">script tags</a>.</p><p>Here’s an example of a Dynamic Dispatch Worker that puts both Outbound Workers and Custom Limits together:</p>
            <pre><code>export default {
  async fetch(request, env) {
    try {
      let workerName = new URL(request.url).host.split('.')[0];
      let userWorker = env.dispatcher.get(
        workerName,
        {},
        {
          // outbound arguments
          outbound: {
            customer_name: workerName,
            url: request.url,
          },
          // set limits
          limits: { cpuMs: 10, subRequests: 5 },
        }
      );
      return await userWorker.fetch(request);
    } catch (e) {
      if (e.message.startsWith('Worker not found')) {
        return new Response('', { status: 404 });
      }
      return new Response(e.message, { status: 500 });
    }
  },
};</code></pre>
            <p>They’re both incredibly simple to configure, and the best part – the configuration is completely programmatic. You have the flexibility to build on both of these features with your own custom logic!</p>
    <div>
      <h3>Tail Workers</h3>
      <a href="#tail-workers">
        
      </a>
    </div>
    <p>Live logging is an essential piece of the developer experience. It allows developers to monitor for errors and troubleshoot in real time. On Workers, giving users real-time logs through <code>wrangler tail</code> is a feature that developers love! Now with Tail Workers, platform customers can give their users the same level of visibility to provide a faster debugging experience.</p><p>Tail Worker logs contain metadata about the original trigger event (like the incoming URL and status code for fetches) and console.log() messages, and capture any unhandled exceptions. Tail Workers can be added to the Dynamic Dispatch Worker in order to capture logs from both the Dynamic Dispatch Worker and any user Workers that are called.</p><p>A Tail Worker can be configured by adding the following to the <code>wrangler.toml</code> file of the producing script:</p>
            <pre><code>tail_consumers = [{service = "&lt;TAIL_WORKER_NAME&gt;", environment = "&lt;ENVIRONMENT_NAME&gt;"}]</code></pre>
            <p>From there, events are captured in the Tail Worker using a new tail handler:</p>
            <pre><code>export default {
  async tail(events) {
    fetch("https://example.com/endpoint", {
      method: "POST",
      body: JSON.stringify(events),
    });
  },
};</code></pre>
            <p>Tail Workers are full-fledged Workers with access to the usual Workers ecosystem. You can send events to any HTTP endpoint, such as a logging service that parses the events and passes real-time logs on to customers.</p>
    <div>
      <h2>Try it out!</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>All three of these features are now in open beta for users with access to Workers for Platforms. For more details, and to try them out for yourself, check out our developer documentation:</p><ul><li><p><a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/outbound-workers/">Outbound Workers</a></p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/custom-limits/">Custom Limits</a></p></li><li><p><a href="https://developers.cloudflare.com/workers/platform/tail-workers">Tail Workers</a></p></li></ul><p>Workers for Platforms is an enterprise-only product (for now), but we’ve heard a lot of interest from developers. In the latter half of the year, we’ll be bringing Workers for Platforms to our pay-as-you-go plan! In the meantime, if you’re itching to get started, reach out to us through the <a href="https://discord.cloudflare.com/">Cloudflare Developer Discord</a> (channel name: workers-for-platforms).</p>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6KYE0S5UOjVcVum73JwOp0</guid>
            <dc:creator>Nathan Disidore</dc:creator>
            <dc:creator>Tanushree Sharma</dc:creator>
        </item>
        <item>
            <title><![CDATA[From 0 to 20 billion - How We Built Crawler Hints]]></title>
            <link>https://blog.cloudflare.com/from-0-to-20-billion-how-we-built-crawler-hints/</link>
            <pubDate>Thu, 16 Dec 2021 13:58:29 GMT</pubDate>
            <description><![CDATA[ Cloudflare is reducing the environmental impact of web searches, with more than 20 billion crawler hints delivered so far. This blog describes how we built the Crawler Hints system that makes it all possible. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xs3odwiEbW2X4uv6wNNdA/a08cd0796a54029678fd143edc84e4d6/image2-57.png" />
            
            </figure><p>In July 2021, as part of <a href="/welcome-to-cloudflare-impact-week/">Impact Innovation Week</a>, we announced our intention to launch <a href="/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/">Crawler Hints</a> as a means to reduce the environmental impact of web searches. We spent the weeks following the announcement hard at work, and in October 2021, we announced <a href="/cloudflare-now-supports-indexnow/">General Availability for the first iteration of the product</a>. This post explains how we built it, some of the interesting engineering problems we had to solve, and shares some metrics on how it's going so far.</p>
    <div>
      <h2>Before We Begin...</h2>
      <a href="#before-we-begin">
        
      </a>
    </div>
    <p>Search indexers crawl sites periodically to check for new content. Algorithms vary by search provider, but are often based on either a regular interval or cadence of past updates, and these crawls are often not aligned with real world content changes. This naive crawling approach may harm customer page rank and also works to the detriment of search engines with respect to their operational costs and environmental impact. To make the Internet greener and more energy efficient, the goal of Crawler Hints is to help search indexers make more informed decisions on when content has changed, saving valuable compute cycles/bandwidth and having a net positive environmental impact.</p><p>Cloudflare is in an advantageous position to help inform crawlers of content changes, as we are often the “front line” of the interface between site visitors and the origin server where the content updates take place. This grants us knowledge of some key data points like headers, content hashes, and site purges among others. For customers who have opted in to Crawler Hints, we leverage this data to generate a “content freshness score” using an ensemble of active and passive signals from our customer base and request flow. To help with efficiency, Crawler Hints helps to improve SEO for websites behind Cloudflare, improves relevance for search engine users, and improves origin responsiveness by reducing bot traffic to our customers’ origin servers.</p><p>A high level design of the system we built looks as follows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kSfCs22YboeTPDLdbffsG/b21c528beaa8579a0d6e2e3b874fccca/image3-42.png" />
            
            </figure><p>In this blog we will dig into each aspect of it in more detail.</p>
    <div>
      <h2>Keeping Things Fresh</h2>
      <a href="#keeping-things-fresh">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/network/">Cloudflare has a large global network spanning 250 cities</a>. A popular use case for Cloudflare is to use our CDN product to cache your website's assets so that users accessing your site can benefit from lightning-fast response times. You can read more about how Cloudflare manages our cache <a href="/why-we-started-putting-unpopular-assets-in-memory/">here</a>. The important thing to call out for the purpose of this post is that the cache is data center local. A cache hit in London might be a cache miss in San Francisco, unless you have opted in to <a href="/orpheus/">tiered caching</a>, but that is beyond the scope of this post.</p><p>For Crawler Hints to work, we make use of a number of signals available at request time to make an informed decision on the “freshness” of content. For our first iteration of Crawler Hints, we used a cache miss from Cloudflare’s cache as a starting basis. Although a naive signal on its own, getting the data pipelines in place to forward cache miss data from our global network to our control plane meant we would have everything in place to iterate on and improve the signal processing quickly going forward. To do this, we leveraged some existing services from our data team that take request data, marshal it into <a href="https://capnproto.org/">Cap'n Proto format</a>, and forward it to a message bus (we use Apache Kafka). These messages include the URLs of the resources that have met the signal criteria, along with some additional metadata for analytics/future improvement.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XeWU8Bx1SuIOajaVbu3Kd/76edad32d2eff51af6b2d9887ea2825e/image4-34.png" />
            
            </figure><p>The amount of traffic our global network receives is substantial. We serve over 28 million HTTP requests per second on average, with more than 35 million HTTP requests per second at peak. Typically, Cloudflare teams sample this data to enable products <a href="/get-notified-when-your-site-is-under-attack/">such as being alerted when you are under attack</a>. For Crawler Hints, however, every cache miss is important, so 100% of cache misses for opted-in sites are sent for further processing (more on opt-in later).</p>
    <div>
      <h2>Redis as a Distributed Buffer</h2>
      <a href="#redis-as-a-distributed-buffer">
        
      </a>
    </div>
    <p>With messages buffered in Kafka, we can now begin the work of aggregation and deduplication. We wrote a consumer service that we call an ingestor. The ingestor reads the data from Kafka, performs validation to ensure proper sanitization and data integrity, and passes the data on to the next stage of the system. We run the ingestor as part of a Kafka consumer group, allowing us to scale our consumer count up to the partition size as throughput increases.</p><p>We ultimately want to deliver a set of “fresh” content to our search partners on a dynamic interval. For example, we might want to send a batch of 10,000 URLs every two minutes. There are, however, a couple of important things to call out:</p><ul><li><p>There should be no duplicate resources in each batch.</p></li><li><p>We should strike a balance in our size and frequency such that overall request size isn’t too large, but big enough to remove some pressure on the receiving API by not sending too <i>many</i> requests at once.</p></li></ul><p>For the deduplication, the simplest thing to do would be to have an in-memory map in our service to track resources within a pre-specified interval. A naive implementation in Go might look something like this.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vlmka7b85QzcVxziLLGdR/b3669b1bbd3fd4108d94ffa8052b2f8d/image5-22.png" />
            
            </figure><p>The problem with this approach is we have little resilience. If the service were to crash, we would lose all the data for our current batch. Furthermore, if we were to run multiple instances of our services, they would all have a different “view” of which resources they had seen before, and therefore we would not be deduplicating. To mitigate this issue, we decided to use a specialist caching service. There are a number of distributed caches that would fit the bill, but we chose Redis given our team’s familiarity with operating it at scale.</p><p>Redis is well known as a Key Value (KV) store often used for caching things, optionally with a specified Time To Live (TTL). Perhaps slightly less obvious is its value as a distributed buffer, housing ephemeral data with periodic flush/tear-downs. For Crawler Hints, we leveraged both these traits via a multi-generational, multi-cluster setup to achieve a highly available rolling aggregation service.</p><p>Two standalone Redis clusters were spun up. For each generation of request data, one cluster would be designated as the active primary. The validated records would be inserted as keys on the primary, serving the dual purpose of buffering while also deduplicating, since Redis keys are unique. Separately, a downstream service (more on this later!) would periodically issue the command for these inserters to switch from the active primary (cluster A) to the inactive cluster (cluster B). Cluster A could then be flushed, with records being batch read in a size of our choosing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4IYn9s1nmM2Qcci2c5A8cn/21069dec8e9b95ec87e17d8b2d31a767/image9-5.png" />
            
            </figure>
    <div>
      <h2>Buffering for Dispatch</h2>
      <a href="#buffering-for-dispatch">
        
      </a>
    </div>
    <p>At this point, we have clean, batched data. Things are looking good! However, there’s one small hiccup in the plan: we’re reading these batches from Redis at some set interval. What if it takes longer to dispatch a batch than the interval itself? What if the search partner API is having issues?</p><p>We need a way to ensure the durability of the batched URLs and reduce the impact of any dispatch issues. To do this, we revisit an old friend from earlier: Kafka. The batches that get read from Redis are fed into a Kafka topic. We wrote a Kafka consumer that we call the “dispatcher service”, which runs within a consumer group so that we can scale it if necessary, just like the ingestor. The dispatcher reads from the Kafka topic and sends a batch of resources to each of our API partners.</p><p>Crawler Hints launched in tandem with IndexNow, a joint venture between Cloudflare and a few early adopters in the search engine space to provide a means for sites to inform indexers of content changes. <a href="/cloudflare-now-supports-indexnow/">You can read more about this launch here.</a> IndexNow is a large part of what makes Crawler Hints possible. As part of its manifest, it provides a common API spec to publish resources that should be re-indexed. The standardized API makes abstracting the communication layer quite simple for the partners that support it. “Pushing” these signals to our search engine partners is a big step away from the inefficient “pull”-based model that is used today (you can read more about that <a href="/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/">here</a>). We launched with Yandex and Bing as search engine partners.</p><p>To ensure we can add more partners in the future, we defined an interface that we call a “Hinter”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7K0wQWVx2hciGxZVwFMxM6/a3b6f0272225bbddd393465592c98721/image8-12.png" />
            
            </figure><p>We then satisfy this interface for each partner that we work with. We return a custom error from the Hinter service of type *indexers.Error, the definition of which is:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Aq6eqraSEg1TpQ2b2QJK9/e763eb8303e0d1fdef3807ae58503cda/image1-84.png" />
            
            </figure><p>This allows us to “bubble up” information about which indexer has failed, increment metrics, and retry only those calls to the indexers that failed.</p><p>This all culminates together with the following in our service layer:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fwN6ZGTRJ5NARBrJWLfOq/a69a7ef3147e17c02de3c99e0f92ee95/image6-21.png" />
            
            </figure><p>Simple, performant, maintainable, AND easy to add more partners in the future.</p>
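            <p>A hedged reconstruction of what that interface and error type might look like (only Hinter and *indexers.Error are named in the post; fields, helpers, and the stub partner are assumptions):</p>

```go
package main

import "fmt"

// Hinter is the per-partner abstraction described above: anything that
// can accept a batch of changed URLs. Each IndexNow partner gets its
// own implementation.
type Hinter interface {
	Hint(urls []string) error
}

// IndexerError plays the role of *indexers.Error: it records which
// indexer failed so metrics can be incremented and only that partner's
// batch retried.
type IndexerError struct {
	Indexer string
	Err     error
}

func (e *IndexerError) Error() string {
	return fmt.Sprintf("indexer %s: %v", e.Indexer, e.Err)
}

func (e *IndexerError) Unwrap() error { return e.Err }

// hintAll fans a batch out to every partner and collects typed failures
// so the caller can retry just those indexers.
func hintAll(hinters map[string]Hinter, urls []string) []*IndexerError {
	var failed []*IndexerError
	for name, h := range hinters {
		if err := h.Hint(urls); err != nil {
			failed = append(failed, &IndexerError{Indexer: name, Err: err})
		}
	}
	return failed
}

// stubHinter simulates a partner API for the example.
type stubHinter struct{ fail bool }

func (s stubHinter) Hint([]string) error {
	if s.fail {
		return fmt.Errorf("HTTP 500")
	}
	return nil
}

func main() {
	failed := hintAll(
		map[string]Hinter{"bing": stubHinter{}, "yandex": stubHinter{fail: true}},
		[]string{"https://example.com/changed"},
	)
	fmt.Println(len(failed)) // 1
}
```

            <p>With this shape, onboarding a new partner is just another Hinter implementation registered in the map.</p>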
    <div>
      <h2>Rolling out Crawler Hints</h2>
      <a href="#rolling-out-crawler-hints">
        
      </a>
    </div>
    <p>At Cloudflare, we often release things that haven’t been done before at scale. This project is a great example of that. Trying to gauge how many users would be interested in this product and what the uptake might be like on day one, day ten, and day one thousand is close to impossible. As engineers responsible for running this system, it is essential we build in checks and balances so that the system does not become overwhelmed and responds appropriately. For this particular project, there are three different types of “protection” we put in place. These are:</p><ul><li><p>Customer opt-in</p></li><li><p>Monitoring &amp; Alerts</p></li><li><p>System resilience via “self-healing”</p></li></ul>
    <div>
      <h3>Customer opt-in</h3>
      <a href="#customer-opt-in">
        
      </a>
    </div>
    <p>Cloudflare takes any changes that can impact customer traffic flow seriously. Considering Crawler Hints has the potential to change how sites are seen externally (even if in this instance the site’s viewers are robots!) and can impact things like SEO and bandwidth usage, asking customers to opt in is a sensible default. By asking customers to opt in to the service, we can start to get an understanding of our system’s capacity and look for bottlenecks and how to remove them. To do this, we make extensive use of Prometheus, Grafana, and Kibana.</p>
    <div>
      <h3>Monitoring &amp; Alerting</h3>
      <a href="#monitoring-alerting">
        
      </a>
    </div>
    <p>We do our best to make our systems as “self-healing” and easy to run as possible, but as they say, “By failing to prepare, you are preparing to fail.” We therefore invest a lot of time creating ways to track the health and performance of our system and creating automated alerts when things fall outside of expected bounds.</p><p>Below is a small sample of the Grafana dashboard we created for this project. As you can see, we can track customer enablement and the rate of hint dispatch in real time. The bottom two panels show the throughput of our Kafka clusters by partition. Even just these four metrics give us a lot of insight into how things are going, but we also track (among other things):</p><ul><li><p>Lag on Kafka by partition (how far behind real time we are)</p></li><li><p>Bad messages received from Kafka</p></li><li><p>Number of URLs processed per “run”</p></li><li><p>Response code per index partner over time</p></li><li><p>Response time of partner APIs over time</p></li><li><p>Health of the Redis clusters (how much memory is used, frequency of commands received by the cluster)</p></li><li><p>Memory, CPU usage, and pods available against configured limits/requests</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bunGQ2Uvk8bZ5jdRz3t6G/08a155b3ec60b9bf44218a4d2e12b774/image7-12.png" />
            
            </figure><p>It may seem like a lot to track, but this information is invaluable to us, and we use it to generate alerts that notify the on-call engineer if a threshold is breached. For example, we have an alert that escalates to an engineer if our Redis cluster approaches 80% capacity. For some thresholds we specify, we may want the system to “self-heal,” but in this instance we would want an engineer to investigate, as this is outside the bounds of “normal” and it might be that something is not working as expected. An alternative reason that we might receive alerts is that our product has increased in popularity beyond our expectations, and we simply need to increase the memory limit. This requires context and is therefore best left to a human to decide.</p>
    <div>
      <h3>System Resilience via “self-healing”</h3>
      <a href="#system-resilience-via-self-healing">
        
      </a>
    </div>
    <p>We do everything we can to not disturb on-call engineers, and therefore we try to make the system as “self-healing” as possible. We also don’t want to have too much extra resource running, as it can be expensive and use limited capacity that another Cloudflare service might need; it’s a trade-off. To do this, we make use of a few patterns and tools common in every distributed engineer’s toolbelt. Firstly, we deploy on Kubernetes. This enables us to make use of great features like <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaling</a>. When any of our pods reaches ~80% memory usage, a new pod is created to pick up some of the slack, up to a predefined limit.</p><p>Secondly, by using a message bus, we get a lot of control over the amount of “work” our services have to do in a given time frame. In general, a message bus is “pull” based. If we want more work, we ask for it. If we want less work, we pull less. This holds for the most part, but with a system where being close to real time is important, it is essential that we monitor the “lag” of the topic, or how far we are behind real time. If we are too far behind, we may want to introduce more partitions or consumers.</p><p>Finally, networks fail. We therefore add retry policies to all HTTP calls we make before reporting them as failures. For example, if we were to receive a 500 (Internal Server Error) from one of our partner APIs, we would retry up to five times using an exponential backoff strategy before reporting a failure.</p>
    <div>
      <h2>Data from the first couple of months</h2>
      <a href="#data-from-the-first-couple-of-months">
        
      </a>
    </div>
    <p>From the release of Crawler Hints on October 18, 2021 to December 15, 2021, Crawler Hints processed over twenty-five billion crawl signals, was opted into by more than 81,000 customers, and handled roughly 18,000 requests per second. It’s been an exciting project to be a part of, and we are just getting started.</p>
    <div>
      <h2>What’s Next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We will continue to work with our partners to improve the standard even further and continue to improve the signaling on our side to ensure the most valuable information is being pushed on behalf of our customers in a timely manner.</p><p><b><i>If you're interested in building scalable services and solving interesting technical problems, we are hiring engineers on our team in</i></b> <a href="https://boards.greenhouse.io/cloudflare/jobs/3129759?gh_jid=3129759"><b><i>Austin</i></b></a><b><i>,</i></b> <a href="https://boards.greenhouse.io/cloudflare/jobs/3231716?gh_jid=3231716"><b><i>Lisbon</i></b></a><b><i>, and</i></b> <a href="https://boards.greenhouse.io/cloudflare/jobs/3231718?gh_jid=3231718"><b><i>London</i></b></a><b><i>.</i></b></p> ]]></content:encoded>
            <category><![CDATA[Crawler Hints]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[SEO]]></category>
            <guid isPermaLink="false">3WLRRaI7sl25i1KhmRq7YB</guid>
            <dc:creator>Matt Boyle</dc:creator>
            <dc:creator>Nathan Disidore</dc:creator>
            <dc:creator>Rajesh Bhatia</dc:creator>
        </item>
    </channel>
</rss>