
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies they use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 09:29:11 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Improve global upload performance with R2 Local Uploads]]></title>
            <link>https://blog.cloudflare.com/r2-local-uploads/</link>
            <pubDate>Tue, 03 Feb 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ Local Uploads on R2 reduces upload request duration by up to 75%. It writes object data to a nearby location and asynchronously copies it to your bucket, while keeping the data immediately available. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are launching<b> Local Uploads</b> for R2 in <b>open beta</b>. With <a href="https://developers.cloudflare.com/r2/buckets/local-uploads/"><u>Local Uploads</u></a> enabled, object data is automatically written to a storage location close to the client first, then asynchronously copied to where the bucket lives. The data is immediately accessible and stays <a href="https://developers.cloudflare.com/r2/reference/consistency/"><u>strongly consistent</u></a>. Uploads get faster, and data feels global.</p><p>For many applications, performance needs to be global. Users uploading media content from different regions, for example, or devices sending logs and telemetry from all around the world. But your data has to live somewhere, and that means uploads from far away have to travel the full distance to reach your bucket. </p><p><a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a> is <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/"><u>object storage</u></a> built on Cloudflare's global network. Out of the box, it automatically caches object data globally for fast reads anywhere — all while retaining strong consistency and zero <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/"><u>egress fees</u></a>. This happens behind the scenes whether you're using the <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3</a> API, Workers Bindings, or plain HTTP. And now with Local Uploads, both reads and writes can be fast from anywhere in the world.</p><p>Try it yourself <a href="https://local-uploads.r2-demo.workers.dev/"><u>in this demo</u></a> to see the benefits of Local Uploads.</p><p>Ready to try it? 
Enable Local Uploads in the <a href="https://dash.cloudflare.com/?to=/:account/r2/overview"><u>Cloudflare Dashboard</u></a> under your bucket's settings, or with a single Wrangler command on an existing bucket.</p>
            <pre><code>npx wrangler r2 bucket local-uploads enable [BUCKET]</code></pre>
            
    <div>
      <h2>75% lower total request duration for global uploads</h2>
      <a href="#75-lower-total-request-duration-for-global-uploads">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/r2/buckets/local-uploads"><u>Local Uploads</u></a> makes upload requests (e.g. PutObject, UploadPart) faster. In both our private beta tests with customers and our synthetic benchmarks, we saw up to a 75% reduction in Time to Last Byte (TTLB) when upload requests are made in a different region than the bucket. In these results, TTLB is measured from when R2 receives the upload request to when R2 returns a 200 response.</p><p>We measured the impact of Local Uploads with a synthetic workload that simulates a cross-region upload workflow. We deployed a test client in Western North America and configured an R2 bucket with a <a href="https://developers.cloudflare.com/r2/reference/data-location/"><u>location hint</u></a> for Asia-Pacific. The client performed around 20 PutObject requests per second over 30 minutes to upload 5 MB objects. </p><p>The following graph compares the p50 (or median) TTLB for these requests, showing the difference in upload request duration: first without Local Uploads (TTLB around 2s), and then with Local Uploads enabled (TTLB around 500ms): </p>
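The p50 figures above are standard percentiles over the measured TTLB samples. As a quick illustration, here is a nearest-rank percentile sketch in TypeScript (the sample values are made up for the example, not the benchmark data):

```typescript
// Compute the p-th percentile of TTLB samples using the nearest-rank method.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Nearest rank: smallest index covering p percent of the sorted samples.
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Illustrative cross-region TTLB samples (milliseconds).
const p50 = percentile([1800, 2100, 1950, 2200, 2050], 50);
```

With the illustrative samples above, `p50` comes out to 2050 ms, in line with the "around 2s" cross-region figure before Local Uploads.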
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4uvSdPwflyjHohwLQvOKsu/4b82637a5ac29ceee0fc37e04ab0107f/image1.png" />
          </figure>
    <div>
      <h2>How it works: The distance problem</h2>
      <a href="#how-it-works-the-distance-problem">
        
      </a>
    </div>
    <p>To understand how Local Uploads can improve upload requests, let’s first take a look at <a href="https://developers.cloudflare.com/r2/how-r2-works/"><u>how R2 works</u></a>. R2's architecture is composed of multiple components, including:</p><ul><li><p><b>R2 Gateway Worker: </b>The entry point for all API requests that handles authentication and routing logic. It is deployed across Cloudflare's global network via <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a>.</p></li><li><p><b>Durable Object Metadata Service: </b>A distributed layer built on <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> used to store and manage object metadata (e.g. object key, checksum).</p></li><li><p><b>Distributed Storage Infrastructure: </b>The underlying infrastructure that persistently stores encrypted object data.</p></li></ul><p>Without Local Uploads, here’s what happens when you upload objects to your bucket: The request is first received by the R2 Gateway, close to the user, where it is authenticated. Then, as the client streams bytes of the object data, the data is encrypted and written into the storage infrastructure in the region where the bucket is placed. When this is completed, the Gateway reaches out to the Metadata Service to publish the object metadata, and returns a success response to the client once the metadata is committed.</p><p>If the client and the bucket are in separate regions, the object data must travel a longer distance, which introduces more latency and variability and can result in slower or less reliable uploads. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6toAZ6JSHPv2jgntdyCOvr/704f6837d2705f18a0e5b8554994cb7a/image9.png" />
          </figure><p><sup>A client uploading from Eastern North America to a bucket in Eastern Europe without Local Uploads enabled. </sup></p><p>Now, when you make an upload request to a bucket with Local Uploads enabled, there are two cases: </p><ol><li><p>The client and the bucket are in the <b>same</b> region</p></li><li><p>The client and the bucket are in <b>different</b> regions</p></li></ol><p>In the first case, R2 follows the regular flow, where object data is written to the storage infrastructure for your bucket. In the second case, R2 writes to the storage infrastructure located in the client region while still publishing the object metadata to the region of the bucket.</p><p>Importantly, the object is immediately accessible after the initial write completes. It remains accessible throughout the entire replication process — there's <b>no</b> <b>waiting period</b> for background replication to finish before the object can be read.</p>
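The two-case routing above can be summarized in a few lines. This is an illustrative sketch, not R2's actual internal logic; the region names and field names are assumptions for the example:

```typescript
// Sketch of the Local Uploads routing decision described above.
interface UploadPlan {
  writeRegion: string;       // where object bytes are first written
  metadataRegion: string;    // metadata is always published to the bucket's region
  needsReplication: boolean; // true when a background copy to the bucket region is required
}

function planLocalUpload(clientRegion: string, bucketRegion: string): UploadPlan {
  const crossRegion = clientRegion !== bucketRegion;
  return {
    writeRegion: crossRegion ? clientRegion : bucketRegion,
    metadataRegion: bucketRegion,
    needsReplication: crossRegion,
  };
}
```

Either way, metadata lands in the bucket's region, which is why the object stays strongly consistent and immediately readable.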
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33oUAdlGF8cWOeQhha6Ocy/68537e503f1ec8d1dd080db363f97dc3/image3.png" />
          </figure><p><sup>A client uploading from Eastern North America to a bucket in Eastern Europe with Local Uploads enabled. </sup></p><p>Note that Local Uploads is not available for buckets with a jurisdiction restriction (e.g. EU, FedRAMP) enabled.</p>
    <div>
      <h2>When to use Local Uploads</h2>
      <a href="#when-to-use-local-uploads">
        
      </a>
    </div>
    <p>Local Uploads is built for workloads that receive many upload requests originating from geographic regions other than where your bucket is located. This feature is ideal when:</p><ul><li><p>Your users are globally distributed</p></li><li><p>Upload performance and reliability are critical to your application</p></li><li><p>You want to optimize write performance without changing your bucket's primary location</p></li></ul><p>To understand where your read and write requests originate, visit the <a href="https://dash.cloudflare.com/?to=/:account/r2/overview"><u>Cloudflare Dashboard</u></a>, open your R2 bucket’s Metrics page, and view the Request Distribution by Region graph. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SJ9UYY3RADryXmnT0J3Vq/9b26c948e925a705387a64c24a1dd7e3/image7.png" />
          </figure>
    <div>
      <h2>How we built Local Uploads</h2>
      <a href="#how-we-built-local-uploads">
        
      </a>
    </div>
    <p>With Local Uploads, object data is written close to the client and then copied to the bucket's region in the background. We call this copy job a replication task.</p><p>These replication tasks need to be processed asynchronously, which is a great use case for <a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queues</u></a>. Queues allow us to control the rate at which we process replication tasks, and they provide built-in failure handling capabilities like <a href="https://developers.cloudflare.com/queues/configuration/batching-retries/"><u>retries</u></a> and <a href="https://developers.cloudflare.com/queues/configuration/dead-letter-queues/"><u>dead letter queues</u></a>. R2 shards replication tasks across multiple queues per storage region.</p>
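Sharding tasks across multiple queues per region needs a stable assignment so that a given object always lands on the same queue. A minimal sketch of one way to do this (the FNV-1a hash and shard count are illustrative assumptions, not R2's actual scheme):

```typescript
// Deterministically assign a replication task to one of N queues for a region.
function pickQueueShard(objectKey: string, shardsPerRegion: number): number {
  // FNV-1a keeps the assignment stable across processes for the same key.
  let h = 0x811c9dc5;
  for (let i = 0; i < objectKey.length; i++) {
    h ^= objectKey.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % shardsPerRegion;
}
```

A stable hash means retries of the same task reach the same queue, while different keys spread load across shards.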
    <div>
      <h3>Publishing metadata and scheduling replication</h3>
      <a href="#publishing-metadata-and-scheduling-replication">
        
      </a>
    </div>
    <p>When publishing the metadata of an object with Local Uploads enabled, we perform three operations atomically:</p><ol><li><p>Store the object metadata</p></li><li><p>Create a pending replica key that tracks which replications still need to happen</p></li><li><p>Create a replication task marker keyed by timestamp, which controls when the task should be sent to the queue</p></li></ol><p>The pending replica key contains the full replication plan: the number of replication tasks, which source location to read from, which destination location to write to, the replication mode and priority, and whether the source should be deleted after successful replication.</p><p>This gives us flexibility in how we move an object's data. For example, moving data across long geographical distances is expensive. We could try to move all the replicas as fast as possible by processing them in parallel, but this would incur greater cost and put pressure on the network infrastructure. Instead, we minimize the number of cross-regional data movements by first creating one replica in the target bucket region, and then using this local copy to create additional replicas within the bucket region.</p>
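The "one cross-region copy, then local fan-out" strategy can be sketched as a plan builder. The field names below are illustrative, not R2's actual replication-plan schema:

```typescript
// A single copy step within the replication plan.
interface ReplicationTask {
  source: string;
  destination: string;
  crossRegion: boolean;
  deleteSourceAfter: boolean;
}

// Build a plan with exactly one expensive cross-region copy; all remaining
// replicas are created from that in-region copy.
function buildReplicationPlan(
  sourceRegion: string,
  bucketRegion: string,
  replicasInBucketRegion: number,
): ReplicationTask[] {
  const tasks: ReplicationTask[] = [
    { source: sourceRegion, destination: bucketRegion, crossRegion: true, deleteSourceAfter: false },
  ];
  for (let i = 1; i < replicasInBucketRegion; i++) {
    // Intra-region fan-out from the copy made by the first task.
    tasks.push({ source: bucketRegion, destination: bucketRegion, crossRegion: false, deleteSourceAfter: false });
  }
  return tasks;
}
```

However many replicas the bucket region needs, only the first task crosses a region boundary.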
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rCuA2zXR4ltZJsiNDBHd7/ae388f13ea27922158b27f429080c69c/image6.png" />
          </figure><p>A background process periodically scans the replication task markers and sends them to one of the queues associated with the destination storage region. The markers guarantee at-least-once delivery to the queue — if enqueueing fails or the process crashes, the marker persists and the task will be retried on the next scan. This also allows us to process replications at different times and enqueue only valid tasks. Once a replication task reaches a queue, it is ready to be processed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5G4STZSp67TnKhehzCqFMv/445a0c74ba7f4bc5dd3de04eb7aa1257/image4.png" />
          </figure>
    <div>
      <h3>Asynchronous replication: Pull model</h3>
      <a href="#asynchronous-replication-pull-model">
        
      </a>
    </div>
    <p>For the queue consumer, we chose a pull model where a centralized polling service consumes tasks from the regional queues and dispatches them to the Gateway Worker for execution.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2p6SHkqO1tT7wxdhPJCFCr/86f219af85e332813ede2eb95a3810d8/image2.png" />
          </figure><p>Here's how it works:</p><ol><li><p><b>Polling service pulls from a regional queue: </b>The consumer service polls the regional queue for replication tasks. It then groups tasks into batches of roughly uniform size based on the amount of data to be moved.</p></li><li><p><b>Polling service dispatches to Gateway Worker: </b>The consumer service sends the replication job to the Gateway Worker.</p></li><li><p><b>Gateway Worker executes replication: </b>The worker reads object data from the source location, writes it to the destination, and updates metadata in the Durable Object, optionally marking the source location to be garbage collected.</p></li><li><p><b>Gateway Worker reports result: </b>On completion, the worker returns the result to the poller, which acknowledges the task to the queue as completed or failed.</p></li></ol><p>This pull model keeps the replication process stable and efficient: the service can dynamically adjust its pace based on real-time system health, ensuring that data is safely replicated across regions.</p>
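The batching in step 1 can be sketched as a simple byte-capped grouping. This is an illustrative implementation of the idea, with an assumed per-batch size cap, not the polling service's actual code:

```typescript
// A replication task with the amount of data it will move.
interface Task { key: string; bytes: number; }

// Group tasks into batches whose total size stays under maxBatchBytes.
// A single oversized task still gets its own batch.
function batchBySize(tasks: Task[], maxBatchBytes: number): Task[][] {
  const batches: Task[][] = [];
  let current: Task[] = [];
  let currentBytes = 0;
  for (const t of tasks) {
    if (current.length > 0 && currentBytes + t.bytes > maxBatchBytes) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(t);
    currentBytes += t.bytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Uniform batch sizes make each dispatch to the Gateway Worker take a predictable amount of work, which is what lets the poller pace itself.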
    <div>
      <h2>Try it out</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Local Uploads is available now in open beta. There is <b>no additional cost</b> to enable Local Uploads. Upload requests made with this feature enabled incur the standard <a href="https://developers.cloudflare.com/r2/pricing/"><u>Class A operation costs</u></a>, same as upload requests made without Local Uploads.</p><p>To get started, visit the <a href="https://dash.cloudflare.com/?to=/:account/r2/overview"><u>Cloudflare Dashboard</u></a> under your bucket's settings and look for the Local Uploads card to enable, or simply run the following command using Wrangler to enable Local Uploads on a bucket.</p>
            <pre><code>npx wrangler r2 bucket local-uploads enable [BUCKET]</code></pre>
            <p>Enabling Local Uploads on a bucket is seamless: existing uploads will complete as expected and there’s no interruption to traffic.</p><p>For more information, refer to the <a href="https://developers.cloudflare.com/r2/buckets/local-uploads/"><u>Local Uploads documentation</u></a>. If you have questions or want to share feedback, join the discussion on our <a href="https://discord.gg/cloudflaredev"><u>Developer Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Storage]]></category>
            <guid isPermaLink="false">453lZMuYluqGqfRKADhf9K</guid>
            <dc:creator>Frank Chen</dc:creator>
            <dc:creator>Rahul Suresh</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[An AI Index for all our customers]]></title>
            <link>https://blog.cloudflare.com/an-ai-index-for-all-our-customers/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare will soon automatically create an AI-optimized search index for your domain, and expose a set of ready-to-use standard APIs and tools including an MCP server, LLMs.txt, and a search API. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re announcing the <b>private beta</b> of <b>AI Index </b>for domains on Cloudflare, a new type of web index that gives content creators the tools to make their data discoverable by AI, and gives AI builders access to better data for fair compensation.</p><p>With AI Index enabled on your domain, we will automatically create an AI-optimized search index for your website, and expose a set of ready-to-use standard APIs and tools including an MCP server, LLMs.txt, and a search API. Our customers will own and control that index and how it’s used, and you will have the ability to monetize access through <a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/"><u>Pay per crawl</u></a> and the new <a href="https://blog.cloudflare.com/x402/"><u>x402 integrations</u></a>. You will be able to use it to build modern search experiences on your own site, and more importantly, interact with external AI and Agentic providers to make your content more discoverable while being fairly compensated.</p><p>For AI builders—whether developers creating agentic applications, or AI platform companies providing foundational LLM models—Cloudflare will offer a new way to discover and retrieve web content: direct <b>pub/sub connections</b> to individual websites with AI Index. Instead of indiscriminate crawling, builders will be able to subscribe to specific sites that have opted in for discovery, receive structured updates as soon as content changes, and pay fairly for each access. Access is always at the discretion of the site owner.</p><p>From the individual indexes, Cloudflare will also build an aggregated layer, the <b>Open Index</b>, that bundles together participating sites. Builders get a single place to search across collections or the broader web, while every site still retains control and can earn from participation. </p>
    <div>
      <h3>Why build an AI Index?</h3>
      <a href="#why-build-an-ai-index">
        
      </a>
    </div>
    <p>AI platforms are quickly becoming one of the main ways people discover information online. Whether asking a chatbot to summarize a news article or find a product recommendation, the path to that answer almost always starts with crawling original content and indexing or using that data for training. However, today, that process is largely controlled by platforms: what gets crawled, how often, and whether the site owner has any input in the matter.</p><p>Although Cloudflare now offers tools to monitor and control how AI services access your content and whether they respect your access policies, it's still challenging to make new content visible. Content creators have no efficient way to signal to AI builders when a page is published or updated. On the other hand, for AI builders, crawling and recrawling unstructured content is costly and wastes resources, especially when you don’t know the quality and cost in advance.</p><p>We need a fairer and healthier ecosystem for content discovery and usage that bridges the gap between content creators and AI builders.</p>
    <div>
      <h3>How AI Index will work</h3>
      <a href="#how-ai-index-will-work">
        
      </a>
    </div>
    <p>When you onboard a domain to Cloudflare, or if you have an existing domain on Cloudflare, you will have the choice to enable an AI Index. If enabled, we will automatically create an AI-optimized search index for your domain that you own and control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kV7Oru6D5jPWeGeWDQDsi/7d738250f24250cf98db2e96222319ec/image1.png" />
          </figure><p>As your site updates and grows, the index will evolve with it. New or updated pages will be processed in real-time using the same technology that powers Cloudflare <a href="https://developers.cloudflare.com/ai-search/"><u>AI Search (formerly AutoRAG)</u></a> and its <a href="https://developers.cloudflare.com/ai-search/configuration/data-source/website/"><u>Website</u></a> data source. Best of all, we will manage everything; you won't have to manage the individual components: compute, storage, databases, embeddings, chunking, or AI models. Everything will happen behind the scenes, automatically.</p><p>Importantly, you will have control over what content to <b>include or exclude </b>from your website's index, and <b>who</b> can get access to your content via <b>AI</b> <b>Crawl Control</b>, ensuring that only the data you want to expose is made searchable and accessible. You also will be able to opt out of the AI Index completely; it will all be up to you.</p><p>When your AI Index is set up, you will get a set of ready-to-use APIs:</p><ul><li><p><b>An MCP Server: </b>Agentic applications will be able to connect directly to your site using the <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>Model Context Protocol (MCP)</u></a>, making your content discoverable to agents in a standardized way. This includes support for <a href="https://developers.cloudflare.com/ai-search/how-to/nlweb/"><u>NLWeb</u></a> tools, an open project developed by Microsoft that defines a standard protocol for natural language queries on websites.</p></li><li><p><b>A flexible search API: </b>This endpoint will return relevant results in structured JSON.</p></li><li><p><b>LLMs.txt and LLMs-full.txt: </b>Standard files that provide LLMs with a machine-readable map of your site, following <a href="https://github.com/AnswerDotAI/llms-txt"><u>emerging open standards</u></a>. These will help models understand how to use your site’s content at inference time. An example of <a href="https://developers.cloudflare.com/llms.txt"><u>llms.txt</u></a> exists in the Cloudflare Developer Documentation.</p></li><li><p><b>A bulk data API: </b>An endpoint for transferring large amounts of content efficiently, available under the rules you set. Instead of querying for every document, AI providers will be able to ingest in one shot.</p></li><li><p><b>Pub-sub subscriptions: </b>AI platforms will be able to subscribe to your site’s index and receive events and content updates directly from Cloudflare in a structured format in real-time, making it easy for them to stay current without re-crawling.</p></li><li><p><b>Discoverability directives:</b> Entries in robots.txt and well-known URIs that allow AI agents and crawlers visiting your site to discover and use the available APIs automatically.</p></li></ul>
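The llms.txt files mentioned above follow the emerging llms-txt proposal: an H1 title, an optional blockquote summary, and H2 sections of links. As a rough illustration of how a consumer might read one, here is a minimal parser sketch (simplified: it ignores link descriptions and nested structure):

```typescript
// Parsed view of an llms.txt file.
interface LlmsTxt {
  title: string;
  summary?: string;
  sections: Record<string, { name: string; url: string }[]>;
}

function parseLlmsTxt(text: string): LlmsTxt {
  const doc: LlmsTxt = { title: "", sections: {} };
  let section = "";
  for (const line of text.split("\n")) {
    if (line.startsWith("# ")) doc.title = line.slice(2).trim();
    else if (line.startsWith("> ")) doc.summary = line.slice(2).trim();
    else if (line.startsWith("## ")) {
      section = line.slice(3).trim();
      doc.sections[section] = [];
    } else {
      // Link entries look like "- [name](url): optional description".
      const m = line.match(/^- \[([^\]]+)\]\(([^)]+)\)/);
      if (m && section) doc.sections[section].push({ name: m[1], url: m[2] });
    }
  }
  return doc;
}
```

This structure is what lets a model (or an agent) get a map of a site's content in one fetch, before deciding which pages to retrieve.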
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Hr3EhsMBH0oVwMVKywwre/2a01efbe03d67a8154123b63c05c000f/image3.png" />
          </figure><p>The index will integrate directly with <a href="https://developers.cloudflare.com/ai-crawl-control/"><u>AI Crawl Control</u></a>, so you will be able to see who’s accessing your content, set rules, and manage permissions. And with <a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/"><u>Pay per crawl</u></a> and <a href="https://blog.cloudflare.com/x402/"><u>x402 integrations</u></a>, you can choose to directly monetize access to your content. </p>
    <div>
      <h3>A feed of the web for AI builders</h3>
      <a href="#a-feed-of-the-web-for-ai-builders">
        
      </a>
    </div>
    <p>As an AI builder, you will be able to discover and subscribe to high-quality, permissioned web data through individual sites’ AI indexes. Instead of sending crawlers blindly across the open Internet, you will connect via a pub/sub model: participating websites will expose structured updates whenever their content changes, and you will be able to subscribe to receive those updates in real-time. With this model, your new workflow may look something like this:</p><ol><li><p><b>Discover websites that have opted in: </b>Browse and filter through a directory of websites that make their indexes available through Cloudflare.</p></li><li><p><b>Evaluate content with metadata and metrics: </b>Get metadata on various quality metrics (e.g., uniqueness, depth, contextual relevance, popularity) before accessing the content.</p></li><li><p><b>Pay fairly for access:</b> When content is valuable, platforms can compensate creators directly through Pay per crawl. These payments not only enable access but also support the continued creation of original content, helping to sustain a healthier ecosystem for discovery.</p></li><li><p><b>Subscribe to updates: </b>Use pub-sub subscriptions to receive events about changes made by the website, so you know when to retrieve or crawl for new content without wasting resources on constant re-crawling. </p></li></ol><p>By shifting from blind crawling to a permissioned pub/sub system for the web, AI builders save time, cut costs, and gain access to cleaner, high-quality data while content creators remain in control and are fairly compensated.</p>
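To make step 4 concrete, here is a hypothetical sketch of a subscriber deciding which pages to re-fetch from update events instead of blindly re-crawling. The event shape here is invented for illustration; the actual feed format has not been published:

```typescript
// Hypothetical update event a subscriber might receive from the feed.
interface UpdateEvent { url: string; changedAt: number; }

// Return URLs that changed after the subscriber's last sync,
// collapsing repeated events for the same URL to the newest one.
function urlsToRefetch(events: UpdateEvent[], lastSyncAt: number): string[] {
  const latest = new Map<string, number>();
  for (const e of events) {
    const prev = latest.get(e.url) ?? 0;
    if (e.changedAt > prev) latest.set(e.url, e.changedAt);
  }
  return [...latest.entries()]
    .filter(([, t]) => t > lastSyncAt)
    .map(([url]) => url);
}
```

The point of the pub/sub model is exactly this: the builder fetches only what changed, rather than re-crawling an entire site on a schedule.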
    <div>
      <h3>The aggregated Open Index</h3>
      <a href="#the-aggregated-open-index">
        
      </a>
    </div>
    <p>Individual indexes provide AI platforms with the ability to access data directly from specific sites, allowing them to subscribe for updates, evaluate value, and pay for full content access on a per-site basis. But when builders need to work at a larger scale, managing dozens or hundreds of separate subscriptions can become complex. The <b>Open Index </b>will provide an additional option: a bundled, opt-in collection of those indexes, with filters for quality, uniqueness, originality, and depth of content, all accessible in one place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6rjkK5UCh9BLSqceUuG0RI/92413aed318baced0ee8812bec511cfb/image2.png" />
          </figure><p>The Open Index is designed to make content discovery at scale easier:</p><ul><li><p><b>Get unified access: </b>Query and retrieve data across many participating sites simultaneously. This reduces integration overhead and enables builders to plug into a curated collection of data, or use it as a ready-made web search layer that can be accessed at query time.</p></li><li><p><b>Discover broader scopes: </b>Work with topic-specific bundles (e.g., news, documentation, scientific research) or a general discovery index covering the broader web. This makes it simple to explore new content sources you may not have identified individually.</p></li><li><p><b>Bottom-up monetization: </b>Results still originate from an individual site’s AI index, with monetization flowing back to that site through Pay per crawl, helping preserve fairness and sustainability at scale.</p></li></ul><p>Together, per-site AI indexes and the Open Index will provide flexibility and precise control when you want full content from individual sites (i.e., for training, AI agents, or search experiences), and broad search coverage when you need a unified search across the web.</p>
    <div>
      <h3>How you can participate in the shift</h3>
      <a href="#how-you-can-participate-in-the-shift">
        
      </a>
    </div>
    <p>With AI Index and the Cloudflare Open Index, we’re creating a model where websites decide how their content is accessed, and AI builders receive structured, reliable data at scale to build a fairer and healthier ecosystem for content discovery and usage on the Internet.</p><p>We’re starting with a <b>private beta</b>. If you want to enroll your website into the AI Index or access the pub/sub web feed as an AI builder, you can <a href="https://www.cloudflare.com/aiindex-signup/"><b><u>sign up today</u></b></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Search]]></category>
            <category><![CDATA[MCP]]></category>
            <guid isPermaLink="false">7rcW6x4j6v7O6ZEHir5fmK</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Make Your Website Conversational for People and Agents with NLWeb and AutoRAG]]></title>
            <link>https://blog.cloudflare.com/conversational-search-with-nlweb-and-autorag/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ With NLWeb, an open project by Microsoft, and Cloudflare AutoRAG, conversational search is now a one-click setup for your website. ]]></description>
            <content:encoded><![CDATA[ <p>Publishers and content creators have historically relied on traditional keyword-based search to help users navigate their website’s content. However, traditional search is built on outdated assumptions: users type in keywords to indicate intent, and the site returns a list of links for the most relevant results. It’s up to the visitor to click around, skim pages, and piece together the answer they’re looking for. </p><p><a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/"><u>AI</u></a> has reset expectations and that paradigm is breaking: how we search for information has fundamentally changed.</p>
    <div>
      <h2>Your New Type of Visitors</h2>
      <a href="#your-new-type-of-visitors">
        
      </a>
    </div>
    <p>Users no longer want to search websites the old way. They’re used to interacting with AI systems like Copilot, Claude, and ChatGPT, where they can simply ask a question and get an answer. We’ve moved from search engines to answer engines. </p><p>At the same time, websites now have a new class of visitors, AI agents. Agents face the same pain with keyword search: they have to issue keyword queries, click through links, and scrape pages to piece together answers. But they also need more: a structured way to ask questions and get reliable answers across websites. This means that websites need a way to give the agents they trust controlled access, so that information is retrieved accurately.</p><p>Website owners need a way to participate in this shift.</p>
    <div>
      <h2>A New Search Model for the Agentic Web</h2>
      <a href="#a-new-search-model-for-the-agentic-web">
        
      </a>
    </div>
    <p>If AI has reset expectations, what comes next? To meet both people and agents where they are, websites need more than incremental upgrades to keyword search. They need a model that makes conversational access to content a first-class part of the web itself.</p><p>That’s what we want to deliver: combining an open standard (NLWeb) with the infrastructure (AutoRAG) to make it simple for any website to become AI-ready.</p><p><a href="https://news.microsoft.com/source/features/company-news/introducing-nlweb-bringing-conversational-interfaces-directly-to-the-web/"><u>NLWeb</u></a> is an open project developed by Microsoft that defines a standard protocol for natural-language queries on websites. Each NLWeb instance also operates as a Model Context Protocol (MCP) server. Cloudflare is building to this spec and actively working with Microsoft to extend the standard, with the goal of letting every site function like an AI app, so users and agents alike can query its contents naturally.</p><p><a href="https://developers.cloudflare.com/autorag/"><u>AutoRAG</u></a>, Cloudflare’s managed retrieval engine, can automatically crawl your website, store the content in R2, and embed it into a managed vector database. AutoRAG keeps the index fresh with continuous re-crawling and re-indexing. Model inference and embedding can be served through Workers AI. Each AutoRAG is paired with an AI Gateway that can provide <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability and insights</a> into your AI model usage. This gives you a <a href="https://www.cloudflare.com/learning/ai/how-to-build-rag-pipelines/">complete, managed pipeline</a> for conversational search without the burden of managing custom infrastructure.</p><blockquote><p><i>“Together, NLWeb and AutoRAG let publishers go beyond search boxes, making conversational interfaces for websites simple to create and deploy. 
This integration will enable every website to easily become AI-ready for both people and trusted agents.”</i> – R.V. Guha, creator of NLWeb, CVP and Technical Fellow at Microsoft. </p></blockquote><p>We are optimistic this will open up new monetization models for publishers:</p><blockquote><p><i>"The challenges publishers have faced are well known, as are the risks of AI accelerating the collapse of already challenged business models. However, with NLWeb and AutoRAG, there is an opportunity to reset the nature of relationships with audiences for the better. More direct engagement on Publisher Owned and Operated (O&amp;O) environments, where audiences value the brand and voice of the Publisher, means new potential for monetization. This would be the reset the entire industry needs."</i>  – Joe Marchese, General &amp; Build Partner at Human Ventures.</p></blockquote>
    <div>
      <h2>One-Click to Make Your Site Conversational</h2>
      <a href="#one-click-to-make-your-site-conversational">
        
      </a>
    </div>
    <p>By combining NLWeb’s standard with Cloudflare’s AutoRAG infrastructure, we’re making it possible to easily bring conversational search to any website.</p><p>Simply select your domain in AutoRAG, and it will crawl and index your site for semantic querying. It then deploys a Cloudflare Worker, which acts as the access layer. This Worker implements the NLWeb standard and UI defined by the <a href="https://github.com/nlweb-ai/NLWeb"><u>NLWeb project</u></a> and exposes your indexed content to both people and AI agents.

The Worker includes:</p><ul><li><p><b>`/ask` endpoint:</b> The defined standard for how conversational web searches should be served. Powers the conversational UI at the root `/` as well as the embeddable preview at `/snippet.html`. It supports chat history so queries can build on one another within the same session, and includes automatic query decontextualization to improve retrieval quality.</p></li><li><p><b>`/mcp` endpoint: </b>Implements an MCP server that trusted AI agents can connect to for structured access.</p></li></ul><p>With this setup, your site content is immediately available in two ways for you to experiment: through a conversational UI that you can serve to your visitors, and through a structured MCP interface that lets trusted agents query your site reliably on your terms.</p><p>Additionally, if you prefer to deploy and host your own version of the NLWeb project, there’s also the option to use AutoRAG as the retrieval engine powering the <a href="https://github.com/nlweb-ai/NLWeb/blob/main/docs/setup-cloudflare-autorag.md"><u>NLWeb instance</u></a>.</p>
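<p>To illustrate, a client could call the deployed Worker’s `/ask` endpoint with a plain HTTP request. This is a minimal sketch: the deployment URL is hypothetical, and the `query` parameter name is an assumption to verify against the NLWeb spec for your deployment.</p>
<pre><code>// Sketch: calling an NLWeb Worker's /ask endpoint over HTTP.
// The base URL is hypothetical; "query" is assumed to be the NLWeb
// protocol's query parameter — confirm against the NLWeb project docs.
function buildAskUrl(baseUrl: string, query: string): string {
  const url = new URL("/ask", baseUrl);
  url.searchParams.set("query", query);
  return url.toString();
}

async function askSite(baseUrl: string, query: string) {
  const response = await fetch(buildAskUrl(baseUrl, query));
  if (!response.ok) throw new Error("/ask request failed: " + response.status);
  return response.json(); // conversational answer plus result items
}</code></pre>
<p>For example, <code>askSite("https://my-site.example.workers.dev", "what plans do you offer?")</code> would issue the same request that the conversational UI at the root `/` makes on a visitor’s behalf.</p>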
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SM7rSQDhoR4fH5KgAJPD7/2266dc2e3c80f3fcc7f17014eb1d0cf1/image5.png" />
          </figure>
    <div>
      <h2>How Your Site Becomes Conversational</h2>
      <a href="#how-your-site-becomes-conversational">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xkeREv3GwXwBZw52Dg6XQ/caeb587819d08eff53a33aa893032b78/image2.png" />
          </figure><p>From your perspective, making your site conversational is just a single click. Behind the scenes, AutoRAG spins up a full retrieval pipeline to make that possible:</p><ol><li><p><b>Crawling and ingestion: </b>AutoRAG explores your site like a search engine, following `sitemap.xml` and `robots.txt` files to understand what pages are available and allowed for crawling. From there, it follows your sitemap to discover pages within your domain (up to 100k pages). <a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Rendering</u></a> is used to load each page so that it can capture dynamic, JavaScript content. Crawled pages are downloaded into an <a href="https://developers.cloudflare.com/r2/"><u>R2 bucket</u></a> in your account before being ingested. </p></li><li><p><b>Continuous Indexing:</b> Once ingested, the content is parsed and embedded into <a href="https://developers.cloudflare.com/vectorize/"><u>Vectorize</u></a>, making it queryable beyond keyword matching through semantic search. AutoRAG automatically re-crawls and re-indexes to keep your knowledge base aligned with your latest content.</p></li><li><p><b>Access &amp; Observability: </b>A Cloudflare Worker is deployed in your account to serve as the access layer that implements the NLWeb protocol (you can also find the deployable Worker in the Workers <a href="https://github.com/cloudflare/templates"><u>templates repository</u></a>). Workers AI is used to seamlessly power the summarization and decontextualized query capabilities to improve responses. <i>Soon, with the</i><a href="http://blog.cloudflare.com/ai-gateway-aug-2025-refresh/"><i><u> AI Gateway and Secret Store BYO keys</u></i></a><i>, you’ll be able to connect models from any provider and select them directly in the AutoRAG dashboard.</i></p></li></ol>
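<p>The discovery step above can be pictured as a simple scan over the sitemap’s <code>&lt;loc&gt;</code> entries. The sketch below is an illustrative stand-in for that idea, not AutoRAG’s actual crawler:</p>
<pre><code>// Simplified sketch of sitemap discovery (not AutoRAG's real implementation):
// pull page URLs out of a sitemap.xml document, capped at a page limit.
function extractSitemapUrls(sitemapXml: string, limit: number): string[] {
  // Tag strings are concatenated so the literal tags render safely in HTML.
  const openTag = "<" + "loc>";
  const closeTag = "<" + "/loc>";
  const urls: string[] = [];
  let cursor = 0;
  while (urls.length < limit) {
    const start = sitemapXml.indexOf(openTag, cursor);
    if (start === -1) break;
    const end = sitemapXml.indexOf(closeTag, start);
    if (end === -1) break;
    urls.push(sitemapXml.slice(start + openTag.length, end).trim());
    cursor = end + closeTag.length;
  }
  return urls;
}</code></pre>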
    <div>
      <h2>Road to Making Websites a First-Class Data Source</h2>
      <a href="#road-to-making-websites-a-first-class-data-source">
        
      </a>
    </div>
    <p>Until now, <a href="https://developers.cloudflare.com/autorag/concepts/how-autorag-works/"><u>AutoRAG</u></a> only supported R2 as a data source. That worked well for structured files, but we needed to make a website itself a first-class data source to be indexed and searchable. Making that possible meant building website crawling into AutoRAG and strengthening the system to handle large, dynamic sources like websites.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ouTCcbipVX3s1fPgg6hEs/541a03efb4365370fee5df67cd68841f/image4.png" />
          </figure><p>Before implementing our web crawler, we needed to improve the reliability of data syncs. Previously, AutoRAG users lacked visibility into when indexing syncs ran and whether they were successful. To fix this, we introduced a Job module to track all syncs, store history, and provide logs. This required adding two new Durable Objects to AutoRAG’s architecture:</p><ul><li><p><b>JobManager</b> runs a complete sync, and its duties include queuing files, embedding content, and keeping the Vectorize database up to date. To ensure data consistency, only one JobManager can run per RAG at a time, enforced by the RagManager (a Durable Object in our existing architecture), which cancels any running jobs before starting new ones; jobs can be triggered either manually or by a scheduled sync.</p></li><li><p><b>FileManager</b> solved scalability issues we hit when Workers ran out of memory during parallel processing. Originally, a single Durable Object was responsible for handling multiple files, but with a 128MB memory limit it quickly became a bottleneck. The solution was to break the work apart: JobManager now distributes files across many FileManagers, each responsible for a single file. By processing 20 files in parallel through 20 different FileManagers, we expanded effective memory capacity from 128MB to roughly 2.5GB per batch.</p></li></ul><p>With these improvements, we were ready to build the website parser. By reusing our existing R2-based queuing logic, we added crawling with minimal disruption:</p><ol><li><p>A JobManager designated for a website crawl begins by reading the sitemaps associated with the RAG configuration.</p></li><li><p>Instead of listing objects from an R2 bucket, it queues each website link into our existing R2-based queue, using the full URL as the R2 object key.</p></li><li><p>From here, the process is nearly identical to our file-based sync. 
A FileManager picks up the job and checks if the RAG is configured for website parsing.</p></li><li><p>If it is, the FileManager crawls the link and places the page's HTML contents into the user's R2 bucket, again using the URL as the object key.</p></li></ol><p>After these steps, we index the data and serve it at query time. This approach maximized code reuse, and any improvements to our <a href="https://blog.cloudflare.com/markdown-for-agents/">HTML-to-Markdown conversion</a> now benefit both file and website-based RAGs automatically.</p>
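<p>The queueing flow above can be sketched conceptually. The real implementation lives in Durable Objects; the helper names below and the <code>put(key, value)</code> store interface are illustrative:</p>
<pre><code>// Conceptual sketch of the URL-as-object-key flow described above.
// "queue" and "bucket" stand in for R2-like stores with a put(key, value) method.

// Step 2: enqueue each discovered link, using the full URL as the object key.
async function enqueueLinks(queue: { put: Function }, links: string[]) {
  for (const link of links) {
    // The key itself identifies the page to crawl; the body can stay empty.
    await queue.put(link, "");
  }
}

// Step 4: crawl one link and store the page HTML under the same URL key.
async function crawlAndStore(bucket: { put: Function }, link: string) {
  const response = await fetch(link);
  const html = await response.text();
  await bucket.put(link, html);
}</code></pre>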
    <div>
      <h2>Get Started Today</h2>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>Getting your website ready for conversational search through NLWeb and AutoRAG is simple. Here’s how:</p><ol><li><p>In the <b>Cloudflare Dashboard</b>, navigate to <b>Compute &amp; AI &gt; AutoRAG</b>.</p></li><li><p>Select <b>Create</b> in AutoRAG, then choose the <b>NLWeb Website</b> quick deploy option.</p></li><li><p>Select the <b>domain</b> from your Cloudflare account that you want indexed.</p></li><li><p>Click <b>Start indexing</b>.</p></li></ol><p>That’s it! You can now try out your NLWeb search experience via the provided link, and test out how it will look on your site by using the embeddable snippet.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/dI9xwOKdn3jGkYKWK8NEN/e25ae13199eb09577868e421cc1fef7d/image1.png" />
          </figure><p>We’d love to hear your feedback as you experiment with this new capability, and share your thoughts with us at <a href="mailto:nlweb@cloudflare.com">nlweb@cloudflare.com</a>.</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Search Engine]]></category>
            <category><![CDATA[Microsoft]]></category>
            <category><![CDATA[Auto Rag]]></category>
            <guid isPermaLink="false">1FRpZMePLmgD9cPqJnMFKS</guid>
            <dc:creator>Catarina Pires Mota</dc:creator>
            <dc:creator>Gabriel Massadas</dc:creator>
            <dc:creator>Nelson Duarte</dc:creator>
            <dc:creator>Daniel Leal</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing AutoRAG: fully managed Retrieval-Augmented Generation on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/introducing-autorag-on-cloudflare/</link>
            <pubDate>Mon, 07 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ AutoRAG is here: fully managed Retrieval-Augmented Generation (RAG) pipelines powered by Cloudflare's global network and powerful developer ecosystem.  ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re excited to announce <b>AutoRAG </b>in open beta, a fully managed <a href="https://www.cloudflare.com/learning/ai/retrieval-augmented-generation-rag/">Retrieval-Augmented Generation (RAG)</a> pipeline powered by Cloudflare, designed to simplify how developers integrate context-aware AI into their applications. RAG is a method that improves the accuracy of AI responses by retrieving information from your own data, and providing it to the <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">large language model (LLM)</a> to generate more grounded responses.</p><p><a href="https://www.cloudflare.com/learning/ai/how-to-build-rag-pipelines/">Building a RAG pipeline</a> is a patchwork of moving parts. You have to stitch together multiple tools and services — your data storage, a <a href="https://www.cloudflare.com/learning/ai/what-is-vector-database/">vector database</a>, an embedding model, LLMs, and custom indexing, retrieval, and generation logic — all just to get started. Maintaining it is even harder. As your data changes, you have to manually reindex and regenerate <a href="https://www.cloudflare.com/learning/ai/what-are-embeddings/">embeddings</a> to keep the system relevant and performant. What should be a simple “ask a question, get a smart answer” experience becomes a brittle pipeline of glue code, fragile integrations, and constant upkeep.</p><p>AutoRAG removes that complexity. With just a few clicks, it delivers a fully-managed RAG pipeline end-to-end: from ingesting your data and automatically chunking and embedding it, to storing vectors in Cloudflare’s <a href="https://developers.cloudflare.com/vectorize/"><u>Vectorize</u></a> database, performing semantic retrieval, and generating high-quality responses using <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>. 
AutoRAG continuously monitors your data sources and indexes in the background so your AI stays fresh without manual effort. It abstracts away the mess, letting you focus on building smarter, faster applications on Cloudflare’s developer platform. Get started today in the <a href="https://dash.cloudflare.com/?to=/:account/ai/autorag"><u>Cloudflare Dashboard</u></a>!</p><div>
  
</div>
<p></p>
    <div>
      <h3>Why use RAG in the first place?</h3>
      <a href="#why-use-rag-in-the-first-place">
        
      </a>
    </div>
    <p>LLMs like Llama 3.3 from Meta are powerful, but they only know what they’ve been trained on. They often struggle to produce accurate answers when asked about new, proprietary, or domain-specific information. System prompts providing relevant information can help, but they bloat input size and are limited by context windows. Fine-tuning a model is expensive and requires ongoing retraining to keep up to date.</p><p>RAG solves this by retrieving relevant information from your data source at query time, combining it with the user’s input query, and feeding both into the LLM to generate responses grounded with your data. This makes RAG a great fit for AI-driven support bots, internal knowledge assistants, semantic search across documentation, and other use cases where the source of truth is always evolving.</p>
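<p>At its core, the retrieve-then-generate step is prompt assembly. A minimal sketch of the idea (the prompt wording below is illustrative, not what AutoRAG or any particular system uses):</p>
<pre><code>// Minimal sketch of the RAG pattern: prepend retrieved context to the user's
// question before sending it to an LLM. Prompt wording is illustrative.
function buildRagPrompt(retrievedChunks: string[], userQuery: string): string {
  const context = retrievedChunks
    .map((chunk, i) => "[" + (i + 1) + "] " + chunk)
    .join("\n");
  return "Answer the question using only the context below.\n\n" +
    "Context:\n" + context + "\n\n" +
    "Question: " + userQuery;
}</code></pre>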
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5zrM30iI2E1ZlmTQvAsx0D/fcef1f00b048a147fc3bc459895cc19c/1.png" />
          </figure>
    <div>
      <h3>What’s under the hood of AutoRAG?</h3>
      <a href="#whats-under-the-hood-of-autorag">
        
      </a>
    </div>
    <p>AutoRAG sets up a RAG pipeline for you, using the building blocks of Cloudflare’s developer platform. Instead of you having to write code to create a RAG system using <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, <a href="https://developers.cloudflare.com/vectorize/"><u>Vectorize</u></a>, and <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a>, you just create an AutoRAG instance and point it at a data source, like an <a href="https://developers.cloudflare.com/r2/"><u>R2 </u></a>storage bucket.</p><p>Behind the scenes, AutoRAG is powered by two processes: <b>indexing</b> and <b>querying</b>.</p><ul><li><p><b>Indexing</b> is an asynchronous process that runs in the background. It kicks off as soon as you create an AutoRAG, and automatically continues in cycles — reprocessing new or updated files after each previous job completes. During indexing, your content is transformed into vectors optimized for semantic search.</p></li><li><p><b>Querying</b> is a synchronous process triggered when a user sends a search request. AutoRAG takes the query, retrieves the most relevant content from your vector database, and uses it to generate a context-aware response using an LLM.</p></li></ul><p>Let’s take a closer look at how they work.</p>
    <div>
      <h4>Indexing process</h4>
      <a href="#indexing-process">
        
      </a>
    </div>
    <p>When you connect a data source, AutoRAG automatically ingests, transforms, and stores it as vectors, optimizing it for semantic search when querying:</p><ol><li><p><b>File ingestion from data source: </b>AutoRAG reads directly from your data source. Today, it supports integration with Cloudflare R2, where you can store documents like PDFs, images, text, HTML, CSV, and more for processing.
<i>Check out the </i><a href="#rag-to-riches-in-under-5-minutes"><b><i><u>RAG to riches in 5 minutes tutorial below</u></i></b></a><i> to learn how you can use Browser Rendering to parse webpages for use within your AutoRAG.</i></p></li><li><p><b>Markdown conversion:</b> AutoRAG uses <a href="https://developers.cloudflare.com/workers-ai/markdown-conversion/"><u>Workers AI’s Markdown Conversion</u></a> to convert all files into structured <a href="https://blog.cloudflare.com/markdown-for-agents/">Markdown</a>. This ensures consistency across diverse file types. For images, <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/">Workers AI</a> is used to perform object detection followed by vision-to-language transformation to convert images into Markdown text.</p></li><li><p><b>Chunking:</b> The extracted text is chunked into smaller pieces to improve retrieval granularity.</p></li><li><p><b>Embedding:</b> Each chunk is embedded using Workers AI’s embedding model to transform the content into vectors.</p></li><li><p><b>Vector storage: </b>The resulting vectors, along with metadata like source location and file name, are stored in a Vectorize database created on your account.</p></li></ol>
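<p>To make the chunking step concrete, here is an illustrative fixed-size chunker with overlap. AutoRAG manages its actual chunking strategy and sizes for you; this only demonstrates the concept:</p>
<pre><code>// Illustrative fixed-size chunker with overlap (not AutoRAG's actual strategy).
// Overlap keeps context that would otherwise be cut at a chunk boundary.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap;
  }
  return chunks;
}</code></pre>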
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UK62iIO747BOe7JgazkBP/19a65b75cc4ad6b7fba31bff301cc133/Indexing.png" />
          </figure>
    <div>
      <h4>Querying process</h4>
      <a href="#querying-process">
        
      </a>
    </div>
    <p>When an end user makes a request, AutoRAG orchestrates the following:</p><ol><li><p><b>Receive query from AutoRAG API: </b>The query workflow begins when you send a request to either the AutoRAG’s AI Search or Search endpoint.</p></li><li><p><b>Query rewriting (optional): </b>AutoRAG provides the option to rewrite the input query using one of Workers AI’s LLMs to improve retrieval quality by transforming the original query into a more effective search query.</p></li><li><p><b>Embedding the query: </b>The rewritten (or original) query is transformed into a vector via the same embedding model used to embed your data so that it can be compared against your vectorized data to find the most relevant matches.</p></li><li><p><b>Vector search in Vectorize: </b>The query vector is searched against stored vectors in the associated Vectorize database for your AutoRAG.</p></li><li><p><b>Metadata + content retrieval: </b>Vectorize returns the most relevant chunks and their metadata. And the original content is retrieved from the R2 bucket. These are passed to a text-generation model.</p></li><li><p><b>Response generation:</b> A text-generation model from Workers AI is used to generate a response using the retrieved content and the original user’s query.</p></li></ol><p>The end result is an AI-powered answer grounded in your private data — accurate, and up to date.</p>
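<p>Step 4 boils down to similarity ranking. A conceptual sketch of ranking stored vectors against a query vector by cosine similarity (Vectorize performs this search for you at scale; the record shape here is illustrative):</p>
<pre><code>// Conceptual sketch of vector search: rank stored vectors against the query
// vector by cosine similarity and keep the top k matches.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topMatches(queryVec: number[], stored: { id: string; vec: number[] }[], k: number) {
  return stored
    .map((item) => ({ id: item.id, score: cosineSimilarity(queryVec, item.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}</code></pre>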
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ueRtSqcc6BcQL27SyFMzi/30712fdf3a50115f9560f6c3e82f76db/3.png" />
          </figure>
    <div>
      <h3>RAG to riches in under 5 minutes</h3>
      <a href="#rag-to-riches-in-under-5-minutes">
        
      </a>
    </div>
    <p>Most of the time, getting started with AutoRAG is as simple as pointing it to an existing R2 bucket — just drop in your content, and you're ready to go. But what if your content isn’t already in a bucket? What if it’s still on a webpage or needs to first be rendered dynamically by a frontend UI? You're in luck, because with the <a href="https://developers.cloudflare.com/browser-rendering/"><b><u>Browser Rendering API</u></b></a>, you can crawl your own websites to gather information that powers your RAG. The Browser Rendering REST API is now<b> generally available</b>, offering endpoints for common browser actions including extracting HTML content, capturing screenshots, and generating PDFs. Additionally, a crawl endpoint is coming soon, making it even easier to ingest websites.</p><p>In this walkthrough, we’ll show you how to take your website and feed it into AutoRAG for Q&amp;A. We’ll use a Cloudflare Worker to render web pages in a headless browser, upload the content to R2, and hook that into AutoRAG for semantic search and generation.</p>
    <div>
      <h4>Step 1. Create a Worker to fetch webpages and upload into R2</h4>
      <a href="#step-1-create-a-worker-to-fetch-webpages-and-upload-into-r2">
        
      </a>
    </div>
    <p>We’ll create a Cloudflare Worker that uses Puppeteer to visit your URL, render it, and store the full HTML in your R2 bucket. If you already have an R2 bucket with content you’d like to build a RAG for, you can skip this step.</p><ol><li><p>Create a new Worker project named <code>browser-r2-worker</code> by running:</p></li></ol>
            <pre><code>npm create cloudflare@latest -- browser-r2-worker</code></pre>
            <p>For setup, select the following options:</p><ul><li><p><i>What would you like to start with?</i> Choose Hello World Starter.</p></li><li><p><i>Which template would you like to use?</i> Choose Worker only.</p></li><li><p><i>Which language do you want to use? </i>Choose TypeScript.</p></li></ul><p>
2. Install <code>@cloudflare/puppeteer</code>, which allows you to control the Browser Rendering instance:</p>
            <pre><code>npm i @cloudflare/puppeteer</code></pre>
            <p>3. Create a new R2 bucket named <code>html-bucket</code> by running: </p>
            <pre><code>npx wrangler r2 bucket create html-bucket</code></pre>
            <p>4. Add the following configurations to your Wrangler configuration file, so your Worker can use browser rendering and your new R2 bucket:</p>
            <pre><code>{
	"compatibility_flags": ["nodejs_compat"],
	"browser": {
		"binding": "MY_BROWSER"
	},
	"r2_buckets": [
		{
			"binding": "HTML_BUCKET",
			"bucket_name": "html-bucket"
		}
	]
}</code></pre>
            <p>5. Replace the contents of <code>src/index.ts</code> with the following skeleton script:</p>
            <pre><code>import puppeteer from "@cloudflare/puppeteer";

// Define our environment bindings
interface Env {
	MY_BROWSER: any;
	HTML_BUCKET: R2Bucket;
}

// Define request body structure
interface RequestBody {
	url: string;
}

export default {
	async fetch(request: Request, env: Env): Promise&lt;Response&gt; {
		// Only accept POST requests
		if (request.method !== 'POST') {
			return new Response('Please send a POST request with a target URL', { status: 405 });
		}

		// Get URL from request body
		const body = await request.json() as RequestBody;
		// Note: Only use this parser for websites you own
		const targetUrl = new URL(body.url); 

		// Launch browser and create new page
		const browser = await puppeteer.launch(env.MY_BROWSER);
		const page = await browser.newPage();

		// Navigate to the page and fetch its html
		await page.goto(targetUrl.href);
		const htmlPage = await page.content();

		// Create filename and store in R2
		const key = targetUrl.hostname + '_' + Date.now() + '.html';
		await env.HTML_BUCKET.put(key, htmlPage);

		// Close browser
		await browser.close();

		// Return success response
		return new Response(JSON.stringify({
			success: true,
			message: 'Page rendered and stored successfully',
			key: key
		}), {
			headers: { 'Content-Type': 'application/json' }
		});
	}
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>6. Once the code is ready, you can deploy it to your Cloudflare account by running:</p>
            <pre><code>npx wrangler deploy</code></pre>
            <p>7. To test your Worker, you can use the following cURL request to fetch the HTML file of a page. In this example, we fetch this blog page and upload it into the <code>html-bucket</code> bucket:</p>
            <pre><code>curl -X POST https://browser-r2-worker.&lt;YOUR_SUBDOMAIN&gt;.workers.dev \
-H "Content-Type: application/json" \
-d '{"url": "https://blog.cloudflare.com/introducing-autorag-on-cloudflare"}'</code></pre>
            
    <div>
      <h4>Step 2. Create your AutoRAG and monitor the indexing</h4>
      <a href="#step-2-create-your-autorag-and-monitor-the-indexing">
        
      </a>
    </div>
    <p>Now that you have created your R2 bucket and filled it with your content that you’d like to query from, you are ready to create an AutoRAG instance:</p><ol><li><p>In your <a href="https://dash.cloudflare.com/?to=/:account/ai/autorag"><u>Cloudflare dashboard</u></a>, navigate to AI &gt; AutoRAG</p></li><li><p>Select Create AutoRAG and complete the setup process:</p><ol><li><p><b>Select the R2 bucket</b> which contains your knowledge base, in this case, select the <code>html-bucket</code>.</p></li><li><p><b>Select an embedding model </b>used to convert your data to vector representation. It is recommended to use the Default.</p></li><li><p><b>Select an LLM </b>to use to generate your responses. It is recommended to use the Default.</p></li><li><p><b>Select or create an AI Gateway</b> to monitor and control your model usage.</p></li><li><p><b>Name your AutoRAG</b> as <code>my-rag</code>.</p></li><li><p><b>Select or create a Service API token</b> to grant AutoRAG access to create and access resources in your account.</p></li></ol></li><li><p>Select Create to spin up your AutoRAG.</p></li></ol><p>Once you’ve created your AutoRAG, it will automatically create a Vectorize database in your account and begin indexing the data. You can view the progress of your indexing job in the Overview page of your AutoRAG. The indexing time may vary depending on the number and type of files you have in your data source.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qgy5VRqvKjBhdmSZ4riEE/e7dc59a4c615838704d9ec323bfdabfa/4.png" />
          </figure>
    <div>
      <h4>Step 3. Test and add to your application</h4>
      <a href="#step-3-test-and-add-to-your-application">
        
      </a>
    </div>
    <p>Once AutoRAG finishes indexing your content, you’re ready to start asking it questions. You can open up your AutoRAG instance, navigate to the Playground tab, and ask a question based on your uploaded content, like “What is AutoRAG?”.</p><p>Once you’re happy with the results in the Playground, you can integrate AutoRAG directly into the application that you are building. If you are using a Worker to build your application, then you can use the AI binding to directly call your AutoRAG: </p>
            <pre><code>{
  "ai": {
    "binding": "AI"
  }
}</code></pre>
            <p>Then, query your AutoRAG instance from your Worker code by calling the <code>aiSearch()</code> method. Alternatively, you can use the <code>search()</code> method to get a list of retrieved results without an AI-generated response.</p>
            <pre><code>const answer = await env.AI.autorag('my-rag').aiSearch({
   query: 'What is AutoRAG?'
});</code></pre>
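            <p>The retrieval-only call takes the same request shape. A small sketch wrapping it in a helper (the helper name is illustrative, and the exact response fields should be confirmed in the AutoRAG docs):</p>
            <pre><code>// Sketch: retrieval-only search through the same AI binding. Returns matched
// chunks and metadata without a generated answer; response fields may vary.
async function retrieveOnly(env: { AI: any }, question: string) {
  return await env.AI.autorag('my-rag').search({
    query: question
  });
}</code></pre>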
            <p>For more information on how to add AutoRAG into your application, go to your AutoRAG then navigate to Use AutoRAG for more instructions.</p>
    <div>
      <h3>Start building today</h3>
      <a href="#start-building-today">
        
      </a>
    </div>
    <p>During the open beta, AutoRAG is <b>free to enable</b>. AutoRAG is built entirely on top of <a href="https://www.cloudflare.com/developer-platform/products/">Cloudflare’s Developer Platform</a>, using the same tools you’d reach for if you were building a RAG pipeline yourself. When you create an AutoRAG instance, it provisions and runs on top of Cloudflare resources within your own account. These resources are <b>billed as part of your Cloudflare usage</b>, and include:</p><ul><li><p><a href="https://developers.cloudflare.com/r2/"><b><u>R2</u></b></a><b>: </b>stores your source data.</p></li><li><p><a href="https://developers.cloudflare.com/vectorize/"><b><u>Vectorize</u></b></a><b>:</b> stores vector embeddings and powers semantic retrieval.</p></li><li><p><a href="https://developers.cloudflare.com/workers-ai/"><b><u>Workers AI</u></b></a><b>: </b>converts images to Markdown, generates embeddings, rewrites queries, and generates responses.</p></li><li><p><a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a><b>:</b> tracks and controls your model’s usage.</p></li></ul><p>To help manage resources during the beta, each account is limited to <b>10 AutoRAG</b> instances, with up to <b>100,000 files</b> <b>per AutoRAG</b>. </p>
    <div>
      <h3>What’s on the roadmap?</h3>
      <a href="#whats-on-the-roadmap">
        
      </a>
    </div>
    <p>We’re just getting started with AutoRAG, and we have more planned throughout 2025 to make it more powerful and flexible. Here are a few things we’re actively working on:</p><ul><li><p><b>More data source integrations:</b> We’re expanding beyond R2, with support for new input types like direct website URL parsing (powered by browser rendering) and structured data sources like Cloudflare <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>.</p></li><li><p><b>Smarter, higher-quality responses: </b>We’re exploring built-in reranking, recursive chunking, and other processing techniques to improve the quality and relevance of generated answers.</p></li></ul><p>These features will roll out incrementally, and we’d love your feedback as we shape what’s next. AutoRAG is built to evolve with your use cases so stay tuned.</p>
    <div>
      <h3>Try it out today!</h3>
      <a href="#try-it-out-today">
        
      </a>
    </div>
    <p>Get started with AutoRAG today by visiting the <a href="https://dash.cloudflare.com/?to=/:account/ai/autorag"><u>Cloudflare Dashboard</u></a>, navigating to AI &gt; AutoRAG, and selecting Create AutoRAG. Whether you’re building an AI-powered search experience, an internal knowledge assistant, or just experimenting with LLMs, AutoRAG gives you a fast and flexible way to get started with RAG on Cloudflare’s global network. For more details, refer to the <a href="https://developers.cloudflare.com/autorag"><u>Developer Docs</u></a>. Also, try out the <a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Rendering API</u></a>, which is now generally available for your browser action needs.</p><p>We’re excited to see what you build, and we’re here to help. Have questions or feedback? Join the conversation on the <a href="https://discord.com/channels/595317990191398933/1356674457355423895"><u>Cloudflare Developers Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Auto Rag]]></category>
            <category><![CDATA[Browser Rendering]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">2JjYf004DWQEOykF435HA3</guid>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Builder Day 2024: 18 big updates to the Workers platform]]></title>
            <link>https://blog.cloudflare.com/builder-day-2024-announcements/</link>
            <pubDate>Thu, 26 Sep 2024 21:00:00 GMT</pubDate>
            <description><![CDATA[ To celebrate Builder Day 2024, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. This includes new capabilities, like running evals with AI Gateway, beta  ]]></description>
            <content:encoded><![CDATA[ <p>To celebrate <a href="https://builderday.pages.dev/"><u>Builder Day 2024</u></a>, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. Choosing a platform isn't just about current technologies and services — it's about betting on a partner that will evolve with your needs as your project grows and the tech landscape shifts. We’re in it for the long haul with you.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p><b>Starting today, you can:</b></p><ul><li><p><a href="#logs-for-every-worker">Persist logs from your Worker and query them directly on the Cloudflare dashboard</a></p></li><li><p><a href="#connect-to-private-databases-from-workers">Connect your Worker to private databases (isolated in VPCs) using Hyperdrive</a></p></li><li><p><a href="#improved-node.js-compatibility-is-now-ga">Use a wider set of NPM packages on Cloudflare Workers, via improved Node.js compatibility</a></p></li><li><p><a href="#cloudflare-joins-opennext">Deploy Next.js apps that use the Node.js runtime to Cloudflare, via OpenNext</a></p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/">Run Evals with AI Gateway, now in Open Beta</a></p></li><li><p><a href="https://blog.cloudflare.com/sqlite-in-durable-objects">Read from and write to SQLite with zero-latency from every Durable Object</a></p></li></ul><p><b>We’ve brought key features from </b><a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><b><u>Pages to Workers</u></b></a><b>, allowing you to: </b></p><ul><li><p><a href="#static-asset-hosting">Upload and serve static assets as part of your Worker, and use popular frameworks with Workers</a></p></li><li><p><a href="#continuous-integration-and-delivery">Automatically build and deploy each pull request to your Worker’s git repository</a></p></li><li><p><a href="#workers-preview-urls">Get back a preview deployment URL for each version of your Worker</a></p></li></ul><p><b>Four things are going GA and are officially production-ready:</b></p><ul><li><p><a href="#gradual-deployments">Gradual Deployments</a>: Deploy changes to your Worker gradually, on a percentage basis of traffic</p></li><li><p><a href="#queues-is-ga">Cloudflare Queues</a><b>:</b> Now with much higher throughput and concurrency limits</p></li><li><p><a href="#event-notifications-for-r2-is-now-ga">R2 Event Notifications</a><b>:</b> Tightly integrated with 
Queues for event-driven applications</p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/">Vectorize</a>: Globally distributed vector database, now faster, with larger indexes, and new pricing</p></li></ul><p><b>The Workers platform is getting faster:</b></p><ul><li><p><a href="https://blog.cloudflare.com/faster-workers-kv">We made Workers KV up to 3x faster.</a> Which makes serving static assets from Workers and Pages faster!</p></li><li><p><a href="https://blog.cloudflare.com/making-workers-ai-faster/ ">Workers AI now has much faster Time-to-First-Token (TTFT)</a>, backed by more powerful GPUs</p></li></ul><p><b>And we’re lowering the cost of building on Cloudflare:</b></p><ul><li><p><a href="#removing-serverless-microservices-tax">Requests made through Service Bindings and to Tail Workers are now free</a></p></li><li><p><a href="#image-optimization-free-for-everyone">Cloudflare Images is introducing a free tier for everyone with a Cloudflare account</a></p></li><li><p>We’ve <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster">simplified Workers AI pricing</a> to use industry standard units of measure</p></li></ul><p>Everything in this post is available for you to use today. Keep reading to learn more, and watch the <a href="https://cloudflare.tv/event/builder-day-live-stream/xvm4qdgm"><u>Builder Day Live Stream</u></a> for demos and more.</p><h2>Persistent Logs for every Worker</h2><p>Starting today in open beta, you can automatically retain logs from your Worker, with full search, query, and filtering capabilities available directly within the Cloudflare dashboard. All newly created Workers will have this setting automatically enabled. 
This marks the first step in the development of our observability platform, following <a href="https://blog.cloudflare.com/cloudflare-acquires-baselime-expands-observability-capabilities/"><u>Cloudflare’s acquisition of Baselime</u></a>.</p><p>Getting started is easy – just add two lines to your Worker’s wrangler.toml and redeploy:</p>
            <pre><code>[observability]
enabled = true
</code></pre>
            <p>Workers Logs allows you to view all logs emitted from your Worker. When enabled, each <code>console.log</code> message, error, and exception is published as a separate event. Every Worker invocation (e.g. requests, alarms, RPC) also publishes an enriched execution log that contains invocation metadata. You can view logs in the <code>Logs</code> tab of your Worker in the dashboard, where you can filter on any event field, such as time, error code, message, or your own custom field.</p>
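<p>As a sketch of what gets captured, the hypothetical Worker below logs a structured object per request. With observability enabled, each <code>console.log</code> call becomes its own queryable event; the field names (<code>route</code>, <code>method</code>) are illustrative, not required by the platform:</p>

```javascript
// Minimal sketch: each console.log below is published to Workers Logs
// as a separate event once [observability] is enabled.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    try {
      // Structured fields become filterable columns in the Logs tab.
      console.log({ route: url.pathname, method: request.method });
      return new Response("ok");
    } catch (err) {
      // Errors are also captured as separate events.
      console.error({ route: url.pathname, error: String(err) });
      throw err;
    }
  },
};

export default worker;
```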
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rPKtYlXEgN1u8utUuXxJR/c2fc4dcff2a7574d8ad9f92edbe867fe/image2.png" />
          </figure><p>If you’ve ever had to piece together the puzzle of unusual metrics, such as a spike in errors or latency, you know how frustrating it is to connect metrics to traces and logs that often live in independent data silos. Workers Logs is the first piece of a new observability platform we are building that helps you easily correlate telemetry data, and surfaces insights to help you <i>understand</i>. We’ll structure your telemetry data so you have the full context to ask the right questions, and can quickly and easily analyze the behavior of your applications. This is just the beginning for observability tools for Workers. We are already working on automatically emitting distributed traces from Workers, with real-time errors and wide, high-dimensionality events coming soon as well. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XiRuQjqzVEld2eCIVVHPh/7c8938479e1f254699487dfe23caade4/Screenshot_2024-09-25_at_3.06.00_PM.png" />
          </figure><p>Starting November 1, 2024, Workers Logs will cost $0.60 per million log lines written after the included volume, as shown in the table below. Querying your logs is free. This makes it easy to estimate and forecast your costs — we think you shouldn’t have to calculate the number of ‘Gigabytes Ingested’ to understand what you’ll pay.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Included Volume</span></td>
    <td><span>200,000 logs per day</span></td>
    <td><span>20,000,000 logs per month</span></td>
  </tr>
  <tr>
    <td><span>Additional Events</span></td>
    <td><span>N/A</span></td>
    <td><span>$0.60 per million logs</span></td>
  </tr>
  <tr>
    <td><span>Retention</span></td>
    <td><span>3 days</span></td>
    <td><span>7 days</span></td>
  </tr>
</tbody></table></div><p>Try out Workers Logs today. You can learn more from our <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>developer documentation</u></a>, and give us feedback directly in the #workers-observability channel on <a href="https://discord.cloudflare.com/"><u>Discord</u></a>.</p><h2>Connect to private databases from Workers</h2><p>Starting today, you can now use <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>, <a href="https://www.cloudflare.com/en-ca/products/tunnel/"><u>Cloudflare Tunnels</u></a> and <a href="https://www.cloudflare.com/zero-trust/products/access/"><u>Access</u></a> together to securely connect to databases that are isolated in a private network. </p><p><a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> enables you to build on Workers with your existing regional databases. It accelerates database queries using Cloudflare’s network, caching data close to end users and pooling connections close to the database. But there’s been a major blocker preventing you from building with Hyperdrive: network isolation.</p><p>The majority of databases today aren’t publicly accessible on the Internet. Data is highly sensitive and placing databases within private networks like a <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud/"><u>virtual private cloud (VPC)</u></a> keeps data secure. But to date, that has also meant that your data is held captive within your cloud provider, preventing you from building on Workers. </p><p>Today, we’re enabling Hyperdrive to <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database/"><u>securely connect to private databases</u></a> using <a href="https://www.cloudflare.com/en-ca/products/tunnel/"><u>Cloudflare Tunnels</u></a> and <a href="https://www.cloudflare.com/zero-trust/products/access/"><u>Cloudflare Access</u></a>. 
With a Cloudflare Tunnel running in your private network, Hyperdrive can securely connect to your database and start speeding up your queries.</p>
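<p>As a sketch, once the Tunnel and Access policy are in place, you create a Hyperdrive configuration and bind it to your Worker in <code>wrangler.toml</code>. The binding name and placeholder ID below are illustrative:</p>

```toml
# Hypothetical Hyperdrive binding in wrangler.toml; replace the id
# with the one returned when you create your Hyperdrive configuration.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"
```

<p>Inside the Worker, <code>env.HYPERDRIVE.connectionString</code> can then be handed to a standard Postgres driver, with Hyperdrive handling pooling and caching behind the scenes.</p>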
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ozsfXdsWFJlfRhhulMClT/61ec772a843880370e81eeec190000fa/BLOG-2517_4.png" />
          </figure><p>With this update, Hyperdrive makes it possible for you to build full-stack applications on Workers with your existing databases, network-isolated or not. Whether you’re using <a href="https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/"><u>Amazon RDS</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/"><u>Amazon Aurora</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/examples/google-cloud-sql/"><u>Google Cloud SQL</u></a>, <a href="https://azure.microsoft.com/en-gb/products/category/databases"><u>Azure Database</u></a>, or any other provider, Hyperdrive can connect to your databases and optimize your database connections to provide the fast performance you’ve come to expect with building on Workers.</p><h2>Improved Node.js compatibility is now GA</h2><p>Earlier this month, we <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>overhauled our support for Node.js APIs in the Workers runtime</u></a>. With <a href="https://workers-nodejs-compat-matrix.pages.dev/"><u>twice as many Node APIs</u></a> now supported on Workers, you can now use a wider set of NPM packages to build a broader range of applications. Today, we’re happy to announce that improved Node.js compatibility is GA.</p><p>To give it a try, enable the nodejs_compat compatibility flag, and set your compatibility date to on or after 2024-09-23:</p>
            <pre><code>compatibility_flags = ["nodejs_compat"]
compatibility_date = "2024-09-23"
</code></pre>
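<p>For instance, with the flag set, a Worker can import Node.js built-ins directly. The helper below is a hypothetical example that derives a weak ETag with <code>node:crypto</code>; it is plain Node.js code that now also runs on Workers:</p>

```javascript
// With nodejs_compat enabled, node: built-ins such as node:crypto
// work in Workers as they do in Node.js. This helper is illustrative.
import { createHash } from "node:crypto";

export function weakEtag(body) {
  // Hash the body and keep a short prefix as a weak ETag value.
  const digest = createHash("sha256").update(body).digest("hex");
  return `W/"${digest.slice(0, 16)}"`;
}
```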
            <p>Read the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>developer documentation</u></a> to learn more about how to opt in your Workers to try it today. If you encounter any bugs or want to report feedback, <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>open an issue</u></a>.</p><h2>Build frontend applications on Workers with Static Asset Hosting</h2><p>Starting today in open beta, you can now upload and serve HTML, CSS, and client-side JavaScript directly as part of your Worker. This means you can build dynamic, server-side rendered applications on Workers using popular frameworks such as Astro, Remix, Next.js and Svelte (full list <a href="https://developers.cloudflare.com/workers/frameworks"><u>here</u></a>), with more coming soon.</p><p>You can now deploy applications to Workers that previously could only be deployed to Cloudflare Pages and use features that are not yet supported in Pages, including <a href="https://developers.cloudflare.com/workers/observability/logging/logpush/"><u>Logpush</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/#_top"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/workers/configuration/cron-triggers/"><u>Cron Triggers</u></a>, <a href="https://developers.cloudflare.com/queues/configuration/configure-queues/#consumer"><u>Queue Consumers</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/"><u>Gradual Deployments</u></a>. </p><p>To get started, create a new project with <a href="https://developers.cloudflare.com/workers/frameworks"><u>create-cloudflare</u></a>. For example, to create a new Astro project:  </p>
            <pre><code>npm create cloudflare@latest -- my-astro-app --framework=astro --experimental
</code></pre>
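<p>Under the hood, frameworks configure static assets through your Worker's <code>wrangler.toml</code>. A minimal hand-written sketch looks like the following; the <code>./dist</code> path is an assumption, so substitute your own build output directory:</p>

```toml
# Minimal sketch: serve the files in ./dist alongside your Worker code.
# The directory path is illustrative; use your framework's build output.
[assets]
directory = "./dist"
```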
            <p>Visit our <a href="https://developers.cloudflare.com/workers/static-assets/"><u>developer documentation</u></a> to learn more about setting up a new front-end application on Workers and watch a <a href="https://youtu.be/W45MIi_t_go"><u>quick demo</u></a> to learn how you can deploy an existing application to Workers. Static assets aren’t just for Workers written in JavaScript! You can serve static assets from <a href="https://developers.cloudflare.com/workers/languages/python/"><u>Workers written in Python</u></a> or even <a href="https://github.com/cloudflare/workers-rs/tree/main/templates/leptos/README.md"><u>deploy a Leptos app using workers-rs</u></a>.</p><p>If you’re wondering “<i>What about Pages?” </i>— rest assured, Pages will remain fully supported. We’ve heard from developers that as we’ve added new features to Workers and Pages, the choice of which product to use has become challenging. We’re closing this gap by bringing asset hosting, CI/CD and Preview URLs to Workers this Birthday Week.</p><p>To make the upfront choice between Cloudflare Workers and Pages more transparent, we’ve created a <a href="https://developers.cloudflare.com/workers/static-assets/compatibility-matrix/"><u>compatibility matrix</u></a>. 
Looking ahead, we plan to bridge the remaining gaps between Workers and Pages and provide ways to migrate your Pages projects to Workers.</p><h2>Cloudflare joins OpenNext to deploy Next.js apps to Workers</h2><p>Starting today, as an early developer preview, you can use <a href="https://opennext.js.org//cloudflare"><u>OpenNext</u></a> to deploy Next.js apps to Cloudflare Workers via <a href="https://npmjs.org/@opennextjs/cloudflare"><u>@opennextjs/cloudflare</u></a>, a new npm package that lets you use the <a href="https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes"><u>Node.js “runtime” in Next.js</u></a> on Workers.</p><p>This new adapter is powered by our <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>new Node.js compatibility layer</u></a>, newly introduced <a href="#static-asset-hosting"><u>Static Assets for Workers</u></a>, and Workers KV, which is <a href="https://blog.cloudflare.com/faster-workers-kv"><u>now up to 3x faster</u></a>. It unlocks support for <a href="https://nextjs.org/docs/app/building-your-application/data-fetching/incremental-static-regeneration"><u>Incremental Static Regeneration (ISR)</u></a>, <a href="https://nextjs.org/docs/pages/building-your-application/routing/custom-error"><u>custom error pages</u></a>, and other Next.js features that our previous adapter, <a href="https://developers.cloudflare.com/pages/framework-guides/nextjs/ssr/get-started/"><u>@cloudflare/next-on-pages</u></a>, could not support, as it was only compatible with the Edge “runtime” in Next.js.</p><p><a href="https://blog.cloudflare.com/aws-egregious-egress/"><u>Cloud providers shouldn’t lock you in</u></a>. Like cloud compute and storage, open source frameworks should be portable — you should be able to deploy them to different cloud providers. 
The goal of the OpenNext project is to make sure you can deploy Next.js apps to any cloud platform, originally to AWS, and now Cloudflare. We’re excited to contribute to the OpenNext community, and give developers the freedom to run on the cloud that fits their application’s needs (and <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>budget</u></a>) best.</p><p>To get started, read the <a href="https://opennext.js.org//cloudflare/get-started"><u>OpenNext docs</u></a>, which provide examples and a guide on how to add <a href="https://npmjs.org/@opennextjs/cloudflare"><u>@opennextjs/cloudflare</u></a> to your Next.js app.</p><p>We want your feedback! Report issues and contribute code at <a href="https://github.com/opennextjs/opennextjs-cloudflare/"><u>opennextjs/opennextjs-cloudflare on GitHub</u></a>, and join the discussion on the <a href="https://discord.gg/WUNsBM69"><u>OpenNext Discord</u></a>.</p><p>You can also scaffold a new Next.js project preconfigured for Workers with a single command:</p>
            <pre><code>npm create cloudflare@latest -- my-next-app --framework=next --experimental
</code></pre>
            <h2>Continuous Integration &amp; Delivery (CI/CD) with Workers Builds</h2><p>Now in open beta, you can connect a GitHub or GitLab repository to a Worker, and Cloudflare will automatically build and deploy your changes each time you push a commit. Workers Builds provides an integrated CI/CD workflow you can use to build and deploy everything from full-stack applications built with the most popular frameworks to simple static websites. Just add your build command and let Workers Builds take care of the rest. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K9izEbBxIlA0nXbNKJ1Od/55ecf9e56ecbc33aeb88df7ede1afddc/BLOG-2517_5.png" />
          </figure><p>While in open beta, Workers Builds is free to use, with a limit of one concurrent build per account, and unlimited build minutes per month. Once Workers Builds is Generally Available in early 2025, you will be billed based on the number of build minutes you use each month, and have a higher number of concurrent builds.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Build minutes, </span><span>open beta</span></td>
    <td><span>Unlimited</span></td>
    <td><span>Unlimited</span></td>
  </tr>
  <tr>
    <td><span>Concurrent builds, </span><span>open beta</span></td>
    <td><span>1</span></td>
    <td><span>1</span></td>
  </tr>
  <tr>
    <td><span>Build minutes, </span><span>general availability</span></td>
    <td><span>3,000 minutes included per month</span></td>
    <td><span>6,000 minutes included per month </span><br /><span>+$0.005 per additional build minute</span></td>
  </tr>
  <tr>
    <td><span>Concurrent builds, </span><span>general availability</span></td>
    <td><span>1</span></td>
    <td><span>6</span></td>
  </tr>
</tbody></table></div><p><a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Read the docs</u></a> to learn more about how to deploy your first project with Workers Builds.</p><h2>Workers preview URLs</h2><p>Each newly uploaded version of a Worker now automatically generates a preview URL. Preview URLs make it easier for you to collaborate with your team during development, and can be used to test and identify issues in a preview environment before they are deployed to production.</p><p>When you upload a version of your Worker via the Wrangler CLI, Wrangler will display the preview URL once your upload succeeds. You can also find preview URLs for each version of your Worker in the Cloudflare dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29iDm0x8QQex5ryatk23e1/ecfdba5b98b6e0c22350087a6035442d/BLOG-2517_6.png" />
          </figure><p>Preview URLs for Workers are similar to Pages <a href="https://developers.cloudflare.com/pages/configuration/preview-deployments/"><u>preview deployments</u></a> — they run on your Worker’s <code>workers.dev</code> subdomain and allow you to view changes applied on a new version of your application before the changes are deployed.</p><p>Learn more about preview URLs by visiting our <a href="https://developers.cloudflare.com/workers/configuration/previews"><u>developer documentation</u></a>. </p><h2>Safely release to production with Gradual Deployments</h2><p>At Developer Week, we launched <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#_top"><u>Gradual Deployments</u></a> for Workers and Durable Objects to make it safer and easier to deploy changes to your applications. Gradual Deployments is now GA — we have been using it ourselves at Cloudflare for mission-critical services built on Workers since early 2024.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FOHnaYqTyhuJRZVdERWdh/52df3d29622ccca9118d1cb49de19ae8/BLOG-2517_7.png" />
          </figure><p>Gradual deployments can help you stay on top of availability SLAs and minimize application downtime by surfacing issues early. Internally at Cloudflare, every single service built on Workers uses gradual deployments to roll out new changes. Each new version gets released in stages: 0.05%, 0.5%, 3%, 10%, 25%, 50%, 75%, and 100%, with time to soak between each stage. Throughout the roll-out, we keep an eye on metrics (which are often instrumented with <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a>!) and we <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/"><u>roll back</u></a> if we encounter issues. </p><p>Using gradual deployments is as simple as swapping out the <a href="https://developers.cloudflare.com/workers/wrangler/commands/#versions"><u>wrangler commands</u></a>, <a href="https://developers.cloudflare.com/api/operations/worker-versions-upload-version"><u>API endpoints</u></a>, and/or using “Save version” in the code editor that is built into the Workers dashboard. Read the <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/"><u>developer documentation</u></a> to learn more and get started. </p><h2>Queues is GA, with higher throughput and concurrency limits</h2><p><a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queues</u></a> is now generally available with higher limits. </p><p>Queues let a developer decouple their Workers into event-driven services. <i>Producer </i>Workers write events to a Queue, and <i>consumer </i>Workers are invoked to take actions on the events. For example, you can use a Queue to decouple an e-commerce website from a service which sends purchase confirmation emails to users.</p>
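<p>Sketching that e-commerce example: a producer Worker enqueues the purchase event instead of sending the confirmation email inline. The binding name <code>PURCHASES</code> and the message shape are illustrative, not prescribed:</p>

```javascript
// Hypothetical producer Worker: write the purchase event to a queue
// binding; a consumer Worker (or HTTP pull consumer) handles it later.
const producer = {
  async fetch(request, env) {
    const order = await request.json();
    // env.PURCHASES is a queue producer binding configured in wrangler.toml.
    await env.PURCHASES.send({ orderId: order.id, email: order.email });
    return new Response("accepted", { status: 202 });
  },
};

export default producer;
```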
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cpkxgIQSYLrwbfhDSL2A5/97818131c1f4f7d2e8b8d76dcc8c7f9a/BLOG-2517_8.png" />
          </figure><p>Throughput and concurrency limits for Queues are now significantly higher, which means you can push more messages through a Queue, and consume them faster.</p><ul><li><p><b>Throughput:</b> Each queue can now process 5000 messages per second (previously 400 per second).</p></li><li><p><b>Concurrency:</b> Each queue can now have up to 250 <a href="https://developers.cloudflare.com/queues/configuration/consumer-concurrency/"><u>concurrent consumers</u></a> (previously 20 concurrent consumers). </p></li></ul><p>Since we <a href="https://blog.cloudflare.com/introducing-cloudflare-queues/"><u>announced Queues in beta</u></a>, we’ve added the following functionality:</p><ul><li><p><a href="https://developers.cloudflare.com/queues/configuration/batching-retries/#batching"><u>Batch sizes can be customized</u></a>, to reduce the number of consumer Worker invocations and thus reduce cost.</p></li><li><p><a href="https://developers.cloudflare.com/queues/configuration/batching-retries/#delay-messages"><u>Individual messages can be delayed</u></a>, so you can back off due to external API rate limits.</p></li><li><p><a href="https://developers.cloudflare.com/queues/configuration/pull-consumers/"><u>HTTP Pull consumers</u></a> allow messages to be consumed outside Workers, with zero data egress costs.</p></li></ul><p>Queues can be used by any developer on a Workers Paid plan. Head over to our <a href="https://developers.cloudflare.com/queues/get-started/"><u>getting started</u><i><u> </u></i><u>guide</u></a> to start building with Queues.</p><h2>Event notifications for R2 is now GA</h2><p>We’re excited to announce that event notifications for R2 is now generally available. Whether it’s kicking off image processing after a user uploads a file or triggering a sync to an external data warehouse when new analytics data is generated, many applications need to be able to reliably respond when events happen. 
<a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#event-notifications-open-beta"><u>Event notifications</u></a> for <a href="https://developers.cloudflare.com/r2/"><u>Cloudflare R2</u></a> give you the ability to build event-driven applications and workflows that react to changes in your data.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73t1PtQg576iv7m95HHjGL/26cd028004f5b669e41a89a7265c5a14/BLOG-2517_9.png" />
          </figure><p>Here’s how it works: When data in your R2 bucket changes, event notifications are sent to your queue. You can consume these notifications with a <a href="https://developers.cloudflare.com/queues/reference/how-queues-works/#create-a-consumer-worker"><u>consumer Worker </u></a>or <a href="https://developers.cloudflare.com/queues/configuration/pull-consumers/"><u>pull them over HTTP</u></a> from outside of Cloudflare Workers.</p>
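<p>A consumer Worker receiving those notifications might look like the sketch below; the fields shown on <code>msg.body</code> are illustrative of a notification payload, so check the actual message shape in your queue:</p>

```javascript
// Hypothetical consumer Worker: receive R2 event notifications from the
// queue and react to object changes. Field names here are illustrative.
const consumer = {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      const event = msg.body; // e.g. { action: "PutObject", object: { key: "..." } }
      console.log(`bucket event: ${event.action} ${event.object?.key}`);
      msg.ack(); // acknowledge so the message is not redelivered
    }
  },
};

export default consumer;
```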
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NSN5r40rmXy0FMGOKvdAd/7d0c0637ccc478881528339304942948/BLOG-2517_10.png" />
          </figure><p>Since we introduced event notifications in <a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#event-notifications-open-beta"><u>open beta</u></a> earlier this year, we’ve made significant improvements based on your feedback:</p><ul><li><p>We increased reliability of event notifications with throughput improvements from Queues. R2 event notifications can now scale to thousands of writes per second.</p></li><li><p>You can now configure event notifications directly from the Cloudflare dashboard (in addition to <a href="https://developers.cloudflare.com/workers/wrangler/commands/#notification-create"><u>Wrangler</u></a>).</p></li><li><p>There is now support for receiving notifications triggered by <a href="https://developers.cloudflare.com/r2/buckets/object-lifecycles/"><u>object lifecycle deletes</u></a>.</p></li><li><p>You can now set up multiple notification rules for a single queue on a bucket.</p></li></ul><p>Visit <a href="https://developers.cloudflare.com/r2/buckets/event-notifications/"><u>our documentation</u></a> to learn about how to set up event notifications for your R2 buckets.</p><h2>Removing the serverless microservices tax: No more request fees for Service Bindings and Tail Workers</h2><p>Earlier this year, we quietly changed Workers pricing to lower your costs. As of July 2024, you are no longer charged for requests between Workers on your account made via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Service Bindings</u></a>, or for invocations of <a href="https://developers.cloudflare.com/workers/observability/logging/tail-workers/"><u>Tail Workers.</u></a> For example, let’s say you have the following chain of Workers: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PTgWu9XiGJNoWrHTduQdB/84e1f6ee0788f99684440a9db7b4e6c1/BLOG-2517_11.png" />
          </figure><p>Each request from a client results in three Workers invocations. Previously, we charged you for each of these invocations, plus the CPU time for each of these Workers. With this change, we only charge you for the first request from the client, plus the CPU time used by each Worker.</p><p>This eliminates the additional cost of breaking a monolithic serverless app into microservices. In 2023, we introduced new <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>pricing based on CPU time</u></a>, rather than duration, so you don’t have to worry about being billed for time spent waiting on I/O. This includes I/O to other Workers. With this change, you’re only billed for the first request in the chain, eliminating the other additional cost of using multiple Workers.</p><p>When you build microservices on Workers, you face fewer trade-offs than on other compute platforms. Service bindings have <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>zero network overhead</u></a> by default, a built-in <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript RPC system</u></a>, and a security model with <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>fewer footguns and simpler configuration</u></a>. We’re excited to improve this further with this pricing change.</p><h2>Image optimization is available to everyone for free — no subscription needed</h2><p>Starting today, you can use <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>Cloudflare Images</u></a> for free to optimize your images with up to 5,000 transformations per month.</p><p>Large, oversized images can throttle your application speed and page load times. 
We built <a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a> to let you dynamically optimize images in the correct dimensions and formats for each use case, all while storing only the original image.</p><p>In the spirit of Birthday Week, we’re making image optimization available to everyone with a Cloudflare account, no subscription needed. You’ll be able to use Images to transform images that are stored outside of Images, such as in R2.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49UPZRpeAp79qugqstqbT7/5e23fb4c7a458f5d00b401383bd6e777/BLOG-2517_12.png" />
          </figure><p>Transformations are served from your zone through a specially formatted URL with parameters that specify how an image should be optimized. For example, the transformation URL below uses the <code>format</code> parameter to automatically serve the image in the most optimal format for the requesting browser:</p>
            <pre><code>https://example.com/cdn-cgi/image/format=auto/thumbnail.png</code></pre>
            <p>This means that the original PNG image may be served as AVIF to one user and WebP to another. Without a subscription, transforming images from remote sources is free up to 5,000 unique transformations per month. Once you exceed this limit, any already cached transformations will continue to be served, but you’ll need a <a href="https://dash.cloudflare.com/?to=/:account/images"><u>paid Images plan</u></a> to request new transformations or to purchase storage within Images.</p><p>To get started, navigate to <a href="https://dash.cloudflare.com/?to=/:account/images"><u>Images in the dashboard</u></a> to enable transformations on your zone.</p>
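<p>Multiple parameters can be combined in the same URL segment, separated by commas. For example, a hypothetical transformation that resizes, compresses, and format-negotiates in one pass (the hostname and image path are placeholders):</p>

```
https://example.com/cdn-cgi/image/width=400,quality=75,format=auto/uploads/hero.jpg
```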
    <div>
      <h2>Dive deep into more announcements from Builder Day</h2>
    </div>
    <p>We shipped so much that we couldn’t possibly fit it all in one blog post. These posts dive into the technical details of what we’re announcing at Builder Day:</p><ul><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster"><u>Cloudflare’s Bigger, Better, Faster AI platform</u></a></p></li><li><p><a href="https://blog.cloudflare.com/making-workers-ai-faster"><u>Making Workers AI faster with KV cache compression, speculative decoding, and upgraded hardware</u></a></p></li><li><p><a href="https://blog.cloudflare.com/faster-workers-kv"><u>We made Workers KV up to 3x faster — here’s the data</u></a></p></li><li><p><a href="https://blog.cloudflare.com/sqlite-in-durable-objects"><u>Zero-latency SQLite storage in every Durable Object</u></a></p></li></ul>
    <div>
      <h2>Build the next big thing on Cloudflare</h2>
    </div>
    <p>Cloudflare is for builders, and you can start building with everything we’re announcing at Builder Day right away. We’re now offering qualified startups <a href="http://blog.cloudflare.com/startup-program-250k-credits"><u>$250,000 in credits to use on our Developer Platform</u></a>, so that you can get going even faster and become the next company to reach hypergrowth scale with a small team, without wasting time provisioning infrastructure and doing undifferentiated heavy lifting. Focus on shipping, and we’ll take care of the rest.</p><p>Apply to the startup program <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a>, or stop by and say hello in the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <guid isPermaLink="false">6ct91ZmJYzPu9n9pt8sNBm</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Rohin Lohe</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
            <dc:creator>Nevi Shah</dc:creator>
        </item>
        <item>
            <title><![CDATA[Race ahead with Cloudflare Pages build caching]]></title>
            <link>https://blog.cloudflare.com/race-ahead-with-build-caching/</link>
            <pubDate>Thu, 28 Sep 2023 13:00:57 GMT</pubDate>
            <description><![CDATA[ Unleash the fast & furious in your builds with Cloudflare Pages' build caching. Reduce build times by caching previously computed project components. Now in Beta for select frameworks and package managers. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are thrilled to release a beta of Cloudflare Pages support for build caching! With build caching, we are offering a supercharged Pages experience by helping you cache parts of your project to save time on subsequent builds.</p><p>For developers, time is not just money – it’s innovation and progress. When every second counts in crunch time before a new launch, the “need for speed” becomes <i>critical</i>. With Cloudflare Pages’ built-in <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">continuous integration and continuous deployment (CI/CD)</a>, developers count on us to drive fast. We’ve already taken great strides in making sure we’re enabling quick development iterations for our users by <a href="/cloudflare-pages-build-improvements/">making solid improvements to the stability and efficiency</a> of our build infrastructure. But we always knew there was more to our build story.</p>
    <div>
      <h3>Quick pit stops</h3>
      <a href="#quick-pit-stops">
        
      </a>
    </div>
    <p>Build times can feel like a developer's equivalent of a time-out, a forced pause in the creative process: the inevitable pit stop in a high-speed formula race.</p><p>Long build times not only break the flow of individual developers, but also create a ripple effect across the team. They can slow down iterations and push back deployments. In the fast-paced world of CI/CD, these delays can drastically impact productivity and the delivery of products.</p><p>We want to empower developers to <b>win the race</b>, miles ahead of the competition.</p>
    <div>
      <h3>Mechanics of build caching</h3>
      <a href="#mechanics-of-build-caching">
        
      </a>
    </div>
    <p>At its core, build caching is a mechanism that stores artifacts of a build, allowing subsequent builds to reuse these artifacts rather than recomputing them from scratch. By leveraging the cached results, build times can be significantly reduced, leading to a more efficient build process.</p><p>Previously, when you initiated a build, the Pages CI system would run every step of the build process, even if most of the codebase remained unchanged between builds. This is the equivalent of changing out every single part of the car during a pit stop, regardless of whether anything needs replacing.</p><p>Build caching refines this process. Now, the Pages build system will detect whether cached artifacts can be leveraged, restore those artifacts, and then compute only the modified sections of the code. In essence, build caching acts like an experienced pit crew, smartly skipping unnecessary steps and focusing only on what's essential to get you back in the race faster.</p>
    <div>
      <h3>What are we caching?</h3>
      <a href="#what-are-we-caching">
        
      </a>
    </div>
    <p>It boils down to two components: dependencies and build output.</p><p>The Pages build system supports dependency caching for select package managers and build output caching for select frameworks. Check out our <a href="https://developers.cloudflare.com/pages/platform/build-caching">documentation</a> for more information on what’s currently supported and what’s coming up.</p><p>Let’s take a closer look at what exactly we are caching.</p><p><b>Dependencies:</b> upon initiating a build, the Pages CI system checks for cached artifacts from previous builds. If it identifies a cache hit for dependencies, it restores them from cache to speed up dependency installation.</p><p><b>Build output:</b> if a cache hit for build output is identified, Pages will only build the changed assets. This approach enables the long-awaited <i>incremental builds</i> for supported JavaScript frameworks.</p>
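    <p>As an illustration of the general pattern (this is a hypothetical sketch, not Pages’ actual implementation; <code>cacheKey</code> and <code>shouldRestore</code> are made-up names), dependency caches in CI systems are commonly keyed on a hash of the project’s lockfile, so cached dependencies are reused only while the dependency set is unchanged:</p>

```typescript
import { createHash } from "node:crypto";

// Derive a cache key from the package manager and the lockfile contents.
// Any change to the lockfile produces a different key, i.e. a cache miss.
function cacheKey(packageManager: string, lockfileContents: string): string {
  const digest = createHash("sha256").update(lockfileContents).digest("hex");
  return `deps-${packageManager}-${digest}`;
}

// Cache hit only when the current key matches the last successful build's key.
function shouldRestore(previousKey: string | null, currentKey: string): boolean {
  return previousKey === currentKey;
}

const keyA = cacheKey("npm", '{"lodash":"4.17.21"}');
const keyB = cacheKey("npm", '{"lodash":"4.17.21"}');
const keyC = cacheKey("npm", '{"lodash":"4.17.20"}');

console.log(shouldRestore(keyA, keyB)); // true: lockfile unchanged, restore cache
console.log(shouldRestore(keyA, keyC)); // false: lockfile changed, reinstall
```

    <p>Build output caching works analogously at the framework level: the build tool compares inputs against its previous run and rebuilds only what changed.</p>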
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kqmUJuLrUGc7vtXbDc4X6/3f1440dbf1ad3acef20a2b99c18d6d28/image2-26.png" />
            
            </figure>
    <div>
      <h3>Ready, set … go!</h3>
      <a href="#ready-set-go">
        
      </a>
    </div>
    <p>Build caching is now in beta, and ready for you to test drive!</p><p>In this release, the feature will support the node-based package managers <a href="https://www.npmjs.com/">npm</a>, <a href="https://yarnpkg.com/">yarn</a>, and <a href="https://pnpm.io/">pnpm</a>, as well as <a href="https://bun.sh/">Bun</a>. We’ve also ensured compatibility with the most popular frameworks that provide native incremental build support: <a href="https://www.gatsbyjs.com/">Gatsby.js</a>, <a href="https://nextjs.org/">Next.js</a> and <a href="https://astro.build/">Astro</a> – and more to come!</p><p>For you as a Pages user, interacting with build caching will be seamless. If you are working with an existing project, simply navigate to your project’s settings to toggle on Build Cache.</p><p>When you push a code change and initiate a build using Pages CI, build caching will kick in and do its magic in the background.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hWz7Sh9wtk64c01cSnNjG/6967b9783a75f3fdfaaa10bd26884e0b/image4-17.png" />
            
            </figure>
    <div>
      <h3>“Cache” us on Discord</h3>
      <a href="#cache-us-on-discord">
        
      </a>
    </div>
    <p>Have questions? Join us on our <a href="https://discord.com/invite/cloudflaredev?event=1152163002502615050">Discord Server</a>. We will be hosting an “Ask Us Anything” <a href="https://discord.com/invite/cloudflaredev?event=1152163002502615050">session</a> on October 2nd where you can chat live with members of our team! Your feedback on this beta is invaluable to us, so after testing out build caching, don't hesitate to share your experiences! Happy building!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lavvh2PfpjlEbNV0YEuGB/8104fcccf6bf1243dfa113e940317f82/image3-32.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Speed]]></category>
            <guid isPermaLink="false">5NhsEJJxtKlKawPWJmHWJm</guid>
            <dc:creator>Anni Wang</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
            <dc:creator>John Fawcett</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing the 2023 Intern-ets!]]></title>
            <link>https://blog.cloudflare.com/introducing-the-2023-intern-ets/</link>
            <pubDate>Wed, 23 Aug 2023 17:30:50 GMT</pubDate>
            <description><![CDATA[ This year, Cloudflare welcomed a class of ~40 interns for an unforgettable summer filled with invaluable mentorship, continuous learning, and the chance to make a real-world impact. Get ready to learn about the surprising world of Cloudflare interns in their own words  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yHW5uY9u2WLcc5jEgwdXC/c7a6316b931653216e6ba6b1dd6c2d45/Introducing-the-2023-Intern-ets-.png" />
            
            </figure><p>This year, Cloudflare welcomed a class of approximately 40 interns, hailing from five different countries for an unforgettable summer. As we joined both remotely and in-person across Cloudflare’s global offices, our experiences spanned a variety of roles from engineering, product management to internal auditing and marketing. Through invaluable mentorship, continuous learning, and the chance to make a real-world impact, our summer was truly enriched at every step. Join us, Anni and Emilie, as we provide an insider's perspective on a summer at Cloudflare, sharing snippets and quotes from our intern cohort.</p>
    <div>
      <h2>printf(“Hello Intern-ets!”)</h2>
      <a href="#printf-hello-intern-ets">
        
      </a>
    </div>
    <p>You might have noticed that we have a new name for the interns: the Intern-ets! Our fresh intern nickname was born from a brainstorm between us and our recruiter, Judy. While “Cloudies”, “Cloudterns”, and “Flaries” made the shortlist, a company-wide vote crowned "Intern-ets" as the favorite. And just like that, we've made Cloudflare history!</p>
    <div>
      <h2>git commit -m “Innovation!”</h2>
      <a href="#git-commit-m-innovation">
        
      </a>
    </div>
    <p>We're all incredibly proud to have gotten the opportunity to tackle interesting and highly impactful projects throughout the duration of our internships. To give you a glimpse of our summer, here are a few that showcase the breadth and depth of our experiences.</p><p><b>Mia M., Product Manager intern,</b> worked on the Cloudflare Secrets Store, which is a new product that will allow Cloudflare customers to store, encrypt, and deploy sensitive data across the product suite. She focused on creating requirements for the core platform and tackling the first milestone, bringing secrets and environment variables from the per-Worker level to the account level in the Workers platform.</p><p><b>Pierre, Research intern,</b> focused on integrating differential privacy—a layer of protection with formal privacy guarantees—into distributed aggregation protocols. This privacy layer is imperative for user trust and data security as these protocols are commonly used for collecting sensitive information, such as browser telemetry or health data.</p><p><b>Johnny, Software Engineer intern</b>, worked on a new feature for API Shield called Object-Level Access Policies as a first step in a solution to combat Broken Object-Level Authorization, which has been ranked as the #1 API Security flaw by OWASP. This feature will enable customers to specify explicit allowlists and blocklists for API traffic per object.</p><p><b>Olivia, Project Manager intern,</b> led the Dissatisfied (DSAT) Customer Outreach Project and JIRA Automations for the Customer Support team. The DSAT project involves reaching out to Premium customers who express dissatisfaction with the goal of providing personalized contact to ensure they feel valued. 
The JIRA Automation project aims to create zero-touch tickets, removing Customer Support as the middleman.</p><p>Also, don’t forget to check out the amazing intern projects that were featured in their own blog posts!</p><p><b>Emilie, Software Engineering intern</b> introduced a <a href="/debug-queues-from-dash/">debugging feature for Cloudflare Queues</a>.</p><p><b>Joaquin, Software Engineer intern</b> added a <a href="/cloudflare-workers-database-integration-with-upstash/">new Workers Database integration</a>.</p><p><b>Austin, Software Engineer intern</b> created <a href="/introducing-scheduled-deletion-for-cloudflare-stream/">scheduled deletion for Cloudflare Stream</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5GjXDOjR12QZdJS1J5Ey53/3382bed549140ffe2a350bb74892b21b/Introducing-the-2023-Intern-ets-body-2-1.png" />
            
            </figure>
    <div>
      <h2>No null days here!</h2>
      <a href="#no-null-days-here">
        
      </a>
    </div>
    <p>Beyond our projects, we had tons of fun getting to meet other Cloudflarians and experience the vibrant Cloudflare culture. Let's dive into some of the standout moments that made our internships truly special!</p>
    <div>
      <h3>Remote, office, hybrid</h3>
      <a href="#remote-office-hybrid">
        
      </a>
    </div>
    <p>This summer, the interns dotted the globe, working from cozy home setups to bustling city offices. Regardless of where we worked, we had a blast. Here's what some fellow interns have to say about their work experiences:</p><p><b>Austin office: Jada</b> loved her colleagues at the Austin office as they were “warm and open to exploring the city, [...], and hanging out outside of work”. <b>Anni</b> and <b>Maximo</b> loved the Austin-based team summit, where they joined strategy sessions and met the team in person.</p><p><b>San Francisco office</b>: <b>Emmanuel F.</b> enjoyed getting to interact with other engineers during SF Team Lunches. <b>Matthew</b> enjoyed working on the rooftop with a view of the city skyline. <b>Jonathan</b> appreciated the hybrid work model enjoyed by SF employees.</p><p><b>Remote work</b>: <b>Johnny</b> liked the distributed and flexible work style that the company embraces. <b>Daniël</b>, also working remotely, found it amusing how “[s]everal people have noticed the Feynman Lectures on Physics on the shelf behind me in my home and have asked about it.”</p><p><b>Remote intern events: Emmanuel G., Aaron, and Feiyu</b> enjoyed the intern calls that were held on GatherRound as “it was a fun, quick way to get to meet everyone.” <b>Pradyumna</b> particularly liked it when we played skribbl.io.</p>
    <div>
      <h3>Mentorship</h3>
      <a href="#mentorship">
        
      </a>
    </div>
    <p>With so many exceptional minds at Cloudflare, every interaction became a chance for us to learn and grow. Here are some awe-inspiring individuals who have made our internships unforgettable:</p><p><b>Harshal, Systems Engineer: Aaron</b> is grateful for his mentor <b>Harshal</b>. “I always left our conversations knowing more than I did coming into them”.</p><p><b>Revathy, Systems Engineer: Harshini</b> is thankful to her mentor <b>Revathy</b> for “how she helps me to learn [...] the best way possible to do something and ultimately achieve my goals”.</p><p><b>Nevi, Product Manager: Anni</b> admires her manager <b>Nevi</b>, who is always thinking about the team and our customers and has invested in the personal growth and mentorship of interns.</p><p><b>Conner, Systems Engineer: Jonathan</b> is grateful that he was always able to count on <b>Conner</b> for great engineering tips, guidance, and NeoVim wizardry.</p><p><b>Malgorzata, Data Scientist:</b> <b>Jada</b> looks up to <b>Malgorzata</b> for being so welcoming, kind, and funny. She has a great attitude and besides being super knowledgeable, she is also willing to share her expertise and support others!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NpzyFOtTN51zpo3VPfoyw/1b8ac80f54e82411df1b021f230a1d9c/Introducing-the-2023-Intern-ets-body-3.png" />
            
            </figure>
    <div>
      <h3>Executive chats</h3>
      <a href="#executive-chats">
        
      </a>
    </div>
    <p>During our internships, we engaged in Executive fireside chats, diving deep with Cloudflare's top leaders. Each chat was insightful and surprising in a different way, and some of our favorite takeaways were…</p><p><b>John, CTO: Shaheen</b> valued John’s humility in emphasizing daily learning from others at work, stating, “As I grow in my career, I intend to keep a similar attitude and try to learn from those around me by keeping myself grounded.”</p><p><b>Doug Kramer, General Counsel: Emilie</b> valued Doug Kramer's advice on identifying a career "north star" to guide intentions while also recognizing "exit-ramps" or alternative paths that may offer unexpected fulfillment.</p><p><b>Matthew Prince</b>, <b>Co-founder and CEO</b>: <b>Yunfan</b> loved hearing “the story about how Cloudflare developed from a start-up till today”.</p><p><b>Michelle Zatlyn, Co-founder and COO: Harsh</b> learned from Michelle about “how they moved across the country, against everyone's advice, to start Cloudflare”, and <b>Mia C.</b> enjoyed learning that “Cloudflare started as a business school project”.</p>
    <div>
      <h3>Snack bytes</h3>
      <a href="#snack-bytes">
        
      </a>
    </div>
    <p>A bonding point for the Intern-ets was our love for snacks! In July, we gathered the Intern-ets together for a virtual snack break. The University team sent out a box featuring snacks from Indonesia, a country none of us had visited (or tried goodies from… yet!). Below you can see us holding up our favorite snacks from the box!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A3F9KPae2jxzjshJsteXl/3a0139eff7f2eb769f94ebf1550c4070/Frame-2--10-.png" />
            
            </figure><p>Meanwhile, the on-site interns couldn't get enough of the office snacks. Favorites? Pita chips, Lucky Charms, chocolate almonds, coconut chocolate bars and coconut water. Plus, the Austin and San Francisco offices even have a Sour Patch Kids dispenser! Snack on!</p>
    <div>
      <h2>Surprises</h2>
      <a href="#surprises">
        
      </a>
    </div>
    <p>Every day at Cloudflare presented unexpected joys and challenges. Here's what the interns found most surprising:</p><p><b>High Impact: Simon, Emmanuel F.</b> and <b>Maximo</b> were surprised to “[do] such visible and important work as an intern”. <b>Austin</b> agreed, noting “I was treated like any other member of the team [...] It felt like I was working on something important and not just a typical intern project!” <b>Harshini</b> added, “when [colleagues] hear what I am working on they go - ‘that is really cool, I can't wait to see that happen - we need it.’”</p><p><b>Support: Eames</b> “was worried that it would feel like my achievements were the only thing that mattered. But my colleagues always showed concern for how I was feeling and how things were going, and I couldn't be more grateful for that.”</p><p><b>Industry vs Academia: Johnny</b> mentioned “coming from academia, I was amazed by the amount of effort that has been, and is continuing to be, put into the products to really productionize what I have only before seen in research. It is another reminder of the scale in which we work!”</p>
    <div>
      <h2>By the numbers</h2>
      <a href="#by-the-numbers">
        
      </a>
    </div>
    <p>Here are some fun stats from our internship…</p><ul><li><p><b>Johnny</b> drove 30 hours from New York to Colorado</p></li><li><p><b>Maximo</b> missed 0 days of going to the Austin office</p></li><li><p><b>Anni</b> drank 86 matcha lattes this summer</p></li><li><p><b>Emilie</b> participated in 38 Cloudfriends calls and coffee chats</p></li><li><p><b>Simon</b> has waited around one week cumulatively for builds to finish</p></li></ul>
    <div>
      <h3>exit(0)</h3>
      <a href="#exit-0">
        
      </a>
    </div>
    <p>At Cloudflare, our internships aren’t just about work—they're about growth, mentorship, and real impact. We've built more than projects; we've forged lasting relationships. It’s been an unforgettable summer of challenges, bonding, and authentic experiences. For more about our journey this summer, check out our <a href="https://cloudflare.tv/event/9ZdNewvj">Cloudflare TV segment</a> with <b>Michelle Zatlyn</b>, <b>Co-founder and COO</b>.</p><p>Finally, we would love to give a huge thanks to our university recruiters <b>Judy, Trang,</b> and <b>Dani</b> for creating such an amazing internship experience for us this summer!</p>
    <div>
      <h2>Want to become an Intern-et or Cloudflarian?</h2>
      <a href="#want-to-become-an-intern-et-or-cloudflarian">
        
      </a>
    </div>
    <p>Sign up <a href="https://docs.google.com/forms/d/e/1FAIpQLSdU9aOH_aWOkQViU3Qlo_w0kLsJ_TsW5wUJHG7OxG2ncIhnlg/viewform"><b>here</b></a> to be notified of new grad and internship opportunities for 2024. Cloudflare is also hiring for full-time opportunities: check out <a href="https://www.cloudflare.com/careers/jobs/">open positions</a> and apply today!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3E9oNXBAvBEB8vHMpC6n5Y/87e29d504635e2a0df2a654a97927411/Introducing-the-2023-Intern-ets--body-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Careers]]></category>
            <guid isPermaLink="false">4b1Y1QbACl6HL4X750Ub0H</guid>
            <dc:creator>Emilie Ma</dc:creator>
            <dc:creator>Judy Cheong</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
    </channel>
</rss>