
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 04:56:59 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Make Your Website Conversational for People and Agents with NLWeb and AutoRAG]]></title>
            <link>https://blog.cloudflare.com/conversational-search-with-nlweb-and-autorag/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ With NLWeb, an open project by Microsoft, and Cloudflare AutoRAG, conversational search is now a one-click setup for your website. ]]></description>
            <content:encoded><![CDATA[ <p>Publishers and content creators have historically relied on traditional keyword-based search to help users navigate their website’s content. However, traditional search is built on outdated assumptions: users type in keywords to indicate intent, and the site returns a list of links for the most relevant results. It’s up to the visitor to click around, skim pages, and piece together the answer they’re looking for. </p><p><a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/"><u>AI</u></a> has reset expectations and that paradigm is breaking: how we search for information has fundamentally changed.</p>
    <div>
      <h2>Your New Type of Visitors</h2>
      <a href="#your-new-type-of-visitors">
        
      </a>
    </div>
    <p>Users no longer want to search websites the old way. They’re used to interacting with AI systems like Copilot, Claude, and ChatGPT, where they can simply ask a question and get an answer. We’ve moved from search engines to answer engines. </p><p>At the same time, websites now have a new class of visitors: AI agents. Agents face the same pain with keyword search: they have to issue keyword queries, click through links, and scrape pages to piece together answers. But they also need more: a structured way to ask questions and get reliable answers across websites. This means that websites need a way to give the agents they trust controlled access, so that information is retrieved accurately.</p><p>Website owners need a way to participate in this shift.</p>
    <div>
      <h2>A New Search Model for the Agentic Web</h2>
      <a href="#a-new-search-model-for-the-agentic-web">
        
      </a>
    </div>
    <p>If AI has reset expectations, what comes next? To meet both people and agents where they are, websites need more than incremental upgrades to keyword search. They need a model that makes conversational access to content a first-class part of the web itself.</p><p>That’s what we want to deliver: combining an open standard (NLWeb) with the infrastructure (AutoRAG) to make it simple for any website to become AI-ready.</p><p><a href="https://news.microsoft.com/source/features/company-news/introducing-nlweb-bringing-conversational-interfaces-directly-to-the-web/"><u>NLWeb</u></a> is an open project developed by Microsoft that defines a standard protocol for natural-language queries on websites. Each NLWeb instance also operates as a Model Context Protocol (MCP) server. Cloudflare is building to this spec and actively working with Microsoft to extend the standard, with the goal of letting every site function like an AI app, so users and agents alike can query its contents naturally.</p><p><a href="https://developers.cloudflare.com/autorag/"><u>AutoRAG</u></a>, Cloudflare’s managed retrieval engine, can automatically crawl your website, store the content in R2, and embed it into a managed vector database. AutoRAG keeps the index fresh with continuous re-crawling and re-indexing. Model inference and embedding can be served through Workers AI. Each AutoRAG is paired with an AI Gateway that can provide <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability and insights</a> into your AI model usage. This gives you a <a href="https://www.cloudflare.com/learning/ai/how-to-build-rag-pipelines/">complete, managed pipeline</a> for conversational search without the burden of managing custom infrastructure.</p><blockquote><p><i>“Together, NLWeb and AutoRAG let publishers go beyond search boxes, making conversational interfaces for websites simple to create and deploy. 
This integration will enable every website to easily become AI-ready for both people and trusted agents.”</i> – R.V. Guha, creator of NLWeb, CVP and Technical Fellow at Microsoft. </p></blockquote><p>We are optimistic this will open up new monetization models for publishers:</p><blockquote><p><i>"The challenges publishers have faced are well known, as are the risks of AI accelerating the collapse of already challenged business models. However, with NLWeb and AutoRAG, there is an opportunity to reset the nature of relationships with audiences for the better. More direct engagement on Publisher Owned and Operated (O&amp;O) environments, where audiences value the brand and voice of the Publisher, means new potential for monetization. This would be the reset the entire industry needs."</i>  – Joe Marchese, General &amp; Build Partner at Human Ventures.</p></blockquote>
    <div>
      <h2>One-Click to Make Your Site Conversational</h2>
      <a href="#one-click-to-make-your-site-conversational">
        
      </a>
    </div>
    <p>By combining NLWeb's standard with Cloudflare’s AutoRAG infrastructure, we’re making it easy to bring conversational search to any website.</p><p>Simply select your domain in AutoRAG, and it will crawl and index your site for semantic querying. It then deploys a Cloudflare Worker, which acts as the access layer. This Worker implements the NLWeb standard and UI defined by the <a href="https://github.com/nlweb-ai/NLWeb"><u>NLWeb project</u></a> and exposes your indexed content to both people and AI agents.

The Worker includes:</p><ul><li><p><b><code>/ask</code> endpoint:</b> Implements the standard for serving conversational web searches. It powers the conversational UI at the root <code>/</code> as well as the embeddable preview at <code>/snippet.html</code>. It supports chat history so queries can build on one another within the same session, and includes automatic query decontextualization to improve retrieval quality.</p></li><li><p><b><code>/mcp</code> endpoint: </b>Implements an MCP server that trusted AI agents can connect to for structured access.</p></li></ul><p>With this setup, your site content is immediately available in two ways: through a conversational UI that you can serve to your visitors, and through a structured MCP interface that lets trusted agents query your site reliably, on your terms.</p><p>Additionally, if you prefer to deploy and host your own version of the NLWeb project, you also have the option to use AutoRAG as the retrieval engine powering the <a href="https://github.com/nlweb-ai/NLWeb/blob/main/docs/setup-cloudflare-autorag.md"><u>NLWeb instance</u></a>.</p>
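<p>To give a feel for the <code>/ask</code> endpoint, here is a minimal sketch of querying a deployed Worker from JavaScript. The origin, the <code>query</code> parameter name, and the response handling are illustrative assumptions on our part; consult the NLWeb project for the authoritative request and response schema.</p>

```javascript
// Hypothetical helper for querying an NLWeb Worker's /ask endpoint.
// The base URL and the "query" parameter name are assumptions for illustration.
function buildAskUrl(base, question) {
  const url = new URL("/ask", base);
  url.searchParams.set("query", question);
  return url.toString();
}

async function ask(base, question) {
  const res = await fetch(buildAskUrl(base, question));
  if (!res.ok) throw new Error(`ask failed with status ${res.status}`);
  return res.json(); // parsed answer payload
}

// Example call against a hypothetical deployment:
// ask("https://example.com", "Which pages cover pricing?").then(console.log);
```

<p>An agent that prefers structured access would instead connect to the <code>/mcp</code> endpoint over the MCP protocol.</p>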
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SM7rSQDhoR4fH5KgAJPD7/2266dc2e3c80f3fcc7f17014eb1d0cf1/image5.png" />
          </figure>
    <div>
      <h2>How Your Site Becomes Conversational</h2>
      <a href="#how-your-site-becomes-conversational">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xkeREv3GwXwBZw52Dg6XQ/caeb587819d08eff53a33aa893032b78/image2.png" />
          </figure><p>From your perspective, making your site conversational is just a single click. Behind the scenes, AutoRAG spins up a full retrieval pipeline to make that possible:</p><ol><li><p><b>Crawling and ingestion: </b>AutoRAG explores your site like a search engine, reading your <code>robots.txt</code> file to learn which pages are allowed for crawling and following your <code>sitemap.xml</code> to discover pages within your domain (up to 100k pages). <a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Rendering</u></a> is used to load each page so that it can capture dynamic, JavaScript-rendered content. Crawled pages are downloaded into an <a href="https://developers.cloudflare.com/r2/"><u>R2 bucket</u></a> in your account before being ingested. </p></li><li><p><b>Continuous Indexing:</b> Once ingested, the content is parsed and embedded into <a href="https://developers.cloudflare.com/vectorize/"><u>Vectorize</u></a>, making it queryable beyond keyword matching through semantic search. AutoRAG automatically re-crawls and re-indexes to keep your knowledge base aligned with your latest content.</p></li><li><p><b>Access &amp; Observability: </b>A Cloudflare Worker is deployed in your account to serve as the access layer that implements the NLWeb protocol (you can also find the deployable Worker in the Workers <a href="https://github.com/cloudflare/templates"><u>templates repository</u></a>). Workers AI is used to power the summarization and query decontextualization capabilities that improve responses. <i>Soon, with the</i><a href="http://blog.cloudflare.com/ai-gateway-aug-2025-refresh/"><i><u> AI Gateway and Secret Store BYO keys</u></i></a><i>, you’ll be able to connect models from any provider and select them directly in the AutoRAG dashboard.</i></p></li></ol>
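<p>As an illustration of step 1, the sketch below pulls page URLs out of a sitemap document and enforces a page cap. This is a simplification we wrote for this post — the function name and regex-based parsing are our own, and AutoRAG's actual crawler also respects <code>robots.txt</code> and renders each page with Browser Rendering.</p>

```javascript
// Simplified sketch of sitemap-based URL discovery (step 1 above).
const MAX_PAGES = 100_000; // the crawl cap mentioned above

function extractSitemapUrls(sitemapXml, limit = MAX_PAGES) {
  const urls = [];
  const locPattern = /<loc>\s*([^<\s]+)\s*<\/loc>/g;
  let match;
  // Collect every <loc> entry until the cap is reached.
  while ((match = locPattern.exec(sitemapXml)) !== null && urls.length < limit) {
    urls.push(match[1]);
  }
  return urls;
}
```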
    <div>
      <h2>Road to Making Websites a First-Class Data Source</h2>
      <a href="#road-to-making-websites-a-first-class-data-source">
        
      </a>
    </div>
    <p>Until now, <a href="https://developers.cloudflare.com/autorag/concepts/how-autorag-works/"><u>AutoRAG</u></a> only supported R2 as a data source. That worked well for structured files, but we needed to make a website itself a first-class data source to be indexed and searchable. Making that possible meant building website crawling into AutoRAG and strengthening the system to handle large, dynamic sources like websites.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ouTCcbipVX3s1fPgg6hEs/541a03efb4365370fee5df67cd68841f/image4.png" />
          </figure><p>Before implementing our web crawler, we needed to improve the reliability of data syncs. Previously, users of AutoRAG lacked visibility into when indexing syncs ran and whether they were successful. To fix this, we introduced a Job module to track all syncs, store history, and provide logs. This required two new Durable Objects in AutoRAG’s architecture:</p><ul><li><p><b>JobManager</b> runs a complete sync, and its duties include queuing files, embedding content, and keeping the Vectorize database up to date. To ensure data consistency, only one JobManager can run per RAG at a time, enforced by the RagManager (a Durable Object in our existing architecture), which cancels any running job before starting a new one. Jobs can be triggered either manually or by a scheduled sync.</p></li><li><p><b>FileManager</b> solved scalability issues we hit when Workers ran out of memory during parallel processing. Originally, a single Durable Object was responsible for handling multiple files, but with a 128MB memory limit it quickly became a bottleneck. The solution was to break the work apart: JobManager now distributes files across many FileManagers, each responsible for a single file. By processing 20 files in parallel through 20 different FileManagers, we expanded effective memory capacity from 128MB to roughly 2.5GB per batch.</p></li></ul><p>With these improvements, we were ready to build the website parser. By reusing our existing R2-based queuing logic, we added crawling with minimal disruption:</p><ol><li><p>A JobManager designated for a website crawl begins by reading the sitemaps associated with the RAG configuration.</p></li><li><p>Instead of listing objects from an R2 bucket, it queues each website link into our existing R2-based queue, using the full URL as the R2 object key.</p></li><li><p>From here, the process is nearly identical to our file-based sync: a FileManager picks up the job and checks if the RAG is configured for website parsing.</p></li><li><p>If it is, the FileManager crawls the link and places the page's HTML contents into the user's R2 bucket, again using the URL as the object key.</p></li></ol><p>After these steps, we index the data and serve it at query time. This approach maximized code reuse, and any improvements to our <a href="https://blog.cloudflare.com/markdown-for-agents/">HTML-to-Markdown conversion</a> now benefit both file and website-based RAGs automatically.</p>
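<p>The steps above can be sketched in a few lines. Plain objects stand in for the Durable Objects and the R2 bucket so the part worth noticing — the full URL doubling as the R2 object key — is easy to see; the names are ours, not AutoRAG internals.</p>

```javascript
// Step 2: queue each discovered page, keyed by its full URL.
function enqueueWebsitePages(pageUrls, queue) {
  for (const url of pageUrls) {
    queue.push({ key: url, type: "website" });
  }
  return queue;
}

// Steps 3-4: a "FileManager" crawls the link and stores the HTML under the
// same URL key. crawlPage is a stand-in for the real fetch/render step.
async function processEntry(entry, bucket, crawlPage) {
  if (entry.type !== "website") return;
  const html = await crawlPage(entry.key);
  bucket.set(entry.key, html);
}
```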
    <div>
      <h2>Get Started Today</h2>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>Getting your website ready for conversational search through NLWeb and AutoRAG is simple. Here’s how:</p><ol><li><p>In the <b>Cloudflare Dashboard</b>, navigate to <b>Compute &amp; AI &gt; AutoRAG</b>.</p></li><li><p>Select <b>Create</b> in AutoRAG, then choose the <b>NLWeb Website</b> quick deploy option.</p></li><li><p>Select the <b>domain</b> from your Cloudflare account that you want indexed.</p></li><li><p>Click <b>Start indexing</b>.</p></li></ol><p>That’s it! You can now try out your NLWeb search experience via the provided link, and test out how it will look on your site by using the embeddable snippet.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/dI9xwOKdn3jGkYKWK8NEN/e25ae13199eb09577868e421cc1fef7d/image1.png" />
          </figure><p>We’d love to hear your feedback as you experiment with this new capability and share your thoughts with us at <a href="mailto:nlweb@cloudflare.com">nlweb@cloudflare.com</a>.</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Search Engine]]></category>
            <category><![CDATA[Microsoft]]></category>
            <category><![CDATA[Auto Rag]]></category>
            <guid isPermaLink="false">1FRpZMePLmgD9cPqJnMFKS</guid>
            <dc:creator>Catarina Pires Mota</dc:creator>
            <dc:creator>Gabriel Massadas</dc:creator>
            <dc:creator>Nelson Duarte</dc:creator>
            <dc:creator>Daniel Leal</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[DO it again: how we used Durable Objects to add WebSockets support and authentication to AI Gateway]]></title>
            <link>https://blog.cloudflare.com/do-it-again/</link>
            <pubDate>Tue, 19 Nov 2024 22:00:00 GMT</pubDate>
            <description><![CDATA[ We used Cloudflare’s Developer Platform and Durable Objects to build authentication and a WebSockets API that developers can use to call AI Gateway, enabling continuous communication over a single, persistent connection. ]]></description>
            <content:encoded><![CDATA[ <p>In October 2024, we talked about storing <a href="https://blog.cloudflare.com/billions-and-billions-of-logs-scaling-ai-gateway-with-the-cloudflare"><u>billions of logs</u></a> from your AI application using AI Gateway, and how we used Cloudflare’s Developer Platform to do this. </p><p>With AI Gateway already processing over 3 billion logs and experiencing rapid growth, the number of connections to the platform continues to increase steadily. To help developers manage this scale more effectively, we wanted to offer an alternative to implementing HTTP/2 keep-alive to maintain persistent HTTP(S) connections, thereby avoiding the overhead of repeated handshakes and <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS negotiations</u></a> with each new HTTP connection to AI Gateway. We understand that implementing HTTP/2 can present challenges, particularly since many libraries and tools may not support it by default, while most modern programming languages have well-established WebSocket libraries available.</p><p>With this in mind, we used Cloudflare’s Developer Platform and Durable Objects (yes, again!) to build a <a href="https://developers.cloudflare.com/workers/runtime-apis/websockets/"><u>WebSockets API</u></a> that establishes a single, persistent connection, enabling continuous communication. </p><p>Through this API, all AI providers supported by AI Gateway can be accessed via WebSocket, allowing you to maintain a single TCP connection between your client or server application and the AI Gateway. The best part? Even if your chosen provider doesn’t support WebSockets, we handle it for you, managing the requests to your preferred AI provider.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20z8XfL3pJ78z1Y0Yp7JNO/0b8607702a25ede4268009f124278985/unnamed.png" />
          </figure><p>By connecting via WebSocket to AI Gateway, we make the requests to the inference service for you using the provider’s supported protocols (HTTPS, WebSocket, etc.), and you can keep the connection open to execute as many inference requests as you would like. </p><p>To make your connection to AI Gateway more secure, we are also introducing authentication for AI Gateway. The new WebSockets API will require authentication. All you need to do is <a href="https://developers.cloudflare.com/fundamentals/api/get-started/create-token/#_top"><u>create a Cloudflare API token</u></a> with the permission “AI Gateway: Run” and send that in the <code>cf-aig-authorization</code> header.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NqmYZX8NdnrTUBHkQiLA1/4bda8a44aa225c7ff440686dd3046a5a/image4.png" />
          </figure><p>In the flow diagram above:</p><p>1️⃣ When Authenticated Gateway is enabled and a valid token is included, requests will pass successfully.</p><p>2️⃣ If Authenticated Gateway is enabled, but a request does not contain the required <code>cf-aig-authorization</code> header with a valid token, the request will fail. This ensures only verified requests pass through the gateway. </p><p>3️⃣ When Authenticated Gateway is disabled, the <code>cf-aig-authorization</code> header is bypassed entirely, and any token — whether valid or invalid — is ignored.</p>
    <div>
      <h2>How we built it</h2>
      <a href="#how-we-built-it">
        
      </a>
    </div>
    <p>We recently used Durable Objects (DOs) to scale our logging solution for AI Gateway, so using WebSockets within the same DOs was a natural fit.</p><p>When a new WebSocket connection is received by our Cloudflare Workers, we implement authentication in two ways to support the diverse capabilities of WebSocket clients. The primary method involves validating a Cloudflare API token through the <code>cf-aig-authorization</code> header, ensuring the token is valid for the connecting account and gateway. </p><p>However, due to limitations in browser WebSocket implementations, we also support authentication via the <code>sec-websocket-protocol</code> header. Browser WebSocket clients don't allow for custom headers in their standard API, complicating the addition of authentication tokens in requests. While we don’t recommend that you store API keys in a browser, we support this method to give all WebSocket clients more flexibility.</p>
            <pre><code>// Built-in WebSocket client in browsers
const socket = new WebSocket("wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/", [
   `cf-aig-authorization.${AI_GATEWAY_TOKEN}`
]);


// ws npm package
import WebSocket from "ws";
const ws = new WebSocket("wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/",{
   headers: {
       "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN",
   },
});
</code></pre>
            <p>After this initial verification step, we upgrade the connection to the Durable Object, meaning that it will now handle all the messages for the connection. Before the new connection is fully accepted, we generate a random UUID, so this connection is identifiable among all the messages received by the Durable Object. During an open connection, any AI Gateway settings passed via headers — such as <code>cf-aig-skip-cache</code> (<a href="https://developers.cloudflare.com/ai-gateway/configuration/caching/#skip-cache-cf-skip-cache"><u>which bypasses caching when set to true</u></a>) — are stored and applied to all requests in the session. However, these headers can still be overridden on a per-request basis, just like with the <a href="https://developers.cloudflare.com/ai-gateway/providers/universal/"><u>Universal Endpoint</u></a> today.</p>
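<p>As a sketch of that per-request override, the helper below adds <code>cf-aig-skip-cache</code> to a single message's <code>headers</code> object, leaving the session-level setting untouched for other requests. The helper itself is hypothetical; the message shape follows the universal format used throughout this post.</p>

```javascript
// Hypothetical builder for universal-format messages. Setting
// cf-aig-skip-cache here affects only this request, assuming per-request
// headers override session-level ones as described above.
function buildRequest(eventId, prompt, { skipCache = false } = {}) {
  const headers = {
    "Authorization": "Bearer WORKERS_AI_TOKEN",
    "Content-Type": "application/json",
  };
  if (skipCache) headers["cf-aig-skip-cache"] = "true";
  return {
    type: "universal.create",
    request: {
      eventId,
      provider: "workers-ai",
      endpoint: "@cf/meta/llama-3.1-8b-instruct",
      headers,
      query: { prompt },
    },
  };
}

// ws.send(JSON.stringify(buildRequest("fresh", "tell me a joke", { skipCache: true })));
```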
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Once the connection is established, the Durable Object begins listening for incoming messages. From this point on, users can send messages in the AI Gateway universal format via WebSocket, simplifying the transition of your application from an existing HTTP setup to WebSockets-based communication.</p>
            <pre><code>import WebSocket from "ws";
const ws = new WebSocket("wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/",{
   headers: {
       "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN",
   },
});

ws.send(JSON.stringify({
   type: "universal.create",
   request: {
      "eventId": "my-request",
      "provider": "workers-ai",
      "endpoint": "@cf/meta/llama-3.1-8b-instruct",
      "headers": {
         "Authorization": "Bearer WORKERS_AI_TOKEN",
         "Content-Type": "application/json"
      },
      "query": {
         "prompt": "tell me a joke"
      }
   }
}));

ws.on("message", function incoming(message) {
   console.log(message.toString())
});
</code></pre>
            <p>When a new message reaches the Durable Object, it’s processed using the same code that powers the HTTP Universal Endpoint, enabling seamless code reuse across Workers and Durable Objects — one of the key benefits of building on Cloudflare.</p><p>For non-streaming requests, the response is wrapped in a JSON envelope, allowing us to include additional information beyond the AI inference itself, such as the AI Gateway log ID for that request.</p><p>Here’s an example response for the request above:</p>
            <pre><code>{
  "type":"universal.created",
  "metadata":{
     "cacheStatus":"MISS",
     "eventId":"my-request",
     "logId":"01JC3R94FRD97JBCBX3S0ZAXKW",
     "step":"0",
     "contentType":"application/json"
  },
  "response":{
     "result":{
        "response":"Why was the math book sad? Because it had too many problems. Would you like to hear another one?"
     },
     "success":true,
     "errors":[],
     "messages":[]
  }
}
</code></pre>
            <p>For streaming requests, AI Gateway sends an initial message with request metadata telling the developer the stream is starting.</p>
            <pre><code>{
  "type":"universal.created",
  "metadata":{
     "cacheStatus":"MISS",
     "eventId":"my-request",
     "logId":"01JC40RB3NGBE5XFRZGBN07572",
     "step":"0",
     "contentType":"text/event-stream"
  }
}
</code></pre>
            <p>After this initial message, all streaming chunks are relayed in real-time to the WebSocket connection as they arrive from the inference provider. Note that only the <code>eventId</code> field is included in the metadata for these streaming chunks (more info on what this new field is below).</p>
            <pre><code>{
  "type":"universal.stream",
  "metadata":{
     "eventId":"my-request",
  }
  "response":{
     "response":"would"
  }
}
</code></pre>
            <p>This approach serves two purposes: first, all request metadata is already provided in the initial message. Second, it addresses the concurrency challenge of handling multiple streaming requests simultaneously.</p>
    <div>
      <h2>Handling asynchronous events</h2>
      <a href="#handling-asynchronous-events">
        
      </a>
    </div>
    <p>With WebSocket connections, client and server can send messages asynchronously at any time. This means the client doesn’t need to wait for a server response before sending another message. But what happens if a client sends multiple streaming inference requests immediately after the WebSocket connection opens?</p><p>In this case, the server streams all the inference responses simultaneously to the client. Since everything occurs asynchronously, the client has no built-in way to identify which response corresponds to each request.</p><p>To address this, we introduced a new field in the Universal format called <code>eventId</code>, which allows AI Gateway to include a client-defined ID with each message, even in a streaming WebSocket environment.</p><p>So, to fully answer the question above: the server streams both responses in parallel chunks, and the client can accurately identify which request each message belongs to based on the <code>eventId</code>.</p><div>
  
</div><p>Once all chunks for a request have been streamed, AI Gateway sends a final message to signal the request’s completion. For added flexibility, this message includes all the metadata again, even though it was also provided at the start of the streaming process.</p>
            <pre><code>{
  "type":"universal.done",
  "metadata":{
     "cacheStatus":"MISS",
     "eventId":"my-request",
     "logId":"01JC40RB3NGBE5XFRZGBN07572",
     "step":"0",
     "contentType":"text/event-stream"
  }
}
</code></pre>
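            <p>Putting the envelopes above together, a client can demultiplex concurrent streams by routing each chunk into a buffer keyed by <code>eventId</code>. The accumulator below is a client-side sketch of ours, not part of the AI Gateway API:</p>

```javascript
// Client-side sketch: demultiplex concurrent streams by eventId.
// Message shapes follow the universal.* envelopes shown above.
function createDemux(onDone) {
  const buffers = new Map(); // eventId -> accumulated chunks

  return function handleMessage(raw) {
    const msg = JSON.parse(raw);
    const id = msg.metadata.eventId;
    switch (msg.type) {
      case "universal.created":
        buffers.set(id, []); // stream is starting
        break;
      case "universal.stream":
        buffers.get(id).push(msg.response.response); // append this chunk
        break;
      case "universal.done":
        onDone(id, buffers.get(id).join("")); // stream complete
        buffers.delete(id);
        break;
    }
  };
}

// Usage: ws.on("message", createDemux((id, text) => console.log(id, text)));
```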
            
    <div>
      <h2>Try it out today</h2>
      <a href="#try-it-out-today">
        
      </a>
    </div>
    <p>AI Gateway’s real-time WebSocket API is now in beta and open to everyone!</p><p>To try it out, copy your gateway’s Universal Endpoint URL, and replace the “https://” with “wss://”, like this: </p><p><code>wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/</code></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qbWGSAjCj8GjKmY8Zyvld/1bf36332de118857e3d24baea0c2ffc9/BLOG-2617_4.png" />
          </figure><p>Then open a WebSocket connection using your Universal Endpoint, and ensure that it is authenticated with a Cloudflare API token that has the AI Gateway <i>Run</i> permission.</p><p>Here’s example code using the <a href="https://www.npmjs.com/package/ws"><u>ws npm package</u></a>:</p>
            <pre><code>import WebSocket from "ws";
const ws = new WebSocket("wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/", {
   headers: {
       "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN",
   },
});

ws.on("open", function open() {
   console.log("Connected to server.");
   ws.send(JSON.stringify({
      type: "universal.create",
      request: {
         "provider": "workers-ai",
         "endpoint": "@cf/meta/llama-3.1-8b-instruct",
         "headers": {
            "Authorization": "Bearer WORKERS_AI_TOKEN",
            "Content-Type": "application/json"
         },
         "query": {
            "stream": true,
            "prompt": "tell me a joke"
         }
      }
   }));
});


ws.on("message", function incoming(message) {
   console.log(message.toString())
});
</code></pre>
            <p>Here’s example code using the built-in browser WebSocket client:</p>
            <pre><code>const socket = new WebSocket("wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/", [
   `cf-aig-authorization.${AI_GATEWAY_TOKEN}`
]);

socket.addEventListener("open", (event) =&gt; {
  console.log("Connected to server.");
   socket.send(JSON.stringify({
      type: "universal.create",
      request: {
         "provider": "workers-ai",
         "endpoint": "@cf/meta/llama-3.1-8b-instruct",
         "headers": {
            "Authorization": "Bearer WORKERS_AI_TOKEN",
            "Content-Type": "application/json"
         },
         "query": {
            "stream": true,
            "prompt": "tell me a joke"
         }
      }
   }));
});

socket.addEventListener("message", (event) =&gt; {
  console.log(event.data);
});
</code></pre>
            
    <div>
      <h3>And we will DO it again</h3>
      <a href="#and-we-will-do-it-again">
        
      </a>
    </div>
    <p>In Q1 2025, we plan to support WebSocket-to-WebSocket connections (using DOs), allowing you to connect to OpenAI's new real-time API directly through our platform. In the meantime, you can deploy <a href="https://github.com/cloudflare/openai-workers-relay"><u>this Worker</u></a> in your account to proxy the requests yourself.</p><p>If you have any questions, reach out on our <a href="http://discord.cloudflare.com/"><u>Discord channel</u></a>. We’re also hiring for AI Gateway: check out <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Emerging+Technology+and+Incubation&amp;location=Lisbon%2C+Portugal"><u>Cloudflare Jobs in Lisbon</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[JavaScript]]></category>
            <guid isPermaLink="false">2b8uznXSknoVGwTIcxxmKp</guid>
            <dc:creator>Catarina Pires Mota</dc:creator>
            <dc:creator>Gabriel Massadas</dc:creator>
        </item>
        <item>
            <title><![CDATA[Billions and billions (of logs): scaling AI Gateway with the Cloudflare Developer Platform]]></title>
            <link>https://blog.cloudflare.com/billions-and-billions-of-logs-scaling-ai-gateway-with-the-cloudflare/</link>
            <pubDate>Thu, 24 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ How we scaled AI Gateway to handle and store billions of requests, using Cloudflare Workers, D1, Durable Objects, and R2. ]]></description>
            <content:encoded><![CDATA[ <p>With the rapid advancements occurring in the AI space, developers face significant challenges in keeping up with the ever-changing landscape. New models and providers are continuously emerging, and understandably, developers want to experiment and test these options to find the best fit for their use cases. This creates the need for a streamlined approach to managing multiple models and providers, as well as a centralized platform to efficiently monitor usage, implement controls, and gather data for optimization.</p><p><a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> is specifically designed to address these pain points. Since its launch in <a href="https://blog.cloudflare.com/announcing-ai-gateway"><u>September 2023</u></a>, AI Gateway has empowered developers and organizations by successfully proxying over 2 billion requests in just one year, as we <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/#optimizing-ai-workflows-with-ai-gateway"><u>highlighted during September’s Birthday Week</u></a>. With AI Gateway, developers can easily store, analyze, and optimize their AI <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>inference</u></a> requests and responses in real time.</p><p>With our initial architecture, AI Gateway faced a significant challenge: the logs, those critical trails of data interactions between applications and AI models, could only be retained for 30 minutes. This limitation was not just a minor inconvenience; it posed a substantial barrier for developers and businesses needing to analyze long-term patterns, ensure compliance, or simply debug over more extended periods.</p><p>In this post, we'll explore the technical challenges and strategic decisions behind extending our log storage capabilities from 30 minutes to indefinite retention of billions of logs. We'll discuss the challenges of scale, the intricacies of data management, and how we've engineered a system that not only meets the demands of today, but is also scalable for the future of AI development.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>AI Gateway is built on <a href="https://workers.cloudflare.com"><u>Cloudflare Workers</u></a>, a serverless platform that runs on the Cloudflare network, allowing developers to write small JavaScript functions that can execute at the point of need, near the user, on Cloudflare's vast network of data centers, without worrying about platform scalability.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jV3iKCN771ixU21Hixfpz/18086a52cfe05cd20f1c94bbba21e293/_BLOG-2593_2.png" />
          </figure><p>Our customers use multiple providers and models and are always looking to optimize the way they do inference. And, of course, in order to evaluate their prompts, performance, cost, and to troubleshoot what’s going on, AI Gateway’s customers need to store requests and responses. New requests show up within 15 seconds and customers can check a request’s cost, duration, number of tokens, and provide their feedback (thumbs up or down).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RBqZXnLJNCaQPbtbzjQmj/70aa2598f9b9294b67db8cd5712a6345/_BLOG-2593_3.png" />
          </figure><p>This setup scales: an account can have multiple gateways, and each gateway has its own settings. In our first implementation, a backend worker was responsible for storing real-time logs and for other background tasks. However, in the rapidly evolving domain of artificial intelligence, where real-time data is as precious as the insights it provides, <a href="https://www.cloudflare.com/learning/performance/log-retention-best-practices/">managing log data efficiently</a> becomes paramount. We recognized that to truly empower our users, we needed to offer a solution where logs weren't just transient records but could be stored permanently. Permanent log storage means developers can now track the performance, security, and operational insights of their AI applications over time, enabling not only immediate troubleshooting but also longitudinal studies of AI behavior, usage trends, and system health.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1TcC1ZdyNzT0xwFwme2oBt/a9202691a0a983fa3eafdf6c0ee92f2c/_BLOG-2593_4.png" />
          </figure><p>The diagram above describes our old architecture, which could only store 30 minutes of data.</p><p>Tracing the path of a request through the AI Gateway, as depicted in the sequence above:</p><ol><li><p>A developer sends a new inference request, which is first received by our Gateway Worker.</p></li><li><p>The Gateway Worker then performs several checks: it looks for cached results, enforces rate limits, and verifies any other configurations set by the user for their gateway. Provided all conditions are met, it forwards the request to the selected inference provider (in this diagram, OpenAI).</p></li><li><p>The inference provider processes the request and sends back the response.</p></li><li><p>Simultaneously, as the response is relayed back to the developer, the request and response details are also dispatched to our Backend Worker. This worker's role is to manage and store the log of this transaction.</p></li></ol>
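The four numbered steps can be modeled as a short sketch. This is an illustrative simplification, not Cloudflare's actual implementation: `GatewayConfig`, `callProvider`, and `handleInference` are hypothetical names, and the cache, rate-limit state, and log store are plain in-memory stand-ins.

```typescript
// Simplified model of the Gateway Worker request path described above.
// All names are illustrative, not Cloudflare's actual implementation.

type InferenceRequest = { prompt: string };
type InferenceResponse = { ok: boolean; body: string };

interface GatewayConfig {
  cache: Map<string, InferenceResponse>; // stand-in for the gateway cache
  rateLimitRemaining: number;            // stand-in for rate-limit state
  logs: Array<{ req: InferenceRequest; res: InferenceResponse }>;
}

// Stand-in for the upstream inference provider (e.g. OpenAI).
async function callProvider(req: InferenceRequest): Promise<InferenceResponse> {
  return { ok: true, body: `echo: ${req.prompt}` };
}

async function handleInference(
  cfg: GatewayConfig,
  req: InferenceRequest,
): Promise<InferenceResponse> {
  // 2a. Serve from cache when a cached result exists.
  const cached = cfg.cache.get(req.prompt);
  if (cached) return cached;

  // 2b. Enforce the gateway's rate limit.
  if (cfg.rateLimitRemaining <= 0) return { ok: false, body: "rate limited" };
  cfg.rateLimitRemaining--;

  // 3. Forward the request to the selected provider.
  const res = await callProvider(req);

  // 4. As the response is relayed back, dispatch the log for storage
  //    (in the real system this is the Backend Worker's job).
  cfg.logs.push({ req, res });
  return res;
}
```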
    <div>
      <h2>The challenge: Store two billion logs</h2>
      <a href="#the-challenge-store-two-billion-logs">
        
      </a>
    </div>
    
    <div>
      <h3>First step: real-time logs</h3>
      <a href="#first-step-real-time-logs">
        
      </a>
    </div>
    <p>Initially, the AI Gateway project stored both request metadata and the actual request bodies in a <a href="https://developers.cloudflare.com/d1/"><u>D1 database</u></a>. This approach facilitated rapid development in the project's infancy. However, as customer engagement grew, the <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1 database</a> began to fill at an accelerating rate, eventually retaining logs for only 30 minutes at a time.</p><p>To mitigate this, we first optimized the database schema, which extended the log retention to one hour. However, we soon encountered diminishing returns due to the sheer volume of byte data from the request bodies. Post-launch, it became clear that a more scalable solution was necessary. We decided to migrate the request bodies to R2 storage, significantly alleviating the data load on D1. This adjustment allowed us to incrementally extend log retention to 24 hours.</p><p>Consequently, D1 functioned primarily as a log index, enabling users to search and filter logs efficiently. When users needed to view details or download a log, these actions were seamlessly proxied through to R2.</p><p>This dual-system approach provided us with the breathing room to contemplate and develop more sophisticated storage solutions for the future.</p>
    <div>
      <h3>Second step: persistent logs and Durable Object transactional storage</h3>
      <a href="#second-step-persistent-logs-and-durable-object-transactional-storage">
        
      </a>
    </div>
    <p>As our traffic surged, we encountered a growing number of requests from customers wanting to access and compare older logs.</p><p>Upon learning that the Durable Objects team was seeking beta testers for their new <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>Durable Objects with SQLite</u></a>, we eagerly signed up.</p><p>Originally, we considered Durable Objects as the ideal solution for expanding our log storage capacity, which required us to shard the logs by a unique string. Initially, this string was the account ID, but during a mid-development load test, we hit a cap at 10 million logs per Durable Object. This limitation meant that each account could only support up to this number of logs.</p><p>Given our commitment to the DO migration, we saw an opportunity rather than a constraint. To overcome the 10 million log limit per account, we refined our approach to shard by both account ID and gateway name. This adjustment effectively raised the storage ceiling from 10 million logs per account to 10 million per gateway. With the default setting allowing each account up to 10 gateways, the potential storage for each account skyrocketed to 100 million logs.</p><p>This strategic pivot not only enabled us to store a significantly larger number of logs, but also enhanced our flexibility in gateway management. Now, when a gateway is deleted, we can simply remove the corresponding Durable Object.</p><p>Additionally, this sharding method isolates high-volume request scenarios. If one customer's heavy usage slows down log insertion, it only impacts their specific Durable Object, thereby preserving performance for other customers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Q6degDA3V02dZFVugW2LO/ae121890a3d4493e5c01459c477f32d9/_BLOG-2593_5.png" />
          </figure><p>Taking a glance at the revised architecture diagram, we replaced the Backend Worker with our newly integrated Durable Object. The rest of the request flow remains unchanged, including the concurrent response to the user and the interaction with the Durable Object, which occurs in the fourth step.</p><p>Leveraging Cloudflare’s network, our Gateway Worker operates near the user's location, which in turn positions the user's Durable Object close by. This proximity significantly enhances the speed of log insertion and query operations.</p>
    <div>
      <h3>Third step: managing thousands of Durable Objects</h3>
      <a href="#third-step-managing-thousands-of-durable-objects">
        
      </a>
    </div>
    <p>As the number of users and requests on AI Gateway grows, managing each unique Durable Object (DO) becomes increasingly complex. New customers join continuously, and we needed an efficient method to track each DO, ensure users stay within their 10 gateway limit, and manage the storage capacity for free users.</p><p>To address these challenges, we introduced another layer of control with a new Durable Object we've named the Account Manager. The primary function of the Account Manager is straightforward yet crucial: it keeps user activities in check.</p><p>Here's how it works: before any Gateway commits a new log to permanent storage, it consults the Account Manager. This check determines whether the gateway is allowed to insert the log based on the user's current usage and entitlements. The Account Manager uses its own SQLite database to verify the total number of rows a user has and their service level. If all checks pass, it signals the Gateway that the log can be inserted. It was paramount to guarantee that this entire validation process occurred in the background, ensuring that the user experience remains seamless and uninterrupted.</p><p>The Account Manager stays updated by periodically receiving data from each Gateway’s Durable Object. Specifically, after every 1000 inference requests, the Gateway sends an update on its total rows to the Account Manager, which then updates its local records. This system ensures that the Account Manager has the most current data when making its decisions.</p><p>Additionally, the Account Manager is responsible for monitoring customer entitlements. It tracks whether an account is on a free or paid plan, how many gateways a user is permitted to create, and the log storage capacity allocated to each gateway. </p><p>Through these mechanisms, the Account Manager not only helps in maintaining system integrity but also ensures fair usage across all users of AI Gateway.</p>
    <div>
      <h2>AI evaluations and Durable Objects sharding</h2>
      <a href="#ai-evaluations-and-durable-objects-sharding">
        
      </a>
    </div>
    <p>As we work towards fully automatic evaluations that will, in the future, use Large Language Models (LLMs), we are taking the first step towards this goal by launching the open beta phase of comprehensive <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/#optimizing-ai-workflows-with-ai-gateway"><u>AI evaluations</u></a>, centered on Human-in-the-Loop feedback.</p><p>This feature empowers users to create bespoke datasets from their application logs, enabling them to score and evaluate the performance, speed, and cost-effectiveness of their models. The primary focus is on LLMs and automated scoring, providing developers with objective, data-driven insights to refine their models.</p><p>To do this, developers require a reliable logging mechanism that persists logs from multiple gateways, storing up to 100 million logs in total (10 million logs per gateway, across 10 gateways). This represents a significant volume of data, as each request made through the AI Gateway generates a log entry, with some log entries potentially exceeding 50 MB in size.</p><p>This necessity leads us to work on the expansion of log storage capabilities. Since log storage is limited to 10 million logs per gateway, in future iterations, we aim to scale this capacity by implementing sharded Durable Objects (DO), allowing multiple Durable Objects per gateway to handle and store logs. This scaling strategy will enable us to store significantly larger volumes of logs, providing richer data for evaluations (using LLMs as a judge or from user input), all through AI Gateway.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FLy2JEfvGFo8P7PCVBZYT/a4d6367341e9fc224dedaad3aa0f02e2/_BLOG-2593_6.png" />
          </figure>
    <div>
      <h2>Coming Soon</h2>
      <a href="#coming-soon">
        
      </a>
    </div>
    <p>We are working on improving our existing <a href="https://developers.cloudflare.com/ai-gateway/providers/universal/"><u>Universal Endpoint</u></a>, the next step towards an enhanced solution that builds on existing fallback mechanisms to offer greater resilience, flexibility, and intelligence in request management.</p><p>Currently, when a provider encounters an error or is unavailable, our system <a href="https://developers.cloudflare.com/ai-gateway/configuration/fallbacks/"><u>falls back</u></a> to an alternative provider to ensure continuity. The improved Universal Endpoint takes this a step further by introducing automatic retry capabilities, allowing failed requests to be reattempted before fallback is triggered. This significantly improves reliability by handling transient errors and increasing the likelihood of successful request fulfillment. It will look something like this:</p>
            <pre><code>curl --location 'https://aig.example.com/' \
--header 'CF-AIG-TOKEN: Bearer XXXX' \
--header 'Content-Type: application/json' \
--data-raw '[
    {
        "id": "0001",
        "provider": "openai",
        "endpoint": "chat/completions",
        "headers": {
            "Authorization": "Bearer XXXX",
            "Content-Type": "application/json"
        },
        "query": {
            "model": "gpt-3.5-turbo",
            "messages": [
                {
                    "role": "user",
                    "content": "generate a prompt to create cloudflare random images"
                }
            ]
        },
        "option": {
            "retry": 2,
            "delay": 200,
            "onComplete": {
                "provider": "workers-ai",
                "endpoint": "@cf/stabilityai/stable-diffusion-xl-base-1.0",
                "headers": {
                    "Authorization": "Bearer XXXXXX",
                    "Content-Type": "application/json"
                },
                "query": {
                    "messages": [
                        {
                            "role": "user",
                            "content": "&lt;prompt-response id='\''0001'\'' /&gt;"
                        }
                    ]
                }
            }
        }
    },
    {
        "provider": "workers-ai",
        "endpoint": "@cf/stabilityai/stable-diffusion-xl-base-1.0",
        "headers": {
            "Authorization": "Bearer XXXXXX",
            "Content-Type": "application/json"
        },
        "query": {
            "messages": [
                {
                    "role": "user",
                    "content": "create an image of a missing cat"
                }
            ]
        }
    }
]'</code></pre>
            <p>The request to the improved Universal Endpoint system demonstrates how it handles multiple providers with integrated retry mechanisms and fallback logic. In this example, the first request is sent to a provider like OpenAI, asking it to generate a text-to-image prompt. The “retry” option ensures that transient issues don’t result in immediate failure.</p><p>The system’s ability to seamlessly switch between providers while applying retry strategies ensures higher reliability and robustness in managing requests. By leveraging fallback logic, the improved Universal Endpoint can dynamically adapt to provider failures, ensuring that tasks are completed successfully even in complex, multi-step workflows.</p><p>In addition to retry logic, we will have the ability to inspect requests and responses and make dynamic decisions based on the content of the result. This enables developers to create conditional workflows where the system can adapt its behavior depending on the nature of the response, creating a highly flexible and intelligent decision-making process.</p><p>If you haven’t yet used AI Gateway, check out our <a href="https://developers.cloudflare.com/ai-gateway/"><u>developer documentation</u></a> on how to get started. If you have any questions, reach out on our <a href="http://discord.cloudflare.com/"><u>Discord channel</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">2LUyKREpCJjJ5qGqwZyoAx</guid>
            <dc:creator>Catarina Pires Mota</dc:creator>
            <dc:creator>Gabriel Massadas</dc:creator>
            <dc:creator>Nelson Duarte</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built it: the technology behind Cloudflare Radar 2.0]]></title>
            <link>https://blog.cloudflare.com/technology-behind-radar2/</link>
            <pubDate>Thu, 17 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Radar 2.0 was launched last month during Cloudflare's Birthday Week as a complete product revamp. This blog explains how we built it technically. Hopefully, it will inspire other developers to build complex web apps using Cloudflare products. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Tbyq3gfFHHRXwc4Uny8RH/a7d2558532a5b33ce1ffa285c950afb2/image11-1.png" />
            
            </figure><p><a href="/radar2/">Radar 2.0</a> was built on the learnings of Radar 1.0 and was launched last month during Cloudflare's Birthday Week as a complete product revamp. We wanted to make it easier for our users to find insights and navigate our data, and overall provide a better and faster user experience.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34b5YuwtsM5h8WwqSABuXw/dc314ea8e1a3a6b8db68bae7010e64ed/image16.png" />
            
            </figure><p>We're building a <a href="/welcome-to-the-supercloud-and-developer-week-2022/">Supercloud</a>. Cloudflare's products now include hundreds of features in networking, security, access controls, computing, storage, and more.</p><p>This blog will explain how we built the new Radar from an engineering perspective. We wanted to do this to demonstrate that anyone could build a somewhat complex website that involves demanding requirements and multiple architectural layers, do it on top of our stack, and how easy it can be.</p><p>Hopefully, this will inspire other developers to switch from traditional software architectures and build their applications using modern, more efficient technologies.</p>
    <div>
      <h2>High level architecture</h2>
      <a href="#high-level-architecture">
        
      </a>
    </div>
    <p>The following diagram is a bird's-eye view of the Radar 2.0 architecture. As you can see, it's divided into three main layers:</p><ul><li><p>The Core layer is where we keep our data lake, data exploration tools, and backend API.</p></li><li><p>The Cloudflare network layer is where we host and run Radar and serve the public APIs.</p></li><li><p>The Client layer is essentially everything else that runs in your browser. We call it the Radar Web app.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7eBe8tSbh0Uocpgw1abbZ8/5b30a247f945240f2f9254f16c0a021a/image3-31.png" />
            
            </figure><p>As you can see, there are Cloudflare products <i>everywhere</i>. They provide the foundational resources to host and securely run our code at scale, but also other building blocks necessary to run the application end to end.</p><p>By having these features readily available and tightly integrated into our ecosystem and tools, at the distance of a click and a few lines of code, engineering teams don't have to reinvent the wheel constantly and can use their time on what is essential: their app logic.</p><p>Let's dig in.</p>
    <div>
      <h2>Cloudflare Pages</h2>
      <a href="#cloudflare-pages">
        
      </a>
    </div>
    <p>Radar 2.0 is deployed using <a href="https://pages.cloudflare.com/">Cloudflare Pages</a>, our <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">developer-focused website hosting platform</a>. In the early days, you could only host static assets on Pages, which was helpful for many use cases, including integrating with static site generators like <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-hugo-site/">Hugo</a>, <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-jekyll-site/">Jekyll</a>, or <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-gatsby-site/">Gatsby</a>. Still, it wouldn't solve situations where your application needs some sort of server-side computing or advanced logic using a single deployment.</p><p>Luckily Pages recently added support to run custom Workers scripts. With <a href="https://developers.cloudflare.com/pages/platform/functions/">Functions</a>, you can now run server-side code and enable any kind of dynamic functionality you'd typically implement using a separate Worker.</p><p>Cloudflare Pages Functions also allow you to use <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects/">Durable Objects</a>, <a href="https://developers.cloudflare.com/workers/runtime-apis/kv/">KV</a>, <a href="https://developers.cloudflare.com/r2/">R2</a>, or <a href="https://developers.cloudflare.com/d1">D1</a>, just like a regular Worker would. We provide <a href="https://developers.cloudflare.com/pages/platform/functions/">excellent documentation</a> on how to do this and more in our Developer Documentation. 
Furthermore, the team wrote a blog on <a href="/building-full-stack-with-pages/">how to build a full-stack application</a> that describes all the steps in detail.</p><p>Radar 2.0 needs server-side functions for two reasons:</p><ul><li><p>To render Radar and run the server side of Remix.</p></li><li><p>To implement and serve our frontend API.</p></li></ul>
    <div>
      <h2>Remix and Server-side Rendering</h2>
      <a href="#remix-and-server-side-rendering">
        
      </a>
    </div>
    <p>We use Remix with Cloudflare Pages on Radar 2.0.</p><p><a href="https://remix.run/">Remix</a> follows a server/client model and works under the premise that you can't control the user's network, so web apps must reduce the amount of Javascript, CSS, and JSON they send through the wire. To do this, they move some of the logic to the server.</p><p>In this case, the client browser will get pre-rendered DOM components and the result of pre-fetched API calls with just the right amount of JSON, Javascript, and CSS code, rightfully adjusted to the UI needs. Here’s the <a href="https://remix.run/docs/en/v1/pages/technical-explanation">technical explanation</a> with more detail.</p><p>Typically, Remix would need a Node.js server to do all of this, but guess what: <a href="https://developers.cloudflare.com/pages/framework-guides/remix/">It can also run</a> on Cloudflare Workers and Pages.</p><p>Here’s the code to get the Remix server running on Workers, using Cloudflare Pages:</p>
            <pre><code>import { createPagesFunctionHandler } from "@remix-run/cloudflare-pages";
import * as build from "@remix-run/dev/server-build";

const handleRequest = createPagesFunctionHandler({
  build: {
    ...build,
    publicPath: "/build/",
    assetsBuildDirectory: "public/build",
  },
  mode: process.env.NODE_ENV,
  getLoadContext: (context) =&gt; ({
    ...context.env,
    CF: (context.request as any).cf as IncomingRequestCfProperties | undefined,
  }),
});

const handler: ExportedHandler&lt;Env&gt; = {
  fetch: async (req, env, ctx) =&gt; {
    const r = new Request(req);
    return handleRequest({
      env,
      params: {},
      request: r,
      waitUntil: ctx.waitUntil,
      next: () =&gt; {
        throw new Error("next() called in Worker");
      },
      functionPath: "",
      data: undefined,
    });
  },
};</code></pre>
            <p>In Remix, <a href="https://remix.run/docs/en/v1/guides/api-routes">routes</a> handle changes when a user interacts with the app and changes it (clicking on a menu option, for example). A Remix route can have a <a href="https://remix.run/docs/en/v1/guides/data-loading"><i>loader</i></a>, an <a href="https://remix.run/docs/en/v1/guides/data-writes"><i>action</i></a> and a <a href="https://remix.run/docs/en/v1/api/conventions#root-layout-route"><i>default</i></a> export. The <i>loader</i> handles API calls for fetching data (GET method). The <i>action</i> handles submissions to the server (POST, PUT, PATCH, DELETE methods) and returns the response. The <i>default</i> export handles the UI code in React that’s returned for that route. A route without a <i>default</i> export returns only data.</p><p>Because Remix runs both on the server and the client, it can get smart and know what can be pre-fetched and computed server-side and what must go through the network connection, optimizing everything for performance and responsiveness.</p><p>Here’s an example of a Radar route, simplified for readability, for the <a href="https://radar.cloudflare.com/outage-center">Outage Center</a> page.</p>
            <pre><code>import type { MetaFunction } from "@remix-run/cloudflare";
import { useLoaderData } from "@remix-run/react";
import { type LoaderArgs } from "@remix-run/server-runtime";

export async function loader(args: LoaderArgs) {
  const ssr = await initialFetch(SSR_CHARTS, args);
  return { ssr, };
}

export default function Outages() {
  const { ssr } = useLoaderData&lt;typeof loader&gt;();

  return (
    &lt;Page
      filters={["timerange"]}
      title={
        &lt;&gt;
          &lt;Svg use="icon-outages" /&gt;
          {t("nav.main.outage-center")}
        &lt;/&gt;
      }
    &gt;
      &lt;Grid columns={[1, 1, 1, 1]}&gt;
        &lt;Card.Article colspan={[1, 1, 1, 1]} rowspan={[1, 1, 1, 1]}&gt;
          &lt;Card.Section&gt;
            &lt;Components.InternetOutagesChoropleth ssr={ssr} /&gt;
          &lt;/Card.Section&gt;
          &lt;Divider /&gt;
          &lt;Card.Section&gt;
            &lt;Components.InternetOutagesTable ssr={ssr} /&gt;
          &lt;/Card.Section&gt;
        &lt;/Card.Article&gt;
      &lt;/Grid&gt;
    &lt;/Page&gt;
  );
}</code></pre>
            <p>And here’s what it produces:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wmqtv1VV0kSOU3UTlwXiz/fcbaf883e8f975e679069737e6750251/image18.png" />
            
            </figure><p>Remix and SSR can also help you with your <a href="https://developer.chrome.com/docs/lighthouse/overview/">Lighthouse</a> scores and SEO. It can drastically improve metrics like <a href="https://web.dev/cls/">Cumulative Layout Shift</a>, <a href="https://web.dev/fcp/">First Contentful Paint</a> and <a href="https://web.dev/lcp/">Largest Contentful Paint</a> by reducing the number of fetches and information traveling from the server to the browser and pre-rendering the DOM.</p><p>Another project porting their app to Remix is <a href="https://cloudflare.tv/">Cloudflare TV</a>. This is how their metrics looked before and after the changes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oR1Rqf8Mp1fYJRpQMBrgE/aa49405daef020536bfe82c22e42c5d1/image12.png" />
            
            </figure><p>Radar’s Desktop Lighthouse score is now nearly 100% on Performance, Accessibility, Best Practices, and SEO.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3fe4weO7U21a7ZNGW969su/e5c4250fe87be0cfa8bd6f879a721e03/image14.png" />
            
            </figure><p>Another Cloudflare product that we use extensively on Radar 2.0 is <a href="https://www.cloudflare.com/website-optimization/">Speed</a>. In particular, we want to mention the <a href="/early-hints/">Early Hints</a> feature. Early Hints is a new web <a href="https://developer.mozilla.org/docs/Web/HTTP/Status/103">standard</a> that defines a new HTTP 103 status code the server can use to inform the browser which assets will likely be needed to render the web page while it's still being requested, resulting in dramatic load time improvements.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OSrsfjFevZ4qx1TT4jYWb/a012f0b129b260e0f75a62742e9d9df4/image2-42.png" />
            
            </figure><p>You can use <a href="/early-hints-on-cloudflare-pages/">Cloudflare Pages with Early Hints</a>.</p>
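An Early Hints exchange looks roughly like this (the asset paths are made up for illustration): the server emits an interim 103 response with preload hints, then the final 200 response once the page is ready.

```
HTTP/1.1 103 Early Hints
Link: </build/app.css>; rel=preload; as=style
Link: </build/app.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html
...
```

The browser can start fetching the hinted assets while the server is still generating the final response.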
    <div>
      <h2>APIs</h2>
      <a href="#apis">
        
      </a>
    </div>
    <p>Radar has two APIs: the backend, which has direct access to our data sources, and the frontend, which is available on the Internet.</p>
    <div>
      <h3>Backend API</h3>
      <a href="#backend-api">
        
      </a>
    </div>
    <p>The backend API was written using <a href="https://www.python.org/">Python</a>, <a href="https://pandas.pydata.org/">Pandas</a> and <a href="https://fastapi.tiangolo.com/">FastAPI</a> and is protected by <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/">Cloudflare Access</a>, <a href="https://developers.cloudflare.com/cloudflare-one/identity/authorization-cookie/validating-json/">JWT tokens</a> and an <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/set-up/">authenticated origin pull</a> (AOP) configuration. Using Python allows anyone on the team, engineers or data scientists, to collaborate easily and contribute to improving and expanding the API, which is great. Our data science team uses <a href="https://jupyter.org/hub">JupyterHub</a> and <a href="https://docs.jupyter.org/en/latest/start/index.html">Jupyter Notebooks</a> as part of their data exploration workflows, which makes prototyping and reusing code, algorithms and models particularly easy and fast.</p><p>It then serves data to the frontend API via a <a href="https://strawberry.rocks/">Strawberry</a>-based GraphQL server. Using <a href="https://graphql.org/">GraphQL</a> makes it easy to create complex queries, giving internal users and analysts the flexibility they need when building reports from our vast collection of data.</p>
    <div>
      <h3>Frontend API</h3>
      <a href="#frontend-api">
        
      </a>
    </div>
    <p>We built Radar's frontend API on top of Cloudflare <a href="https://developers.cloudflare.com/workers/">Workers</a>. This worker has two main functions:</p><ul><li><p>It fetches data from the backend API using GraphQL, and then transforms it.</p></li><li><p>It provides a public <a href="https://developers.cloudflare.com/radar">REST API</a> that anyone can use, including Radar.</p></li></ul><p>Using a worker in front of our core API allows us to easily add and separate microservices, and also adds notable features like:</p><ul><li><p>Cloudflare's <a href="https://developers.cloudflare.com/workers/runtime-apis/cache/">Cache API</a> allows finer control over what to cache and for how long and supports POST requests and customizable cache control headers, which we use.</p></li><li><p>Stale responses using <a href="https://developers.cloudflare.com/r2/">R2</a>. When the backend API cannot serve a request for some reason, and there’s a stale response cached, it’ll be served directly from R2, giving end users a better experience.</p></li><li><p><a href="https://en.wikipedia.org/wiki/Comma-separated_values">CSV</a> and <a href="https://en.wikipedia.org/wiki/JSON">JSON</a> output formats. The CSV format is convenient and makes it easier for data scientists, analysts, and others to use the API and consume our API data directly from other tools.</p></li></ul>
    <div>
      <h3>Open sourcing our OpenAPI 3 schema generator and validator</h3>
      <a href="#open-sourcing-our-openapi-3-schema-generator-and-validator">
        
      </a>
    </div>
    <p>One last feature on the frontend API is <a href="https://spec.openapis.org/oas/latest.html">OpenAPI 3</a> support. We automatically generate an OpenAPI schema and validate user input with it. This is done through a custom library that we built on top of <a href="https://github.com/kwhitley/itty-router">itty-router</a>, which we also use for routing. Today we’re open sourcing this work.</p><p><a href="https://github.com/cloudflare/itty-router-openapi">itty-router-openapi</a> provides an easy and compact OpenAPI 3 schema generator and validator for Cloudflare Workers. Check our <a href="https://github.com/cloudflare/itty-router-openapi">GitHub repository</a> for more information and details on how to use it.</p>
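<p>The idea behind a schema-first router can be shown with a toy sketch (this is an illustration of the concept, not itty-router-openapi's actual API; see the repository for real usage): each route declares its parameters once, and the OpenAPI 3 document is derived from those declarations, which can then also drive input validation:</p>

```typescript
// Toy sketch of schema generation from route declarations. The RouteDef
// shape here is invented for illustration; itty-router-openapi's real API
// is documented in its GitHub repository.
type ParamType = "integer" | "string";

interface RouteDef {
  method: "get" | "post";
  path: string;
  summary: string;
  query: Record<string, ParamType>;
}

export function buildOpenApiSchema(routes: RouteDef[]) {
  const paths: Record<string, any> = {};
  for (const route of routes) {
    paths[route.path] = {
      [route.method]: {
        summary: route.summary,
        // Each declared query parameter becomes an OpenAPI parameter object.
        parameters: Object.entries(route.query).map(([name, type]) => ({
          name,
          in: "query",
          schema: { type },
        })),
      },
    };
  }
  return { openapi: "3.0.0", info: { title: "API", version: "1.0" }, paths };
}
```

The payoff of this approach is that documentation and validation can never drift apart, because both come from the same declaration.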
    <div>
      <h3>Developer’s Documentation</h3>
      <a href="#developers-documentation">
        
      </a>
    </div>
    <p>Today we’re also launching our developer’s <a href="https://developers.cloudflare.com/radar">documentation pages for the Radar API</a> where you can find more information about our data license, basic concepts, how to get started and the available API methods. Cloudflare Radar's API is free, allowing academics, data sleuths and other web enthusiasts to investigate Internet usage across the globe, based on data from our global network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GbwacvF3M7mVj6kovY0x3/c9e8c3f3b7d2ef7b364f6740bfec2760/image6-7.png" />
            
            </figure><p>To facilitate using our API, we also put together a <a href="https://colab.research.google.com/github/cloudflare/radar-notebooks/blob/main/notebooks/example.ipynb">Colab Notebook template</a> that you can play with, copy and expand to your use case.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5IQocyWiVv1U9ecurP6AOg/1f6336488129f9f3ae6e804d15526d39/image7-4.png" />
            
            </figure>
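<p>Calling the Radar REST API from code is plain HTTPS with a bearer token. The helper and the path below are illustrative sketches; check the developer documentation for the actual endpoints and parameters:</p>

```typescript
// Builds the URL and headers for a Radar REST API call. The example path
// used below is a placeholder; real endpoints are listed in the docs at
// developers.cloudflare.com/radar.
export function radarRequest(
  path: string,
  params: Record<string, string>,
  token: string
): { url: string; headers: Record<string, string> } {
  const url = new URL("https://api.cloudflare.com/client/v4/radar" + path);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return {
    url: url.toString(),
    headers: { Authorization: "Bearer " + token },
  };
}

// Usage: const res = await fetch(req.url, { headers: req.headers });
```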
    <div>
      <h2>The Radar App</h2>
      <a href="#the-radar-app">
        
      </a>
    </div>
    <p>The Radar App is the code that runs in your browser. We've talked about Remix, but what else do we use?</p><p>Radar relies on a lot of <b>data visualizations</b>. Things like charts and maps are essential to us. We decided to build our reusable library of visualization components on top of three libraries: <a href="https://airbnb.io/visx/">visx</a>, a "collection of expressive, low-level visualization primitives for React," <a href="https://d3js.org/">D3</a>, a powerful JavaScript library for manipulating the DOM based on data, and <a href="https://maplibre.org/">MapLibre</a>, an open-source map visualization stack.</p><p>Here’s one of our visualization components in action. We call it the “PewPew map”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ENMTCsc23Vq5Of80vVqBP/fd8b4bab134c536c73f34c9ced3db670/image5-12.png" />
            
            </figure><p>And here’s the Remix React code for it, whenever we need to use it in a page:</p>
            <pre><code>&lt;Card.Section
    title={t("card.attacks.title")}
    description={t("card.attacks.description")}
  &gt;
    &lt;Flex gap={spacing.medium} align="center" justify="flex-end"&gt;
      &lt;SegmentedControl
        label="Sort order:"
        name="attacksDirection"
        value={attacksDirection}
        options={[
          { label: t("common.source"), value: "ORIGIN" },
          { label: t("common.target"), value: "TARGET" },
        ]}
        onChange={({ target }: any) =&gt; setAttacksDirection(target.value)}
      /&gt;
    &lt;/Flex&gt;

    &lt;Components.AttacksCombinedChart
      ssr={ssr}
      height={400}
      direction={attacksDirection}
    /&gt;
  &lt;/Card.Section&gt;</code></pre>
            
    <div>
      <h3>SVGs</h3>
      <a href="#svgs">
        
      </a>
    </div>
    <p>Another change we made to Radar was switching our images and graphical assets to <a href="https://en.wikipedia.org/wiki/Scalable_Vector_Graphics">Scalable Vector Graphics</a>. SVGs are great because they're essentially a declarative graphics language. They're XML text files with vectorial information. As such, they can be easily manipulated, transformed, stored, or indexed, and of course, they can be rendered at any size, producing beautiful, crisp results on any device and resolution.</p><p>SVGs are also extremely small compared to bitmap formats and support <a href="https://www.w3.org/TR/SVGTiny12/i18n.html">internationalization</a>, making them easier to translate into other languages (localization) and improving <a href="https://www.a11yproject.com/">accessibility</a>.</p><p>Here’s a Radar Bubble Chart inspected in the browser, where you can see the SVG code and the embedded strings.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PJgCboffVUKXDjtRTPFov/b07a3c91bc2e3ad91bbfb150ace899bc/image17.png" />
            
            </figure>
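<p>Because an SVG is just declarative XML text, generating one can be as simple as string templating. A hypothetical bubble-chart helper, with each label kept as text so it stays searchable and translatable:</p>

```typescript
// Hypothetical helper illustrating SVG as text: each bubble becomes a
// <circle> element, and the output is an ordinary string that can be
// stored, indexed, or rendered at any size.
export interface Bubble {
  x: number;
  y: number;
  r: number;
  label: string;
}

export function bubbleChartSvg(bubbles: Bubble[], width = 200, height = 100): string {
  const circles = bubbles
    .map(
      (b) =>
        `<circle cx="${b.x}" cy="${b.y}" r="${b.r}"><title>${b.label}</title></circle>`
    )
    .join("");
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">${circles}</svg>`;
}
```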
    <div>
      <h3>Cosmos</h3>
      <a href="#cosmos">
        
      </a>
    </div>
    <p><a href="https://reactcosmos.org/">React Cosmos</a> is a "sandbox for developing and testing UI components in isolation." We wanted to use Cosmos with Radar 2.0 because it's the perfect project for it:</p><ol><li><p>It has a lot of visual components; some are complex and have many configuration options and features.</p></li><li><p>The components are highly reusable across multiple pages in different contexts with different data.</p></li><li><p>We have a multidisciplinary team; everyone can send a pull request and add or change code in the frontend.</p></li></ol><p>Cosmos acts as a component library where you can see our palette of ready-to-use visualizations and widgets, from simple buttons to complex charts, and you can play with their options in real time and see what happens. Anyone can do it, not only designers or engineers but also other project stakeholders. This improves team communication and makes contributing and iterating quick.</p><p>Here’s a screenshot of our Cosmos in action:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LBKvrUOyalqcEkuie59dB/c571741695acf093204103d94ce6ebd5/image1-57.png" />
            
            </figure>
    <div>
      <h2>Continuous integration and development</h2>
      <a href="#continuous-integration-and-development">
        
      </a>
    </div>
    <p>Continuous integration is important for any team building modern software. Cloudflare Pages provides multiple options to work with CI tools using direct uploads, out of the box. The team has published <a href="https://developers.cloudflare.com/pages/how-to/use-direct-upload-with-continuous-integration/">documentation and examples</a> on how to do that with GitHub Actions, CircleCI, and Travis, but you can use others.</p><p>In our case, we use BitBucket and TeamCity internally to build and deploy our releases. Our workflow automatically builds, tests, and deploys Radar 2.0 within minutes of an approved PR and follow-up merge.</p><p>Unit tests are done with <a href="https://vitest.dev/">Vitest</a> and E2E tests with <a href="https://playwright.dev/">Playwright</a>. Visual regression testing is planned, and <a href="https://playwright.dev/docs/test-snapshots">Playwright can also help with that</a>.</p><p>Furthermore, we have multiple environments to stage and test our releases before they go live to production. Our <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD</a> setup makes it easy to switch from one environment to the other or quickly roll back any undesired deployment.</p><p>Again, Cloudflare Pages makes this easy with <a href="https://developers.cloudflare.com/pages/platform/preview-deployments/">Preview deployments</a>, aliases, or <a href="https://developers.cloudflare.com/pages/platform/branch-build-controls/">Branch build controls</a>. The same is true for regular Workers using <a href="https://developers.cloudflare.com/workers/platform/environments/">Environments</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Q9hXo3Iz04CtXpmaALbzI/0c40d53fd560b42f08098a00d00157e5/image19.png" />
            
            </figure>
    <div>
      <h3>Fast previews and notifications</h3>
      <a href="#fast-previews-and-notifications">
        
      </a>
    </div>
    <p>Radar 1.0 wasn't particularly fast doing CI/CD, we confess. We had a few episodes when a quick fix could take a good 30 minutes from commit to deployment, which was frustrating.</p><p>So we invested a lot in ensuring that the new CI would be fast, efficient, and furious.</p><p>One cool thing we ended up doing was fast preview links for any commit pushed to the code repository. By combining intelligent caching during builds with asynchronous tests for commits outside the normal release branches, we shortened the deployment time to seconds.</p><p>This is the notification we get in our chat when anyone pushes code to any branch:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5moqSeGzEAlNHnEJVoxwbO/900325ae89a90528c250ffd3fa4c4e0e/image8-2.png" />
            
            </figure><p>Anyone can follow a thread for a specific branch in the chat and get notified of new changes when they happen.</p><p>Blazing-fast builds, preview links, and notifications are game-changers. An engineer can go from an idea or a quick fix to sharing a working link with a product manager or another team member, who can then click it to see the changes on a fully working end-to-end version of Radar.</p>
    <div>
      <h2>Accessibility and localization</h2>
      <a href="#accessibility-and-localization">
        
      </a>
    </div>
    <p>Cloudflare is committed to web accessibility. Recently we announced how we upgraded Cloudflare’s Dashboard to <a href="/project-a11y/">adhere to industry accessibility standards</a>, but this premise is valid for all our properties. The same is true for localization. In 2020, we <a href="/internationalizing-the-cloudflare-dashboard/">internationalized</a> our Dashboard and added support for new languages and locales.</p><p>Accessibility and localization go hand in hand and are both important, but they are also different. The <a href="https://www.w3.org/TR/WCAG21/">Web Content Accessibility Guidelines</a> define many best practices around accessibility, including using <a href="https://color.cloudflare.design/">color</a> and contrast, tags, SVGs, shortcuts, gestures, and many others. The <a href="https://www.a11yproject.com/">A11Y project page</a> is an excellent resource for learning more.</p><p><a href="https://en.wikipedia.org/wiki/Internationalization_and_localization">Localization</a> (L10n), on the other hand, is more of a technical requirement to plan for when you start a new project. It's about choosing the right set of libraries and frameworks to make it easy to add new translations without engineering dependencies or code rewrites.</p><p>We wanted Radar to perform well on both fronts. Our design system takes Cloudflare's design and brand <a href="https://cloudflare.design/">guidelines</a> seriously and adds as many A11Y good practices as possible, and the app is fully aware of localization strings across its pages and UI components.</p><p>Adding a new language is as easy as translating a single JSON file. Here's a snippet of the en-US.json file with the default American English strings:</p>
            <pre><code>{
  "abbr.asn": "Autonomous System Number",
  "actions.chart.download.csv": "Download chart data in CSV",
  "actions.chart.download.png": "Download chart in PNG Format",
  "actions.chart.download.svg": "Download chart in SVG Format",
  "actions.chart.download": "Download chart",
  "actions.chart.maximize": "Maximize chart",
  "actions.chart.minimize": "Minimize chart",
  "actions.chart.share": "Share chart",
  "actions.download.csv": "Download CSV",
  "actions.download.png": "Download PNG",
  "actions.download.svg": "Download SVG",
  "actions.share": "Share",
  "alert.beta.link": "Radar Classic",
  "alert.beta.message": "Radar 2.0 is currently in Beta. You can still use {link} during the transition period.",
  "card.about.cloudflare.p1": "Cloudflare, Inc. ({website} / {twitter}) is on a mission to help build a better Internet. Cloudflare's suite of products protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine's Top Company Cultures 2018 list and ranked among the World's Most Innovative Companies by Fast Company in 2019.",
  "card.about.cloudflare.p2": "Headquartered in San Francisco, CA, Cloudflare has offices in Austin, TX, Champaign, IL, New York, NY, San Jose, CA, Seattle, WA, Washington, D.C., Toronto, Dubai, Lisbon, London, Munich, Paris, Beijing, Singapore, Sydney, and Tokyo.",
  "card.about.cloudflare.title": "About Cloudflare",
...</code></pre>
            <p>You can expect us to release Radar in other languages soon.</p>
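<p>Resolving these keyed strings at runtime amounts to a catalog lookup plus placeholder interpolation. The <code>t()</code> helper below is an illustrative sketch, not Radar's actual i18n implementation:</p>

```typescript
// Illustrative translation lookup: find the key in a language catalog and
// substitute {placeholders} from the values map. Unknown keys fall back to
// the key itself; unknown placeholders are left untouched.
export const enUS: Record<string, string> = {
  "alert.beta.link": "Radar Classic",
  "alert.beta.message":
    "Radar 2.0 is currently in Beta. You can still use {link} during the transition period.",
};

export function t(
  catalog: Record<string, string>,
  key: string,
  values: Record<string, string> = {}
): string {
  const template = catalog[key] ?? key; // fall back to the key itself
  return template.replace(/\{(\w+)\}/g, (match, name) => values[name] ?? match);
}
```

With this shape, shipping a new language really is just another JSON file: no component code needs to change.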
    <div>
      <h2>Radar Reports and Jupyter notebooks</h2>
      <a href="#radar-reports-and-jupyter-notebooks">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/reports">Radar Reports</a> are documents that use data exploration and storytelling to analyze a particular theme in depth. Some reports are updated from time to time. Examples of Radar Reports are our quarterly <a href="https://radar.cloudflare.com/reports/ddos-2022-q3">DDoS Attack Trends</a> or our <a href="https://radar.cloudflare.com/reports/ipv6">IPv6 adoption</a> report.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YdlIovSNvHy7YITYdxiCf/50804dd483563fd236d7b8ab6f05b8b1/image4-23.png" />
            
            </figure><p>The source of these Reports is <a href="https://jupyter.org/">Jupyter Notebooks</a>. Our Data Science team works on a use case or theme with other stakeholders using our internal JupyterHub tool. After all the iteration and exploration are done, and the work is signed off, a notebook is produced.</p><p>A Jupyter Notebook is a <a href="https://ipython.org/ipython-doc/3/notebook/nbformat.html">JSON document</a> containing text, source code, rich media such as images or charts, and other metadata. It is the de facto standard for presenting data science projects, and every data scientist uses it.</p><p>With Radar 1.0, converting a Jupyter Notebook to a Radar page was a lengthy, manual process that consumed significant engineering and design resources and frustrated everyone involved. Even updating an already-published notebook would frequently cause trouble for us.</p><p>Radar 2.0 changed all of this. We now have a fully automated process that takes a Jupyter Notebook and, as long as it follows a set of simple rules and internal guidelines, converts it automatically, hosts the resulting HTML and assets in an R2 bucket, and publishes it on the <a href="https://radar.cloudflare.com/reports">Reports</a> page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iS9T8mn23CDjg4nFfpqWT/efdeaa9d9b3ee645cd21b5014af13da1/image9-2.png" />
            
            </figure><p>The conversion to HTML takes into account our design system and UI components, and the result is a <a href="https://radar.cloudflare.com/reports/ddos-2022-q3">beautiful document</a>, usually long-form, perfectly matching Radar's look and feel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4oXmxPoDH1NDJqsBaIRKK4/9cdd9229bd835c6722fb06c6350fbf0f/image13.png" />
            
            </figure><p>We will eventually open-source this tool so that anyone can use it.</p>
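<p>The core of that conversion can be sketched in a few lines: a notebook is JSON with a list of cells, so a converter walks the cells and emits matching markup. This toy version only covers the basic shape; the real pipeline also renders markdown, handles rich outputs, and applies our design system:</p>

```typescript
// Toy sketch of notebook-to-HTML conversion. The cell shape follows the
// nbformat JSON structure; everything else is simplified for illustration.
interface NotebookCell {
  cell_type: "markdown" | "code";
  source: string[]; // nbformat stores each line as a separate string
}

interface Notebook {
  cells: NotebookCell[];
}

export function notebookToHtml(nb: Notebook): string {
  return nb.cells
    .map((cell) => {
      const text = cell.source.join("");
      return cell.cell_type === "code"
        ? `<pre><code>${text}</code></pre>`
        : `<p>${text}</p>`; // a real converter renders the markdown here
    })
    .join("\n");
}
```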
    <div>
      <h2>More Cloudflare, less to worry about</h2>
      <a href="#more-cloudflare-less-to-worry-about">
        
      </a>
    </div>
    <p>We gave examples of using Cloudflare's products and features to build your next-gen app without worrying too much about things that aren't core to your business or logic. A few pieces are still worth mentioning, though.</p><p>Once the app is up and running, you must protect it from bad traffic and malicious actors. Cloudflare offers <a href="https://www.cloudflare.com/ddos/">DDoS</a>, <a href="https://www.cloudflare.com/waf/">WAF</a>, and <a href="https://www.cloudflare.com/products/bot-management/">Bot Management</a> protection out of the box, a click away.</p><p>For example, here are some of our security rules. This is traffic we don't have to worry about in our app because Cloudflare detects it and acts on it according to our rules.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14D17IdfhuHOyzPeFPePyP/faa894184818241d101551d1815bf0d7/image10-1.png" />
            
            </figure><p>Another thing we don't need to worry about is redirects from the old site to the new one. Cloudflare has a feature called <a href="https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/create-dashboard/">Bulk Redirects</a>, where you can easily create redirect lists directly on the dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ieVLmpoynt1H7lDWykfs8/ca4136845ea8fa4f66166a3be7fa57b5/image15.png" />
            
            </figure><p>It's also important to mention that anything we describe doing in our Dashboard can be done precisely the same way using <a href="https://api.cloudflare.com/">Cloudflare's APIs</a>. Our Dashboard is built entirely on top of them. And if you're the infrastructure-as-code kind of person, we have you covered, too; you can use the <a href="https://developers.cloudflare.com/terraform/tutorial/">Cloudflare Terraform provider</a>.</p><p>Deploying and managing Workers, R2 buckets, or Pages sites is obviously scriptable too. <a href="https://github.com/cloudflare/wrangler">Wrangler</a> is the command-line tool for this and more, and it goes the extra mile by letting you run your full app <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev">locally</a> on your computer, emulating our stack, before deploying.</p>
    <div>
      <h2>Final words</h2>
      <a href="#final-words">
        
      </a>
    </div>
    <p>We hope you enjoyed this Radar team write-up and were inspired to build your next app on top of our <a href="/welcome-to-the-supercloud-and-developer-week-2022/">Supercloud</a>. We will continue improving and innovating on Radar 2.0, shipping new features, sharing our findings, and open-sourcing our tools.</p><p>In the meantime, we opened a <a href="https://discord.gg/cloudflaredev">Radar room</a> on our Developers Discord Server. Feel free to <a href="https://discord.gg/cloudflaredev">join</a> and ask us questions; the team is eager to receive feedback and discuss web technology with you.</p><p>You can also follow us <a href="https://twitter.com/cloudflareradar">on Twitter</a> for more Radar updates.</p>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2H0o8Ld6ebN4hs7uhm1ELW</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Nuno Pereira</dc:creator>
            <dc:creator>Sofia Cardita</dc:creator>
            <dc:creator>Gabriel Massadas</dc:creator>
        </item>
    </channel>
</rss>