
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 20:39:59 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Announcing the Cloudflare Data Platform: ingest, store, and query your data directly on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/cloudflare-data-platform/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare Data Platform, launching today, is a fully-managed suite of products for ingesting, transforming, storing, and querying analytical data, built on Apache Iceberg and R2 storage. ]]></description>
<content:encoded><![CDATA[ <p>For Developer Week in April 2025, we announced <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/"><u>the public beta of R2 Data Catalog</u></a>, a fully managed <a href="https://iceberg.apache.org/docs/nightly/"><u>Apache Iceberg</u></a> catalog on top of <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2 object storage</u></a>. Today, we are building on that foundation with three launches:</p><ul><li><p><b>Cloudflare Pipelines</b> receives events sent via Workers or HTTP, transforms them with SQL, and ingests them into Iceberg or as files on R2</p></li><li><p><b>R2 Data Catalog</b> manages the Iceberg metadata and now performs ongoing maintenance, including compaction, to improve query performance</p></li><li><p><b>R2 SQL</b> is our in-house distributed SQL engine, designed to perform petabyte-scale queries over your data in R2</p></li></ul><p>Together, these products make up the <b>Cloudflare Data Platform</b>, a complete solution for ingesting, storing, and querying analytical data tables.</p><p>Like all <a href="https://www.cloudflare.com/developer-platform/products/"><u>Cloudflare Developer Platform products</u></a>, they run on our global compute infrastructure. They’re built around open standards and interoperability. That means you can bring your own Iceberg query engine — whether that's PyIceberg, DuckDB, or Spark — connect with other platforms like Databricks and Snowflake — and pay no egress fees to access your data.</p><p>Analytical data is critical for modern companies. It lets you understand your users’ behavior and your company’s performance, and alerts you to issues. But traditional data infrastructure is expensive and hard to operate, requiring fixed cloud infrastructure and in-house expertise.
We built the Cloudflare Data Platform to be easy enough for anyone to use with affordable, usage-based pricing.</p><p>If you're ready to get started now, follow the <a href="https://developers.cloudflare.com/pipelines/getting-started/"><u>Data Platform tutorial</u></a> for a step-by-step guide through creating a <a href="https://developers.cloudflare.com/pipelines/"><u>Pipeline</u></a> that processes and delivers events to an <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> table, which can then be queried with <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a>. Or read on to learn about how we got here and how all of this works.</p>
    <div>
      <h3>How did we end up building a Data Platform?</h3>
      <a href="#how-did-we-end-up-building-a-data-platform">
        
      </a>
    </div>
<p>We <a href="https://blog.cloudflare.com/introducing-r2-object-storage/"><u>launched R2 Object Storage in 2021</u></a> with a radical pricing strategy: no <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/"><u>egress fees</u></a> — the bandwidth costs that traditional cloud providers charge to get data out — effectively holding your data ransom. This was possible because we had already built one of the largest global networks, interconnecting with thousands of ISPs, cloud services, and other enterprises.</p><p>Object storage powers a wide range of use cases, from media to static assets to AI training data. But over time, we've seen an increasing number of companies using open data and table formats to store their analytical data warehouses in R2.</p><p>The technology that enables this is <a href="https://iceberg.apache.org/"><u>Apache Iceberg</u></a>. Iceberg is a <i>table format</i>, which provides database-like capabilities (including updates, ACID transactions, and schema evolution) on top of data files in object storage. In other words, it’s a metadata layer that tells clients which data files make up a particular logical table, what the schemas are, and how to efficiently query them.</p><p>The adoption of Iceberg across the industry meant users were no longer locked into one query engine. But egress fees still make it cost-prohibitive to query data across regions and clouds. R2, with <a href="https://www.cloudflare.com/the-net/egress-fees-exit/"><u>zero-cost egress</u></a>, solves that problem — users would no longer be locked into their clouds either. They could store their data in a vendor-neutral location and let teams use whatever query engine made sense for their data and query patterns.</p><p>But users still had to manage all of the metadata and other infrastructure themselves. We realized there was an opportunity for us to solve a major pain point and reduce the friction of storing data lakes on R2.
This became R2 Data Catalog, our managed Iceberg catalog.</p><p>With the data stored on R2 and metadata managed, that still left a few gaps for users to solve.</p><p>How do you get data into your Iceberg tables? Once it's there, how do you optimize for query performance? And how do you actually get value from your data without needing to self-host a query engine or use another cloud platform?</p><p>In the rest of this post, we'll walk through how the three products that make up the Data Platform solve these challenges.</p>
    <div>
      <h3>Cloudflare Pipelines</h3>
      <a href="#cloudflare-pipelines">
        
      </a>
    </div>
    <p>Analytical data tables are made up of <i>events</i>, things that happened at a particular point in time. They might come from server logs, mobile applications, or IoT devices, and are encoded in data formats like JSON, Avro, or Protobuf. They ideally have a schema — a standardized set of fields — but might just be whatever a particular team thought to throw in there.</p><p>But before you can query your events with Iceberg, they need to be ingested, structured according to a schema, and written into object storage. This is the role of <a href="https://developers.cloudflare.com/pipelines/"><u>Cloudflare Pipelines</u></a>.</p><p>Built on top of <a href="https://www.arroyo.dev"><u>Arroyo</u></a>, a stream processing engine we acquired earlier this year, Pipelines receives events, transforms them with SQL queries, and sinks them to R2 and R2 Data Catalog.</p><p>Pipelines is organized around three central objects:</p><p><a href="https://developers.cloudflare.com/pipelines/streams/"><b><u>Streams</u></b></a> are how you get data into Cloudflare. They're durable, buffered queues that receive events and store them for processing. Streams can accept events in two ways: via an HTTP endpoint or from a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Cloudflare Worker binding</u></a>.</p><p><a href="https://developers.cloudflare.com/pipelines/sinks/"><b><u>Sinks</u></b></a> define the destination for your data. We support ingesting into R2 Data Catalog, as well as writing raw files to R2 as JSON or <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a>. Sinks can be configured to frequently write files, prioritizing low-latency ingestion, or to write less frequent, larger files to get better query performance. 
In either case, ingestion is <i>exactly-once</i>, which means that we will never duplicate or drop events on their way to R2.</p><p><a href="https://developers.cloudflare.com/pipelines/pipelines/"><b><u>Pipelines</u></b></a> connect streams and sinks via <a href="https://developers.cloudflare.com/pipelines/sql-reference/"><u>SQL transformations</u></a>, which can modify events before writing them to storage. This enables you to <i>shift left</i>, pushing validation, schematization, and processing to your ingestion layer to make your queries easy, fast, and correct.</p>
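<p>As a sketch of the producer side, here is roughly how an application might shape an event for a stream. The field names mirror the clickstream schema used in the SQL example in this post, but the binding name (<code>CLICKSTREAM</code>) and the exact <code>send()</code> call are illustrative assumptions, not the documented API:</p>

```javascript
// Sketch only: shape a clickstream event with the fields the pipeline's
// SQL transformation expects (user_id, event, ts_us, url, referrer,
// user_agent). The field names are illustrative.
function makeClickEvent(userId, url, referrer, userAgent) {
  return {
    user_id: userId,
    event: "page_view",
    ts_us: Date.now() * 1000, // microseconds since the epoch
    url,
    referrer,
    user_agent: userAgent,
  };
}

// In a Worker, events could then be sent to a stream via a binding
// (binding name and send() shape are assumptions, not the documented API):
//   await env.CLICKSTREAM.send([makeClickEvent("u_123", "https://mywebsite.com/", "", "Mozilla/5.0")]);
// Or POSTed as a JSON array to the stream's HTTP endpoint.
```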
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ExEHrwqgUuYLCm2q6yUkN/bf31d44dd7b97666af37cb2c25f12808/unnamed__33_.png" />
          </figure><p>For example, here's a pipeline that ingests events from a clickstream data source and writes them to Iceberg:</p>
            <pre><code>INSERT into events_table
SELECT
  user_id,
  lower(event) AS event_type,
  to_timestamp_micros(ts_us) AS event_time,
  regexp_match(url, '^https?://([^/]+)')[1]  AS domain,
  url,
  referrer,
  user_agent
FROM events_json
WHERE event = 'page_view'
  AND NOT regexp_like(user_agent, '(?i)bot|spider');</code></pre>
            <p>SQL transformations are very powerful and give you full control over how data is structured and written into the table. For example, you can</p><ul><li><p>Schematize and normalize your data, even using <a href="https://developers.cloudflare.com/pipelines/sql-reference/scalar-functions/json/"><u>JSON functions</u></a> to extract fields from arbitrary JSON</p></li><li><p>Filter out events or split them into separate tables with their own schemas</p></li><li><p>Redact sensitive information before storage with regexes</p></li><li><p>Unroll nested arrays and objects into separate events</p></li></ul><p>Initially, Pipelines supports stateless transformations. In the future, we'll leverage more of <a href="https://www.arroyo.dev/blog/stateful-stream-processing/"><u>Arroyo's stateful processing capabilities</u></a> to support aggregations, incrementally-updated materialized views, and joins.</p><p>Cloudflare Pipelines is available today in open beta. You can create a pipeline using the dashboard, Wrangler, or the REST API. To get started, check out our <a href="https://developers.cloudflare.com/pipelines/getting-started/"><u>developer docs.</u></a></p><p>We aren’t currently billing for Pipelines during the open beta. However, R2 storage and operations incurred by sinks writing data to R2 are billed at <a href="https://developers.cloudflare.com/r2/pricing/"><u>standard rates</u></a>. When we start billing, we anticipate charging based on the amount of data read, the amount of data processed via SQL transformations, and data delivered.</p>
    <div>
      <h3>R2 Data Catalog</h3>
      <a href="#r2-data-catalog">
        
      </a>
    </div>
    <p>We launched the open beta of <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> in April and have been amazed by the response. Query engines like DuckDB <a href="https://duckdb.org/docs/stable/guides/network_cloud_storage/cloudflare_r2_import.html"><u>have added native support</u></a>, and we've seen useful integrations like <a href="https://blog.cloudflare.com/marimo-cloudflare-notebooks/"><u>marimo notebooks</u></a>.</p><p>It makes getting started with Iceberg easy. There’s no need to set up a database cluster, connect to object storage, or manage any infrastructure. You can create a catalog with a couple of <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> commands:</p>
<pre><code>$ npx wrangler r2 bucket create mycatalog
$ npx wrangler r2 bucket catalog enable mycatalog</code></pre>
<p>This provisions a data lake that can scale to petabytes of storage, queryable by whatever engine you want to use, with zero egress fees.</p><p>But just storing the data isn't enough. Over time, as data is ingested, the number of underlying data files that make up a table will grow, leading to slower and slower query performance.</p><p>This is a particular problem with low-latency ingestion, where the goal is to have events queryable as quickly as possible. Writing data frequently means the files are smaller, and there are more of them. Each file needed for a query has to be listed, downloaded, and read. The overhead of too many small files can dominate the total query time.</p><p>The solution is <i>compaction</i>, a periodic maintenance operation performed automatically by the catalog. Compaction rewrites small files into larger ones, which reduces metadata overhead and improves query performance.</p><p>Today we are launching compaction support in R2 Data Catalog. Enabling it for your catalog is as easy as:
</p>
            <pre><code>$ npx wrangler r2 bucket catalog compaction enable mycatalog</code></pre>
            <p>We're starting with support for small-file compaction, and will expand to additional compaction strategies in the future. Check out the <a href="https://developers.cloudflare.com/r2/data-catalog/about-compaction/"><u>compaction documentation</u></a> to learn more about how it works and how to enable it.</p><p>At this time, during open beta, we aren’t billing for R2 Data Catalog. Below is our current thinking on future pricing:</p><table><tr><td><p>
</p></td><td><p><b>Pricing*</b></p></td></tr><tr><td><p>R2 storage</p><p>For standard storage class</p></td><td><p>$0.015 per GB-month (no change)</p></td></tr><tr><td><p>R2 Class A operations</p></td><td><p>$4.50 per million operations (no change)</p></td></tr><tr><td><p>R2 Class B operations</p></td><td><p>$0.36 per million operations (no change)</p></td></tr><tr><td><p>Data Catalog operations</p><p>e.g., create table, get table metadata, update table properties</p></td><td><p>$9.00 per million catalog operations</p></td></tr><tr><td><p>Data Catalog compaction data processed</p></td><td><p>$0.005 per GB processed</p><p>$2.00 per million objects processed</p></td></tr><tr><td><p>Data egress</p></td><td><p>$0 (no change, always free)</p></td></tr></table><p><i>*prices subject to change prior to General Availability</i></p><p>We will provide at least 30 days notice before billing starts or if anything changes.</p>
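<p>To make the preview pricing concrete, here is a back-of-the-envelope estimate for a hypothetical month. The workload numbers are invented for illustration, and the prices above are subject to change before General Availability:</p>

```javascript
// Hypothetical monthly workload, priced with the preview rates above.
// All workload numbers here are made up for illustration.
const usage = {
  storageGbMonths: 500,         // R2 standard storage
  classAMillions: 1,            // writes/lists
  classBMillions: 10,           // reads
  catalogOpsMillions: 0.5,      // create/get/update table calls
  compactionGb: 100,            // data rewritten by compaction
  compactionObjectsMillions: 1, // small files rewritten by compaction
};

const costUsd =
  usage.storageGbMonths * 0.015 +
  usage.classAMillions * 4.5 +
  usage.classBMillions * 0.36 +
  usage.catalogOpsMillions * 9.0 +
  usage.compactionGb * 0.005 +
  usage.compactionObjectsMillions * 2.0;
// Egress is $0, so it never appears in the total.

console.log(costUsd.toFixed(2)); // → "22.60"
```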
    <div>
      <h3>R2 SQL</h3>
      <a href="#r2-sql">
        
      </a>
    </div>
    <p>Having data in R2 Data Catalog is only the first step; the real goal is getting insights and value from it. Traditionally, that means setting up and managing DuckDB, Spark, Trino, or another query engine, adding a layer of operational overhead between you and those insights. What if instead you could run queries directly on Cloudflare?</p><p>Now you can. We’ve built a query engine specifically designed for R2 Data Catalog and Cloudflare’s edge infrastructure. We call it <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a>, and it’s available today as an open beta.</p><p>With Wrangler, running a query on an R2 Data Catalog table is as easy as</p>
            <pre><code>$ npx wrangler r2 sql query "{WAREHOUSE}" "\
  SELECT user_id, url FROM events \
  WHERE domain = 'mywebsite.com'"</code></pre>
<p>Cloudflare's ability to schedule compute anywhere on its global network is the foundation of R2 SQL's design. This lets us process data directly where it lives, instead of requiring you to manage centralized clusters for your analytical workloads.</p><p>R2 SQL is tightly integrated with R2 Data Catalog and R2, which allows the query planner to go beyond simple storage scanning and make deep use of the rich statistics stored in the R2 Data Catalog metadata. This provides a powerful foundation for a new class of query optimizations, such as auxiliary indexes and more complex analytical functions in the future.</p><p>The result is a fully serverless experience. You can focus on your SQL without needing a deep understanding of how the engine operates. If you are interested in how R2 SQL works, the team has written <a href="https://blog.cloudflare.com/r2-sql-deep-dive"><u>a deep dive into how R2 SQL’s distributed query engine works at scale</u></a>.</p><p>The open beta is an early preview of R2 SQL's querying capabilities and is initially focused on filter queries. Over time, we will expand its capabilities to cover more SQL features, like complex aggregations.</p><p>We're excited to see what our users do with R2 SQL. To try it out, see <a href="https://developers.cloudflare.com/r2-sql/"><u>the documentation</u></a> and <a href="https://developers.cloudflare.com/r2-sql/get-started/"><u>tutorials</u></a>. During the beta, R2 SQL usage is not billed, but R2 storage and operations incurred by queries are billed at standard rates. We plan to charge for the volume of data scanned by queries in the future and will provide notice before billing begins.</p>
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
<p>Today, you can use the Cloudflare Data Platform to ingest events into R2 Data Catalog and query them via R2 SQL. In the first half of 2026, we’ll be expanding the capabilities of all of these products, including:</p><ul><li><p>Integration with <a href="https://developers.cloudflare.com/logs/logpush/"><u>Logpush</u></a>, so you can transform, store, and query your logs directly within Cloudflare</p></li><li><p>User-defined functions via Workers, and stateful processing support for streaming transformations</p></li><li><p>Expanding the feature set of R2 SQL to cover aggregations and joins</p></li></ul><p>In the meantime, you can get started with the Cloudflare Data Platform by following <a href="https://developers.cloudflare.com/pipelines/getting-started/"><u>the tutorial</u></a> to create an end-to-end analytical data system, from ingestion with Pipelines, through storage in R2 Data Catalog, to querying with R2 SQL.

We’re excited to see what you build! Come share your feedback with us on our <a href="http://discord.cloudflare.com/"><u>Developer Discord</u></a>.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Data Catalog]]></category>
            <category><![CDATA[Pipelines]]></category>
            <guid isPermaLink="false">1InN6nunuaGKjLU7DcoArr</guid>
            <dc:creator>Micah Wylde</dc:creator>
            <dc:creator>Alex Graham</dc:creator>
            <dc:creator>Jérôme Schneider</dc:creator>
        </item>
        <item>
            <title><![CDATA[Just landed: streaming ingestion on Cloudflare with Arroyo and Pipelines]]></title>
            <link>https://blog.cloudflare.com/cloudflare-acquires-arroyo-pipelines-streaming-ingestion-beta/</link>
            <pubDate>Thu, 10 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve just shipped our new streaming ingestion service, Pipelines — and we’ve acquired Arroyo, enabling us to bring new SQL-based, stateful transformations to Pipelines and R2. ]]></description>
<content:encoded><![CDATA[ <p>Today, we’re launching the open beta of Pipelines, our streaming ingestion product. Pipelines allows you to ingest high volumes of structured, real-time data, and load it into our <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>object storage service, R2</u></a>. You don’t have to manage any of the underlying infrastructure or worry about scaling shards or metadata services, and you pay for the data processed (not by the hour). Anyone on a Workers paid plan can start using it to ingest and batch data — at tens of thousands of requests per second (RPS) — directly into R2.</p><p>But this is just the tip of the iceberg: you often want to transform the data you’re ingesting, hydrate it on-the-fly from other sources, and write it to an open table format (such as Apache Iceberg), so that you can efficiently query that data once you’ve landed it in object storage.</p><p>The good news is that we’ve thought about that too, and we’re excited to announce that we’ve acquired <a href="https://www.arroyo.dev/"><u>Arroyo</u></a>, a cloud-native, distributed stream processing engine, to make that happen.</p><p>With Arroyo <i>and</i> our just-announced <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/">R2 Data Catalog</a>, we’re getting increasingly serious about building a data platform that allows you to ingest data across the planet, store it at scale, and <i>run compute over it</i>.</p><p>To get started, you can dive into the <a href="http://developers.cloudflare.com/pipelines/"><u>Pipelines developer docs</u></a> or just run this <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> command to create your first pipeline:</p>
            <pre><code>$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket

...
✅ Successfully created Pipeline my-clickstream-pipeline with ID 0e00c5ff09b34d018152af98d06f5a1xvc</code></pre>
            <p>… and then write your first record(s):</p>
<pre><code>$ curl -d '[{"payload": [],"id":"abc-def"}]' \
  "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/"</code></pre>
<p>However, the true power comes from processing data streams between ingestion and when they’re written to sinks like R2. Being able to write SQL that acts on windows of data <i>as it’s being ingested</i>, transforming &amp; aggregating it, and even extracting insights from the data in real-time, turns out to be extremely powerful.</p><p>This is where Arroyo comes in, and we’re going to bring the best parts of Arroyo into Pipelines and deeply integrate it with Workers, R2, and the rest of our Developer Platform.</p>
    <div>
      <h2>The Arroyo origin story </h2>
      <a href="#the-arroyo-origin-story">
        
      </a>
    </div>
<p><i>(By Micah Wylde, founder of Arroyo)</i></p><p>We started Arroyo in 2023 to bring real-time (<i>stream</i>) processing to everyone who works with data. Modern companies rely on data pipelines to power their applications and businesses — from user customization, recommendations, and anti-fraud, to the emerging world of AI agents.</p><p>But today, most of these pipelines operate in batch, running once per hour, day, or even month. After spending many years working on stream processing at companies like Lyft and Splunk, we knew why: it was just too hard for developers and data scientists to build correct, performant, and reliable pipelines. Large tech companies hire streaming experts to build and operate these systems, but everyone else is stuck waiting for batches to arrive.</p><p>When we started, the dominant solution for streaming pipelines — and what we ran at Lyft and Splunk — was Apache Flink. Flink was the first system that successfully combined a fault-tolerant (able to recover consistently from failures), distributed (across multiple machines), stateful (able to remember data about past events) dataflow with a graph-construction API. This combination of features meant that we could finally build powerful real-time data applications, with capabilities like windows, aggregations, and joins. But while Flink had the necessary power, in practice the API proved too hard and low-level for non-expert users, and the stateful nature of the resulting services required endless operations.</p><p>We realized we would need to build a new streaming engine — one with the power of Flink, but designed for product engineers and data scientists, and built to run on modern cloud infrastructure. We started with SQL as our API because it’s easy to use, widely known, and declarative. We built it in Rust for speed and operational simplicity (no JVM tuning required!).
We constructed an object-storage-native state backend, simplifying the challenge of running stateful pipelines — each of which is like a weird, specialized database. And then in the summer of 2023, we open-sourced it. Today, dozens of companies are running Arroyo pipelines for use cases including data ingestion, anti-fraud, IoT observability, and financial trading.</p><p>But we always knew that the engine was just one piece of the puzzle. To make streaming as easy as batch, users need to be able to develop and test query logic, backfill on historical data, and deploy serverlessly without having to worry about cluster sizing or ongoing operations. Democratizing streaming ultimately meant building a complete data platform. And when we started talking with Cloudflare, we realized they already had all of the pieces in place: R2 provides object storage for state and data at rest, Cloudflare <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> for data in transit, and Workers to safely and efficiently run user code. And Cloudflare, uniquely, allows us to push these systems all the way to the edge, enabling a new paradigm of local stream processing that will be key for a future of data sovereignty and AI.</p><p>That’s why we’re incredibly excited to join with the Cloudflare team to make this vision a reality.</p>
    <div>
      <h2>Ingestion at scale</h2>
      <a href="#ingestion-at-scale">
        
      </a>
    </div>
    <p>While transformations and a streaming SQL API are on the way for Pipelines, it already solves two critical parts of the data journey: globally distributed, high-throughput ingestion and efficient loading into object storage. </p><p>Creating a pipeline is as simple as running one command: </p>
            <pre><code>$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket

🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline with ID 
0e00c5ff09b34d018152af98d06f5a1xvc

Id:    0e00c5ff09b34d018152af98d06f5a1xvc
Name:  my-clickstream-pipeline
Sources:
  HTTP:
    Endpoint:        https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/
    Authentication:  off
    Format:          JSON
  Worker:
    Format:  JSON
Destination:
  Type:         R2
  Bucket:       my-bucket
  Format:       newline-delimited JSON
  Compression:  GZIP
Batch hints:
  Max bytes:     100 MB
  Max duration:  300 seconds
  Max records:   100,000

🎉 You can now send data to your pipeline!

Send data to your pipeline's HTTP endpoint:
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'</code></pre>
<p>By default, a pipeline can ingest data from two sources – Workers and an HTTP endpoint – and load batched events into an R2 bucket. This gives you an out-of-the-box solution for streaming raw event data into object storage. If the defaults don’t work, you can configure pipelines during creation or anytime after. Options include: adding authentication to the HTTP endpoint, configuring CORS to allow browsers to make cross-origin requests, and specifying output file compression and batch settings.</p><p>We’ve built Pipelines for high ingestion volumes from day 1. Each pipeline can scale to ~100,000 records per second (and we’re just getting started here). Once records are written to a pipeline, they are durably stored, batched, and written out as files in an R2 bucket. Batching is critical here: if you’re going to act on and query that data, you don’t want your query engine reading millions (or tens of millions) of tiny files. It’s slow (per-file &amp; request overheads), inefficient (more files to read), and costly (more operations). Instead, you want to find the right balance between batch size for your query engine and latency (not waiting too long for a batch): Pipelines allows you to configure this.</p><p>To further optimize queries, output files are partitioned by date and time, using the standard Hive partitioning scheme. Your query engine can then skip data that is irrelevant to the query you’re running. The output in your R2 bucket might look like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7q63u2kRoYBAZJtgfcF874/2a7341e1cba6e371e0eed311e89fec6a/image1.png" />
</figure><p><sup><i>Hive-partitioned files from Pipelines in an R2 bucket</i></sup></p><p>Output files are stored as newline-delimited JSON (NDJSON), which makes it easy to materialize a stream from these files (hint: in the future you’ll be able to use R2 as a pipeline source too). Finally, the file names are <a href="https://github.com/ulid/spec"><u>ULIDs</u></a>, so they’re sorted by time by default.</p>
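<p>The partitioning scheme can be pictured as a small function from event time to object-key prefix. The partition column names used here (<code>year=</code>, <code>month=</code>, <code>day=</code>, <code>hr=</code>) follow the standard Hive layout and are an assumption for illustration, not a guarantee of the exact keys Pipelines emits:</p>

```javascript
// Build a Hive-style partition prefix from an event timestamp, so query
// engines can prune whole date/hour directories. The column names are
// illustrative of the standard scheme, not Pipelines' exact output.
function hivePartitionPrefix(date) {
  const pad = (n) => String(n).padStart(2, "0");
  return [
    `year=${date.getUTCFullYear()}`,
    `month=${pad(date.getUTCMonth() + 1)}`,
    `day=${pad(date.getUTCDate())}`,
    `hr=${pad(date.getUTCHours())}`,
  ].join("/");
}

// A file written at 2025-04-10 14:00 UTC would land under a prefix like
// "year=2025/month=04/day=10/hr=14/", followed by a ULID file name that
// sorts by time.
```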
    <div>
      <h2>First you shard, then you shard some more</h2>
      <a href="#first-you-shard-then-you-shard-some-more">
        
      </a>
    </div>
<p>What makes Pipelines so horizontally scalable <i>and</i> able to acknowledge writes quickly is how we built it: we use Durable Objects and the <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>embedded, zero-latency SQLite</u></a> storage within each Durable Object to immediately persist data as it’s written, before then processing it and writing it to R2.</p><p>For example: imagine you’re an e-commerce or SaaS site and need to ingest website usage data (known as <i>clickstream data</i>) and make it available to your data science team to query. The infrastructure that handles this workload has to be resilient to several failure scenarios. The ingestion service needs to maintain high availability in the face of bursts in traffic. Once ingested, the data needs to be buffered, to minimize downstream invocations and thus downstream cost. Finally, the buffered data needs to be delivered to a sink, with appropriate retry &amp; failure handling if the sink is unavailable. Each step of this process needs to signal backpressure upstream when overloaded. It also needs to scale: up during major sales or events, and down during the quieter periods of the day.</p><p>Data engineers reading this post might be familiar with the status quo of using Kafka and the associated ecosystem to handle this. But if you’re an application engineer, you can use Pipelines to build an ingestion service <i>without</i> learning about Kafka, ZooKeeper, and Kafka Streams.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eRIUocbyvY2oHwEK34pzE/e2ef72b2858c02e890446cfd34accb45/image3.png" />
</figure><p><sup><i>Pipelines horizontal sharding</i></sup></p><p>The diagram above shows how Pipelines separates the control plane, which is responsible for accounting, tracking shards, and pipeline lifecycle events, from the data path, which is a scalable group of Durable Object shards.</p><p>When a record (or batch of records) is written to Pipelines:</p><ol><li><p>The Pipelines Worker receives the records, either through the fetch handler or a Worker binding.</p></li><li><p>It contacts the Coordinator, based on the <code>pipeline_id</code>, to get the execution plan; subsequent reads are cached to reduce pressure on the Coordinator.</p></li><li><p>It executes the plan, which first shards to a set of Executors that primarily serve to scale read request handling.</p></li><li><p>These re-shard to another set of Executors that actually handle the writes, beginning with persisting to Durable Object storage, which is replicated for durability and availability by the <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/#under-the-hood-storage-relay-service"><u>Storage Relay Service</u></a> (SRS).</p></li><li><p>After SRS, the data is passed to any configured Transform Workers to customize it.</p></li><li><p>The data is batched, written to output files, and compressed (if applicable).</p></li><li><p>The files are written to the configured R2 bucket.</p></li></ol><p>Each step of this pipeline can signal backpressure upstream. We do this by leveraging <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream"><u>ReadableStreams</u></a> and responding with <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/429"><u>429s</u></a> when the total number of bytes awaiting write exceeds a threshold.
Each ReadableStream can cross Durable Object boundaries via <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>JSRPC</u></a> calls between Durable Objects. To improve performance, we reuse RPC stubs for connections between Durable Objects. Each step can also retry operations to handle any temporary unavailability in Durable Objects or R2.</p><p>We also guarantee delivery even while updating an existing pipeline. When you update a pipeline, we create a new deployment, including all the shards and Durable Objects described above. Requests are gracefully re-routed to the new pipeline. The old pipeline continues to write data into R2 until all of its Durable Object storage is drained, and we spin it down only after all the data has been written out. This way, you won’t lose data even while updating a pipeline.</p><p>One part of this pipeline, the Transform Workers, isn’t yet exposed. As we work to integrate Arroyo’s streaming engine with Pipelines, this will be a key part of how we hand data over for Arroyo to process.</p>
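<p>The backpressure behavior can be pictured with a minimal sketch. This is illustrative only, not the actual Pipelines implementation: the class name, the threshold, and the status codes are assumptions made for the example.</p>

```typescript
// Illustrative sketch of per-shard backpressure: accept writes until the
// bytes awaiting delivery exceed a high-water mark, then push back with a
// 429-style status so the client retries later.
const HIGH_WATER_MARK = 1 * 1024 * 1024; // hypothetical 1 MiB threshold

class ShardBuffer {
  private pendingBytes = 0;

  // Accept a record batch; return an HTTP-style status code.
  write(batch: Uint8Array): number {
    if (this.pendingBytes + batch.byteLength > HIGH_WATER_MARK) {
      return 429; // too many bytes awaiting write: signal backpressure
    }
    this.pendingBytes += batch.byteLength;
    return 202; // accepted for asynchronous delivery downstream
  }

  // Called once a downstream flush (e.g. a write to R2) completes.
  flushed(bytes: number): void {
    this.pendingBytes = Math.max(0, this.pendingBytes - bytes);
  }
}
```

<p>In the real system this signal propagates upstream across Durable Object boundaries through the ReadableStreams described above, so a slow R2 write eventually surfaces to the original caller as a 429.</p>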
    <div>
      <h2>So, what’s it cost?</h2>
      <a href="#so-whats-it-cost">
        
      </a>
    </div>
    <p>During the first phase of the open beta, there will be no additional charges beyond standard R2 storage and operation costs incurred when loading and accessing data. And as always, egress directly from R2 buckets is free, so you can process and query your data from any cloud or region without worrying about data transfer costs adding up.</p><p>In the future, we plan to introduce pricing based on volume of data ingested into Pipelines and delivered from Pipelines:</p><table><tr><td><p>
</p></td><td><p><b>Workers Paid ($5 / month)</b></p></td></tr><tr><td><p><b>Ingestion</b></p></td><td><p>First 50 GB per month included</p><p>$0.02 per additional GB</p></td></tr><tr><td><p><b>Delivery to R2</b></p></td><td><p>First 50 GB per month included</p><p>$0.02 per additional GB</p></td></tr></table><p>We’re also planning to make Pipelines available on the Workers Free plan as the beta progresses.</p><p>We’ll be sharing more as we bring transformations and additional sinks to Pipelines. We’ll provide at least 30 days’ notice before we make any changes or start charging for usage, which we expect to do by September 15, 2025.</p>
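<p>As a quick sanity check on what the planned pricing would mean in practice, here is a hedged sketch. The helper name and shape are ours, not part of any Pipelines API; it simply applies the table above (first 50 GB per month included in each direction, $0.02 per additional GB), with the $5/month Workers Paid subscription charged separately.</p>

```typescript
// Hypothetical helper: monthly overage under the planned Pipelines pricing.
const INCLUDED_GB = 50; // first 50 GB/month included, per direction
const RATE_PER_GB = 0.02; // $0.02 per additional GB

function pipelinesOverage(ingestedGB: number, deliveredGB: number): number {
  const billable = (gb: number) => Math.max(0, gb - INCLUDED_GB) * RATE_PER_GB;
  // Ingestion and delivery to R2 are metered independently.
  return billable(ingestedGB) + billable(deliveredGB);
}
```

<p>For example, a month with 75 GB ingested and 75 GB delivered would incur 25 GB of overage in each direction, or about $1.00 on top of the plan fee.</p>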
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>There’s a lot to build here, and we’re keen to build on the powerful components Arroyo provides: integrating Workers as UDFs (User-Defined Functions), adding new sources like Kafka clients, and extending Pipelines with new sinks (beyond R2).</p><p>We’ll also be integrating Pipelines with our just-launched <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/">R2 Data Catalog</a>: enabling you to ingest streams of data directly into Iceberg tables and immediately query them, without needing to rely on other systems.</p><p>In the meantime, you can:</p><ul><li><p>Get started and <a href="http://developers.cloudflare.com/pipelines/getting-started/"><u>create your first Pipeline</u></a></p></li><li><p><a href="http://developers.cloudflare.com/pipelines/"><u>Read the docs</u></a></p></li><li><p>Join the <code>#pipelines-beta</code> channel on <a href="http://discord.cloudflare.com/"><u>our Developer Discord</u></a></p></li></ul><p>… or deploy the example project directly: </p>
            <pre><code>$ npm create cloudflare@latest -- pipelines-starter \
  --template="cloudflare/pipelines-starter"</code></pre>
            <p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Pipelines]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">7rKz4iUFCDuhtjGXVbgFzl</guid>
            <dc:creator>Micah Wylde</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Pranshu Maheshwari</dc:creator>
        </item>
    </channel>
</rss>