
Migrate from S3 easily with the R2 Super Slurper

2022-11-15

2 min read
The R2 Super Slurper will enable anyone to bring any amount of data into R2 with minimal effort and risk

R2 is S3-compatible, globally distributed object storage that lets developers store large amounts of unstructured data without the costly egress bandwidth fees you commonly find with other providers.
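Because R2 speaks the S3 API, existing S3 tooling can usually be pointed at R2 just by swapping the endpoint. As a rough illustration (the account ID, bucket name, and environment variable names below are placeholders, not specific to anything announced in this post), here is the AWS SDK for JavaScript v3 configured to talk to an R2 bucket:

```ts
// A minimal sketch: pointing the standard AWS SDK for JavaScript (v3) at R2.
// The account ID, bucket name, and env var names are placeholder assumptions.
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto", // R2 has no regions; "auto" is the conventional value
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Any S3-style call works the same way against an R2 bucket.
const { Contents } = await r2.send(
  new ListObjectsV2Command({ Bucket: "my-bucket" })
);
console.log(Contents?.map((obj) => obj.Key));
```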

To enjoy this egress freedom, you'll need a plan for moving all the data you currently keep somewhere else into R2. You might want to do it all at once, moving as much data as quickly as possible while ensuring data consistency. Or would you prefer to move the data into R2 slowly, gradually shifting your reads from your old provider to R2, and only then decide whether to cut off your old storage or keep it as a backup for new objects in R2?

There are multiple architecture and implementation options for this kind of migration, but moving terabytes of data from one cloud storage provider to another is always problematic, always involves planning, and likely requires dedicated staffing.

And that was hard. But not anymore.

Today we're announcing the R2 Super Slurper, the feature that will enable you to move all your data to R2 in one giant slurp or sip by sip — all through a friendly, intuitive UI and API.

The first step: R2 Super Slurper Private Beta

One giant slurp

The very first iteration of the R2 Super Slurper allows you to target an S3 bucket and import the objects you have stored there into your R2 bucket. It's a simple, one-time import that covers the most common scenarios. Point to your existing S3 source, grant the R2 Super Slurper permissions to read the objects you want to migrate, and an asynchronous job will take care of the rest.

An active migration implies a big bang of data movement from your old buckets to R2.

You'll also be able to save the definitions and credentials used to access your source bucket, so you can migrate different folders from within that bucket in new operations without having to define URLs and credentials all over again. This alone will save you from scripting your way through buckets with many paths you'd like to validate for consistency. During the beta stages, and with your feedback, we will evolve the R2 Super Slurper to the point where anyone can achieve an entirely consistent super slurp with the click of just a few buttons.
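To make the "scripting your way through buckets" point concrete, here is a naive sketch of the kind of per-prefix consistency check you would otherwise write by hand, comparing the source bucket's listing against the R2 bucket's listing via the S3 API. The bucket names, prefix, and endpoint below are assumptions for illustration only:

```ts
// A rough sketch of a hand-rolled consistency check: list a prefix in the
// source S3 bucket and in the R2 bucket, then report keys not yet migrated.
// Bucket names, prefix, and the R2 endpoint are placeholder assumptions;
// credentials are picked up from the usual environment configuration.
import { S3Client, paginateListObjectsV2 } from "@aws-sdk/client-s3";

const source = new S3Client({ region: "us-east-1" }); // existing provider
const r2 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com", // placeholder
});

async function listKeys(client: S3Client, bucket: string, prefix: string): Promise<Set<string>> {
  const keys = new Set<string>();
  for await (const page of paginateListObjectsV2({ client }, { Bucket: bucket, Prefix: prefix })) {
    for (const obj of page.Contents ?? []) {
      if (obj.Key) keys.add(obj.Key);
    }
  }
  return keys;
}

const [sourceKeys, r2Keys] = await Promise.all([
  listKeys(source, "legacy-bucket", "images/"),
  listKeys(r2, "new-r2-bucket", "images/"),
]);

const missing = [...sourceKeys].filter((key) => !r2Keys.has(key));
console.log(`${missing.length} objects not yet migrated`, missing.slice(0, 10));
```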

Automatic sip by sip migration

Future development also includes automatic sip by sip migration, which provides a way to incrementally copy objects into R2 as they are requested by end users. It lets you start serving objects from R2 as they migrate, saving you money immediately.

A passive migration strategy involves always serving through Cloudflare, but fetching the file from the external source the first time it is requested.

The flow of the requests and object migration will look like this (a rough sketch of the same pattern follows the list):

  • Check for Object — A request arrives at Cloudflare (1), and we check the R2 bucket for the requested object (2). If the object exists, R2 serves it (3).

  • Copy the Object — If the object does not exist in R2, a request for the object flows to the origin bucket (2a). Once there's an answer with an object, we serve it and copy it into R2 (2b).

  • Serve the Object — R2 serves all future requests for the object (3).
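The Super Slurper will handle all of this for you, but for a sense of what the pattern involves, here is a rough hand-rolled sketch of the same check-then-copy flow written as a Cloudflare Worker with an R2 bucket binding. The binding name BUCKET, the origin URL, and the minimal error handling are assumptions for illustration; this is not the Super Slurper's implementation:

```ts
// A hand-rolled sketch of the pull-through pattern with a Worker and an R2
// bucket binding (types from @cloudflare/workers-types). The binding name and
// origin URL are placeholder assumptions.
export interface Env {
  BUCKET: R2Bucket;
}

const ORIGIN_BASE = "https://legacy-bucket.s3.amazonaws.com"; // placeholder origin

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // 1–3: check R2 first; if the object is already there, serve it.
    const cached = await env.BUCKET.get(key);
    if (cached) {
      const headers = new Headers();
      cached.writeHttpMetadata(headers);
      headers.set("etag", cached.httpEtag);
      return new Response(cached.body, { headers });
    }

    // 2a: on a miss, fetch the object from the origin bucket.
    const originResponse = await fetch(`${ORIGIN_BASE}/${key}`);
    if (!originResponse.ok) {
      return new Response("Object not found at origin", { status: 404 });
    }

    // 2b: copy it into R2, then serve it. Buffering keeps the sketch simple;
    // a production version would stream and handle errors more carefully.
    const data = await originResponse.arrayBuffer();
    await env.BUCKET.put(key, data, { httpMetadata: originResponse.headers });

    // 3: future requests for this key hit the R2 branch above.
    return new Response(data, {
      headers: {
        "content-type":
          originResponse.headers.get("content-type") ?? "application/octet-stream",
      },
    });
  },
};
```

Buffering the whole object keeps the sketch readable; for large objects you would stream instead, and the sip by sip mode is meant to remove the need for this kind of glue code entirely.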

With this capability, you can copy objects previously scattered across one or even multiple buckets from other vendors, while ensuring that everything requested by end users gets served from R2. And because the R2 Super Slurper only needs to sip an object from elsewhere on the first request, you start saving on egress fees for every subsequent one.

We are targeting S3-compatible buckets for now, but you can expect other sources to become available during 2023.

Join the waitlist for the R2 Super Slurper private beta

To access the R2 Super Slurper, you must first be an R2 user and sign up for the R2 Super Slurper waitlist here.

We will collaborate closely with many early users during the private beta stage to refine and test the service. Soon, we'll announce an open beta where users can sign up for the service.

Make sure to join our Discord server and get in touch with a fantastic community of users and Cloudflare staff for all R2-related topics!


Aly Cabral (@Aly_Cabral)
