---
title: R2 Changelog
image: https://developers.cloudflare.com/cf-twitter-card.png
---



# Changelog

New updates and improvements at Cloudflare.

[ Subscribe to RSS ](https://developers.cloudflare.com/changelog/rss/index.xml) [ View RSS feeds ](https://developers.cloudflare.com/fundamentals/new-features/available-rss-feeds/) 

R2


Apr 30, 2026
1. ### [Empty buckets and delete folders from the R2 dashboard](https://developers.cloudflare.com/changelog/post/2026-04-30-r2-empty-bucket-folder-delete/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now empty an entire [R2](https://developers.cloudflare.com/r2/) bucket or delete folders directly from the dashboard. Emptying a bucket is required before you can delete it. Previously, this required scripting or configuring [lifecycle rules](https://developers.cloudflare.com/r2/buckets/object-lifecycles/). Now, the dashboard can handle it in a single action.  
#### Empty a bucket  
Go to your bucket's **Settings** tab and select **Empty** under the **Empty Bucket** section. This deletes all objects in the bucket while preserving the bucket and its configuration. For large buckets, the operation runs in the background and the dashboard displays progress.  
Emptying a bucket is also a prerequisite for deleting it. The dashboard now guides you through both steps in one place.  
![Empty Bucket and Delete Bucket sections in the R2 dashboard Settings tab](https://developers.cloudflare.com/_astro/empty-bucket-changelog.DjuMZppm_11Omax.webp)  
#### Delete folders  
R2 uses a flat object structure. The dashboard groups objects that share a common prefix into folders when the **View prefixes as directories** checkbox is selected. Deleting a folder removes every object under that prefix.  
From the **Objects** tab, you can select one or more folders and delete them alongside individual objects.  
For step-by-step instructions, refer to [Delete buckets](https://developers.cloudflare.com/r2/buckets/delete-buckets/) and [Delete objects](https://developers.cloudflare.com/r2/objects/delete-objects/).
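Before this dashboard support, emptying a bucket from a script typically meant paging through `ListObjectsV2` and issuing `DeleteObjects` requests, which accept at most 1,000 keys each. Here is a minimal sketch of the batching step; the client wiring in the trailing comments assumes `@aws-sdk/client-s3` pointed at your R2 S3 endpoint, and the names there are illustrative:  
```typescript
// DeleteObjects accepts at most 1,000 keys per request, so a
// "delete everything" script deletes objects in chunks of up to 1,000.
function chunkKeys(keys: string[], batchSize = 1000): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < keys.length; i += batchSize) {
    batches.push(keys.slice(i, i + batchSize));
  }
  return batches;
}

// Wiring sketch (assumes an `s3` client from @aws-sdk/client-s3
// configured with your R2 endpoint and credentials):
//
// for await (const page of paginateListObjectsV2({ client: s3 }, { Bucket: bucket })) {
//   const keys = (page.Contents ?? []).map((o) => o.Key!);
//   for (const batch of chunkKeys(keys)) {
//     await s3.send(new DeleteObjectsCommand({
//       Bucket: bucket,
//       Delete: { Objects: batch.map((Key) => ({ Key })) },
//     }));
//   }
// }
```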

Apr 22, 2026
1. ### [R2 Data Catalog snapshot expiration now removes unreferenced data files](https://developers.cloudflare.com/changelog/post/2026-04-22-snapshot-expiration-cleans-data-files/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/), a managed [Apache Iceberg ↗](https://iceberg.apache.org/) catalog built into R2, now removes unreferenced data files during automatic snapshot expiration. This improvement reduces storage costs and eliminates the need to run manual maintenance jobs to reclaim space from deleted data.  
Previously, snapshot expiration only cleaned up Iceberg metadata files such as manifests and manifest lists. Data files that were no longer referenced by active snapshots remained in R2 storage until you manually ran `remove_orphan_files` or `expire_snapshots` through an engine like Spark. This required extra operational overhead and left stale data files consuming storage.  
Snapshot expiration now handles both metadata and data file cleanup automatically. When a snapshot is expired, any data files that are no longer referenced by retained snapshots are removed from R2 storage.  
Terminal window  
```  
# Enable catalog-level snapshot expiration  
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket \  
  --older-than-days 7 \  
  --retain-last 10  
```  
To learn more about snapshot expiration and other automatic maintenance operations, refer to the [table maintenance documentation](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/).

Feb 23, 2026
1. ### [Backup and restore API for Sandbox SDK](https://developers.cloudflare.com/changelog/post/2026-02-23-sandbox-backup-restore-api/)  
[ Agents ](https://developers.cloudflare.com/agents/)[ R2 ](https://developers.cloudflare.com/r2/)[ Containers ](https://developers.cloudflare.com/containers/)  
[Sandboxes](https://developers.cloudflare.com/sandbox/) now support `createBackup()` and `restoreBackup()` methods for creating and restoring point-in-time snapshots of directories.  
This allows you to restore environments quickly. For instance, to develop in a sandbox you may need to clone a user's codebase and run a build step. `git clone` and `npm install` can take minutes, and you don't want to repeat these steps every time the user starts their sandbox.  
Now, after the initial setup, you can just call `createBackup()`, then `restoreBackup()` the next time this environment is needed. This makes it practical to pick up exactly where a user left off, even after days of inactivity, without repeating expensive setup steps.  
TypeScript  
```  
const sandbox = getSandbox(env.Sandbox, "my-sandbox");  
// Make non-trivial changes to the file system  
await sandbox.gitCheckout(endUserRepo, { targetDir: "/workspace" });  
await sandbox.exec("npm install", { cwd: "/workspace" });  
// Create a point-in-time backup of the directory  
const backup = await sandbox.createBackup({ dir: "/workspace" });  
// Store the handle for later use  
await env.KV.put(`backup:${userId}`, JSON.stringify(backup));  
// ... in a future session...  
// Restore instead of re-cloning and reinstalling  
await sandbox.restoreBackup(backup);  
```  
Backups are stored in [R2](https://developers.cloudflare.com/r2) and can take advantage of [R2 object lifecycle rules](https://developers.cloudflare.com/sandbox/guides/backup-restore/#configure-r2-lifecycle-rules-for-automatic-cleanup) to ensure they do not persist forever.  
Key capabilities:  
   * **Persist and reuse across sandbox sessions** — Easily store backup handles in KV, D1, or Durable Object storage for use in subsequent sessions  
   * **Usable across multiple instances** — Fork a backup across many sandboxes for parallel work  
   * **Named backups** — Provide optional human-readable labels for easier management  
   * **TTLs** — Set time-to-live durations so backups are automatically removed from storage once they are no longer needed  
Note  
Backup and restore currently uses a FUSE overlay. Soon, native snapshotting at a lower level will be added to Containers and Sandboxes, improving speed and ergonomics. The current backup functionality provides a significant speed improvement over manually recreating a file system, but it will be further optimized in the future. The new snapshotting system will use a similar API, so changing to this system will be simple once it is available.  
To get started, refer to the [backup and restore guide](https://developers.cloudflare.com/sandbox/guides/backup-restore/) for setup instructions and usage patterns, or the [Backups API reference](https://developers.cloudflare.com/sandbox/api/backups/) for full method documentation.

Feb 03, 2026
1. ### [Improve Global Upload Performance with R2 Local Uploads - Now in Open Beta](https://developers.cloudflare.com/changelog/post/2026-02-03-r2-local-uploads/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[Local Uploads](https://developers.cloudflare.com/r2/buckets/local-uploads/) is now available in open beta. Enable it on your [R2](https://developers.cloudflare.com/r2/) bucket to improve upload performance when clients upload data from a different region than your bucket. With Local Uploads enabled, object data is written to storage infrastructure near the client, then asynchronously replicated to your bucket. The object is immediately accessible and remains strongly consistent throughout. Refer to [How R2 works](https://developers.cloudflare.com/r2/how-r2-works/) for details on how data is written to your bucket.  
In our tests, we observed **up to 75% reduction in Time to Last Byte (TTLB)** for upload requests when Local Uploads is enabled.  
![Local Uploads latency comparison showing p50 TTLB dropping from around 2 seconds to 500ms after enabling Local Uploads](https://developers.cloudflare.com/_astro/local-uploads-latency.R4pUgVuI_2cwpHU.webp)  
This feature is ideal when:  
   * Your users are globally distributed  
   * Upload performance and reliability are critical to your application  
   * You want to optimize write performance without changing your bucket's primary location  
To enable Local Uploads on your bucket, find **Local Uploads** in your bucket settings in the [Cloudflare Dashboard ↗](https://dash.cloudflare.com/?to=/:account/r2/overview), or run:  
Terminal window  
```  
npx wrangler r2 bucket local-uploads enable <BUCKET_NAME>  
```  
Enabling Local Uploads on a bucket is seamless: existing uploads will complete as expected and there’s no interruption to traffic. There is no additional cost to enable Local Uploads: upload requests incur the same standard [Class A operation costs](https://developers.cloudflare.com/r2/pricing/) as upload requests made without Local Uploads.  
For more information, refer to [Local Uploads](https://developers.cloudflare.com/r2/buckets/local-uploads/).

Dec 18, 2025
1. ### [R2 Data Catalog now supports automatic snapshot expiration](https://developers.cloudflare.com/changelog/post/2025-12-18-r2-data-catalog-snapshot-expiration/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) now supports automatic snapshot expiration for Apache Iceberg tables.  
In Apache Iceberg, a snapshot is metadata that represents the state of a table at a given point in time. Every mutation creates a new snapshot. Snapshots enable powerful features like time travel queries and rollbacks, but they accumulate over time.  
Without regular cleanup, these accumulated snapshots can lead to:  
   * Metadata overhead  
   * Slower table operations  
   * Increased storage costs  
Snapshot expiration in R2 Data Catalog automatically removes old table snapshots based on your configured retention policy, improving performance and reducing storage costs.  
Terminal window  
```  
# Enable catalog-level snapshot expiration  
# Expire snapshots older than 7 days, always retain at least 10 recent snapshots  
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket \  
  --older-than-days 7 \  
  --retain-last 10  
```  
Snapshot expiration uses two parameters to determine which snapshots to remove:  
   * `--older-than-days`: age threshold in days  
   * `--retain-last`: minimum snapshot count to retain  
Both conditions must be met before a snapshot is expired, ensuring you always retain recent snapshots even if they exceed the age threshold.  
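The actual expiration runs server-side in R2, but as a purely illustrative sketch of how the two flags interact, the selection rule looks like this:  
```typescript
interface Snapshot {
  id: string;
  timestampMs: number;
}

const DAY_MS = 86_400_000;

// A snapshot expires only if it is BOTH older than `olderThanDays`
// AND outside the `retainLast` most recent snapshots.
function expiredSnapshots(
  snapshots: Snapshot[],
  olderThanDays: number,
  retainLast: number,
  nowMs: number,
): Snapshot[] {
  const newestFirst = [...snapshots].sort((a, b) => b.timestampMs - a.timestampMs);
  const cutoff = nowMs - olderThanDays * DAY_MS;
  // Everything past the first `retainLast` entries is a candidate, but only
  // candidates older than the age cutoff actually expire.
  return newestFirst.slice(retainLast).filter((s) => s.timestampMs < cutoff);
}
```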
This feature complements [automatic compaction](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/), which optimizes query performance by combining small data files into larger ones. Together, these automatic maintenance operations keep your Iceberg tables performant and cost-efficient without manual intervention.  
To learn more about snapshot expiration and how to configure it, visit our [table maintenance documentation](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/) or see [how to manage catalogs](https://developers.cloudflare.com/r2/data-catalog/manage-catalogs/).

Nov 21, 2025
1. ### [Mount R2 buckets in Containers](https://developers.cloudflare.com/changelog/post/2025-11-21-fuse-support-in-containers/)  
[ Containers ](https://developers.cloudflare.com/containers/)[ R2 ](https://developers.cloudflare.com/r2/)  
[Containers](https://developers.cloudflare.com/containers/) now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes, allowing applications to interact with [R2](https://developers.cloudflare.com/r2/) using standard filesystem operations.  
Common use cases include:  
   * Bootstrapping containers with datasets, models, or dependencies for [sandboxes](https://developers.cloudflare.com/sandbox/) and [agent](https://developers.cloudflare.com/agents/) environments  
   * Persisting user configuration or application state without managing downloads  
   * Accessing large static files without bloating container images or downloading at startup  
FUSE adapters like [tigrisfs ↗](https://github.com/tigrisdata/tigrisfs), [s3fs ↗](https://github.com/s3fs-fuse/s3fs-fuse), and [gcsfuse ↗](https://github.com/GoogleCloudPlatform/gcsfuse) can be installed in your container image and configured to mount buckets at startup.  
```  
FROM alpine:3.20  
# Install FUSE and dependencies  
RUN apk update && \  
    apk add --no-cache ca-certificates fuse curl bash  
# Install tigrisfs  
RUN ARCH=$(uname -m) && \  
    if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \  
    if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \  
    VERSION=$(curl -s https://api.github.com/repos/tigrisdata/tigrisfs/releases/latest | grep -o '"tag_name": "[^"]*' | cut -d'"' -f4) && \  
    curl -L "https://github.com/tigrisdata/tigrisfs/releases/download/${VERSION}/tigrisfs_${VERSION#v}_linux_${ARCH}.tar.gz" -o /tmp/tigrisfs.tar.gz && \  
    tar -xzf /tmp/tigrisfs.tar.gz -C /usr/local/bin/ && \  
    rm /tmp/tigrisfs.tar.gz && \  
    chmod +x /usr/local/bin/tigrisfs  
# Create startup script that mounts bucket  
RUN printf '#!/bin/sh\n\  
    set -e\n\  
    mkdir -p /mnt/r2\n\  
    R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"\n\  
    /usr/local/bin/tigrisfs --endpoint "${R2_ENDPOINT}" -f "${BUCKET_NAME}" /mnt/r2 &\n\  
    sleep 3\n\  
    ls -lah /mnt/r2\n\  
    ' > /startup.sh && chmod +x /startup.sh  
CMD ["/startup.sh"]  
```  
See the [Mount R2 buckets with FUSE](https://developers.cloudflare.com/containers/examples/r2-fuse-mount/) example for a complete guide on mounting R2 buckets and/or other S3-compatible storage buckets within your containers.

Oct 06, 2025
1. ### [R2 Data Catalog table-level compaction](https://developers.cloudflare.com/changelog/post/2025-10-06-data-catalog-table-compaction/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now enable compaction for individual [Apache Iceberg ↗](https://iceberg.apache.org/) tables in [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/), giving you fine-grained control over different workloads.  
Terminal window  
```  
# Enable compaction for a specific table (no token required)  
npx wrangler r2 bucket catalog compaction enable <BUCKET> <NAMESPACE> <TABLE> --target-size 256  
```  
This allows you to:  
   * Apply different target file sizes per table  
   * Disable compaction for specific tables  
   * Optimize based on table-specific access patterns  
Learn more at [Manage catalogs](https://developers.cloudflare.com/r2/data-catalog/manage-catalogs/).

Sep 25, 2025
1. ### [R2 Data Catalog now supports compaction](https://developers.cloudflare.com/changelog/post/2025-09-25-data-catalog-compaction/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now enable automatic compaction for [Apache Iceberg ↗](https://iceberg.apache.org/) tables in [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) to improve query performance.  
Compaction is the process of taking a group of small files and combining them into fewer, larger files. This is an important maintenance operation, as it helps keep query performance consistent by reducing the number of files that need to be scanned.  
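R2's compaction is fully managed, but the general technique can be sketched as a planner that greedily groups small files toward the target size (an illustration of the idea, not the actual implementation):  
```typescript
interface DataFile {
  path: string;
  sizeMB: number;
}

// Greedily pack files smaller than the target into batches of roughly
// `targetMB`; files already at or above the target are left alone.
function planCompaction(files: DataFile[], targetMB: number): DataFile[][] {
  const small = files
    .filter((f) => f.sizeMB < targetMB)
    .sort((a, b) => a.sizeMB - b.sizeMB);
  const batches: DataFile[][] = [];
  let current: DataFile[] = [];
  let currentMB = 0;
  for (const f of small) {
    if (current.length > 0 && currentMB + f.sizeMB > targetMB) {
      batches.push(current);
      current = [];
      currentMB = 0;
    }
    current.push(f);
    currentMB += f.sizeMB;
  }
  if (current.length > 1) batches.push(current); // a lone file gains nothing
  return batches;
}
```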
To enable automatic compaction in R2 Data Catalog, find it under **R2 Data Catalog** in your R2 bucket settings in the dashboard.  
![compaction-dash](https://developers.cloudflare.com/_astro/compaction.MLojYuHL_wkqll.webp)  
Or with [Wrangler](https://developers.cloudflare.com/workers/wrangler/), run:  
Terminal window  
```  
npx wrangler r2 bucket catalog compaction enable <BUCKET_NAME> --target-size 128 --token <API_TOKEN>  
```  
To get started with compaction, check out [manage catalogs](https://developers.cloudflare.com/r2/data-catalog/manage-catalogs/). For best practices and limitations, refer to [about compaction](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/).

May 01, 2025
1. ### [R2 Dashboard experience gets new updates](https://developers.cloudflare.com/changelog/post/2025-05-01-r2-dashboard-updates/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
We're excited to announce several improvements to the [Cloudflare R2](https://developers.cloudflare.com/r2/) dashboard experience that make managing your object storage easier and more intuitive:  
![Cloudflare R2 Dashboard](https://developers.cloudflare.com/_astro/r2-dashboard-updates.B7WXxzMk_Z2vfGut.webp)  
#### All-new settings page  
We've redesigned the bucket settings page, giving you a centralized location to manage all your bucket configurations in one place.  
#### Improved navigation and sharing  
   * Deeplink support for prefix directories: Navigate through your bucket hierarchy without losing your state. Your browser's back button now works as expected, and you can share direct links to specific prefix directories with teammates.  
   * Objects as clickable links: Objects are now proper links that you can copy or `CMD + Click` to open in a new tab.  
#### Clearer public access controls  
   * Renamed "r2.dev domain" to "Public Development URL" for better clarity when exposing bucket contents for non-production workloads.  
   * Public Access status now clearly displays "Enabled" when your bucket is exposed to the internet (via Public Development URL or Custom Domains).  
We've also made numerous other usability improvements across the board to make your R2 experience smoother and more productive.

Apr 10, 2025
1. ### [Cloudflare Pipelines now available in beta](https://developers.cloudflare.com/changelog/post/2025-04-10-launching-pipelines/)  
[ Pipelines ](https://developers.cloudflare.com/pipelines/)[ R2 ](https://developers.cloudflare.com/r2/)[ Workers ](https://developers.cloudflare.com/workers/)  
[Cloudflare Pipelines](https://developers.cloudflare.com/pipelines) is now available in beta to all users with a [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing) plan.  
Pipelines let you ingest high volumes of real-time data without managing the underlying infrastructure. A single pipeline can ingest up to 100 MB of data per second, via HTTP or from a [Worker](https://developers.cloudflare.com/workers). Ingested data is automatically batched, written to output files, and delivered to an [R2 bucket](https://developers.cloudflare.com/r2) in your account. You can use Pipelines to build a data lake of clickstream data, or to store events from a Worker.  
Create your first pipeline with a single command:  
Create a pipeline  
```  
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket  
🌀 Authorizing R2 bucket "my-bucket"  
🌀 Creating pipeline named "my-clickstream-pipeline"  
✅ Successfully created pipeline my-clickstream-pipeline  
Id:    0e00c5ff09b34d018152af98d06f5a1xvc  
Name:  my-clickstream-pipeline  
Sources:  
  HTTP:  
    Endpoint:        https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/  
    Authentication:  off  
    Format:          JSON  
  Worker:  
    Format:  JSON  
Destination:  
  Type:         R2  
  Bucket:       my-bucket  
  Format:       newline-delimited JSON  
  Compression:  GZIP  
Batch hints:  
  Max bytes:     100 MB  
  Max duration:  300 seconds  
  Max records:   100,000  
🎉 You can now send data to your pipeline!  
Send data to your pipeline's HTTP endpoint:  
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'  
To send data to your pipeline from a Worker, add the following configuration to your config file:  
{  
  "pipelines": [  
    {  
      "pipeline": "my-clickstream-pipeline",  
      "binding": "PIPELINE"  
    }  
  ]  
}  
```  
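With the `PIPELINE` binding configured as in the transcript above, a Worker can send records directly. A minimal sketch, assuming the binding exposes a `send()` method that accepts an array of JSON-serializable records:  
```typescript
// Assumed shape of the Pipelines Worker binding: an object with a
// `send()` method taking an array of JSON-serializable records.
type PipelineBinding = { send(records: object[]): Promise<void> };

interface Env {
  PIPELINE: PipelineBinding;
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // One clickstream record per request; Pipelines batches records
    // according to the batch hints before writing to R2.
    await env.PIPELINE.send([
      {
        url: request.url,
        method: request.method,
        receivedAt: new Date().toISOString(),
      },
    ]);
    return new Response("accepted", { status: 202 });
  },
};

export default worker;
```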
Head over to our [getting started guide](https://developers.cloudflare.com/pipelines/getting-started) for an in-depth tutorial to building with Pipelines.

Apr 10, 2025
1. ### [R2 Data Catalog is a managed Apache Iceberg data catalog built directly into R2 buckets](https://developers.cloudflare.com/changelog/post/2025-04-10-r2-data-catalog-beta/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
Today, we're launching [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) in open beta, a managed Apache Iceberg catalog built directly into your [Cloudflare R2](https://developers.cloudflare.com/r2/) bucket.  
If you're not already familiar with it, [Apache Iceberg ↗](https://iceberg.apache.org/) is an open table format designed to handle large-scale analytics datasets stored in object storage, offering ACID transactions and schema evolution. R2 Data Catalog exposes a standard Iceberg REST catalog interface, so you can connect engines like [Spark](https://developers.cloudflare.com/r2/data-catalog/config-examples/spark-scala/), [Snowflake](https://developers.cloudflare.com/r2/data-catalog/config-examples/snowflake/), and [PyIceberg](https://developers.cloudflare.com/r2/data-catalog/config-examples/pyiceberg/) to start querying your tables using the tools you already know.  
To enable a data catalog on your R2 bucket, find **R2 Data Catalog** in your bucket's settings in the dashboard, or run:  
Terminal window  
```  
npx wrangler r2 bucket catalog enable my-bucket  
```  
And that's it. You'll get a catalog URI and warehouse name you can plug into your favorite Iceberg engines.  
Visit our [getting started guide](https://developers.cloudflare.com/r2/data-catalog/get-started/) for step-by-step instructions on enabling R2 Data Catalog, creating tables, and running your first queries.

Mar 06, 2025
1. ### [Set retention policies for your R2 bucket with bucket locks](https://developers.cloudflare.com/changelog/post/2025-03-06-r2-bucket-locks/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now use [bucket locks](https://developers.cloudflare.com/r2/buckets/bucket-locks/) to set retention policies on your [R2 buckets](https://developers.cloudflare.com/r2/buckets/) (or specific prefixes within your buckets) for a specified period — or indefinitely. This can help ensure compliance by protecting important data from accidental or malicious deletion.  
Locks give you a few ways to ensure your objects are retained (not deleted or overwritten). You can:  
   * Lock objects for a specific duration, for example 90 days.  
   * Lock objects until a certain date, for example January 1, 2030.  
   * Lock objects indefinitely, until the lock is explicitly removed.  
Buckets can have up to 1,000 [bucket lock rules](https://developers.cloudflare.com/r2/buckets/). Each rule specifies which objects it covers (via prefix) and how long those objects must remain retained.  
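Conceptually, each rule matches objects by prefix and then applies either a retention window or an indefinite hold. A purely illustrative sketch of those semantics (the real enforcement happens inside R2):  
```typescript
interface LockRule {
  name: string;
  prefix?: string;        // rule applies to keys with this prefix (all keys if absent)
  retentionDays?: number; // lock for a duration after upload
  indefinite?: boolean;   // lock until the rule is removed
}

const DAY_MS = 86_400_000;

// An object is protected if any rule matches its key and its retention
// window (or an indefinite hold) is still in effect.
function isLocked(key: string, uploadedMs: number, rules: LockRule[], nowMs: number): boolean {
  return rules.some((rule) => {
    if (rule.prefix && !key.startsWith(rule.prefix)) return false;
    if (rule.indefinite) return true;
    if (rule.retentionDays === undefined) return false;
    return nowMs < uploadedMs + rule.retentionDays * DAY_MS;
  });
}
```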
Here are a couple of examples showing how you can configure bucket lock rules using [Wrangler](https://developers.cloudflare.com/workers/wrangler/):  
#### Ensure all objects in a bucket are retained for at least 180 days  
Terminal window  
```  
npx wrangler r2 bucket lock add <bucket> --name 180-days-all --retention-days 180  
```  
#### Prevent deletion or overwriting of all logs indefinitely (via prefix)  
Terminal window  
```  
npx wrangler r2 bucket lock add <bucket> --name indefinite-logs --prefix logs/ --retention-indefinite  
```  
For more information on bucket locks and how to set retention policies for objects in your R2 buckets, refer to our [documentation](https://developers.cloudflare.com/r2/buckets/bucket-locks/).

Feb 24, 2025
1. ### [Super Slurper now supports migrations from all S3-compatible storage providers](https://developers.cloudflare.com/changelog/post/2025-02-24-r2-super-slurper-s3-compatible-support/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[Super Slurper](https://developers.cloudflare.com/r2/data-migration/super-slurper/) can now migrate data from any S3-compatible object storage provider to [Cloudflare R2](https://developers.cloudflare.com/r2/). This includes transfers from services like MinIO, Wasabi, Backblaze B2, and DigitalOcean Spaces.  
![Super Slurper S3-Compatible Source](https://developers.cloudflare.com/_astro/super-slurper-s3-compat-screenshot-border.D8Gd5eye_dt8CT.webp)  
For more information on Super Slurper and how to migrate data from your existing S3-compatible storage buckets to R2, refer to our [documentation](https://developers.cloudflare.com/r2/data-migration/super-slurper/).

Feb 14, 2025
1. ### [Super Slurper now transfers data to R2 up to 5x faster](https://developers.cloudflare.com/changelog/post/2025-02-14-r2-super-slurper-faster-migrations/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[Super Slurper](https://developers.cloudflare.com/r2/data-migration/super-slurper/) now transfers data from cloud object storage providers like AWS S3 and Google Cloud Storage to [Cloudflare R2](https://developers.cloudflare.com/r2/) up to 5x faster than it did before.  
We moved from a centralized service to a distributed system built on the Cloudflare Developer Platform — using [Cloudflare Workers](https://developers.cloudflare.com/workers/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), and [Queues](https://developers.cloudflare.com/queues/) — to both improve performance and increase system concurrency. We'll share more details about how we did it soon!  
![Super Slurper Objects Migrated](https://developers.cloudflare.com/_astro/slurper-objects-over-time-border.BFDkMQUw_KFpzV.webp)  
_Time to copy 75,000 objects from AWS S3 to R2 decreased from 15 minutes 30 seconds (old) to 3 minutes 25 seconds (after performance improvements)_  
For more information on Super Slurper and how to migrate data from existing object storage to R2, refer to our [documentation](https://developers.cloudflare.com/r2/data-migration/super-slurper/).

[Search all changelog entries](https://developers.cloudflare.com/search/?contentType=Changelog+entry) 