---
title: Storage Changelog
image: https://developers.cloudflare.com/cf-twitter-card.png
---

# Changelog

New updates and improvements at Cloudflare.

[ Subscribe to RSS ](https://developers.cloudflare.com/changelog/rss/index.xml) [ View RSS feeds ](https://developers.cloudflare.com/fundamentals/new-features/available-rss-feeds/) 

Storage


Mar 19, 2026
1. ### [Hyperdrive now supports custom TLS/SSL certificates for MySQL](https://developers.cloudflare.com/changelog/post/2026-03-19-hyperdrive-mysql-custom-certificate-support/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive now supports custom TLS/SSL certificates for MySQL databases, bringing the same certificate options previously available for PostgreSQL to MySQL connections.  
You can now configure:  
   * **Server certificate verification** with `VERIFY_CA` or `VERIFY_IDENTITY` SSL modes to verify that your MySQL database server's certificate is signed by the expected certificate authority (CA).  
   * **Client certificates** (mTLS) for Hyperdrive to authenticate itself to your MySQL database with credentials beyond username and password.  
Create a Hyperdrive configuration with custom certificates for MySQL:  
Terminal window  
```  
# Upload a CA certificate  
npx wrangler cert upload certificate-authority --ca-cert your-ca-cert.pem --name your-custom-ca-name  
# Create a Hyperdrive with VERIFY_IDENTITY mode  
npx wrangler hyperdrive create your-hyperdrive-config \  
  --connection-string="mysql://user:password@hostname:port/database" \  
  --ca-certificate-id <CA_CERT_ID> \  
  --sslmode VERIFY_IDENTITY  
```  
For more information, refer to [SSL/TLS certificates for Hyperdrive](https://developers.cloudflare.com/hyperdrive/configuration/tls-ssl-certificates-for-hyperdrive/) and [MySQL TLS/SSL modes](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/).

Mar 16, 2026
1. ### [Return up to 50 query results with values or metadata](https://developers.cloudflare.com/changelog/post/2026-03-16-topk-limit-increased-to-50/)  
[ Vectorize ](https://developers.cloudflare.com/vectorize/)  
You can now set `topK` up to `50` when a Vectorize query returns values or full metadata. This raises the previous limit of `20` for queries that use `returnValues: true` or `returnMetadata: "all"`.  
Use the higher limit when you need more matches in a single query response without dropping values or metadata. Refer to the [Vectorize API reference](https://developers.cloudflare.com/vectorize/reference/client-api/) for query options and current `topK` limits.
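As a sketch of how the caps interact, the helper below mirrors the documented limits. The helper name is illustrative, and the cap of `100` for queries that return neither values nor full metadata is taken from the Vectorize limits page at the time of writing:

JavaScript
```javascript
// Hypothetical helper mirroring the documented topK caps: queries returning
// values or full metadata are capped at 50 (previously 20); other queries
// are capped at the general limit (100 at the time of writing).
function maxTopK({ returnValues = false, returnMetadata = "none" } = {}) {
  return returnValues || returnMetadata === "all" ? 50 : 100;
}

console.log(maxTopK({ returnValues: true })); // 50
console.log(maxTopK({ returnMetadata: "all" })); // 50
console.log(maxTopK()); // 100
```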

Feb 24, 2026
1. ### [deleteAll() now deletes Durable Object alarm](https://developers.cloudflare.com/changelog/post/2026-02-24-deleteall-deletes-alarms/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
`deleteAll()` now deletes a Durable Object alarm in addition to stored data for Workers with a compatibility date of `2026-02-24` or later. This change simplifies clearing a Durable Object's storage with a single API call.  
Previously, `deleteAll()` only deleted user-stored data for an object. Alarm usage stores metadata in an object's storage, which required a separate `deleteAlarm()` call to fully clean up all storage for an object. The `deleteAll()` change applies to both KV-backed and SQLite-backed Durable Objects.  
JavaScript  
```  
// Before: two API calls required to clear all storage  
await this.ctx.storage.deleteAlarm();  
await this.ctx.storage.deleteAll();  
// Now: a single call clears both data and the alarm  
await this.ctx.storage.deleteAll();  
```  
For more information, refer to the [Storage API documentation](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#deleteall).
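To opt in, set your Worker's compatibility date to `2026-02-24` or later. A minimal `wrangler.jsonc` fragment (binding and class names here are illustrative):

JSONC
```
{
  "compatibility_date": "2026-02-24",
  "durable_objects": {
    "bindings": [{ "name": "MY_DO", "class_name": "MyDurableObject" }]
  }
}
```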

Feb 23, 2026
1. ### [Backup and restore API for Sandbox SDK](https://developers.cloudflare.com/changelog/post/2026-02-23-sandbox-backup-restore-api/)  
[ Agents ](https://developers.cloudflare.com/agents/)[ R2 ](https://developers.cloudflare.com/r2/)[ Containers ](https://developers.cloudflare.com/containers/)  
[Sandboxes](https://developers.cloudflare.com/sandbox/) now support `createBackup()` and `restoreBackup()` methods for creating and restoring point-in-time snapshots of directories.  
This allows you to restore environments quickly. For instance, to develop in a sandbox you may need to include a user's codebase and run a build step. Steps like `git clone` and `npm install` can take minutes, and you don't want to run them every time the user starts their sandbox.  
Now, after the initial setup, you can just call `createBackup()`, then `restoreBackup()` the next time this environment is needed. This makes it practical to pick up exactly where a user left off, even after days of inactivity, without repeating expensive setup steps.  
TypeScript  
```  
const sandbox = getSandbox(env.Sandbox, "my-sandbox");  
// Make non-trivial changes to the file system  
await sandbox.gitCheckout(endUserRepo, { targetDir: "/workspace" });  
await sandbox.exec("npm install", { cwd: "/workspace" });  
// Create a point-in-time backup of the directory  
const backup = await sandbox.createBackup({ dir: "/workspace" });  
// Store the handle for later use  
await env.KV.put(`backup:${userId}`, JSON.stringify(backup));  
// ... in a future session...  
// Restore instead of re-cloning and reinstalling  
await sandbox.restoreBackup(backup);  
```  
Backups are stored in [R2](https://developers.cloudflare.com/r2) and can take advantage of [R2 object lifecycle rules](https://developers.cloudflare.com/sandbox/guides/backup-restore/#configure-r2-lifecycle-rules-for-automatic-cleanup) to ensure they do not persist forever.  
Key capabilities:  
   * **Persist and reuse across sandbox sessions** — Easily store backup handles in KV, D1, or Durable Object storage for use in subsequent sessions  
   * **Usable across multiple instances** — Fork a backup across many sandboxes for parallel work  
   * **Named backups** — Provide optional human-readable labels for easier management  
   * **TTLs** — Set time-to-live durations so backups are automatically removed from storage once they are no longer needed  
Note  
Backup and restore currently uses a FUSE overlay. Soon, native snapshotting at a lower level will be added to Containers and Sandboxes, improving speed and ergonomics. The current backup functionality provides a significant speed improvement over manually recreating a file system, but it will be further optimized in the future. The new snapshotting system will use a similar API, so changing to this system will be simple once it is available.  
To get started, refer to the [backup and restore guide](https://developers.cloudflare.com/sandbox/guides/backup-restore/) for setup instructions and usage patterns, or the [Backups API reference](https://developers.cloudflare.com/sandbox/api/backups/) for full method documentation.

Feb 23, 2026
1. ### [Hyperdrive no longer caches queries using STABLE PostgreSQL functions](https://developers.cloudflare.com/changelog/post/2026-02-23-hyperdrive-stable-functions-uncacheable/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive now treats queries containing PostgreSQL `STABLE` functions as uncacheable, in addition to `VOLATILE` functions.  
Previously, only functions [that PostgreSQL categorizes ↗](https://www.postgresql.org/docs/current/xfunc-volatility.html) as `VOLATILE` (for example, `RANDOM()`, `LASTVAL()`) were detected as uncacheable. `STABLE` functions (for example, `NOW()`, `CURRENT_TIMESTAMP`, `CURRENT_DATE`) were incorrectly allowed to be cached.  
Because `STABLE` functions can return different results across different SQL statements within the same transaction, caching their results could serve stale or incorrect data. This change aligns Hyperdrive's caching behavior with PostgreSQL's function volatility semantics.  
If your queries use `STABLE` functions, and you were relying on them being cached, move the function call to your application code and pass the result as a query parameter. For example, instead of `WHERE created_at > NOW()`, compute the timestamp in your Worker and pass it as `WHERE created_at > $1`.  
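For example, the rewrite described above can be done in the Worker. This is an illustrative sketch (table and column names are made up), showing the timestamp computed in application code and bound as a parameter so the SQL text contains no `STABLE` function:

JavaScript
```javascript
// Illustrative sketch: compute the cutoff in the Worker and bind it as a
// parameter, so Hyperdrive can cache the query. Using NOW() in the SQL
// text would make the query uncacheable under the new behavior.
function recentPostsQuery(now = new Date()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  const cutoff = new Date(now.getTime() - DAY_MS);
  return {
    // Cacheable: no STABLE/VOLATILE function appears in the SQL text.
    text: "SELECT * FROM posts WHERE created_at > $1",
    values: [cutoff.toISOString()],
  };
}

const query = recentPostsQuery(new Date("2026-02-23T12:00:00Z"));
console.log(query.values[0]); // "2026-02-22T12:00:00.000Z"
```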
Hyperdrive uses text-based pattern matching to detect uncacheable functions. References to function names like `NOW()` in SQL comments also cause the query to be marked as uncacheable.  
For more information, refer to [Query caching](https://developers.cloudflare.com/hyperdrive/concepts/query-caching/) and [Troubleshoot and debug](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/).

Feb 04, 2026
1. ### [Cloudflare Queues now available on Workers Free plan](https://developers.cloudflare.com/changelog/post/2026-02-04-queues-free-plan/)  
[ Queues ](https://developers.cloudflare.com/queues/)  
[Cloudflare Queues](https://developers.cloudflare.com/queues) is now part of the Workers free plan, offering guaranteed message delivery across up to **10,000 queues** to either [Cloudflare Workers](https://developers.cloudflare.com/workers) or [HTTP pull consumers](https://developers.cloudflare.com/queues/configuration/pull-consumers). Every Cloudflare account now includes **10,000 operations per day** across reads, writes, and deletes. For more details on how each operation is defined, refer to [Queues pricing ↗](https://developers.cloudflare.com/workers/platform/pricing/#queues).  
All existing Queues features are available on the free plan, including unlimited [event subscriptions](https://developers.cloudflare.com/queues/event-subscriptions/). Note, however, that the maximum retention period on the free plan is 24 hours rather than 14 days.  
If you are new to Cloudflare Queues, follow [this guide ↗](https://developers.cloudflare.com/queues/get-started/) or try one of our [tutorials](https://developers.cloudflare.com/queues/tutorials/) to get started.
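The producer/consumer flow can be sketched with an in-memory stand-in. This is an illustration only: in a real Worker you send via a queue binding (for example `env.MY_QUEUE.send(...)`) and consume messages in a `queue()` handler, rather than with the mock class below:

JavaScript
```javascript
// Illustration only: an in-memory stand-in for the Queues message flow.
// Real producers and consumers use env bindings and a queue() handler.
class MockQueue {
  constructor() {
    this.messages = [];
  }
  send(body) {
    // Producers enqueue JSON-serializable message bodies.
    this.messages.push({ body, attempts: 1 });
  }
  pullBatch(maxMessages = 10) {
    // Consumers receive messages in batches, removing them from the queue.
    return this.messages.splice(0, maxMessages);
  }
}

const queue = new MockQueue();
queue.send({ event: "signup", userId: 42 });
queue.send({ event: "signup", userId: 43 });
const batch = queue.pullBatch();
console.log(batch.length); // 2
```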

Feb 03, 2026
1. ### [Improve Global Upload Performance with R2 Local Uploads - Now in Open Beta](https://developers.cloudflare.com/changelog/post/2026-02-03-r2-local-uploads/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[Local Uploads](https://developers.cloudflare.com/r2/buckets/local-uploads/) is now available in open beta. Enable it on your [R2](https://developers.cloudflare.com/r2/) bucket to improve upload performance when clients upload data from a different region than your bucket. With Local Uploads enabled, object data is written to storage infrastructure near the client, then asynchronously replicated to your bucket. The object is immediately accessible and remains strongly consistent throughout. Refer to [How R2 works](https://developers.cloudflare.com/r2/how-r2-works/) for details on how data is written to your bucket.  
In our tests, we observed **up to 75% reduction in Time to Last Byte (TTLB)** for upload requests when Local Uploads is enabled.  
![Local Uploads latency comparison showing p50 TTLB dropping from around 2 seconds to 500ms after enabling Local Uploads](https://developers.cloudflare.com/_astro/local-uploads-latency.R4pUgVuI_2cwpHU.webp)  
This feature is ideal when:  
   * Your users are globally distributed  
   * Upload performance and reliability is critical to your application  
   * You want to optimize write performance without changing your bucket's primary location  
To enable Local Uploads on your bucket, find **Local Uploads** in your bucket settings in the [Cloudflare Dashboard ↗](https://dash.cloudflare.com/?to=/:account/r2/overview), or run:  
Terminal window  
```  
npx wrangler r2 bucket local-uploads enable <BUCKET_NAME>  
```  
Enabling Local Uploads on a bucket is seamless: existing uploads complete as expected and there’s no interruption to traffic. There is no additional cost to enable Local Uploads; upload requests incur the standard [Class A operation costs](https://developers.cloudflare.com/r2/pricing/), the same as uploads made without Local Uploads.  
For more information, refer to [Local Uploads](https://developers.cloudflare.com/r2/buckets/local-uploads/).

Jan 30, 2026
1. ### [Reduced minimum cache TTL for Workers KV to 30 seconds](https://developers.cloudflare.com/changelog/post/2026-01-30-kv-reduced-minimum-cachettl/)  
[ KV ](https://developers.cloudflare.com/kv/)  
The minimum `cacheTtl` parameter for Workers KV has been reduced from 60 seconds to 30 seconds. This change applies to both `get()` and `getWithMetadata()` methods.  
This reduction allows you to maintain more up-to-date cached data and have finer-grained control over cache behavior. Applications requiring faster data refresh rates can now configure cache durations as low as 30 seconds instead of the previous 60-second minimum.  
The `cacheTtl` parameter defines how long a KV result is cached at the global network location it is accessed from:  
JavaScript  
```  
// Read with custom cache TTL  
const value = await env.NAMESPACE.get("my-key", {  
  cacheTtl: 30, // Cache for minimum 30 seconds (previously 60)  
});  
// getWithMetadata also supports the reduced cache TTL  
const valueWithMetadata = await env.NAMESPACE.getWithMetadata("my-key", {  
  cacheTtl: 30, // Cache for minimum 30 seconds  
});  
```  
The default cache TTL remains unchanged at 60 seconds. Upgrade to the latest version of Wrangler to use a `cacheTtl` of 30 seconds.  
This change affects all KV read operations using the binding API. For more information, consult the [Workers KV cache TTL documentation](https://developers.cloudflare.com/kv/api/read-key-value-pairs/#cachettl-parameter).

Jan 23, 2026
1. ### [Vectorize indexes now support up to 10 million vectors](https://developers.cloudflare.com/changelog/post/2026-01-23-increased-index-capacity/)  
[ Vectorize ](https://developers.cloudflare.com/vectorize/)  
You can now store up to 10 million vectors in a single Vectorize index, doubling the previous limit of 5 million vectors. This enables larger-scale semantic search, recommendation systems, and retrieval-augmented generation (RAG) applications without splitting data across multiple indexes.  
Vectorize continues to support indexes with up to 1,536 dimensions per vector at 32-bit precision. Refer to the [Vectorize limits documentation](https://developers.cloudflare.com/vectorize/platform/limits/) for complete details.

Jan 20, 2026
1. ### [New Workers KV Dashboard UI](https://developers.cloudflare.com/changelog/post/2026-01-20-kv-dash-ui-homepage/)  
[ KV ](https://developers.cloudflare.com/kv/)  
[Workers KV](https://developers.cloudflare.com/kv/) has an updated dashboard UI that makes it easier to navigate and to view analytics and settings for a KV namespace.  
The new dashboard features a **streamlined homepage** for easy access to your namespaces and key operations, with a design consistent with the rest of the dashboard updates. It also provides an **improved analytics view**.  
![New KV Dashboard Homepage](https://developers.cloudflare.com/_astro/kv-dash-ui-homepage.BT5hNntj_1OgUmv.webp)  
The updated dashboard is now available for all Workers KV users. Log in to the [Cloudflare Dashboard ↗](https://dash.cloudflare.com/) to start exploring the new interface.

Jan 09, 2026
1. ### [Get notified when your Workers builds succeed or fail](https://developers.cloudflare.com/changelog/post/2025-12-11-builds-event-subscriptions/)  
[ Workers ](https://developers.cloudflare.com/workers/)[ Queues ](https://developers.cloudflare.com/queues/)  
You can now receive notifications when your Workers' builds start, succeed, fail, or get cancelled using [Event Subscriptions](https://developers.cloudflare.com/queues/event-subscriptions/).  
[Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/) publishes events to a [Queue](https://developers.cloudflare.com/queues/) that your Worker can read messages from, and then send notifications wherever you need — Slack, Discord, email, or any webhook endpoint.  
You can deploy [this Worker ↗](https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template) to your own Cloudflare account to send build notifications to Slack:  
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template)  
The template includes:  
   * Build status with Preview/Live URLs for successful deployments  
   * Inline error messages for failed builds  
   * Branch, commit hash, and author name  
![Slack notifications showing build events](https://developers.cloudflare.com/_astro/builds-notifications-slack.rcRiU95L_169ufw.webp)  
For setup instructions, refer to the [template README ↗](https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template#readme) or the [Event Subscriptions documentation](https://developers.cloudflare.com/queues/event-subscriptions/manage-event-subscriptions/).

Dec 18, 2025
1. ### [R2 Data Catalog now supports automatic snapshot expiration](https://developers.cloudflare.com/changelog/post/2025-12-18-r2-data-catalog-snapshot-expiration/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) now supports automatic snapshot expiration for Apache Iceberg tables.  
In Apache Iceberg, a snapshot is metadata that represents the state of a table at a given point in time. Every mutation creates a new snapshot, which enables powerful features like time travel queries and rollback, but snapshots accumulate over time.  
Without regular cleanup, these accumulated snapshots can lead to:  
   * Metadata overhead  
   * Slower table operations  
   * Increased storage costs  
Snapshot expiration in R2 Data Catalog automatically removes old table snapshots based on your configured retention policy, improving performance and storage costs.  
Terminal window  
```  
# Enable catalog-level snapshot expiration  
# Expire snapshots older than 7 days, always retain at least 10 recent snapshots  
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket \  
  --older-than-days 7 \  
  --retain-last 10  
```  
Snapshot expiration uses two parameters to determine which snapshots to remove:  
   * `--older-than-days`: age threshold in days  
   * `--retain-last`: minimum snapshot count to retain  
Both conditions must be met before a snapshot is expired, ensuring you always retain recent snapshots even if they exceed the age threshold.  
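The two conditions combine as in the following sketch. The snapshot shape and function name are illustrative, not the catalog's internal representation:

JavaScript
```javascript
// Sketch of the documented rule: a snapshot expires only when it is BOTH
// older than the age threshold AND outside the most recent `retainLast`
// snapshots. The snapshot shape here is illustrative.
function expiredSnapshots(snapshots, { olderThanDays, retainLast }, nowMs) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  const cutoff = nowMs - olderThanDays * DAY_MS;
  const newestFirst = [...snapshots].sort((a, b) => b.timestampMs - a.timestampMs);
  // The newest `retainLast` snapshots are always kept, regardless of age.
  return newestFirst.slice(retainLast).filter((s) => s.timestampMs < cutoff);
}

const DAY = 24 * 60 * 60 * 1000;
const now = 100 * DAY;
const snapshots = [
  { id: "a", timestampMs: 99 * DAY }, // 1 day old: too young to expire
  { id: "b", timestampMs: 98 * DAY }, // 2 days old: too young to expire
  { id: "c", timestampMs: 90 * DAY }, // 10 days old: old enough, not in newest 2
  { id: "d", timestampMs: 80 * DAY }, // 20 days old: old enough, not in newest 2
];
const expired = expiredSnapshots(snapshots, { olderThanDays: 7, retainLast: 2 }, now);
console.log(expired.map((s) => s.id)); // ["c", "d"]
```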
This feature complements [automatic compaction](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/), which optimizes query performance by combining small data files into larger ones. Together, these automatic maintenance operations keep your Iceberg tables performant and cost-efficient without manual intervention.  
To learn more about snapshot expiration and how to configure it, visit our [table maintenance documentation](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/) or see [how to manage catalogs](https://developers.cloudflare.com/r2/data-catalog/manage-catalogs/).

Dec 15, 2025
1. ### [New Best Practices guide for Durable Objects](https://developers.cloudflare.com/changelog/post/2025-12-15-rules-of-durable-objects/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
A new [Rules of Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/rules-of-durable-objects/) guide is now available, providing opinionated best practices for building effective Durable Objects applications. This guide covers design patterns, storage strategies, concurrency, and common anti-patterns to avoid.  
Key guidance includes:  
   * **Design around your "atom" of coordination** — Create one Durable Object per logical unit (chat room, game session, user) instead of a global singleton that becomes a bottleneck.  
   * **Use SQLite storage with RPC methods** — SQLite-backed Durable Objects with typed RPC methods provide the best developer experience and performance.  
   * **Understand input and output gates** — Learn how Cloudflare's runtime prevents data races by default, how write coalescing works, and when to use `blockConcurrencyWhile()`.  
   * **Leverage Hibernatable WebSockets** — Reduce costs for real-time applications by allowing Durable Objects to sleep while maintaining WebSocket connections.  
The [testing documentation](https://developers.cloudflare.com/durable-objects/examples/testing-with-durable-objects/) has also been updated with modern patterns using `@cloudflare/vitest-pool-workers`, including examples for testing SQLite storage, alarms, and direct instance access:  
test/counter.test.js  
```  
import { env, runDurableObjectAlarm } from "cloudflare:test";  
import { it, expect } from "vitest";  
it("can test Durable Objects with isolated storage", async () => {  
  const stub = env.COUNTER.getByName("test");  
  // Call RPC methods directly on the stub  
  await stub.increment();  
  expect(await stub.getCount()).toBe(1);  
  // Trigger alarms immediately without waiting  
  await runDurableObjectAlarm(stub);  
});  
```

Dec 12, 2025
1. ### [Billing for SQLite Storage](https://developers.cloudflare.com/changelog/post/2025-12-12-durable-objects-sqlite-storage-billing/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
Storage billing for SQLite-backed Durable Objects will be enabled in January 2026, with a target date of January 7, 2026 (no earlier).  
To view your SQLite storage usage, go to the **Durable Objects** page  
[ Go to **Durable Objects** ](https://dash.cloudflare.com/?to=/:account/workers/durable-objects)  
If you do not want to incur costs, reduce your SQLite storage usage ahead of the January 7 target, for example by optimizing queries or deleting unnecessary stored data. Only usage on and after the billing target date will incur charges.  
Developers on the Workers Paid plan with Durable Object's SQLite storage usage beyond included limits will incur charges according to [SQLite storage pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sqlite-storage-backend) announced in September 2024 with the [public beta ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/). Developers on the Workers Free plan will not be charged.  
Compute billing for SQLite-backed Durable Objects has been enabled since the initial public beta. SQLite-backed Durable Objects currently incur [charges for requests and duration](https://developers.cloudflare.com/durable-objects/platform/pricing/#compute-billing), and no changes are being made to compute billing.  
For more information about SQLite storage pricing and limits, refer to the [Durable Objects pricing documentation](https://developers.cloudflare.com/durable-objects/platform/pricing/#sqlite-storage-backend).

Dec 04, 2025
1. ### [Connect to remote databases during local development with wrangler dev](https://developers.cloudflare.com/changelog/post/2025-12-04-hyperdrive-remote-database-local-dev/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
You can now connect directly to remote databases and databases requiring TLS with `wrangler dev`. This lets you run your Worker code locally while connecting to remote databases, without needing to use `wrangler dev --remote`.  
The `localConnectionString` field and `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>` environment variable can be used to configure the connection string used by `wrangler dev`.  
JSONC  
```  
{  
  "hyperdrive": [  
    {  
      "binding": "HYPERDRIVE",  
      "id": "your-hyperdrive-id",  
      "localConnectionString": "postgres://user:password@remote-host.example.com:5432/database?sslmode=require"  
    }  
  ]  
}  
```  
Learn more about [local development with Hyperdrive](https://developers.cloudflare.com/hyperdrive/configuration/local-development/).

Nov 21, 2025
1. ### [Mount R2 buckets in Containers](https://developers.cloudflare.com/changelog/post/2025-11-21-fuse-support-in-containers/)  
[ Containers ](https://developers.cloudflare.com/containers/)[ R2 ](https://developers.cloudflare.com/r2/)  
[Containers](https://developers.cloudflare.com/containers/) now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes, allowing applications to interact with [R2](https://developers.cloudflare.com/r2/) using standard filesystem operations.  
Common use cases include:  
   * Bootstrapping containers with datasets, models, or dependencies for [sandboxes](https://developers.cloudflare.com/sandbox/) and [agent](https://developers.cloudflare.com/agents/) environments  
   * Persisting user configuration or application state without managing downloads  
   * Accessing large static files without bloating container images or downloading at startup  
FUSE adapters like [tigrisfs ↗](https://github.com/tigrisdata/tigrisfs), [s3fs ↗](https://github.com/s3fs-fuse/s3fs-fuse), and [gcsfuse ↗](https://github.com/GoogleCloudPlatform/gcsfuse) can be installed in your container image and configured to mount buckets at startup.  
```  
FROM alpine:3.20  
# Install FUSE and dependencies  
RUN apk update && \  
    apk add --no-cache ca-certificates fuse curl bash  
# Install tigrisfs  
RUN ARCH=$(uname -m) && \  
    if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \  
    if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \  
    VERSION=$(curl -s https://api.github.com/repos/tigrisdata/tigrisfs/releases/latest | grep -o '"tag_name": "[^"]*' | cut -d'"' -f4) && \  
    curl -L "https://github.com/tigrisdata/tigrisfs/releases/download/${VERSION}/tigrisfs_${VERSION#v}_linux_${ARCH}.tar.gz" -o /tmp/tigrisfs.tar.gz && \  
    tar -xzf /tmp/tigrisfs.tar.gz -C /usr/local/bin/ && \  
    rm /tmp/tigrisfs.tar.gz && \  
    chmod +x /usr/local/bin/tigrisfs  
# Create startup script that mounts bucket  
RUN printf '#!/bin/sh\n\  
    set -e\n\  
    mkdir -p /mnt/r2\n\  
    R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"\n\  
    /usr/local/bin/tigrisfs --endpoint "${R2_ENDPOINT}" -f "${BUCKET_NAME}" /mnt/r2 &\n\  
    sleep 3\n\  
    ls -lah /mnt/r2\n\  
    ' > /startup.sh && chmod +x /startup.sh  
CMD ["/startup.sh"]  
```  
See the [Mount R2 buckets with FUSE](https://developers.cloudflare.com/containers/examples/r2-fuse-mount/) example for a complete guide on mounting R2 buckets and/or other S3-compatible storage buckets within your containers.

Nov 05, 2025
1. ### [D1 can restrict data localization with jurisdictions](https://developers.cloudflare.com/changelog/post/2025-11-05-d1-jurisdiction/)  
[ D1 ](https://developers.cloudflare.com/d1/)[ Workers ](https://developers.cloudflare.com/workers/)  
You can now set a [jurisdiction](https://developers.cloudflare.com/d1/configuration/data-location/) when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include `eu` and `fedramp`.  
A jurisdiction can only be set at database creation time, via Wrangler, the REST API, or the dashboard, and cannot be added or changed after the database exists.  
Terminal window  
```  
npx wrangler@latest d1 create db-with-jurisdiction --jurisdiction eu  
```  
```  
curl -X POST "https://api.cloudflare.com/client/v4/accounts/<account_id>/d1/database" \  
     -H "Authorization: Bearer $TOKEN" \  
     -H "Content-Type: application/json" \  
     --data '{"name": "db-with-jurisdiction", "jurisdiction": "eu" }'  
```  
To learn more, visit D1's data location [documentation](https://developers.cloudflare.com/d1/configuration/data-location/).

Oct 31, 2025
1. ### [Workers WebSocket message size limit increased from 1 MiB to 32 MiB](https://developers.cloudflare.com/changelog/post/2025-10-31-increased-websocket-message-size-limit/)  
[ Workers ](https://developers.cloudflare.com/workers/)[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Browser Rendering ](https://developers.cloudflare.com/browser-rendering/)  
Workers, including those using [Durable Objects](https://developers.cloudflare.com/durable-objects/) and [Browser Rendering](https://developers.cloudflare.com/browser-rendering/workers-bindings/), may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.  
This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome Devtools Protocol messages.  
For more information, refer to the [Durable Objects limits documentation](https://developers.cloudflare.com/durable-objects/platform/limits/#sqlite-backed-durable-objects-general-limits).

Oct 16, 2025
1. ### [View and edit Durable Object data in UI with Data Studio (Beta)](https://developers.cloudflare.com/changelog/post/2025-10-16-durable-objects-data-studio/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
![Screenshot of Durable Objects Data Studio](https://developers.cloudflare.com/_astro/do-data-studio.BfCcgtkq_Z4LLzm.webp)  
You can now view and write to each Durable Object's storage using a UI editor on the Cloudflare dashboard. Only Durable Objects using [SQLite storage](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class) can use Data Studio.  
[ Go to **Durable Objects** ](https://dash.cloudflare.com/?to=/:account/workers/durable-objects)  
Data Studio makes it easier to access Durable Objects data, from prototyping application data models to debugging production storage usage. Previously, querying your Durable Objects data required deploying a Worker.  
To access a Durable Object, you can provide an object's unique name or ID generated by Cloudflare. Data Studio requires you to have at least the `Workers Platform Admin` role, and all queries are captured with audit logging for your security and compliance needs. Queries executed by Data Studio send requests to your remote, deployed objects and incur normal usage billing.  
To learn more, visit the Data Studio [documentation](https://developers.cloudflare.com/durable-objects/observability/data-studio/). If you have feedback or suggestions for the new Data Studio, please share your experience on [Discord ↗](https://discord.com/channels/595317990191398933/773219443911819284).

Oct 06, 2025
1. ### [R2 Data Catalog table-level compaction](https://developers.cloudflare.com/changelog/post/2025-10-06-data-catalog-table-compaction/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now enable compaction for individual [Apache Iceberg ↗](https://iceberg.apache.org/) tables in [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/), giving you fine-grained control over different workloads.  
Terminal window  
```  
# Enable compaction for a specific table (no token required)  
npx wrangler r2 bucket catalog compaction enable <BUCKET> <NAMESPACE> <TABLE> --target-size 256  
```  
This allows you to:  
   * Apply different target file sizes per table  
   * Disable compaction for specific tables  
   * Optimize based on table-specific access patterns  
Learn more at [Manage catalogs](https://developers.cloudflare.com/r2/data-catalog/manage-catalogs/).

Sep 25, 2025
1. ### [R2 Data Catalog now supports compaction](https://developers.cloudflare.com/changelog/post/2025-09-25-data-catalog-compaction/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now enable automatic compaction for [Apache Iceberg ↗](https://iceberg.apache.org/) tables in [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) to improve query performance.  
Compaction is the process of taking a group of small files and combining them into fewer, larger files. This is an important maintenance operation, as it helps keep query performance consistent by reducing the number of files that need to be scanned.  
To enable automatic compaction, go to **R2 Data Catalog** in your R2 bucket settings in the dashboard.  
![compaction-dash](https://developers.cloudflare.com/_astro/compaction.MLojYuHL_wkqll.webp)  
Or with [Wrangler](https://developers.cloudflare.com/workers/wrangler/), run:  
Terminal window  
```  
npx wrangler r2 bucket catalog compaction enable <BUCKET_NAME> --target-size 128 --token <API_TOKEN>  
```  
To get started with compaction, check out [manage catalogs](https://developers.cloudflare.com/r2/data-catalog/manage-catalogs/). For best practices and limitations, refer to [about compaction](https://developers.cloudflare.com/r2/data-catalog/table-maintenance/).

Sep 11, 2025
1. ### [D1 automatically retries read-only queries](https://developers.cloudflare.com/changelog/post/2025-09-11-d1-automatic-read-retries/)  
[ D1 ](https://developers.cloudflare.com/d1/)[ Workers ](https://developers.cloudflare.com/workers/)  
D1 now detects read-only queries and automatically attempts up to two retries to execute those queries in the event of failures with retryable errors. You can access the number of execution attempts in the returned [response metadata](https://developers.cloudflare.com/d1/worker-api/return-object/#d1result) property `total_attempts`.  
At the moment, only read-only queries are retried, that is, queries containing only the following SQLite keywords: `SELECT`, `EXPLAIN`, `WITH`. Queries containing any [SQLite keyword ↗](https://sqlite.org/lang%5Fkeywords.html) that leads to database writes are not retried.  
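The keyword-based detection described above can be illustrated with a small sketch. This is only an illustration of the heuristic, not D1's actual implementation; the write-keyword list here is an assumption:

```javascript
// Illustrative sketch of a keyword-based read-only check (not D1's actual code).
// A query is treated as read-only if its first keyword is SELECT, EXPLAIN, or
// WITH, and no write-related keyword appears anywhere in the statement.
const READ_KEYWORDS = new Set(["SELECT", "EXPLAIN", "WITH"]);
const WRITE_KEYWORDS = new Set([
  "INSERT", "UPDATE", "DELETE", "REPLACE",
  "CREATE", "ALTER", "DROP", "VACUUM",
]);

function isReadOnlyQuery(sql) {
  // Split on non-word characters to get a rough token stream.
  const words = sql.toUpperCase().split(/\W+/).filter(Boolean);
  if (words.length === 0) return false;
  return READ_KEYWORDS.has(words[0]) && !words.some((w) => WRITE_KEYWORDS.has(w));
}

console.log(isReadOnlyQuery("SELECT * FROM orders WHERE id = 1")); // true
console.log(isReadOnlyQuery("WITH ids AS (SELECT id FROM t) DELETE FROM t")); // false
```

Note that a coarse token check like this can misclassify, for example, a column literally named `update`, which is in the spirit of the "simple heuristics" caveat below.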
The retry success ratio among read-only retryable errors varies from 5% all the way up to 95%, depending on the underlying error and its duration (like network errors or other internal errors).  
The retry success ratio among all retryable errors is lower, indicating that there are write queries that could be retried. Therefore, we recommend that D1 users continue applying [retries in their own code](https://developers.cloudflare.com/d1/best-practices/retry-queries/) for queries that are not read-only but are idempotent according to the business logic of the application.  
![D1 automatically query retries success ratio](https://developers.cloudflare.com/_astro/d1-auto-retry-success-ratio.yPw8B0tB_1c6euA.webp)  
D1 ensures that any retry attempt does not cause database writes, making the automatic retries safe from side-effects, even if a query causing changes slips through the read-only detection. D1 achieves this by checking for modifications after every query execution, and if any write occurred due to a retry attempt, the query is rolled back.  
The read-only query detection heuristics are simple for now, and there is room for improvement to capture more cases of queries that can be retried, so this is just the beginning.
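For queries that are idempotent but not read-only, an application-level retry wrapper along the lines of the linked guidance might look like the following sketch. The backoff values and the `retryable` error flag are assumptions for illustration, not anything D1 prescribes:

```javascript
// Illustrative retry wrapper for idempotent (but not read-only) queries.
// `runQuery` is any async function that executes the query; the caller decides
// which errors are safe to retry by marking them with a `retryable` flag.
async function withRetries(runQuery, { maxAttempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await runQuery();
    } catch (err) {
      lastError = err;
      if (!err.retryable || attempt === maxAttempts) throw err;
      // Exponential backoff between attempts.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw lastError;
}
```

In a Worker this might wrap a statement such as `withRetries(() => env.DB.prepare("INSERT OR IGNORE INTO users (id) VALUES (?)").bind(id).run())`, where the `INSERT OR IGNORE` makes the write idempotent.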

Aug 26, 2025
1. ### [List all vectors in a Vectorize index with the new list-vectors operation](https://developers.cloudflare.com/changelog/post/2025-08-26-vectorize-list-vectors/)  
[ Vectorize ](https://developers.cloudflare.com/vectorize/)  
You can now list all vector identifiers in a Vectorize index using the new `list-vectors` operation. This enables bulk operations, auditing, and data migration workflows through paginated requests that maintain snapshot consistency.  
The operation is available via Wrangler CLI and REST API. Refer to the [list-vectors best practices guide](https://developers.cloudflare.com/vectorize/best-practices/list-vectors/) for detailed usage guidance.
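Consuming a paginated listing generally follows a cursor loop. In the sketch below, `fetchPage` is a hypothetical stand-in for a call to the list-vectors REST endpoint; refer to the linked best practices guide for the actual request and response shapes:

```javascript
// Generic cursor-pagination loop for collecting all vector identifiers.
// `fetchPage(cursor)` is a hypothetical helper standing in for a call to the
// list-vectors REST endpoint; it is assumed to resolve to an object of the
// form { vectors: [{ id }], cursor? }, where a missing cursor means the
// snapshot has been fully consumed.
async function listAllVectorIds(fetchPage) {
  const ids = [];
  let cursor = undefined;
  do {
    const page = await fetchPage(cursor);
    for (const vector of page.vectors) ids.push(vector.id);
    cursor = page.cursor;
  } while (cursor);
  return ids;
}
```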

Aug 22, 2025
1. ### [Workers KV completes hybrid storage provider rollout for improved performance and fault tolerance](https://developers.cloudflare.com/changelog/post/2025-08-22-kv-performance-improvements/)  
[ KV ](https://developers.cloudflare.com/kv/)  
Workers KV has completed rolling out performance improvements across all KV namespaces, providing a significant latency reduction on read operations for all KV users. This is due to architectural changes to KV's underlying storage infrastructure, which introduce a new metadata layer and substantially improve redundancy.  
![Workers KV latency improvements showing P95 and P99 performance gains in Europe, Asia, Africa and Middle East regions as measured within KV's internal storage gateway worker.](https://developers.cloudflare.com/_astro/kv-hybrid-providers-performance-improvements.D6MBO22S_2ok8qE.webp)  
#### Performance improvements  
The new hybrid architecture delivers substantial latency reductions across the Europe, Asia, Middle East, and Africa regions. Over the past two weeks, we have observed the following:  
   * **p95 latency**: Reduced from \~150ms to \~50ms (67% decrease)  
   * **p99 latency**: Reduced from \~350ms to \~250ms (29% decrease)

Aug 21, 2025
1. ### [New getByName() API to access Durable Objects](https://developers.cloudflare.com/changelog/post/2025-08-21-durable-objects-get-by-name/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
You can now create a client (a [Durable Object stub](https://developers.cloudflare.com/durable-objects/api/stub/)) to a Durable Object with the new `getByName` method, removing the need to convert Durable Object names to IDs and then create a stub.  
JavaScript  
```  
// Before: (1) translate the name to an ID, then (2) get a client
const objectId = env.MY_DURABLE_OBJECT.idFromName("foo"); // or .newUniqueId()
const stubViaId = env.MY_DURABLE_OBJECT.get(objectId);

// Now: retrieve a client to the Durable Object directly via its name
const stub = env.MY_DURABLE_OBJECT.getByName("foo");

// Use the client to send a request to the remote Durable Object
const rpcResponse = await stub.sayHello();
```  
Each Durable Object has a globally-unique name, which allows you to send requests to a specific object from anywhere in the world. Thus, a Durable Object can be used to coordinate between multiple clients who need to work together. You can have billions of Durable Objects, providing isolation between application tenants.  
To learn more, visit the Durable Objects [API Documentation](https://developers.cloudflare.com/durable-objects/api/namespace/#getbyname) or the [getting started guide](https://developers.cloudflare.com/durable-objects/get-started/).

[Search all changelog entries](https://developers.cloudflare.com/search/?contentType=Changelog+entry) 