---
title: Storage Changelog
image: https://developers.cloudflare.com/cf-twitter-card.png
---


# Changelog

New updates and improvements at Cloudflare.


Aug 19, 2025
1. ### [Subscribe to events from Cloudflare services with Queues](https://developers.cloudflare.com/changelog/post/2025-08-19-event-subscriptions/)  
[ Queues ](https://developers.cloudflare.com/queues/)  
You can now subscribe to events from other Cloudflare services (for example, [Workers KV](https://developers.cloudflare.com/kv/), [Workers AI](https://developers.cloudflare.com/workers-ai), [Workers](https://developers.cloudflare.com/workers)) and consume those events via [Queues](https://developers.cloudflare.com/queues/), allowing you to build custom workflows, integrations, and logic in response to account activity.  
![Event subscriptions architecture](https://developers.cloudflare.com/_astro/queues-event-subscriptions.3aVidnXJ_Z2p3fRA.webp)  
Event subscriptions allow you to receive messages when events occur across your Cloudflare account. Cloudflare products can publish structured events to a queue, which you can then consume with [Workers](https://developers.cloudflare.com/workers/) or [pull via HTTP from anywhere](https://developers.cloudflare.com/queues/configuration/pull-consumers/).  
To create a subscription, use the dashboard or [Wrangler](https://developers.cloudflare.com/workers/wrangler/commands/queues/#queues-subscription-create):  
Terminal window  
```  
npx wrangler queues subscription create my-queue --source r2 --events bucket.created  
```  
An event is a structured record of something happening in your Cloudflare account – like a Workers AI batch request being queued, a Worker build completing, or an R2 bucket being created. Events follow a consistent structure:  
Example R2 bucket created event  
```  
{  
  "type": "cf.r2.bucket.created",  
  "source": {  
    "type": "r2"  
  },  
  "payload": {  
    "name": "my-bucket",  
    "location": "WNAM"  
  },  
  "metadata": {  
    "accountId": "f9f79265f388666de8122cfb508d7776",  
    "eventTimestamp": "2025-07-28T10:30:00Z"  
  }  
}  
```  
Current [event sources](https://developers.cloudflare.com/queues/event-subscriptions/events-schemas/) include [R2](https://developers.cloudflare.com/r2/), [Workers KV](https://developers.cloudflare.com/kv/), [Workers AI](https://developers.cloudflare.com/workers-ai/), [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/), [Vectorize](https://developers.cloudflare.com/vectorize/), [Super Slurper](https://developers.cloudflare.com/r2/data-migration/super-slurper/), and [Workflows](https://developers.cloudflare.com/workflows/). More sources and events are on the way.  
For more information on event subscriptions, available events, and how to get started, refer to our [documentation](https://developers.cloudflare.com/queues/event-subscriptions/).
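Once a subscription is in place, a Worker consumer receives these events as ordinary queue messages. Below is a minimal, hypothetical sketch of routing on an event's `type` field; the `CloudflareEvent` interface mirrors the example event above, and `describeEvent` is an illustrative helper, not part of any Cloudflare API.

```typescript
// Shape of an event subscription message, following the example above.
interface CloudflareEvent {
  type: string;
  source: { type: string };
  payload: Record<string, unknown>;
  metadata: { accountId: string; eventTimestamp: string };
}

// Hypothetical helper: turn an event into a log line based on its type.
export function describeEvent(event: CloudflareEvent): string {
  switch (event.type) {
    case "cf.r2.bucket.created":
      return `R2 bucket "${String(event.payload.name)}" created in ${String(event.payload.location)}`;
    default:
      return `Unhandled event type: ${event.type}`;
  }
}

// In a Worker consumer, you would call this from the queue() handler:
// export default {
//   async queue(batch, env) {
//     for (const msg of batch.messages) {
//       console.log(describeEvent(msg.body));
//       msg.ack();
//     }
//   },
// };
```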

Jul 03, 2025
1. ### [Hyperdrive now supports configuring the number of database connections](https://developers.cloudflare.com/changelog/post/2025-07-02-hyperdrive-configurable-connection-count/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
You can now specify the number of connections your Hyperdrive configuration uses to connect to your origin database.  
All configurations have a minimum of 5 connections. The maximum connection count for a Hyperdrive configuration depends on the [Hyperdrive limits of your Workers plan](https://developers.cloudflare.com/hyperdrive/platform/limits/).  
This feature allows you to right-size your connection pool based on your database capacity and application requirements. You can configure connection counts through the Cloudflare dashboard or API.  
Refer to the [Hyperdrive configuration documentation](https://developers.cloudflare.com/hyperdrive/concepts/connection-pooling/) for more information.

Jun 25, 2025
1. ### [@cloudflare/actors library - SDK for Durable Objects in beta](https://developers.cloudflare.com/changelog/post/2025-06-25-actors-package-alpha/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
The new [@cloudflare/actors ↗](https://www.npmjs.com/package/@cloudflare/actors) library is now in beta!  
The `@cloudflare/actors` library is a new SDK for Durable Objects that provides a powerful set of abstractions for building real-time, interactive, and multiplayer applications. It draws on Cloudflare's experience building products and features on Durable Objects, and with beta usage and feedback, `@cloudflare/actors` will become the recommended way to build on Durable Objects.  
The name "actors" originates from the [actor programming model](https://developers.cloudflare.com/durable-objects/concepts/what-are-durable-objects/#actor-programming-model), which closely ties to how Durable Objects are modelled.  
The `@cloudflare/actors` library includes:  
   * Storage helpers for querying embedded, per-object SQLite storage  
   * Storage helpers for managing SQL schema migrations  
   * Alarm helpers for scheduling multiple alarms given a date, a delay in seconds, or a cron expression  
   * `Actor` class for using Durable Objects with a defined pattern  
   * Full access to the underlying Durable Objects [Workers API ↗](https://developers.cloudflare.com/durable-objects/api/base/) whenever your application needs it  
Storage and alarm helper methods can be combined with [any JavaScript class ↗](https://github.com/cloudflare/actors?tab=readme-ov-file#storage--alarms-with-durableobject-class) that defines your Durable Object, i.e., one that extends `DurableObject`, including the `Actor` class.  
JavaScript  
```  
import { Storage } from "@cloudflare/actors/storage";  
export class ChatRoom extends DurableObject<Env> {  
    storage: Storage;  
    constructor(ctx: DurableObjectState, env: Env) {  
        super(ctx, env)  
        this.storage = new Storage(ctx.storage);  
        this.storage.migrations = [{  
            idMonotonicInc: 1,  
            description: "Create users table",  
            sql: "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY)"  
        }]  
    }  
    async fetch(request: Request): Promise<Response> {  
        // Run migrations before executing SQL query  
        await this.storage.runMigrations();  
        // Query with SQL template  
        let userId = new URL(request.url).searchParams.get("userId");  
        const query = this.storage.sql`SELECT * FROM users WHERE id = ${userId};`  
        return new Response(`${JSON.stringify(query)}`);  
    }  
}  
```  
The `@cloudflare/actors` library introduces the `Actor` class pattern. `Actor` lets you access Durable Objects without writing the Worker that communicates with your Durable Object (the Worker is created for you). By default, requests are routed to a Durable Object named "default".  
JavaScript  
```  
export class MyActor extends Actor<Env> {  
    async fetch(request: Request): Promise<Response> {  
        return new Response('Hello, World!')  
    }  
}  
export default handler(MyActor);  
```  
You can [route](https://developers.cloudflare.com/durable-objects/get-started/#3-instantiate-and-communicate-with-a-durable-object) to different Durable Objects by name within your `Actor` class using [nameFromRequest ↗](https://github.com/cloudflare/actors?tab=readme-ov-file#actor-with-custom-name).  
JavaScript  
```  
export class MyActor extends Actor<Env> {  
    static nameFromRequest(request: Request): string {  
        let url = new URL(request.url);  
        return url.searchParams.get("userId") ?? "foo";  
    }  
    async fetch(request: Request): Promise<Response> {  
        return new Response(`Actor identifier (Durable Object name): ${this.identifier}`);  
    }  
}  
export default handler(MyActor);  
```  
For more examples, check out the library [README ↗](https://github.com/cloudflare/actors?tab=readme-ov-file#getting-started). The `@cloudflare/actors` library is a home for more helpers and built-in patterns, such as retry handling and WebSocket-based applications, that reduce development overhead for common Durable Objects functionality. Please share feedback and what more you would like to see on our [Discord channel ↗](https://discord.com/channels/595317990191398933/773219443911819284).

Jun 19, 2025
1. ### [Automate Worker deployments with a simplified SDK and more reliable Terraform provider](https://developers.cloudflare.com/changelog/post/2025-06-17-workers-terraform-sdk-api-fixes/)  
[ D1 ](https://developers.cloudflare.com/d1/)[ Workers ](https://developers.cloudflare.com/workers/)[ Workers for Platforms ](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/)  
#### Simplified Worker Deployments with our SDKs  
We've simplified the programmatic deployment of Workers via our [Cloudflare SDKs](https://developers.cloudflare.com/fundamentals/api/reference/sdks/). This update abstracts away the low-level complexities of the `multipart/form-data` upload process, allowing you to focus on your code while we handle the deployment mechanics.  
This new interface is available in:  
   * [cloudflare-typescript ↗](https://github.com/cloudflare/cloudflare-typescript) (4.4.1)  
   * [cloudflare-python ↗](https://github.com/cloudflare/cloudflare-python) (4.3.1)  
For complete examples, see our guide on [programmatic Worker deployments](https://developers.cloudflare.com/workers/platform/infrastructure-as-code).  
#### The Old way: Manual API calls  
Previously, deploying a Worker programmatically required manually constructing a `multipart/form-data` HTTP request that packaged your code together with a separate `metadata.json` file. This approach was complicated, verbose, and prone to formatting errors.  
For example, here's how you would upload a Worker script previously with cURL:  
Terminal window  
```  
curl https://api.cloudflare.com/client/v4/accounts/<account_id>/workers/scripts/my-hello-world-script \  
  -X PUT \  
  -H 'Authorization: Bearer <api_token>' \  
  -F 'metadata={  
        "main_module": "my-hello-world-script.mjs",  
        "bindings": [  
          {  
            "type": "plain_text",  
            "name": "MESSAGE",  
            "text": "Hello World!"  
          }  
        ],  
        "compatibility_date": "$today"  
      };type=application/json' \  
  -F 'my-hello-world-script.mjs=@-;filename=my-hello-world-script.mjs;type=application/javascript+module' <<EOF  
export default {  
  async fetch(request, env, ctx) {  
    return new Response(env.MESSAGE, { status: 200 });  
  }  
};  
EOF  
```  
#### After: SDK interface  
With the new SDK interface, you can now define your entire Worker configuration using a single, structured object.  
This approach allows you to specify metadata like `main_module`, `bindings`, and `compatibility_date` as clear, structured properties directly alongside your script content. Our SDK takes this logical object and automatically constructs the `multipart/form-data` API request behind the scenes.  
Here's how you can now programmatically deploy a Worker via the [cloudflare-typescript SDK ↗](https://github.com/cloudflare/cloudflare-typescript):  
JavaScript  
```  
import Cloudflare from "cloudflare";  
import { toFile } from "cloudflare/index";  
// ... client setup, script content, etc.  
const script = await client.workers.scripts.update(scriptName, {  
  account_id: accountID,  
  metadata: {  
    main_module: scriptFileName,  
    bindings: [],  
  },  
  files: {  
    [scriptFileName]: await toFile(Buffer.from(scriptContent), scriptFileName, {  
      type: "application/javascript+module",  
    }),  
  },  
});  
```  
View the complete example here: [https://github.com/cloudflare/cloudflare-typescript/blob/main/examples/workers/script-upload.ts ↗](https://github.com/cloudflare/cloudflare-typescript/blob/main/examples/workers/script-upload.ts)  
#### Terraform provider improvements  
We've also made several fixes and enhancements to the [Cloudflare Terraform provider ↗](https://github.com/cloudflare/terraform-provider-cloudflare):  
   * Fixed the [cloudflare\_workers\_script ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/workers%5Fscript) resource in Terraform, which previously was producing a diff even when there were no changes. Now, your `terraform plan` outputs will be cleaner and more reliable.  
   * Fixed the [cloudflare\_workers\_for\_platforms\_dispatch\_namespace ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/workers%5Ffor%5Fplatforms%5Fdispatch%5Fnamespace), where the provider would attempt to recreate the namespace on a `terraform apply`. The resource now correctly reads its remote state, ensuring stability for production environments and CI/CD workflows.  
   * The [cloudflare\_workers\_route ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/workers%5Froute) resource now allows the `script` property to be empty, null, or omitted, indicating that the pattern should be negated for all scripts (see the routes [docs](https://developers.cloudflare.com/workers/configuration/routing/routes)). You can now reserve a pattern or temporarily disable a Worker on a route without deleting the route definition itself.  
   * Using `primary_location_hint` in the [cloudflare\_d1\_database ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/d1%5Fdatabase) resource no longer forces recreation on every apply. You can now safely change the location hint for a D1 database without causing a destructive operation.  
#### API improvements  
We've also properly documented the [Workers Script And Version Settings](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/script%5Fand%5Fversion%5Fsettings) in our public OpenAPI spec and SDKs.

May 29, 2025
1. ### [50-500ms Faster D1 REST API Requests](https://developers.cloudflare.com/changelog/post/2025-05-30-d1-rest-api-latency/)  
[ D1 ](https://developers.cloudflare.com/d1/)[ Workers ](https://developers.cloudflare.com/workers/)  
Users who query their D1 database via Cloudflare's [REST API](https://developers.cloudflare.com/api/resources/d1/) can now see lower end-to-end request latency, because D1 authentication is performed at the closest Cloudflare network data center that receives the request. Previously, authentication required D1 REST API requests to be proxied to Cloudflare's core, centralized data centers, which added network round trips and latency.  
Latency improvements range from 50-500 ms depending on request location and [database location](https://developers.cloudflare.com/d1/configuration/data-location/) and only apply to the REST API. REST API requests and databases outside the United States see a bigger benefit since Cloudflare's primary core data centers reside in the United States.  
D1 query endpoints like `/query` and `/raw` have the most noticeable improvements since they no longer access Cloudflare's core data centers. D1 control plane endpoints such as those to create and delete databases see smaller improvements, since they still require access to Cloudflare's core data centers for other control plane metadata.
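To sketch what such a REST query looks like from code, here is a hypothetical TypeScript helper that builds a `/query` request. The account ID, database ID, and API token are placeholders you must supply; the endpoint path and body shape follow the D1 REST API.

```typescript
// Build a request against the D1 REST API /query endpoint.
// accountId, databaseId, and apiToken are placeholders, not real values.
export function buildD1QueryRequest(
  accountId: string,
  databaseId: string,
  apiToken: string,
  sql: string,
  params: unknown[] = [],
): Request {
  return new Request(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/d1/database/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ sql, params }),
    },
  );
}

// Usage: const res = await fetch(buildD1QueryRequest(...));
// Authentication now happens at the data center that received the request.
```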

May 16, 2025
1. ### [Durable Objects are now supported in Python Workers](https://developers.cloudflare.com/changelog/post/2025-05-14-python-worker-durable-object/)  
[ Workers ](https://developers.cloudflare.com/workers/)[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)  
You can now create [Durable Objects](https://developers.cloudflare.com/durable-objects/) using [Python Workers](https://developers.cloudflare.com/workers/languages/python/). A Durable Object is a special kind of Cloudflare Worker that uniquely combines compute with storage, enabling stateful, long-running applications which run close to your users. For more information, see [What are Durable Objects?](https://developers.cloudflare.com/durable-objects/concepts/what-are-durable-objects/).  
You can define a Durable Object in Python in a similar way to JavaScript:  
Python  
```  
from workers import DurableObject, Response, WorkerEntrypoint  
from urllib.parse import urlparse  
class MyDurableObject(DurableObject):  
    def __init__(self, ctx, env):  
        super().__init__(ctx, env)  
    def fetch(self, request):  
        result = self.ctx.storage.sql.exec("SELECT 'Hello, World!' as greeting").one()  
        return Response(result.greeting)  
class Default(WorkerEntrypoint):  
    async def fetch(self, request):  
        url = urlparse(request.url)  
        id = self.env.MY_DURABLE_OBJECT.idFromName(url.path)  
        stub = self.env.MY_DURABLE_OBJECT.get(id)  
        greeting = await stub.fetch(request.url)  
        return greeting  
```  
Define the Durable Object in your Wrangler configuration file:  
JSONC  
```  
{  
  "durable_objects": {  
    "bindings": [  
      {  
        "name": "MY_DURABLE_OBJECT",  
        "class_name": "MyDurableObject"  
      }  
    ]  
  }  
}  
```  
TOML  
```  
[[durable_objects.bindings]]  
name = "MY_DURABLE_OBJECT"  
class_name = "MyDurableObject"  
```  
Then define the storage backend for your Durable Object:  
JSONC  
```  
{  
  "migrations": [  
    {  
      "tag": "v1", // Should be unique for each entry  
      "new_sqlite_classes": [ // Array of new classes  
        "MyDurableObject"  
      ]  
    }  
  ]  
}  
```  
TOML  
```  
[[migrations]]  
tag = "v1"  
new_sqlite_classes = [ "MyDurableObject" ]  
```  
Then test your new Durable Object locally by running `wrangler dev`:  
```  
npx wrangler dev  
```  
Consult the [Durable Objects documentation](https://developers.cloudflare.com/durable-objects/) for more details.

May 14, 2025
1. ### [Hyperdrive achieves FedRAMP Moderate-Impact Authorization](https://developers.cloudflare.com/changelog/post/2025-05-14-hyperdrive-fedramp/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive has been approved for FedRAMP Authorization and is now available in the [FedRAMP Marketplace ↗](https://marketplace.fedramp.gov/products/FR2000863987).  
FedRAMP is a U.S. government program that provides standardized assessment and authorization for cloud products and services. As a result of this product update, Hyperdrive has been approved as an authorized service to be used by U.S. federal agencies at the Moderate Impact level.  
For detailed information regarding FedRAMP and its implications, please refer to the [official FedRAMP documentation for Cloudflare ↗](https://marketplace.fedramp.gov/products/FR2000863987).

May 09, 2025
1. ### [Publish messages to Queues directly via HTTP](https://developers.cloudflare.com/changelog/post/2025-05-09-publish-to-queues-via-http/)  
[ Queues ](https://developers.cloudflare.com/queues/)  
You can now publish messages to [Cloudflare Queues](https://developers.cloudflare.com/queues/) directly via HTTP from any service or programming language that supports sending HTTP requests. Previously, publishing to queues was only possible from within [Cloudflare Workers](https://developers.cloudflare.com/workers/). You can already consume from queues via Workers or [HTTP pull consumers](https://developers.cloudflare.com/queues/configuration/pull-consumers/), and now publishing is just as flexible.  
Publishing via HTTP requires a [Cloudflare API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) with `Queues Edit` permissions for authentication. Here's a simple example:  
Terminal window  
```  
curl "https://api.cloudflare.com/client/v4/accounts/<account_id>/queues/<queue_id>/messages" \  
  -X POST \  
  -H 'Authorization: Bearer <api_token>' \  
  --data '{ "body": { "greeting": "hello", "timestamp":  "2025-07-24T12:00:00Z"} }'  
```  
You can also use our [SDKs](https://developers.cloudflare.com/fundamentals/api/reference/sdks/) for TypeScript, Python, and Go.  
To get started with HTTP publishing, check out our [step-by-step example](https://developers.cloudflare.com/queues/examples/publish-to-a-queue-via-http/) and the full API documentation in our [API reference](https://developers.cloudflare.com/api/resources/queues/subresources/messages/methods/push/).

May 01, 2025
1. ### [R2 Dashboard experience gets new updates](https://developers.cloudflare.com/changelog/post/2025-05-01-r2-dashboard-updates/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
We're excited to announce several improvements to the [Cloudflare R2](https://developers.cloudflare.com/r2/) dashboard experience that make managing your object storage easier and more intuitive:  
![Cloudflare R2 Dashboard](https://developers.cloudflare.com/_astro/r2-dashboard-updates.B7WXxzMk_Z2vfGut.webp)  
#### All-new settings page  
We've redesigned the bucket settings page, giving you a centralized location to manage all your bucket configurations in one place.  
#### Improved navigation and sharing  
   * Deeplink support for prefix directories: Navigate through your bucket hierarchy without losing your state. Your browser's back button now works as expected, and you can share direct links to specific prefix directories with teammates.  
   * Objects as clickable links: Objects are now proper links that you can copy or `CMD + Click` to open in a new tab.  
#### Clearer public access controls  
   * Renamed "r2.dev domain" to "Public Development URL" for better clarity when exposing bucket contents for non-production workloads.  
   * Public Access status now clearly displays "Enabled" when your bucket is exposed to the internet (via Public Development URL or Custom Domains).  
We've also made numerous other usability improvements across the board to make your R2 experience smoother and more productive.

Apr 17, 2025
1. ### [Increased limits for Queues pull consumers](https://developers.cloudflare.com/changelog/post/2025-04-17-pull-consumer-limits/)  
[ Queues ](https://developers.cloudflare.com/queues/)  
[Queues pull consumers](https://developers.cloudflare.com/queues/configuration/pull-consumers/) can now pull and acknowledge up to **5,000 messages / second per queue**. Previously, pull consumers were rate limited to 1,200 requests / 5 minutes, aggregated across all queues.  
Pull consumers allow you to consume messages over HTTP from any environment—including outside of [Cloudflare Workers](https://developers.cloudflare.com/workers). They’re also useful when you need fine-grained control over how quickly messages are consumed.  
To set up a new queue with a pull-based consumer using [Wrangler](https://developers.cloudflare.com/workers/wrangler/), run:  
Create a queue with a pull-based consumer  
```  
npx wrangler queues create my-queue  
npx wrangler queues consumer http add my-queue  
```  
You can also configure a pull consumer using the [REST API](https://developers.cloudflare.com/api/resources/queues/subresources/consumers/methods/create/) or the Queues dashboard.  
Once configured, you can pull messages from the queue using any HTTP client. You'll need a [Cloudflare API Token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) with `queues_read` and `queues_write` permissions. For example:  
Pull messages from a queue  
```  
curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull" \  
--header "Authorization: Bearer ${API_TOKEN}" \  
--header "Content-Type: application/json" \  
--data '{ "visibility_timeout": 10000, "batch_size": 2 }'  
```  
To learn more about how to acknowledge messages, pull batches at once, and set up multiple consumers, refer to the [pull consumer documentation](https://developers.cloudflare.com/queues/configuration/pull-consumers).  
As always, Queues doesn't charge for data egress. Pull operations continue to be billed at the [existing rate](https://developers.cloudflare.com/queues/platform/pricing) of $0.40 / million operations. The increased limits are available now on all new and existing queues. If you're new to Queues, [get started with the Cloudflare Queues guide](https://developers.cloudflare.com/queues/get-started).
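To sketch the acknowledge step in code: each pulled message carries a `lease_id`, which you echo back to the queue's acknowledge endpoint. The TypeScript helper below just collects lease IDs from a pulled batch; the request and response shapes shown in the comments are illustrative and should be checked against the pull consumer documentation.

```typescript
// A pulled message includes a lease_id used to acknowledge (or retry) it.
// This shape is illustrative; confirm field names against the API reference.
interface PulledMessage {
  lease_id: string;
  body: string;
}

// Collect the lease IDs needed to acknowledge a pulled batch.
export function leaseIdsFrom(messages: PulledMessage[]): { lease_id: string }[] {
  return messages.map((m) => ({ lease_id: m.lease_id }));
}

// Hypothetical pull-then-ack cycle (base URL and headers as in the curl
// example above):
// const pull = await fetch(`${base}/messages/pull`, {
//   method: "POST", headers,
//   body: JSON.stringify({ visibility_timeout: 10000, batch_size: 2 }),
// });
// const { result } = await pull.json();
// await fetch(`${base}/messages/ack`, {
//   method: "POST", headers,
//   body: JSON.stringify({ acks: leaseIdsFrom(result.messages) }),
// });
```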

Apr 17, 2025
1. ### [Read multiple keys from Workers KV with bulk reads](https://developers.cloudflare.com/changelog/post/2025-04-10-kv-bulk-reads/)  
[ KV ](https://developers.cloudflare.com/kv/)  
You can now retrieve up to 100 keys in a single bulk read request made to Workers KV using the binding.  
This makes it easier to request multiple KV pairs within a single Worker invocation. Retrieving many key-value pairs using the bulk read operation is more performant than making individual requests since bulk read operations are not affected by [Workers simultaneous connection limits](https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections).  
JavaScript  
```  
// Read single key  
const key = "key-a";  
const value = await env.NAMESPACE.get(key);  
// Read multiple keys (up to 100 per call)  
const keys = ["key-a", "key-b", "key-c"];  
const values = await env.NAMESPACE.get(keys); // Map<string, string | null>  
// Print the value of "key-a" to the console.  
console.log(`The value of "key-a" is ${values.get("key-a")}.`)  
```  
Consult the [Workers KV Read key-value pairs API](https://developers.cloudflare.com/kv/api/read-key-value-pairs/) for full details on Workers KV's new bulk reads support.

Apr 10, 2025
1. ### [D1 Read Replication Public Beta](https://developers.cloudflare.com/changelog/post/2025-04-10-d1-read-replication-beta/)  
[ D1 ](https://developers.cloudflare.com/d1/)[ Workers ](https://developers.cloudflare.com/workers/)  
D1 read replication is available in public beta to help lower average latency and increase overall throughput for read-heavy applications like e-commerce websites or content management tools.  
Workers can leverage read-only database copies, called read replicas, by using D1 [Sessions API](https://developers.cloudflare.com/d1/best-practices/read-replication). A session encapsulates all the queries from one logical session for your application. For example, a session may correspond to all queries coming from a particular web browser session. With Sessions API, D1 queries in a session are guaranteed to be [sequentially consistent](https://developers.cloudflare.com/d1/best-practices/read-replication/#replica-lag-and-consistency-model) to avoid data consistency pitfalls. D1 [bookmarks](https://developers.cloudflare.com/d1/reference/time-travel/#bookmarks) can be used from a previous session to ensure logical consistency between sessions.  
TypeScript  
```  
// retrieve bookmark from previous session stored in HTTP header  
const bookmark = request.headers.get("x-d1-bookmark") ?? "first-unconstrained";  
const session = env.DB.withSession(bookmark);  
const result = await session  
  .prepare(`SELECT * FROM Customers WHERE CompanyName = 'Bs Beverages'`)  
  .run();  
// store bookmark for a future session  
response.headers.set("x-d1-bookmark", session.getBookmark() ?? "");  
```  
Read replicas are automatically created by Cloudflare (currently one in each supported [D1 region](https://developers.cloudflare.com/d1/best-practices/read-replication/#read-replica-locations)), are active/inactive based on query traffic, and are transparently routed to by Cloudflare at no additional cost.  
To try out D1 read replication, deploy the following Worker code using the Sessions API. The deployment will prompt you to create a D1 database and enable read replication on that database.  
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api)  
To learn more about how read replication was implemented, go to our [blog post ↗](https://blog.cloudflare.com/d1-read-replication-beta).

Apr 10, 2025
1. ### [Cloudflare Pipelines now available in beta](https://developers.cloudflare.com/changelog/post/2025-04-10-launching-pipelines/)  
[ Pipelines ](https://developers.cloudflare.com/pipelines/)[ R2 ](https://developers.cloudflare.com/r2/)[ Workers ](https://developers.cloudflare.com/workers/)  
[Cloudflare Pipelines](https://developers.cloudflare.com/pipelines) is now available in beta, to all users with a [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing) plan.  
Pipelines let you ingest high volumes of real-time data without managing the underlying infrastructure. A single pipeline can ingest up to 100 MB of data per second, via HTTP or from a [Worker](https://developers.cloudflare.com/workers). Ingested data is automatically batched, written to output files, and delivered to an [R2 bucket](https://developers.cloudflare.com/r2) in your account. You can use Pipelines to build a data lake of clickstream data, or to store events from a Worker.  
Create your first pipeline with a single command:  
Create a pipeline  
```  
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket  
🌀 Authorizing R2 bucket "my-bucket"  
🌀 Creating pipeline named "my-clickstream-pipeline"  
✅ Successfully created pipeline my-clickstream-pipeline  
Id:    0e00c5ff09b34d018152af98d06f5a1xvc  
Name:  my-clickstream-pipeline  
Sources:  
  HTTP:  
    Endpoint:        https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/  
    Authentication:  off  
    Format:          JSON  
  Worker:  
    Format:  JSON  
Destination:  
  Type:         R2  
  Bucket:       my-bucket  
  Format:       newline-delimited JSON  
  Compression:  GZIP  
Batch hints:  
  Max bytes:     100 MB  
  Max duration:  300 seconds  
  Max records:   100,000  
🎉 You can now send data to your pipeline!  
Send data to your pipeline's HTTP endpoint:  
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'  
To send data to your pipeline from a Worker, add the following configuration to your config file:  
{  
  "pipelines": [  
    {  
      "pipeline": "my-clickstream-pipeline",  
      "binding": "PIPELINE"  
    }  
  ]  
}  
```  
Head over to our [getting started guide](https://developers.cloudflare.com/pipelines/getting-started) for an in-depth tutorial to building with Pipelines.

Apr 10, 2025
1. ### [R2 Data Catalog is a managed Apache Iceberg data catalog built directly into R2 buckets](https://developers.cloudflare.com/changelog/post/2025-04-10-r2-data-catalog-beta/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
Today, we're launching [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) in open beta, a managed Apache Iceberg catalog built directly into your [Cloudflare R2](https://developers.cloudflare.com/r2/) bucket.  
If you're not already familiar with it, [Apache Iceberg ↗](https://iceberg.apache.org/) is an open table format designed to handle large-scale analytics datasets stored in object storage, offering ACID transactions and schema evolution. R2 Data Catalog exposes a standard Iceberg REST catalog interface, so you can connect engines like [Spark](https://developers.cloudflare.com/r2/data-catalog/config-examples/spark-scala/), [Snowflake](https://developers.cloudflare.com/r2/data-catalog/config-examples/snowflake/), and [PyIceberg](https://developers.cloudflare.com/r2/data-catalog/config-examples/pyiceberg/) to start querying your tables using the tools you already know.  
To enable a data catalog on your R2 bucket, find **R2 Data Catalog** in your bucket's settings in the dashboard, or run:  
Terminal window  
```  
npx wrangler r2 bucket catalog enable my-bucket  
```  
And that's it. You'll get a catalog URI and warehouse you can plug into your favorite Iceberg engines.  
Visit our [getting started guide](https://developers.cloudflare.com/r2/data-catalog/get-started/) for step-by-step instructions on enabling R2 Data Catalog, creating tables, and running your first queries.

Apr 09, 2025
1. ### [Hyperdrive now supports custom TLS/SSL certificates](https://developers.cloudflare.com/changelog/post/2025-04-09-hyperdrive-custom-certificate-support/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive now supports more SSL/TLS security options for your database connections:  
   * Configure Hyperdrive to verify server certificates with `verify-ca` or `verify-full` SSL modes and protect against man-in-the-middle attacks  
   * Configure Hyperdrive to provide client certificates to the database server to authenticate itself (mTLS) for stronger security beyond username and password  
Use the new `wrangler cert` commands to create certificate authority (CA) certificate bundles or client certificate pairs:  
Terminal window  
```  
# Create CA certificate bundle  
npx wrangler cert upload certificate-authority --ca-cert your-ca-cert.pem --name your-custom-ca-name  
# Create client certificate pair  
npx wrangler cert upload mtls-certificate --cert client-cert.pem --key client-key.pem --name your-client-cert-name  
```  
Then create a Hyperdrive configuration with the certificates and desired SSL mode:  
Terminal window  
```  
npx wrangler hyperdrive create your-hyperdrive-config \  
  --connection-string="postgres://user:password@hostname:port/database" \  
  --ca-certificate-id <CA_CERT_ID> \  
  --mtls-certificate-id <CLIENT_CERT_ID> \  
  --sslmode verify-full  
```  
Learn more about [configuring SSL/TLS certificates for Hyperdrive](https://developers.cloudflare.com/hyperdrive/configuration/tls-ssl-certificates-for-hyperdrive/) to enhance your database security posture.

Apr 08, 2025
1. ### [Hyperdrive Free plan makes fast, global database access available to all](https://developers.cloudflare.com/changelog/post/2025-04-08-hyperdrive-free-plan/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive is now available on the Free plan of Cloudflare Workers, enabling you to build Workers that connect to PostgreSQL or MySQL databases without compromise.  
Low-latency access to SQL databases is critical to building full-stack Workers applications. We want you to be able to build fast, global apps on Workers, regardless of the tools you use. So we made Hyperdrive available to everyone, making it easier to build Workers that connect to PostgreSQL and MySQL.  
If you want to learn more about how Hyperdrive works, read the [deep dive ↗](https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access) on how Hyperdrive can make your database queries up to 4x faster.  
![Hyperdrive provides edge connection setup and global connection pooling for optimal latencies.](https://developers.cloudflare.com/_astro/hyperdrive-global-placement.DHxlaFbz_1MNCXL.webp)  
Visit the docs to [get started](https://developers.cloudflare.com/hyperdrive/get-started/) with Hyperdrive for PostgreSQL or MySQL.

Apr 08, 2025
1. ### [Hyperdrive introduces support for MySQL and MySQL-compatible databases](https://developers.cloudflare.com/changelog/post/2025-04-08-hyperdrive-mysql-support/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive now supports connecting to MySQL and MySQL-compatible databases, including Amazon RDS and Aurora MySQL, Google Cloud SQL for MySQL, Azure Database for MySQL, PlanetScale and MariaDB.  
Hyperdrive makes your regional, MySQL databases fast when connecting from Cloudflare Workers. It eliminates unnecessary network roundtrips during connection setup, pools database connections globally, and can cache query results to provide the fastest possible response times.  
Best of all, you can connect using your existing drivers, ORMs, and query builders with Hyperdrive's secure credentials, no code changes required.  
TypeScript  
```  
import { createConnection } from "mysql2/promise";  
export interface Env {  
  HYPERDRIVE: Hyperdrive;  
}  
export default {  
  async fetch(request, env, ctx): Promise<Response> {  
    const connection = await createConnection({  
      host: env.HYPERDRIVE.host,  
      user: env.HYPERDRIVE.user,  
      password: env.HYPERDRIVE.password,  
      database: env.HYPERDRIVE.database,  
      port: env.HYPERDRIVE.port,  
      disableEval: true, // Required for Workers compatibility  
    });  
    const [results, fields] = await connection.query("SHOW tables;");  
    ctx.waitUntil(connection.end());  
    return new Response(JSON.stringify({ results, fields }), {  
      headers: {  
        "Content-Type": "application/json",  
        "Access-Control-Allow-Origin": "*",  
      },  
    });  
  },  
} satisfies ExportedHandler<Env>;  
```  
Learn more about [how Hyperdrive works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/) and [get started building Workers that connect to MySQL with Hyperdrive](https://developers.cloudflare.com/hyperdrive/get-started/).

Apr 07, 2025
1. ### [Create fully-managed RAG pipelines for your AI applications with AutoRAG](https://developers.cloudflare.com/changelog/post/2025-04-07-autorag-open-beta/)  
[ AI Search ](https://developers.cloudflare.com/ai-search/)[ Vectorize ](https://developers.cloudflare.com/vectorize/)  
[AutoRAG](https://developers.cloudflare.com/ai-search/) is now in open beta, making it easy for you to build fully-managed retrieval-augmented generation (RAG) pipelines without managing infrastructure. Just upload your docs to [R2](https://developers.cloudflare.com/r2/get-started/), and AutoRAG handles the rest: embeddings, indexing, retrieval, and response generation via API.  
With AutoRAG, you can:  
   * **Customize your pipeline:** Choose from [Workers AI](https://developers.cloudflare.com/workers-ai) models, configure chunking strategies, edit system prompts, and more.  
   * **Instant setup:** AutoRAG provisions everything you need, from [Vectorize](https://developers.cloudflare.com/vectorize) and [AI Gateway](https://developers.cloudflare.com/ai-gateway) to pipeline logic, so you can go from zero to a working RAG pipeline in seconds.  
   * **Keep your index fresh:** AutoRAG continuously syncs your index with your data source to ensure responses stay accurate and up to date.  
   * **Ask questions:** Query your data and receive grounded responses via a [Workers binding](https://developers.cloudflare.com/ai-search/usage/workers-binding/) or [API](https://developers.cloudflare.com/ai-search/usage/rest-api/).  
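From a Worker, the binding looks like the sketch below. It assumes an AutoRAG instance named `my-autorag` and the standard `AI` binding; the query is illustrative.  
JavaScript  
```javascript
// Minimal sketch: query an AutoRAG instance from a Worker via the AI binding.
// Assumes a binding named `AI` and an AutoRAG instance named `my-autorag`.
const worker = {
  async fetch(request, env) {
    // aiSearch() retrieves relevant chunks from the index and generates
    // a grounded response; search() returns the raw results instead.
    const answer = await env.AI.autorag("my-autorag").aiSearch({
      query: "How do I configure chunking?",
    });
    return Response.json(answer);
  },
};

export default worker;
```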
Whether you're building internal tools, AI-powered search, or a support assistant, AutoRAG gets you from idea to deployment in minutes.  
Get started in the [Cloudflare dashboard ↗](https://dash.cloudflare.com/?to=/:account/ai/autorag) or check out the [guide](https://developers.cloudflare.com/ai-search/get-started/) for instructions on how to build your RAG pipeline today.

Apr 07, 2025
1. ### [Durable Objects on Workers Free plan](https://developers.cloudflare.com/changelog/post/2025-04-07-durable-objects-free-tier/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
Durable Objects can now be used with zero commitment on the [Workers Free plan](https://developers.cloudflare.com/workers/platform/pricing/), allowing you to build AI agents with the [Agents SDK](https://developers.cloudflare.com/agents/), collaboration tools, and real-time applications like chat or multiplayer games.  
Durable Objects let you build stateful, serverless applications with millions of tiny coordination instances that run your application code alongside (in the same thread!) your durable storage. Each Durable Object can access its own SQLite database through a [Storage API](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/). A Durable Object class is defined in a Worker script encapsulating the Durable Object's behavior when accessed from a Worker. To try the code below, click the button:  
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/hello-world-do-template)  
JavaScript  
```  
import { DurableObject } from "cloudflare:workers";  
// Durable Object  
export class MyDurableObject extends DurableObject {  
  constructor(ctx, env) {  
    super(ctx, env);  
  }  
  async sayHello(name) {  
    return `Hello, ${name}!`;  
  }  
}  
// Worker  
export default {  
  async fetch(request, env) {  
    // Every unique ID refers to an individual instance of the Durable Object class  
    const id = env.MY_DURABLE_OBJECT.idFromName("foo");  
    // A stub is a client used to invoke methods on the Durable Object  
    const stub = env.MY_DURABLE_OBJECT.get(id);  
    // Methods on the Durable Object are invoked via the stub  
    const greeting = await stub.sayHello("world");  
    return new Response(greeting);  
  },  
};  
```  
Free plan [limits](https://developers.cloudflare.com/durable-objects/platform/pricing/) apply to Durable Objects compute and storage usage. Limits allow developers to build real-world applications, with every Worker request able to call a Durable Object on the free plan.  
For more information, check out:  
   * [Documentation](https://developers.cloudflare.com/durable-objects/concepts/what-are-durable-objects/)  
   * [Zero-latency SQLite storage in every Durable Object blog ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/)

Apr 07, 2025
1. ### [SQLite in Durable Objects GA with 10GB storage per object](https://developers.cloudflare.com/changelog/post/2025-04-07-sqlite-in-durable-objects-ga/)  
[ Durable Objects ](https://developers.cloudflare.com/durable-objects/)[ Workers ](https://developers.cloudflare.com/workers/)  
SQLite in Durable Objects is now generally available (GA) with a 10GB SQLite database per Durable Object. Since the [public beta ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/) in September 2024, we've brought the SQLite storage backend to feature parity with the preexisting key-value (KV) storage backend for Durable Objects and improved its robustness.  
SQLite-backed Durable Objects are recommended for all new Durable Object classes, using `new_sqlite_classes` [Wrangler configuration](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class). Only SQLite-backed Durable Objects have access to Storage API's [SQL](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#sql-api) and [point-in-time recovery](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#pitr-point-in-time-recovery-api) methods, which provide relational data modeling, SQL querying, and better data management.  
TypeScript  
```  
export class MyDurableObject extends DurableObject {  
  sql: SqlStorage  
  constructor(ctx: DurableObjectState, env: Env) {  
    super(ctx, env);  
    this.sql = ctx.storage.sql;  
  }  
  async sayHello() {  
    let result = this.sql  
      .exec("SELECT 'Hello, World!' AS greeting")  
      .one();  
    return result.greeting;  
  }  
}  
```  
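Beyond literal queries, `sql.exec()` accepts `?` placeholders with bound values and returns a cursor (with methods like `one()` and `toArray()`). A hypothetical helper sketching that pattern, with `sql` standing in for `ctx.storage.sql`:  
JavaScript  
```javascript
// Hypothetical helper showing sql.exec() with `?` parameter bindings.
// `sql` stands in for `ctx.storage.sql` on a SQLite-backed Durable Object.
function upsertUser(sql, id, name) {
  sql.exec("CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY, name TEXT)");
  sql.exec("INSERT OR REPLACE INTO users (id, name) VALUES (?, ?)", id, name);
  // exec() returns a cursor; one() expects exactly one row and returns it.
  return sql.exec("SELECT name FROM users WHERE id = ?", id).one().name;
}
```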
KV-backed Durable Objects remain for backwards compatibility, and a migration path from key-value storage to SQL storage for existing Durable Object classes will be offered in the future.  
For more details on SQLite storage, check out the [Zero-latency SQLite storage in every Durable Object blog ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/).

Mar 27, 2025
1. ### [New Pause & Purge APIs for Queues](https://developers.cloudflare.com/changelog/post/2025-03-25-pause-purge-queues/)  
[ Queues ](https://developers.cloudflare.com/queues/)  
[Queues](https://developers.cloudflare.com/queues/) now supports the ability to pause message delivery and/or purge (delete) messages on a queue. These operations can be useful when:  
   * Your consumer has a bug or downtime, and you want to temporarily stop messages from being processed while you fix the bug  
   * You have pushed invalid messages to a queue due to a code change during development, and you want to clean up the backlog  
   * Your queue has a backlog that is stale and you want to clean it up to allow new messages to be consumed  
To pause a queue using [Wrangler](https://developers.cloudflare.com/workers/wrangler/), run the `pause-delivery` command. A paused queue continues to receive messages; delivery to consumers simply stops until you run the `resume-delivery` command.  
Pause and resume a queue  
```  
$ wrangler queues pause-delivery my-queue  
Pausing message delivery for queue my-queue.  
Paused message delivery for queue my-queue.  
$ wrangler queues resume-delivery my-queue  
Resuming message delivery for queue my-queue.  
Resumed message delivery for queue my-queue.  
```  
Purging a queue permanently deletes all messages in the queue. Unlike pausing, purging is an irreversible operation:  
Purge a queue  
```  
$ wrangler queues purge my-queue  
✔ This operation will permanently delete all the messages in queue my-queue. Type my-queue to proceed. … my-queue  
Purged queue 'my-queue'  
```  
You can also do these operations using the [Queues REST API](https://developers.cloudflare.com/api/resources/queues/), or the dashboard page for a queue.  
![Pause and purge using the dashboard](https://developers.cloudflare.com/_astro/pause-purge.SQ7B3RCF_2dqU5K.webp)  
This feature is available on all new and existing queues. Head over to the [pause and purge documentation](https://developers.cloudflare.com/queues/configuration/pause-purge) to learn more. And if you haven't used Cloudflare Queues before, [get started with the Cloudflare Queues guide](https://developers.cloudflare.com/queues/get-started).

Mar 07, 2025
1. ### [Hyperdrive reduces query latency by up to 90% and now supports IP access control lists](https://developers.cloudflare.com/changelog/post/2025-03-04-hyperdrive-pooling-near-database-and-ip-range-egress/)  
[ Hyperdrive ](https://developers.cloudflare.com/hyperdrive/)  
Hyperdrive now pools database connections in one or more regions close to your database. This means that your uncached queries and new database connections have up to 90% less latency as measured from connection pools.  
![Hyperdrive query latency decreases by 90% during Hyperdrive's gradual rollout of regional pooling.](https://developers.cloudflare.com/_astro/hyperdrive-regional-pooling-query-latency-improvement.Bzz_xvHZ_rlYbl.webp)  
Because Hyperdrive's database connection pools are now placed close to your database, Workers Smart Placement is more effective when used with Hyperdrive, ensuring that your Worker can be placed as close to your database as possible.  
With this update, Hyperdrive also uses [Cloudflare's standard IP address ranges ↗](https://www.cloudflare.com/ips/) to connect to your database. This enables you to configure the firewall policies (IP access control lists) of your database to only allow access from Cloudflare and Hyperdrive.  
Refer to [documentation on how Hyperdrive makes connecting to regional databases from Cloudflare Workers fast](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).  
This improvement is enabled on all Hyperdrive configurations.

Mar 06, 2025
1. ### [Set retention policies for your R2 bucket with bucket locks](https://developers.cloudflare.com/changelog/post/2025-03-06-r2-bucket-locks/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
You can now use [bucket locks](https://developers.cloudflare.com/r2/buckets/bucket-locks/) to set retention policies on your [R2 buckets](https://developers.cloudflare.com/r2/buckets/) (or specific prefixes within your buckets) for a specified period — or indefinitely. This can help ensure compliance by protecting important data from accidental or malicious deletion.  
Locks give you a few ways to ensure your objects are retained (not deleted or overwritten). You can:  
   * Lock objects for a specific duration, for example 90 days.  
   * Lock objects until a certain date, for example January 1, 2030.  
   * Lock objects indefinitely, until the lock is explicitly removed.  
Buckets can have up to 1,000 [bucket lock rules](https://developers.cloudflare.com/r2/buckets/). Each rule specifies which objects it covers (via prefix) and how long those objects must be retained.  
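To make the rule semantics concrete, here is a hypothetical helper (not an R2 API) that evaluates whether a rule still protects a given object:  
JavaScript  
```javascript
// Hypothetical helper (not an R2 API): does a bucket lock rule still
// protect an object, given its key and upload time?
function isRetained(rule, key, uploadedAtMs, nowMs) {
  // A rule with a prefix only covers keys under that prefix.
  if (rule.prefix && !key.startsWith(rule.prefix)) return false;
  // Indefinite locks hold until the rule itself is removed.
  if (rule.retentionIndefinite) return true;
  // Duration-based locks count from the object's upload time.
  if (rule.retentionDays !== undefined) {
    return nowMs < uploadedAtMs + rule.retentionDays * 86_400_000;
  }
  // Date-based locks hold until a fixed timestamp.
  if (rule.retainUntilMs !== undefined) return nowMs < rule.retainUntilMs;
  return false;
}
```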
Here are a couple of examples showing how you can configure bucket lock rules using [Wrangler](https://developers.cloudflare.com/workers/wrangler/):  
#### Ensure all objects in a bucket are retained for at least 180 days  
Terminal window  
```  
npx wrangler r2 bucket lock add <bucket> --name 180-days-all --retention-days 180  
```  
#### Prevent deletion or overwriting of all logs indefinitely (via prefix)  
Terminal window  
```  
npx wrangler r2 bucket lock add <bucket> --name indefinite-logs --prefix logs/ --retention-indefinite  
```  
For more information on bucket locks and how to set retention policies for objects in your R2 buckets, refer to our [documentation](https://developers.cloudflare.com/r2/buckets/bucket-locks/).

Feb 24, 2025
1. ### [Super Slurper now supports migrations from all S3-compatible storage providers](https://developers.cloudflare.com/changelog/post/2025-02-24-r2-super-slurper-s3-compatible-support/)  
[ R2 ](https://developers.cloudflare.com/r2/)  
[Super Slurper](https://developers.cloudflare.com/r2/data-migration/super-slurper/) can now migrate data from any S3-compatible object storage provider to [Cloudflare R2](https://developers.cloudflare.com/r2/). This includes transfers from services like MinIO, Wasabi, Backblaze B2, and DigitalOcean Spaces.  
![Super Slurper S3-Compatible Source](https://developers.cloudflare.com/_astro/super-slurper-s3-compat-screenshot-border.D8Gd5eye_dt8CT.webp)  
For more information on Super Slurper and how to migrate data from your existing S3-compatible storage buckets to R2, refer to our [documentation](https://developers.cloudflare.com/r2/data-migration/super-slurper/).

Feb 14, 2025
1. ### [Customize queue message retention periods](https://developers.cloudflare.com/changelog/post/2025-02-14-customize-queue-retention-period/)  
[ Queues ](https://developers.cloudflare.com/queues/)  
You can now customize a queue's message retention period, from a minimum of 60 seconds to a maximum of 14 days. Previously, the retention period was fixed at the default of 4 days.  
![Customize a queue's message retention period](https://developers.cloudflare.com/_astro/customize-retention-period.CpK7s10q_19dmJh.webp)  
You can customize the retention period on the settings page for your queue, or using Wrangler:  
Update message retention period  
```  
$ wrangler queues update my-queue --message-retention-period-secs 600  
```  
This feature is available on all new and existing queues. If you haven't used Cloudflare Queues before, [get started with the Cloudflare Queues guide](https://developers.cloudflare.com/queues/get-started).

[Search all changelog entries](https://developers.cloudflare.com/search/?contentType=Changelog+entry) 