
Access Durable Objects Storage

Durable Objects are a powerful compute API that provides a compute-with-storage building block. Each Durable Object has its own private, transactional, and strongly consistent storage. The Durable Objects Storage API provides access to a Durable Object's attached storage.

A Durable Object's in-memory state is preserved only as long as the Durable Object is not evicted from memory. Inactive Durable Objects with no incoming request traffic can be evicted, and normal operations such as code deployments also cause Durable Objects to restart and lose their in-memory state. For these reasons, use the Storage API to durably persist on disk any state that needs to survive eviction or restart of a Durable Object.

Access storage

By default, a Durable Object class leverages a key-value storage backend. New Durable Object classes can opt-in to using a SQLite storage backend.

Storage API methods are available on ctx.storage, part of the DurableObjectState parameter passed to the Durable Object constructor. The Storage API includes both key-value APIs and SQL APIs. Only Durable Object classes with a SQLite storage backend can access the SQL API.
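
For illustration, a minimal sketch of the key-value methods inside a Durable Object class method is shown below; the method name, key, and stored value are illustrative, not part of the API.

// Illustrative method inside a Durable Object class; `this.ctx` is the
// DurableObjectState passed to the constructor.
async updateLastSeen(userId: string): Promise<void> {
  // Write a key-value pair (values must be serializable).
  await this.ctx.storage.put(`lastSeen:${userId}`, Date.now());

  // Read it back; `get()` resolves to undefined if the key does not exist.
  const lastSeen = await this.ctx.storage.get<number>(`lastSeen:${userId}`);
  console.log(lastSeen);

  // Delete the key when it is no longer needed.
  await this.ctx.storage.delete(`lastSeen:${userId}`);
}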

A common pattern is to initialize a Durable Object from persistent storage and set instance variables the first time it is accessed. Since future accesses are routed to the same Durable Object, it is then possible to return any initialized values without making further calls to persistent storage.

import { DurableObject } from "cloudflare:workers";

export class Counter extends DurableObject {
  value: number;

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    // `blockConcurrencyWhile()` ensures no requests are delivered until
    // initialization completes.
    ctx.blockConcurrencyWhile(async () => {
      // After initialization, future reads do not need to access storage.
      this.value = (await ctx.storage.get<number>("value")) || 0;
    });
  }

  async getCounterValue() {
    return this.value;
  }
}

Removing a Durable Object's storage

A Durable Object fully ceases to exist if, when it shuts down, its storage is empty. If you never write to a Durable Object's storage at all (including setting alarms), then storage remains empty, and so the Durable Object will no longer exist once it shuts down.

However, if you ever write using the Storage API, including setting alarms, then you must explicitly call storage.deleteAll() to empty storage. It is not sufficient to simply delete the specific data that you wrote, such as deleting a key or dropping a table, as some metadata may remain. The only way to remove all storage is to call deleteAll(). Calling deleteAll() ensures that a Durable Object will not be billed for storage.
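
For example, a minimal sketch of a cleanup method on a Durable Object class might look like the following; the method name deleteEverything is illustrative.

// Illustrative cleanup method inside a Durable Object class.
async deleteEverything(): Promise<void> {
  // deleteAll() may not cancel a pending alarm, so delete the alarm first
  // if one could have been set.
  await this.ctx.storage.deleteAlarm();

  // Remove all keys, tables, and associated metadata so the Durable Object
  // ceases to exist (and stops being billed for storage) once it shuts down.
  await this.ctx.storage.deleteAll();
}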

SQLite storage backend

To allow a new Durable Object class to use the SQLite storage backend, use new_sqlite_classes on the migration in your Worker's wrangler.toml file:

[[migrations]]
tag = "v1" # Should be unique for each entry
new_sqlite_classes = ["MyDurableObject"] # Array of new classes

The SQL API is available on ctx.storage.sql, part of the DurableObjectState parameter passed to the Durable Object constructor.

Examples

SQL API examples below use the following SQL schema:

import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  sql: SqlStorage;

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.sql = ctx.storage.sql;
    this.sql.exec(`CREATE TABLE IF NOT EXISTS artist(
      artistid    INTEGER PRIMARY KEY,
      artistname  TEXT
    );
    INSERT INTO artist (artistid, artistname) VALUES
      (123, 'Alice'),
      (456, 'Bob'),
      (789, 'Charlie');`);
  }
}

Iterate over query results as row objects:

let cursor = this.sql.exec("SELECT * FROM artist;");
for (let row of cursor) {
  // Iterate over row object and do something
}

Convert query results to an array of row objects:

// Return array of row objects: [{"artistid":123,"artistname":"Alice"},{"artistid":456,"artistname":"Bob"},{"artistid":789,"artistname":"Charlie"}]
let resultsArray1 = this.sql.exec("SELECT * FROM artist;").toArray();
// OR
let resultsArray2 = Array.from(this.sql.exec("SELECT * FROM artist;"));
// OR
let resultsArray3 = [...this.sql.exec("SELECT * FROM artist;")]; // JavaScript spread syntax

Convert query results to an array of row value arrays:

// Returns [[123,"Alice"],[456,"Bob"],[789,"Charlie"]]
let cursor = this.sql.exec("SELECT * FROM artist;");
let resultsArray = cursor.raw().toArray();

// Returns ["artistid","artistname"]
let columnNameArray = this.sql.exec("SELECT * FROM artist;").columnNames;

Get first row object of query results:

// Returns {"artistid":123,"artistname":"Alice"}
let firstRow = this.sql.exec("SELECT * FROM artist ORDER BY artistname DESC;").toArray()[0];

Check if query results have exactly one row:

// Throws an error because the query returns more than one row
this.sql.exec("SELECT * FROM artist ORDER BY artistname ASC;").one();

// Returns { artistid: 123, artistname: 'Alice' }
let oneRow = this.sql.exec("SELECT * FROM artist WHERE artistname = ?;", "Alice").one();

Returned cursor behavior:

let cursor = this.sql.exec("SELECT * FROM artist ORDER BY artistname ASC;");
let result = cursor.next();
if (!result.done) {
  console.log(result.value); // prints { artistid: 123, artistname: 'Alice' }
} else {
  // query returned zero results
}

let remainingRows = cursor.toArray();
console.log(remainingRows); // prints [{ artistid: 456, artistname: 'Bob' },{ artistid: 789, artistname: 'Charlie' }]

Returned cursor and raw() iterator iterate over the same query results:

let cursor = this.sql.exec("SELECT * FROM artist ORDER BY artistname ASC;");
let result = cursor.raw().next();
if (!result.done) {
  console.log(result.value); // prints [ 123, 'Alice' ]
} else {
  // query returned zero results
}

console.log(cursor.toArray()); // prints [{ artistid: 456, artistname: 'Bob' },{ artistid: 789, artistname: 'Charlie' }]

Count the rows a query has read with the cursor's rowsRead property:

let cursor = this.sql.exec("SELECT * FROM artist;");
cursor.next();
console.log(cursor.rowsRead); // prints 1

cursor.toArray(); // consumes the remaining cursor
console.log(cursor.rowsRead); // prints 3

SQL in Durable Objects vs D1

Cloudflare Workers offers a SQLite-backed serverless database product, D1. How should you compare SQLite in Durable Objects and D1?

D1 is a managed database product.

D1 fits into a familiar architecture for developers, where application servers communicate with a database over the network. Application servers are typically Workers; however, D1 also supports external, non-Worker access via an HTTP API, which helps unlock third-party tooling support for D1.

D1 aims for a "batteries included" feature set, including the above HTTP API, database schema management, data import/export, and database query insights.

With D1, your application code and SQL database queries are not colocated which can impact application performance. If performance is a concern with D1, Workers has Smart Placement to dynamically run your Worker in the best location to reduce total Worker request latency, considering everything your Worker talks to, including D1.

SQLite in Durable Objects is a lower-level compute with storage building block for distributed systems.

By design, Durable Objects can only be accessed from Workers.

Durable Objects require a bit more effort, but in return, give you more flexibility and control. With Durable Objects, you must implement two pieces of code that run in different places: a front-end Worker which routes incoming requests from the Internet to a unique Durable Object, and the Durable Object itself, which runs on the same machine as the SQLite database. You get to choose what runs where, and it may be that your application benefits from running some application business logic right next to the database.
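
As a rough sketch, the front-end Worker half of this pattern might look like the following; the MY_DURABLE_OBJECT binding name and the path-based naming scheme are assumptions for illustration, not a prescribed layout.

// Front-end Worker: routes each incoming request to a unique Durable Object.
// The MY_DURABLE_OBJECT binding name and path-based naming are assumptions.
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Derive a stable Durable Object name from a path segment,
    // for example /rooms/lobby -> "lobby".
    const name = url.pathname.split("/")[2] ?? "default";
    const id = env.MY_DURABLE_OBJECT.idFromName(name);
    const stub = env.MY_DURABLE_OBJECT.get(id);

    // Forward the request; the Durable Object itself runs on the same
    // machine as its SQLite storage.
    return stub.fetch(request);
  },
} satisfies ExportedHandler<Env>;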

With SQLite in Durable Objects, you may also need to build some of your own database tooling that comes out-of-the-box with D1.

SQL query pricing and limits are intended to be identical between D1 and SQLite in Durable Objects. During the SQLite in Durable Objects beta, storage per Durable Object is 1 GB; this will be raised to mirror storage per D1 database (10 GB) by general availability.

Indexes for SQLite in Durable Objects

Creating indexes on your most queried tables and most frequently filtered columns reduces how much data is scanned and improves query performance. If you have a read-heavy workload (the most common case), this can be particularly advantageous. Writing to a column referenced in an index adds at least one additional row written to account for updating the index, but this is typically offset by the reduction in rows read that the index provides.
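
As a sketch against the artist table used in the examples above, an index on the filtered column could be created from inside the Durable Object; the index name idx_artist_name is illustrative.

// Create an index on the column used in WHERE clauses so that lookups by
// artistname no longer scan the whole table. The index name is illustrative.
this.sql.exec("CREATE INDEX IF NOT EXISTS idx_artist_name ON artist(artistname);");

// Subsequent filtered queries can use the index.
let row = this.sql.exec("SELECT * FROM artist WHERE artistname = ?;", "Alice").one();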