Backups
Create point-in-time snapshots of sandbox directories and restore them with copy-on-write overlays.
Create a point-in-time snapshot of a directory and upload it to R2 storage.
```ts
await sandbox.createBackup(options: BackupOptions): Promise<DirectoryBackup>
```

Parameters:

- `options` - Backup configuration (see `BackupOptions`):
  - `dir` (required) - Absolute path to the directory to back up (for example, `"/workspace"`)
  - `name` (optional) - Human-readable name for the backup. Maximum 256 characters, no control characters.
  - `ttl` (optional) - Time-to-live in seconds until the backup expires. Default: `259200` (3 days). Must be a positive number.
Returns: `Promise<DirectoryBackup>` containing:

- `id` - Unique backup identifier (UUID)
- `dir` - Directory that was backed up
```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

// Create a backup of /workspace
const backup = await sandbox.createBackup({ dir: "/workspace" });

// Later, restore the backup
await sandbox.restoreBackup(backup);
```

How it works:
- The container creates a compressed squashfs archive from the directory.
- The container uploads the archive directly to R2 using a presigned URL.
- Metadata is stored alongside the archive in R2.
- The local archive is cleaned up.
Throws:

- `InvalidBackupConfigError` - If `dir` is not absolute, contains `..`, the `BACKUP_BUCKET` binding is missing, or the R2 presigned URL credentials are not configured
- `BackupCreateError` - If the container fails to create the archive or the upload to R2 fails
Restore a previously created backup into a directory using FUSE overlayfs (copy-on-write).
```ts
await sandbox.restoreBackup(backup: DirectoryBackup): Promise<RestoreBackupResult>
```

Parameters:

- `backup` - The backup handle returned by `createBackup()`. Contains `id` and `dir`. (see `DirectoryBackup`)

Returns: `Promise<RestoreBackupResult>` containing:

- `success` - Whether the restore succeeded
- `dir` - Directory that was restored
- `id` - Backup ID that was restored
```ts
// Create a named backup with a 24-hour TTL
const backup = await sandbox.createBackup({
  dir: "/workspace",
  name: "before-refactor",
  ttl: 86400,
});

// Store the handle for later use
await env.KV.put(`backup:${userId}`, JSON.stringify(backup));
```

How it works:
- Metadata is downloaded from R2 and the TTL is checked. If expired, an error is thrown (with a 60-second buffer).
- The container downloads the archive directly from R2 using a presigned URL.
- The container mounts the squashfs archive with FUSE overlayfs.
Throws:
- `InvalidBackupConfigError` - If `backup.id` is missing or not a valid UUID, or `backup.dir` is invalid
- `BackupNotFoundError` - If the backup metadata or archive is not found in R2
- `BackupExpiredError` - If the backup TTL has elapsed
- `BackupRestoreError` - If the container fails to restore
Use backups as checkpoints before risky operations.
```ts
// Save a checkpoint before a risky operation
const checkpoint = await sandbox.createBackup({ dir: "/workspace" });

try {
  await sandbox.exec("npm install some-experimental-package");
  await sandbox.exec("npm run build");
} catch (error) {
  // Restore to the checkpoint if something goes wrong
  await sandbox.restoreBackup(checkpoint);
}
```

```ts
import { getSandbox } from "@cloudflare/sandbox";

const sandbox = getSandbox(env.Sandbox, "my-sandbox");

try {
  const backup = await sandbox.createBackup({ dir: "/workspace" });
  console.log(`Backup created: ${backup.id}`);
} catch (error) {
  if (error.code === "INVALID_BACKUP_CONFIG") {
    console.error("Configuration error:", error.message);
  } else if (error.code === "BACKUP_CREATE_FAILED") {
    console.error("Backup failed:", error.message);
  }
}
```

- Concurrent backup and restore operations on the same sandbox are automatically serialized.
- The returned `DirectoryBackup` handle is serializable; store it in KV, D1, or Durable Object storage.
- Overlapping backups are independent. Restoring a parent directory overwrites subdirectory mounts.
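Because the handle is plain data, a JSON round-trip preserves it exactly. The sketch below uses hypothetical helper names (the SDK does not provide these) to show one way to serialize a handle for storage and validate it on the way back out:

```ts
// The DirectoryBackup handle is plain data: { id, dir }.
interface DirectoryBackup {
  readonly id: string;
  readonly dir: string;
}

// Hypothetical helpers; not part of the SDK.
function serializeBackup(backup: DirectoryBackup): string {
  return JSON.stringify(backup);
}

function deserializeBackup(raw: string): DirectoryBackup {
  const parsed = JSON.parse(raw);
  // Reject values that do not look like a backup handle.
  if (typeof parsed?.id !== "string" || typeof parsed?.dir !== "string") {
    throw new Error("Stored value is not a DirectoryBackup handle");
  }
  return { id: parsed.id, dir: parsed.dir };
}
```

Stored this way, the handle can go into KV, D1, or Durable Object storage and be passed back to `restoreBackup()` later.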
The `ttl` value controls when a backup is considered expired. The SDK enforces this at restore time only: when you call `restoreBackup()`, the SDK reads the backup metadata from R2 and checks whether the TTL has elapsed. If it has, the restore is rejected with a `BACKUP_EXPIRED` error.
The TTL does not automatically delete objects from R2. Expired backup archives and metadata remain in your R2 bucket until you delete them. To automatically clean up expired objects, configure an R2 object lifecycle rule on your backup bucket. Without a lifecycle rule, expired backups continue to consume R2 storage.
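Since expiry is only enforced at restore time, you can avoid a doomed restore by recording the creation time alongside the stored handle and pre-checking client-side. A minimal sketch, using a hypothetical helper that is not part of the SDK and assuming you captured `createdAt` yourself when the backup was made:

```ts
// Hypothetical pre-check; the authoritative expiry check still happens
// inside restoreBackup() against the metadata stored in R2.
function isLikelyExpired(
  createdAtMs: number, // Date.now() captured when the backup was created
  ttlSeconds: number,  // the ttl passed to createBackup (default 259200)
  nowMs: number = Date.now()
): boolean {
  // Expired once the TTL window has fully elapsed.
  return nowMs >= createdAtMs + ttlSeconds * 1000;
}
```

When this returns `true`, skip the `restoreBackup()` call (and optionally delete the stale handle) instead of waiting for a `BACKUP_EXPIRED` error.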
```ts
interface BackupOptions {
  dir: string;
  name?: string;
  ttl?: number;
}
```

Fields:

- `dir` (required) - Absolute path to the directory to back up
- `name` (optional) - Human-readable backup name. Maximum 256 characters, no control characters.
- `ttl` (optional) - Time-to-live in seconds. Default: `259200` (3 days). Must be a positive number.
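These constraints can also be pre-checked before calling `createBackup()`. The validator below is a hypothetical sketch that mirrors the documented rules; the SDK performs its own validation and throws `InvalidBackupConfigError` regardless:

```ts
// Hypothetical client-side check mirroring the documented BackupOptions rules.
function validateBackupOptions(opts: { dir: string; name?: string; ttl?: number }): string[] {
  const errors: string[] = [];
  if (!opts.dir.startsWith("/")) errors.push("dir must be an absolute path");
  if (opts.dir.split("/").includes("..")) errors.push("dir must not contain '..'");
  if (opts.name !== undefined) {
    if (opts.name.length > 256) errors.push("name must be at most 256 characters");
    if (/[\x00-\x1F\x7F]/.test(opts.name)) errors.push("name must not contain control characters");
  }
  if (opts.ttl !== undefined && !(opts.ttl > 0)) errors.push("ttl must be a positive number");
  return errors; // empty array means the options look valid
}
```

An empty result does not guarantee the call will succeed (the `BACKUP_BUCKET` binding and R2 credentials are only checked server-side), but it catches the locally detectable mistakes early.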
```ts
interface DirectoryBackup {
  readonly id: string;
  readonly dir: string;
}
```

Fields:

- `id` - Unique backup identifier (UUID)
- `dir` - Directory that was backed up
```ts
interface RestoreBackupResult {
  success: boolean;
  dir: string;
  id: string;
}
```

Fields:

- `success` - Whether the restore succeeded
- `dir` - Directory that was restored
- `id` - Backup ID that was restored
- Storage API - Mount S3-compatible buckets
- Files API - Read and write files
- Wrangler configuration - Configure bindings