Workers Best Practices

Best practices for Workers based on production patterns, Cloudflare's own internal usage, and common issues seen across the developer community.

Configuration

Keep your compatibility date current

The compatibility_date controls which runtime features and bug fixes are available to your Worker. Setting it to today's date on new projects ensures you get the latest behavior. Periodically updating it on existing projects gives you access to new APIs and fixes without changing your code.

{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
}

For more information, refer to Compatibility dates.

Enable nodejs_compat

The nodejs_compat compatibility flag gives your Worker access to Node.js built-in modules like node:crypto, node:buffer, node:stream, and others. Many libraries depend on these modules, and enabling this flag avoids cryptic import errors at runtime.

{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
}

For more information, refer to Node.js compatibility.

Generate binding types with wrangler types

Do not hand-write your Env interface. Run wrangler types to generate a type definition file that matches your actual Wrangler configuration. This catches mismatches between your config and code at compile time instead of at deploy time.

Re-run wrangler types whenever you add or rename a binding.
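One way to keep the generated types from going stale is to chain `wrangler types` into the scripts you already run (a sketch; the script names are illustrative, not a fixed convention):

```jsonc
// package.json
{
  "scripts": {
    "types": "wrangler types",
    "dev": "npm run types && wrangler dev",
    "deploy": "npm run types && wrangler deploy"
  }
}
```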

Terminal window
wrangler types

src/index.js
// ✅ Good: Env is generated by wrangler types and always matches your config
// Do not manually define Env; it drifts from your actual bindings
export default {
  async fetch(request, env) {
    // env.MY_KV, env.MY_BUCKET, etc. are all correctly typed
    const value = await env.MY_KV.get("key");
    return new Response(value);
  },
};

For more information, refer to wrangler types.

Store secrets with wrangler secret, not in source

Secrets (API keys, tokens, database credentials) must never appear in your Wrangler configuration or source code. Use wrangler secret put to store them securely, and access them through env at runtime. For local development, use a .env file (and make sure it is in your .gitignore). For more information, refer to Environment variables.

{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
  // ✅ Good: non-secret configuration lives in version control
  "vars": {
    "API_BASE_URL": "https://api.example.com",
  },
  // 🔴 Bad: never put secrets here
  // "API_KEY": "sk-live-abc123..."
}

Terminal window
# Store secrets securely
wrangler secret put API_KEY

# For local development, use .env (make sure it is in .gitignore)
# API_KEY=sk-test-abc123

For more information, refer to Secrets.

Configure environments deliberately

Wrangler environments let you deploy the same code to separate Workers for production, staging, and development. Each environment creates a distinct Worker named {name}-{env} (for example, my-api-production and my-api-staging).

Treat the root configuration as your base (shared settings), and override per environment. The root Worker (without an environment suffix) is a separate deployment. If you do not intend to use it, do not deploy without specifying an environment.

{
  "name": "my-api",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
  // Shared bindings go in the root config
  "kv_namespaces": [{ "binding": "CACHE", "id": "dev-kv-id" }],
  "env": {
    // Production environment: deploys as "my-api-production"
    "production": {
      "kv_namespaces": [{ "binding": "CACHE", "id": "prod-kv-id" }],
      "routes": [
        { "pattern": "api.example.com/*", "zone_name": "example.com" },
      ],
    },
    // Staging environment: deploys as "my-api-staging"
    "staging": {
      "kv_namespaces": [{ "binding": "CACHE", "id": "staging-kv-id" }],
      "routes": [
        { "pattern": "api-staging.example.com/*", "zone_name": "example.com" },
      ],
    },
  },
}

Terminal window
# Deploy to a specific environment
wrangler deploy --env production
wrangler deploy --env staging

For more information, refer to Environments.

Set up custom domains or routes correctly

Workers support two routing mechanisms, and they serve different purposes:

  • Custom domains: The Worker is the origin. Cloudflare creates DNS records and SSL certificates automatically. Use this when your Worker handles all traffic for a hostname.
  • Routes: The Worker runs in front of an existing origin server. You must have a proxied (orange-clouded) DNS record for the hostname before adding a route.

The most common mistake with routes is missing the DNS record. Without a proxied DNS record, requests to the hostname return ERR_NAME_NOT_RESOLVED and never reach your Worker. If you do not have a real origin, add a proxied AAAA record pointing to 100:: as a placeholder.

{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
  // Option 1: Custom domain. The Worker is the origin; DNS is managed automatically
  "routes": [{ "pattern": "api.example.com", "custom_domain": true }],
  // Option 2: Route. The Worker runs in front of an existing origin
  // Requires a proxied DNS record for shop.example.com
  // "routes": [
  //   { "pattern": "shop.example.com/*", "zone_name": "example.com" }
  // ]
}

For more information, refer to Routing.

Request and response handling

Stream request and response bodies

Streaming large requests and responses is a best practice in any environment: it reduces peak memory usage and improves time-to-first-byte. It matters even more on Workers, which have a 128 MB memory limit, so buffering an entire body with await response.text() or await request.arrayBuffer() will crash your Worker on large payloads.

For request bodies you do consume entirely (JSON payloads, file uploads), enforce a maximum size before reading. This prevents clients from sending data you do not want to process.
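A sketch of one way to enforce that cap (the 1 MB limit and the helper name are illustrative): check the Content-Length header when it is present, and also count bytes while reading so a missing or dishonest header cannot bypass the limit.

```javascript
const MAX_BODY_BYTES = 1_048_576; // 1 MB; illustrative limit

// Read a request body fully, rejecting it once it exceeds maxBytes.
async function readBodyWithLimit(request, maxBytes = MAX_BODY_BYTES) {
  // Fast path: reject based on the declared size, if the client sent one
  const declared = Number(request.headers.get("Content-Length"));
  if (declared > maxBytes) {
    throw new Error("Payload too large");
  }
  // Slow path: count bytes as they arrive, since the header can lie or be absent
  const chunks = [];
  let received = 0;
  const reader = request.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.byteLength;
    if (received > maxBytes) {
      throw new Error("Payload too large");
    }
    chunks.push(value);
  }
  // Concatenate the chunks into a single buffer
  const body = new Uint8Array(received);
  let offset = 0;
  for (const chunk of chunks) {
    body.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return body;
}
```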

Stream data through your Worker using TransformStream to pipe from a source to a destination without holding it all in memory.

src/index.js
export default {
  async fetch(request, env) {
    // 🔴 Bad: buffers the entire response body in memory
    // const response = await fetch("https://api.example.com/large-dataset");
    // const text = await response.text();
    // return new Response(text);

    // ✅ Good: stream the response body through without buffering
    const response = await fetch("https://api.example.com/large-dataset");
    return new Response(response.body, response);
  },
};

When you need to concatenate multiple responses (for example, fetching data from several upstream APIs), pipe each body sequentially into a single writable stream. This avoids buffering any of the responses in memory.

src/concat.js
export default {
  async fetch(request, env, ctx) {
    const urls = [
      "https://api.example.com/part-1",
      "https://api.example.com/part-2",
      "https://api.example.com/part-3",
    ];
    const { readable, writable } = new TransformStream();
    // ✅ Good: pipe each response body sequentially without buffering
    const pipeline = (async () => {
      for (const url of urls) {
        const response = await fetch(url);
        if (response.body) {
          // pipeTo with preventClose keeps the writable open for the next response
          await response.body.pipeTo(writable, { preventClose: true });
        }
      }
      await writable.close();
    })();
    // Keep the pipeline alive after the response is returned; do not leave it floating
    ctx.waitUntil(pipeline);
    // Return the readable side immediately; data streams as it arrives
    return new Response(readable, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  },
};

For more information, refer to Streams.

Use waitUntil for work after the response

ctx.waitUntil() lets you perform work after the response is sent to the client, such as analytics, cache writes, non-critical logging, or webhook notifications. This keeps your response fast while still completing background tasks.

There are two common pitfalls: destructuring ctx (which loses the this binding and throws "Illegal invocation"), and exceeding the 30-second time limit after the response is sent.

src/index.js
export default {
  async fetch(request, env, ctx) {
    const data = await processRequest(request);
    // ✅ Good: send the response immediately, do background work after
    ctx.waitUntil(logToAnalytics(env, data));
    ctx.waitUntil(updateCache(env, data));
    return Response.json(data);
  },
};

// 🔴 Bad: destructuring ctx loses the `this` binding
// async fetch(request, env, ctx) {
//   const { waitUntil } = ctx; // "Illegal invocation" at runtime
//   waitUntil(somePromise);
// }

async function logToAnalytics(env, data) {
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    body: JSON.stringify(data),
  });
}

async function updateCache(env, data) {
  await env.CACHE.put("latest", JSON.stringify(data));
}

For more information, refer to Context.

Architecture

Use bindings for Cloudflare services, not REST APIs

Some Cloudflare services like R2, KV, D1, Queues, and Workflows are available as bindings. Bindings are direct, in-process references that require no network hop, no authentication, and no extra latency. Using the REST API from within a Worker wastes time and adds unnecessary complexity.

src/index.js
export default {
  async fetch(request, env) {
    // 🔴 Bad: calling the REST API from a Worker
    // const response = await fetch(
    //   "https://api.cloudflare.com/client/v4/accounts/.../r2/buckets/.../objects/my-file",
    //   { headers: { Authorization: `Bearer ${env.CF_API_TOKEN}` } },
    // );

    // ✅ Good: use the binding directly; no network hop, no auth needed
    const object = await env.MY_BUCKET.get("my-file");
    if (!object) {
      return new Response("Not found", { status: 404 });
    }
    return new Response(object.body, {
      headers: {
        "Content-Type":
          object.httpMetadata?.contentType ?? "application/octet-stream",
      },
    });
  },
};

Use Queues and Workflows for async and background work

Long-running, retriable, or non-urgent tasks should not block a request. Use Queues and Workflows to move work out of the critical path. They serve different purposes:

Use Queues when you need to decouple a producer from a consumer. Queues are a message broker: one Worker sends a message, another Worker processes it later. They are the right choice for fan-out (one event triggers many consumers), buffering and batching (aggregate messages before writing to a downstream service), and simple single-step background jobs (send an email, fire a webhook, write a log). Queues provide at-least-once delivery with configurable retries per message.

Use Workflows when the background work has multiple steps that depend on each other. Workflows are a durable execution engine: each step's return value is persisted, and if a step fails, only that step is retried โ€” not the entire job. They are the right choice for multi-step processes (charge a card, then create a shipment, then send a confirmation), long-running tasks that need to pause and resume (wait hours or days for an external event or human approval via step.waitForEvent()), and complex conditional logic where later steps depend on earlier results. Workflows can run for hours, days, or weeks.

Use both together when a high-throughput entry point feeds into complex processing. For example, a Queue can buffer incoming orders, and the consumer can create a Workflow instance for each order that requires multi-step fulfillment.

src/index.js
export default {
  async fetch(request, env) {
    const order = await request.json();
    if (order.type === "simple") {
      // ✅ Queue: single-step background job; send a message for async processing
      await env.ORDER_QUEUE.send({
        orderId: order.id,
        action: "send-confirmation-email",
      });
    } else {
      // ✅ Workflow: multi-step durable process (payment, fulfillment, notification)
      await env.FULFILLMENT_WORKFLOW.create({
        params: { orderId: order.id },
      });
    }
    return Response.json({ status: "accepted" }, { status: 202 });
  },
};
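On the consumer side, a Worker with a queue() handler receives batches of these messages. A minimal sketch, assuming the same orderId/action message shape as above (sendConfirmationEmail is a hypothetical helper standing in for your real logic):

```javascript
// Hypothetical helper; replace with your real email-sending logic
async function sendConfirmationEmail(orderId) {
  console.log(`confirmation email sent for order ${orderId}`);
}

// Consumer handler, shown as a plain object here;
// in your Worker, queue() lives on the default export alongside fetch()
const consumer = {
  async queue(batch, env) {
    for (const message of batch.messages) {
      try {
        if (message.body.action === "send-confirmation-email") {
          await sendConfirmationEmail(message.body.orderId);
        }
        message.ack(); // mark this message as successfully processed
      } catch (e) {
        message.retry(); // redeliver only the failed message
      }
    }
  },
};
```

Acknowledging or retrying per message, rather than letting the whole batch fail, keeps one bad message from forcing redelivery of its neighbors.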

For more information, refer to Queues and Workflows.

Use service bindings for Worker-to-Worker communication

When one Worker needs to call another, use service bindings instead of making an HTTP request to a public URL. Service bindings are zero-cost, bypass the public internet, and support type-safe RPC.

src/index.js
import { WorkerEntrypoint } from "cloudflare:workers";

// The "auth" Worker exposes RPC methods
export class AuthService extends WorkerEntrypoint {
  async verifyToken(token) {
    // Token verification logic
    return { userId: "user-123", valid: true };
  }
}

// The "api" Worker calls the auth Worker via a service binding
export default {
  async fetch(request, env) {
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");
    if (!token) {
      return new Response("Unauthorized", { status: 401 });
    }
    // ✅ Good: call another Worker via service binding RPC; no public network hop
    const auth = await env.AUTH_SERVICE.verifyToken(token);
    if (!auth.valid) {
      return new Response("Invalid token", { status: 403 });
    }
    return Response.json({ userId: auth.userId });
  },
};
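The calling Worker declares the binding in its Wrangler configuration. A sketch, assuming the auth Worker is deployed under the name "auth" and exposes the AuthService entrypoint shown above:

```jsonc
{
  "name": "api",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "services": [
    {
      "binding": "AUTH_SERVICE",
      "service": "auth",
      "entrypoint": "AuthService"
    }
  ]
}
```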

Use Hyperdrive for external database connections

Always use Hyperdrive when connecting to a remote PostgreSQL or MySQL database from a Worker. Hyperdrive maintains a regional connection pool close to your database, eliminating the per-request cost of TCP handshake, TLS negotiation, and connection setup. It also caches query results where possible.

Create a new Client on each request; Hyperdrive manages the underlying pool, so client creation is fast. Database drivers such as pg require the nodejs_compat flag.

{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
  "hyperdrive": [{ "binding": "HYPERDRIVE", "id": "<YOUR_HYPERDRIVE_ID>" }],
}

src/index.js
import { Client } from "pg";

export default {
  async fetch(request, env) {
    // ✅ Good: create a new client per request; Hyperdrive pools the underlying connection
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });
    try {
      await client.connect();
      const result = await client.query("SELECT id, name FROM users LIMIT 10");
      return Response.json(result.rows);
    } catch (e) {
      console.error(
        JSON.stringify({ message: "database query failed", error: String(e) }),
      );
      return Response.json({ error: "Database error" }, { status: 500 });
    }
  },
};

// 🔴 Bad: connecting directly to a remote database without Hyperdrive
// Every request pays the full TCP + TLS + auth cost (often 300-500ms)

For more information, refer to Hyperdrive.

Use Durable Objects for WebSockets

Plain Workers can upgrade HTTP connections to WebSockets, but they lack persistent state and hibernation. If the isolate is evicted, the connection is lost because there is no persistent actor to hold it. For reliable, long-lived WebSocket connections, use Durable Objects with the Hibernation API. Durable Objects keep WebSocket connections open even while the object is evicted from memory, and automatically wake up when a message arrives.

Use this.ctx.acceptWebSocket() instead of ws.accept() to enable hibernation. Use setWebSocketAutoResponse for ping/pong heartbeats that do not wake the object.

src/index.js
import { DurableObject } from "cloudflare:workers";

// Parent Worker: upgrades HTTP to WebSocket and routes to a Durable Object
export default {
  async fetch(request, env) {
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("Expected WebSocket", { status: 426 });
    }
    const stub = env.CHAT_ROOM.getByName("default-room");
    return stub.fetch(request);
  },
};

// Durable Object: manages WebSocket connections with hibernation
export class ChatRoom extends DurableObject {
  constructor(ctx, env) {
    super(ctx, env);
    // Auto ping/pong without waking the object
    this.ctx.setWebSocketAutoResponse(
      new WebSocketRequestResponsePair("ping", "pong"),
    );
  }
  async fetch(request) {
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);
    // ✅ Good: acceptWebSocket enables hibernation
    this.ctx.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }
  // Called when a message arrives; the object wakes from hibernation if needed
  async webSocketMessage(ws, message) {
    for (const conn of this.ctx.getWebSockets()) {
      conn.send(typeof message === "string" ? message : "binary");
    }
  }
  async webSocketClose(ws, code, reason, wasClean) {
    ws.close(code, reason);
  }
}

For more information, refer to Durable Objects WebSocket best practices.

Use Workers Static Assets for new projects

Workers Static Assets is the recommended way to deploy static sites, single-page applications, and full-stack apps on Cloudflare. If you are starting a new project, use Workers instead of Pages. Pages continues to work, but new features and optimizations are focused on Workers.

For a purely static site, point assets.directory at your build output. No Worker script is needed. For a full-stack app, add a main entry point and an ASSETS binding to serve static files alongside your API.

{
  // Static site: no Worker script needed
  "name": "my-static-site",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
  "assets": {
    "directory": "./dist",
  },
}
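For the full-stack case, the Worker script handles dynamic routes and falls back to the asset binding for everything else. A minimal sketch (the ASSETS binding name and the /api/ prefix are common conventions, not requirements; in the config, set "main" and "assets": { "directory": "./dist", "binding": "ASSETS" }):

```javascript
// Shown as a plain object so it can be exercised directly;
// in your Worker, export this object as the default export
const handler = {
  async fetch(request, env) {
    const url = new URL(request.url);
    // Dynamic routes handled by the Worker
    if (url.pathname.startsWith("/api/")) {
      return Response.json({ message: "Hello from the API" });
    }
    // Everything else is served from the static asset binding
    return env.ASSETS.fetch(request);
  },
};
```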

For more information, refer to Workers Static Assets.

Observability

Enable Workers Logs and Traces

Production Workers without observability are a black box. Enable logs and traces before you deploy to production. When an intermittent error appears, you need data already being collected to diagnose it.

Enable them in your Wrangler configuration and use head_sampling_rate to control volume and manage costs. A sampling rate of 1 captures everything; lower it for high-traffic Workers.

Use structured JSON logging with console.log so logs are searchable and filterable. Use console.error for errors and console.warn for warnings. These appear at the correct severity level in the Workers Observability dashboard.

{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-02-12",
  "compatibility_flags": ["nodejs_compat"],
  "observability": {
    "enabled": true,
    "logs": {
      // Capture 100% of logs; lower this for high-traffic Workers
      "head_sampling_rate": 1,
    },
    "traces": {
      "enabled": true,
      "head_sampling_rate": 0.01, // Sample 1% of traces
    },
  },
}

src/index.js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    try {
      // ✅ Good: structured JSON is searchable and filterable in the dashboard
      console.log(
        JSON.stringify({
          message: "incoming request",
          method: request.method,
          path: url.pathname,
        }),
      );
      const result = await env.MY_KV.get(url.pathname);
      return new Response(result ?? "Not found", {
        status: result ? 200 : 404,
      });
    } catch (e) {
      // ✅ Good: console.error appears as "error" severity in Workers Observability
      console.error(
        JSON.stringify({
          message: "request failed",
          error: e instanceof Error ? e.message : String(e),
          path: url.pathname,
        }),
      );
      return Response.json({ error: "Internal server error" }, { status: 500 });
    }
    // 🔴 Bad: unstructured string logs are hard to query
    // console.log("Got a request to " + url.pathname);
  },
};

For more information, refer to Workers Logs and Traces.

For more information on all available observability tools, refer to Workers Observability.

Code patterns

Do not store request-scoped state in global scope

Workers reuse isolates across requests. A variable set during one request is still present during the next. This causes cross-request data leaks, stale state, and "Cannot perform I/O on behalf of a different request" errors.

Pass state through function arguments or store it on env bindings. Never in module-level variables.

src/index.js
// 🔴 Bad: global mutable state leaks between requests
// let currentUser = null;
// let requestHeaders = null;

export default {
  async fetch(request, env, ctx) {
    // 🔴 Bad: storing request-scoped data globally
    // currentUser = request.headers.get("X-User-Id");

    // ✅ Good: pass request-scoped data through function arguments
    const userId = request.headers.get("X-User-Id");
    const result = await handleRequest(userId, env);
    return Response.json(result);
  },
};

async function handleRequest(userId, env) {
  return { userId };
}

For more information, refer to Workers errors.

Always await or waitUntil your Promises

A Promise that is not awaited, returned, or passed to ctx.waitUntil() is a floating promise. Floating promises cause silent bugs: dropped results, swallowed errors, and unfinished work. The Workers runtime may terminate your isolate before a floating promise completes.

Enable the no-floating-promises lint rule to catch these at development time. If you use ESLint, enable @typescript-eslint/no-floating-promises. If you use oxlint, enable typescript/no-floating-promises.

Terminal window
# ESLint (typescript-eslint)
npx eslint --rule '{"@typescript-eslint/no-floating-promises": "error"}' src/

# oxlint
npx oxlint --deny typescript/no-floating-promises src/

src/index.js
export default {
  async fetch(request, env, ctx) {
    const data = await request.json();
    // 🔴 Bad: floating promise; result is dropped, errors are swallowed
    // fetch("https://api.example.com/webhook", { method: "POST", body: JSON.stringify(data) });

    // ✅ Good: await if you need the result before responding
    const response = await fetch("https://api.example.com/process", {
      method: "POST",
      body: JSON.stringify(data),
    });

    // ✅ Good: waitUntil if you do not need the result before responding
    ctx.waitUntil(
      fetch("https://api.example.com/webhook", {
        method: "POST",
        body: JSON.stringify(data),
      }),
    );
    return new Response("OK");
  },
};

Security

Use Web Crypto for secure token generation

The Workers runtime provides the Web Crypto API for cryptographic operations. Use crypto.randomUUID() for unique identifiers and crypto.getRandomValues() for random bytes. Never use Math.random() for anything security-sensitive; it is not cryptographically secure.

Node.js node:crypto is also fully supported when nodejs_compat is enabled, so you can use whichever API you or your libraries prefer.

src/index.js
export default {
  async fetch(request, env) {
    // 🔴 Bad: Math.random() is predictable and not suitable for security
    // const token = Math.random().toString(36).substring(2);

    // ✅ Good: cryptographically secure random UUID
    const sessionId = crypto.randomUUID();

    // ✅ Good: cryptographically secure random bytes for tokens
    const tokenBytes = new Uint8Array(32);
    crypto.getRandomValues(tokenBytes);
    const token = Array.from(tokenBytes)
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    return Response.json({ sessionId, token });
  },
};

When comparing secret values (API keys, tokens, HMAC signatures), use crypto.subtle.timingSafeEqual() to prevent timing side-channel attacks. Do not short-circuit on length mismatch. Encode both values to a fixed-size hash first.

src/verify.js
async function verifyToken(provided, expected) {
  const encoder = new TextEncoder();
  // ✅ Good: hash both values to a fixed size, then compare in constant time
  // This avoids leaking the length of the expected value
  const [providedHash, expectedHash] = await Promise.all([
    crypto.subtle.digest("SHA-256", encoder.encode(provided)),
    crypto.subtle.digest("SHA-256", encoder.encode(expected)),
  ]);
  return crypto.subtle.timingSafeEqual(providedHash, expectedHash);

  // 🔴 Bad: direct string comparison leaks timing information
  // return provided === expected;
}

Do not use passThroughOnException as error handling

passThroughOnException() is a fail-open mechanism that sends requests to your origin when your Worker throws an unhandled exception. While it can be useful during migration from an origin server, it hides bugs and makes debugging difficult. Use explicit try/catch blocks with structured error responses instead.

src/index.js
export default {
  async fetch(request, env, ctx) {
    // 🔴 Bad: hides errors by falling through to origin
    // ctx.passThroughOnException();

    // ✅ Good: explicit error handling with structured responses
    try {
      const result = await handleRequest(request, env);
      return Response.json(result);
    } catch (error) {
      const message = error instanceof Error ? error.message : "Unknown error";
      console.error(
        JSON.stringify({
          message: "unhandled error",
          error: message,
          path: new URL(request.url).pathname,
        }),
      );
      return Response.json({ error: "Internal server error" }, { status: 500 });
    }
  },
};

async function handleRequest(request, env) {
  return { status: "ok" };
}

Development and testing

Test with @cloudflare/vitest-pool-workers

The @cloudflare/vitest-pool-workers package runs your tests inside the Workers runtime, giving you access to real bindings (KV, R2, D1, Durable Objects) during tests. This catches issues that Node.js-based tests miss, like unsupported APIs or missing compatibility flags.

One known pitfall: the Vitest pool automatically injects nodejs_compat, so tests pass even if your Wrangler configuration does not have the flag. Always confirm your wrangler.jsonc includes nodejs_compat if your code depends on Node.js built-in modules.
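Wiring the pool up takes a small Vitest config that points at your Wrangler file. A sketch, assuming your config lives at ./wrangler.jsonc:

```typescript
// vitest.config.ts
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Bindings and compatibility settings are read from your Wrangler config
        wrangler: { configPath: "./wrangler.jsonc" },
      },
    },
  },
});
```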

test/index.test.js
import { describe, it, expect } from "vitest";
import { env } from "cloudflare:test";

describe("KV operations", () => {
  it("should store and retrieve a value", async () => {
    await env.MY_KV.put("key", "value");
    const result = await env.MY_KV.get("key");
    expect(result).toBe("value");
  });

  it("should return null for missing keys", async () => {
    const result = await env.MY_KV.get("nonexistent");
    // ✅ Good: test the null case explicitly
    expect(result).toBeNull();
  });
});

For more information, refer to Testing with Vitest.