Limits
| Feature | Workers Free | Workers Paid |
|---|---|---|
| Requests | 100,000/day | No limit |
| CPU time | 10 ms | 5 min |
| Memory | 128 MB | 128 MB |
| Subrequests | 50/request | 10,000/request |
| Simultaneous outgoing connections/request | 6 | 6 |
| Environment variables | 64/Worker | 128/Worker |
| Environment variable size | 5 KB | 5 KB |
| Worker size | 3 MB | 10 MB |
| Worker startup time | 1 second | 1 second |
| Number of Workers ¹ | 100 | 500 |
| Number of Cron Triggers per account | 5 | 250 |
| Number of Static Asset files per Worker version | 20,000 | 100,000 |
| Individual Static Asset file size | 25 MiB | 25 MiB |
¹ If you reach this limit, consider using Workers for Platforms.
| Limit | Value |
|---|---|
| URL size | 16 KB |
| Request header size | 128 KB (total) |
| Response header size | 128 KB (total) |
| Response body size | No enforced limit |
Request body size limits depend on your Cloudflare account plan, not your Workers plan. Requests exceeding these limits return a 413 Request entity too large error.
| Cloudflare Plan | Maximum request body size |
|---|---|
| Free | 100 MB |
| Pro | 100 MB |
| Business | 200 MB |
| Enterprise | 500 MB (by default) |
Enterprise customers can contact their account team or Cloudflare Support for a higher request body limit.
Cloudflare does not enforce response body size limits. CDN cache limits apply: 512 MB for Free, Pro, and Business plans, and 5 GB for Enterprise.
CPU time measures how long the CPU spends executing your Worker code. Waiting on network requests (such as fetch() calls, KV reads, or database queries) does not count toward CPU time.
| Limit | Workers Free | Workers Paid |
|---|---|---|
| CPU time per HTTP request | 10 ms | 5 min (default: 30 seconds) |
| CPU time per Cron Trigger | 10 ms | 30 seconds (interval < 1 hour); 15 min (interval >= 1 hour) |
Most Workers consume very little CPU time. The average Worker uses approximately 2.2 ms per request. Heavier workloads that handle authentication, server-side rendering, or parse large payloads typically use 10-20 ms.
Each isolate has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker consistently exceeds the configured limit, the runtime terminates its execution.
When a Worker exceeds its CPU time limit, Cloudflare returns Error 1102 to the client with the message Worker exceeded resource limits. In the dashboard, this appears as Exceeded CPU Time Limits under Metrics > Errors > Invocation Statuses. In analytics and Logpush, the invocation outcome is exceededCpu.
To resolve a CPU time limit error:
- Increase the CPU time limit — On the Workers Paid plan, you can raise the limit from the default 30 seconds up to 5 minutes (300,000 ms). Set this in your Wrangler configuration or in the dashboard.
- Optimize your code — Use CPU profiling with DevTools to identify CPU-intensive sections of your code.
- Offload work — Move expensive computation to Durable Objects or process data in smaller chunks across multiple requests.
On the Workers Paid plan, you can increase the maximum CPU time from the default 30 seconds to 5 minutes (300,000 ms).
```jsonc
{
  // ...rest of your configuration...
  "limits": {
    "cpu_ms": 300000 // default is 30000 (30 seconds)
  }
  // ...rest of your configuration...
}
```

```toml
[limits]
cpu_ms = 300_000
```

You can also change this in the dashboard: go to Workers & Pages > select your Worker > Settings > adjust the CPU time limit.
- Workers Logs — CPU time and wall time appear in the invocation log.
- Tail Workers / Logpush — CPU time and wall time appear at the top level of the Workers Trace Events object.
- DevTools — Use CPU profiling with DevTools locally to identify CPU-intensive sections of your code.
| Limit | Value |
|---|---|
| Memory per isolate | 128 MB |
Each isolate can consume up to 128 MB of memory, including the JavaScript heap and WebAssembly allocations. This limit is per-isolate, not per-invocation. A single isolate can handle many concurrent requests.
When an isolate exceeds 128 MB, the Workers runtime lets in-flight requests complete and creates a new isolate for subsequent requests. During extremely high load, the runtime may cancel some incoming requests to maintain stability.
When a Worker exceeds its memory limit, Cloudflare returns Error 1102 to the client with the message Worker exceeded resource limits. In the dashboard, this appears as Exceeded Memory under Metrics > Errors > Invocation Statuses. In analytics and Logpush, the invocation outcome is exceededMemory.
You may also see the runtime error Memory limit would be exceeded before EOF when attempting to buffer a response body that exceeds the limit.
To resolve a memory limit error:
- Stream request and response bodies — Use TransformStream or node:stream instead of buffering entire payloads in memory.
- Avoid large in-memory objects — Store large data in KV, R2, or D1 instead of holding it in Worker memory.
- Profile memory usage — Use memory profiling with DevTools locally to identify leaks and high-memory allocations.
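As a sketch of the streaming approach, the hypothetical transformBody() helper below passes chunks through a TransformStream one at a time instead of collecting the whole body in memory (here the "work" is simply uppercasing string chunks):

```js
// Sketch: process a body chunk-by-chunk through a TransformStream instead
// of buffering it. transformBody() is a hypothetical helper, not a
// Workers API; the transform uppercases each chunk as it streams past.
function transformBody(readable) {
  const upper = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk.toUpperCase());
    },
  });
  // pipeThrough returns a new ReadableStream; no chunk is retained after
  // it has been forwarded, so memory use stays flat regardless of size.
  return readable.pipeThrough(upper);
}
```

In a Worker, the same pattern applies to response.body from fetch(): pipe it through a transform and return the resulting stream in a new Response, rather than awaiting response.text().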
To view memory errors in the dashboard:
- Go to Workers & Pages.
- Select the Worker you want to investigate.
- Under Metrics, select Errors > Invocation Statuses and examine Exceeded Memory.
Duration measures wall-clock time from start to end of a Worker invocation. There is no hard limit on duration for HTTP-triggered Workers. As long as the client remains connected, the Worker can continue processing, making subrequests, and setting timeouts.
| Trigger type | Duration limit |
|---|---|
| HTTP request | No limit |
| Cron Trigger | 15 min |
| Durable Object Alarm | 15 min |
| Queue Consumer | 15 min |
When the client disconnects, all tasks associated with that request are canceled. Use event.waitUntil() to delay cancellation for another 30 seconds or until the promise you pass to waitUntil() completes.
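The pattern looks like this (a minimal sketch; the short timer stands in for any real background task such as writing logs or posting analytics):

```js
// Sketch: ctx.waitUntil() keeps background work running after the
// response is returned, and for up to 30 seconds after the client
// disconnects. In a real Worker this object is the default export.
const worker = {
  async fetch(request, env, ctx) {
    // Placeholder background task, e.g. posting analytics or writing logs.
    const background = new Promise((resolve) =>
      setTimeout(() => resolve("logged"), 10),
    );
    // Without waitUntil, this promise could be canceled as soon as the
    // response is returned or the client goes away.
    ctx.waitUntil(background);
    return new Response("ok");
  },
};
```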
Workers scale automatically across the Cloudflare global network. There is no general limit on requests per second.
Accounts on the Workers Free plan have a daily request limit of 100,000 requests, resetting at midnight UTC. When a Worker exceeds this limit, Cloudflare returns Error 1027.
| Route mode | Behavior |
|---|---|
| Fail open | Bypasses the Worker. Requests behave as if no Worker is configured. |
| Fail closed | Returns a Cloudflare 1027 error page. Use this for security-critical Workers. |
You can configure the fail mode for each route by toggling the fail open/fail closed setting on the corresponding route.
A subrequest is any request a Worker makes using the Fetch API or to Cloudflare services like R2, KV, or D1.
| Limit | Workers Free | Workers Paid |
|---|---|---|
| Subrequests per invocation | 50 | 10,000 (up to 10M) |
| Subrequests to internal services | 1,000 | Matches configured limit (default 10,000) |
Each subrequest in a redirect chain counts against this limit. The total number of subrequests may exceed the number of fetch() calls in your code. You can change the subrequest limit per Worker using the limits configuration in your Wrangler configuration file.
There is no set time limit on individual subrequests. As long as the client remains connected, the Worker can continue making subrequests. When the client disconnects, all tasks are canceled. Use event.waitUntil() to delay cancellation for up to 30 seconds.
Use Service Bindings to send requests from one Worker to another on your account without going over the Internet.
Using global fetch() to call another Worker on the same zone without Service Bindings fails. Workers do, however, accept requests sent to a Custom Domain.
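A minimal sketch of the Service Binding pattern (the binding name AUTH is an assumption; it would be declared in your Wrangler configuration):

```js
// Sketch: env.AUTH is a Service Binding to another Worker on the same
// account. Its fetch() dispatches the request directly to that Worker,
// without a round trip over the public Internet.
// In a real Worker this object is the default export.
const worker = {
  async fetch(request, env) {
    return env.AUTH.fetch(request);
  },
};
```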
Each Worker invocation can open up to six simultaneous connections. The following API calls count toward this limit:
- The fetch() method of the Fetch API
- The get(), put(), list(), and delete() methods of Workers KV namespace objects
- The put(), match(), and delete() methods of Cache objects
- The list(), get(), put(), delete(), and head() methods of R2
- The send() and sendBatch() methods of Queues
- Opening a TCP socket using the connect() API
Outbound WebSocket connections also count toward this limit.
Once six connections are open, the runtime queues additional attempts until an existing connection closes. The runtime may close stalled connections (those not actively reading or writing) with a Response closed due to connection limit exception.
If you use fetch() but do not need the response body, call response.body.cancel() to free the connection:
```js
const response = await fetch(url);

// Only read the response body for successful responses
if (response.status <= 299) {
  // Call response.json(), response.text() or otherwise process the body
} else {
  // Explicitly cancel it
  response.body.cancel();
}
```

If the system detects a deadlock (pending connection attempts with no in-progress reads or writes), it cancels the least-recently-used connection to unblock the Worker.
| Limit | Workers Free | Workers Paid |
|---|---|---|
| Variables per Worker (secrets + text) | 64 | 128 |
| Variable size | 5 KB | 5 KB |
| Variables per account | No limit | No limit |
| Limit | Workers Free | Workers Paid |
|---|---|---|
| After compression (gzip) | 3 MB | 10 MB |
| Before compression | 64 MB | 64 MB |
Larger Worker bundles can impact startup time. To check your compressed bundle size:
```sh
wrangler deploy --outdir bundled/ --dry-run

# Output will resemble the below:
# Total Upload: 259.61 KiB / gzip: 47.23 KiB
```

To reduce Worker size:
- Remove unnecessary dependencies and packages.
- Store configuration files, static assets, and binary data in KV, R2, D1, or Workers Static Assets instead of bundling them.
- Split functionality across multiple Workers using Service bindings.
| Limit | Value |
|---|---|
| Startup time | 1 second |
A Worker must parse and execute its global scope (top-level code outside of handlers) within 1 second. Larger bundles and expensive initialization code in global scope increase startup time.
When the platform rejects a deployment because the Worker exceeds the startup time limit, the validation returns the error Script startup exceeded CPU time limit (error code 10021). Wrangler automatically generates a CPU profile that you can import into Chrome DevTools or open in VS Code. Refer to wrangler check startup for more details.
To measure startup time, run npx wrangler@latest deploy or npx wrangler@latest versions upload. Wrangler reports startup_time_ms in the output.
To reduce startup time, avoid expensive work in global scope. Move initialization logic into your handler or to build time. For example, generating or consuming a large schema at the top level is a common cause of exceeding this limit.
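For example, this sketch defers a hypothetical buildSchema() step out of global scope and into the handler, caching the result on first use:

```js
// Sketch: lazy initialization. buildSchema() stands in for any expensive
// setup (parsing a large schema, compiling templates, ...). Running it in
// global scope would count against the 1-second startup limit.
let schema; // cached after the first request

function buildSchema() {
  return { version: 1 }; // placeholder for the expensive work
}

// In a real Worker this object is the default export.
const worker = {
  async fetch(request, env, ctx) {
    schema ??= buildSchema(); // runs on first request, not at startup
    return new Response(JSON.stringify(schema));
  },
};
```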
| Limit | Workers Free | Workers Paid |
|---|---|---|
| Workers per account | 100 | 500 |
If you need more than 500 Workers, consider using Workers for Platforms.
| Limit | Value |
|---|---|
| Routes per zone | 1,000 |
| Routes per zone (wrangler dev --remote) | 50 |
| Custom domains per zone | 100 |
| Routed zones per Worker | 1,000 |
When you run a remote development session using the --remote flag, Cloudflare enforces a limit of 50 routes per zone. The Quick Editor in the Cloudflare dashboard also uses wrangler dev --remote, so the same limit applies.
If your zone has more than 50 routes, you cannot run a remote session until you remove routes to get under the limit.
If you require more than 1,000 routes or 1,000 routed zones per Worker, consider using Workers for Platforms. If you require more than 100 custom domains per zone, consider using a wildcard route.
| Feature | Workers Free | Workers Paid |
|---|---|---|
| Maximum object size | 512 MB | 512 MB |
| Calls per request | 50 | 1,000 |
Calls per request is the number of put(), match(), or delete() Cache API calls per request. This shares the same quota as subrequests (fetch()).
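As a sketch, a typical cache-then-fetch handler spends that shared quota like this (the accounting comments reflect the limits above):

```js
// Sketch: each Cache API call (match, put) counts against the per-request
// quota, which is shared with fetch() subrequests.
// In a real Worker this object is the default export.
const worker = {
  async fetch(request, env, ctx) {
    const cache = caches.default;
    let response = await cache.match(request); // 1 Cache API call
    if (!response) {
      response = await fetch(request); // 1 subrequest
      // Cache a copy for next time; completes after the response returns.
      ctx.waitUntil(cache.put(request, response.clone())); // 1 Cache API call
    }
    return response;
  },
};
```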
| Limit | Value |
|---|---|
| Log data per request | 256 KB |
This limit covers all data emitted via console.log() statements, exceptions, request metadata, and headers for a single request. After exceeding this limit, the system does not record additional context for that request in logs, tail logs, or Tail Workers.
Refer to the Workers Trace Event Logpush documentation for limits on fields sent to Logpush destinations.
Refer to the Image Resizing documentation for limits that apply when using Image Resizing with Workers.
| Limit | Workers Free | Workers Paid |
|---|---|---|
| Files per Worker version | 20,000 | 100,000 |
| Individual file size | 25 MiB | 25 MiB |
| _headers rules | 100 | 100 |
| _headers characters per line | 2,000 | 2,000 |
| _redirects static redirects | 2,000 | 2,000 |
| _redirects dynamic redirects | 100 | 100 |
| _redirects total | 2,100 | 2,100 |
| _redirects characters per rule | 1,000 | 1,000 |
If your Worker is on an Unbound plan, limits match the Workers Paid plan.
If your Worker is on a Bundled plan, limits match the Workers Paid plan with these exceptions:
| Feature | Bundled plan limit |
|---|---|
| Subrequests | 50/request |
| CPU time (HTTP requests) | 50 ms |
| CPU time (Cron Triggers) | 50 ms |
| Cache API calls/request | 50 |
Bundled plan Workers have no duration limits for Cron Triggers, Durable Object Alarms, or Queue Consumers.