Limits
Account plan limits

Feature | Workers Free | Workers Paid |
---|---|---|
Subrequests | 50/request | 1000/request |
Simultaneous outgoing connections/request | 6 | 6 |
Environment variables | 64/Worker | 128/Worker |
Environment variable size | 5 KB | 5 KB |
Worker size | 3 MB | 10 MB |
Worker startup time | 400 ms | 400 ms |
Number of Workers¹ | 100 | 500 |
Number of Cron Triggers per account | 5 | 250 |
¹ If you are running into limits, your project may be a good fit for Workers for Platforms.
URLs have a limit of 16 KB.
Request headers have a total limit of 32 KB, with each individual header limited to 16 KB.
Cloudflare has network-wide limits on the request body size. This limit is tied to your Cloudflare account's plan, which is separate from your Workers plan. When the body of a POST/PUT/PATCH request exceeds your plan's limit, the request is rejected with a 413 Request Entity Too Large error.
Cloudflare Enterprise customers may contact their account team or Cloudflare Support to request a body size limit beyond 500 MB.
Cloudflare Plan | Maximum body size |
---|---|
Free | 100 MB |
Pro | 100 MB |
Business | 200 MB |
Enterprise | 500 MB (by default) |
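A Worker sitting in front of an upload endpoint can reject oversized bodies early rather than letting them hit the network-wide limit. A minimal sketch (the helper name and the 100 MB threshold are illustrative, not part of any Cloudflare API):

```javascript
// Illustrative threshold: the 100 MB Free/Pro plan limit.
const MAX_BODY_BYTES = 100 * 1024 * 1024;

// Return true when the declared Content-Length exceeds the limit.
// Requests without a Content-Length header are treated as size 0 here;
// a real deployment would need a policy for chunked uploads.
function exceedsBodyLimit(headers, limit = MAX_BODY_BYTES) {
  const declared = Number(headers.get("content-length") ?? 0);
  return declared > limit;
}
```

A handler could call this before reading the body and return a 413 itself, giving clients a faster failure.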
Cloudflare does not enforce response limits, but cache limits for Cloudflare's CDN are observed. Maximum file size is 512 MB for Free, Pro, and Business customers and 5 GB for Enterprise customers.
Worker limits

Feature | Workers Free | Workers Paid |
---|---|---|
Request | 100,000 requests/day, 1,000 requests/min | No limit |
Worker memory | 128 MB | 128 MB |
CPU time | 10 ms | 30 s (HTTP request), 15 min (Cron Trigger) |
Duration | No limit | No limit for Workers; 15 min duration limit for Cron Triggers, Durable Object Alarms, and Queue Consumers |
Duration is a measurement of wall-clock time: the total amount of time from the start to the end of an invocation of a Worker. There is no hard limit on the duration of a Worker. As long as the client that sent the request remains connected, the Worker can continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client request are canceled. Use `event.waitUntil()` to delay cancellation for another 30 seconds or until the promise passed to `waitUntil()` completes.
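A minimal sketch of this pattern using the module-syntax equivalent, `ctx.waitUntil()` (the `recordMetrics` task and its contents are hypothetical):

```javascript
// Sketch: return the response immediately, but keep the invocation
// alive until the deferred task settles (or ~30 seconds pass).
const worker = {
  async fetch(request, env, ctx) {
    // Hypothetical deferred work, e.g. writing analytics somewhere.
    ctx.waitUntil(recordMetrics(request.url));
    return new Response("ok");
  },
};

async function recordMetrics(url) {
  // Placeholder for async work that should not block the response.
}
```

In a deployed Worker this object would be the default export; it is shown as a plain object here so the handler can be exercised directly.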
CPU time is the amount of time the CPU actually spends doing work, during a given request. Most Workers requests consume less than a millisecond of CPU time. It is rare to find normally operating Workers that exceed the CPU time limit.
Each isolate has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured.
Using DevTools locally can help identify CPU intensive portions of your code. See the CPU profiling with DevTools documentation to learn more.
You can also set a custom limit on the amount of CPU time that can be used during each invocation of your Worker. To do so, navigate to the Workers section in the Cloudflare dashboard. Select the specific Worker you wish to modify, then click on the "Settings" tab where you can adjust the CPU time limit.
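If you manage your Worker with Wrangler, the same cap can be expressed in configuration; a sketch, assuming the `limits` field of the Wrangler configuration (the value shown is illustrative):

```toml
# wrangler.toml -- illustrative: cap each invocation at 100 ms of CPU time
[limits]
cpu_ms = 100
```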
Cache API limits

Feature | Workers Free | Workers Paid |
---|---|---|
Maximum object size | 512 MB | 512 MB |
Calls/request | 50 | 1,000 |
Calls/request means the number of calls to the `put()`, `match()`, or `delete()` Cache API methods per request, using the same quota as subrequests (`fetch()`).
Workers automatically scale onto thousands of Cloudflare global network servers around the world. There is no general limit to the number of requests per second Workers can handle.
Cloudflare's abuse protection methods do not affect well-intentioned traffic. However, if you send many thousands of requests per second from a small number of client IP addresses, you can inadvertently trigger Cloudflare's abuse protection. If you expect to receive `1015` errors in response to traffic, or expect your application to incur these errors, contact Cloudflare Support to increase your limit. Cloudflare's anti-abuse Workers Rate Limiting does not apply to Enterprise customers.
You can confirm whether you have been rate limited by anti-abuse Workers Rate Limiting by logging in to the Cloudflare dashboard, selecting your account and zone, and going to Security > Events. Find the event and expand it. If the Rule ID is `worker`, the request was blocked by anti-abuse Workers Rate Limiting.
The burst rate and daily request limits apply at the account level, meaning that requests on your `*.workers.dev` subdomain count toward the same limit as your zones. Upgrade to a Workers Paid plan to automatically lift these limits.
Accounts on the Workers Free plan are subject to a burst rate limit of 1,000 requests per minute. Users visiting a rate-limited site will receive a Cloudflare `1015` error page. However, if you are calling your Worker programmatically, you can detect the rate limit page and handle it yourself by looking for HTTP status code `429`.
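A minimal sketch of such handling (it assumes only standard HTTP semantics; whether a `Retry-After` header accompanies the rate-limit response may vary):

```javascript
// Sketch: detect a 429 rate-limit response and decide how long to back
// off, using the optional Retry-After header (in seconds) when present.
function rateLimitDelay(response) {
  if (response.status !== 429) return null; // not rate limited
  const retryAfter = Number(response.headers.get("retry-after") ?? 1);
  // Fall back to 1 second if the header is missing or malformed.
  return Number.isFinite(retryAfter) ? retryAfter : 1;
}
```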
Workers being rate-limited by Anti-Abuse Protection are also visible from the Cloudflare dashboard:
- Log in to the Cloudflare dashboard and select your account and your website.
- Select Security > Events and scroll to the Activity log.
- Review the log for a Web Application Firewall block event with a `ruleID` of `worker`.
Accounts on the Workers Free plan are subject to a daily request limit of 100,000 requests. Free plan daily request counts reset at midnight UTC. A Worker that fails as a result of daily request limit errors can be configured, by toggling its corresponding route, in one of two modes: 1) fail open and 2) fail closed.
Routes in fail open mode will bypass the failing Worker and prevent it from operating on incoming traffic. Incoming requests will behave as if there were no Worker.
Routes in fail closed mode will display a Cloudflare `1027` error page to visitors, signifying the Worker has been temporarily disabled. Cloudflare recommends this option if your Worker is performing security-related tasks.
Only one Workers instance runs on each of Cloudflare's many global network servers. Each Workers instance can consume up to 128 MB of memory. Use global variables to persist data between requests on individual nodes. Note, however, that nodes are occasionally evicted from memory.
If a Worker processes a request that pushes the Worker over the 128 MB limit, the Cloudflare Workers runtime may cancel one or more requests. To view these errors, as well as CPU limit overages:
- Log in to the Cloudflare dashboard and select your account.
- Select Workers & Pages and in Overview, select the Worker you would like to investigate.
- Under Metrics, select Errors > Invocation Statuses and examine Exceeded Memory.
Use the TransformStream API to stream responses if you are concerned about memory usage. This avoids loading an entire response into memory.
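A sketch of that streaming pattern (the function name is illustrative):

```javascript
// Sketch: forward a response body through a TransformStream instead of
// buffering it, so memory usage stays flat regardless of body size.
function streamThrough(upstream) {
  const { readable, writable } = new TransformStream();
  // In a Worker, pass this promise to ctx.waitUntil() so the copy can
  // finish even after the handler has returned the response.
  upstream.body.pipeTo(writable);
  return new Response(readable, {
    status: upstream.status,
    headers: upstream.headers,
  });
}
```

The client starts receiving bytes as soon as they arrive from upstream, rather than after the whole body has been read.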
Using DevTools locally can help identify memory leaks in your code. See the memory profiling with DevTools documentation to learn more.
A subrequest is any request that a Worker makes to either Internet resources using the Fetch API or requests to other Cloudflare services like R2, KV, or D1.
To make subrequests from your Worker to another Worker on your account, use Service Bindings. Service bindings allow you to send HTTP requests to another Worker without those requests going over the Internet.
If you attempt to use global `fetch()` to make a subrequest to another Worker on your account that runs on the same zone, without Service Bindings, the request will fail.
If you make a subrequest from your Worker to a target Worker that runs on a Custom Domain rather than a route, the request will be allowed.
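A minimal sketch of a Service Binding call, assuming a binding named `AUTH` has been configured for the Worker (the binding name and handler are illustrative):

```javascript
// Sketch: forward the incoming request to another Worker over a Service
// Binding (bound here as env.AUTH); no public Internet hop is involved.
const gateway = {
  async fetch(request, env) {
    return env.AUTH.fetch(request);
  },
};
```

In a deployed Worker this object would be the default export; it is a plain object here so it can be exercised with a stubbed `env`.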
You can make 50 subrequests per request on Workers Free, and 1,000 subrequests per request on Workers Paid. Each subrequest in a redirect chain counts against this limit. This means that the number of subrequests a Worker makes could be greater than the number of `fetch(request)` calls in the Worker.
For subrequests to internal services like Workers KV and Durable Objects, the subrequest limit is 1,000 per request, regardless of the usage model configured for the Worker.
There is no set limit on the amount of real time a Worker may use. As long as the client which sent a request remains connected, the Worker may continue processing, making subrequests, and setting timeouts on behalf of that request.
When the client disconnects, all tasks associated with that client's request are proactively canceled. If the Worker passed a promise to `event.waitUntil()`, cancellation will be delayed until the promise has completed or until an additional 30 seconds have elapsed, whichever happens first.
You can open up to six connections simultaneously for each invocation of your Worker. The connections opened by the following API calls all count toward this limit:

- The `fetch()` method of the Fetch API.
- The `get()`, `put()`, `list()`, and `delete()` methods of Workers KV namespace objects.
- The `put()`, `match()`, and `delete()` methods of Cache objects.
- The `list()`, `get()`, `put()`, `delete()`, and `head()` methods of R2.
- The `send()` and `sendBatch()` methods of Queues.
- Opening a TCP socket using the `connect()` API.
Once an invocation has six connections open, it can still attempt to open additional connections:

- These attempts are put in a pending queue; the connections will not be initiated until one of the currently open connections has closed.
- Earlier connections can delay later ones: if a Worker tries to make many simultaneous subrequests, its later subrequests may appear to take longer to start.
If you have cases in your application that use `fetch()` but do not require consuming the response body, you can avoid the unread response body consuming a concurrent connection by using `response.body.cancel()`.
For example, if you want to check whether the HTTP response code is successful (2xx) before consuming the body, you should explicitly cancel the pending response body:
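A sketch of this pattern (the `fetcher` parameter is an illustrative seam so the helper can be exercised without a live endpoint):

```javascript
// Sketch: check the status before touching the body, and cancel bodies
// you will not read so they stop holding a connection open.
async function isHealthy(url, fetcher = fetch) {
  const response = await fetcher(url);
  if (response.ok) {
    return true; // 2xx; the body may now be consumed if needed
  }
  await response.body.cancel(); // discard the unread body
  return false;
}
```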
This will free up an open connection.
If the system detects that a Worker is deadlocked on open connections — for example, if the Worker has pending connection attempts but has no in-progress reads or writes on the connections that it already has open — then the least-recently-used open connection will be canceled to unblock the Worker.
If the Worker later attempts to use a canceled connection, an exception will be thrown. These exceptions should rarely occur in practice, though, since it is uncommon for a Worker to open a connection that it does not have an immediate use for.
The maximum number of environment variables (secret and text combined) for a Worker is 128 variables on the Workers Paid plan, and 64 variables on the Workers Free plan. There is no limit to the number of environment variables per account.
Each environment variable has a size limitation of 5 KB.
A Worker can be up to 10 MB in size after compression on the Workers Paid plan, and up to 3 MB on the Workers Free plan.
You can assess the size of your Worker bundle after compression by performing a dry run with `wrangler` and reviewing the final compressed (`gzip`) size output by `wrangler`:
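For example (flags and output wording may vary by Wrangler version; `dist` is an arbitrary output directory):

```sh
# Build the Worker without deploying; Wrangler prints the final
# compressed (gzip) bundle size in its output.
npx wrangler deploy --dry-run --outdir dist
```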
Note that larger Worker bundles can impact the start-up time of the Worker, as the Worker needs to be loaded into memory. You should consider removing unnecessary dependencies and/or using Workers KV, a D1 database or R2 to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code.
A Worker must parse and execute its global scope (top-level code outside of any handlers) within 400 ms. Worker size can impact startup time because there is more code to parse and evaluate. Avoiding expensive code in the global scope can also keep startup efficient.
You can measure your Worker's startup time by deploying it to Cloudflare using Wrangler. When you run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`, Wrangler will output the startup time of your Worker in the command-line output, using the `startup_time_ms` field in the Workers Script API or Workers Versions API.
If you are having trouble staying under this limit, consider profiling using DevTools locally to learn how to optimize your code.
You can have up to 500 Workers on your account on the Workers Paid plan, and up to 100 Workers on the Workers Free plan.
If you need more than 500 Workers, consider using Workers for Platforms.
Each zone has a limit of 1,000 routes. If you require more than 1,000 routes on your zone, consider using Workers for Platforms or request an increase to this limit.
Each zone has a limit of 100 custom domains. If you require more than 100 custom domains on your zone, consider using a wildcard route or request an increase to this limit.
When configuring routing, the maximum number of zones that can be referenced by a Worker is 1,000. If you require more than 1,000 zones on your Worker, consider using Workers for Platforms or request an increase to this limit.
When using Image Resizing with Workers, refer to Image Resizing documentation for more information on the applied limits.
You can emit a maximum of 128 KB of data (across `console.log()` statements, exceptions, request metadata, and headers) to the console for a single request. After you exceed this limit, further context associated with the request will not be recorded in logs, will not appear when tailing your Worker's logs, and will not be available to a Tail Worker.
Refer to the Workers Trace Event Logpush documentation for information on the maximum size of fields sent to logpush destinations.
If your Worker is on an Unbound plan, your limits are exactly the same as the Workers Paid plan.
If your Worker is on a Bundled plan, your limits are the same as the Workers Paid plan except for the following differences:
- Your limit for subrequests is 50/request
- Your limit for CPU time is 50 ms for HTTP requests and 50 ms for Cron Triggers
- You have no Duration limits for Cron Triggers, Durable Object alarms, or Queue consumers
- Your Cache API limits for calls/requests is 50
Review other developer platform resource limits.