# Demos and architectures
URL: https://developers.cloudflare.com/workers/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use Workers within your existing application and architecture.
## Demos
Explore the following demo applications for Workers.
## Reference architectures
Explore the following reference architectures that use Workers:
---
# Glossary
URL: https://developers.cloudflare.com/workers/glossary/
import { Glossary } from "~/components";
Review the definitions for terms used across Cloudflare's Workers documentation.
---
# Cloudflare Workers
URL: https://developers.cloudflare.com/workers/
import {
CardGrid,
Description,
Feature,
LinkButton,
LinkTitleCard,
Plan,
RelatedProduct,
Render,
} from "~/components";
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Cloudflare Workers provides a [serverless](https://www.cloudflare.com/learning/serverless/what-is-serverless/) execution environment that allows you to create new applications or augment existing ones without configuring or maintaining infrastructure.
Cloudflare Workers runs on [Cloudflare’s global network](https://www.cloudflare.com/network/) in hundreds of cities worldwide, offering both [Free and Paid plans](/workers/platform/pricing/).
Get started
Workers dashboard
---
## Features
The Workers command-line interface, Wrangler, allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects.
Bindings allow your Workers to interact with resources on the Cloudflare developer platform, including [R2](/r2/), [KV](/kv/concepts/how-kv-works/), [Durable Objects](/durable-objects/), and [D1](/d1/).
The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser against any site. No setup required.
---
## Related products
Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network.
Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Create new serverless SQL databases to query from your Workers and Pages projects.
A globally distributed coordination API with strongly consistent storage.
Create a global, low-latency, key-value data storage.
Send and receive messages with guaranteed delivery and no charges for egress bandwidth.
Turn your existing regional database into a globally distributed database.
Build full-stack AI applications with Vectorize, Cloudflare’s vector database.
Offload third-party tools and services to the cloud and improve the speed and security of your website.
---
## More resources
New to Workers? Get started with the Workers Learning Path.
Learn about Free and Paid plans.
Learn about plan limits (Free plans get 100,000 requests per day).
Learn which storage option is best for your project.
Connect with the Workers community on Discord to ask questions, share what you
are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and
what is new in Cloudflare Workers.
---
# Local development
URL: https://developers.cloudflare.com/workers/local-development/
Cloudflare Workers and most connected resources can be fully developed and tested locally, providing confidence that the applications you build locally will work the same way in production. This allows you to be more efficient and effective by providing a faster feedback loop and removing the need to [test against remote resources](#develop-using-remote-resources-and-bindings). Local development runs against the same production runtime used by Cloudflare Workers, [workerd](https://github.com/cloudflare/workerd).
In addition to testing Workers locally with [`wrangler dev`](/workers/wrangler/commands/#dev), the use of Miniflare allows you to test other Developer Platform products locally, such as [R2](/r2/), [KV](/kv/), [D1](/d1/), and [Durable Objects](/durable-objects/).
## Start a local development server
:::note
This guide assumes you are using [Wrangler v3.0](https://blog.cloudflare.com/wrangler3/) or later.
Users new to Wrangler CLI and Cloudflare Workers should visit the [Wrangler Install/Update guide](/workers/wrangler/install-and-update) to install `wrangler`.
:::
Wrangler provides a [`dev`](/workers/wrangler/commands/#dev) command that starts a local server for developing your Worker. Make sure you have `npm` installed and run the following in the folder containing your Worker application:
```sh
npx wrangler dev
```
`wrangler dev` will run the Worker directly on your local machine. `wrangler dev` uses a combination of `workerd` and [Miniflare](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare), a simulator that allows you to test your Worker against additional resources like KV, Durable Objects, WebSockets, and more.
### Supported resource bindings in different environments
| Product | Local Dev Supported | Remote Dev Supported |
| ----------------------------------- | ------------------- | -------------------- |
| AI | ✅[^1] | ✅ |
| Assets | ✅ | ✅ |
| Analytics Engine | ✅ | ✅ |
| Browser Rendering | ❌ | ✅ |
| D1 | ✅ | ✅ |
| Durable Objects | ✅ | ✅ |
| Email Bindings | ❌ | ✅ |
| Hyperdrive | ✅[^2] | ✅ |
| Images | ✅ | ✅ |
| KV | ✅ | ✅ |
| mTLS | ❌ | ✅ |
| Queues | ✅ | ❌ |
| R2 | ✅ | ✅ |
| Rate Limiting | ✅ | ✅ |
| Service Bindings (multiple workers) | ✅ | ✅ |
| Vectorize | ✅[^3] | ✅ |
| Workflows | ✅ | ❌ |
For any bindings that are not supported locally, you will need to use the [`--remote` flag](#develop-using-remote-resources-and-bindings) in Wrangler, for example `wrangler dev --remote`.
[^1]: Using Workers AI always accesses your Cloudflare account in order to run AI models and will incur usage charges even in local development.
[^2]: Using Hyperdrive with local development allows you to connect to a local database (running on `localhost`) but you cannot connect to a remote database. To connect to a remote database, use remote development.
[^3]: Using Vectorize always accesses your Cloudflare account to run queries, and will incur usage charges even in local development.
## Work with local data
When running `wrangler dev`, resources such as KV, Durable Objects, D1, and R2 will be stored and persisted locally and not affect the production resources.
### Use bindings in Wrangler configuration files
[Wrangler](/workers/wrangler/) will automatically create local versions of bindings found in the [Wrangler configuration file](/workers/wrangler/configuration/). These local resources will not have data in them initially, so you will need to add data manually via Wrangler commands and the [`--local` flag](#use---local-flag).
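For example, a KV namespace declared in your configuration is given an empty local counterpart automatically. The binding name and ID below are placeholders:

```toml
kv_namespaces = [
  { binding = "MY_KV_NAMESPACE", id = "<your-namespace-id>" }
]
```

With this configuration, `wrangler dev` creates an empty local namespace that `env.MY_KV_NAMESPACE` reads from, without touching the remote namespace identified by `id`.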
When you run `wrangler dev` Wrangler stores local resources in a `.wrangler/state` folder, which is automatically created.
If you prefer to specify a directory, you can use the [`--persist-to`](/workers/wrangler/commands/#dev) flag with `wrangler dev` like this:
```sh
npx wrangler dev --persist-to <directory>
```
Using this will write all local storage and cache to the specified directory instead of `.wrangler`.
:::note
This local persistence folder should be added to your `.gitignore` file.
:::
### Use `--local` flag
The following [Wrangler commands](/workers/wrangler/commands/) have a `--local` flag which allows you to create, update, and delete local data during development:
| Command |
| ---------------------------------------------------- |
| [`d1 execute`](/workers/wrangler/commands/#execute) |
| [`kv key`](/workers/wrangler/commands/#kv-key) |
| [`kv bulk`](/workers/wrangler/commands/#kv-bulk) |
| [`r2 object`](/workers/wrangler/commands/#r2-object) |
If using `--persist-to` to specify a custom folder with `wrangler dev`, you should also add `--persist-to` with the same directory name, along with the `--local` flag, when running the commands above. For example, to put a custom KV key into a local namespace via the CLI you would run:
```sh
npx wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local
```
Running `wrangler kv key put` will create a new key `test` with a value of `12345` on the local namespace specified via the binding `MY_KV_NAMESPACE` in the [Wrangler configuration file](/workers/wrangler/configuration/). This example command sets the local persistence directory to `worker-local` using `--persist-to`, to ensure that the data is created in the correct location. If `--persist-to` was not set, it would create the data in the `.wrangler` folder.
### Clear Wrangler's local storage
If you need to clear local storage entirely, delete the `.wrangler/state` folder. You can also be more fine-grained and delete specific resource folders within `.wrangler/state`.
Any deleted folders will be created automatically the next time you run `wrangler dev`.
## Local-only environment variables
When running `wrangler dev`, variables in the [Wrangler configuration file](/workers/wrangler/configuration/) are automatically overridden by values defined in a `.dev.vars` file located in the root directory of your worker. This is useful for providing values you do not want to check in to source control.
```shell
API_HOST = "localhost:4000"
API_ACCOUNT_ID = "local_example_user"
```
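During `wrangler dev`, these values are exposed on the `env` parameter of your handlers. A minimal sketch (the handler is hypothetical; the variable names match the `.dev.vars` example above):

```js
// Hypothetical Worker; locally, env.API_HOST and env.API_ACCOUNT_ID come
// from .dev.vars, while in production they come from your Wrangler
// configuration or dashboard settings.
const worker = {
	async fetch(request, env) {
		return new Response(
			`API host: ${env.API_HOST}, account: ${env.API_ACCOUNT_ID}`,
		);
	},
};

export default worker;
```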
## Develop using remote resources and bindings
There may be times you want to develop against remote resources and bindings. To run `wrangler dev` in remote mode, add the `--remote` flag, which will run both your code and resources remotely:
```sh
npx wrangler dev --remote
```
For some products like KV and R2, remote resources used for `wrangler dev --remote` must be specified with preview IDs/names in the [Wrangler configuration file](/workers/wrangler/configuration/), such as `preview_id` for KV or `preview_bucket_name` for R2. Resources used for remote mode (preview) can be different from resources used for production to prevent changing production data during development. To use production data in `wrangler dev --remote`, set the preview ID/name of the resource to the ID/name of your production resource.
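For instance, a KV namespace with a separate preview namespace might be declared like this (both IDs are placeholders):

```toml
kv_namespaces = [
  { binding = "MY_KV_NAMESPACE", id = "<production-id>", preview_id = "<preview-id>" }
]
```

Here `wrangler dev --remote` reads and writes the namespace identified by `preview_id`, while your deployed Worker uses `id`.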
## Customize `wrangler dev`
You can customize how `wrangler dev` works to fit your needs. Refer to [the `wrangler dev` documentation](/workers/wrangler/commands/#dev) for available configuration options.
:::caution
There is a bug associated with how outgoing requests are handled when using `wrangler dev --remote`. For more information, read the [Known issues section](/workers/platform/known-issues/#wrangler-dev).
:::
## Related resources
- [D1 local development](/d1/best-practices/local-development/) - The official D1 guide to local development and testing.
- [DevTools](/workers/observability/dev-tools) - Guides to using DevTools to debug your Worker locally.
---
# Playground
URL: https://developers.cloudflare.com/workers/playground/
import { LinkButton } from "~/components";
:::note[Browser support]
The Cloudflare Workers Playground is currently only supported in Firefox and Chrome desktop browsers. In Safari, it will show a `PreviewRequestFailed` error message.
:::
The quickest way to experiment with Cloudflare Workers is in the [Playground](https://workers.cloudflare.com/playground). It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser.
The Playground uses the same editor as the authenticated experience. The Playground provides the ability to [share](#share) the code you write as well as [deploy](#deploy) it instantly to Cloudflare's global network. This way, you can try new things out and deploy them when you are ready.
Launch the Playground
## Hello Cloudflare Workers
When you arrive in the Playground, you will see this default code:
```js
import welcome from "welcome.html";

/**
 * @typedef {Object} Env
 */

export default {
	/**
	 * @param {Request} request
	 * @param {Env} env
	 * @param {ExecutionContext} ctx
	 * @returns {Response}
	 */
	fetch(request, env, ctx) {
		console.log("Hello Cloudflare Workers!");
		return new Response(welcome, {
			headers: {
				"content-type": "text/html",
			},
		});
	},
};
```
This is an example of a multi-module Worker that is receiving a [request](/workers/runtime-apis/request/), logging a message to the console, and then returning a [response](/workers/runtime-apis/response/) body containing the content from `welcome.html`.
Refer to the [Fetch handler documentation](/workers/runtime-apis/handlers/fetch/) to learn more.
## Use the Playground
As you edit the default code, the Worker will auto-update such that the preview on the right shows your Worker running just as it would in a browser. If your Worker uses URL paths, you can enter those in the input field on the right to navigate to them. The Playground provides type-checking via JSDoc comments and [`workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types). The Playground also provides pretty error pages in the event of application errors.
To test a raw HTTP request (for example, to test a `POST` request), go to the **HTTP** tab and select **Send**. You can add and edit headers via this panel, as well as edit the body of a request.
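As an example of something to exercise from the **HTTP** tab, here is a minimal Worker (hypothetical, not part of the default Playground code) that echoes the body of a `POST` request:

```js
// Hypothetical Worker: echoes POST bodies, and explains itself otherwise.
const worker = {
	async fetch(request) {
		if (request.method === "POST") {
			const body = await request.text();
			return new Response(`You sent: ${body}`, {
				headers: { "content-type": "text/plain" },
			});
		}
		return new Response("Send a POST request to echo its body.");
	},
};

export default worker;
```

Paste this into the editor, then use the **HTTP** tab to send a `POST` with a body and inspect the echoed response.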
## DevTools
For debugging Workers inside the Playground, use the developer tools at the bottom of the Playground's preview panel to view `console.log` output, network requests, and memory and CPU usage. The developer tools for the Workers Playground work similarly to the developer tools in Chrome or Firefox, and are the same developer tools users have access to in the [Wrangler CLI](/workers/wrangler/install-and-update/) and the authenticated dashboard.
### Network tab
**Network** shows the outgoing requests from your Worker — that is, any calls to `fetch` inside your Worker code.
### Console Logs
The console displays the output of any `console.log` calls from the current preview run, as well as from any other preview runs in that session.
### Sources
**Sources** displays the sources that make up your Worker. Note that KV, text, and secret bindings are only accessible when authenticated with an account. This means you must be logged in to the dashboard, or use [`wrangler dev`](/workers/wrangler/commands/#dev) with your account credentials.
## Share
To share what you have created, select **Copy Link** in the top right of the screen. This will copy a unique URL to your clipboard that you can share with anyone. These links do not expire, so you can bookmark your creation and share it at any time. Users that open a shared link will see the Playground with the shared code and preview.
## Deploy
You can deploy a Worker from the Playground. If you are already logged in, you can review the Worker before deploying. Otherwise, you will be taken through the first-time user onboarding flow before you can review and deploy.
Once deployed, your Worker will get its own unique URL and be available almost instantly on Cloudflare's global network. From here, you can add [Custom Domains](/workers/configuration/routing/custom-domains/), [storage resources](/workers/platform/storage-options/), and more.
---
# Changelog
URL: https://developers.cloudflare.com/workers-ai/changelog/
import { ProductReleaseNotes } from "~/components";
---
# Demos and architectures
URL: https://developers.cloudflare.com/workers-ai/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Workers AI can be used to build dynamic and performant services. The following demo applications and reference architectures showcase how to use Workers AI optimally within your architecture.
## Demos
Explore the following demo applications for Workers AI.
## Reference architectures
Explore the following reference architectures that use Workers AI:
---
# Glossary
URL: https://developers.cloudflare.com/workers-ai/glossary/
import { Glossary } from "~/components";
Review the definitions for terms used across Cloudflare's Workers AI documentation.
---
# Cloudflare Workers AI
URL: https://developers.cloudflare.com/workers-ai/
import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, Render, LinkButton, Flex } from "~/components"
Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.
Workers AI allows you to run AI models in a serverless way, without having to worry about scaling, maintaining, or paying for unused infrastructure. You can invoke models running on GPUs on Cloudflare's network from your own code — from [Workers](/workers/), [Pages](/pages/), or anywhere via [the Cloudflare API](/api/resources/ai/methods/run/).
Workers AI gives you access to:
- **50+ [open-source models](/workers-ai/models/)**, available as a part of our model catalog
- Serverless, **pay-for-what-you-use** [pricing model](/workers-ai/platform/pricing/)
- All as part of a **fully-featured developer platform**, including [AI Gateway](/ai-gateway/), [Vectorize](/vectorize/), [Workers](/workers/) and more...
Get started

Watch a Workers AI demo
***
## Features
Workers AI comes with a curated set of popular open-source models that enable you to do tasks such as image classification, text generation, object detection and more.
***
## Related products
Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.
Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Create full-stack applications that are instantly deployed to the Cloudflare global network.
Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Create new serverless SQL databases to query from your Workers and Pages projects.
A globally distributed coordination API with strongly consistent storage.
Create a global, low-latency, key-value data storage.
***
## More resources
Build and deploy your first Workers AI application.
Learn about Free and Paid plans.
Learn about Workers AI limits.
Learn how you can build and deploy ambitious AI applications to Cloudflare's global network.
Learn which storage option is best for your project.
Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.
---
# Privacy
URL: https://developers.cloudflare.com/workers-ai/privacy/
Cloudflare processes certain customer data in order to provide the Workers AI service, subject to our [Privacy Policy](https://www.cloudflare.com/privacypolicy/) and [Self-Serve Subscription Agreement](https://www.cloudflare.com/terms/) or [Enterprise Subscription Agreement](https://www.cloudflare.com/enterpriseterms/) (as applicable).
Cloudflare neither creates nor trains the AI models made available on Workers AI. The models constitute Third-Party Services and may be subject to open source or other license terms that apply between you and the model provider. Be sure to review the license terms applicable to each model (if any).
Your inputs (e.g., text prompts, image submissions, audio files, etc.), outputs (e.g., generated text/images, translations, etc.), embeddings, and training data constitute Customer Content.
For Workers AI:
* You own, and are responsible for, all of your Customer Content.
* Cloudflare does not make your Customer Content available to any other Cloudflare customer.
* Cloudflare does not use your Customer Content to (1) train any AI models made available on Workers AI or (2) improve any Cloudflare or third-party services, and would not do so unless we received your explicit consent.
* Your Customer Content for Workers AI may be stored by Cloudflare if you specifically use a storage service (e.g., R2, KV, DO, Vectorize, etc.) in conjunction with Workers AI.
---
# Errors
URL: https://developers.cloudflare.com/workers-ai/workers-ai-errors/
Below is a list of Workers AI errors.
| **Name** | **Internal Code** | **HTTP Code** | **Description** |
| ------------------------------------- | ----------------- | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| No such model | `5007` | `400` | No such model `${model}` or task |
| Invalid data | `5004` | `400` | Invalid data type for base64 input: `${type}` |
| Finetune missing required files | `3039` | `400` | Finetune is missing required files (`model.safetensors` and `config.json`) |
| Incomplete request | `3003` | `400` | Request is missing headers or body: `{what}` |
| Account not allowed for private model | `5018` | `403` | The account is not allowed to access this model |
| Model agreement | `5016` | `403` | User has not agreed to Llama3.2 model terms |
| Account blocked | `3023` | `403` | Service unavailable for account |
| Account not allowed for private model | `3041` | `403` | The account is not allowed to access this model |
| Deprecated SDK version | `5019` | `405` | Request trying to use deprecated SDK version |
| LoRa unsupported | `5005` | `405` | The model `${this.model}` does not support LoRa inference |
| Invalid model ID | `3042` | `404` | The model name is invalid |
| Request too large | `3006` | `413` | Request is too large |
| Timeout | `3007` | `408` | Request timeout |
| Aborted | `3008` | `408` | Request was aborted |
| Account limited | `3036` | `429` | You have used up your daily free allocation of 10,000 neurons. Please upgrade to Cloudflare's Workers Paid plan if you would like to continue usage. |
| Out of capacity | `3040` | `429` | No more data centers to forward the request to |
---
# CI/CD
URL: https://developers.cloudflare.com/workers/ci-cd/
You can set up continuous integration and continuous deployment (CI/CD) for your Workers by using either the integrated build system, [Workers Builds](#workers-builds), or using [external providers](#external-cicd) to optimize your development workflow.
## Why use CI/CD?
Using a CI/CD pipeline to deploy your Workers is a best practice because it:
- Automates the build and deployment process, removing the need for manual `wrangler deploy` commands.
- Ensures consistent builds and deployments across your team by using the same source control management (SCM) system.
- Reduces variability and errors by deploying in a uniform environment.
- Simplifies managing access to production credentials.
## Which CI/CD should I use?
Choose [Workers Builds](/workers/ci-cd/builds) if you want a fully integrated solution within Cloudflare's ecosystem that requires minimal setup and configuration for GitHub or GitLab users.
We recommend using [external CI/CD providers](/workers/ci-cd/external-cicd) if:
- You have a self-hosted instance of GitHub or GitLab, which is currently not supported by Workers Builds' [Git integration](/workers/ci-cd/builds/git-integration/)
- You are using a Git provider that is not GitHub or GitLab
## Workers Builds
[Workers Builds](/workers/ci-cd/builds) is Cloudflare's native CI/CD system that allows you to integrate with GitHub or GitLab to automatically deploy changes with each new push to a selected branch (e.g. `main`).

Ready to streamline your Workers deployments? Get started with [Workers Builds](/workers/ci-cd/builds/#get-started).
## External CI/CD
You can also choose to set up your CI/CD pipeline with an external provider.
- [GitHub Actions](/workers/ci-cd/external-cicd/github-actions/)
- [GitLab CI/CD](/workers/ci-cd/external-cicd/gitlab-cicd/)
---
# Compatibility dates
URL: https://developers.cloudflare.com/workers/configuration/compatibility-dates/
import { WranglerConfig } from "~/components";
Cloudflare regularly updates the Workers runtime. These updates apply to all Workers globally and should never cause a Worker that is already deployed to stop functioning. Sometimes, though, a change may be backwards-incompatible. In particular, there might be bugs in the runtime API that existing Workers inadvertently depend upon. To avoid breaking deployed Workers, Cloudflare implements such bug fixes as opt-ins: new Workers can opt into the fix, while existing Workers continue to see the buggy behavior.
The compatibility date and flags are how you, as a developer, opt into these runtime changes. [Compatibility flags](/workers/configuration/compatibility-flags) will often have a date in which they are enabled by default, and so, by specifying a `compatibility_date` for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date.
## Setting compatibility date
When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. When updating, you should refer to the [compatibility flags](/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) command.
There is no need to update your `compatibility_date` if you do not want to. The Workers runtime will support old compatibility dates forever. If, for some reason, Cloudflare finds it is necessary to make a change that will break live Workers, Cloudflare will actively contact affected developers. That said, Cloudflare aims to avoid this if at all possible.
However, even though you do not need to update the `compatibility_date` field, it is a good practice to do so for two reasons:
1. Sometimes, new features can only be made available to Workers that have a current `compatibility_date`. To access the latest features, you need to stay up-to-date.
2. Generally, other than the [compatibility flags](/workers/configuration/compatibility-flags) page, the Workers documentation may only describe the current `compatibility_date`, omitting information about historical behavior. If your Worker uses an old `compatibility_date`, you will need to continuously refer to the compatibility flags page in order to check if any of the APIs you are using have changed.
#### Via Wrangler
The compatibility date can be set in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/).
```toml
# Opt into backwards-incompatible changes through April 5, 2022.
compatibility_date = "2022-04-05"
```
#### Via the Cloudflare Dashboard
When a Worker is created through the Cloudflare Dashboard, the compatibility date is automatically set to the current date.
The compatibility date can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/).
#### Via the Cloudflare API
The compatibility date can be set when uploading a Worker using the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field.
If a compatibility date is not specified on upload via the API, it defaults to the oldest compatibility date, before any flags took effect (2021-11-02). When creating new Workers, it is highly recommended to set the compatibility date to the current date when uploading via the API.
---
# Compatibility flags
URL: https://developers.cloudflare.com/workers/configuration/compatibility-flags/
import { CompatibilityFlags, WranglerConfig, Render } from "~/components";
Compatibility flags enable specific features. They can be useful if you want to help the Workers team test upcoming changes that are not yet enabled by default, or if you need to hold back a change that your code depends on but still want to apply other compatibility changes.
Compatibility flags will often have a date in which they are enabled by default, and so, by specifying a [`compatibility_date`](/workers/configuration/compatibility-dates) for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date.
## Setting compatibility flags
You may provide a list of `compatibility_flags`, which enable or disable specific changes.
#### Via Wrangler
Compatibility flags can be set in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/).
This example enables the specific flag `formdata_parser_supports_files`, which is described [below](/workers/configuration/compatibility-flags/#formdata-parsing-supports-file). As of the specified date, `2021-09-14`, this particular flag was not yet enabled by default, but, by specifying it in `compatibility_flags`, we can enable it anyway. `compatibility_flags` can also be used to disable changes that became the default in the past.
```toml
# Opt into backwards-incompatible changes through September 14, 2021.
compatibility_date = "2021-09-14"
# Also opt into an upcoming fix to the FormData API.
compatibility_flags = [ "formdata_parser_supports_files" ]
```
#### Via the Cloudflare Dashboard
Compatibility flags can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/).
#### Via the Cloudflare API
Compatibility flags can be set when uploading a Worker using the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field.
## Node.js compatibility flag
:::note
[The `nodejs_compat` flag](/workers/runtime-apis/nodejs/) also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. The v2 flag improves runtime Node.js compatibility by bundling additional polyfills and globals into your Worker. However, this improvement increases bundle size.
If your compatibility date is 2024-09-22 or before and you want to enable v2, add the `nodejs_compat_v2` flag in addition to the `nodejs_compat` flag.
If your compatibility date is 2024-09-23 or later, but you want to disable v2 to avoid increasing your bundle size, add the `no_nodejs_compat_v2` flag in addition to the `nodejs_compat` flag.
:::
A [growing subset](/workers/runtime-apis/nodejs/) of Node.js APIs are available directly as [Runtime APIs](/workers/runtime-apis/nodejs), with no need to add polyfills to your own code. To enable these APIs in your Worker, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](/workers/wrangler/configuration/):
```toml title="wrangler.toml"
compatibility_flags = [ "nodejs_compat" ]
```
As additional Node.js APIs are added, they will be made available under the `nodejs_compat` compatibility flag. Unlike most other compatibility flags, we do not expect `nodejs_compat` to become enabled by default at a future date.
The Node.js `AsyncLocalStorage` API is a particularly useful feature for Workers. To enable only the `AsyncLocalStorage` API, use the `nodejs_als` compatibility flag.
```toml title="wrangler.toml"
compatibility_flags = [ "nodejs_als" ]
```
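As a sketch of why `AsyncLocalStorage` is useful, it lets you carry per-request context (such as a request ID) through a chain of async calls without passing it as a parameter everywhere. The example below uses the standard Node.js API from `node:async_hooks`; the function names are hypothetical, and inside a Worker this import requires the `nodejs_als` or `nodejs_compat` flag:

```js
import { AsyncLocalStorage } from "node:async_hooks";

// Store for per-request context.
const requestContext = new AsyncLocalStorage();

function logLine(message) {
	// Reads the store set by the nearest enclosing run() call,
	// with no requestId parameter threaded through.
	const { requestId } = requestContext.getStore();
	return `[${requestId}] ${message}`;
}

async function handleRequest(requestId) {
	return requestContext.run({ requestId }, async () => {
		// Awaited work inside run() still sees the same store.
		await Promise.resolve();
		return logLine("handled");
	});
}
```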
## Flags history
Newest flags are listed first.
## Experimental flags
These flags can be enabled via `compatibility_flags`, but are not yet scheduled to become default on any particular date.
---
# Cron Triggers
URL: https://developers.cloudflare.com/workers/configuration/cron-triggers/
import { Render, WranglerConfig } from "~/components";
## Background
Cron Triggers allow you to run a Worker on a schedule by mapping a cron expression to its [`scheduled()` handler](/workers/runtime-apis/handlers/scheduled/).
Cron Triggers are ideal for running periodic jobs, such as maintenance tasks or calls to third-party APIs to collect up-to-date data. Workers scheduled by Cron Triggers will run on underutilized machines to make the best use of Cloudflare's capacity and route traffic efficiently.
:::note
Cron Triggers can also be combined with [Workflows](/workflows/) to trigger multi-step, long-running tasks. You can [bind to a Workflow](/workflows/build/workers-api/) directly from your Cron Trigger to execute a Workflow on a schedule.
:::
Cron Triggers execute on UTC time.
## Add a Cron Trigger
### 1. Define a scheduled event listener
To respond to a Cron Trigger, you must add a [`"scheduled"` handler](/workers/runtime-apis/handlers/scheduled/) to your Worker.
Refer to the following additional examples to write your code:
- [Setting Cron Triggers](/workers/examples/cron-trigger/)
- [Multiple Cron Triggers](/workers/examples/multiple-cron-triggers/)
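As a minimal sketch of the handler's shape (the scheduled work here is a stand-in promise, not a real task), the controller passed to `scheduled()` carries a `cron` property with the expression that matched, which lets one Worker branch across multiple Cron Triggers:

```javascript
const worker = {
	async scheduled(controller, env, ctx) {
		// controller.cron is the cron expression that fired this invocation.
		switch (controller.cron) {
			case "*/3 * * * *":
				// waitUntil keeps the Worker alive until the task promise settles.
				ctx.waitUntil(Promise.resolve("ran frequent job"));
				break;
			default:
				ctx.waitUntil(Promise.resolve("ran default job"));
		}
	},
};

export default worker;
```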
### 2. Update configuration
After you have updated your Worker code to include a `"scheduled"` event, you must update your Worker project configuration.
#### Via the [Wrangler configuration file](/workers/wrangler/configuration/)
If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](/workers/wrangler/configuration/).
Refer to the example below for a Cron Triggers configuration:
```toml
[triggers]
# Schedule cron triggers:
# - At every 3rd minute
# - At 15:00 (UTC) on first day of the month
# - At 23:59 (UTC) on the last weekday of the month
crons = [ "*/3 * * * *", "0 15 1 * *", "59 23 LW * *" ]
```
You can also set a different Cron Trigger for each [environment](/workers/wrangler/environments/) in your [Wrangler configuration file](/workers/wrangler/configuration/). You need to put the `[triggers]` table under your chosen environment. For example:
```toml
[env.dev.triggers]
crons = ["0 * * * *"]
```
#### Via the dashboard
To add Cron Triggers in the Cloudflare dashboard:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings** > **Triggers** > **Cron Triggers**.
## Supported cron expressions
Cloudflare supports cron expressions with five fields, along with most [Quartz scheduler](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html#introduction)-like cron syntax extensions:
| Field | Values | Characters |
| ------------- | ------------------------------------------------------------------ | ------------ |
| Minute | 0-59 | \* , - / |
| Hours | 0-23 | \* , - / |
| Days of Month | 1-31 | \* , - / L W |
| Months | 1-12, case-insensitive 3-letter abbreviations ("JAN", "aug", etc.) | \* , - / |
| Weekdays | 1-7, case-insensitive 3-letter abbreviations ("MON", "fri", etc.) | \* , - / L # |
### Examples
Some common time intervals that may be useful for setting up your Cron Trigger:
- `* * * * *`
- At every minute
- `*/30 * * * *`
- At every 30th minute
- `45 * * * *`
- On the 45th minute of every hour
- `0 17 * * sun` or `0 17 * * 1`
- 17:00 (UTC) on Sunday
- `10 7 * * mon-fri` or `10 7 * * 2-6`
- 07:10 (UTC) on weekdays
- `0 15 1 * *`
- 15:00 (UTC) on first day of the month
- `0 18 * * 6L` or `0 18 * * friL`
- 18:00 (UTC) on the last Friday of the month
- `59 23 LW * *`
- 23:59 (UTC) on the last weekday of the month
## Test Cron Triggers
The recommended way of testing Cron Triggers is using Wrangler.
:::note[Cron Trigger changes take time to propagate.]
Changes such as adding a new Cron Trigger, updating an old Cron Trigger, or deleting a Cron Trigger may take several minutes (up to 15 minutes) to propagate to the Cloudflare global network.
:::
Test Cron Triggers using Wrangler by passing the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This exposes a `/__scheduled` route which can be used to trigger the scheduled handler with an HTTP request. To simulate different cron patterns, pass a `cron` query parameter.
```sh
npx wrangler dev --test-scheduled
curl "http://localhost:8787/__scheduled?cron=*+*+*+*+*"
```
## View past events
To view the execution history of Cron Triggers, view **Cron Events**:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, go to **Workers & Pages**.
3. In **Overview**, select your **Worker**.
4. Select **Settings**.
5. Under **Trigger Events**, select **View events**.
Cron Events stores the 100 most recent invocations of the Cron scheduled event. [Workers Logs](/workers/observability/logs/workers-logs) also records invocation logs for the Cron Trigger with a longer retention period and a filter & query interface. If you are interested in an API to access Cron Events, use Cloudflare's [GraphQL Analytics API](/analytics/graphql-api).
:::note
It can take up to 30 minutes before events are displayed in **Past Cron Events** when creating a new Worker or changing a Worker's name.
:::
Refer to [Metrics and Analytics](/workers/observability/metrics-and-analytics/) for more information.
## Remove a Cron Trigger
### Via the dashboard
To delete a Cron Trigger on a deployed Worker via the dashboard:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages**, and select your Worker.
3. Go to **Triggers** > select the three dot icon next to the Cron Trigger you want to remove > **Delete**.
:::note
You can only delete Cron Triggers using the Cloudflare dashboard (and not through your Wrangler file).
:::
## Limits
Refer to [Limits](/workers/platform/limits/) to track the maximum number of Cron Triggers per Worker.
## Green Compute
With Green Compute enabled, your Cron Triggers will only run on Cloudflare points of presence located in data centers powered purely by renewable energy. Organizations may claim that they are powered by 100 percent renewable energy if they have procured sufficient renewable energy to account for their overall energy use.
Renewable energy can be purchased in a number of ways, including through on-site generation (wind turbines, solar panels), directly from renewable energy producers through contractual agreements called Power Purchase Agreements (PPA), or in the form of Renewable Energy Credits (REC, IRECs, GoOs) from an energy credit market.
Green Compute can be configured at the account level:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In the **Account details** section, find **Compute Setting**.
4. Select **Change**.
5. Select **Green Compute**.
6. Select **Confirm**.
## Related resources
- [Triggers](/workers/wrangler/configuration/#triggers) - Review Wrangler configuration file syntax for Cron Triggers.
- Learn how to access Cron Triggers in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.
---
# Environment variables
URL: https://developers.cloudflare.com/workers/configuration/environment-variables/
import { Render, TabItem, Tabs, WranglerConfig } from "~/components";
## Background
Environment variables are a type of binding that allow you to attach text strings or JSON values to your Worker. Environment variables are available on the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](/workers/runtime-apis/handlers/fetch/).
Text strings and JSON values are not encrypted and are useful for storing application configuration.
## Add environment variables via Wrangler
Text and JSON values are defined via the `[vars]` configuration in your Wrangler file. In the following example, `API_HOST` and `API_ACCOUNT_ID` are text values and `SERVICE_X_DATA` is a JSON value.
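A configuration along those lines (the values shown are illustrative) might look like:

```toml title="wrangler.toml"
[vars]
API_HOST = "example.com"
API_ACCOUNT_ID = "example_user"
SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 }
```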
Refer to the following example on how to access the `API_HOST` environment variable in your Worker code:
```js
export default {
	async fetch(request, env, ctx) {
		return new Response(`API host: ${env.API_HOST}`);
	},
};
```
```ts
export interface Env {
	API_HOST: string;
}

export default {
	async fetch(request, env, ctx): Promise<Response> {
		return new Response(`API host: ${env.API_HOST}`);
	},
} satisfies ExportedHandler<Env>;
```
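JSON values arrive on `env` already parsed, so their properties can be read directly. The sketch below assumes a `SERVICE_X_DATA` JSON value with a `URL` property; that shape is illustrative, not prescribed:

```javascript
// Sketch: reading a JSON environment variable. SERVICE_X_DATA and its
// URL property are illustrative assumptions for this example.
const worker = {
	async fetch(request, env) {
		return new Response(`Service X URL: ${env.SERVICE_X_DATA.URL}`);
	},
};

export default worker;
```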
### Configuring different environments in Wrangler
[Environments in Wrangler](/workers/wrangler/environments) let you specify different configurations for the same Worker, including different values for `vars` in each environment.
As `vars` is a [non-inheritable key](/workers/wrangler/configuration/#non-inheritable-keys), they are not inherited by environments and must be specified for each environment.
The example below sets up two environments, `staging` and `production`, with different values for `API_HOST`.
```toml
name = "my-worker-dev"
# top level environment
[vars]
API_HOST = "api.example.com"
[env.staging.vars]
API_HOST = "staging.example.com"
[env.production.vars]
API_HOST = "production.example.com"
```
To run Wrangler commands in specific environments, you can pass in the `--env` or `-e` flag. For example, you can develop the Worker in an environment called `staging` by running `npx wrangler dev --env staging`, and deploy it with `npx wrangler deploy --env staging`.
Learn about [environments in Wrangler](/workers/wrangler/environments).
## Add environment variables via the dashboard
To add environment variables via the dashboard:
1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker.
4. Select **Settings**.
5. Under **Variables and Secrets**, select **Add**.
6. Select a **Type**, input a **Variable name**, and input its **Value**. This variable will be made available to your Worker.
7. (Optional) To add multiple environment variables, select **Add variable**.
8. Select **Deploy** to implement your changes.
:::caution[Plaintext strings and secrets]
Select the **Secret** type if your environment variable is a [secret](/workers/configuration/secrets/).
:::
## Related resources
- Migrating environment variables from [Service Worker format to ES modules syntax](/workers/reference/migrate-to-module-workers/#environment-variables).
---
# Configuration
URL: https://developers.cloudflare.com/workers/configuration/
import { DirectoryListing } from "~/components";
Configure your Worker project with various features and customizations.
---
# Multipart upload metadata
URL: https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/
import { Type, MetaInfo } from "~/components";
If you're using the [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/) or [Version Upload API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) directly, `multipart/form-data` uploads require you to specify a `metadata` part. This metadata defines the Worker's configuration in JSON format, analogous to the [wrangler.toml file](/workers/wrangler/configuration/).
## Sample `metadata`
```json
{
  "main_module": "main.js",
  "bindings": [
    {
      "type": "plain_text",
      "name": "MESSAGE",
      "text": "Hello, world!"
    }
  ],
  "compatibility_date": "2021-09-14"
}
```
## Attributes
The following attributes are configurable at the top-level.
:::note
At a minimum, the `main_module` key is required to upload a Worker.
:::
* `main_module`
* The part name that contains the module entry point of the Worker that will be executed. For example, `main.js`.
* `assets`
* [Asset](/workers/static-assets/) configuration for a Worker.
* `config`
* [html_handling](/workers/static-assets/routing/#1-html_handling) determines the redirects and rewrites of requests for HTML content.
* [not_found_handling](/workers/static-assets/routing/#2-not_found_handling) determines the response when a request does not match a static asset, and there is no Worker script.
* `jwt` field provides a token authorizing assets to be attached to a Worker.
* `keep_assets`
* Specifies whether assets should be retained from a previously uploaded Worker version; used in lieu of providing a completion token.
* `bindings` array\[object] optional
* [Bindings](#bindings) to expose in the Worker.
* `placement`
* [Smart placement](/workers/configuration/smart-placement/) object for the Worker.
* `mode` field only supports `smart` for automatic placement.
* `compatibility_date`
* [Compatibility Date](/workers/configuration/compatibility-dates/#setting-compatibility-date) indicating targeted support in the Workers runtime. Backwards-incompatible fixes to the runtime following this date will not affect this Worker. Setting a `compatibility_date` is highly recommended; otherwise, uploads via the API default to the oldest compatibility date, before any flags took effect (2021-11-02).
* `compatibility_flags` array\[string] optional
* [Compatibility Flags](/workers/configuration/compatibility-flags/#setting-compatibility-flags) that enable or disable certain features in the Workers runtime. Used to enable upcoming features or opt in or out of specific changes not included in a `compatibility_date`.
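To make the upload shape concrete, the following sketch assembles the `multipart/form-data` body with a `metadata` part and a module part. The metadata values are illustrative; the resulting form would then be sent to the Script Upload API with an `Authorization` header:

```javascript
// Sketch: building the multipart body for a script upload (illustrative values).
const metadata = {
	main_module: "main.js",
	compatibility_date: "2021-09-14",
	bindings: [{ type: "plain_text", name: "MESSAGE", text: "Hello, world!" }],
};

const form = new FormData();
// The metadata part carries the Worker configuration as JSON.
form.append(
	"metadata",
	new Blob([JSON.stringify(metadata)], { type: "application/json" }),
);
// The part name "main.js" matches the main_module value in metadata.
form.append(
	"main.js",
	new Blob(['export default { fetch() { return new Response("ok"); } };'], {
		type: "application/javascript+module",
	}),
	"main.js",
);
```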
## Additional attributes: [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/)
For [immediately deployed uploads](/workers/configuration/versions-and-deployments/#upload-a-new-version-and-deploy-it-immediately), the following **additional** attributes are configurable at the top-level.
:::note
These attributes are **not available** for version uploads.
:::
* `migrations` array\[object] optional
* [Durable Objects migrations](/durable-objects/reference/durable-objects-migrations/) to apply.
* `logpush`
* Whether [Logpush](/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/#logpush) is turned on for the Worker.
* `tail_consumers` array\[object] optional
* [Tail Workers](/workers/observability/logs/tail-workers/) that will consume logs from the attached Worker.
* `tags` array\[string] optional
* List of strings to use as tags for this Worker.
## Additional attributes: [Version Upload API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/)
For [version uploads](/workers/configuration/versions-and-deployments/#upload-a-new-version-to-be-gradually-deployed-or-deployed-at-a-later-time), the following **additional** attributes are configurable at the top-level.
:::note
These attributes are **not available** for immediately deployed uploads.
:::
* `annotations`
* Annotations object specific to the Worker version.
* `workers/message` specifies a custom message for the version.
* `workers/tag` specifies a custom identifier for the version.
## Bindings
Workers can interact with resources on the Cloudflare Developer Platform using [bindings](/workers/runtime-apis/bindings/). Refer to the JSON example below that shows how to add bindings in the `metadata` part.
```json
{
  "bindings": [
    {
      "type": "ai",
      "name": "<BINDING_NAME>"
    },
    {
      "type": "analytics_engine",
      "name": "<BINDING_NAME>",
      "dataset": "<DATASET_NAME>"
    },
    {
      "type": "assets",
      "name": "<BINDING_NAME>"
    },
    {
      "type": "browser_rendering",
      "name": "<BINDING_NAME>"
    },
    {
      "type": "d1",
      "name": "<BINDING_NAME>",
      "id": "<DATABASE_ID>"
    },
    {
      "type": "durable_object_namespace",
      "name": "<BINDING_NAME>",
      "class_name": "<CLASS_NAME>"
    },
    {
      "type": "hyperdrive",
      "name": "<BINDING_NAME>",
      "id": "<CONFIG_ID>"
    },
    {
      "type": "kv_namespace",
      "name": "<BINDING_NAME>",
      "namespace_id": "<NAMESPACE_ID>"
    },
    {
      "type": "mtls_certificate",
      "name": "<BINDING_NAME>",
      "certificate_id": "<CERTIFICATE_ID>"
    },
    {
      "type": "plain_text",
      "name": "<BINDING_NAME>",
      "text": "<TEXT_VALUE>"
    },
    {
      "type": "queue",
      "name": "<BINDING_NAME>",
      "queue_name": "<QUEUE_NAME>"
    },
    {
      "type": "r2_bucket",
      "name": "<BINDING_NAME>",
      "bucket_name": "<BUCKET_NAME>"
    },
    {
      "type": "secret_text",
      "name": "<BINDING_NAME>",
      "text": "<SECRET_VALUE>"
    },
    {
      "type": "service",
      "name": "<BINDING_NAME>",
      "service": "<WORKER_NAME>",
      "environment": "production"
    },
    {
      "type": "tail_consumer",
      "service": "<WORKER_NAME>"
    },
    {
      "type": "vectorize",
      "name": "<BINDING_NAME>",
      "index_name": "<INDEX_NAME>"
    },
    {
      "type": "version_metadata",
      "name": "<BINDING_NAME>"
    }
  ]
}
```
---
# Preview URLs
URL: https://developers.cloudflare.com/workers/configuration/previews/
import { Render, WranglerConfig } from "~/components";
Preview URLs allow you to preview new versions of your Worker without deploying it to production.
Every time you create a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker, a unique preview URL is generated. Preview URLs take the format: `<VERSION_PREFIX>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`. New [versions](/workers/configuration/versions-and-deployments/#versions) of a Worker are created on [`wrangler deploy`](/workers/wrangler/commands/#deploy), [`wrangler versions upload`](/workers/wrangler/commands/#upload), or when you make edits on the Cloudflare dashboard. By default, preview URLs are enabled and available publicly.
Preview URLs can be:
- Integrated into CI/CD pipelines, allowing automatic generation of preview environments for every pull request.
- Used for collaboration between teams to test code changes in a live environment and verify updates.
- Used to test new API endpoints, validate data formats, and ensure backward compatibility with existing services.
When testing zone level performance or security features for a version, we recommend using [version overrides](/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) so that your zone's performance and security settings apply.
:::note
Preview URLs are only available for Worker versions uploaded after 2024-09-25.
Minimum required Wrangler version: 3.74.0. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](/workers/wrangler/install-and-update/).
:::
## View preview URLs using wrangler
The [`wrangler versions upload`](/workers/wrangler/commands/#upload) command uploads a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded.
## View preview URLs on the Workers dashboard
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your project.
2. Go to the **Deployments** tab, and find the version you would like to view.
## Manage access to Preview URLs
By default, preview URLs are enabled and available publicly. You can use [Cloudflare Access](/cloudflare-one/policies/access/) to require visitors to authenticate before accessing preview URLs. You can limit access to yourself, your teammates, your organization, or anyone else you specify in your [access policy](/cloudflare-one/policies/access).
To limit your preview URLs to authorized emails only:
1. Log in to the [Cloudflare Access dashboard](https://one.dash.cloudflare.com/?to=/:account/access/apps).
2. Select your account.
3. Add an application.
4. Select **Self Hosted**.
5. Name your application (for example, "my-worker") and add your `workers.dev` subdomain as the **Application domain**.
For example, to secure preview URLs for a Worker running on `my-worker.my-subdomain.workers.dev`, use:
- Subdomain: `*-my-worker`
- Domain: `my-subdomain.workers.dev`
:::note
You must press enter after you input your Application domain for it to save. You will see a "Zone is not associated with the current account" warning that you may ignore.
:::
6. Go to the next page.
7. Add a name for your access policy (for example, "Allow employees access to preview URLs for my-worker").
8. In the **Configure rules** section create a new rule with the **Emails** selector, or any other attributes which you wish to gate access to previews with.
9. Enter the emails you want to authorize. View [access policies](/cloudflare-one/policies/access/#selectors) to learn about configuring alternate rules.
10. Go to the next page.
11. Add application.
## Disabling Preview URLs
### Disabling Preview URLs in the dashboard
To disable Preview URLs for a Worker:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages** and in **Overview**, select your Worker.
3. Go to **Settings** > **Domains & Routes**.
4. Under **Preview URLs**, select **Disable**.
5. Confirm that you want to disable Preview URLs.
### Disabling Preview URLs in the [Wrangler configuration file](/workers/wrangler/configuration/)
:::note
Wrangler 3.91.0 or higher is required to use this feature.
:::
To disable Preview URLs for a Worker, include the following in your Worker's Wrangler file:
```toml
preview_urls = false
```
When you redeploy your Worker with this change, Preview URLs will be disabled.
:::caution
If you disable Preview URLs in the Cloudflare dashboard but do not update your Worker's Wrangler file with `preview_urls = false`, then Preview URLs will be re-enabled the next time you deploy your Worker with Wrangler.
:::
## Limitations
- Preview URLs are not generated for Workers that implement a [Durable Object](/durable-objects/).
- Preview URLs are not currently generated for [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) [user Workers](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#user-workers). This is a temporary limitation; we are working to remove it.
- You cannot currently configure Preview URLs to run on a subdomain other than [`workers.dev`](/workers/configuration/routing/workers-dev/).
---
# Secrets
URL: https://developers.cloudflare.com/workers/configuration/secrets/
import { Render } from "~/components";
## Background
Secrets are a type of binding that allow you to attach encrypted text values to your Worker. You cannot see secrets after you set them and can only access secrets via [Wrangler](/workers/wrangler/commands/#secret) or programmatically via the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters). Secrets are used for storing sensitive information like API keys and auth tokens. Secrets are available on the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](/workers/runtime-apis/handlers/fetch/).
## Local Development with Secrets
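When developing locally with `wrangler dev`, secrets are typically supplied through a `.dev.vars` file in the root of your project, using dotenv-style syntax (the variable name below is illustrative):

```ini
SECRET_KEY="my-local-secret-value"
```

Values in `.dev.vars` are used only for local development and should not be committed to source control.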
## Secrets on deployed Workers
### Adding secrets to your project
#### Via Wrangler
Secrets can be added through [`wrangler secret put`](/workers/wrangler/commands/#secret) or [`wrangler versions secret put`](/workers/wrangler/commands/#secret-put) commands.
`wrangler secret put` creates a new version of the Worker and deploys it immediately.
```sh
npx wrangler secret put <KEY>
```
If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), use the `wrangler versions secret put` command instead. This only creates a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).
:::note
Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag.
:::
```sh
npx wrangler versions secret put <KEY>
```
#### Via the dashboard
To add a secret via the dashboard:
1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings**.
4. Under **Variables and Secrets**, select **Add**.
5. Select the type **Secret**, input a **Variable name**, and input its **Value**. This secret will be made available to your Worker but the value will be hidden in Wrangler and the dashboard.
6. (Optional) To add more secrets, select **Add variable**.
7. Select **Deploy** to implement your changes.
### Delete secrets from your project
#### Via Wrangler
Secrets can be deleted through [`wrangler secret delete`](/workers/wrangler/commands/#delete-2) or [`wrangler versions secret delete`](/workers/wrangler/commands/#secret-delete) commands.
`wrangler secret delete` creates a new version of the Worker and deploys it immediately.
```sh
npx wrangler secret delete <KEY>
```
If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), use the `wrangler versions secret delete` command instead. This only creates a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).
```sh
npx wrangler versions secret delete <KEY>
```
#### Via the dashboard
To delete a secret from your Worker project via the dashboard:
1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings**.
4. Under **Variables and Secrets**, select **Edit**.
5. In the **Edit** drawer, select **X** next to the secret you want to delete.
6. Select **Deploy** to implement your changes.
7. (Optional) Instead of using the edit drawer, you can click the delete icon next to the secret.
## Related resources
- [Wrangler secret commands](/workers/wrangler/commands/#secret) - Review the Wrangler commands to create, delete and list secrets.
---
# Smart Placement
URL: https://developers.cloudflare.com/workers/configuration/smart-placement/
import { WranglerConfig } from "~/components";
By default, [Workers](/workers/) and [Pages Functions](/pages/functions/) are invoked in a data center closest to where the request was received. If you are running back-end logic in a Worker, it may be more performant to run that Worker closer to your back-end infrastructure rather than the end user. Smart Placement automatically places your workloads in an optimal location that minimizes latency and speeds up your applications.
## Background
The following example demonstrates how moving your Worker close to your back-end services could decrease application latency:
You have a user in Sydney, Australia who is accessing an application running on Workers. This application makes multiple round trips to a database located in Frankfurt, Germany in order to serve the user’s request.

The issue is the time that it takes the Worker to perform multiple round trips to the database. Instead of the request being processed close to the user, the Cloudflare network, with Smart Placement enabled, would process the request in a data center closest to the database.

## Understand how Smart Placement works
Smart Placement is enabled on a per-Worker basis. Once enabled, Smart Placement analyzes the [request duration](/workers/observability/metrics-and-analytics/#request-duration) of the Worker in different Cloudflare locations around the world on a regular basis. Smart Placement decides where to run the Worker by comparing the estimated request duration in the location closest to where the request was received (the default location where the Worker would run) to a set of candidate locations around the world. For each candidate location, Smart Placement considers the performance of the Worker in that location as well as the network latency added by forwarding the request to that location. If the estimated request duration in the best candidate location is significantly faster than the location where the request was received, the request will be forwarded to that candidate location. Otherwise, the Worker will run in the default location closest to where the request was received.
Smart Placement only considers candidate locations where the Worker has previously run, since the estimated request duration in each candidate location is based on historical data from the Worker running in that location. This means that Smart Placement cannot run the Worker in a location that it does not normally receive traffic from.
Smart Placement only affects the execution of [fetch event handlers](/workers/runtime-apis/handlers/fetch/). Smart Placement does not affect the execution of [RPC methods](/workers/runtime-apis/rpc/) or [named entrypoints](/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints). Workers without a fetch event handler will be ignored by Smart Placement. For Workers with both fetch and non-fetch event handlers, Smart Placement will only affect the execution of the fetch event handler.
Similarly, Smart Placement will not affect where [static assets](/workers/static-assets/) are served from. Static assets will continue to be served from the location nearest to the incoming request. If a Worker is invoked and your code retrieves assets via the [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), then assets will be served from the location that your Worker runs in.
## Enable Smart Placement
Smart Placement is available to users on all Workers plans.
### Enable Smart Placement via Wrangler
To enable Smart Placement via Wrangler:
1. Make sure that you have `wrangler@2.20.0` or later [installed](/workers/wrangler/install-and-update/).
2. Add the following to your Worker project's Wrangler file:
```toml
[placement]
mode = "smart"
```
3. Wait for Smart Placement to analyze your Worker. This process may take up to 15 minutes.
4. View your Worker's [request duration analytics](/workers/observability/metrics-and-analytics/#request-duration).
### Enable Smart Placement via the dashboard
To enable Smart Placement via the dashboard:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In **Account Home**, select **Workers & Pages**.
3. In **Overview**, select your Worker.
4. Select **Settings** > **General**.
5. Under **Placement**, choose **Smart**.
6. Wait for Smart Placement to analyze your Worker. Smart Placement requires consistent traffic to the Worker from multiple locations around the world to make a placement decision. The analysis process may take up to 15 minutes.
7. View your Worker's [request duration analytics](/workers/observability/metrics-and-analytics/#request-duration)
## Observability
### Placement Status
A Worker's metadata contains details about a Worker's placement status. Query your Worker's placement status through the following Workers API endpoint:
```bash
curl -X GET https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/workers/services/{WORKER_NAME} \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Content-Type: application/json" | jq .
```
Possible placement states include:
- _(not present)_: The Worker has not been analyzed for Smart Placement yet. The Worker will always run in the default Cloudflare location closest to where the request was received.
- `SUCCESS`: The Worker was successfully analyzed and will be optimized by Smart Placement. The Worker will run in the Cloudflare location that minimizes expected request duration, which may be the default location closest to where the request was received or may be a faster location elsewhere in the world.
- `INSUFFICIENT_INVOCATIONS`: The Worker has not received enough requests to make a placement decision. Smart Placement requires consistent traffic to the Worker from multiple locations around the world. The Worker will always run in the default Cloudflare location closest to where the request was received.
- `UNSUPPORTED_APPLICATION`: Smart Placement began optimizing the Worker and measured the results, which showed that Smart Placement made the Worker slower. In response, Smart Placement reverted the placement decision. The Worker will always run in the default Cloudflare location closest to where the request was received, and Smart Placement will not analyze the Worker again until it is redeployed. This state is rare and accounts for less than 1% of Workers with Smart Placement enabled.
### Request Duration Analytics
Once Smart Placement is enabled, data about request duration gets collected. Request duration is measured at the data center closest to the end user.
By default, one percent (1%) of requests are not routed with Smart Placement. These requests serve as a baseline to compare to.
### `cf-placement` header
Once Smart Placement is enabled, Cloudflare adds a `cf-placement` header to all requests. This can be used to check whether a request has been routed with Smart Placement and where the Worker is processing the request (which is shown as the nearest airport code to the data center).
For example, the `cf-placement: remote-LHR` header's `remote` value indicates that the request was routed using Smart Placement to a Cloudflare data center near London. The `cf-placement: local-EWR` header's `local` value indicates that the request was not routed using Smart Placement and the Worker was invoked in a data center closest to where the request was received, close to Newark Liberty International Airport (EWR).
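A header value like the ones above can be split into its two parts with a small helper, for example:

```js
// Parse a `cf-placement` header value such as "remote-LHR" or "local-EWR"
// into the routing mode and the nearest-airport code.
function parsePlacementHeader(value) {
  const match = /^(remote|local)-([A-Z]{3})$/.exec(value ?? "");
  if (!match) return null;
  return {
    smartPlacement: match[1] === "remote", // routed by Smart Placement?
    airport: match[2], // IATA code nearest the data center that ran the Worker
  };
}
```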
:::caution[Beta use only]
We may remove the `cf-placement` header before Smart Placement enters general availability.
:::
## Best practices
If you are building full-stack applications on Workers, we recommend splitting up the front-end and back-end logic into different Workers and using [Service Bindings](/workers/runtime-apis/bindings/service-bindings/) to connect your front-end logic and back-end logic Workers.

Enabling Smart Placement on your back-end Worker will invoke it close to your back-end service, while the front-end Worker serves requests close to the user. This architecture maintains fast, reactive front-ends while also improving latency when the back-end Worker is called.
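A minimal sketch of this split is shown below. The binding name `BACKEND` and the `/api/` path prefix are illustrative, and in a real project each object would be the default export of its own Worker:

```js
// Front-end Worker: serves UI close to the user and forwards API calls
// to the back-end Worker over a Service Binding (env.BACKEND).
const frontend = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      // Service Binding call: no public round trip. With Smart Placement
      // enabled, the back-end Worker runs close to its upstream services.
      return env.BACKEND.fetch(request);
    }
    return new Response("<h1>Hello from the front end</h1>", {
      headers: { "content-type": "text/html" },
    });
  },
};

// Back-end Worker: would live in its own project with Smart Placement enabled.
const backend = {
  async fetch(request) {
    return Response.json({ ok: true, path: new URL(request.url).pathname });
  },
};
```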
## Give feedback on Smart Placement
Smart Placement is in beta. To share your thoughts and experience with Smart Placement, join the [Cloudflare Developer Discord](https://discord.cloudflare.com).
---
# Page Rules
URL: https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/
Page Rules trigger one or more actions whenever a request matches one of the URL patterns you define. Refer to [Page Rules](/rules/page-rules/) to learn more about configuring Page Rules.
## Page Rules with Workers
Cloudflare acts as a [reverse proxy](https://www.cloudflare.com/learning/what-is-cloudflare/) to provide services, like Page Rules, to Internet properties. Your application's traffic will pass through a Cloudflare data center that is closest to the visitor. There are hundreds of these around the world, each of which are capable of running services like Workers and Page Rules. If your application is built on Workers and/or Pages, the [Cloudflare global network](https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/) acts as your origin server and responds to requests directly from the Cloudflare global network.
When using Page Rules with Workers, the following workflow is applied.
1. Request arrives at Cloudflare data center.
2. Cloudflare decides if this request is a Worker route. Because this is a Worker route, Cloudflare evaluates and disables a number of features, including some that would be set by Page Rules.
3. Page Rules run as part of normal request processing with some features now disabled.
4. Worker executes.
5. Worker makes a same-zone or other-zone subrequest. Because this is a Worker route, Cloudflare disables a number of features, including some that would be set by Page Rules.
Page Rules are evaluated both at the client-to-Worker request stage (step 2) and the Worker subrequest stage (step 5).
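The two evaluation points matter because a single Worker invocation can trigger both. In the sketch below (hostnames are illustrative), the incoming request is evaluated at step 2, and each `fetch()` subrequest is evaluated again at step 5, where the same-zone and other-zone behaviors in the tables below apply:

```js
// A Worker that makes both a same-zone and an other-zone subrequest.
// Page Rules are evaluated on the incoming request and again on each
// outgoing fetch(), per the workflow above. Hostnames are placeholders.
const worker = {
  async fetch(request) {
    // Same-zone subrequest: an orange-clouded hostname in the Worker's zone.
    const sameZone = await fetch("https://shop.example.com/inventory");
    // Other-zone subrequest: a hostname outside the Worker's zone.
    const otherZone = await fetch("https://api.example.org/prices");
    return new Response(`${sameZone.status} / ${otherZone.status}`);
  },
};
```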
If you are experiencing Page Rule errors when running Workers, contact your Cloudflare account team or [Cloudflare Support](/support/contacting-cloudflare-support/).
## Affected Page Rules
The following Page Rules may not work as expected when an incoming request is matched to a Worker route:
* Always Online
* [Always Use HTTPS](/workers/configuration/workers-with-page-rules/#always-use-https)
* [Automatic HTTPS Rewrites](/workers/configuration/workers-with-page-rules/#automatic-https-rewrites)
* [Browser Cache TTL](/workers/configuration/workers-with-page-rules/#browser-cache-ttl)
* [Browser Integrity Check](/workers/configuration/workers-with-page-rules/#browser-integrity-check)
* [Cache Deception Armor](/workers/configuration/workers-with-page-rules/#cache-deception-armor)
* [Cache Level](/workers/configuration/workers-with-page-rules/#cache-level)
* Disable Apps
* [Disable Zaraz](/workers/configuration/workers-with-page-rules/#disable-zaraz)
* [Edge Cache TTL](/workers/configuration/workers-with-page-rules/#edge-cache-ttl)
* [Email Obfuscation](/workers/configuration/workers-with-page-rules/#email-obfuscation)
* [Forwarding URL](/workers/configuration/workers-with-page-rules/#forwarding-url)
* Host Header Override
* [IP Geolocation Header](/workers/configuration/workers-with-page-rules/#ip-geolocation-header)
* Mirage
* [Origin Cache Control](/workers/configuration/workers-with-page-rules/#origin-cache-control)
* [Rocket Loader](/workers/configuration/workers-with-page-rules/#rocket-loader)
* [Security Level](/workers/configuration/workers-with-page-rules/#security-level)
* [SSL](/workers/configuration/workers-with-page-rules/#ssl)
This is because the default setting of these Page Rules will be disabled when Cloudflare recognizes that the request is headed to a Worker.
:::caution[Testing]
Due to ongoing changes to the Workers runtime, detailed documentation on how these rules will be affected is updated following testing.
:::
To learn what these Page Rules do, refer to [Page Rules](/rules/page-rules/).
:::note[Same zone versus other zone]
A same zone subrequest is a request the Worker makes to an orange-clouded hostname in the same zone the Worker runs on. Depending on your DNS configuration, any request that falls outside that definition may be considered an other zone request by the Cloudflare network.
:::
### Always Use HTTPS
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Ignored |
| Worker | Other Zone | Rule Ignored |
### Automatic HTTPS Rewrites
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Ignored |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Browser Cache TTL
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Ignored |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Browser Integrity Check
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Ignored |
| Worker | Other Zone | Rule Ignored |
### Cache Deception Armor
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Cache Level
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Disable Zaraz
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Edge Cache TTL
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Email Obfuscation
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Ignored |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Forwarding URL
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Ignored |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### IP Geolocation Header
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Origin Cache Control
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
### Rocket Loader
| Source | Target | Behavior |
| ------ | ---------- | ------------ |
| Client | Worker | Rule Ignored |
| Worker | Same Zone | Rule Ignored |
| Worker | Other Zone | Rule Ignored |
### Security Level
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Ignored |
| Worker | Other Zone | Rule Ignored |
### SSL
| Source | Target | Behavior |
| ------ | ---------- | -------------- |
| Client | Worker | Rule Respected |
| Worker | Same Zone | Rule Respected |
| Worker | Other Zone | Rule Ignored |
---
# Connect to databases
URL: https://developers.cloudflare.com/workers/databases/connecting-to-databases/
Cloudflare Workers can connect to and query your data in both SQL and NoSQL databases, including:
- Traditional hosted relational databases, including Postgres and MySQL.
- Serverless databases: Supabase, MongoDB Atlas, PlanetScale, FaunaDB, and Prisma.
- Cloudflare's own [D1](/d1/), a serverless SQL-based database.
| Database | Integration | Library or Driver | Connection Method |
| --------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | ---------------------------------------------------------------------------------- | ------------------------------------------------------------------ |
| [Postgres](/workers/tutorials/postgres/) | - | [Postgres.js](https://github.com/porsager/postgres), [node-postgres](https://node-postgres.com/) | [Hyperdrive](/hyperdrive/) |
| [Postgres](/workers/tutorials/postgres/) | - | [deno-postgres](https://github.com/cloudflare/worker-template-postgres) | [TCP Socket](/workers/runtime-apis/tcp-sockets/) via database driver |
| [MySQL](/workers/tutorials/postgres/) | - | [deno-mysql](https://github.com/cloudflare/worker-template-mysql) | [TCP Socket](/workers/runtime-apis/tcp-sockets/) via database driver |
| [Fauna](https://docs.fauna.com/fauna/current/build/integration/cloudflare/) | [Yes](/workers/databases/native-integrations/fauna/) | [fauna](https://github.com/fauna/fauna-js) | API through client library |
| [PlanetScale](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript) | [Yes](/workers/databases/native-integrations/planetscale/) | [@planetscale/database](https://github.com/planetscale/database-js) | API via client library |
| [Supabase](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers) | [Yes](/workers/databases/native-integrations/supabase/) | [@supabase/supabase-js](https://github.com/supabase/supabase-js) | API via client library |
| [Prisma](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers) | No | [prisma](https://github.com/prisma/prisma) | API via client library |
| [Neon](https://blog.cloudflare.com/neon-postgres-database-from-workers/) | [Yes](/workers/databases/native-integrations/neon/) | [@neondatabase/serverless](https://neon.tech/blog/serverless-driver-for-postgres/) | API via client library |
| [Hasura](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | No | API | GraphQL API via fetch() |
| [Upstash Redis](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) | [Yes](/workers/databases/native-integrations/upstash/) | [@upstash/redis](https://github.com/upstash/upstash-redis) | API via client library |
| [TiDB Cloud](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare) | No | [@tidbcloud/serverless](https://github.com/tidbcloud/serverless-js) | API via client library |
:::note
If you do not see an integration listed or have an integration to add, complete and submit the [Cloudflare Developer Platform Integration form](https://forms.gle/iaUqLWE8aezSEhgd6).
:::
Once you have installed the necessary packages, use the APIs provided by these packages to connect to your database and perform operations on it. Refer to the links above for service-specific instructions.
## Connect to a database from a Worker
There are four ways to connect to a database from a Worker:
1. With [Hyperdrive](/hyperdrive/) (recommended), which dramatically speeds up accessing traditional databases. Hyperdrive currently supports PostgreSQL and PostgreSQL-compatible database providers.
2. [Database Integrations](/workers/databases/native-integrations/): Simplifies authentication by managing credentials on your behalf and includes support for PlanetScale, Neon and Supabase.
3. [TCP Socket API](/workers/runtime-apis/tcp-sockets): A direct TCP connection to a database. TCP is the de facto standard protocol that many databases, such as PostgreSQL and MySQL, use for client connectivity.
4. HTTP- or WebSocket-based serverless drivers: Many hosted databases support an HTTP or WebSocket API, either to let clients connect from environments that do not support TCP or as their preferred connection protocol.
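The fourth option boils down to a plain `fetch()` against the database's HTTP endpoint. The sketch below illustrates the pattern only; the endpoint URL, request body shape, and bearer token are all hypothetical, and real serverless drivers wrap this in a client library:

```js
// Minimal sketch of an HTTP-based "serverless driver": POST a SQL statement
// to a hypothetical HTTPS query endpoint and read rows back as JSON.
// `fetchImpl` is injectable so the function can be exercised without a network.
async function queryOverHttp(sql, params, { endpoint, token, fetchImpl = fetch }) {
  const res = await fetchImpl(endpoint, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ sql, params }),
  });
  if (!res.ok) throw new Error(`Query failed: ${res.status}`);
  return res.json(); // e.g. { rows: [...] } in this hypothetical API
}
```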
## Authentication
If your database requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command:
```sh
npx wrangler secret put <SECRET_NAME>
```
Then, retrieve the secret value in your code using the following code snippet:
```js
const secretValue = env.SECRET_NAME;
```
Use the secret value to authenticate with the external service. For example, if the external service requires an API key or database username and password for authentication, include these in using the relevant service's library or API.
For services that require mTLS authentication, use [mTLS certificates](/workers/runtime-apis/bindings/mtls) to present a client certificate.
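Putting the two snippets together, a minimal sketch of secret-based authentication looks like the following. The secret name `API_TOKEN` and the service URL are placeholders:

```js
// Read a secret off the `env` binding and build an Authorization header.
// The secret name is an illustrative placeholder.
function authHeaders(env, secretName = "API_TOKEN") {
  const secretValue = env[secretName];
  if (!secretValue) throw new Error(`Missing secret: ${secretName}`);
  return { authorization: `Bearer ${secretValue}` };
}

// In a Worker, `env` carries the secret created with `wrangler secret put`.
const worker = {
  async fetch(request, env) {
    // Hypothetical external service URL for illustration only.
    return fetch("https://db.example.com/query", {
      headers: authHeaders(env),
    });
  },
};
```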
## Next steps
- Learn how to connect to [an existing PostgreSQL database](/hyperdrive/) with Hyperdrive.
- Discover [other storage options available](/workers/platform/storage-options/) for use with Workers.
- [Create your first database](/d1/get-started/) with Cloudflare D1.
---
# Databases
URL: https://developers.cloudflare.com/workers/databases/
import { DirectoryListing } from "~/components";
Explore database integrations for your Worker projects.
---
# Dashboard
URL: https://developers.cloudflare.com/workers/get-started/dashboard/
import { Render } from "~/components";
Follow this guide to create a Workers application using [the Cloudflare dashboard](https://dash.cloudflare.com).
## Prerequisites
[Create a Cloudflare account](/learning-paths/get-started/account-setup/create-account/), if you have not already.
## Setup
To create a Workers application:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages**.
3. Select **Create**.
4. Select a template or **Create Worker**.
5. Review the provided code and select **Deploy**.
6. Preview your Worker at its provided [`workers.dev`](/workers/configuration/routing/workers-dev/) subdomain.
## Development
## Next steps
To do more:
- Push your project to a GitHub or GitLab repository then [connect to builds](/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments.
- Review our [Examples](/workers/examples/) and [Tutorials](/workers/tutorials/) for inspiration.
- Set up [bindings](/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality.
- Learn how to [test and debug](/workers/testing/) your Workers.
- Read about [Workers limits and pricing](/workers/platform/).
---
# Getting started
URL: https://developers.cloudflare.com/workers/get-started/
import { DirectoryListing, Render } from "~/components";
Build your first Worker.
---
# CLI
URL: https://developers.cloudflare.com/workers/get-started/guide/
import { Details, Render, PackageManagers } from "~/components";
Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI.
This guide will instruct you through setting up and deploying your first Worker.
## Prerequisites
## 1. Create a new Worker project
Open a terminal window and run C3 to create your Worker project. [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.
Now, you have a new project set up. Move into that project folder.
```sh
cd my-first-worker
```
In your project directory, C3 will have generated the following:
* `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
* `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax.
* `package.json`: A minimal Node dependencies configuration file.
* `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
* `node_modules`: Refer to [`npm` documentation on `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
In addition to creating new projects from C3 templates, C3 also supports creating new projects from existing Git repositories. To create a new project from an existing Git repository, open your terminal and run:
```sh
npm create cloudflare@latest -- --template <SOURCE>
```
`<SOURCE>` may be any of the following:
- `user/repo` (GitHub)
- `git@github.com:user/repo`
- `https://github.com/user/repo`
- `user/repo/some-template` (subdirectories)
- `user/repo#canary` (branches)
- `user/repo#1234abcd` (commit hash)
- `bitbucket:user/repo` (Bitbucket)
- `gitlab:user/repo` (GitLab)
Your existing template folder must contain the following files, at a minimum, to meet the requirements for Cloudflare Workers:
- `package.json`
- `wrangler.jsonc` [See sample Wrangler configuration](/workers/wrangler/configuration/#sample-wrangler-configuration)
- `src/` containing a worker script referenced from `wrangler.jsonc`
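A minimal `wrangler.jsonc` meeting these requirements might look like the following sketch (the project name, entry point, and date are placeholders):

```jsonc
{
	// Placeholder values; adjust to your project.
	"name": "my-first-worker",
	"main": "src/index.js",
	"compatibility_date": "2025-01-01"
}
```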
## 2. Develop with Wrangler CLI
C3 installs [Wrangler](/workers/wrangler/install-and-update/), the Workers command-line interface, in Workers projects by default. Wrangler lets you [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects.
After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to preview your Worker locally during development.
```sh
npx wrangler dev
```
If you have never used Wrangler before, it will open your web browser so you can log in to your Cloudflare account.
Go to [http://localhost:8787](http://localhost:8787) to view your Worker.
If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation.
## 3. Write code
With your new project generated and running, you can begin to write and edit your code.
Find the `src/index.js` file. `index.js` will be populated with the code below:
```js title="Original index.js"
export default {
async fetch(request, env, ctx) {
return new Response("Hello World!");
},
};
```
This code block consists of a few different parts.
```js title="Updated index.js" {1}
export default {
async fetch(request, env, ctx) {
return new Response("Hello World!");
},
};
```
`export default` is JavaScript syntax required for defining [JavaScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#default_exports_versus_named_exports). Your Worker has to have a default export of an object, with properties corresponding to the events your Worker should handle.
```js title="index.js" {2}
export default {
async fetch(request, env, ctx) {
return new Response("Hello World!");
},
};
```
This [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) will be called when your Worker receives an HTTP request. You can define additional event handlers in the exported object to respond to different types of events. For example, add a [`scheduled()` handler](/workers/runtime-apis/handlers/scheduled/) to respond to Worker invocations via a [Cron Trigger](/workers/configuration/cron-triggers/).
Additionally, the `fetch` handler will always be passed three parameters: [`request`, `env` and `context`](/workers/runtime-apis/handlers/fetch/).
```js title="index.js" {3}
export default {
async fetch(request, env, ctx) {
return new Response("Hello World!");
},
};
```
The Workers runtime expects `fetch` handlers to return a `Response` object or a Promise which resolves with a `Response` object. In this example, you will return a new `Response` with the string `"Hello World!"`.
Replace the content in your current `index.js` file with the content below, which changes the text output.
```js title="index.js" {3}
export default {
async fetch(request, env, ctx) {
return new Response("Hello Worker!");
},
};
```
Then, save the file and reload the page. Your Worker's output will have changed to the new text.
If the output for your Worker does not change, make sure that:
1. You saved the changes to `index.js`.
2. You have `wrangler dev` running.
3. You reloaded your browser.
## 4. Deploy your project
Deploy your Worker via Wrangler to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/).
```sh
npx wrangler deploy
```
If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up.
Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.
If you see [`523` errors](/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-523-origin-is-unreachable) when pushing your `*.workers.dev` subdomain for the first time, wait a minute or so and the errors will resolve themselves.
## Next steps
To do more:
- Push your project to a GitHub or GitLab repository then [connect to builds](/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments.
- Visit the [Cloudflare dashboard](https://dash.cloudflare.com/) for simpler editing.
- Review our [Examples](/workers/examples/) and [Tutorials](/workers/tutorials/) for inspiration.
- Set up [bindings](/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality.
- Learn how to [test and debug](/workers/testing/) your Workers.
- Read about [Workers limits and pricing](/workers/platform/).
---
# Prompting
URL: https://developers.cloudflare.com/workers/get-started/prompting/
import { Tabs, TabItem, GlossaryTooltip, Type, Badge, TypeScriptExample } from "~/components";
import { Code } from "@astrojs/starlight/components";
import BasePrompt from '~/content/partials/prompts/base-prompt.txt?raw';
One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on, or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt helps provide the model with clearer guidelines and examples that can dramatically improve output.
Below is an extensive example prompt that can help you build applications using Cloudflare Workers and your preferred AI model.
### Getting started with Workers using a prompt
To use the prompt:
1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard
2. Paste into your AI tool of choice (for example OpenAI's ChatGPT or Anthropic's Claude)
3. Make sure to enter your part of the prompt at the end, between the `<user_prompt>` and `</user_prompt>` tags.
Base prompt:
The prompt above adopts several best practices, including:
* Using XML-style tags to structure the prompt
* API and usage examples for products and use-cases
* Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the model's response.
* Recommendations on Cloudflare products to use for specific storage or state needs
### Additional uses
You can use the prompt in several ways:
* Within the user context window, with your own user prompt inserted between the `<user_prompt>` tags (**easiest**)
* As the `system` prompt for models that support system prompts
* Adding it to the prompt library and/or file context within your preferred IDE:
* Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai)
* Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context.
* Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat.
* GitHub Copilot: create the [`.github/copilot-instructions.md`](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot) file at the root of your project and add the prompt.
:::note
The prompt(s) here are examples and should be adapted to your specific use case. We'll continue to build out the prompts available here, including additional prompts for specific products.
Depending on the model and user prompt, it may generate invalid code, configuration or other errors, and we recommend reviewing and testing the generated code before deploying it.
:::
### Passing a system prompt
If you are building an AI application that will itself generate code, you can additionally use the prompt above as a "system prompt", which will give the LLM additional information on how to structure the output code. For example:
```ts
// Requires the `openai` npm package.
import OpenAI from "openai";
import workersPrompt from "./workersPrompt.md";

// Llama 3.3 from Workers AI
const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast";

export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext) {
    const openai = new OpenAI({
      apiKey: env.WORKERS_AI_API_KEY,
    });

    const stream = await openai.chat.completions.create({
      messages: [
        {
          role: "system",
          content: workersPrompt,
        },
        {
          role: "user",
          // Imagine something big!
          content: "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ...",
        },
      ],
      model: PREFERRED_MODEL,
      stream: true,
    });

    // Stream the response so we're not buffering the entire response in memory,
    // since it could be very large.
    const transformStream = new TransformStream();
    const writer = transformStream.writable.getWriter();
    const encoder = new TextEncoder();

    (async () => {
      try {
        for await (const chunk of stream) {
          const content = chunk.choices[0]?.delta?.content || "";
          await writer.write(encoder.encode(content));
        }
      } finally {
        await writer.close();
      }
    })();

    return new Response(transformStream.readable, {
      headers: {
        "Content-Type": "text/plain; charset=utf-8",
        "Transfer-Encoding": "chunked",
      },
    });
  },
};
```
## Additional resources
To get the most out of AI models and tools, we recommend reading the following guides on prompt engineering and structure:
* OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
* The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic
* Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts
* Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
* GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.
---
# Quickstarts
URL: https://developers.cloudflare.com/workers/get-started/quickstarts/
import { LinkButton, WorkerStarter } from "~/components";
Quickstarts are GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project. To start any of the projects below, run:
```sh
npm create cloudflare@latest <new-project-name> -- --template <template>
```
- `new-project-name`
- A folder with this name will be created with your new project inside, pre-configured for [your Workers account](/workers/wrangler/configuration/).
- `template`
- This is the URL of the GitHub repo starter, as below. Refer to the [create-cloudflare documentation](/pages/get-started/c3/) for a full list of possible values.
## Example Projects
---
## Frameworks
---
## Built with Workers
Get inspiration from other sites and projects out there that were built with Cloudflare Workers.
Built with Workers
---
# Frameworks
URL: https://developers.cloudflare.com/workers/frameworks/
import {
Badge,
Description,
DirectoryListing,
InlineBadge,
Render,
TabItem,
Tabs,
PackageManagers,
Feature,
} from "~/components";
Run front-end websites — static or dynamic — directly on Cloudflare's global
network.
The following frameworks have support for Cloudflare Workers and the new [Workers Assets](/workers/static-assets/). Refer to the individual guides below for instructions on how to get started.
:::note
**Static Assets for Workers is currently in open beta.**
If you are looking for a framework not on this list:
- It may be supported in [Cloudflare Pages](/pages/). Refer to [Pages Frameworks guides](/pages/framework-guides/) for a full list.
- Tell us which framework you would like to see supported on Workers in the [Cloudflare Developer Discord](https://discord.gg/dqgZUwcD).
:::
---
# Languages
URL: https://developers.cloudflare.com/workers/languages/
import { DirectoryListing } from "~/components";
Workers is a polyglot platform, and provides first-class support for the following programming languages:
Workers also supports [WebAssembly](/workers/runtime-apis/webassembly/) (abbreviated as "Wasm") — a binary format that many languages can be compiled to. This allows you to write Workers using programming languages beyond the languages listed above, including C, C++, Kotlin, Go and more.
---
# 103 Early Hints
URL: https://developers.cloudflare.com/workers/examples/103-early-hints/
import { TabItem, Tabs } from "~/components";
`103` Early Hints is an HTTP status code designed to speed up content delivery. When enabled, Cloudflare can cache the `Link` headers marked with preload and/or preconnect from HTML pages and serve them in a `103` Early Hints response before reaching the origin server. Browsers can use these hints to fetch linked assets while waiting for the origin’s final response, dramatically improving page load speeds.
To ensure Early Hints are enabled on your zone:
1. Log in to the [Cloudflare Dashboard](https://dash.cloudflare.com) and select your account and website.
2. Go to **Speed** > **Optimization** > **Content Optimization**.
3. Enable the **Early Hints** toggle to on.
You can return `Link` headers from a Worker running on your zone to speed up your page load times.
```js
const CSS = "body { color: red; }";
const HTML = `
<!doctype html>
<html>
  <head>
    <title>Early Hints test</title>
    <link rel="stylesheet" href="test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

export default {
  async fetch(req) {
    // If request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
};
```
```ts
const CSS = "body { color: red; }";
const HTML = `
<!doctype html>
<html>
  <head>
    <title>Early Hints test</title>
    <link rel="stylesheet" href="test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

export default {
  async fetch(req): Promise<Response> {
    // If request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
} satisfies ExportedHandler;
```
```py
import re
from js import Response, Headers
CSS = "body { color: red; }"
HTML = """
Early Hints test
Early Hints test page
"""
def on_fetch(request):
if re.search(r"test\.css$", request.url):
headers = Headers.new({"content-type": "text/css"}.items())
return Response.new(CSS, headers=headers)
else:
headers = Headers.new({"content-type": "text/html", "link": "</test.css>; rel=preload; as=style"}.items())
return Response.new(HTML, headers=headers)
```
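The `Link` header value follows the format `<url>; rel=type; as=type`, and multiple hints can be comma-separated. As a rough sketch (the `buildLinkHeader` helper is hypothetical, not part of the Workers API):

```js
// Hypothetical helper: build a Link header value from a list of hints.
// Each hint becomes "<href>; rel=...; as=..." and hints are comma-separated.
function buildLinkHeader(hints) {
	return hints
		.map(({ href, rel, as }) => `<${href}>; rel=${rel}` + (as ? `; as=${as}` : ""))
		.join(", ");
}

const link = buildLinkHeader([
	{ href: "/test.css", rel: "preload", as: "style" },
	{ href: "https://fonts.example.com", rel: "preconnect" },
]);
```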
---
# A/B testing with same-URL direct access
URL: https://developers.cloudflare.com/workers/examples/ab-testing/
import { TabItem, Tabs } from "~/components";
```js
const NAME = "myExampleWorkersABTest";
export default {
async fetch(req) {
const url = new URL(req.url);
// Enable Passthrough to allow direct access to control and test routes.
if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test"))
return fetch(req);
// Determine which group this requester is in.
const cookie = req.headers.get("cookie");
if (cookie && cookie.includes(`${NAME}=control`)) {
url.pathname = "/control" + url.pathname;
} else if (cookie && cookie.includes(`${NAME}=test`)) {
url.pathname = "/test" + url.pathname;
} else {
// If there is no cookie, this is a new client. Choose a group and set the cookie.
const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split
if (group === "control") {
url.pathname = "/control" + url.pathname;
} else {
url.pathname = "/test" + url.pathname;
}
// Reconstruct response to avoid immutability
let res = await fetch(url);
res = new Response(res.body, res);
// Set cookie to enable persistent A/B sessions.
res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`);
return res;
}
return fetch(url);
},
};
```
```ts
const NAME = "myExampleWorkersABTest";
export default {
async fetch(req): Promise<Response> {
const url = new URL(req.url);
// Enable Passthrough to allow direct access to control and test routes.
if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test"))
return fetch(req);
// Determine which group this requester is in.
const cookie = req.headers.get("cookie");
if (cookie && cookie.includes(`${NAME}=control`)) {
url.pathname = "/control" + url.pathname;
} else if (cookie && cookie.includes(`${NAME}=test`)) {
url.pathname = "/test" + url.pathname;
} else {
// If there is no cookie, this is a new client. Choose a group and set the cookie.
const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split
if (group === "control") {
url.pathname = "/control" + url.pathname;
} else {
url.pathname = "/test" + url.pathname;
}
// Reconstruct response to avoid immutability
let res = await fetch(url);
res = new Response(res.body, res);
// Set cookie to enable persistent A/B sessions.
res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`);
return res;
}
return fetch(url);
},
} satisfies ExportedHandler;
```
```py
import random
from urllib.parse import urlparse, urlunparse
from js import Response, Headers, fetch
NAME = "myExampleWorkersABTest"
async def on_fetch(request):
url = urlparse(request.url)
# Uncomment below when testing locally
# url = url._replace(netloc="example.com") if "localhost" in url.netloc else url
# Enable Passthrough to allow direct access to control and test routes.
if url.path.startswith("/control") or url.path.startswith("/test"):
return fetch(urlunparse(url))
# Determine which group this requester is in.
cookie = request.headers.get("cookie")
if cookie and f'{NAME}=control' in cookie:
url = url._replace(path="/control" + url.path)
elif cookie and f'{NAME}=test' in cookie:
url = url._replace(path="/test" + url.path)
else:
# If there is no cookie, this is a new client. Choose a group and set the cookie.
group = "test" if random.random() < 0.5 else "control"
if group == "control":
url = url._replace(path="/control" + url.path)
else:
url = url._replace(path="/test" + url.path)
# Reconstruct response to avoid immutability
res = await fetch(urlunparse(url))
headers = dict(res.headers)
headers["Set-Cookie"] = f'{NAME}={group}; path=/'
headers = Headers.new(headers.items())
return Response.new(res.body, headers=headers)
return fetch(urlunparse(url))
```
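The `Math.random()` split above relies on the cookie to keep a client in its group. If you have a stable identifier available (for example a logged-in user ID), a hash-based assignment keeps the split sticky without any cookie. A minimal sketch, using a simple non-cryptographic FNV-1a hash for illustration:

```js
// Sketch: sticky group assignment from a stable identifier. The same ID
// always hashes to the same value, so it always lands in the same group.
function fnv1a(str) {
	let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
	for (let i = 0; i < str.length; i++) {
		hash ^= str.charCodeAt(i);
		hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
	}
	return hash;
}

function assignGroup(id) {
	return fnv1a(id) % 2 === 0 ? "control" : "test";
}
```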
---
# Accessing the Cloudflare Object
URL: https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(req) {
const data =
req.cf !== undefined
? req.cf
: { error: "The `cf` object is not available inside the preview." };
return new Response(JSON.stringify(data, null, 2), {
headers: {
"content-type": "application/json;charset=UTF-8",
},
});
},
};
```
```ts
export default {
async fetch(req): Promise<Response> {
const data =
req.cf !== undefined
? req.cf
: { error: "The `cf` object is not available inside the preview." };
return new Response(JSON.stringify(data, null, 2), {
headers: {
"content-type": "application/json;charset=UTF-8",
},
});
},
} satisfies ExportedHandler;
```
```py
import json
from js import Response, Headers, JSON
def on_fetch(request):
error = json.dumps({ "error": "The `cf` object is not available inside the preview." })
data = request.cf if request.cf is not None else error
headers = Headers.new({"content-type":"application/json"}.items())
return Response.new(JSON.stringify(data, None, 2), headers=headers)
```
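Rather than echoing the whole object, you often only need a few fields. A hedged sketch (the `cfSummary` helper is hypothetical; `country`, `colo`, and `asn` are real `request.cf` properties):

```js
// Sketch: read a few commonly used request.cf fields with fallbacks, since
// cf is undefined when running in local previews.
function cfSummary(cf) {
	const { country = "unknown", colo = "unknown", asn = null } = cf ?? {};
	return { country, colo, asn };
}
```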
---
# Aggregate requests
URL: https://developers.cloudflare.com/workers/examples/aggregate-requests/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
// someHost is set up to return JSON responses
const someHost = "https://jsonplaceholder.typicode.com";
const url1 = someHost + "/todos/1";
const url2 = someHost + "/todos/2";
const responses = await Promise.all([fetch(url1), fetch(url2)]);
const results = await Promise.all(responses.map((r) => r.json()));
const options = {
headers: { "content-type": "application/json;charset=UTF-8" },
};
return new Response(JSON.stringify(results), options);
},
};
```
```ts
export default {
async fetch(request) {
// someHost is set up to return JSON responses
const someHost = "https://jsonplaceholder.typicode.com";
const url1 = someHost + "/todos/1";
const url2 = someHost + "/todos/2";
const responses = await Promise.all([fetch(url1), fetch(url2)]);
const results = await Promise.all(responses.map((r) => r.json()));
const options = {
headers: { "content-type": "application/json;charset=UTF-8" },
};
return new Response(JSON.stringify(results), options);
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch, Headers, JSON, Promise
async def on_fetch(request):
# some_host is set up to return JSON responses
some_host = "https://jsonplaceholder.typicode.com"
url1 = some_host + "/todos/1"
url2 = some_host + "/todos/2"
responses = await Promise.all([fetch(url1), fetch(url2)])
results = await Promise.all(map(lambda r: r.json(), responses))
headers = Headers.new({"content-type": "application/json;charset=UTF-8"}.items())
return Response.new(JSON.stringify(results), headers=headers)
```
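Note that `Promise.all` rejects as soon as any request fails. If partial results are acceptable, `Promise.allSettled` lets the remaining requests succeed; a sketch that replaces failures with an error object:

```js
// Sketch: aggregate results while tolerating individual failures.
// Fulfilled entries keep their value; rejected entries become { error }.
async function aggregateSettled(promises) {
	const settled = await Promise.allSettled(promises);
	return settled.map((s) =>
		s.status === "fulfilled" ? s.value : { error: String(s.reason) },
	);
}
```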
---
# Alter headers
URL: https://developers.cloudflare.com/workers/examples/alter-headers/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const response = await fetch("https://example.com");
// Clone the response so that it's no longer immutable
const newResponse = new Response(response.body, response);
// Add a custom header with a value
newResponse.headers.append(
"x-workers-hello",
"Hello from Cloudflare Workers",
);
// Delete headers
newResponse.headers.delete("x-header-to-delete");
newResponse.headers.delete("x-header2-to-delete");
// Adjust the value for an existing header
newResponse.headers.set("x-header-to-change", "NewValue");
return newResponse;
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const response = await fetch(request);
// Clone the response so that it's no longer immutable
const newResponse = new Response(response.body, response);
// Add a custom header with a value
newResponse.headers.append(
"x-workers-hello",
"Hello from Cloudflare Workers",
);
// Delete headers
newResponse.headers.delete("x-header-to-delete");
newResponse.headers.delete("x-header2-to-delete");
// Adjust the value for an existing header
newResponse.headers.set("x-header-to-change", "NewValue");
return newResponse;
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch
async def on_fetch(request):
response = await fetch("https://example.com")
# Clone the response so that it's no longer immutable
new_response = Response.new(response.body, response)
# Add a custom header with a value
new_response.headers.append(
"x-workers-hello",
"Hello from Cloudflare Workers"
)
# Delete headers
new_response.headers.delete("x-header-to-delete")
new_response.headers.delete("x-header2-to-delete")
# Adjust the value for an existing header
new_response.headers.set("x-header-to-change", "NewValue")
return new_response
```
You can also use the [`custom-headers-example` template](https://github.com/kristianfreeman/custom-headers-example) to deploy this code to your custom domain.
---
# Auth with headers
URL: https://developers.cloudflare.com/workers/examples/auth-with-headers/
import { TabItem, Tabs } from "~/components";
:::caution[Caution when using in production]
The example code contains a generic header key and value of `X-Custom-PSK` and `mypresharedkey`. To best protect your resources, change the header key and value in the Workers editor before saving your code.
:::
```js
export default {
async fetch(request) {
/**
* @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key
* @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value
*/
const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK";
const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey";
const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY);
if (psk === PRESHARED_AUTH_HEADER_VALUE) {
// Correct preshared header key supplied. Fetch request from origin.
return fetch(request);
}
// Incorrect key supplied. Reject the request.
return new Response("Sorry, you have supplied an invalid key.", {
status: 403,
});
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
/**
* @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key
* @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value
*/
const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK";
const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey";
const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY);
if (psk === PRESHARED_AUTH_HEADER_VALUE) {
// Correct preshared header key supplied. Fetch request from origin.
return fetch(request);
}
// Incorrect key supplied. Reject the request.
return new Response("Sorry, you have supplied an invalid key.", {
status: 403,
});
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch
async def on_fetch(request):
PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"
PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"
psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY)
if psk == PRESHARED_AUTH_HEADER_VALUE:
# Correct preshared header key supplied. Fetch request from origin.
return fetch(request)
# Incorrect key supplied. Reject the request.
return Response.new("Sorry, you have supplied an invalid key.", status=403)
```
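One caveat: comparing secrets with `===` can return early on the first differing character, which leaks timing information. A hedged sketch of a constant-time alternative (the HTTP Basic Authentication example below uses the runtime's `timingSafeEqual` for the same reason):

```js
// Sketch: constant-time string comparison. Every byte is inspected
// regardless of where the first mismatch occurs, so the comparison time
// does not reveal how much of the key an attacker has guessed correctly.
function constantTimeEqual(a, b) {
	const aBytes = new TextEncoder().encode(a);
	const bBytes = new TextEncoder().encode(b);
	if (aBytes.byteLength !== bBytes.byteLength) return false;
	let diff = 0;
	for (let i = 0; i < aBytes.length; i++) {
		diff |= aBytes[i] ^ bBytes[i];
	}
	return diff === 0;
}
```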
---
# HTTP Basic Authentication
URL: https://developers.cloudflare.com/workers/examples/basic-auth/
import { TabItem, Tabs } from "~/components";
:::note
This example Worker makes use of the [Node.js Buffer API](/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).
:::
:::caution[Caution when using in production]
This code is provided as a sample, and is not suitable for production use. Basic Authentication sends credentials unencrypted, and must be used with an HTTPS connection to be considered secure. For a production-ready authentication system, consider using [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-public-app/).
:::
```js
/**
* Shows how to restrict access using the HTTP Basic schema.
* @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication
* @see https://tools.ietf.org/html/rfc7617
*
*/
import { Buffer } from "node:buffer";
const encoder = new TextEncoder();
/**
* Protect against timing attacks by safely comparing values using `timingSafeEqual`.
* Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details
* @param {string} a
* @param {string} b
* @returns {boolean}
*/
function timingSafeEqual(a, b) {
const aBytes = encoder.encode(a);
const bBytes = encoder.encode(b);
if (aBytes.byteLength !== bBytes.byteLength) {
// Strings must be the same length in order to compare
// with crypto.subtle.timingSafeEqual
return false;
}
return crypto.subtle.timingSafeEqual(aBytes, bBytes);
}
export default {
/**
*
* @param {Request} request
* @param {{PASSWORD: string}} env
* @returns
*/
async fetch(request, env) {
const BASIC_USER = "admin";
// You will need an admin password. This should be
// attached to your Worker as an encrypted secret.
// Refer to https://developers.cloudflare.com/workers/configuration/secrets/
const BASIC_PASS = env.PASSWORD ?? "password";
const url = new URL(request.url);
switch (url.pathname) {
case "/":
return new Response("Anyone can access the homepage.");
case "/logout":
// Invalidate the "Authorization" header by returning a HTTP 401.
// We do not send a "WWW-Authenticate" header, as this would trigger
// a popup in the browser, immediately asking for credentials again.
return new Response("Logged out.", { status: 401 });
case "/admin": {
// The "Authorization" header is sent when authenticated.
const authorization = request.headers.get("Authorization");
if (!authorization) {
return new Response("You need to login.", {
status: 401,
headers: {
// Prompts the user for credentials.
"WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
},
});
}
const [scheme, encoded] = authorization.split(" ");
// The Authorization header must start with Basic, followed by a space.
if (!encoded || scheme !== "Basic") {
return new Response("Malformed authorization header.", {
status: 400,
});
}
const credentials = Buffer.from(encoded, "base64").toString();
// The username & password are split by the first colon.
//=> example: "username:password"
const index = credentials.indexOf(":");
const user = credentials.substring(0, index);
const pass = credentials.substring(index + 1);
if (
!timingSafeEqual(BASIC_USER, user) ||
!timingSafeEqual(BASIC_PASS, pass)
) {
return new Response("You need to login.", {
status: 401,
headers: {
// Prompts the user for credentials.
"WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
},
});
}
return new Response("🎉 You have private access!", {
status: 200,
headers: {
"Cache-Control": "no-store",
},
});
}
}
return new Response("Not Found.", { status: 404 });
},
};
```
```ts
/**
* Shows how to restrict access using the HTTP Basic schema.
* @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication
* @see https://tools.ietf.org/html/rfc7617
*
*/
import { Buffer } from "node:buffer";
const encoder = new TextEncoder();
/**
* Protect against timing attacks by safely comparing values using `timingSafeEqual`.
* Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details
*/
function timingSafeEqual(a: string, b: string) {
const aBytes = encoder.encode(a);
const bBytes = encoder.encode(b);
if (aBytes.byteLength !== bBytes.byteLength) {
// Strings must be the same length in order to compare
// with crypto.subtle.timingSafeEqual
return false;
}
return crypto.subtle.timingSafeEqual(aBytes, bBytes);
}
interface Env {
PASSWORD: string;
}
export default {
async fetch(request, env): Promise<Response> {
const BASIC_USER = "admin";
// You will need an admin password. This should be
// attached to your Worker as an encrypted secret.
// Refer to https://developers.cloudflare.com/workers/configuration/secrets/
const BASIC_PASS = env.PASSWORD ?? "password";
const url = new URL(request.url);
switch (url.pathname) {
case "/":
return new Response("Anyone can access the homepage.");
case "/logout":
// Invalidate the "Authorization" header by returning a HTTP 401.
// We do not send a "WWW-Authenticate" header, as this would trigger
// a popup in the browser, immediately asking for credentials again.
return new Response("Logged out.", { status: 401 });
case "/admin": {
// The "Authorization" header is sent when authenticated.
const authorization = request.headers.get("Authorization");
if (!authorization) {
return new Response("You need to login.", {
status: 401,
headers: {
// Prompts the user for credentials.
"WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
},
});
}
const [scheme, encoded] = authorization.split(" ");
// The Authorization header must start with Basic, followed by a space.
if (!encoded || scheme !== "Basic") {
return new Response("Malformed authorization header.", {
status: 400,
});
}
const credentials = Buffer.from(encoded, "base64").toString();
// The username and password are split by the first colon.
//=> example: "username:password"
const index = credentials.indexOf(":");
const user = credentials.substring(0, index);
const pass = credentials.substring(index + 1);
if (
!timingSafeEqual(BASIC_USER, user) ||
!timingSafeEqual(BASIC_PASS, pass)
) {
return new Response("You need to login.", {
status: 401,
headers: {
// Prompts the user for credentials.
"WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
},
});
}
return new Response("🎉 You have private access!", {
status: 200,
headers: {
"Cache-Control": "no-store",
},
});
}
}
return new Response("Not Found.", { status: 404 });
},
} satisfies ExportedHandler<Env>;
```
```rs
use base64::prelude::*;
use worker::*;
#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
let basic_user = "admin";
// You will need an admin password. This should be
// attached to your Worker as an encrypted secret.
// Refer to https://developers.cloudflare.com/workers/configuration/secrets/
let basic_pass = match env.secret("PASSWORD") {
Ok(s) => s.to_string(),
Err(_) => "password".to_string(),
};
let url = req.url()?;
match url.path() {
"/" => Response::ok("Anyone can access the homepage."),
// Invalidate the "Authorization" header by returning a HTTP 401.
// We do not send a "WWW-Authenticate" header, as this would trigger
// a popup in the browser, immediately asking for credentials again.
"/logout" => Response::error("Logged out.", 401),
"/admin" => {
// The "Authorization" header is sent when authenticated.
let authorization = req.headers().get("Authorization")?;
if authorization == None {
let mut headers = Headers::new();
// Prompts the user for credentials.
headers.set(
"WWW-Authenticate",
"Basic realm='my scope', charset='UTF-8'",
)?;
return Ok(Response::error("You need to login.", 401)?.with_headers(headers));
}
let authorization = authorization.unwrap();
let auth: Vec<&str> = authorization.split(" ").collect();
let scheme = auth[0];
let encoded = auth[1];
// The Authorization header must start with Basic, followed by a space.
if encoded == "" || scheme != "Basic" {
return Response::error("Malformed authorization header.", 400);
}
let buff = BASE64_STANDARD.decode(encoded).unwrap();
let credentials = String::from_utf8_lossy(&buff);
// The username & password are split by the first colon.
//=> example: "username:password"
let credentials: Vec<&str> = credentials.split(':').collect();
let user = credentials[0];
let pass = credentials[1];
if user != basic_user || pass != basic_pass {
let mut headers = Headers::new();
// Prompts the user for credentials.
headers.set(
"WWW-Authenticate",
"Basic realm='my scope', charset='UTF-8'",
)?;
return Ok(Response::error("You need to login.", 401)?.with_headers(headers));
}
let mut headers = Headers::new();
headers.set("Cache-Control", "no-store")?;
Ok(Response::ok("🎉 You have private access!")?.with_headers(headers))
}
_ => Response::error("Not Found.", 404),
}
}
```
---
# Block on TLS
URL: https://developers.cloudflare.com/workers/examples/block-on-tls/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
try {
const tlsVersion = request.cf.tlsVersion;
// Allow only TLS versions 1.2 and 1.3
if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {
return new Response("Please use TLS version 1.2 or higher.", {
status: 403,
});
}
return fetch(request);
} catch (err) {
console.error(
"request.cf does not exist in the previewer, only in production",
);
return new Response(`Error in workers script ${err.message}`, {
status: 500,
});
}
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
try {
const tlsVersion = request.cf.tlsVersion;
// Allow only TLS versions 1.2 and 1.3
if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {
return new Response("Please use TLS version 1.2 or higher.", {
status: 403,
});
}
return fetch(request);
} catch (err) {
console.error(
"request.cf does not exist in the previewer, only in production",
);
return new Response(`Error in workers script ${err.message}`, {
status: 500,
});
}
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch
async def on_fetch(request):
tls_version = request.cf.tlsVersion
if tls_version not in ("TLSv1.2", "TLSv1.3"):
return Response.new("Please use TLS version 1.2 or higher.", status=403)
return fetch(request)
```
---
# Bulk origin override
URL: https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
/**
* An object with different URLs to fetch
* @param {Object} ORIGINS
*/
const ORIGINS = {
"starwarsapi.yourdomain.com": "swapi.dev",
"google.yourdomain.com": "www.google.com",
};
const url = new URL(request.url);
// Check if incoming hostname is a key in the ORIGINS object
if (url.hostname in ORIGINS) {
const target = ORIGINS[url.hostname];
url.hostname = target;
// If it is, proxy request to that third party origin
return fetch(url.toString(), request);
}
// Otherwise, process request as normal
return fetch(request);
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
/**
* An object with different URLs to fetch
* @param {Object} ORIGINS
*/
const ORIGINS = {
"starwarsapi.yourdomain.com": "swapi.dev",
"google.yourdomain.com": "www.google.com",
};
const url = new URL(request.url);
// Check if incoming hostname is a key in the ORIGINS object
if (url.hostname in ORIGINS) {
const target = ORIGINS[url.hostname];
url.hostname = target;
// If it is, proxy request to that third party origin
return fetch(url.toString(), request);
}
// Otherwise, process request as normal
return fetch(request);
},
} satisfies ExportedHandler;
```
```py
from js import fetch, URL
async def on_fetch(request):
# A dict with different URLs to fetch
ORIGINS = {
"starwarsapi.yourdomain.com": "swapi.dev",
"google.yourdomain.com": "www.google.com",
}
url = URL.new(request.url)
# Check if incoming hostname is a key in the ORIGINS object
if url.hostname in ORIGINS:
url.hostname = ORIGINS[url.hostname]
# If it is, proxy request to that third party origin
return fetch(url.toString(), request)
# Otherwise, process request as normal
return fetch(request)
```
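The lookup above only matches exact hostnames. If you want many subdomains to share one origin, a single-level wildcard fallback is a common extension. A minimal sketch (the `*.yourdomain.com` entry and `resolveOrigin` helper are assumptions for illustration, not part of the original example):

```js
// Sketch: exact-hostname lookup with a "*.domain" wildcard fallback.
function resolveOrigin(hostname, origins) {
	if (hostname in origins) return origins[hostname];
	// Replace the leftmost label with "*" and try again.
	const wildcard = "*." + hostname.split(".").slice(1).join(".");
	return origins[wildcard] ?? null;
}

const ORIGINS = {
	"google.yourdomain.com": "www.google.com",
	"*.yourdomain.com": "fallback.example.com",
};
```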
---
# Bulk redirects
URL: https://developers.cloudflare.com/workers/examples/bulk-redirects/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const externalHostname = "examples.cloudflareworkers.com";
const redirectMap = new Map([
["/bulk1", "https://" + externalHostname + "/redirect2"],
["/bulk2", "https://" + externalHostname + "/redirect3"],
["/bulk3", "https://" + externalHostname + "/redirect4"],
["/bulk4", "https://google.com"],
]);
const requestURL = new URL(request.url);
const path = requestURL.pathname;
const location = redirectMap.get(path);
if (location) {
return Response.redirect(location, 301);
}
// If request not in map, return the original request
return fetch(request);
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const externalHostname = "examples.cloudflareworkers.com";
const redirectMap = new Map([
["/bulk1", "https://" + externalHostname + "/redirect2"],
["/bulk2", "https://" + externalHostname + "/redirect3"],
["/bulk3", "https://" + externalHostname + "/redirect4"],
["/bulk4", "https://google.com"],
]);
const requestURL = new URL(request.url);
const path = requestURL.pathname;
const location = redirectMap.get(path);
if (location) {
return Response.redirect(location, 301);
}
// If request not in map, return the original request
return fetch(request);
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch, URL
async def on_fetch(request):
external_hostname = "examples.cloudflareworkers.com"
redirect_map = {
"/bulk1": "https://" + external_hostname + "/redirect2",
"/bulk2": "https://" + external_hostname + "/redirect3",
"/bulk3": "https://" + external_hostname + "/redirect4",
"/bulk4": "https://google.com",
}
url = URL.new(request.url)
location = redirect_map.get(url.pathname, None)
if location:
return Response.redirect(location, 301)
# If request not in map, return the original request
return fetch(request)
```
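The map lookup above matches only on pathname, so any query string on the incoming request is dropped. If you want to carry it through to the redirect target, a hypothetical helper could copy the parameters across:

```js
// Sketch: copy the original request's query parameters onto the redirect
// target before issuing Response.redirect(location, 301).
function redirectLocationWithQuery(requestUrl, location) {
	const source = new URL(requestUrl);
	const target = new URL(location);
	source.searchParams.forEach((value, key) => target.searchParams.set(key, value));
	return target.toString();
}
```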
---
# Using the Cache API
URL: https://developers.cloudflare.com/workers/examples/cache-api/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request, env, ctx) {
const cacheUrl = new URL(request.url);
// Construct the cache key from the cache URL
const cacheKey = new Request(cacheUrl.toString(), request);
const cache = caches.default;
// Check whether the value is already available in the cache
// if not, you will need to fetch it from origin, and store it in the cache
let response = await cache.match(cacheKey);
if (!response) {
console.log(
`Response for request url: ${request.url} not present in cache. Fetching and caching request.`,
);
// If not in cache, get it from origin
response = await fetch(request);
// Must use Response constructor to inherit all of response's fields
response = new Response(response.body, response);
// Cache API respects Cache-Control headers. Setting s-maxage to 10
// will limit the response to be in cache for 10 seconds max
// Any changes made to the response here will be reflected in the cached value
response.headers.append("Cache-Control", "s-maxage=10");
ctx.waitUntil(cache.put(cacheKey, response.clone()));
} else {
console.log(`Cache hit for: ${request.url}.`);
}
return response;
},
};
```
```ts
interface Env {}
export default {
async fetch(request, env, ctx): Promise<Response> {
const cacheUrl = new URL(request.url);
// Construct the cache key from the cache URL
const cacheKey = new Request(cacheUrl.toString(), request);
const cache = caches.default;
// Check whether the value is already available in the cache
// if not, you will need to fetch it from origin, and store it in the cache
let response = await cache.match(cacheKey);
if (!response) {
console.log(
`Response for request url: ${request.url} not present in cache. Fetching and caching request.`,
);
// If not in cache, get it from origin
response = await fetch(request);
// Must use Response constructor to inherit all of response's fields
response = new Response(response.body, response);
// Cache API respects Cache-Control headers. Setting s-maxage to 10
// will limit the response to be in cache for 10 seconds max
// Any changes made to the response here will be reflected in the cached value
response.headers.append("Cache-Control", "s-maxage=10");
ctx.waitUntil(cache.put(cacheKey, response.clone()));
} else {
console.log(`Cache hit for: ${request.url}.`);
}
return response;
},
} satisfies ExportedHandler<Env>;
```
```py
from pyodide.ffi import create_proxy
from js import Response, Request, URL, caches, fetch
async def on_fetch(request, _env, ctx):
cache_url = request.url
# Construct the cache key from the cache URL
cache_key = Request.new(cache_url, request)
cache = caches.default
# Check whether the value is already available in the cache
# if not, you will need to fetch it from origin, and store it in the cache
response = await cache.match(cache_key)
if response is None:
print(f"Response for request url: {request.url} not present in cache. Fetching and caching request.")
# If not in cache, get it from origin
response = await fetch(request)
# Must use Response constructor to inherit all of response's fields
response = Response.new(response.body, response)
# Cache API respects Cache-Control headers. Setting s-maxage to 10
# will limit the response to be in cache for 10 seconds max
# Any changes made to the response here will be reflected in the cached value
response.headers.append("Cache-Control", "s-maxage=10")
ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone())))
else:
print(f"Cache hit for: {request.url}.")
return response
```
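Because the example uses the full request URL as the cache key, `?utm_source=...` variants each get their own cache entry. Normalizing the key so it keeps only parameters that actually affect the response raises the hit rate. A hedged sketch (the `page` parameter is an assumption for illustration):

```js
// Sketch: rebuild the URL keeping only an allowlist of query parameters,
// so tracking parameters do not fragment the cache.
function normalizeCacheKey(requestUrl, keep = ["page"]) {
	const url = new URL(requestUrl);
	const kept = new URLSearchParams();
	for (const key of keep) {
		const value = url.searchParams.get(key);
		if (value !== null) kept.set(key, value);
	}
	url.search = kept.toString();
	return url.toString();
}
```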
---
# Cache POST requests
URL: https://developers.cloudflare.com/workers/examples/cache-post-request/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request, env, ctx) {
async function sha256(message) {
// encode as UTF-8
const msgBuffer = new TextEncoder().encode(message);
// hash the message
const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer);
// convert bytes to hex string
return [...new Uint8Array(hashBuffer)]
.map((b) => b.toString(16).padStart(2, "0"))
.join("");
}
try {
if (request.method.toUpperCase() === "POST") {
const body = await request.clone().text();
// Hash the request body to use it as a part of the cache key
const hash = await sha256(body);
const cacheUrl = new URL(request.url);
// Store the URL in cache by prepending the body's hash
cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash;
// Convert to a GET to be able to cache
const cacheKey = new Request(cacheUrl.toString(), {
headers: request.headers,
method: "GET",
});
const cache = caches.default;
// Find the cache key in the cache
let response = await cache.match(cacheKey);
// Otherwise, fetch response to POST request from origin
if (!response) {
response = await fetch(request);
ctx.waitUntil(cache.put(cacheKey, response.clone()));
}
return response;
}
return fetch(request);
} catch (e) {
return new Response("Error thrown " + e.message);
}
},
};
```
```ts
interface Env {}
export default {
async fetch(request, env, ctx): Promise<Response> {
async function sha256(message) {
// encode as UTF-8
const msgBuffer = new TextEncoder().encode(message);
// hash the message
const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer);
// convert bytes to hex string
return [...new Uint8Array(hashBuffer)]
.map((b) => b.toString(16).padStart(2, "0"))
.join("");
}
try {
if (request.method.toUpperCase() === "POST") {
const body = await request.clone().text();
// Hash the request body to use it as a part of the cache key
const hash = await sha256(body);
const cacheUrl = new URL(request.url);
// Store the URL in cache by prepending the body's hash
cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash;
// Convert to a GET to be able to cache
const cacheKey = new Request(cacheUrl.toString(), {
headers: request.headers,
method: "GET",
});
const cache = caches.default;
// Find the cache key in the cache
let response = await cache.match(cacheKey);
// Otherwise, fetch response to POST request from origin
if (!response) {
response = await fetch(request);
ctx.waitUntil(cache.put(cacheKey, response.clone()));
}
return response;
}
return fetch(request);
} catch (e) {
return new Response("Error thrown " + e.message);
}
},
} satisfies ExportedHandler<Env>;
```
```py
import hashlib
from pyodide.ffi import create_proxy
from js import fetch, URL, Headers, Request, caches

async def on_fetch(request, _, ctx):
    if 'POST' in request.method:
        # Hash the request body to use it as a part of the cache key
        body = await request.clone().text()
        body_hash = hashlib.sha256(body.encode('UTF-8')).hexdigest()
        # Store the URL in cache by prepending the body's hash
        cache_url = URL.new(request.url)
        cache_url.pathname = "/posts" + cache_url.pathname + body_hash
        # Convert to a GET to be able to cache
        headers = Headers.new(dict(request.headers).items())
        cache_key = Request.new(cache_url.toString(), method='GET', headers=headers)
        # Find the cache key in the cache
        cache = caches.default
        response = await cache.match(cache_key)
        # Otherwise, fetch response to POST request from origin
        if response is None:
            response = await fetch(request)
            ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone())))
        return response
    return fetch(request)
```
---
# Cache Tags using Workers
URL: https://developers.cloudflare.com/workers/examples/cache-tags/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const requestUrl = new URL(request.url);
const params = requestUrl.searchParams;
const tags =
params && params.has("tags") ? params.get("tags").split(",") : [];
const url =
params && params.has("uri") ? JSON.parse(params.get("uri")) : "";
if (!url) {
const errorObject = {
error: "URL cannot be empty",
};
return new Response(JSON.stringify(errorObject), { status: 400 });
}
const init = {
cf: {
cacheTags: tags,
},
};
return fetch(url, init)
.then((result) => {
const cacheStatus = result.headers.get("cf-cache-status");
const lastModified = result.headers.get("last-modified");
const response = {
cache: cacheStatus,
lastModified: lastModified,
};
return new Response(JSON.stringify(response), {
status: result.status,
});
})
.catch((err) => {
const errorObject = {
error: err.message,
};
return new Response(JSON.stringify(errorObject), { status: 500 });
});
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const requestUrl = new URL(request.url);
const params = requestUrl.searchParams;
const tags =
params && params.has("tags") ? params.get("tags").split(",") : [];
const url =
params && params.has("uri") ? JSON.parse(params.get("uri")) : "";
if (!url) {
const errorObject = {
error: "URL cannot be empty",
};
return new Response(JSON.stringify(errorObject), { status: 400 });
}
const init = {
cf: {
cacheTags: tags,
},
};
return fetch(url, init)
.then((result) => {
const cacheStatus = result.headers.get("cf-cache-status");
const lastModified = result.headers.get("last-modified");
const response = {
cache: cacheStatus,
lastModified: lastModified,
};
return new Response(JSON.stringify(response), {
status: result.status,
});
})
.catch((err) => {
const errorObject = {
error: err.message,
};
return new Response(JSON.stringify(errorObject), { status: 500 });
});
},
} satisfies ExportedHandler;
```
```py
from pyodide.ffi import to_js as _to_js
from js import Response, URL, Object, fetch

def to_js(x):
    return _to_js(x, dict_converter=Object.fromEntries)

async def on_fetch(request):
    request_url = URL.new(request.url)
    params = request_url.searchParams
    tags = params["tags"].split(",") if "tags" in params else []
    url = params["uri"] or None
    if url is None:
        error = {"error": "URL cannot be empty"}
        return Response.json(to_js(error), status=400)
    options = {"cf": {"cacheTags": tags}}
    result = await fetch(url, to_js(options))
    cache_status = result.headers["cf-cache-status"]
    last_modified = result.headers["last-modified"]
    response = {"cache": cache_status, "lastModified": last_modified}
    return Response.json(to_js(response), status=result.status)
```
---
# Cache using fetch
URL: https://developers.cloudflare.com/workers/examples/cache-using-fetch/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const url = new URL(request.url);
// Only use the path for the cache key, removing query strings
// and always store using HTTPS, for example, https://www.example.com/file-uri-here
const someCustomKey = `https://${url.hostname}${url.pathname}`;
let response = await fetch(request, {
cf: {
// Always cache this fetch regardless of content type
// for a max of 5 seconds before revalidating the resource
cacheTtl: 5,
cacheEverything: true,
//Enterprise only feature, see Cache API for other plans
cacheKey: someCustomKey,
},
});
// Reconstruct the Response object to make its headers mutable.
response = new Response(response.body, response);
// Set cache control headers to cache on browser for 25 minutes
response.headers.set("Cache-Control", "max-age=1500");
return response;
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const url = new URL(request.url);
// Only use the path for the cache key, removing query strings
// and always store using HTTPS, for example, https://www.example.com/file-uri-here
const someCustomKey = `https://${url.hostname}${url.pathname}`;
let response = await fetch(request, {
cf: {
// Always cache this fetch regardless of content type
// for a max of 5 seconds before revalidating the resource
cacheTtl: 5,
cacheEverything: true,
//Enterprise only feature, see Cache API for other plans
cacheKey: someCustomKey,
},
});
// Reconstruct the Response object to make its headers mutable.
response = new Response(response.body, response);
// Set cache control headers to cache on browser for 25 minutes
response.headers.set("Cache-Control", "max-age=1500");
return response;
},
} satisfies ExportedHandler;
```
```py
from pyodide.ffi import to_js as _to_js
from js import Response, URL, Object, fetch

def to_js(x):
    return _to_js(x, dict_converter=Object.fromEntries)

async def on_fetch(request):
    url = URL.new(request.url)
    # Only use the path for the cache key, removing query strings
    # and always store using HTTPS, for example, https://www.example.com/file-uri-here
    some_custom_key = f"https://{url.hostname}{url.pathname}"
    response = await fetch(
        request,
        cf=to_js({
            # Always cache this fetch regardless of content type
            # for a max of 5 seconds before revalidating the resource
            "cacheTtl": 5,
            "cacheEverything": True,
            # Enterprise only feature, see Cache API for other plans
            "cacheKey": some_custom_key,
        }),
    )
    # Reconstruct the Response object to make its headers mutable
    response = Response.new(response.body, response)
    # Set cache control headers to cache on browser for 25 minutes
    response.headers["Cache-Control"] = "max-age=1500"
    return response
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
let url = req.url()?;
// Only use the path for the cache key, removing query strings
// and always store using HTTPS, for example, https://www.example.com/file-uri-here
let custom_key = format!(
"https://{host}{path}",
host = url.host_str().unwrap(),
path = url.path()
);
let request = Request::new_with_init(
url.as_str(),
&RequestInit {
headers: req.headers().clone(),
method: req.method(),
cf: CfProperties {
// Always cache this fetch regardless of content type
// for a max of 5 seconds before revalidating the resource
cache_ttl: Some(5),
cache_everything: Some(true),
// Enterprise only feature, see Cache API for other plans
cache_key: Some(custom_key),
..CfProperties::default()
},
..RequestInit::default()
},
)?;
let mut response = Fetch::Request(request).send().await?;
// Set cache control headers to cache on browser for 25 minutes
let _ = response.headers_mut().set("Cache-Control", "max-age=1500");
Ok(response)
}
```
## Caching HTML resources
```js
// Force Cloudflare to cache an asset
fetch(event.request, { cf: { cacheEverything: true } });
```
Setting the cache level to **Cache Everything** will override the default cacheability of the asset. For time-to-live (TTL), Cloudflare will still rely on headers set by the origin.
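Since the TTL still comes from origin headers, you can pair `cacheEverything` with `cacheTtl` when you want to pin the edge TTL as well. A minimal sketch (the `cacheEverythingInit` helper is illustrative, not part of the Workers API):

```js
// Build fetch options that force caching and pin the edge TTL,
// instead of relying on the origin's Cache-Control headers.
// `cacheEverythingInit` is a hypothetical helper for illustration.
function cacheEverythingInit(ttlSeconds) {
	return {
		cf: {
			cacheEverything: true, // override the asset's default cacheability
			cacheTtl: ttlSeconds, // override the TTL the origin would set
		},
	};
}

// Inside a Worker fetch handler:
// const response = await fetch(request, cacheEverythingInit(3600));
```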
## Custom cache keys
:::note
This feature is available only to Enterprise customers.
:::
A request's cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both. For more about cache keys, refer to the [Create custom cache keys](/cache/how-to/cache-keys/#create-custom-cache-keys) documentation.
```js
// Set cache key for this request to "some-string".
fetch(event.request, { cf: { cacheKey: "some-string" } });
```
Normally, Cloudflare computes the cache key for a request based on the request's URL. Sometimes, though, you may want different URLs to be treated as if they were the same for caching purposes. For example, if your website content is hosted on both Amazon S3 and Google Cloud Storage - you have the same content in both places, and you can use a Worker to randomly balance between the two. However, you do not want to end up caching two copies of your content. You could use custom cache keys to cache based on the original request URL rather than the subrequest URL:
```js
export default {
async fetch(request) {
let url = new URL(request.url);
if (Math.random() < 0.5) {
url.hostname = "example.s3.amazonaws.com";
} else {
url.hostname = "example.storage.googleapis.com";
}
let newRequest = new Request(url, request);
return fetch(newRequest, {
cf: { cacheKey: request.url },
});
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
let url = new URL(request.url);
if (Math.random() < 0.5) {
url.hostname = "example.s3.amazonaws.com";
} else {
url.hostname = "example.storage.googleapis.com";
}
let newRequest = new Request(url, request);
return fetch(newRequest, {
cf: { cacheKey: request.url },
});
},
} satisfies ExportedHandler;
```
Workers operating on behalf of different zones cannot affect each other's cache. You can only override cache keys when making requests within your own zone (in the above example, `request.url` was the key stored), or requests to hosts that are not on Cloudflare. When making a request to another Cloudflare zone (for example, belonging to a different Cloudflare customer), that zone fully controls how its own content is cached within Cloudflare; you cannot override it.
## Override based on origin response code
```js
// Force response to be cached for 86400 seconds for 200 status
// codes, 1 second for 404, and do not cache 500 errors.
fetch(request, {
cf: { cacheTtlByStatus: { "200-299": 86400, 404: 1, "500-599": 0 } },
});
```
This option is a version of the `cacheTtl` feature which chooses a TTL based on the response's status code and does not automatically set `cacheEverything: true`. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time, and override cache directives sent by the origin. You can review [details on the `cacheTtl` feature on the Request page](/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties).
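Because `cacheTtlByStatus` does not imply `cacheEverything`, you may want to set both explicitly when you need non-cacheable content cached as well. A sketch, with a hypothetical helper name:

```js
// Pair cacheTtlByStatus with an explicit cacheEverything, since
// cacheTtlByStatus alone does not force caching of the response.
// `statusTtlInit` is a hypothetical helper for illustration.
function statusTtlInit(ttlByStatus) {
	return {
		cf: {
			cacheEverything: true,
			cacheTtlByStatus: ttlByStatus,
		},
	};
}

// Inside a Worker fetch handler:
// const response = await fetch(request,
//   statusTtlInit({ "200-299": 86400, 404: 1, "500-599": 0 }));
```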
## Customize cache behavior based on request file type
Using custom cache keys and overrides based on response code, you can write a Worker that sets the TTL based on the response status code from the origin and the request's file type.
The following example demonstrates how you might use this to cache requests for streaming media assets:
```js title="index.js"
export default {
async fetch(request) {
// Instantiate new URL to make it mutable
const newRequest = new URL(request.url);
const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`;
const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`;
// Different asset types usually call for different caching strategies.
// Media content such as audio, video, and images that is not user-generated
// rarely needs updating, so a long TTL is usually best. With HLS streaming,
// however, manifest files contain the data the player needs, so they are
// usually given short TTLs to avoid affecting playback. Grouping a caching
// strategy per asset-type category in an array of objects lets you handle
// complex media-caching needs for your application.
const cacheAssets = [
{
asset: "video",
key: customCacheKey,
regex:
/(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/,
info: 0,
ok: 31556952,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "image",
key: queryCacheKey,
regex:
/(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/,
info: 0,
ok: 3600,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "frontEnd",
key: queryCacheKey,
regex: /^.*\.(css|js)/,
info: 0,
ok: 3600,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "audio",
key: customCacheKey,
regex:
/(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/,
info: 0,
ok: 31556952,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "directPlay",
key: customCacheKey,
regex: /.*(\/Download)/,
info: 0,
ok: 31556952,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "manifest",
key: customCacheKey,
regex: /^.*\.(m3u8|mpd)/,
info: 0,
ok: 3,
redirects: 2,
clientError: 1,
serverError: 0,
},
];
const { asset, regex, ...cache } =
cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {};
const newResponse = await fetch(request, {
cf: {
cacheKey: cache.key,
polish: false,
cacheEverything: true,
cacheTtlByStatus: {
"100-199": cache.info,
"200-299": cache.ok,
"300-399": cache.redirects,
"400-499": cache.clientError,
"500-599": cache.serverError,
},
cacheTags: ["static"],
},
});
const response = new Response(newResponse.body, newResponse);
// For debugging purposes
response.headers.set("debug", JSON.stringify(cache));
return response;
},
};
```
```js title="index.js"
addEventListener("fetch", (event) => {
return event.respondWith(handleRequest(event.request));
});
async function handleRequest(request) {
// Instantiate new URL to make it mutable
const newRequest = new URL(request.url);
// Define constants to be used in the array below
const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`;
const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`;
// Set all variables needed to manipulate Cloudflare's cache using the
// fetch API's `cf` object. These variables are passed into the objects below.
const cacheAssets = [
{
asset: "video",
key: customCacheKey,
regex:
/(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/,
info: 0,
ok: 31556952,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "image",
key: queryCacheKey,
regex:
/(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/,
info: 0,
ok: 3600,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "frontEnd",
key: queryCacheKey,
regex: /^.*\.(css|js)/,
info: 0,
ok: 3600,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "audio",
key: customCacheKey,
regex:
/(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/,
info: 0,
ok: 31556952,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "directPlay",
key: customCacheKey,
regex: /.*(\/Download)/,
info: 0,
ok: 31556952,
redirects: 30,
clientError: 10,
serverError: 0,
},
{
asset: "manifest",
key: customCacheKey,
regex: /^.*\.(m3u8|mpd)/,
info: 0,
ok: 3,
redirects: 2,
clientError: 1,
serverError: 0,
},
];
// The `.find` method locates the first element of the `cacheAssets` array
// whose `regex` matches the request path (via `.match`), since the array
// covers many media types. To cache more types, add entries to the array.
// Refer to https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find
// for more information.
const { asset, regex, ...cache } =
cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {};
const newResponse = await fetch(request, {
cf: {
cacheKey: cache.key,
polish: false,
cacheEverything: true,
cacheTtlByStatus: {
"100-199": cache.info,
"200-299": cache.ok,
"300-399": cache.redirects,
"400-499": cache.clientError,
"500-599": cache.serverError,
},
cacheTags: ["static"],
},
});
const response = new Response(newResponse.body, newResponse);
// For debugging purposes
response.headers.set("debug", JSON.stringify(cache));
return response;
}
```
## Using the HTTP Cache API
The `cache` mode can be set in `fetch` options.
Currently, Workers only supports the `no-store` mode for controlling the cache.
When `no-store` is supplied, the cache is bypassed on the way to the origin and the request is not cacheable.
```js
fetch(request, { cache: "no-store" });
```
---
# Conditional response
URL: https://developers.cloudflare.com/workers/examples/conditional-response/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"];
// Return a new Response based on a URL's hostname
const url = new URL(request.url);
if (BLOCKED_HOSTNAMES.includes(url.hostname)) {
return new Response("Blocked Host", { status: 403 });
}
// Block paths ending in .doc or .xml based on the URL's file extension
const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/);
if (forbiddenExtRegExp.test(url.pathname)) {
return new Response("Blocked Extension", { status: 403 });
}
// On HTTP method
if (request.method === "POST") {
return new Response("Response for POST");
}
// On User Agent
const userAgent = request.headers.get("User-Agent") || "";
if (userAgent.includes("bot")) {
return new Response("Block User Agent containing bot", { status: 403 });
}
// On Client's IP address
const clientIP = request.headers.get("CF-Connecting-IP");
if (clientIP === "1.2.3.4") {
return new Response("Block the IP 1.2.3.4", { status: 403 });
}
// On ASN
if (request.cf && request.cf.asn == 64512) {
return new Response("Block the ASN 64512 response");
}
// On Device Type
// Requires Enterprise "CF-Device-Type Header" zone setting or
// Page Rule with "Cache By Device Type" setting applied.
const device = request.headers.get("CF-Device-Type");
if (device === "mobile") {
return Response.redirect("https://mobile.example.com");
}
console.error(
"Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker",
);
return fetch(request);
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"];
// Return a new Response based on a URL's hostname
const url = new URL(request.url);
if (BLOCKED_HOSTNAMES.includes(url.hostname)) {
return new Response("Blocked Host", { status: 403 });
}
// Block paths ending in .doc or .xml based on the URL's file extension
const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/);
if (forbiddenExtRegExp.test(url.pathname)) {
return new Response("Blocked Extension", { status: 403 });
}
// On HTTP method
if (request.method === "POST") {
return new Response("Response for POST");
}
// On User Agent
const userAgent = request.headers.get("User-Agent") || "";
if (userAgent.includes("bot")) {
return new Response("Block User Agent containing bot", { status: 403 });
}
// On Client's IP address
const clientIP = request.headers.get("CF-Connecting-IP");
if (clientIP === "1.2.3.4") {
return new Response("Block the IP 1.2.3.4", { status: 403 });
}
// On ASN
if (request.cf && request.cf.asn == 64512) {
return new Response("Block the ASN 64512 response");
}
// On Device Type
// Requires Enterprise "CF-Device-Type Header" zone setting or
// Page Rule with "Cache By Device Type" setting applied.
const device = request.headers.get("CF-Device-Type");
if (device === "mobile") {
return Response.redirect("https://mobile.example.com");
}
console.error(
"Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker",
);
return fetch(request);
},
} satisfies ExportedHandler;
```
```py
import re
from js import Response, URL, fetch

async def on_fetch(request):
    blocked_hostnames = ["nope.mywebsite.com", "bye.website.com"]
    url = URL.new(request.url)
    # Block on hostname
    if url.hostname in blocked_hostnames:
        return Response.new("Blocked Host", status=403)
    # On paths ending in .doc or .xml
    if re.search(r'\.(doc|xml)$', url.pathname):
        return Response.new("Blocked Extension", status=403)
    # On HTTP method
    if "POST" in request.method:
        return Response.new("Response for POST")
    # On User Agent
    user_agent = request.headers["User-Agent"] or ""
    if "bot" in user_agent:
        return Response.new("Block User Agent containing bot", status=403)
    # On Client's IP address
    client_ip = request.headers["CF-Connecting-IP"]
    if client_ip == "1.2.3.4":
        return Response.new("Block the IP 1.2.3.4", status=403)
    # On ASN
    if request.cf and request.cf.asn == 64512:
        return Response.new("Block the ASN 64512 response")
    # On Device Type
    # Requires Enterprise "CF-Device-Type Header" zone setting or
    # Page Rule with "Cache By Device Type" setting applied.
    device = request.headers["CF-Device-Type"]
    if device == "mobile":
        return Response.redirect("https://mobile.example.com")
    return fetch(request)
```
---
# CORS header proxy
URL: https://developers.cloudflare.com/workers/examples/cors-header-proxy/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",
"Access-Control-Max-Age": "86400",
};
// The URL for the remote third party API you want to fetch from
// but does not implement CORS
const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi";
// The endpoint you want the CORS reverse proxy to be on
const PROXY_ENDPOINT = "/corsproxy/";
// The rest of this snippet for the demo page
function rawHtmlResponse(html) {
return new Response(html, {
headers: {
"content-type": "text/html;charset=UTF-8",
},
});
}
const DEMO_PAGE = `
Waiting
`;
async function handleRequest(request) {
const url = new URL(request.url);
let apiUrl = url.searchParams.get("apiurl");
if (apiUrl == null) {
apiUrl = API_URL;
}
// Rewrite request to point to API URL. This also makes the request mutable
// so you can add the correct Origin header to make the API server think
// that this request is not cross-site.
request = new Request(apiUrl, request);
request.headers.set("Origin", new URL(apiUrl).origin);
let response = await fetch(request);
// Recreate the response so you can modify the headers
response = new Response(response.body, response);
// Set CORS headers
response.headers.set("Access-Control-Allow-Origin", url.origin);
// Append to/Add Vary header so browser will cache response correctly
response.headers.append("Vary", "Origin");
return response;
}
async function handleOptions(request) {
if (
request.headers.get("Origin") !== null &&
request.headers.get("Access-Control-Request-Method") !== null &&
request.headers.get("Access-Control-Request-Headers") !== null
) {
// Handle CORS preflight requests.
return new Response(null, {
headers: {
...corsHeaders,
"Access-Control-Allow-Headers": request.headers.get(
"Access-Control-Request-Headers",
),
},
});
} else {
// Handle standard OPTIONS request.
return new Response(null, {
headers: {
Allow: "GET, HEAD, POST, OPTIONS",
},
});
}
}
const url = new URL(request.url);
if (url.pathname.startsWith(PROXY_ENDPOINT)) {
if (request.method === "OPTIONS") {
// Handle CORS preflight requests
return handleOptions(request);
} else if (
request.method === "GET" ||
request.method === "HEAD" ||
request.method === "POST"
) {
// Handle requests to the API server
return handleRequest(request);
} else {
return new Response(null, {
status: 405,
statusText: "Method Not Allowed",
});
}
} else {
return rawHtmlResponse(DEMO_PAGE);
}
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",
"Access-Control-Max-Age": "86400",
};
// The URL for the remote third party API you want to fetch from
// but does not implement CORS
const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi";
// The endpoint you want the CORS reverse proxy to be on
const PROXY_ENDPOINT = "/corsproxy/";
// The rest of this snippet for the demo page
function rawHtmlResponse(html) {
return new Response(html, {
headers: {
"content-type": "text/html;charset=UTF-8",
},
});
}
const DEMO_PAGE = `
Waiting
`;
async function handleRequest(request) {
const url = new URL(request.url);
let apiUrl = url.searchParams.get("apiurl");
if (apiUrl == null) {
apiUrl = API_URL;
}
// Rewrite request to point to API URL. This also makes the request mutable
// so you can add the correct Origin header to make the API server think
// that this request is not cross-site.
request = new Request(apiUrl, request);
request.headers.set("Origin", new URL(apiUrl).origin);
let response = await fetch(request);
// Recreate the response so you can modify the headers
response = new Response(response.body, response);
// Set CORS headers
response.headers.set("Access-Control-Allow-Origin", url.origin);
// Append to/Add Vary header so browser will cache response correctly
response.headers.append("Vary", "Origin");
return response;
}
async function handleOptions(request) {
if (
request.headers.get("Origin") !== null &&
request.headers.get("Access-Control-Request-Method") !== null &&
request.headers.get("Access-Control-Request-Headers") !== null
) {
// Handle CORS preflight requests.
return new Response(null, {
headers: {
...corsHeaders,
"Access-Control-Allow-Headers": request.headers.get(
"Access-Control-Request-Headers",
),
},
});
} else {
// Handle standard OPTIONS request.
return new Response(null, {
headers: {
Allow: "GET, HEAD, POST, OPTIONS",
},
});
}
}
const url = new URL(request.url);
if (url.pathname.startsWith(PROXY_ENDPOINT)) {
if (request.method === "OPTIONS") {
// Handle CORS preflight requests
return handleOptions(request);
} else if (
request.method === "GET" ||
request.method === "HEAD" ||
request.method === "POST"
) {
// Handle requests to the API server
return handleRequest(request);
} else {
return new Response(null, {
status: 405,
statusText: "Method Not Allowed",
});
}
} else {
return rawHtmlResponse(DEMO_PAGE);
}
},
} satisfies ExportedHandler;
```
```py
from pyodide.ffi import to_js as _to_js
from js import Response, URL, fetch, Object, Request

def to_js(x):
    return _to_js(x, dict_converter=Object.fromEntries)

async def on_fetch(request):
    cors_headers = {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",
        "Access-Control-Max-Age": "86400",
    }
    api_url = "https://examples.cloudflareworkers.com/demos/demoapi"
    proxy_endpoint = "/corsproxy/"
    def raw_html_response(html):
        return Response.new(html, headers=to_js({"content-type": "text/html;charset=UTF-8"}))
    # Minimal placeholder for the demo page
    demo_page = "<html><body><p>Waiting</p></body></html>"
    async def handle_request(req):
        url = URL.new(req.url)
        api_target = url.searchParams.get("apiurl") or api_url
        # Rewrite the request to point to the API URL. This also makes the
        # request mutable, so the Origin header can be set to make the API
        # server think this request is not cross-site.
        req = Request.new(api_target, req)
        req.headers["Origin"] = URL.new(api_target).origin
        response = await fetch(req)
        # Recreate the response so the headers can be modified
        response = Response.new(response.body, response)
        # Set CORS headers
        response.headers["Access-Control-Allow-Origin"] = url.origin
        # Append to/Add Vary header so browser will cache response correctly
        response.headers.append("Vary", "Origin")
        return response
    def handle_options(req):
        if req.headers["Origin"] and req.headers["Access-Control-Request-Method"] and req.headers["Access-Control-Request-Headers"]:
            # Handle CORS preflight requests
            headers = dict(cors_headers, **{"Access-Control-Allow-Headers": req.headers["Access-Control-Request-Headers"]})
            return Response.new(None, headers=to_js(headers))
        # Handle standard OPTIONS request
        return Response.new(None, headers=to_js({"Allow": "GET, HEAD, POST, OPTIONS"}))
    url = URL.new(request.url)
    if url.pathname.startswith(proxy_endpoint):
        if request.method == "OPTIONS":
            return handle_options(request)
        if request.method in ("GET", "HEAD", "POST"):
            return await handle_request(request)
        return Response.new(None, status=405, statusText="Method Not Allowed")
    return raw_html_response(demo_page)
```
---
# Hot-link protection
URL: https://developers.cloudflare.com/workers/examples/hot-link-protection/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
const PROTECTED_TYPE = "image/";
// Fetch the original request
const response = await fetch(request);
// If it's an image, engage hotlink protection based on the
// Referer header.
const referer = request.headers.get("Referer");
const contentType = response.headers.get("Content-Type") || "";
if (referer && contentType.startsWith(PROTECTED_TYPE)) {
// If the hostnames don't match, it's a hotlink
if (new URL(referer).hostname !== new URL(request.url).hostname) {
// Redirect the user to your website
return Response.redirect(HOMEPAGE_URL, 302);
}
}
// Everything is fine, return the response normally.
return response;
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
const PROTECTED_TYPE = "image/";
// Fetch the original request
const response = await fetch(request);
// If it's an image, engage hotlink protection based on the
// Referer header.
const referer = request.headers.get("Referer");
const contentType = response.headers.get("Content-Type") || "";
if (referer && contentType.startsWith(PROTECTED_TYPE)) {
// If the hostnames don't match, it's a hotlink
if (new URL(referer).hostname !== new URL(request.url).hostname) {
// Redirect the user to your website
return Response.redirect(HOMEPAGE_URL, 302);
}
}
// Everything is fine, return the response normally.
return response;
},
} satisfies ExportedHandler;
```
```py
from js import Response, URL, fetch

async def on_fetch(request):
    homepage_url = "https://tutorial.cloudflareworkers.com/"
    protected_type = "image/"
    # Fetch the original request
    response = await fetch(request)
    # If it's an image, engage hotlink protection based on the referer header
    referer = request.headers["Referer"]
    content_type = response.headers["Content-Type"] or ""
    if referer and content_type.startswith(protected_type):
        # If the hostnames don't match, it's a hotlink
        if URL.new(referer).hostname != URL.new(request.url).hostname:
            # Redirect the user to your website
            return Response.redirect(homepage_url, 302)
    # Everything is fine, return the response normally
    return response
```
---
# Custom Domain with Images
URL: https://developers.cloudflare.com/workers/examples/images-workers/
import { TabItem, Tabs } from "~/components";
To serve images from a custom domain:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com).
2. Select your account > select **Workers & Pages**.
3. Select **Create application** > **Workers** > **Create Worker** and create your Worker.
4. In your Worker, select **Quick edit** and paste the following code.
```js
export default {
async fetch(request) {
// You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
const accountHash = "";
const { pathname } = new URL(request.url);
// A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
// will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
// You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
const accountHash = "";
const { pathname } = new URL(request.url);
// A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
// will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
},
} satisfies ExportedHandler;
```
```py
from js import URL, fetch

async def on_fetch(request):
    # You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    account_hash = ""
    url = URL.new(request.url)
    # A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    # will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(f'https://imagedelivery.net/{account_hash}{url.pathname}')
```
Another way to serve images from a custom domain is to use the `/cdn-cgi/imagedelivery` prefix path, which triggers the `cdn-cgi` image proxy.
Below is an example showing the hostname as a Cloudflare proxied domain under the same account as the image, followed by the prefix path and the image's `<ACCOUNT_HASH>`, `<IMAGE_ID>` and `<VARIANT_NAME>`, which can be found under **Images** in the Cloudflare dashboard.
```js
https://example.com/cdn-cgi/imagedelivery/<ACCOUNT_HASH>/<IMAGE_ID>/<VARIANT_NAME>
```
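This URL pattern can be assembled in a Worker or a build step. A small sketch (the `imageDeliveryUrl` helper and the sample values are illustrative, not part of any Cloudflare API):

```js
// Build a cdn-cgi image delivery URL for a zone proxied on the same
// account as the image. All parameter values here are placeholders.
function imageDeliveryUrl(zoneHostname, accountHash, imageId, variantName) {
	return `https://${zoneHostname}/cdn-cgi/imagedelivery/${accountHash}/${imageId}/${variantName}`;
}

// Example with placeholder values:
// imageDeliveryUrl("example.com", "ZWd9g1K7eljCn_KDTu_MWA", "<IMAGE_ID>", "public")
```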
---
# Examples
URL: https://developers.cloudflare.com/workers/examples/
import { GlossaryTooltip, ListExamples } from "~/components";
:::note
[Explore our community-written tutorials contributed through the Developer Spotlight program.](/developer-spotlight/)
:::
Explore the following examples for Workers.
---
# Logging headers to console
URL: https://developers.cloudflare.com/workers/examples/logging-headers/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
console.log(new Map(request.headers));
return new Response("Hello world");
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
console.log(new Map(request.headers));
return new Response("Hello world");
},
} satisfies ExportedHandler;
```
```py
from js import Response
async def on_fetch(request):
print(dict(request.headers))
return Response.new('Hello world')
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<Response> {
console_log!("{:?}", req.headers());
Response::ok("hello world")
}
```
---
## Console-logging headers
Use a `Map` if you need to log a `Headers` object to the console:
```js
console.log(new Map(request.headers));
```
Use the spread operator (`...`) if you need to quickly stringify a `Headers` object:
```js
let requestHeaders = JSON.stringify([...request.headers]);
```
Use `Object.fromEntries` to convert the headers to an object:
```js
let requestHeaders = Object.fromEntries(request.headers);
```
### The problem
When debugging Workers, you will often want to examine the headers on a request or response. A common mistake is to try to log headers to the developer console via code like this:
```js
console.log(request.headers);
```
Or this:
```js
console.log(`Request headers: ${JSON.stringify(request.headers)}`);
```
Both attempts result in what appears to be an empty object — the string `"{}"` — even though calling `request.headers.has("Your-Header-Name")` might return true. This is the same behavior that browsers implement.
The reason this happens is because [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers) objects do not store headers in enumerable JavaScript properties, so the developer console and JSON stringifier do not know how to read the names and values of the headers. It is not actually an empty object, but rather an opaque object.
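A minimal illustration of this opacity, runnable in any runtime that provides the Fetch API's `Headers` class globally (for example, Node.js 18+):

```javascript
// Headers appear "empty" to JSON.stringify, yet still contain entries.
const headers = new Headers({ "X-Example": "test" });

console.log(JSON.stringify(headers)); // "{}"
console.log(headers.has("X-Example")); // true
```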
`Headers` objects are iterable, which you can take advantage of to develop a couple of quick one-liners for debug-printing headers.
### Pass headers through a Map
The first common idiom for making Headers `console.log()`-friendly is to construct a `Map` object from the `Headers` object and log the `Map` object.
```js
console.log(new Map(request.headers));
```
This works because:
- `Map` objects can be constructed from iterables, like `Headers`.
- The `Map` object does store its entries in enumerable JavaScript properties, so the developer console can see into it.
### Spread headers into an array
The `Map` approach works for calls to `console.log()`. If you need to stringify your headers, you will discover that stringifying a `Map` yields nothing more than `[object Map]`.
Even though a `Map` stores its data in enumerable properties, those properties are [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol)-keyed. Because of this, `JSON.stringify()` will [ignore Symbol-keyed properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol#symbols_and_json.stringify) and you will receive an empty `{}`.
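A quick check confirms both behaviors described above:

```javascript
// Stringifying a Map directly yields nothing useful.
const map = new Map([["accept", "text/html"]]);

console.log(String(map)); // "[object Map]"
console.log(JSON.stringify(map)); // "{}"
```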
Instead, you can take advantage of the iterability of the `Headers` object in a new way by applying the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) (`...`) to it.
```js
let requestHeaders = JSON.stringify([...request.headers], null, 2);
console.log(`Request headers: ${requestHeaders}`);
```
### Convert headers into an object with Object.fromEntries (ES2019)
ES2019 provides [`Object.fromEntries`](https://github.com/tc39/proposal-object-from-entries), which converts the headers into an object:
```js
let headersObject = Object.fromEntries(request.headers);
let requestHeaders = JSON.stringify(headersObject, null, 2);
console.log(`Request headers: ${requestHeaders}`);
```
This results in something like:
```js
Request headers: {
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"accept-encoding": "gzip",
"accept-language": "en-US,en;q=0.9",
"cf-ipcountry": "US",
// ...
}
```
---
# Modify response
URL: https://developers.cloudflare.com/workers/examples/modify-response/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
/**
* @param {string} headerNameSrc Header to get the new value from
* @param {string} headerNameDst Header to set based off of value in src
*/
const headerNameSrc = "foo"; //"Orig-Header"
const headerNameDst = "Last-Modified";
/**
* Response properties are immutable. To change them, construct a new
* Response and pass modified status or statusText in the ResponseInit
* object. Response headers can be modified through the headers `set` method.
*/
const originalResponse = await fetch(request);
// Change status and statusText, but preserve body and headers
let response = new Response(originalResponse.body, {
status: 500,
statusText: "some message",
headers: originalResponse.headers,
});
// Change response body by adding the foo prop
const originalBody = await originalResponse.json();
const body = JSON.stringify({ foo: "bar", ...originalBody });
response = new Response(body, response);
// Add a header using set method
response.headers.set("foo", "bar");
// Set destination header to the value of the source header
const src = response.headers.get(headerNameSrc);
if (src != null) {
response.headers.set(headerNameDst, src);
console.log(
`Response header "${headerNameDst}" was set to "${response.headers.get(
headerNameDst,
)}"`,
);
}
return response;
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
/**
* @param {string} headerNameSrc Header to get the new value from
* @param {string} headerNameDst Header to set based off of value in src
*/
const headerNameSrc = "foo"; //"Orig-Header"
const headerNameDst = "Last-Modified";
/**
* Response properties are immutable. To change them, construct a new
* Response and pass modified status or statusText in the ResponseInit
* object. Response headers can be modified through the headers `set` method.
*/
const originalResponse = await fetch(request);
// Change status and statusText, but preserve body and headers
let response = new Response(originalResponse.body, {
status: 500,
statusText: "some message",
headers: originalResponse.headers,
});
// Change response body by adding the foo prop
const originalBody = await originalResponse.json();
const body = JSON.stringify({ foo: "bar", ...originalBody });
response = new Response(body, response);
// Add a header using set method
response.headers.set("foo", "bar");
// Set destination header to the value of the source header
const src = response.headers.get(headerNameSrc);
if (src != null) {
response.headers.set(headerNameDst, src);
console.log(
`Response header "${headerNameDst}" was set to "${response.headers.get(
headerNameDst,
)}"`,
);
}
return response;
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch, JSON
async def on_fetch(request):
header_name_src = "foo" # Header to get the new value from
header_name_dst = "Last-Modified" # Header to set based off of value in src
# Response properties are immutable. To change them, construct a new response
original_response = await fetch(request)
# Change status and statusText, but preserve body and headers
response = Response.new(original_response.body, status=500, statusText="some message", headers=original_response.headers)
# Change response body by adding the foo prop
original_body = await original_response.json()
original_body.foo = "bar"
response = Response.new(JSON.stringify(original_body), response)
# Add a new header
response.headers["foo"] = "bar"
# Set destination header to the value of the source header
src = response.headers[header_name_src]
if src is not None:
response.headers[header_name_dst] = src
print(f'Response header {header_name_dst} was set to {response.headers[header_name_dst]}')
return response
```
---
# Modify request property
URL: https://developers.cloudflare.com/workers/examples/modify-request-property/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
/**
* Example someHost is set up to return raw JSON
* @param {string} someUrl the URL to send the request to; since we are also setting the hostname, only the path is applied
* @param {string} someHost the host the request will resolve to
*/
const someHost = "example.com";
const someUrl = "https://foo.example.com/api.js";
/**
* The best practice is to only assign new RequestInit properties
* on the request object using either a method or the constructor
*/
const newRequestInit = {
// Change method
method: "POST",
// Change body
body: JSON.stringify({ bar: "foo" }),
// Change the redirect mode.
redirect: "follow",
// Change headers, note this method will erase existing headers
headers: {
"Content-Type": "application/json",
},
// Change a Cloudflare feature on the outbound response
cf: { apps: false },
};
// Change just the host
const url = new URL(someUrl);
url.hostname = someHost;
// Best practice is to always use the original request to construct the new request
// to clone all the attributes. Applying the URL also requires a constructor
// since once a Request has been constructed, its URL is immutable.
const newRequest = new Request(
url.toString(),
new Request(request, newRequestInit),
);
// Set headers using method
newRequest.headers.set("X-Example", "bar");
newRequest.headers.set("Content-Type", "application/json");
try {
return await fetch(newRequest);
} catch (e) {
return new Response(JSON.stringify({ error: e.message }), {
status: 500,
});
}
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
/**
* Example someHost is set up to return raw JSON
* @param {string} someUrl the URL to send the request to; since we are also setting the hostname, only the path is applied
* @param {string} someHost the host the request will resolve to
*/
const someHost = "example.com";
const someUrl = "https://foo.example.com/api.js";
/**
* The best practice is to only assign new RequestInit properties
* on the request object using either a method or the constructor
*/
const newRequestInit = {
// Change method
method: "POST",
// Change body
body: JSON.stringify({ bar: "foo" }),
// Change the redirect mode.
redirect: "follow",
// Change headers, note this method will erase existing headers
headers: {
"Content-Type": "application/json",
},
// Change a Cloudflare feature on the outbound response
cf: { apps: false },
};
// Change just the host
const url = new URL(someUrl);
url.hostname = someHost;
// Best practice is to always use the original request to construct the new request
// to clone all the attributes. Applying the URL also requires a constructor
// since once a Request has been constructed, its URL is immutable.
const newRequest = new Request(
url.toString(),
new Request(request, newRequestInit),
);
// Set headers using method
newRequest.headers.set("X-Example", "bar");
newRequest.headers.set("Content-Type", "application/json");
try {
return await fetch(newRequest);
} catch (e) {
return new Response(JSON.stringify({ error: e.message }), {
status: 500,
});
}
},
} satisfies ExportedHandler;
```
```py
import json
from pyodide.ffi import to_js as _to_js
from js import Object, URL, Request, fetch, Response
def to_js(obj):
return _to_js(obj, dict_converter=Object.fromEntries)
async def on_fetch(request):
some_host = "example.com"
some_url = "https://foo.example.com/api.js"
# The best practice is to only assign new_request_init properties
# on the request object using either a method or the constructor
new_request_init = {
"method": "POST", # Change method
"body": json.dumps({ "bar": "foo" }), # Change body
"redirect": "follow", # Change the redirect mode
# Change headers, note this method will erase existing headers
"headers": {
"Content-Type": "application/json",
},
# Change a Cloudflare feature on the outbound response
"cf": { "apps": False },
}
# Change just the host
url = URL.new(some_url)
url.hostname = some_host
# Best practice is to always use the original request to construct the new request
# to clone all the attributes. Applying the URL also requires a constructor
# since once a Request has been constructed, its URL is immutable.
org_request = Request.new(request, new_request_init)
new_request = Request.new(url.toString(), org_request)
new_request.headers["X-Example"] = "bar"
new_request.headers["Content-Type"] = "application/json"
try:
return await fetch(new_request)
except Exception as e:
return Response.new(json.dumps({"error": str(e)}), status=500)
```
---
# Multiple Cron Triggers
URL: https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/
import { TabItem, Tabs } from "~/components";
```js
export default {
async scheduled(event, env, ctx) {
// Write code for updating your API
switch (event.cron) {
case "*/3 * * * *":
// Every three minutes
await updateAPI();
break;
case "*/10 * * * *":
// Every ten minutes
await updateAPI2();
break;
case "*/45 * * * *":
// Every forty-five minutes
await updateAPI3();
break;
}
console.log("cron processed");
},
};
```
```ts
interface Env {}
export default {
async scheduled(
controller: ScheduledController,
env: Env,
ctx: ExecutionContext,
) {
// Write code for updating your API
switch (controller.cron) {
case "*/3 * * * *":
// Every three minutes
await updateAPI();
break;
case "*/10 * * * *":
// Every ten minutes
await updateAPI2();
break;
case "*/45 * * * *":
// Every forty-five minutes
await updateAPI3();
break;
}
console.log("cron processed");
},
};
```
## Test Cron Triggers using Wrangler
The recommended way of testing Cron Triggers is using Wrangler.
Cron Triggers can be tested using Wrangler by passing the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This exposes a `/__scheduled` route which can be used for testing with an HTTP request. To simulate different cron patterns, pass a `cron` query parameter.
```sh
npx wrangler dev --test-scheduled
curl "http://localhost:8787/__scheduled?cron=*%2F3+*+*+*+*"
```
---
# Stream OpenAI API Responses
URL: https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/
In order to run this code, you must install the OpenAI SDK by running `npm i openai`.
:::note
For analytics, caching, rate limiting, and more, you can also send requests like this through Cloudflare's [AI Gateway](/ai-gateway/providers/openai/).
:::
```ts
import OpenAI from "openai";
export default {
async fetch(request, env, ctx): Promise<Response> {
const openai = new OpenAI({
apiKey: env.OPENAI_API_KEY,
});
// Create a TransformStream to handle streaming data
let { readable, writable } = new TransformStream();
let writer = writable.getWriter();
const textEncoder = new TextEncoder();
ctx.waitUntil(
(async () => {
const stream = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Tell me a story" }],
stream: true,
});
// loop over the data as it is streamed and write to the writeable
for await (const part of stream) {
writer.write(
textEncoder.encode(part.choices[0]?.delta?.content || ""),
);
}
writer.close();
})(),
);
// Send the readable back to the browser
return new Response(readable);
},
} satisfies ExportedHandler;
```
---
# Post JSON
URL: https://developers.cloudflare.com/workers/examples/post-json/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
/**
* Example someHost is set up to take in a JSON request
* Replace url with the host you wish to send requests to
* @param {string} url the URL to send the request to
* @param {BodyInit} body the JSON data to send in the request
*/
const someHost = "https://examples.cloudflareworkers.com/demos";
const url = someHost + "/requests/json";
const body = {
results: ["default data to send"],
errors: null,
msg: "I sent this to the fetch",
};
/**
* gatherResponse awaits and returns a response body as a string.
* Use await gatherResponse(..) in an async function to get the response body
* @param {Response} response
*/
async function gatherResponse(response) {
const { headers } = response;
const contentType = headers.get("content-type") || "";
if (contentType.includes("application/json")) {
return JSON.stringify(await response.json());
} else if (contentType.includes("application/text")) {
return response.text();
} else if (contentType.includes("text/html")) {
return response.text();
} else {
return response.text();
}
}
const init = {
body: JSON.stringify(body),
method: "POST",
headers: {
"content-type": "application/json;charset=UTF-8",
},
};
const response = await fetch(url, init);
const results = await gatherResponse(response);
return new Response(results, init);
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
/**
* Example someHost is set up to take in a JSON request
* Replace url with the host you wish to send requests to
* @param {string} url the URL to send the request to
* @param {BodyInit} body the JSON data to send in the request
*/
const someHost = "https://examples.cloudflareworkers.com/demos";
const url = someHost + "/requests/json";
const body = {
results: ["default data to send"],
errors: null,
msg: "I sent this to the fetch",
};
/**
* gatherResponse awaits and returns a response body as a string.
* Use await gatherResponse(..) in an async function to get the response body
* @param {Response} response
*/
async function gatherResponse(response) {
const { headers } = response;
const contentType = headers.get("content-type") || "";
if (contentType.includes("application/json")) {
return JSON.stringify(await response.json());
} else if (contentType.includes("application/text")) {
return response.text();
} else if (contentType.includes("text/html")) {
return response.text();
} else {
return response.text();
}
}
const init = {
body: JSON.stringify(body),
method: "POST",
headers: {
"content-type": "application/json;charset=UTF-8",
},
};
const response = await fetch(url, init);
const results = await gatherResponse(response);
return new Response(results, init);
},
} satisfies ExportedHandler;
```
```py
import json
from pyodide.ffi import to_js as _to_js
from js import Object, fetch, Response, Headers
def to_js(obj):
return _to_js(obj, dict_converter=Object.fromEntries)
# gather_response returns both content-type & response body as a string
async def gather_response(response):
headers = response.headers
content_type = headers["content-type"] or ""
if "application/json" in content_type:
return (content_type, json.dumps(dict(await response.json())))
return (content_type, await response.text())
async def on_fetch(_request):
url = "https://jsonplaceholder.typicode.com/todos/1"
body = {
"results": ["default data to send"],
"errors": None,
"msg": "I sent this to the fetch",
}
options = {
"body": json.dumps(body),
"method": "POST",
"headers": {
"content-type": "application/json;charset=UTF-8",
},
}
response = await fetch(url, to_js(options))
content_type, result = await gather_response(response)
headers = Headers.new({"content-type": content_type}.items())
return Response.new(result, headers=headers)
```
---
# Using timingSafeEqual
URL: https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/
import { TabItem, Tabs } from "~/components";
The [`crypto.subtle.timingSafeEqual`](/workers/runtime-apis/web-crypto/#timingsafeequal) function compares two values using a constant-time algorithm. The time taken is independent of the contents of the values.
When strings are compared using the equality operator (`==` or `===`), the comparison ends at the first mismatched character. By using `timingSafeEqual`, an attacker is unable to use timing to discover at which point the two strings differ.
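To illustrate the idea, here is a minimal sketch of a constant-time comparison (the helper name is hypothetical; this is not the Workers implementation). It inspects every byte regardless of where the first mismatch occurs:

```javascript
// Hypothetical sketch: accumulate differences across all bytes instead of
// returning at the first mismatch, so timing does not leak the position.
function constantTimeEqual(a, b) {
  if (a.byteLength !== b.byteLength) return false;
  let diff = 0;
  for (let i = 0; i < a.byteLength; i++) {
    // OR the XOR of each byte pair; any mismatch leaves a nonzero bit.
    diff |= a[i] ^ b[i];
  }
  return diff === 0;
}

const enc = new TextEncoder();
console.log(constantTimeEqual(enc.encode("secret"), enc.encode("secret"))); // true
console.log(constantTimeEqual(enc.encode("secret"), enc.encode("seeeet"))); // false
```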
The `timingSafeEqual` function takes two `ArrayBuffer` or `TypedArray` values to compare. These buffers must be of equal length, otherwise an exception is thrown.
Note that this function is not constant time with respect to the length of the parameters and also does not guarantee constant time for the surrounding code.
Handling of secrets should be taken with care to not introduce timing side channels.
In order to compare two strings, you must use the [`TextEncoder`](/workers/runtime-apis/encoding/#textencoder) API.
```ts
interface Environment {
MY_SECRET_VALUE?: string;
}
export default {
async fetch(req: Request, env: Environment) {
if (!env.MY_SECRET_VALUE) {
return new Response("Missing secret binding", { status: 500 });
}
const authToken = req.headers.get("Authorization") || "";
if (authToken.length !== env.MY_SECRET_VALUE.length) {
return new Response("Unauthorized", { status: 401 });
}
const encoder = new TextEncoder();
const a = encoder.encode(authToken);
const b = encoder.encode(env.MY_SECRET_VALUE);
if (a.byteLength !== b.byteLength) {
return new Response("Unauthorized", { status: 401 });
}
if (!crypto.subtle.timingSafeEqual(a, b)) {
return new Response("Unauthorized", { status: 401 });
}
return new Response("Welcome!");
},
};
```
```py
from js import Response, TextEncoder, crypto
async def on_fetch(request, env):
    auth_token = request.headers["Authorization"] or ""
    secret = env.MY_SECRET_VALUE
    if secret is None:
        return Response.new("Missing secret binding", status=500)
    if len(auth_token) != len(secret):
        return Response.new("Unauthorized", status=401)
    encoder = TextEncoder.new()
    a = encoder.encode(auth_token)
    b = encoder.encode(secret)
    if a.byteLength != b.byteLength:
        return Response.new("Unauthorized", status=401)
    if not crypto.subtle.timingSafeEqual(a, b):
        return Response.new("Unauthorized", status=401)
    return Response.new("Welcome!")
```
---
# Read POST
URL: https://developers.cloudflare.com/workers/examples/read-post/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
/**
* rawHtmlResponse returns HTML inputted directly
* into the worker script
* @param {string} html
*/
function rawHtmlResponse(html) {
return new Response(html, {
headers: {
"content-type": "text/html;charset=UTF-8",
},
});
}
/**
* readRequestBody reads in the incoming request body
* Use await readRequestBody(..) in an async function to get the string
* @param {Request} request the incoming request to read from
*/
async function readRequestBody(request) {
const contentType = request.headers.get("content-type");
if (contentType.includes("application/json")) {
return JSON.stringify(await request.json());
} else if (contentType.includes("application/text")) {
return request.text();
} else if (contentType.includes("text/html")) {
return request.text();
} else if (contentType.includes("form")) {
const formData = await request.formData();
const body = {};
for (const entry of formData.entries()) {
body[entry[0]] = entry[1];
}
return JSON.stringify(body);
} else {
// Perhaps some other type of data was submitted in the form
// like an image, or some other binary data.
return "a file";
}
}
const { url } = request;
if (url.includes("form")) {
return rawHtmlResponse(someForm);
}
if (request.method === "POST") {
const reqBody = await readRequestBody(request);
const retBody = `The request body sent in was ${reqBody}`;
return new Response(retBody);
} else if (request.method === "GET") {
return new Response("The request was a GET");
}
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
/**
* rawHtmlResponse returns HTML inputted directly
* into the worker script
* @param {string} html
*/
function rawHtmlResponse(html) {
return new Response(html, {
headers: {
"content-type": "text/html;charset=UTF-8",
},
});
}
/**
* readRequestBody reads in the incoming request body
* Use await readRequestBody(..) in an async function to get the string
* @param {Request} request the incoming request to read from
*/
async function readRequestBody(request: Request) {
const contentType = request.headers.get("content-type");
if (contentType.includes("application/json")) {
return JSON.stringify(await request.json());
} else if (contentType.includes("application/text")) {
return request.text();
} else if (contentType.includes("text/html")) {
return request.text();
} else if (contentType.includes("form")) {
const formData = await request.formData();
const body = {};
for (const entry of formData.entries()) {
body[entry[0]] = entry[1];
}
return JSON.stringify(body);
} else {
// Perhaps some other type of data was submitted in the form
// like an image, or some other binary data.
return "a file";
}
}
const { url } = request;
if (url.includes("form")) {
return rawHtmlResponse(someForm);
}
if (request.method === "POST") {
const reqBody = await readRequestBody(request);
const retBody = `The request body sent in was ${reqBody}`;
return new Response(retBody);
} else if (request.method === "GET") {
return new Response("The request was a GET");
}
},
} satisfies ExportedHandler;
```
```py
from js import Object, Response, Headers, JSON
async def read_request_body(request):
headers = request.headers
content_type = headers["content-type"] or ""
if "application/json" in content_type:
return JSON.stringify(await request.json())
if "form" in content_type:
form = await request.formData()
data = Object.fromEntries(form.entries())
return JSON.stringify(data)
return await request.text()
async def on_fetch(request):
def raw_html_response(html):
headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items())
return Response.new(html, headers=headers)
if "form" in request.url:
return raw_html_response("")
if "POST" in request.method:
req_body = await read_request_body(request)
ret_body = f"The request body sent in was {req_body}"
return Response.new(ret_body)
return Response.new("The request was not POST")
```
```rs
use serde::{Deserialize, Serialize};
use worker::*;
fn raw_html_response(html: &str) -> Result<Response> {
Response::from_html(html)
}
#[derive(Deserialize, Serialize, Debug)]
struct Payload {
msg: String,
}
async fn read_request_body(mut req: Request) -> String {
let ctype = req.headers().get("content-type").unwrap().unwrap();
match ctype.as_str() {
"application/json" => format!("{:?}", req.json::<Payload>().await.unwrap()),
"text/html" => req.text().await.unwrap(),
"multipart/form-data" => format!("{:?}", req.form_data().await.unwrap()),
_ => String::from("a file"),
}
}
#[event(fetch)]
async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
if String::from(req.url()?).contains("form") {
return raw_html_response("some html form");
}
match req.method() {
Method::Post => {
let req_body = read_request_body(req).await;
Response::ok(format!("The request body sent in was {}", req_body))
}
_ => Response::ok(format!("The result was a {:?}", req.method())),
}
}
```
---
# Redirect
URL: https://developers.cloudflare.com/workers/examples/redirect/
import { Render, TabItem, Tabs } from "~/components";
## Redirect all requests to one URL
```ts
export default {
async fetch(request): Promise<Response> {
const destinationURL = "https://example.com";
const statusCode = 301;
return Response.redirect(destinationURL, statusCode);
},
} satisfies ExportedHandler;
```
```py
from js import Response
def on_fetch(request):
destinationURL = "https://example.com"
statusCode = 301
return Response.redirect(destinationURL, statusCode)
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
let destination_url = Url::parse("https://example.com")?;
let status_code = 301;
Response::redirect_with_status(destination_url, status_code)
}
```
## Redirect requests from one domain to another
```js
export default {
async fetch(request) {
const base = "https://example.com";
const statusCode = 301;
const url = new URL(request.url);
const { pathname, search } = url;
const destinationURL = `${base}${pathname}${search}`;
console.log(destinationURL);
return Response.redirect(destinationURL, statusCode);
},
};
```
```ts
export default {
async fetch(request): Promise<Response> {
const base = "https://example.com";
const statusCode = 301;
const url = new URL(request.url);
const { pathname, search } = url;
const destinationURL = `${base}${pathname}${search}`;
console.log(destinationURL);
return Response.redirect(destinationURL, statusCode);
},
} satisfies ExportedHandler;
```
```py
from js import Response, URL
async def on_fetch(request):
base = "https://example.com"
statusCode = 301
url = URL.new(request.url)
destinationURL = f'{base}{url.pathname}{url.search}'
print(destinationURL)
return Response.redirect(destinationURL, statusCode)
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
let mut base = Url::parse("https://example.com")?;
let status_code = 301;
let url = req.url()?;
base.set_path(url.path());
base.set_query(url.query());
console_log!("{:?}", base.to_string());
Response::redirect_with_status(base, status_code)
}
```
---
# Respond with another site
URL: https://developers.cloudflare.com/workers/examples/respond-with-another-site/
import { Render, TabItem, Tabs } from "~/components";
```ts
export default {
async fetch(request): Promise<Response> {
async function MethodNotAllowed(request) {
return new Response(`Method ${request.method} not allowed.`, {
status: 405,
headers: {
Allow: "GET",
},
});
}
// Only GET requests work with this proxy.
if (request.method !== "GET") return MethodNotAllowed(request);
return fetch(`https://example.com`);
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch, Headers
def on_fetch(request):
def method_not_allowed(request):
msg = f'Method {request.method} not allowed.'
headers = Headers.new({"Allow": "GET"}.items())
return Response.new(msg, headers=headers, status=405)
# Only GET requests work with this proxy.
if request.method != "GET":
return method_not_allowed(request)
return fetch("https://example.com")
```
---
# Return small HTML page
URL: https://developers.cloudflare.com/workers/examples/return-html/
import { Render, TabItem, Tabs } from "~/components";
```ts
export default {
async fetch(request): Promise<Response> {
const html = `<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>`;
return new Response(html, {
headers: {
"content-type": "text/html;charset=UTF-8",
},
});
},
} satisfies ExportedHandler;
```
```py
from js import Response, Headers
def on_fetch(request):
html = """<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>"""
headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items())
return Response.new(html, headers=headers)
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
let html = r#"<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>"#;
Response::from_html(html)
}
```
---
# Return JSON
URL: https://developers.cloudflare.com/workers/examples/return-json/
import { Render, TabItem, Tabs } from "~/components";
```ts
export default {
async fetch(request): Promise<Response> {
const data = {
hello: "world",
};
return Response.json(data);
},
} satisfies ExportedHandler;
```
```py
from js import Response, Headers
import json
def on_fetch(request):
data = json.dumps({"hello": "world"})
headers = Headers.new({"content-type": "application/json"}.items())
return Response.new(data, headers=headers)
```
```rs
use serde::{Deserialize, Serialize};
use worker::*;
#[derive(Deserialize, Serialize, Debug)]
struct Json {
hello: String,
}
#[event(fetch)]
async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
let data = Json {
hello: String::from("world"),
};
Response::from_json(&data)
}
```
---
# Rewrite links
URL: https://developers.cloudflare.com/workers/examples/rewrite-links/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const OLD_URL = "developer.mozilla.org";
const NEW_URL = "mynewdomain.com";
class AttributeRewriter {
constructor(attributeName) {
this.attributeName = attributeName;
}
element(element) {
const attribute = element.getAttribute(this.attributeName);
if (attribute) {
element.setAttribute(
this.attributeName,
attribute.replace(OLD_URL, NEW_URL),
);
}
}
}
const rewriter = new HTMLRewriter()
.on("a", new AttributeRewriter("href"))
.on("img", new AttributeRewriter("src"));
const res = await fetch(request);
const contentType = res.headers.get("Content-Type");
// If the response is HTML, it can be transformed with
// HTMLRewriter -- otherwise, it should pass through
    if (contentType && contentType.startsWith("text/html")) {
return rewriter.transform(res);
} else {
return res;
}
},
};
```
```ts
export default {
  async fetch(request): Promise<Response> {
const OLD_URL = "developer.mozilla.org";
const NEW_URL = "mynewdomain.com";
class AttributeRewriter {
constructor(attributeName) {
this.attributeName = attributeName;
}
element(element) {
const attribute = element.getAttribute(this.attributeName);
if (attribute) {
element.setAttribute(
this.attributeName,
attribute.replace(OLD_URL, NEW_URL),
);
}
}
}
const rewriter = new HTMLRewriter()
.on("a", new AttributeRewriter("href"))
.on("img", new AttributeRewriter("src"));
const res = await fetch(request);
const contentType = res.headers.get("Content-Type");
// If the response is HTML, it can be transformed with
// HTMLRewriter -- otherwise, it should pass through
    if (contentType && contentType.startsWith("text/html")) {
return rewriter.transform(res);
} else {
return res;
}
},
} satisfies ExportedHandler;
```
```py
from pyodide.ffi import create_proxy
from js import HTMLRewriter, fetch
async def on_fetch(request):
old_url = "developer.mozilla.org"
new_url = "mynewdomain.com"
class AttributeRewriter:
def __init__(self, attr_name):
self.attr_name = attr_name
def element(self, element):
attr = element.getAttribute(self.attr_name)
if attr:
element.setAttribute(self.attr_name, attr.replace(old_url, new_url))
href = create_proxy(AttributeRewriter("href"))
src = create_proxy(AttributeRewriter("src"))
rewriter = HTMLRewriter.new().on("a", href).on("img", src)
res = await fetch(request)
content_type = res.headers["Content-Type"]
# If the response is HTML, it can be transformed with
# HTMLRewriter -- otherwise, it should pass through
if content_type.startswith("text/html"):
return rewriter.transform(res)
return res
```
---
# Set security headers
URL: https://developers.cloudflare.com/workers/examples/security-headers/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request) {
const DEFAULT_SECURITY_HEADERS = {
/*
Secure your application with Content-Security-Policy headers.
Enabling these headers will permit content from a trusted domain and all its subdomains.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
"Content-Security-Policy": "default-src 'self' example.com *.example.com",
*/
/*
You can also set Strict-Transport-Security headers.
These are not automatically set because your website might get added to Chrome's HSTS preload list.
Here's the code if you want to apply it:
"Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload",
*/
/*
Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:
"Permissions-Policy": "interest-cohort=()",
*/
/*
X-XSS-Protection header prevents a page from loading if an XSS attack is detected.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
*/
"X-XSS-Protection": "0",
/*
X-Frame-Options header prevents click-jacking attacks.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
*/
"X-Frame-Options": "DENY",
/*
X-Content-Type-Options header prevents MIME-sniffing.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
*/
"X-Content-Type-Options": "nosniff",
"Referrer-Policy": "strict-origin-when-cross-origin",
"Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";',
"Cross-Origin-Opener-Policy": 'same-site; report-to="default";',
"Cross-Origin-Resource-Policy": "same-site",
};
const BLOCKED_HEADERS = [
"Public-Key-Pins",
"X-Powered-By",
"X-AspNet-Version",
];
let response = await fetch(request);
let newHeaders = new Headers(response.headers);
const tlsVersion = request.cf.tlsVersion;
console.log(tlsVersion);
    // Only set the security headers on HTML responses; pass other content types through unchanged:
if (
newHeaders.has("Content-Type") &&
!newHeaders.get("Content-Type").includes("text/html")
) {
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: newHeaders,
});
}
    Object.keys(DEFAULT_SECURITY_HEADERS).forEach((name) => {
newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]);
});
BLOCKED_HEADERS.forEach((name) => {
newHeaders.delete(name);
});
if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {
return new Response("You need to use TLS version 1.2 or higher.", {
status: 400,
});
} else {
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: newHeaders,
});
}
},
};
```
```ts
export default {
  async fetch(request): Promise<Response> {
const DEFAULT_SECURITY_HEADERS = {
/*
Secure your application with Content-Security-Policy headers.
Enabling these headers will permit content from a trusted domain and all its subdomains.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
"Content-Security-Policy": "default-src 'self' example.com *.example.com",
*/
/*
You can also set Strict-Transport-Security headers.
These are not automatically set because your website might get added to Chrome's HSTS preload list.
Here's the code if you want to apply it:
"Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload",
*/
/*
Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:
"Permissions-Policy": "interest-cohort=()",
*/
/*
X-XSS-Protection header prevents a page from loading if an XSS attack is detected.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
*/
"X-XSS-Protection": "0",
/*
X-Frame-Options header prevents click-jacking attacks.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
*/
"X-Frame-Options": "DENY",
/*
X-Content-Type-Options header prevents MIME-sniffing.
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
*/
"X-Content-Type-Options": "nosniff",
"Referrer-Policy": "strict-origin-when-cross-origin",
"Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";',
"Cross-Origin-Opener-Policy": 'same-site; report-to="default";',
"Cross-Origin-Resource-Policy": "same-site",
};
const BLOCKED_HEADERS = [
"Public-Key-Pins",
"X-Powered-By",
"X-AspNet-Version",
];
let response = await fetch(request);
let newHeaders = new Headers(response.headers);
const tlsVersion = request.cf.tlsVersion;
console.log(tlsVersion);
    // Only set the security headers on HTML responses; pass other content types through unchanged:
if (
newHeaders.has("Content-Type") &&
!newHeaders.get("Content-Type").includes("text/html")
) {
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: newHeaders,
});
}
    Object.keys(DEFAULT_SECURITY_HEADERS).forEach((name) => {
newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]);
});
BLOCKED_HEADERS.forEach((name) => {
newHeaders.delete(name);
});
if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {
return new Response("You need to use TLS version 1.2 or higher.", {
status: 400,
});
} else {
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: newHeaders,
});
}
},
} satisfies ExportedHandler;
```
```py
from js import Response, fetch, Headers
async def on_fetch(request):
default_security_headers = {
# Secure your application with Content-Security-Policy headers.
#Enabling these headers will permit content from a trusted domain and all its subdomains.
#@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
"Content-Security-Policy": "default-src 'self' example.com *.example.com",
#You can also set Strict-Transport-Security headers.
#These are not automatically set because your website might get added to Chrome's HSTS preload list.
#Here's the code if you want to apply it:
"Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload",
#Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:
"Permissions-Policy": "interest-cohort=()",
#X-XSS-Protection header prevents a page from loading if an XSS attack is detected.
#@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
"X-XSS-Protection": "0",
#X-Frame-Options header prevents click-jacking attacks.
#@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
"X-Frame-Options": "DENY",
#X-Content-Type-Options header prevents MIME-sniffing.
#@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
"X-Content-Type-Options": "nosniff",
"Referrer-Policy": "strict-origin-when-cross-origin",
"Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";',
"Cross-Origin-Opener-Policy": 'same-site; report-to="default";',
"Cross-Origin-Resource-Policy": "same-site",
}
blocked_headers = ["Public-Key-Pins", "X-Powered-By" ,"X-AspNet-Version"]
res = await fetch(request)
new_headers = Headers.new(res.headers)
    # Only set the security headers on HTML responses; pass other content types through unchanged
    if "text/html" not in (new_headers["Content-Type"] or ""):
        return Response.new(res.body, status=res.status, statusText=res.statusText, headers=new_headers)
for name in default_security_headers:
new_headers.set(name, default_security_headers[name])
for name in blocked_headers:
new_headers.delete(name)
tls = request.cf.tlsVersion
if not tls in ("TLSv1.2", "TLSv1.3"):
return Response.new("You need to use TLS version 1.2 or higher.", status=400)
return Response.new(res.body, status=res.status, statusText=res.statusText, headers=new_headers)
```
```rs
use std::collections::HashMap;
use worker::*;
#[event(fetch)]
async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
let default_security_headers = HashMap::from([
//Secure your application with Content-Security-Policy headers.
//Enabling these headers will permit content from a trusted domain and all its subdomains.
//@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
(
"Content-Security-Policy",
"default-src 'self' example.com *.example.com",
),
//You can also set Strict-Transport-Security headers.
//These are not automatically set because your website might get added to Chrome's HSTS preload list.
//Here's the code if you want to apply it:
(
"Strict-Transport-Security",
"max-age=63072000; includeSubDomains; preload",
),
//Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:
("Permissions-Policy", "interest-cohort=()"),
//X-XSS-Protection header prevents a page from loading if an XSS attack is detected.
//@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
("X-XSS-Protection", "0"),
//X-Frame-Options header prevents click-jacking attacks.
//@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
("X-Frame-Options", "DENY"),
//X-Content-Type-Options header prevents MIME-sniffing.
//@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
("X-Content-Type-Options", "nosniff"),
("Referrer-Policy", "strict-origin-when-cross-origin"),
(
"Cross-Origin-Embedder-Policy",
"require-corp; report-to='default';",
),
(
"Cross-Origin-Opener-Policy",
"same-site; report-to='default';",
),
("Cross-Origin-Resource-Policy", "same-site"),
]);
let blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"];
let tls = req.cf().unwrap().tls_version();
let res = Fetch::Request(req).send().await?;
let mut new_headers = res.headers().clone();
    // Only set the security headers on HTML responses; pass other content types through unchanged
    let is_html = new_headers
        .get("Content-Type")?
        .map(|ct| ct.contains("text/html"))
        .unwrap_or(false);
    if !is_html {
        return Ok(Response::from_body(res.body().clone())?
            .with_headers(new_headers)
            .with_status(res.status_code()));
    }
for (k, v) in default_security_headers {
new_headers.set(k, v)?;
}
for k in blocked_headers {
new_headers.delete(k)?;
}
    if !["TLSv1.2", "TLSv1.3"].contains(&tls.as_str()) {
return Response::error("You need to use TLS version 1.2 or higher.", 400);
}
Ok(Response::from_body(res.body().clone())?
.with_headers(new_headers)
.with_status(res.status_code()))
}
```
---
# Sign requests
URL: https://developers.cloudflare.com/workers/examples/signing-requests/
import { TabItem, Tabs } from "~/components";
:::note
This example Worker makes use of the [Node.js Buffer API](/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/#get-started).
:::
You can both verify and generate signed requests from within a Worker using the [Web Crypto APIs](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/subtle).
The following Worker will:
- For request URLs beginning with `/generate/`, replace `/generate/` with `/`, sign the resulting path with its timestamp, and return the full, signed URL in the response body.
- For all other request URLs, verify the signed URL and allow the request through.
```js
import { Buffer } from "node:buffer";
const encoder = new TextEncoder();
// How long an HMAC token should be valid for, in seconds
const EXPIRY = 60;
export default {
/**
*
* @param {Request} request
* @param {{SECRET_DATA: string}} env
* @returns
*/
async fetch(request, env) {
// You will need some secret data to use as a symmetric key. This should be
// attached to your Worker as an encrypted secret.
// Refer to https://developers.cloudflare.com/workers/configuration/secrets/
const secretKeyData = encoder.encode(
env.SECRET_DATA ?? "my secret symmetric key",
);
// Import your secret as a CryptoKey for both 'sign' and 'verify' operations
const key = await crypto.subtle.importKey(
"raw",
secretKeyData,
{ name: "HMAC", hash: "SHA-256" },
false,
["sign", "verify"],
);
const url = new URL(request.url);
// This is a demonstration Worker that allows unauthenticated access to /generate
// In a real application you would want to make sure that
// users could only generate signed URLs when authenticated
if (url.pathname.startsWith("/generate/")) {
url.pathname = url.pathname.replace("/generate/", "/");
const timestamp = Math.floor(Date.now() / 1000);
// This contains all the data about the request that you want to be able to verify
// Here we only sign the timestamp and the pathname, but often you will want to
// include more data (for instance, the URL hostname or query parameters)
const dataToAuthenticate = `${url.pathname}${timestamp}`;
const mac = await crypto.subtle.sign(
"HMAC",
key,
encoder.encode(dataToAuthenticate),
);
// Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/
// for more details on using Node.js APIs in Workers
const base64Mac = Buffer.from(mac).toString("base64");
url.searchParams.set("verify", `${timestamp}-${base64Mac}`);
return new Response(`${url.pathname}${url.search}`);
// Verify all non /generate requests
} else {
// Make sure you have the minimum necessary query parameters.
if (!url.searchParams.has("verify")) {
return new Response("Missing query parameter", { status: 403 });
}
const [timestamp, hmac] = url.searchParams.get("verify").split("-");
const assertedTimestamp = Number(timestamp);
const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`;
const receivedMac = Buffer.from(hmac, "base64");
// Use crypto.subtle.verify() to guard against timing attacks. Since HMACs use
// symmetric keys, you could implement this by calling crypto.subtle.sign() and
// then doing a string comparison -- this is insecure, as string comparisons
// bail out on the first mismatch, which leaks information to potential
// attackers.
const verified = await crypto.subtle.verify(
"HMAC",
key,
receivedMac,
encoder.encode(dataToAuthenticate),
);
if (!verified) {
return new Response("Invalid MAC", { status: 403 });
}
// Signed requests expire after one minute. Note that this value should depend on your specific use case
if (Date.now() / 1000 > assertedTimestamp + EXPIRY) {
return new Response(
`URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`,
{ status: 403 },
);
}
}
return fetch(new URL(url.pathname, "https://example.com"), request);
},
};
```
```ts
import { Buffer } from "node:buffer";
const encoder = new TextEncoder();
// How long an HMAC token should be valid for, in seconds
const EXPIRY = 60;
interface Env {
SECRET_DATA: string;
}
export default {
  async fetch(request, env): Promise<Response> {
// You will need some secret data to use as a symmetric key. This should be
// attached to your Worker as an encrypted secret.
// Refer to https://developers.cloudflare.com/workers/configuration/secrets/
const secretKeyData = encoder.encode(
env.SECRET_DATA ?? "my secret symmetric key",
);
// Import your secret as a CryptoKey for both 'sign' and 'verify' operations
const key = await crypto.subtle.importKey(
"raw",
secretKeyData,
{ name: "HMAC", hash: "SHA-256" },
false,
["sign", "verify"],
);
const url = new URL(request.url);
// This is a demonstration Worker that allows unauthenticated access to /generate
// In a real application you would want to make sure that
// users could only generate signed URLs when authenticated
if (url.pathname.startsWith("/generate/")) {
url.pathname = url.pathname.replace("/generate/", "/");
const timestamp = Math.floor(Date.now() / 1000);
// This contains all the data about the request that you want to be able to verify
// Here we only sign the timestamp and the pathname, but often you will want to
// include more data (for instance, the URL hostname or query parameters)
const dataToAuthenticate = `${url.pathname}${timestamp}`;
const mac = await crypto.subtle.sign(
"HMAC",
key,
encoder.encode(dataToAuthenticate),
);
// Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/
      // for more details on using Node.js APIs in Workers
const base64Mac = Buffer.from(mac).toString("base64");
url.searchParams.set("verify", `${timestamp}-${base64Mac}`);
return new Response(`${url.pathname}${url.search}`);
// Verify all non /generate requests
} else {
// Make sure you have the minimum necessary query parameters.
if (!url.searchParams.has("verify")) {
return new Response("Missing query parameter", { status: 403 });
}
const [timestamp, hmac] = url.searchParams.get("verify").split("-");
const assertedTimestamp = Number(timestamp);
const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`;
const receivedMac = Buffer.from(hmac, "base64");
// Use crypto.subtle.verify() to guard against timing attacks. Since HMACs use
// symmetric keys, you could implement this by calling crypto.subtle.sign() and
// then doing a string comparison -- this is insecure, as string comparisons
// bail out on the first mismatch, which leaks information to potential
// attackers.
const verified = await crypto.subtle.verify(
"HMAC",
key,
receivedMac,
encoder.encode(dataToAuthenticate),
);
if (!verified) {
return new Response("Invalid MAC", { status: 403 });
}
// Signed requests expire after one minute. Note that this value should depend on your specific use case
if (Date.now() / 1000 > assertedTimestamp + EXPIRY) {
return new Response(
`URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`,
{ status: 403 },
);
}
}
return fetch(new URL(url.pathname, "https://example.com"), request);
},
} satisfies ExportedHandler<Env>;
```
## Validate signed requests using the WAF
The provided example code for signing requests is compatible with the [`is_timed_hmac_valid_v0()`](/ruleset-engine/rules-language/functions/#hmac-validation) Rules language function. This means that you can verify requests signed by the Worker script using a [WAF custom rule](/waf/custom-rules/use-cases/configure-token-authentication/#option-2-configure-using-waf-custom-rules).
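As a sketch, a WAF custom rule with a Block action could match requests whose signature fails validation with an expression along these lines (the secret, the 60-second TTL, and the 8-character `?verify=` separator length are assumptions that must match your Worker's configuration):

```txt
not is_timed_hmac_valid_v0("my secret symmetric key", http.request.uri, 60, http.request.timestamp.sec, 8)
```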
---
# Turnstile with Workers
URL: https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/
import { TabItem, Tabs } from "~/components";
```js
export default {
async fetch(request, env) {
const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret)
const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in
let res = await fetch(request);
// Instantiate the API to run on specific elements, for example, `head`, `div`
let newRes = new HTMLRewriter()
// `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API
.on("head", {
element(element) {
// In this case, you are using `append` to add a new script to the `head` element
          element.append(
            `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`,
            { html: true },
          );
},
})
.on("div", {
element(element) {
// Add a turnstile widget element into if an element with the id of TURNSTILE_ATTR_NAME is found
if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) {
            element.append(
              `<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`,
              { html: true },
            );
}
},
})
.transform(res);
return newRes;
},
};
```
```ts
export default {
  async fetch(request, env): Promise<Response> {
const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret)
const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in
let res = await fetch(request);
// Instantiate the API to run on specific elements, for example, `head`, `div`
let newRes = new HTMLRewriter()
// `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API
.on("head", {
element(element) {
// In this case, you are using `append` to add a new script to the `head` element
          element.append(
            `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`,
            { html: true },
          );
},
})
.on("div", {
element(element) {
// Add a turnstile widget element into if an element with the id of TURNSTILE_ATTR_NAME is found
if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) {
            element.append(
              `<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`,
              { html: true },
            );
}
},
})
.transform(res);
return newRes;
},
} satisfies ExportedHandler;
```
```py
from pyodide.ffi import create_proxy
from js import HTMLRewriter, fetch
async def on_fetch(request, env):
site_key = env.SITE_KEY
attr_name = env.TURNSTILE_ATTR_NAME
res = await fetch(request)
class Append:
def element(self, element):
            s = '<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>'
element.append(s, {"html": True})
class AppendOnID:
def __init__(self, name):
self.name = name
def element(self, element):
# You are using the `getAttribute` method here to retrieve the `id` or `class` of an element
if element.getAttribute("id") == self.name:
                div = f'<div class="cf-turnstile" data-sitekey="{site_key}"></div>'
element.append(div, { "html": True })
# Instantiate the API to run on specific elements, for example, `head`, `div`
head = create_proxy(Append())
div = create_proxy(AppendOnID(attr_name))
new_res = HTMLRewriter.new().on("head", head).on("div", div).transform(res)
return new_res
```
:::note
This is only half the implementation for Turnstile. The corresponding token that is a result of a widget being rendered also needs to be verified using the [siteverify API](/turnstile/get-started/server-side-validation/). Refer to the example below for one such implementation.
:::
```js
async function handlePost(request, env) {
  // Clone the request before reading the form data so the original body
  // can still be forwarded to the origin below.
  const body = await request.clone().formData();
// Turnstile injects a token in `cf-turnstile-response`.
const token = body.get('cf-turnstile-response');
const ip = request.headers.get('CF-Connecting-IP');
// Validate the token by calling the `/siteverify` API.
let formData = new FormData();
// `secret_key` here is the Turnstile Secret key, which should be set using Wrangler secrets
formData.append('secret', env.SECRET_KEY);
formData.append('response', token);
formData.append('remoteip', ip); //This is optional.
const url = 'https://challenges.cloudflare.com/turnstile/v0/siteverify';
const result = await fetch(url, {
body: formData,
method: 'POST',
});
const outcome = await result.json();
if (!outcome.success) {
return new Response('The provided Turnstile token was not valid!', { status: 401 });
}
// The Turnstile token was successfully validated. Proceed with your application logic.
// Validate login, redirect user, etc.
// Clone the original request with a new body
const newRequest = new Request(request, {
body: request.body, // Reuse the body
method: request.method,
headers: request.headers
});
return await fetch(newRequest);
}
export default {
async fetch(request, env) {
const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret)
const TURNSTILE_ATTR_NAME = 'your_id_to_replace'; // The id of the element to put a Turnstile widget in
    if (request.method === 'POST') {
      return handlePost(request, env);
    }
    let res = await fetch(request);
// Instantiate the API to run on specific elements, for example, `head`, `div`
let newRes = new HTMLRewriter()
// `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API
.on('head', {
element(element) {
// In this case, you are using `append` to add a new script to the `head` element
          element.append(`<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`, { html: true });
},
})
.on('div', {
element(element) {
// You are using the `getAttribute` method here to retrieve the `id` or `class` of an element
          if (element.getAttribute('id') === TURNSTILE_ATTR_NAME) {
            element.append(`<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`, { html: true });
}
},
})
.transform(res);
return newRes
}
}
```
---
# Using the WebSockets API
URL: https://developers.cloudflare.com/workers/examples/websockets/
import { TabItem, Tabs } from "~/components";
WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. In this guide, you will learn the basics of WebSockets on Cloudflare Workers, both writing WebSocket servers in your Workers functions and connecting to and working with those WebSocket servers as a client.
WebSockets are open connections sustained between the client and the origin server. Inside a WebSocket connection, the client and the origin can pass data back and forth without having to reestablish sessions. This makes exchanging data within a WebSocket connection fast. WebSockets are often used for real-time applications such as live chat and gaming.
:::note
WebSockets utilize an event-based system for receiving and sending messages, much like the Workers runtime model of responding to events.
:::
:::note
If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single point of coordination. Durable Objects provide this single point of coordination for Cloudflare Workers and are often used in parallel with WebSockets to persist state across multiple clients and connections. In this case, refer to [Durable Objects](/durable-objects/) to get started, and prefer the Durable Objects extended [WebSockets API](/durable-objects/best-practices/websockets/).
:::
## Write a WebSocket Server
WebSocket servers in Cloudflare Workers allow you to receive messages from a client in real time. This guide will show you how to set up a WebSocket server in Workers.
A client can make a WebSocket request in the browser by instantiating a new instance of `WebSocket`, passing in the URL for your Workers function:
```js
// In client-side JavaScript, connect to your Workers function using WebSockets:
const websocket = new WebSocket('wss://example-websocket.signalnerve.workers.dev');
```
:::note
For more details about creating and working with WebSockets in the client, refer to [Writing a WebSocket client](#write-a-websocket-client).
:::
When an incoming WebSocket request reaches the Workers function, it will contain an `Upgrade` header, set to the string value `websocket`. Check for this header before continuing to instantiate a WebSocket:
```js
async function handleRequest(request) {
const upgradeHeader = request.headers.get('Upgrade');
if (!upgradeHeader || upgradeHeader !== 'websocket') {
return new Response('Expected Upgrade: websocket', { status: 426 });
}
}
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
let upgrade_header = match req.headers().get("Upgrade") {
Some(h) => h.to_str().unwrap(),
None => "",
};
if upgrade_header != "websocket" {
return worker::Response::error("Expected Upgrade: websocket", 426);
}
}
```
After you have appropriately checked for the `Upgrade` header, you can create a new instance of `WebSocketPair`, which contains server and client WebSockets. One of these WebSockets should be handled by the Workers function and the other should be returned as part of a `Response` with the [`101` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/101), indicating the request is switching protocols:
```js
async function handleRequest(request) {
const upgradeHeader = request.headers.get('Upgrade');
if (!upgradeHeader || upgradeHeader !== 'websocket') {
return new Response('Expected Upgrade: websocket', { status: 426 });
}
const webSocketPair = new WebSocketPair();
const client = webSocketPair[0],
server = webSocketPair[1];
return new Response(null, {
status: 101,
webSocket: client,
});
}
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
let upgrade_header = match req.headers().get("Upgrade") {
Some(h) => h.to_str().unwrap(),
None => "",
};
if upgrade_header != "websocket" {
return worker::Response::error("Expected Upgrade: websocket", 426);
}
let ws = WebSocketPair::new()?;
let client = ws.client;
let server = ws.server;
server.accept()?;
worker::Response::from_websocket(client)
}
```
The `WebSocketPair` constructor returns an Object, with the `0` and `1` keys each holding a `WebSocket` instance as its value. It is common to grab the two WebSockets from this pair using [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_objects/Object/values) and [ES6 destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment), as seen in the below example.
In order to begin communicating with the `client` WebSocket in your Worker, call `accept` on the `server` WebSocket. This will tell the Workers runtime that it should listen for WebSocket data and keep the connection open with your `client` WebSocket:
```js
async function handleRequest(request) {
const upgradeHeader = request.headers.get('Upgrade');
if (!upgradeHeader || upgradeHeader !== 'websocket') {
return new Response('Expected Upgrade: websocket', { status: 426 });
}
const webSocketPair = new WebSocketPair();
const [client, server] = Object.values(webSocketPair);
server.accept();
return new Response(null, {
status: 101,
webSocket: client,
});
}
```
```rs
use worker::*;
#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
let upgrade_header = match req.headers().get("Upgrade") {
Some(h) => h.to_str().unwrap(),
None => "",
};
if upgrade_header != "websocket" {
return worker::Response::error("Expected Upgrade: websocket", 426);
}
let ws = WebSocketPair::new()?;
let client = ws.client;
let server = ws.server;
server.accept()?;
worker::Response::from_websocket(client)
}
```
WebSockets emit a number of [Events](/workers/runtime-apis/websockets/#events) that can be connected to using `addEventListener`. The below example hooks into the `message` event and emits a `console.log` with the data from it:
```js
async function handleRequest(request) {
const upgradeHeader = request.headers.get('Upgrade');
if (!upgradeHeader || upgradeHeader !== 'websocket') {
return new Response('Expected Upgrade: websocket', { status: 426 });
}
const webSocketPair = new WebSocketPair();
const [client, server] = Object.values(webSocketPair);
server.accept();
server.addEventListener('message', event => {
console.log(event.data);
});
return new Response(null, {
status: 101,
webSocket: client,
});
}
```
```rs
use futures::StreamExt;
use worker::*;
#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
let upgrade_header = match req.headers().get("Upgrade") {
Some(h) => h.to_str().unwrap(),
None => "",
};
if upgrade_header != "websocket" {
return worker::Response::error("Expected Upgrade: websocket", 426);
}
let ws = WebSocketPair::new()?;
let client = ws.client;
let server = ws.server;
server.accept()?;
wasm_bindgen_futures::spawn_local(async move {
let mut event_stream = server.events().expect("could not open stream");
while let Some(event) = event_stream.next().await {
match event.expect("received error in websocket") {
WebsocketEvent::Message(msg) => server.send(&msg.text()).unwrap(),
WebsocketEvent::Close(event) => console_log!("{:?}", event),
}
}
});
worker::Response::from_websocket(client)
}
```
### Connect to the WebSocket server from a client
Writing WebSocket clients that communicate with your Workers function is a two-step process: first, create the WebSocket instance, and then attach event listeners to it:
```js
const websocket = new WebSocket('wss://websocket-example.signalnerve.workers.dev');
websocket.addEventListener('message', event => {
console.log('Message received from server');
console.log(event.data);
});
```
WebSocket clients can send messages back to the server using the [`send`](/workers/runtime-apis/websockets/#send) function:
```js
websocket.send('MESSAGE');
```
When the WebSocket interaction is complete, the client can close the connection using [`close`](/workers/runtime-apis/websockets/#close):
```js
websocket.close();
```
For an example of this in practice, refer to the [`websocket-template`](https://github.com/cloudflare/websocket-template) to get started with WebSockets.
## Write a WebSocket client
Cloudflare Workers supports the `new WebSocket(url)` constructor. A Worker can establish a WebSocket connection to a remote server in the same manner as the client implementation described above.
Additionally, Cloudflare supports establishing WebSocket connections by making a fetch request to a URL with the `Upgrade` header set.
```js
async function websocket(url) {
// Make a fetch request including `Upgrade: websocket` header.
// The Workers Runtime will automatically handle other requirements
// of the WebSocket protocol, like the Sec-WebSocket-Key header.
let resp = await fetch(url, {
headers: {
Upgrade: 'websocket',
},
});
// If the WebSocket handshake completed successfully, then the
// response has a `webSocket` property.
let ws = resp.webSocket;
if (!ws) {
throw new Error("server didn't accept WebSocket");
}
// Call accept() to indicate that you'll be handling the socket here
// in JavaScript, as opposed to returning it on to a client.
ws.accept();
// Now you can send and receive messages like before.
ws.send('hello');
ws.addEventListener('message', msg => {
console.log(msg.data);
});
}
```
## WebSocket compression
Cloudflare Workers supports WebSocket compression. Refer to [WebSocket Compression](/workers/configuration/compatibility-flags/#websocket-compression) for more information.
---
# Platform
URL: https://developers.cloudflare.com/workers/platform/
import { DirectoryListing } from "~/components";
Pricing, limits and other information about the Workers platform.
---
# Betas
URL: https://developers.cloudflare.com/workers/platform/betas/
These are the current alphas and betas relevant to the Cloudflare Workers platform.
* **Public alphas and betas are openly available**, but may have limitations and caveats due to their early stage of development.
* Private alphas and betas require explicit access to be granted. Refer to the documentation to join the relevant product waitlist.
| Product | Private Beta | Public Beta | More Info |
| ------------------------------------------------- | ------------ | ----------- | --------------------------------------------------------------------------- |
| Email Workers | | ✅ | [Docs](/email-routing/email-workers/) |
| Green Compute | | ✅ | [Blog](https://blog.cloudflare.com/earth-day-2022-green-compute-open-beta/) |
| Pub/Sub | ✅ | | [Docs](/pub-sub) |
| [TCP Sockets](/workers/runtime-apis/tcp-sockets/) | | ✅ | [Docs](/workers/runtime-apis/tcp-sockets) |
---
# Known issues
URL: https://developers.cloudflare.com/workers/platform/known-issues/
Below are some known bugs and issues to be aware of when using Cloudflare Workers.
## Route specificity
* When defining route specificity, a trailing `/*` in your pattern may not act as expected.
Consider two different Workers, each deployed to the same zone. Worker A is assigned the `example.com/images/*` route and Worker B is given the `example.com/images*` route pattern. With these in place, here is how the following URLs will be resolved:
```
// (A) example.com/images/*
// (B) example.com/images*
"example.com/images"
// -> B
"example.com/images123"
// -> B
"example.com/images/hello"
// -> B
```
You will notice that all examples trigger Worker B. This includes the final example, which exemplifies the unexpected behavior.
When adding a wildcard on a subdomain, here is how the following URLs will be resolved:
```
// (A) *.example.com/a
// (B) a.example.com/*
"a.example.com/a"
// -> B
```
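To build an intuition for why the bare `images*` pattern matches in each case, it can help to model wildcard patterns as regular expressions. The sketch below is a simplified illustration of wildcard matching only — it is not Cloudflare's actual routing implementation, and `patternToRegExp`/`matches` are hypothetical helpers:

```javascript
// Convert a Workers-style wildcard pattern into a RegExp.
// `*` matches any run of characters, including `/`.
// Simplified model for illustration; not the production matcher.
function patternToRegExp(pattern) {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
}

function matches(pattern, url) {
  return patternToRegExp(pattern).test(url);
}

// (A) `example.com/images/*` requires the trailing slash, so it does
// not match the bare path or `images123`:
console.log(matches("example.com/images/*", "example.com/images")); // false
console.log(matches("example.com/images*", "example.com/images")); // true
console.log(matches("example.com/images*", "example.com/images123")); // true
// Both patterns match `/images/hello`; route resolution picks B here,
// which is the unexpected behavior described above.
console.log(matches("example.com/images/*", "example.com/images/hello")); // true
console.log(matches("example.com/images*", "example.com/images/hello")); // true
```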
## wrangler dev
* When running `wrangler dev --remote`, all outgoing requests are given the `cf-workers-preview-token` header, which Cloudflare recognizes as a preview request. Because this applies across the entire Cloudflare network, HTTP requests to other Cloudflare zones are currently discarded for security reasons. To work around this, remove the header before forwarding the request in your Worker script:
```js
const request = new Request(url, incomingRequest);
request.headers.delete('cf-workers-preview-token');
return await fetch(request);
```
## Fetch API in CNAME setup
When you make a subrequest using [`fetch()`](/workers/runtime-apis/fetch/) from a Worker, the Cloudflare DNS resolver is used. When a zone has a [Partial (CNAME) setup](/dns/zone-setups/partial-setup/), all hostnames that the Worker needs to be able to resolve require a dedicated DNS entry in Cloudflare's DNS setup. Otherwise the Fetch API call will fail with status code [530 (1016)](/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-1xxx-errors#error-1016-origin-dns-error).
Setup with missing DNS records in Cloudflare DNS
```
// Zone in partial setup: example.com
// DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ...
// DNS records at Cloudflare DNS: sub1.example.com
"sub1.example.com/"
// -> Can be resolved by Fetch API
"sub2.example.com/"
// -> Cannot be resolved by Fetch API, will lead to 530 status code
```
After adding `sub2.example.com` to Cloudflare DNS
```
// Zone in partial setup: example.com
// DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ...
// DNS records at Cloudflare DNS: sub1.example.com, sub2.example.com
"sub1.example.com/"
// -> Can be resolved by Fetch API
"sub2.example.com/"
// -> Can be resolved by Fetch API
```
## Fetch to IP addresses
Workers subrequests can only be made to URLs, not directly to IP addresses. To work around this limitation, [add an A or AAAA record to your zone](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/) and then fetch that resource.
For example, in the zone `example.com` create a record of type `A` with the name `server` and value `192.0.2.1`, and then use:
```js
await fetch('http://server.example.com')
```
Do not use:
```js
await fetch('http://192.0.2.1')
```
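If you want to catch such URLs in code before fetching, you can check whether a URL's hostname is an IP literal. The helper below is a hypothetical sketch (the IPv4 check is deliberately simple, not full validation):

```javascript
// Return true if the URL's hostname is an IP literal rather than a name.
// Hypothetical helper with a deliberately simple IPv4 check.
function isIpLiteral(url) {
  const { hostname } = new URL(url);
  // The WHATWG URL parser keeps the brackets around IPv6 literals.
  if (hostname.startsWith("[")) return true;
  // Dotted-quad IPv4.
  return /^\d{1,3}(\.\d{1,3}){3}$/.test(hostname);
}

console.log(isIpLiteral("http://192.0.2.1")); // true
console.log(isIpLiteral("http://server.example.com")); // false
```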
---
# Limits
URL: https://developers.cloudflare.com/workers/platform/limits/
import { Render } from "~/components";
## Account plan limits
| Feature | Workers Free | Workers Paid |
| -------------------------------------------------------------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [Subrequests](#subrequests) | 50/request | 1000/request |
| [Simultaneous outgoing connections/request](#simultaneous-open-connections) | 6 | 6 |
| [Environment variables](#environment-variables) | 64/Worker | 128/Worker |
| [Environment variable size](#environment-variables) | 5 KB | 5 KB |
| [Worker size](#worker-size) | 3 MB | 10 MB |
| [Worker startup time](#worker-startup-time) | 400 ms | 400 ms |
| [Number of Workers](#number-of-workers)1 | 100 | 500 |
| Number of [Cron Triggers](/workers/configuration/cron-triggers/) per account | 5 | 250 |
1 If you are running into limits, your project may be a good fit for
[Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/).
---
## Request limits
URLs have a limit of 16 KB.
Request headers observe a total limit of 32 KB, but each header is limited to 16 KB.
Cloudflare has network-wide limits on the request body size. This limit is tied to your Cloudflare account's plan, which is separate from your Workers plan. When the request body size of your `POST`/`PUT`/`PATCH` requests exceeds your plan's limit, the request is rejected with a `(413) Request entity too large` error.
Cloudflare Enterprise customers may contact their account team or [Cloudflare Support](/support/contacting-cloudflare-support/) to have a request body limit beyond 500 MB.
| Cloudflare Plan | Maximum body size |
| --------------- | ------------------- |
| Free | 100 MB |
| Pro | 100 MB |
| Business | 200 MB |
| Enterprise | 500 MB (by default) |
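As a rough illustration of how these limits apply, a client could pre-check a payload against its plan's maximum before uploading. The values mirror the table above (binary megabytes assumed); `bodyWithinLimit` is a hypothetical helper:

```javascript
// Maximum request body size per Cloudflare plan, in bytes,
// mirroring the table above (binary megabytes assumed).
const MAX_BODY_BYTES = {
  free: 100 * 1024 * 1024,
  pro: 100 * 1024 * 1024,
  business: 200 * 1024 * 1024,
  enterprise: 500 * 1024 * 1024, // default; higher limits on request
};

// True if a body of `sizeBytes` stays under the plan limit,
// i.e. would not be rejected with a 413.
function bodyWithinLimit(plan, sizeBytes) {
  return sizeBytes <= MAX_BODY_BYTES[plan];
}

console.log(bodyWithinLimit("free", 50 * 1024 * 1024)); // true
console.log(bodyWithinLimit("pro", 150 * 1024 * 1024)); // false
```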
---
## Response limits
Response headers observe a total limit of 32 KB, but each header is limited to 16 KB.
Cloudflare does not enforce response limits on response body sizes, but cache limits for [our CDN are observed](/cache/concepts/default-cache-behavior/). Maximum file size is 512 MB for Free, Pro, and Business customers and 5 GB for Enterprise customers.
---
## Worker limits
| Feature | Workers Free | Workers Paid |
| ------------------------ | ------------------------------------------ | ---------------- |
| [Request](#request) | 100,000 requests/day, 1,000 requests/min | No limit |
| [Worker memory](#memory) | 128 MB | 128 MB |
| [CPU time](#cpu-time) | 10 ms | 30 s (HTTP request), 15 min ([Cron Trigger](/workers/configuration/cron-triggers/)) |
| [Duration](#duration) | No limit | No limit for Workers. 15 min duration limit for [Cron Triggers](/workers/configuration/cron-triggers/), [Durable Object Alarms](/durable-objects/api/alarms/) and [Queue Consumers](/queues/configuration/javascript-apis/#consumer) |
### Duration
Duration is a measurement of wall-clock time — the total amount of time from the start to end of an invocation of a Worker. There is no hard limit on the duration of a Worker. As long as the client that sent the request remains connected, the Worker can continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client request are canceled. Use [`event.waitUntil()`](/workers/runtime-apis/handlers/fetch/) to delay cancellation for another 30 seconds or until the promise passed to `waitUntil()` completes.
:::note
Cloudflare updates the Workers runtime a few times per week. When this happens, any in-flight requests are given a grace period of 30 seconds to finish. If a request does not finish within this time, it is terminated. While your application should follow the best practice of handling disconnects by retrying requests, this scenario is extremely improbable. To encounter it, you would need to have a request that takes longer than 30 seconds that also happens to intersect with the exact time an update to the runtime is happening.
:::
### CPU time
CPU time is the amount of time the CPU actually spends doing work, during a given request. Most Workers requests consume less than a millisecond of CPU time. It is rare to find normally operating Workers that exceed the CPU time limit.
Using DevTools locally can help identify CPU intensive portions of your code. See the [CPU profiling with DevTools documentation](/workers/observability/dev-tools/cpu-usage/) to learn more.
You can also set a custom limit on the amount of CPU time that can be used during each invocation of your Worker. To do so, navigate to the Workers section in the Cloudflare dashboard. Select the specific Worker you wish to modify, then open the **Settings** tab, where you can adjust the CPU time limit.
:::note
Scheduled Workers ([Cron Triggers](/workers/configuration/cron-triggers/)) have different limits on CPU time based on the schedule interval. When the schedule interval is less than 1 hour, a Scheduled Worker may run for up to 30 seconds. When the schedule interval is more than 1 hour, a scheduled Worker may run for up to 15 minutes.
:::
---
## Cache API limits
| Feature | Workers Free | Workers Paid
| ---------------------------------------- | ------------ | ------------ |
| [Maximum object size](#cache-api-limits) | 512 MB | 512 MB |
| [Calls/request](#cache-api-limits) | 50 | 1,000 |
Calls/request means the number of calls to the `put()`, `match()`, or `delete()` Cache API methods per request. These calls draw from the same quota as subrequests (`fetch()`).
:::note
The size of chunked response bodies (`Transfer-Encoding: chunked`) is not known in advance. As a result, `.put()`ing such a response will block subsequent `.put()`s from starting until the current `.put()` completes.
:::
---
## Request
Workers automatically scale onto thousands of Cloudflare global network servers around the world. There is no general limit to the number of requests per second Workers can handle.
Cloudflare’s abuse protection methods do not affect well-intentioned traffic. However, if you send many thousands of requests per second from a small number of client IP addresses, you can inadvertently trigger Cloudflare’s abuse protection. If you expect legitimate traffic to your application to incur `1015` errors, [contact Cloudflare support](/support/contacting-cloudflare-support/) to increase your limit. Cloudflare's anti-abuse Workers Rate Limiting does not apply to Enterprise customers.
You can also confirm if you have been rate limited by anti-abuse Worker Rate Limiting by logging into the Cloudflare dashboard, selecting your account and zone, and going to **Security** > **Events**. Find the event and expand it. If the **Rule ID** is `worker`, this confirms that it is the anti-abuse Worker Rate Limiting.
The burst rate and daily request limits apply at the account level, meaning that requests on your `*.workers.dev` subdomain count toward the same limit as your zones. Upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to automatically lift these limits.
:::caution
If you are currently being rate limited, upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to lift burst rate and daily request limits.
:::
### Burst rate
Accounts using the Workers Free plan are subject to a burst rate limit of 1,000 requests per minute. Users visiting a rate limited site will receive a Cloudflare `1015` error page. However if you are calling your Worker programmatically, you can detect the rate limit page and handle it yourself by looking for HTTP status code `429`.
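A client calling a Worker programmatically could detect the `429` status and back off before retrying. The sketch below shows one common approach; `backoffDelay` and `fetchWithRetry` are hypothetical helpers, not part of any Workers API:

```javascript
// Exponential backoff: 2^attempt * base, capped at `maxMs`.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Hypothetical retry wrapper: retries only when the Worker responds
// with HTTP 429 (the status surfaced when burst rate limited).
async function fetchWithRetry(url, maxAttempts = 4) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const resp = await fetch(url);
    if (resp.status !== 429) return resp;
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  throw new Error("still rate limited after retries");
}

console.log(backoffDelay(0)); // 500
console.log(backoffDelay(3)); // 4000
```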
Workers being rate-limited by Anti-Abuse Protection are also visible from the Cloudflare dashboard:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account and your website.
2. Select **Security** > **Events** > scroll to **Sampled logs**.
3. Review the log for a Web Application Firewall block event with a `ruleID` of `worker`.
### Daily request
Accounts using the Workers Free plan are subject to a daily request limit of 100,000 requests. Free plan daily request counts reset at midnight UTC. When a Worker fails because it has hit the daily request limit, you can configure its corresponding [route](/workers/configuration/routing/routes/) to behave in one of two modes: fail open or fail closed.
#### Fail open
Routes in fail open mode will bypass the failing Worker and prevent it from operating on incoming traffic. Incoming requests will behave as if there was no Worker.
#### Fail closed
Routes in fail closed mode will display a Cloudflare `1027` error page to visitors, signifying the Worker has been temporarily disabled. Cloudflare recommends this option if your Worker is performing security related tasks.
---
## Memory
Only one Workers instance runs on each of the many servers in Cloudflare's global network. Each Workers instance can consume up to 128 MB of memory. Use [global variables](/workers/runtime-apis/web-standards/) to persist data between requests on individual nodes. Note, however, that nodes are occasionally evicted from memory.
If a Worker processes a request that pushes the Worker over the 128 MB limit, the Cloudflare Workers runtime may cancel one or more requests. To view these errors, as well as CPU limit overages:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Workers & Pages** and in **Overview**, select the Worker you would like to investigate.
3. Under **Metrics**, select **Errors** > **Invocation Statuses** and examine **Exceeded Memory**.
Use the [TransformStream API](/workers/runtime-apis/streams/transformstream/) to stream responses if you are concerned about memory usage. This avoids loading an entire response into memory.
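A minimal sketch of this pattern is shown below: chunks flow through a `TransformStream` one at a time, so the full body is never buffered. The uppercasing transform is just a placeholder for per-chunk work, and the commented usage assumes a hypothetical upstream URL:

```javascript
// Placeholder per-chunk work; swap in your own processing.
function processChunk(text) {
  return text.toUpperCase();
}

// A TransformStream whose transform runs on each chunk as it flows
// through, so the full response body is never held in memory at once.
function uppercaseTransform() {
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(processChunk(chunk));
    },
  });
}

// Sketch of use inside a Worker fetch handler (hypothetical URL):
//   const upstream = await fetch("https://example.com/large-file");
//   return new Response(
//     upstream.body
//       .pipeThrough(new TextDecoderStream())
//       .pipeThrough(uppercaseTransform())
//       .pipeThrough(new TextEncoderStream()),
//     upstream,
//   );
```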
Using DevTools locally can help identify memory leaks in your code. See the [memory profiling with DevTools documentation](/workers/observability/dev-tools/memory-usage/) to learn more.
---
## Subrequests
A subrequest is any request that a Worker makes to either Internet resources using the [Fetch API](/workers/runtime-apis/fetch/) or requests to other Cloudflare services like [R2](/r2/), [KV](/kv/), or [D1](/d1/).
### Worker-to-Worker subrequests
To make subrequests from your Worker to another Worker on your account, use [Service Bindings](/workers/runtime-apis/bindings/service-bindings/). Service bindings allow you to send HTTP requests to another Worker without those requests going over the Internet.
If you attempt to use global [`fetch()`](/workers/runtime-apis/fetch/) to make a subrequest to another Worker on your account that runs on the same [zone](/fundamentals/setup/accounts-and-zones/#zones), without service bindings, the request will fail.
If you make a subrequest from your Worker to a target Worker that runs on a [Custom Domain](/workers/configuration/routing/custom-domains/#worker-to-worker-communication) rather than a route, the request will be allowed.
### How many subrequests can I make?
You can make 50 subrequests per request on Workers Free, and 1,000 subrequests per request on Workers Paid. Each subrequest in a redirect chain counts against this limit. This means that the number of subrequests a Worker makes could be greater than the number of `fetch(request)` calls in the Worker.
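If redirect chains are consuming your subrequest budget, you can opt out of automatic redirect following so that each `fetch()` costs exactly one subrequest. A sketch (the `fetchNoFollow` wrapper is a hypothetical helper):

```javascript
// 3xx statuses that indicate a redirect response.
function isRedirect(status) {
  return status >= 300 && status < 400;
}

// Hypothetical wrapper: with `redirect: "manual"`, fetch() makes
// exactly one subrequest and surfaces the 3xx response, instead of
// spending an extra subrequest on each hop of a redirect chain.
async function fetchNoFollow(url) {
  const resp = await fetch(url, { redirect: "manual" });
  if (isRedirect(resp.status)) {
    console.log("redirect to:", resp.headers.get("Location"));
  }
  return resp;
}

console.log(isRedirect(301)); // true
console.log(isRedirect(200)); // false
```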
For subrequests to internal services like Workers KV and Durable Objects, the subrequest limit is 1,000 per request, regardless of the [usage model](/workers/platform/pricing/#workers) configured for the Worker.
### How long can a subrequest take?
There is no set limit on the amount of real time a Worker may use. As long as the client which sent a request remains connected, the Worker may continue processing, making subrequests, and setting timeouts on behalf of that request.
When the client disconnects, all tasks associated with that client’s request are proactively canceled. If the Worker passed a promise to [`event.waitUntil()`](/workers/runtime-apis/handlers/fetch/), cancellation will be delayed until the promise has completed or until an additional 30 seconds have elapsed, whichever happens first.
---
## Simultaneous open connections
You can open up to six connections simultaneously, for each invocation of your Worker. The connections opened by the following API calls all count toward this limit:
- the `fetch()` method of the [Fetch API](/workers/runtime-apis/fetch/).
- `get()`, `put()`, `list()`, and `delete()` methods of [Workers KV namespace objects](/kv/api/).
- `put()`, `match()`, and `delete()` methods of [Cache objects](/workers/runtime-apis/cache/).
- `list()`, `get()`, `put()`, `delete()`, and `head()` methods of [R2](/r2/).
- `send()` and `sendBatch()`, methods of [Queues](/queues/).
- Opening a TCP socket using the [`connect()`](/workers/runtime-apis/tcp-sockets/) API.
Once an invocation has six connections open, it can still attempt to open additional connections.
- These attempts are put in a pending queue — the connections will not be initiated until one of the currently open connections has closed.
- Earlier connections can delay later ones: if a Worker tries to make many simultaneous subrequests, its later subrequests may appear to take longer to start.
If parts of your application use `fetch()` but do not consume the response body, you can prevent the unread body from occupying a connection by calling `response.body.cancel()`.
For example, if you want to check whether the HTTP response code is successful (2xx) before consuming the body, you should explicitly cancel the pending response body:
```ts
let resp = await fetch(url);
// Only read the response body for successful (2xx) responses
if (resp.ok) {
// Call resp.json(), resp.text() or otherwise process the body
} else {
// Explicitly cancel it
resp.body.cancel();
}
```
This will free up an open connection.
If the system detects that a Worker is deadlocked on open connections — for example, if the Worker has pending connection attempts but has no in-progress reads or writes on the connections that it already has open — then the least-recently-used open connection will be canceled to unblock the Worker.
If the Worker later attempts to use a canceled connection, an exception will be thrown. These exceptions should rarely occur in practice, though, since it is uncommon for a Worker to open a connection that it does not have an immediate use for.
:::note
Simultaneous Open Connections are measured from the top-level request, meaning any connections open from Workers sharing resources (for example, Workers triggered via [Service bindings](/workers/runtime-apis/bindings/service-bindings/)) will share the simultaneous open connection limit.
:::
---
## Environment variables
The maximum number of environment variables (secret and text combined) for a Worker is 128 variables on the Workers Paid plan, and 64 variables on the Workers Free plan.
There is no limit to the number of environment variables per account.
Each environment variable has a size limitation of 5 KB.
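Before deploying, you could sanity-check your variables against these limits. The sketch below mirrors the numbers above; `checkEnvVars` is a hypothetical helper, not part of Wrangler:

```javascript
// Limits from the text above: 5 KB per variable; 64 (Free) or
// 128 (Paid) variables per Worker.
const MAX_VAR_BYTES = 5 * 1024;
const MAX_VARS = { free: 64, paid: 128 };

// Returns a list of problems; an empty list means `vars` fits the plan.
function checkEnvVars(vars, plan) {
  const problems = [];
  const entries = Object.entries(vars);
  if (entries.length > MAX_VARS[plan]) {
    problems.push(`too many variables: ${entries.length} > ${MAX_VARS[plan]}`);
  }
  for (const [name, value] of entries) {
    const bytes = new TextEncoder().encode(value).length;
    if (bytes > MAX_VAR_BYTES) {
      problems.push(`${name} is ${bytes} bytes (limit 5 KB)`);
    }
  }
  return problems;
}

console.log(checkEnvVars({ API_HOST: "example.com" }, "free")); // []
```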
---
## Worker size
A Worker can be up to 10 MB in size _after compression_ on the Workers Paid plan, and up to 3 MB on the Workers Free plan.
You can assess the size of your Worker bundle after compression by performing a dry-run with `wrangler` and reviewing the final compressed (`gzip`) size output by `wrangler`:
```sh
wrangler deploy --outdir bundled/ --dry-run
```
```sh output
# Output will resemble the below:
Total Upload: 259.61 KiB / gzip: 47.23 KiB
```
Note that larger Worker bundles can impact the start-up time of the Worker, as the Worker needs to be loaded into memory. You should consider removing unnecessary dependencies and/or using [Workers KV](/kv/), a [D1 database](/d1/) or [R2](/r2/) to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code.
---
## Worker startup time
A Worker must parse and execute its global scope (top-level code outside of any handlers) within 400 ms. Worker size can impact startup because there is more code to parse and evaluate. Avoiding expensive code in the global scope also keeps startup efficient.
You can measure your Worker's startup time by deploying it to Cloudflare using [Wrangler](/workers/wrangler/). When you run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`, Wrangler will output the startup time of your Worker in the command-line output, using the `startup_time_ms` field in the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/).
If you are having trouble staying under this limit, consider [profiling using DevTools](/workers/observability/dev-tools/) locally to learn how to optimize your code.
---
## Number of Workers
You can have up to 500 Workers on your account on the Workers Paid plan, and up to 100 Workers on the Workers Free plan.
If you need more than 500 Workers, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/).
---
## Routes and domains
### Number of routes per zone
Each zone has a limit of 1,000 [routes](/workers/configuration/routing/routes/). If you require more than 1,000 routes on your zone, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit.
### Number of custom domains per zone
Each zone has a limit of 100 [custom domains](/workers/configuration/routing/custom-domains/). If you require more than 100 custom domains on your zone, consider using a wildcard [route](/workers/configuration/routing/routes/) or request an increase to this limit.
### Number of routed zones per Worker
When configuring [routing](/workers/configuration/routing/), the maximum number of zones that can be referenced by a Worker is 1,000. If you require more than 1,000 zones on your Worker, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit.
---
## Image Resizing with Workers
When using Image Resizing with Workers, refer to [Image Resizing documentation](/images/transform-images/) for more information on the applied limits.
---
## Log size
You can emit a maximum of 128 KB of data (across `console.log()` statements, exceptions, request metadata and headers) to the console for a single request. After you exceed this limit, further context associated with the request will not be recorded in logs, will not appear when tailing the logs of your Worker, and will not be available to a [Tail Worker](/workers/observability/logs/tail-workers/).
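To stay under the limit, you can truncate large values before logging them. The sketch below trims a string to a byte budget (the 8 KB default is an arbitrary choice, and `truncateForLog` is a hypothetical helper); remember that the 128 KB budget covers all log data for the request, not only your own `console.log()` calls:

```javascript
// Trim a string so its UTF-8 encoding fits within `maxBytes`.
// The 8 KB default is an arbitrary per-message budget, chosen to
// leave headroom under the 128 KB per-request log limit.
function truncateForLog(text, maxBytes = 8 * 1024) {
  const encoded = new TextEncoder().encode(text);
  if (encoded.length <= maxBytes) return text;
  // Decode only the first maxBytes, dropping any split trailing character.
  const head = new TextDecoder().decode(encoded.slice(0, maxBytes));
  return head.replace(/\uFFFD+$/, "") + "…[truncated]";
}

console.log(truncateForLog("short message")); // short message
```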
Refer to the [Workers Trace Event Logpush documentation](/workers/observability/logs/logpush/#limits) for information on the maximum size of fields sent to logpush destinations.
---
## Unbound and Bundled plan limits
:::note
Unbound and Bundled plans have been deprecated and are no longer available for new accounts.
:::
If your Worker is on an Unbound plan, your limits are exactly the same as the Workers Paid plan.
If your Worker is on a Bundled plan, your limits are the same as the Workers Paid plan except for the following differences:
* Your limit for [subrequests](/workers/platform/limits/#subrequests) is 50/request
* Your limit for [CPU time](/workers/platform/limits/#cpu-time) is 50ms for HTTP requests and 50ms for [Cron Triggers](/workers/configuration/cron-triggers/)
* You have no [Duration](/workers/platform/limits/#duration) limits for [Cron Triggers](/workers/configuration/cron-triggers/), [Durable Object alarms](/durable-objects/api/alarms/), or [Queue consumers](/queues/configuration/javascript-apis/#consumer)
* Your Cache API limits for calls/requests is 50
---
## Related resources
Review other developer platform resource limits.
- [KV limits](/kv/platform/limits/)
- [Durable Object limits](/durable-objects/platform/limits/)
- [Queues limits](/queues/platform/limits/)
---
# Choose a data or storage product
URL: https://developers.cloudflare.com/workers/platform/storage-options/
import { Render } from "~/components";
Cloudflare Workers support a range of storage and database options for persisting different types of data across different use-cases, from key-value stores (like [Workers KV](/kv/)) through to SQL databases (such as [D1](/d1/)). This guide describes the use-cases suited to each storage option, as well as their performance and consistency properties.
:::note[Pages Functions]
Storage options can also be used by your front-end application built with Cloudflare Pages. For more information on available storage options for Pages applications, refer to the [Pages Functions bindings documentation](/pages/functions/bindings/).
:::
Available storage and persistency products include:
- [Workers KV](#workers-kv) for key-value storage.
- [R2](#r2) for object storage, including use-cases where S3 compatible storage is required.
- [Durable Objects](#durable-objects) for transactional, globally coordinated storage.
- [D1](#d1) as a relational, SQL-based database.
- [Queues](#queues) for job queueing, batching and inter-Service (Worker to Worker) communication.
- [Hyperdrive](/hyperdrive/) for connecting to and speeding up access to existing hosted and on-premises databases.
- [Analytics Engine](/analytics/analytics-engine/) for storing and querying (using SQL) time-series data and product metrics at scale.
- [Vectorize](/vectorize/) for vector search and storing embeddings from [Workers AI](/workers-ai/).
Applications built on the Workers platform may combine one or more storage components as they grow, scale or as requirements demand.
## Choose a storage product
## Performance and consistency
The following table highlights the performance and consistency characteristics of the primary storage offerings available to Cloudflare Workers:
| Feature | Workers KV | R2 | Durable Objects | D1 |
| --------------------------- | ------------------------------------------------ | ------------------------------------- | -------------------------------- | --------------------------------------------------- |
| Maximum storage per account | Unlimited1 | Unlimited2 | 50 GiB | 250 GiB3 |
| Storage grouping name | Namespace | Bucket | Durable Object | Database |
| Maximum size per value | 25 MiB | 5 TiB per object | 128 KiB per value | 10 GiB per database4 |
| Consistency model | Eventual: updates take up to 60s to be reflected | Strong (read-after-write)5 | Serializable (with transactions) | Serializable (no replicas) / Causal (with replicas) |
| Supported APIs | Workers, HTTP/REST API | Workers, S3 compatible | Workers | Workers, HTTP/REST API |
1 Free accounts are limited to 1 GiB of KV storage.
2 Free accounts are limited to 10 GB of R2 storage.
3 Free accounts are limited to 5 GiB of database storage.
4 Free accounts are limited to 500 MiB per database.
5 Refer to the [R2 documentation](/r2/reference/consistency/) for
more details on R2's consistency model.
## Workers KV
Workers KV is an eventually consistent key-value data store that caches on the Cloudflare global network.
It is ideal for projects that require:
- High volumes of reads and/or repeated reads to the same keys.
- Per-object time-to-live (TTL).
- Distributed configuration.
To get started with KV:
- Read how [KV works](/kv/concepts/how-kv-works/).
- Create a [KV namespace](/kv/concepts/kv-namespaces/).
- Review the [KV Runtime API](/kv/api/).
- Learn about KV [Limits](/kv/platform/limits/).
## R2
R2 is S3-compatible blob storage that allows developers to store large amounts of unstructured data without egress fees associated with typical cloud storage services.
It is ideal for projects that require:
- Storage for files which are infrequently accessed.
- Large object storage (for example, gigabytes or more per object).
- Strong consistency per object.
- Asset storage for websites (refer to [caching guide](/r2/buckets/public-buckets/#caching))
To get started with R2:
- Read the [Get started guide](/r2/get-started/).
- Learn about R2 [Limits](/r2/platform/limits/).
- Review the [R2 Workers API](/r2/api/workers/workers-api-reference/).
## Durable Objects
Durable Objects provide low-latency coordination and consistent storage for the Workers platform through global uniqueness and a transactional storage API.
- Global Uniqueness guarantees that there will be a single instance of a Durable Object class with a given ID running at once, across the world. Requests for a Durable Object ID are routed by the Workers runtime to the Cloudflare data center that owns the Durable Object.
- The transactional storage API provides strongly consistent key-value storage to the Durable Object. Each Object can only read and modify keys associated with that Object. Execution of a Durable Object is single-threaded, but multiple request events may still be processed out-of-order from how they arrived at the Object.
It is ideal for projects that require:
- Real-time collaboration (such as a chat application or a game server).
- Consistent storage.
- Data locality.
To get started with Durable Objects:
- Read the [introductory blog post](https://blog.cloudflare.com/introducing-workers-durable-objects/).
- Review the [Durable Objects documentation](/durable-objects/).
- Get started with [Durable Objects](/durable-objects/get-started/).
- Learn about Durable Objects [Limits](/durable-objects/platform/limits/).
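A minimal Durable Object sketch using the transactional storage API (the `Counter` class is illustrative; a real deployment also needs a Durable Object binding and migration in your Wrangler configuration):

```javascript
export class Counter {
	constructor(state, env) {
		this.state = state;
	}

	async fetch(request) {
		// Storage reads and writes are strongly consistent and scoped to this
		// single, globally unique object instance.
		let count = (await this.state.storage.get("count")) ?? 0;
		count += 1;
		await this.state.storage.put("count", count);
		return new Response(String(count));
	}
}
```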
## D1
[D1](/d1/) is Cloudflare’s native serverless database. With D1, you can create a database by importing data or defining your tables and writing your queries within a Worker or through the API.
D1 is ideal for:
- Persistent, relational storage for user data, account data, and other structured datasets.
- Use cases that require ad-hoc querying across your data (using SQL).
- Workloads with a high ratio of reads to writes (most web applications).
To get started with D1:
- Read [the documentation](/d1/).
- Follow the [Get started guide](/d1/get-started/) to provision your first D1 database.
- Review the [D1 Workers Binding API](/d1/worker-api/).
:::note
If your working data size exceeds 10 GB (the maximum size for a D1 database), consider splitting the database into multiple, smaller D1 databases.
:::
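A sketch of querying D1 through the Workers Binding API (the `DB` binding name and the `users` table are hypothetical):

```javascript
const worker = {
	async fetch(request, env) {
		// DB is an assumed D1 binding; users is a hypothetical table.
		// Bound parameters keep the query safe from SQL injection.
		const { results } = await env.DB.prepare(
			"SELECT id, name FROM users WHERE id = ?",
		)
			.bind(1)
			.all();
		return Response.json(results);
	},
};

export default worker;
```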
## Queues
Cloudflare Queues allows developers to send and receive messages with guaranteed delivery. It integrates with [Cloudflare Workers](/workers), offers at-least-once delivery and message batching, and does not charge for egress bandwidth.
Queues is ideal for:
- Offloading work from a request to be processed later.
- Sending data from Worker to Worker (inter-Service communication).
- Buffering or batching data before writing to upstream systems, including third-party APIs or [Cloudflare R2](/queues/examples/send-errors-to-r2/).
To get started with Queues:
- [Set up your first queue](/queues/get-started/).
- Learn more [about how Queues works](/queues/reference/how-queues-works/).
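A producer/consumer sketch (the `MY_QUEUE` binding name is an assumption — it must match a queue binding in your Wrangler configuration):

```javascript
const worker = {
	// Producer: enqueue work during the request and respond immediately.
	async fetch(request, env) {
		await env.MY_QUEUE.send({ url: request.url, receivedAt: Date.now() });
		return new Response("Accepted", { status: 202 });
	},

	// Consumer: process messages later, in batches.
	async queue(batch, env) {
		for (const message of batch.messages) {
			console.log("Processing", message.body.url);
			message.ack();
		}
	},
};

export default worker;
```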
## Hyperdrive
Hyperdrive is a service that accelerates queries you make to existing databases, making it faster to access your data from across the globe, irrespective of your users’ location.
Hyperdrive allows you to:
- Connect to an existing database from Workers without connection overhead.
- Cache frequent queries across Cloudflare's global network to reduce response times on highly trafficked content.
- Reduce load on your origin database with connection pooling.
To get started with Hyperdrive:
- [Connect Hyperdrive](/hyperdrive/get-started/) to your existing database.
- Learn more [about how Hyperdrive speeds up your database queries](/hyperdrive/configuration/how-hyperdrive-works/).
## Analytics Engine
Analytics Engine is Cloudflare's time-series and metrics database that allows you to write unlimited-cardinality analytics at scale using a built-in API to write data points from Workers and query that data using SQL directly.
Analytics Engine allows you to:
- Expose custom analytics to your own customers
- Build usage-based billing systems
- Understand the health of your service on a per-customer or per-user basis
- Add instrumentation to frequently called code paths, without impacting performance or overwhelming external analytics systems with events
Cloudflare uses Analytics Engine internally to store and produce per-product metrics for products like D1 and R2 at scale.
To get started with Analytics Engine:
- Learn how to [get started with Analytics Engine](/analytics/analytics-engine/get-started/)
- See [an example of writing time-series data to Analytics Engine](/analytics/analytics-engine/recipes/usage-based-billing-for-your-saas-product/)
- Understand the [SQL API](/analytics/analytics-engine/sql-api/) for reading data from your Analytics Engine datasets
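A sketch of writing a data point from a Worker (the `METRICS` binding name and the customer identifier are illustrative):

```javascript
const worker = {
	async fetch(request, env) {
		// METRICS is an assumed Analytics Engine dataset binding.
		// blobs hold dimensions, doubles hold numeric values, and the index
		// enables efficient per-customer queries through the SQL API.
		env.METRICS.writeDataPoint({
			blobs: [request.method, new URL(request.url).pathname],
			doubles: [1],
			indexes: ["customer-1234"], // hypothetical customer identifier
		});
		return new Response("ok");
	},
};

export default worker;
```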
## Vectorize
Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers and [Workers AI](/workers-ai/).
Vectorize allows you to:
- Store embeddings from any vector embeddings model (Bring Your Own embeddings) for semantic search and classification tasks.
- Add context to Large Language Model (LLM) queries by using vector search as part of a [Retrieval Augmented Generation](/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/) (RAG) workflow.
- [Filter on vector metadata](/vectorize/reference/metadata-filtering/) to reduce the search space and return more relevant results.
To get started with Vectorize:
- [Create your first vector database](/vectorize/get-started/intro/).
- Combine [Workers AI and Vectorize](/vectorize/get-started/embeddings/) to generate, store and query text embeddings.
- Learn more about [how vector databases work](/vectorize/reference/what-is-a-vector-database/).
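A sketch of querying a Vectorize index from a Worker (the `VECTORIZE` binding name is an assumption, and the short query vector is illustrative — real vectors match your embedding model's dimensions):

```javascript
const worker = {
	async fetch(request, env) {
		// VECTORIZE is an assumed Vectorize index binding. In a real application,
		// the query vector would come from an embeddings model such as Workers AI.
		const queryVector = [0.12, 0.45, 0.67]; // illustrative, not a real embedding
		const matches = await env.VECTORIZE.query(queryVector, { topK: 3 });
		return Response.json(matches);
	},
};

export default worker;
```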
:::note[SQLite in Durable Objects Beta]
A new beta version of Durable Objects is available in which each Durable Object has a private, embedded SQLite database. When deploying a new Durable Object class, users can opt in to SQL storage in order to access [Storage SQL API methods](/durable-objects/api/sql-storage/#exec). Otherwise, a Durable Object class has the standard, private key-value storage.
:::
## D1 vs Hyperdrive
D1 is a standalone, serverless database that provides a SQL API, using SQLite's SQL semantics, to store and access your relational data.
Hyperdrive is a service that lets you connect to your existing, regional PostgreSQL databases and improves database performance by optimizing them for global, scalable data access from Workers.
- If you are building a new project on Workers or are considering migrating your data, use D1.
- If you are building a Workers project with an existing PostgreSQL database, use Hyperdrive.
:::note
You cannot use D1 with Hyperdrive.
However, D1 does not need to be used with Hyperdrive because it does not have slow connection setups which would benefit from Hyperdrive's connection pooling. D1 data can also be cached within Workers using the [Cache API](/workers/runtime-apis/cache/).
:::
---
# Pricing
URL: https://developers.cloudflare.com/workers/platform/pricing/
import { GlossaryTooltip, Render } from "~/components";
By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions and Workers KV. Read more about the [Free plan limits](/workers/platform/limits/#worker-limits).
The Workers Paid plan includes Workers, Pages Functions, Workers KV, and Durable Objects usage for a minimum charge of $5 USD per month for an account. The plan includes increased initial usage allotments, with clear charges for usage that exceeds the base plan.
All included usage is on a monthly basis.
:::note[Pages Functions billing]
All [Pages Functions](/pages/functions/) are billed as Workers. All pricing and inclusions in this document apply to Pages Functions. Refer to [Functions Pricing](/pages/functions/pricing/) for more information on Pages Functions pricing.
:::
## Workers
Users on the Workers Paid plan have access to the Standard usage model. Workers Enterprise accounts are billed based on the usage model specified in their contract. To switch to the Standard usage model, reach out to your CSM.
| | Requests1, 2 | Duration | CPU time |
| ------------ | ------------------------------------------------------------------ | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Free** | 100,000 per day | No charge for duration | 10 milliseconds of CPU time per invocation |
| **Standard** | 10 million included per month <br /> +$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month <br /> +$0.02 per additional million CPU milliseconds <br /> Max of 30 seconds of CPU time per invocation <br /> Max of 15 minutes of CPU time per [Cron Trigger](/workers/configuration/cron-triggers/) or [Queue Consumer](/queues/configuration/javascript-apis/#consumer) invocation |
1 Inbound requests to your Worker. Cloudflare does not bill for
[subrequests](/workers/platform/limits/#subrequests) you make from your Worker.
2 Requests to static assets are free and unlimited.
### Example pricing
#### Example 1
A Worker that serves 15 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:
| | Monthly Costs | Formula |
| ---------------- | ------------- | --------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00 | |
| **Requests** | $1.50 | (15,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 |
| **CPU time** | $1.50 | ((7 ms of CPU time per request \* 15,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total** | $8.00 | |
#### Example 2
A project that serves 15 million requests per month, with 80% (12 million) requests serving [static assets](/workers/static-assets/) and the remaining invoking dynamic Worker code. The Worker uses an average of 7 milliseconds (ms) of CPU time per request.
Requests to static assets are free and unlimited. This project would have the following estimated costs:
| | Monthly Costs | Formula |
| ----------------------------- | ------------- | ------- |
| **Subscription** | $5.00 | |
| **Requests to static assets** | $0 | - |
| **Requests to Worker** | $0 | - |
| **CPU time** | $0 | - |
| **Total**                     | $5.00         |         |
#### Example 3
A Worker that runs on a [Cron Trigger](/workers/configuration/cron-triggers/) once an hour to collect data from multiple APIs, process the data and create a report.
- 720 requests/month
- 3 minutes (180,000ms) of CPU time per request
In this scenario, the estimated monthly cost would be calculated as:
| | Monthly Costs | Formula |
| ---------------- | ------------- | -------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00 | |
| **Requests** | $0.00 | - |
| **CPU time** | $1.99 | ((180,000 ms of CPU time per request \* 720 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total** | $6.99 | |
#### Example 4
A high traffic Worker that serves 100 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:
| | Monthly Costs | Formula |
| ---------------- | ------------- | ---------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00 | |
| **Requests** | $27.00 | (100,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 |
| **CPU time** | $13.40 | ((7 ms of CPU time per request \* 100,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total** | $45.40 | |
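The Standard-plan formulas used in the examples above can be sketched as a small calculator (rates are those quoted in this document; verify them against the current pricing page before relying on the numbers):

```javascript
// Estimate a monthly Workers Standard bill from request volume and
// average CPU time per request, using the rates quoted above.
function estimateMonthlyCost(requests, avgCpuMsPerRequest) {
	const SUBSCRIPTION = 5.0;
	const INCLUDED_REQUESTS = 10_000_000;
	const PER_MILLION_REQUESTS = 0.3;
	const INCLUDED_CPU_MS = 30_000_000;
	const PER_MILLION_CPU_MS = 0.02;

	const requestCost =
		(Math.max(requests - INCLUDED_REQUESTS, 0) / 1_000_000) * PER_MILLION_REQUESTS;
	const cpuCost =
		(Math.max(requests * avgCpuMsPerRequest - INCLUDED_CPU_MS, 0) / 1_000_000) *
		PER_MILLION_CPU_MS;

	return SUBSCRIPTION + requestCost + cpuCost;
}
```

For Example 1, `estimateMonthlyCost(15_000_000, 7)` yields $8.00; for Example 4, `estimateMonthlyCost(100_000_000, 7)` yields $45.40.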
:::note[Custom limits]
To prevent accidental runaway bills or denial-of-wallet attacks, configure the maximum amount of CPU time that can be used per invocation by [defining limits in your Worker's Wrangler file](/workers/wrangler/configuration/#limits), or via the Cloudflare dashboard (**Workers & Pages** > Select your Worker > **Settings** > **CPU Limits**).
If you had a Worker on the Bundled usage model prior to the migration to Standard pricing on March 1, 2024, Cloudflare has automatically added a 50 ms CPU limit on your Worker.
:::
:::note
Some Workers Enterprise customers maintain the ability to change usage models.
Usage models may be changed at the individual Worker level:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings** > **Usage Model**.
To change your default account-wide usage model:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. Find **Usage Model** on the right-side menu > **Change**.
Existing Workers will not be impacted when changing the default usage model. You may change the usage model for individual Workers without affecting your account-wide default usage model.
:::
## Workers Logs
:::note[Workers Logs documentation]
For more information and [examples of Workers Logs billing](/workers/observability/logs/workers-logs/#example-pricing), refer to the [Workers Logs documentation](/workers/observability/logs/workers-logs).
:::
## Workers Trace Events Logpush
Workers Logpush is only available on the Workers Paid plan.
| | Paid plan |
| --------------------- | ---------------------------------- |
| Requests 1 | 10 million / month, +$0.05/million |
1 Workers Logpush charges for request logs that reach your end
destination after applying filtering or sampling.
## Workers KV
:::note[KV documentation]
To learn more about KV, refer to the [KV documentation](/kv/).
:::
## Queues
:::note[Queues billing examples]
To learn more about Queues pricing and review billing examples, refer to [Queues Pricing](/queues/platform/pricing/).
:::
## D1
D1 is available on both the [Workers Free](#workers) and [Workers Paid](#workers) plans.
:::note[D1 billing]
Refer to [D1 Pricing](/d1/platform/pricing/) to learn more about how D1 is billed.
:::
## Durable Objects
:::note[Durable Objects billing examples]
For more information and [examples of Durable Objects billing](/durable-objects/platform/pricing/#durable-objects-billing-examples), refer to [Durable Objects Pricing](/durable-objects/platform/pricing/).
:::
## Durable Objects Storage API
## Vectorize
Vectorize is currently only available on the Workers Paid plan.
## Service bindings
Requests made from your Worker to another worker via a [Service Binding](/workers/runtime-apis/bindings/service-bindings/) do not incur additional request fees. This allows you to split apart functionality into multiple Workers, without incurring additional costs.
For example, if Worker A makes a subrequest to Worker B via a Service Binding, or calls an RPC method provided by Worker B via a Service Binding, this is billed as:
- One request (for the initial invocation of Worker A)
- The total amount of CPU time used across both Worker A and Worker B
:::note[Only available on Workers Standard pricing]
If your Worker is on the deprecated Bundled or Unbound pricing plans, incoming requests from Service Bindings are charged the same as requests from the Internet. In the example above, you would be charged for two requests, one to Worker A, and one to Worker B.
:::
## Fine Print
Workers Paid plan is separate from any other Cloudflare plan (Free, Professional, Business) you may have. If you are an Enterprise customer, reach out to your account team to confirm pricing details.
Only requests that hit a Worker count against your limits and your bill. Because Cloudflare Workers runs before the Cloudflare cache, even requests served from cache still incur Workers costs. Refer to [Limits](/workers/platform/limits/) to review definitions and behavior after a limit is hit.
---
# Workers for Platforms
URL: https://developers.cloudflare.com/workers/platform/workers-for-platforms/
Deploy custom code on behalf of your users, or let your users directly deploy their own code to your platform, without having to manage the underlying infrastructure.
---
# Errors and exceptions
URL: https://developers.cloudflare.com/workers/observability/errors/
import { TabItem, Tabs } from "~/components";
Review Workers errors and exceptions.
## Error pages generated by Workers
When a Worker running in production has an error that prevents it from returning a response, the client will receive an error page with an error code, defined as follows:
| Error code | Meaning |
| ---------- | ----------------------------------------------------------------------------------------------------------------- |
| `1101` | Worker threw a JavaScript exception. |
| `1102` | Worker exceeded [CPU time limit](/workers/platform/limits/#cpu-time). |
| `1103`     | The owner of this Worker needs to contact [Cloudflare Support](/support/contacting-cloudflare-support/). |
| `1015` | Worker hit the [burst rate limit](/workers/platform/limits/#burst-rate). |
| `1019` | Worker hit [loop limit](#loop-limit). |
| `1021` | Worker has requested a host it cannot access. |
| `1022` | Cloudflare has failed to route the request to the Worker. |
| `1024` | Worker cannot make a subrequest to a Cloudflare-owned IP address. |
| `1027` | Worker exceeded free tier [daily request limit](/workers/platform/limits/#daily-request). |
| `1042` | Worker tried to fetch from another Worker on the same zone, which is [unsupported](/workers/runtime-apis/fetch/). |
Other `11xx` errors generally indicate a problem with the Workers runtime itself. Refer to the [status page](https://www.cloudflarestatus.com) if you are experiencing an error.
### Loop limit
A Worker cannot call itself or another Worker more than 16 times. In order to prevent infinite loops between Workers, the [`CF-EW-Via`](/fundamentals/reference/http-headers/#cf-ew-via) header's value is an integer that indicates how many invocations are left. Every time a Worker is invoked, the integer will decrement by 1. If the count reaches zero, a [`1019`](#error-pages-generated-by-workers) error is returned.
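A defensive sketch that checks the remaining invocation budget before doing more work, failing fast instead of letting the runtime return a `1019` (the `508` status code and the fallback value are illustrative choices):

```javascript
const worker = {
	async fetch(request) {
		// CF-EW-Via carries the number of Worker invocations remaining;
		// it starts at the limit (16) and decrements on each invocation.
		const remaining = parseInt(request.headers.get("CF-EW-Via") ?? "16", 10);
		if (remaining <= 1) {
			return new Response("Worker invocation limit reached", { status: 508 });
		}
		return new Response("ok");
	},
};

export default worker;
```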
### "The script will never generate a response" errors
Some requests may return a 1101 error with `The script will never generate a response` in the error message. This occurs when the Workers runtime detects that all the code associated with the request has executed and no events are left in the event loop, but a Response has not been returned.
#### Cause 1: Unresolved Promises
This is most commonly caused by relying on a Promise that is never resolved or rejected, which is required to return a Response. To debug, look for Promises within your code or dependencies' code that block a Response, and ensure they are resolved or rejected.
In browsers and other JavaScript runtimes, equivalent code will hang indefinitely, leading to both bugs and memory leaks. The Workers runtime throws an explicit error to help you debug.
In the example below, the Response relies on a Promise resolution that never happens. Uncommenting the `resolve` callback solves the issue.
```js null {9}
export default {
fetch(req) {
let response = new Response("Example response");
let { promise, resolve } = Promise.withResolvers();
// If the promise is not resolved, the Workers runtime will
// recognize this and throw an error.
// setTimeout(resolve, 0)
return promise.then(() => response);
},
};
```
You can prevent this by enforcing the [`no-floating-promises` eslint rule](https://typescript-eslint.io/rules/no-floating-promises/), which reports when a Promise is created and not properly handled.
#### Cause 2: WebSocket connections that are never closed
If a WebSocket is missing the proper code to close its server-side connection, the Workers runtime will throw a `script will never generate a response` error. In the example below, the `'close'` event from the client is not properly handled by calling `server.close()`, and the error is thrown. In order to avoid this, ensure that the WebSocket's server-side connection is properly closed via an event listener or other server-side logic.
```js null {10}
async function handleRequest(request) {
let webSocketPair = new WebSocketPair();
let [client, server] = Object.values(webSocketPair);
server.accept();
server.addEventListener("close", () => {
// This missing line would keep a WebSocket connection open indefinitely
// and results in "The script will never generate a response" errors
// server.close();
});
return new Response(null, {
status: 101,
webSocket: client,
});
}
```
### "Illegal invocation" errors
The error message `TypeError: Illegal invocation: function called with incorrect this reference` can be a source of confusion.
This is typically caused by calling a function that calls `this`, but the value of `this` has been lost.
For example, given an object `obj` with a method `obj.foo()` whose logic relies on `this`, calling `obj.foo();` ensures that `this` correctly references `obj`. However, assigning the method to a variable (`const func = obj.foo;`) and then calling it (`func();`) leaves `this` as `undefined`, because `this` is lost when the method is invoked as a standalone function. This is standard behavior in JavaScript.
In practice, this error is often seen when destructuring runtime-provided JavaScript objects whose methods rely on the presence of `this`, such as `ctx`.
The following code will error:
```js
export default {
async fetch(request, env, ctx) {
// destructuring ctx makes waitUntil lose its 'this' reference
const { waitUntil } = ctx;
// waitUntil errors, as it has no 'this'
waitUntil(somePromise);
return fetch(request);
},
};
```
To avoid the error, either call the method directly on its object or re-bind the function to its original context.
The following code will run properly:
```js
export default {
async fetch(request, env, ctx) {
// directly calling the method on ctx avoids the error
ctx.waitUntil(somePromise);
// alternatively re-binding to ctx via apply, call, or bind avoids the error
const { waitUntil } = ctx;
waitUntil.apply(ctx, [somePromise]);
waitUntil.call(ctx, somePromise);
const reboundWaitUntil = waitUntil.bind(ctx);
reboundWaitUntil(somePromise);
return fetch(request);
},
};
```
### Cannot perform I/O on behalf of a different request
```
Uncaught (in promise) Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler.
```
This error occurs when you attempt to share input/output (I/O) objects (such as streams, requests, or responses) created by one invocation of your Worker in the context of a different invocation.
In Cloudflare Workers, each invocation is handled independently and has its own execution context. This design ensures optimal performance and security by isolating requests from one another. When you try to share I/O objects between different invocations, you break this isolation. Since these objects are tied to the specific request they were created in, accessing them from another request's handler is not allowed and leads to the error.
This error is most commonly caused by attempting to cache an I/O object, like a [Request](/workers/runtime-apis/request/) in global scope, and then access it in a subsequent request. For example, if you create a Worker and run the following code in local development, and make two requests to your Worker in quick succession, you can reproduce this error:
```js
let cachedResponse = null;
export default {
async fetch(request, env, ctx) {
if (cachedResponse) {
return cachedResponse;
}
cachedResponse = new Response("Hello, world!");
await new Promise((resolve) => setTimeout(resolve, 5000)); // Sleep for 5s to demonstrate this particular error case
return cachedResponse;
},
};
```
You can fix this by instead storing only the data in global scope, rather than the I/O object itself:
```js
let cachedData = null;
export default {
async fetch(request, env, ctx) {
if (cachedData) {
return new Response(cachedData);
}
const response = new Response("Hello, world!");
cachedData = await response.text();
return new Response(cachedData, response);
},
};
```
If you need to share state across requests, consider using [Durable Objects](/durable-objects/). If you need to cache data across requests, consider using [Workers KV](/kv/).
## Errors on Worker upload
These errors occur when a Worker is uploaded or modified.
| Error code | Meaning |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `10006` | Could not parse your Worker's code. |
| `10007` | Worker or [workers.dev subdomain](/workers/configuration/routing/workers-dev/) not found. |
| `10015` | Account is not entitled to use Workers. |
| `10016` | Invalid Worker name. |
| `10021` | Validation Error. Refer to [Validation Errors](/workers/observability/errors/#validation-errors-10021) for details. |
| `10026` | Could not parse request body. |
| `10027`    | Your Worker exceeded the size limit of XX MB (for more details, see [Worker size limits](/workers/platform/limits/#worker-size)). |
| `10035`    | Multiple attempts to modify a resource at the same time. |
| `10037` | An account has exceeded the number of [Workers allowed](/workers/platform/limits/#number-of-workers). |
| `10052` | A [binding](/workers/runtime-apis/bindings/) is uploaded without a name. |
| `10054`    | An environment variable or secret exceeds the [size limit](/workers/platform/limits/#environment-variables). |
| `10055`    | The number of environment variables or secrets exceeds the [per-Worker limit](/workers/platform/limits/#environment-variables). |
| `10056` | [Binding](/workers/runtime-apis/bindings/) not found. |
| `10068` | The uploaded Worker has no registered [event handlers](/workers/runtime-apis/handlers/). |
| `10069` | The uploaded Worker contains [event handlers](/workers/runtime-apis/handlers/) unsupported by the Workers runtime. |
### Validation Errors (10021)
The 10021 error code includes all errors that occur when you attempt to deploy a Worker, and Cloudflare then attempts to load and run the top-level scope (everything that happens before your Worker's [handler](/workers/runtime-apis/handlers/) is invoked). For example, if you attempt to deploy a broken Worker with invalid JavaScript that would throw a `SyntaxError` — Cloudflare will not deploy your Worker.
Specific error cases include but are not limited to:
#### Worker exceeded the upload size limit
A Worker can be up to 10 MB in size after compression on the Workers Paid plan, and up to 3 MB on the Workers Free plan.
To reduce the upload size of a Worker, you should consider removing unnecessary dependencies and/or using Workers KV, a D1 database or R2 to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code.
Another method to reduce a Worker's file size is to split its functionality across multiple Workers and connect them using [Service bindings](/workers/runtime-apis/bindings/service-bindings/).
#### Script startup exceeded CPU time limit
This means that you are doing work in the top-level scope of your Worker that takes [more than the startup time limit (400ms)](/workers/platform/limits/#worker-startup-time) of CPU time.
This is usually a sign of a bug and/or large performance problem with your code or a dependency you rely on. It's not typical to use more than 400ms of CPU time when your app starts. The more time your Worker's code spends parsing and executing top-level scope, the slower your Worker will be when you deploy a code change or a new [isolate](/workers/reference/how-workers-works/) is created.
This error is most commonly caused by attempting to perform expensive initialization work directly in the top-level (global) scope, rather than either at build time or when your Worker's handler is invoked. For example, attempting to initialize an app by generating or consuming a large schema.
To analyze what is consuming so much CPU time, open Chrome DevTools for your Worker and review the Profiling and/or Performance panels to understand where time is spent. Is there anything that consumes a large amount of CPU time, especially the first time you make a request to your Worker?
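A sketch of moving hypothetical expensive setup out of top-level scope so that it runs on the first request instead of at script startup (`buildSchema` stands in for any costly initialization):

```javascript
// Hypothetical expensive setup; imagine schema generation or parsing a
// large configuration file.
function buildSchema() {
	return { builtAt: Date.now() };
}

let cachedSchema = null;

function getSchema() {
	// Lazily initialize on first use instead of in top-level scope, so deploys
	// and new isolates stay well under the 400 ms startup CPU limit.
	if (cachedSchema === null) {
		cachedSchema = buildSchema();
	}
	return cachedSchema;
}

const worker = {
	async fetch(request) {
		const schema = getSchema();
		return Response.json({ ready: schema !== null });
	},
};

export default worker;
```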
## Runtime errors
Runtime errors occur within the runtime, do not generate an error page, and are not visible to the end user. Detect runtime errors by reviewing your Worker's logs.
| Error message | Meaning |
| -------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| `Network connection lost` | Connection failure. Catch a `fetch` or binding invocation and retry it. |
| `Memory limit` `would be exceeded` `before EOF` | Trying to read a stream or buffer that would take you over the [memory limit](/workers/platform/limits/#memory). |
| `daemonDown` | A temporary problem invoking the Worker. |
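For `Network connection lost`, a minimal retry wrapper sketch around a `fetch` or binding invocation (the attempt count is an illustrative choice, not a platform recommendation):

```javascript
// Retry a fetch or binding invocation a few times before giving up,
// as suggested for transient "Network connection lost" failures.
async function withRetry(operation, attempts = 3) {
	let lastError;
	for (let i = 0; i < attempts; i++) {
		try {
			return await operation();
		} catch (error) {
			lastError = error;
		}
	}
	throw lastError;
}
```

Usage: `const response = await withRetry(() => fetch("https://example.com/"));`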
## Identify errors: Workers Metrics
To review whether your application is experiencing any downtime or returning any errors:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In **Account Home**, select **Workers & Pages**.
3. In **Overview**, select your Worker and review your Worker's metrics.
A `responseStreamDisconnected` event `outcome` occurs when one end of the connection hangs up during the deferred proxying stage of a Worker request flow. This is regarded as an error for request metrics, and presents in logs as a non-error log entry. It commonly appears for longer lived connections such as WebSockets.
## Debug exceptions
After you have identified your Workers application is returning exceptions, use `wrangler tail` to inspect and fix the exceptions.
Exceptions will show up under the `exceptions` field in the JSON returned by `wrangler tail`. After you have identified the exception that is causing errors, redeploy your code with a fix, and continue tailing the logs to confirm that it is fixed.
## Set up a logging service
A Worker can make HTTP requests to any HTTP service on the public Internet. You can use a service like [Sentry](https://sentry.io) to collect error logs from your Worker, by making an HTTP request to the service to report the error. Refer to your service’s API documentation for details on what kind of request to make.
When using an external logging strategy, remember that outstanding asynchronous tasks are canceled as soon as a Worker finishes sending its main response body to the client. To ensure that a logging subrequest completes, pass the request promise to [`event.waitUntil()`](https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil). For example:
```js
export default {
async fetch(request, env, ctx) {
function postLog(data) {
return fetch("https://log-service.example.com/", {
method: "POST",
body: data,
});
}
// Without ctx.waitUntil(), the `postLog` function may or may not complete.
ctx.waitUntil(postLog(stack));
return fetch(request);
},
};
```
```js
addEventListener("fetch", (event) => {
event.respondWith(handleEvent(event));
});
async function handleEvent(event) {
// ...
// Without event.waitUntil(), the `postLog` function may or may not complete.
event.waitUntil(postLog(stack));
return fetch(event.request);
}
function postLog(data) {
return fetch("https://log-service.example.com/", {
method: "POST",
body: data,
});
}
```
## Go to origin on error
By using [`event.passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception), a Workers application will forward requests to your origin if an exception is thrown during the Worker's execution. This allows you to add logging, tracking, or other features with Workers, without degrading your application's functionality.
```js
export default {
async fetch(request, env, ctx) {
ctx.passThroughOnException();
// an error here will return the origin response, as if the Worker wasn't present
return fetch(request);
},
};
```
```js
addEventListener("fetch", (event) => {
	event.passThroughOnException();
	event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
	// An error here will return the origin response, as if the Worker wasn't present.
	// ...
	return fetch(request);
}
```
## Related resources
- [Log from Workers](/workers/observability/logs/) - Learn how to log your Workers.
- [Logpush](/workers/observability/logs/logpush/) - Learn how to push Workers Trace Event Logs to supported destinations.
- [RPC error handling](/workers/runtime-apis/rpc/error-handling/) - Learn how to handle errors from remote-procedure calls.
---
# Observability
URL: https://developers.cloudflare.com/workers/observability/
import { DirectoryListing } from "~/components";
Understand how your Worker projects are performing via logs, traces, and other data sources.
---
# Source maps and stack traces
URL: https://developers.cloudflare.com/workers/observability/source-maps/
import { Render, WranglerConfig } from "~/components"
:::caution
Support for uploading source maps is available now in open beta. Minimum required Wrangler version: 3.46.0.
:::
## Source Maps
To enable source maps, add the following to your Worker's [Wrangler configuration](/workers/wrangler/configuration/):
```toml
upload_source_maps = true
```
When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).
:::note
Miniflare can also [output source maps](https://miniflare.dev/developing/source-maps) for use in local development or [testing](/workers/testing/integration-testing/#miniflares-api).
:::
## Stack traces
When your Worker throws an uncaught exception, we fetch the source map and use it to map the stack trace of the exception back to lines of your Worker’s original source code.
You can then view the stack trace when streaming [real-time logs](/workers/observability/logs/real-time-logs/) or in [Tail Workers](/workers/observability/logs/tail-workers/).
:::note
The source map is retrieved after your Worker invocation completes — it's an asynchronous process that does not impact your Worker's CPU utilization or performance. Source maps are not accessible inside the Worker at runtime: if you `console.log()` the [stack property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/stack) within a Worker, you will not get a deobfuscated stack trace.
:::
When Cloudflare attempts to remap a stack trace to the Worker's source map, it does so line-by-line, remapping as much as possible. If a line of the stack trace cannot be remapped for any reason, Cloudflare will leave that line of the stack trace unchanged, and continue to the next line of the stack trace.
## Related resources
* [Tail Workers](/workers/observability/logs/tail-workers/) - Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
* [Real-time logs](/workers/observability/logs/real-time-logs/) - Learn how to capture Workers logs in real-time.
---
# Metrics and analytics
URL: https://developers.cloudflare.com/workers/observability/metrics-and-analytics/
import { GlossaryTooltip } from "~/components"
There are two graphical sources of information about your Workers traffic at a given time: Workers metrics and zone-based Workers analytics.
Workers metrics can help you diagnose issues and understand your Workers' workloads by showing performance and usage of your Workers. If your Worker runs on a route on a zone, or on a few zones, Workers metrics will show how much traffic your Worker is handling on a per-zone basis, and how many requests your site is getting.
Zone analytics show how much traffic all Workers assigned to a zone are handling.
## Workers metrics
Workers metrics aggregate request data for an individual Worker (if your Worker is running across multiple domains, and on `*.workers.dev`, metrics will aggregate requests across them). To view your Worker's metrics:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Compute (Workers)**.
3. In **Overview**, select your Worker to view its metrics.
There are two metrics that can help you understand the health of your Worker in a given moment: requests success and error metrics, and invocation statuses.
### Requests
The first graph shows historical request counts from the Workers runtime broken down into successful requests, errored requests, and subrequests.
* **Total**: All incoming requests registered by a Worker. Requests blocked by [WAF](https://www.cloudflare.com/waf/) or other security features will not count.
* **Success**: Requests that returned a Success or Client Disconnected invocation status.
* **Errors**: Requests that returned a Script Threw Exception, Exceeded Resources, or Internal Error invocation status — refer to [Invocation Statuses](/workers/observability/metrics-and-analytics/#invocation-statuses) for a breakdown of where your errors are coming from.
Request traffic data may display a drop off near the last few minutes displayed in the graph for time ranges less than six hours. This does not reflect a drop in traffic, but a slight delay in aggregation and metrics delivery.
### Subrequests
Subrequests are requests triggered by calling `fetch` from within a Worker. A subrequest that throws an uncaught error will not be counted.
* **Total**: All subrequests triggered by calling `fetch` from within a Worker.
* **Cached**: The number of cached responses returned.
* **Uncached**: The number of uncached responses returned.
### Wall time per execution
Wall time represents the elapsed time in milliseconds between the start of a Worker invocation and the point at which the Workers runtime determines that no more JavaScript needs to run. Specifically, the wall time per execution chart measures how long the JavaScript context remained open — including time spent waiting on I/O and time spent executing in your Worker's [`waitUntil()`](/workers/runtime-apis/context/#waituntil) handler. Wall time is not the same as the time it takes your Worker to send the final byte of a response back to the client. Wall time can be higher if tasks within `waitUntil()` are still running after the response has been sent, or it can be lower: when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run and close the JavaScript context before all the bytes have passed through and been sent.
The Wall Time per execution chart shows historical wall time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).
### CPU Time per execution
The CPU Time per execution chart shows historical CPU time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). In some cases, higher quantiles may appear to exceed [CPU time limits](/workers/platform/limits/#cpu-time) without generating invocation errors because of a mechanism in the Workers runtime that allows rollover CPU time for requests below the CPU limit.
### Execution duration (GB-seconds)
The Duration per request chart shows historical [duration](/workers/platform/limits/#duration) per Worker invocation. The data is broken down into relevant quantiles, similar to the CPU time chart. Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). Understanding duration on your Worker is especially useful when you are intending to do a significant amount of computation on the Worker itself.
### Invocation statuses
To review invocation statuses:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Workers & Pages**.
3. Select your Worker.
4. Find the **Summary** graph in **Metrics**.
5. Select **Errors**.
Worker invocation statuses indicate whether a Worker executed successfully or failed to generate a response in the Workers runtime. Invocation statuses differ from HTTP status codes. In some cases, a Worker invocation succeeds but does not generate a successful HTTP status because of another error encountered outside of the Workers runtime. Some invocation statuses result in a [Workers error code](/workers/observability/errors/#error-pages-generated-by-workers) being returned to the client.
| Invocation status | Definition | Workers error code | GraphQL field |
| ---------------------- | ---------------------------------------------------------------------------- | ------------------ | ---------------------- |
| Success | Worker executed successfully | | `success` |
| Client disconnected | HTTP client (that is, the browser) disconnected before the request completed | | `clientDisconnected` |
| Worker threw exception | Worker threw an unhandled JavaScript exception | 1101 | `scriptThrewException` |
| Exceeded resources¹ | Worker exceeded runtime limits | 1102, 1027 | `exceededResources` |
| Internal error² | Workers runtime encountered an error | | `internalError` |
¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-limits). The most common cause is excessive CPU time, but it can also be caused by a Worker exceeding startup time or free tier limits.
² The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by any issue with the Worker code nor any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes. If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com](https://www.cloudflarestatus.com/).
To further investigate exceptions, use [`wrangler tail`](/workers/wrangler/commands/#tail).
### Request duration
The request duration chart shows how long it took your Worker to respond to requests, including code execution and time spent waiting on I/O. The request duration chart is currently only available when your Worker has [Smart Placement](/workers/configuration/smart-placement) enabled.
In contrast to [execution duration](/workers/observability/metrics-and-analytics/#execution-duration-gb-seconds), which measures only the time a Worker is active, request duration measures from the time a request comes into a data center until a response is delivered.
The data shows the duration for requests with Smart Placement enabled compared to those with Smart Placement disabled (by default, 1% of requests are routed with Smart Placement disabled). The chart shows a histogram with duration across the x-axis and the percentage of requests that fall into the corresponding duration on the y-axis.
### Metrics retention
Worker metrics can be inspected for up to three months in the past in maximum increments of one week.
## Zone analytics
Zone analytics aggregate request data for all Workers assigned to any [routes](/workers/configuration/routing/routes/) defined for a zone.
To review zone metrics:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select your site.
3. In **Analytics & Logs**, select **Workers**.
Zone data can be scoped by time range within the last 30 days. The dashboard includes charts and information described below.
### Subrequests
This chart shows subrequests — requests triggered by calling `fetch` from within a Worker — broken down by cache status.
* **Uncached**: Requests answered directly by your origin server or other servers responding to subrequests.
* **Cached**: Requests answered by Cloudflare’s [cache](https://www.cloudflare.com/learning/cdn/what-is-caching/). As Cloudflare caches more of your content, it accelerates content delivery and reduces load on your origin.
### Bandwidth
This chart shows historical bandwidth usage for all Workers on a zone broken down by cache status.
### Status codes
This chart shows historical requests for all Workers on a zone broken down by HTTP status code.
### Total requests
This chart shows historical data for all Workers on a zone broken down by successful requests, failed requests, and subrequests. These request types are categorized by HTTP status code where `200`-level requests are successful and `400` to `500`-level requests are failed.
## GraphQL
Worker metrics are powered by GraphQL. Learn more about querying our data sets in the [Querying Workers Metrics with GraphQL tutorial](/analytics/graphql-api/tutorials/querying-workers-metrics/).
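For example, a query like the following retrieves request counts and CPU time quantiles for a single Worker. This is a sketch modeled on that tutorial — the `workersInvocationsAdaptive` dataset and its `sum` and `quantiles` fields follow the tutorial, while the account tag, script name, and date range are placeholders to replace with your own values:

```graphql
query WorkerMetrics {
  viewer {
    accounts(filter: { accountTag: "<ACCOUNT_ID>" }) {
      workersInvocationsAdaptive(
        limit: 100
        filter: {
          scriptName: "my-worker"
          datetime_geq: "2024-01-01T00:00:00Z"
          datetime_leq: "2024-01-02T00:00:00Z"
        }
      ) {
        sum {
          requests
          errors
          subrequests
        }
        quantiles {
          cpuTimeP50
          cpuTimeP99
        }
      }
    }
  }
}
```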
---
# How the Cache works
URL: https://developers.cloudflare.com/workers/reference/how-the-cache-works/
Workers was designed and built on top of Cloudflare's global network to allow developers to interact directly with the Cloudflare cache. The cache can provide ephemeral, data center-local storage, as a convenient way to frequently access static or dynamic content.
By allowing developers to write to the cache, Workers provide a way to customize cache behavior on Cloudflare’s CDN. To learn about the benefits of caching, refer to the Learning Center’s article on [What is Caching?](https://www.cloudflare.com/learning/cdn/what-is-caching/).
Cloudflare Workers run before the cache but can also be used to modify assets after they are returned from the cache. Modifying assets returned from cache makes it possible to sign or personalize responses while also reducing load on an origin and reducing latency to the end user by serving assets from a nearby location.
## Interact with the Cloudflare Cache
Conceptually, there are two ways to interact with Cloudflare’s Cache using a Worker:
- Call to [`fetch()`](/workers/runtime-apis/fetch/) in a Workers script. Requests proxied through Cloudflare are cached even without Workers according to a zone’s default or configured behavior (for example, static assets like files ending in `.jpg` are cached by default). Workers can further customize this behavior by:
- Setting Cloudflare cache rules (that is, operating on the `cf` object of a [request](/workers/runtime-apis/request/)).
- Store responses using the [Cache API](/workers/runtime-apis/cache/) from a Workers script. This allows caching responses that did not come from an origin and also provides finer control by:
- Customizing cache behavior of any asset by setting headers such as `Cache-Control` on the response passed to `cache.put()`.
- Caching responses generated by the Worker itself through `cache.put()`.
:::caution[Tiered caching]
The Cache API is not compatible with tiered caching. To take advantage of tiered caching, use the [fetch API](/workers/runtime-apis/fetch/).
:::
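As a sketch of the first approach, cache settings can be attached to a subrequest through the `cf` object on `fetch`. The `cacheTtl` and `cacheEverything` options below are illustrative choices; refer to the [fetch documentation](/workers/runtime-apis/fetch/) for the full set of options:

```javascript
const worker = {
	async fetch(request) {
		// Ask Cloudflare's cache to store this response for five minutes,
		// even if default rules would not cache it.
		return fetch(request, {
			cf: {
				cacheTtl: 300,
				cacheEverything: true,
			},
		});
	},
};

export default worker;
```

Because the caching happens inside `fetch` itself, this approach benefits from features such as tiered caching that the Cache API does not support.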
### Single file purge assets cached by a worker
When using single-file purge to purge assets cached by a Worker, make sure not to purge the end user URL. Instead, purge the URL that is in the `fetch` request. For example, you have a Worker that runs on `https://example.com/hello` and this Worker makes a `fetch` request to `https://notexample.com/hello`.
As far as cache is concerned, the asset in the `fetch` request (`https://notexample.com/hello`) is the asset that is cached. To purge it, you need to purge `https://notexample.com/hello`.
Purging the end user URL, `https://example.com/hello`, will not work because that is not the URL that cache sees. You need to confirm in your Worker which URL you are actually fetching, so you can purge the correct asset.
In the previous example, `https://notexample.com/hello` is not proxied through Cloudflare. If `https://notexample.com/hello` was proxied ([orange-clouded](/dns/proxy-status/)) through Cloudflare, then you must own `notexample.com` and purge `https://notexample.com/hello` from the `notexample.com` zone.
To better understand the example, review the following diagram:
```mermaid
flowchart TD
accTitle: Single file purge assets cached by a worker
accDescr: This diagram is meant to help choose how to purge a file.
A("You have a Worker script that runs on https://example.com/hello and this Worker makes a fetch request to https://notexample.com/hello.") --> B(Is notexample.com an active zone on Cloudflare?)
B -- Yes --> C(Is https://notexample.com/ proxied through Cloudflare?)
B -- No --> D(Purge https://notexample.com/hello from the original example.com zone.)
C -- Yes --> E(Do you own notexample.com?)
C -- No --> F(Purge https://notexample.com/hello from the original example.com zone.)
E -- Yes --> G(Purge https://notexample.com/hello from the notexample.com zone.)
E -- No --> H(Sorry, you can not purge the asset. Only the owner of notexample.com can purge it.)
```
### Purge assets stored with the Cache API
Assets stored in the cache through [Cache API](/workers/runtime-apis/cache/) operations can be purged in a couple of ways:
- Call `cache.delete` within a Worker to invalidate the cache for the asset with a matching request variable.
  - Assets purged in this way are only purged locally to the data center in which the Worker was executed.
  - To purge an asset globally, you must use the standard cache purge options. Because of how the Cache API is implemented, not all cache purge endpoints work for assets stored through the Cache API.
- All assets on a zone can be purged by using the [Purge Everything](/cache/how-to/purge-cache/purge-everything/) cache operation. This purge will remove all assets associated with a Cloudflare zone from cache in all data centers regardless of the method set.
- Available to Enterprise Customers, [Cache Tags](/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers) can be added to requests dynamically in a Worker by calling `response.headers.append()` and appending `Cache-Tag` values dynamically to that request. Once set, those tags can be used to selectively purge assets from cache without invalidating all cached assets on a zone.
- Currently, it is not possible to purge a URL stored through Cache API that uses a custom cache key set by a Worker. Instead, use a [custom key created via Cache Rules](/cache/how-to/cache-rules/settings/#cache-key). Alternatively, purge your assets using purge everything, purge by tag, purge by host or purge by prefix.
## Edge versus browser caching
The browser cache is controlled through the `Cache-Control` header sent in the response to the client (the `Response` instance returned from the handler). Workers can customize browser cache behavior by setting this header on the response.
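For example, a Worker might override the browser caching behavior of an upstream response by rewriting this header (a minimal sketch — the one-day `max-age` value is arbitrary):

```javascript
const worker = {
	async fetch(request) {
		const upstream = await fetch(request);
		// Headers on a fetched response are immutable, so copy the response
		// before overriding the browser cache directive.
		const response = new Response(upstream.body, upstream);
		response.headers.set("Cache-Control", "max-age=86400");
		return response;
	},
};

export default worker;
```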
Other means of controlling Cloudflare’s cache that are not mentioned in this documentation include Page Rules and Cloudflare cache settings. Refer to [How to customize Cloudflare’s cache](/cache/concepts/customize-cache/) if you want some granularity of control without writing JavaScript.
:::note[What should I use: the Cache API or fetch for caching objects on Cloudflare?]
For requests where Workers are behaving as middleware (that is, Workers are sending a subrequest via `fetch`) it is recommended to use `fetch`. This is because preexisting settings are in place that optimize caching while preventing unintended dynamic caching. For projects where there is no backend (that is, the entire project is on Workers as in [Workers Sites](/workers/configuration/sites/start-from-scratch)) the Cache API is the only option to customize caching.
The asset will be cached under the hostname specified within the Worker's subrequest — not the Worker's own hostname. Therefore, in order to purge the cached asset, the purge will have to be performed for the hostname included in the Worker subrequest.
:::
### `fetch`
In the context of Workers, a [`fetch`](/workers/runtime-apis/fetch/) provided by the runtime communicates with the Cloudflare cache. First, `fetch` checks to see if the URL matches a different zone. If it does, it reads through that zone’s cache (or Worker). Otherwise, it reads through its own zone’s cache, even if the URL is for a non-Cloudflare site. Cache settings on `fetch` automatically apply caching rules based on your Cloudflare settings. `fetch` does not allow you to modify or inspect objects before they reach the cache, but does allow you to modify how it will cache.
When a response is served from Cloudflare’s cache, the response headers contain `CF-Cache-Status: HIT`. If the `CF-Cache-Status` header is present at all, the response was considered for caching.
This [template](/workers/examples/cache-using-fetch/) shows ways to customize Cloudflare cache behavior on a given request using fetch.
### Cache API
The [Cache API](/workers/runtime-apis/cache/) can be thought of as an ephemeral key-value store, whereby the `Request` object (or more specifically, the request URL) is the key, and the `Response` is the value.
There are two types of cache namespaces available to the Cloudflare Cache:
- **`caches.default`** – You can access the default cache (the same cache shared with `fetch` requests) by accessing `caches.default`. This is useful when needing to override content that is already cached, after receiving the response.
- **`caches.open()`** – You can access a namespaced cache (separate from the cache shared with `fetch` requests) using `let cache = await caches.open(CACHE_NAME)`. Note that [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) is an async function, unlike `caches.default`.
When to use the Cache API:
- When you want to programmatically save and/or delete responses from a cache. For example, say an origin is responding with a `Cache-Control: max-age=0` header and cannot be changed. Instead, you can clone the `Response`, set its `Cache-Control` header to `max-age=3600`, and then use the Cache API to save the modified `Response` for an hour.
- When you want to programmatically access a Response from a cache without relying on a `fetch` request. For example, you can check to see if you have already cached a `Response` for the `https://example.com/slow-response` endpoint. If so, you can avoid the slow request.
This [template](/workers/examples/cache-api/) shows ways to use the cache API. For limits of the cache API, refer to [Limits](/workers/platform/limits/#cache-api-limits).
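The `max-age` rewrite described above might be sketched as follows. This is an illustrative pattern rather than a drop-in implementation — the one-hour TTL and the assumption that the request URL is a suitable cache key are choices to adapt:

```javascript
const worker = {
	async fetch(request, env, ctx) {
		const cache = caches.default;
		// Serve from this data center's cache when possible.
		let response = await cache.match(request);
		if (!response) {
			response = await fetch(request);
			// Copy the response so its headers become mutable, then replace the
			// origin's Cache-Control directive with a one-hour TTL.
			response = new Response(response.body, response);
			response.headers.set("Cache-Control", "max-age=3600");
			// Store a clone without delaying the client's response.
			ctx.waitUntil(cache.put(request, response.clone()));
		}
		return response;
	},
};

export default worker;
```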
:::caution[Tiered caching and the Cache API]
Cache API within Workers does not support tiered caching. Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. The Cache API is local to a data center: `cache.match` does a lookup, `cache.put` stores a response, and `cache.delete` removes a stored response only in the cache of the data center where the Worker handling the request runs. Because these methods apply only to local cache, they will not work with tiered cache.
:::
## Related resources
- [Cache API](/workers/runtime-apis/cache/)
- [Customize cache behavior with Workers](/cache/interaction-cloudflare-products/workers/)
---
# How Workers works
URL: https://developers.cloudflare.com/workers/reference/how-workers-works/
import { Render, NetworkMap, WorkersIsolateDiagram } from "~/components"
Though Cloudflare Workers behave similarly to [JavaScript](https://www.cloudflare.com/learning/serverless/serverless-javascript/) in the browser or in Node.js, there are a few differences in how you have to think about your code. Under the hood, the Workers runtime uses the [V8 engine](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/) — the same engine used by Chromium and Node.js. The Workers runtime also implements many of the standard [APIs](/workers/runtime-apis/) available in most modern browsers.
The differences from JavaScript written for the browser or Node.js appear at runtime. Rather than running on an individual's machine (for example, [a browser application or on a centralized server](https://www.cloudflare.com/learning/serverless/glossary/client-side-vs-server-side/)), Workers functions run on [Cloudflare's global network](https://www.cloudflare.com/network) - a growing global network of thousands of machines distributed across hundreds of locations.
Each of these machines hosts an instance of the Workers runtime, and each of those runtimes is capable of running thousands of user-defined applications. This guide will review some of those differences.
For more information, refer to the [Cloud Computing without Containers blog post](https://blog.cloudflare.com/cloud-computing-without-containers).
The three largest differences are: Isolates, Compute per Request, and Distributed Execution.
## Isolates
[V8](https://v8.dev) orchestrates isolates: lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. You could even consider an isolate a sandbox for your function to run in.
A given isolate has its own scope, but isolates are not necessarily long-lived. An isolate may be spun down and evicted for a number of reasons:
* Resource limitations on the machine.
* A suspicious script - anything seen as trying to break out of the isolate sandbox.
* Individual [resource limits](/workers/platform/limits/).
Because of this, it is generally advised that you not store mutable state in your global scope unless you have accounted for this contingency.
If you are interested in how Cloudflare handles security with the Workers runtime, you can [read more about how Isolates relate to Security and Spectre Threat Mitigation](/workers/reference/security-model/).
## Compute per request
## Distributed execution
Isolates are resilient and continuously available for the duration of a request, but in rare instances isolates may be evicted. When a Worker hits official [limits](/workers/platform/limits/) or when resources are exceptionally tight on the machine the request is running on, the runtime will selectively evict isolates after their events are properly resolved.
Like all other JavaScript platforms, a single Workers instance may handle multiple requests, including concurrent requests, in a single-threaded event loop. This means that while one request is awaiting an `async` task (such as `fetch`), other incoming requests may be processed.
Because there is no guarantee that any two user requests will be routed to the same or a different instance of your Worker, Cloudflare recommends you do not use or mutate global state.
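To make this concrete, the following sketch shows the kind of global mutable state to avoid. The counter lives in the isolate's global scope, so its value varies between isolates and resets whenever an isolate is evicted; for state that must survive across requests, use a binding such as [KV](/kv/) or [Durable Objects](/durable-objects/) instead:

```javascript
// Anti-pattern: module-level mutable state. Each isolate keeps its own copy,
// so this count is neither shared globally nor durable.
let requestCount = 0;

const worker = {
	async fetch(request) {
		requestCount++;
		return new Response(`This isolate has served ${requestCount} request(s)`);
	},
};

export default worker;
```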
## Related resources
* [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) - Review how incoming HTTP requests to a Worker are passed to the `fetch()` handler.
* [Request](/workers/runtime-apis/request/) - Learn how incoming HTTP requests are passed to the `fetch()` handler.
* [Workers limits](/workers/platform/limits/) - Learn about Workers limits including Worker size, startup time, and more.
---
# Reference
URL: https://developers.cloudflare.com/workers/reference/
import { DirectoryListing } from "~/components";
Conceptual knowledge about how Workers works.
---
# Migrate from Service Workers to ES Modules
URL: https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/
import { WranglerConfig } from "~/components";
This guide will show you how to migrate your Workers from the [Service Worker](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API) format to the [ES modules](https://blog.cloudflare.com/workers-javascript-modules/) format.
## Advantages of migrating
There are several reasons to migrate your Workers to the ES modules format:
1. [Durable Objects](/durable-objects/), [D1](/d1/), [Workers AI](/workers-ai/), [Vectorize](/vectorize/) and other bindings can only be used from Workers that use ES modules.
2. Your Worker will run faster. With service workers, bindings are exposed as globals. This means that for every request, the Workers runtime must create a new JavaScript execution context, which adds overhead and time. Workers written using ES modules can reuse the same execution context across multiple requests.
3. You can [gradually deploy changes to your Worker](/workers/configuration/versions-and-deployments/gradual-deployments/) when you use the ES modules format.
4. You can easily publish Workers using ES modules to `npm`, allowing you to import and reuse Workers within your codebase.
## Migrate a Worker
The following example demonstrates a Worker that redirects all incoming requests to a URL with a `301` status code.
With the Service Worker syntax, the example Worker looks like:
```js
async function handler(request) {
	const base = "https://example.com";
	const statusCode = 301;
	// Parse the absolute request URL first — passing it directly as
	// `new URL(request.url, base)` would ignore `base` entirely.
	const source = new URL(request.url);
	const destination = new URL(source.pathname, base);
	return Response.redirect(destination.toString(), statusCode);
}

// Initialize Worker
addEventListener("fetch", (event) => {
	event.respondWith(handler(event.request));
});
```
Workers using ES modules format replace the `addEventListener` syntax with an object definition, which must be the file's default export (via `export default`). The previous example code becomes:
```js
export default {
	fetch(request) {
		const base = "https://example.com";
		const statusCode = 301;
		const source = new URL(request.url);
		const destination = new URL(source.pathname, base);
		return Response.redirect(destination.toString(), statusCode);
	},
};
```
## Bindings
[Bindings](/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare developer platform.
Workers using ES modules format do not rely on any global bindings. However, Service Worker syntax accesses bindings on the global scope.
To understand bindings, refer to the following `TODO` KV namespace binding example. To create a `TODO` KV namespace binding, you will:
1. Create a KV namespace named `My Tasks` and receive an ID that you will use in your binding.
2. Create a Worker.
3. Find your Worker's [Wrangler configuration file](/workers/wrangler/configuration/) and add a KV namespace binding:
```toml
kv_namespaces = [
  { binding = "TODO", id = "" }
]
```
In the following sections, you will use your binding in Service Worker and ES modules format.
:::note[Reference KV from Durable Objects and Workers]
To learn more about how to reference KV from Workers, refer to the [KV bindings documentation](/kv/concepts/kv-bindings/).
:::
### Bindings in Service Worker format
In Service Worker syntax, your `TODO` KV namespace binding is defined in the global scope of your Worker. Your `TODO` KV namespace binding is available to use anywhere in your Worker application's code.
```js
addEventListener("fetch", (event) => {
	event.respondWith(getTodos());
});

async function getTodos() {
	// Get the value for the "to-do:123" key
	// NOTE: Relies on the TODO KV binding that maps to the "My Tasks" namespace.
	let value = await TODO.get("to-do:123");
	// Return the value, as is, for the Response
	return new Response(value);
}
```
### Bindings in ES modules format
In ES modules format, bindings are only available inside the `env` parameter that is provided at the entry point to your Worker.
To access the `TODO` KV namespace binding in your Worker code, the `env` parameter must be passed from the `fetch` handler in your Worker to the `getTodos` function.
```js
import { getTodos } from "./todos";

export default {
	async fetch(request, env, ctx) {
		// Passing the env parameter so other functions
		// can reference the bindings available in the Workers application
		return await getTodos(env);
	},
};
```
The following code represents a `getTodos` function that calls the `get` function on the `TODO` KV binding.
```js
async function getTodos(env) {
	// NOTE: Relies on the TODO KV binding which has been provided inside of
	// the env parameter of the `getTodos` function
	let value = await env.TODO.get("to-do:123");
	return new Response(value);
}

export { getTodos };
```
## Environment variables
[Environment variables](/workers/configuration/environment-variables/) are accessed differently in code written in ES modules format versus Service Worker format.
Review the following example environment variable configuration in the [Wrangler configuration file](/workers/wrangler/configuration/):
```toml
name = "my-worker-dev"
# Define top-level environment variables
# under the `[vars]` block using
# the `key = "value"` format
[vars]
API_ACCOUNT_ID = ""
```
### Environment variables in Service Worker format
In Service Worker format, `API_ACCOUNT_ID` is defined in the global scope of your Worker application and is available to use anywhere in your Worker application's code.
```js
addEventListener("fetch", (event) => {
  console.log(API_ACCOUNT_ID); // Logs ""
  event.respondWith(new Response("Hello, world!"));
});
```
### Environment variables in ES modules format
In ES modules format, environment variables are only available inside the `env` parameter that is provided at the entrypoint to your Worker application.
```js
export default {
  async fetch(request, env, ctx) {
    console.log(env.API_ACCOUNT_ID); // Logs ""
    return new Response("Hello, world!");
  },
};
```
## Cron Triggers
To handle a [Cron Trigger](/workers/configuration/cron-triggers/) event in a Worker written with ES modules syntax, implement a [`scheduled()` event handler](/workers/runtime-apis/handlers/scheduled/#syntax), which is the equivalent of listening for a `scheduled` event in Service Worker syntax.
This example code:
```js
addEventListener("scheduled", (event) => {
  // ...
});
```
Then becomes:
```js
export default {
  async scheduled(event, env, ctx) {
    // ...
  },
};
```
## Access `event` or `context` data
Workers often need access to data not in the `request` object. For example, sometimes Workers use [`waitUntil`](/workers/runtime-apis/context/#waituntil) to delay execution. Workers using ES modules format can access `waitUntil` via the `context` parameter. Refer to [ES modules parameters](/workers/runtime-apis/handlers/fetch/#parameters) for more information.
This example code:
```js
async function triggerEvent(event) {
  // Fetch some data
  console.log('cron processed', event.scheduledTime);
}

// Initialize Worker
addEventListener('scheduled', event => {
  event.waitUntil(triggerEvent(event));
});
```
Then becomes:
```js
async function triggerEvent(event) {
  // Fetch some data
  console.log('cron processed', event.scheduledTime);
}

export default {
  async scheduled(event, env, ctx) {
    ctx.waitUntil(triggerEvent(event));
  },
};
```
## Service Worker syntax
A Worker written in Service Worker syntax consists of two parts:
1. An event listener that listens for `FetchEvents`.
2. An event handler that returns a [Response](/workers/runtime-apis/response/) object which is passed to the event’s `.respondWith()` method.
When a request is received on one of Cloudflare’s global network servers for a URL matching a Worker, Cloudflare's server passes the request to the Workers runtime. This dispatches a `FetchEvent` in the [isolate](/workers/reference/how-workers-works/#isolates) where the Worker is running.
```js
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  return new Response('Hello worker!', {
    headers: { 'content-type': 'text/plain' },
  });
}
```
Below is an example of the request response workflow:
1. An event listener for the `FetchEvent` tells the script to listen for any request coming to your Worker. The event handler is passed the `event` object, which includes `event.request`, a [`Request`](/workers/runtime-apis/request/) object which is a representation of the HTTP request that triggered the `FetchEvent`.
2. The call to `.respondWith()` lets the Workers runtime intercept the request in order to send back a custom response (in this example, the plain text `'Hello worker!'`).
* The `FetchEvent` handler typically culminates in a call to the method `.respondWith()` with either a [`Response`](/workers/runtime-apis/response/) or a `Promise` that resolves to a `Response`, which determines the response.
* The `FetchEvent` object also provides [two other methods](/workers/runtime-apis/handlers/fetch/) to handle unexpected exceptions and operations that may complete after a response is returned.
Learn more about [the lifecycle methods of the `fetch()` handler](/workers/runtime-apis/rpc/lifecycle/).
### Supported `FetchEvent` properties
* `event.type` string
  * The type of event. This will always return `"fetch"`.
* `event.request` Request
  * The incoming HTTP request.
* `event.respondWith(response: Response | Promise<Response>)`: void
  * Refer to [`respondWith`](#respondwith).
* `event.waitUntil(promise: Promise)`: void
  * Refer to [`waitUntil`](#waituntil).
* `event.passThroughOnException()`: void
  * Refer to [`passThroughOnException`](#passthroughonexception).
### `respondWith`
Intercepts the request and allows the Worker to send a custom response.
If a `fetch` event handler does not call `respondWith`, the runtime delivers the event to the next registered `fetch` event handler. In other words, while not recommended, this means it is possible to add multiple `fetch` event handlers within a Worker.
If no `fetch` event handler calls `respondWith`, then the runtime forwards the request to the origin as if the Worker did not exist. However, if there is no origin – or the Worker itself is your origin server, which is always true for `*.workers.dev` domains – then you must call `respondWith` for a valid response.
```js
// Format: Service Worker
addEventListener('fetch', event => {
  let { pathname } = new URL(event.request.url);

  // Allow "/ignore/*" URLs to hit origin
  if (pathname.startsWith('/ignore/')) return;

  // Otherwise, respond with something
  event.respondWith(handler(event));
});
```
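The fallthrough behavior can be sketched as a plain-JavaScript mock of the runtime's dispatch loop (hypothetical `dispatch` helper; not the actual implementation): each registered handler runs in order until one calls `respondWith`, and if none does, the request goes to the origin.

```javascript
// Simplified mock of the dispatch contract: handlers run in registration
// order until one calls respondWith; if none responds, the request is
// forwarded to the origin.
function dispatch(handlers, url) {
  for (const handler of handlers) {
    let response = null;
    const event = {
      request: { url },
      respondWith(r) { response = r; },
    };
    handler(event);
    if (response !== null) return response; // this handler claimed the event
  }
  return "forwarded to origin"; // no handler called respondWith
}

const handlers = [
  (event) => {
    const { pathname } = new URL(event.request.url);
    // Allow "/ignore/*" URLs to fall through to the origin
    if (pathname.startsWith("/ignore/")) return;
    event.respondWith("handled by Worker");
  },
];
```

With this sketch, a request for `/ignore/anything` falls through to the origin, while any other path receives the Worker's response.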
### `waitUntil`
The `waitUntil` method extends the lifetime of the `"fetch"` event. It accepts a `Promise`-based task which the Workers runtime will execute before the handler terminates, but without blocking the response. For example, this is ideal for [caching responses](/workers/runtime-apis/cache/#put) or handling logging.
With the Service Worker format, `waitUntil` is available within the `event` because it is a native `FetchEvent` property.
With the ES modules format, `waitUntil` is instead available on the `context` parameter object.
```js
// Format: Service Worker
addEventListener('fetch', event => {
  event.respondWith(handler(event));
});

async function handler(event) {
  // Forward / Proxy original request
  let res = await fetch(event.request);

  // Add custom header(s)
  res = new Response(res.body, res);
  res.headers.set('x-foo', 'bar');

  // Cache the response
  // NOTE: Does NOT block / wait
  event.waitUntil(caches.default.put(event.request, res.clone()));

  // Done
  return res;
}
```
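The contract can be illustrated with a plain-Node mock (hypothetical `runEvent` helper, not a Workers API): the handler's response is produced without awaiting the registered promises, and the event is only torn down once they all settle.

```javascript
// Mock of the waitUntil contract: the handler produces its response without
// awaiting the background tasks; the event is only torn down after every
// promise registered via waitUntil has settled.
async function runEvent(handler) {
  const pending = [];
  const ctx = { waitUntil(promise) { pending.push(promise); } };
  const response = await handler(ctx); // the response is ready here
  await Promise.all(pending);          // background work completes afterwards
  return response;
}
```

A handler that registers a slow cache write via `ctx.waitUntil` still returns its response first; the background task is observed to complete after the handler has responded.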
### `passThroughOnException`
The `passThroughOnException` method prevents a runtime error response when the Worker throws an unhandled exception. Instead, the script will [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), proxying the request to the origin server as though the Worker were never invoked. Rather than letting an uncaught JavaScript error cause the entire request to fail, `passThroughOnException()` causes the Workers runtime to yield control to the origin server.
With the Service Worker format, `passThroughOnException` is added to the `FetchEvent` interface, making it available within the `event`.
With the ES modules format, `passThroughOnException` is available on the `context` parameter object.
```js
// Format: Service Worker
addEventListener('fetch', event => {
  // Proxy to origin on unhandled/uncaught exceptions
  event.passThroughOnException();
  throw new Error('Oops');
});
```
---
# Protocols
URL: https://developers.cloudflare.com/workers/reference/protocols/
Cloudflare Workers support the following protocols and interfaces:
| Protocol | Inbound | Outbound |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| **HTTP / HTTPS** | Handle incoming HTTP requests using the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) | Make HTTP subrequests using the [`fetch()` API](/workers/runtime-apis/fetch/) |
| **Direct TCP sockets** | Support for handling inbound TCP connections is [coming soon](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/) | Create outbound TCP connections using the [`connect()` API](/workers/runtime-apis/tcp-sockets/) |
| **WebSockets** | Accept incoming WebSocket connections using the [`WebSocket` API](/workers/runtime-apis/websockets/), or with [MQTT over WebSockets (Pub/Sub)](/pub-sub/learning/websockets-browsers/) | [MQTT over WebSockets (Pub/Sub)](/pub-sub/learning/websockets-browsers/) |
| **MQTT** | Handle incoming messages to an MQTT broker with [Pub/Sub](/pub-sub/learning/integrate-workers/) | Support for publishing MQTT messages to an MQTT topic is [coming soon](/pub-sub/learning/integrate-workers/) |
| **HTTP/3 (QUIC)** | Accept inbound requests over [HTTP/3](https://www.cloudflare.com/learning/performance/what-is-http3/) by enabling it on your [zone](/fundamentals/setup/accounts-and-zones/#zones) in **Speed** > **Optimization** > **Protocol Optimization** area of the [Cloudflare dashboard](https://dash.cloudflare.com/). | |
| **SMTP** | Use [Email Workers](/email-routing/email-workers/) to process and forward email, without having to manage TCP connections to SMTP email servers | [Email Workers](/email-routing/email-workers/) |
---
# Security model
URL: https://developers.cloudflare.com/workers/reference/security-model/
import { WorkersArchitectureDiagram } from "~/components"
This article includes an overview of Cloudflare's security architecture, and then addresses two frequently asked-about issues: V8 bugs and Spectre.
Since the very start of the Workers project, security has been a high priority — there was a concern early on that when hosting a large number of tenants on shared infrastructure, side channels of various kinds would pose a threat. The Cloudflare Workers runtime is carefully designed to defend against side channel attacks.
To this end, Workers is designed to make it impossible for code to measure its own execution time locally. For example, the value returned by `Date.now()` is locked in place while code is executing. No other timers are provided. Moreover, Cloudflare provides no access to concurrency (for example, multi-threading), as it could allow attackers to construct ad hoc timers. These design choices cannot be introduced retroactively into other platforms — such as web browsers — because they remove APIs that existing applications depend on. They were possible in Workers only because of runtime design choices from the start.
While these early design decisions have proven effective, Cloudflare is continuing to add defense-in-depth, including techniques to disrupt attacks by rescheduling Workers to create additional layers of isolation between suspicious Workers and high-value Workers.
The Workers approach is very different from the approach taken by most of the industry. It is resistant to the entire range of [Spectre-style attacks](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/), without requiring special attention paid to each one and without needing to block speculation in general. However, because the Workers approach is different, it requires careful study. Cloudflare is currently working with researchers at Graz University of Technology (TU Graz) to study what has been done. These researchers include some of the people who originally discovered Spectre. Cloudflare will publish the results of this research as it becomes available.
For more details, refer to [this talk](https://www.infoq.com/presentations/cloudflare-v8/) by Kenton Varda, architect of Cloudflare Workers. Spectre is covered near the end.
## Architectural overview
Beginning with a quick overview of the Workers runtime architecture:
There are two fundamental parts of designing a code sandbox: secure isolation and API design.
### Isolation
First, a secure execution environment needed to be created wherein code cannot access anything it is not supposed to.
For this, the primary tool is V8, the JavaScript engine developed by Google for use in Chrome. V8 executes code inside isolates, which prevent that code from accessing memory outside the isolate — even within the same process. Importantly, this means Cloudflare can run many isolates within a single process. This is essential for an edge compute platform like Workers where Cloudflare must host many thousands of guest applications on every machine and rapidly switch between these guests thousands of times per second with minimal overhead. If Cloudflare had to run a separate process for every guest, the number of tenants Cloudflare could support would be drastically reduced, and Cloudflare would have to limit edge compute to a small number of big Enterprise customers. With isolate technology, Cloudflare can make edge compute available to everyone.
Sometimes, though, Cloudflare does decide to schedule a Worker in its own private process. Cloudflare does this if the Worker uses certain features that need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their Worker, Cloudflare runs that Worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser’s trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, Cloudflare moves inspected Workers into a separate process with a process-level sandbox. Cloudflare also uses process isolation as an extra defense against Spectre.
Additionally, even for isolates that run in a shared process with other isolates, Cloudflare runs multiple instances of the whole runtime on each machine; these instances are called cordons. Workers are distributed among cordons by assigning each Worker a level of trust and separating less-trusted Workers from those trusted more highly. As one example of this in operation: a customer who signs up for the Free plan will not be scheduled in the same process as an Enterprise customer. This provides some defense-in-depth in case a zero-day security vulnerability is found in V8.
At the whole-process level, Cloudflare applies another layer of sandboxing for defense in depth. The layer 2 sandbox uses Linux namespaces and `seccomp` to prohibit all access to the filesystem and network. Namespaces and `seccomp` are commonly used to implement containers. However, Cloudflare's use of these technologies is much stricter than what is usually possible in container engines, because Cloudflare configures namespaces and `seccomp` after the process has started but before any isolates have been loaded. This means, for example, Cloudflare can (and does) use a totally empty filesystem (mount namespace) and uses `seccomp` to block absolutely all filesystem-related system calls. Container engines cannot normally prohibit all filesystem access because doing so would make it impossible to use `exec()` to start the guest program from disk. In the Workers case, Cloudflare's guest programs are not native binaries and the Workers runtime itself has already finished loading before Cloudflare blocks filesystem access.
The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local UNIX domain sockets to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox.
One such process in particular, which is called the supervisor, is responsible for fetching Worker code and configuration from disk or from other internal services. The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the Workers that it should be running.
For example, when the sandbox process receives a request for a Worker it has not seen before, that request includes the encryption key for that Worker’s code, including attached secrets. The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any Worker for which it has not received the appropriate key. It cannot enumerate known Workers. It also cannot request configuration it does not need; for example, it cannot request the TLS key used for HTTPS traffic to the Worker.
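The key-gated property described above can be sketched in plain JavaScript (illustrative class and method names only; the real supervisor speaks Cap’n Proto RPC and does far more): code is released only to a caller already holding that Worker's key, and there is no way to enumerate Workers.

```javascript
// Illustrative sketch only: the supervisor releases a Worker's code and
// secrets only to a caller holding that Worker's key, and exposes no way
// to enumerate the Workers it knows about.
class Supervisor {
  #workers = new Map(); // key -> { code, secrets }; private, not enumerable
  register(key, code, secrets) {
    this.#workers.set(key, { code, secrets });
  }
  fetchWorker(key) {
    const entry = this.#workers.get(key);
    if (!entry) throw new Error("unknown key"); // no probing for other Workers
    return entry;
  }
}
```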
Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers.
### API design
There is a saying: If a tree falls in the forest, but no one is there to hear it, does it make a sound? A Cloudflare saying: If a Worker executes in a fully-isolated environment in which it is totally prevented from communicating with the outside world, does it actually run?
Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. For Workers to send requests to the world safely, APIs are needed.
In the context of sandboxing, API design takes on a new level of responsibility. Cloudflare APIs define exactly what a Worker can and cannot do. Cloudflare must be very careful to design each API so that it can only express allowed operations and no more. For example, Cloudflare wants to allow Workers to make and receive HTTP requests, while not allowing them to be able to access the local filesystem or internal network services.
Currently, Workers does not allow any access to the local filesystem. Therefore, Cloudflare does not expose a filesystem API at all. No API means no access.
But, imagine if Workers did want to support local filesystem access in the future. How can that be done? Workers should not see the whole filesystem. Imagine, though, if each Worker had its own private directory on the filesystem where it can store whatever it wants.
To do this, Workers would use a design based on [capability-based security](https://en.wikipedia.org/wiki/Capability-based_security). Capabilities are a big topic, but in this case, what it would mean is that Cloudflare would give the Worker an object of type `Directory`, representing a directory on the filesystem. This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing up to the parent directory. Effectively, each Worker would see its private `Directory` as if it were the root of its own filesystem.
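A sketch of what such a capability object might look like (entirely hypothetical, since Workers exposes no filesystem API today): holding a `Directory` grants access to its own subtree only, with no handle back to the parent.

```javascript
// Hypothetical Directory capability: holding the object grants access to its
// own subtree only; there is no method or reference for reaching the parent.
class Directory {
  #files = new Map();
  #subdirs = new Map();
  writeFile(name, contents) {
    this.#checkName(name);
    this.#files.set(name, contents);
  }
  readFile(name) {
    if (!this.#files.has(name)) throw new Error("not found");
    return this.#files.get(name);
  }
  subdirectory(name) {
    this.#checkName(name);
    if (!this.#subdirs.has(name)) this.#subdirs.set(name, new Directory());
    return this.#subdirs.get(name); // a new capability scoped to the subtree
  }
  #checkName(name) {
    // Reject anything that could escape this directory
    if (name === ".." || name.includes("/")) throw new Error("invalid name");
  }
}
```

The design choice is that authority flows only from the object you hold: a Worker given a subdirectory capability simply has no expression for "the parent directory".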
How would such an API be implemented? As described above, the sandbox process cannot access the real filesystem. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using [Cap’n Proto RPC](https://capnproto.org/rpc.html), a capability-based RPC protocol. (Cap’n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that Cloudflare can strictly limit the sandbox to accessing only the files that belong to the Workers it is running.
Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP — both incoming and outgoing. There is no API for other forms of network access, so they are prohibited, although Cloudflare plans to support other protocols in the future.
As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a UNIX domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service or to the Worker’s zone’s own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the Worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to the Cloudflare network's HTTP caching layer and then out to the Internet.
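A rough sketch of those proxy checks in plain JavaScript (hypothetical function and header names; the real proxy also allows the Worker's zone's own origin server and is far more thorough):

```javascript
// Rough sketch of the outbound checks (hypothetical names; simplified: the
// real proxy also permits the Worker's zone's own origin server).
function checkOutboundRequest(url, workerZone) {
  const { hostname } = new URL(url);
  // Reject addresses that look like internal services rather than the
  // public Internet (crude private-range check for illustration only).
  const internal =
    hostname === "localhost" ||
    /^(10\.|127\.|192\.168\.)/.test(hostname);
  if (internal) throw new Error("blocked: internal address");
  // Tag the request with the Worker it came from so abuse can be traced.
  return { url, headers: { "x-origin-worker": workerZone } };
}
```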
Similarly, inbound HTTP requests do not go directly to the Workers runtime. They are first received by an inbound proxy service. That service is responsible for TLS termination (the Workers runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a UNIX domain socket to the sandbox process.
## V8 bugs and the patch gap
Every non-trivial piece of software has bugs, and sandboxing technologies are no exception. Virtual machines, containers, and isolates — which Workers use — also have bugs.
Workers rely heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has pros and cons. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider attack surface than virtual machines. More complexity means more opportunities for something to go wrong. However, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google’s investment does a lot to minimize the danger of V8 zero-days — bugs that are found by malicious actors and not known to Google.
But, what happens after a bug is found and reported? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time. It is important that any patch be rolled out to production as fast as possible, before malicious actors can develop an exploit.
The time between publishing the fix and deploying it is known as the patch gap. Google previously [announced that Chrome’s patch gap had been reduced from 33 days to 15 days](https://www.zdnet.com/article/google-cuts-chrome-patch-gap-in-half-from-33-to-15-days/).
Fortunately, Cloudflare directly controls the machines on which the Workers runtime operates. Nearly the entire build and release process has been automated, so the moment a V8 patch is published, Cloudflare systems automatically build a new release of the Workers runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production.
As a result, the Workers patch gap is now under 24 hours. A patch published by V8’s team in Munich during their work day will usually be in production before the end of the US work day.
## Spectre: Introduction
The V8 team at Google has stated that [V8 itself cannot defend against Spectre](https://arxiv.org/abs/1902.05178). Workers does not need to depend on V8 for this. The Workers environment presents many alternative approaches to mitigating Spectre.
### What is it?
Spectre is a class of attacks in which a malicious program can trick the CPU into speculatively performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on the cache.
For more information about Spectre, refer to the [Learning Center page on the topic](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/).
### Why does it matter for Workers?
Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model and it is likely that many vulnerabilities exist which have not yet been discovered.
These vulnerabilities are a problem for every cloud compute platform. Any time you have more than one tenant running code on the same machine, Spectre attacks are possible. However, the closer together the tenants are, the more difficult it can be to mitigate specific vulnerabilities. Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various defenses (many of which can come with serious performance impact).
In Cloudflare Workers, tenants are isolated from each other using V8 isolates — not processes or VMs. This means that Workers cannot necessarily rely on OS or hypervisor patches to prevent Spectre. Workers needs its own strategy.
### Why not use process isolation?
Cloudflare Workers is designed to run your code in every single Cloudflare location.
Workers is designed to be a platform accessible to everyone. It needs to handle a huge number of tenants, where many tenants get very little traffic.
Combine these two points and planning becomes difficult.
A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant’s traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that is plenty. That machine can be hosted in a massive data center with millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users are not nearby.
With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in the quest to get as close to the end user as possible, Cloudflare sometimes chooses locations that only have space for a limited number of machines. The net result is that Cloudflare needs to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory — hardly enough space for a call stack, much less everything else that a process needs.
Moreover, Cloudflare needs context switching to be computationally efficient. Many Workers resident in memory will only handle an event every now and then, and many Workers spend less than a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second. To handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. With strict process isolation in Workers, the CPU cost could easily be 10x what it is with a shared process.
In order to keep Workers inexpensive, fast, and accessible to everyone, Cloudflare needed to find a way to host multiple tenants in a single process.
### There is no fix for Spectre
Spectre does not have an official solution, not even when using heavyweight virtual machines; everyone is still vulnerable. The industry continues to encounter new Spectre attacks: every couple of months, researchers uncover a new Spectre vulnerability, CPU vendors release new microcode, and OS vendors release kernel patches. Everyone must keep updating.
But merely deploying the latest patches is not enough, because more vulnerabilities exist that have not yet been publicized. To defend against Spectre, Cloudflare needed to take a different approach. It is not enough to block individual known vulnerabilities; entire classes of vulnerabilities must be addressed at once.
### Building a defense
It is unlikely that any all-encompassing fix for Spectre will be found. However, the following thought experiment raises points to consider:
Fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system. Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable.
However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing. The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU.
Some have proposed that this can be solved by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out inconsistencies.
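This point can be demonstrated with a small simulation (plain JavaScript, not an actual attack): a 1-unit secret-dependent signal is buried under uniform noise 100 units wide, so a single measurement reveals almost nothing, yet the mean over many measurements recovers it.

```javascript
// Simulation: a secret-dependent timing difference of 1 unit is buried in
// +/-50 units of uniform timer noise. A single sample reveals almost
// nothing, but averaging many samples filters the noise out.
function noisyMeasurement(secretBit) {
  const signal = secretBit ? 1 : 0;          // what the attacker wants
  const noise = (Math.random() - 0.5) * 100; // what an inaccurate timer adds
  return signal + noise;
}

function meanOf(samples, secretBit) {
  let sum = 0;
  for (let i = 0; i < samples; i++) sum += noisyMeasurement(secretBit);
  return sum / samples;
}

// Over 1e6 samples the standard error of each mean is ~0.03 units, so the
// 1-unit signal stands out clearly despite per-sample noise 50x larger.
const difference = meanOf(1_000_000, 1) - meanOf(1_000_000, 0);
```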
Many security researchers see this as the end of the story. What good is slowing down an attack if the attack is still possible?
### Cascading slow-downs
However, measures that slow down an attack can be powerful.
The key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting.
Much of cryptography, after all, is technically vulnerable to brute force attacks — technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, this is a sufficient defense.
What can be done to slow down Spectre attacks to the point of meaninglessness?
## Freezing a Spectre attack
### Step 0: Do not allow native code
Workers does not allow customers to upload native-code binaries to run on the Cloudflare network — only JavaScript and WebAssembly. Many other languages, like Python, Rust, or even COBOL, can be compiled or transpiled to one of these two formats. V8 then converts both formats into true native code.
This, in itself, does not necessarily make Spectre attacks harder. However, this is presented as step 0 because it is fundamental to enabling the following steps.
Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host’s control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the `CLFLUSH` instruction, an instruction [which is useful in side channel attacks](https://gruss.cc/files/flushflush.pdf) and almost nothing else.
Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the RDTSC instruction, which reads a high-precision timer. Realistically, though, disabling it will break many programs because they are implemented to use RDTSC any time they want to know the current time.
Supporting native code would limit choice in future mitigation techniques. There is greater freedom in using an abstract intermediate format.
### Step 1: Disallow timers and multi-threading
In Workers, you can get the current time using the JavaScript Date API by calling `Date.now()`. However, the value returned is not the real current time. Instead, `Date.now()` returns the time of the last I/O event; it does not advance during code execution. For example, if an attacker writes:
```js
let start = Date.now();
for (let i = 0; i < 1e6; i++) {
doSpectreAttack();
}
let end = Date.now();
```
The values of `start` and `end` will always be exactly the same. The attacker cannot use `Date` to measure the execution time of their code, which they would need to do to carry out an attack.
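The frozen clock can be seen in a sketch like the following (the `measureLoop` helper is illustrative, not part of any API):

```javascript
// Illustrative sketch: attempting to time pure computation with Date.now().
function measureLoop() {
  const start = Date.now();
  let sum = 0;
  for (let i = 0; i < 1e6; i++) {
    sum += i; // pure computation, no I/O
  }
  const end = Date.now();
  // In the Workers runtime, end === start: the clock does not advance
  // during execution, so this always returns 0. In Node.js or a browser,
  // it would return a small positive number of milliseconds.
  return end - start;
}
```

Only after an I/O event, such as an outgoing `fetch()` completing, does a subsequent `Date.now()` call return a later time.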
:::note
This measure was implemented in mid-2017, before Spectre was announced, because Cloudflare was already concerned about side-channel timing attacks. The Workers team has designed the system with side channels in mind from the start.
:::
Similarly, multi-threading and shared memory are not permitted in Workers. Everything related to the processing of one event happens on the same thread. Otherwise, an attacker could race threads against each other to construct a makeshift timer. Multiple Workers are not allowed to operate on the same request concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread.
At this point, measuring code execution time locally is prevented. However, it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Such a measurement is likely to be very noisy, as it would have to traverse the Internet and incur general networking costs. Such noise can be overcome, in theory, by executing the attack many times and taking an average.
:::note
It has been suggested that if Workers reset its execution environment on every request, Workers would be in a much safer position against timing attacks. Unfortunately, it is not so simple. The execution state could be stored in a client — not the Worker itself — allowing a Worker to resume its previous state on every new request.
:::
In adversarial testing and with help from leading Spectre experts, Cloudflare has not been able to develop a remote timing attack that works in production. However, the lack of a working attack does not mean that Workers should stop building defenses. Instead, the Workers team is currently testing some more advanced measures.
### Step 2: Dynamic process isolation
If an attack is possible at all, it would take a long time to run — hours at the very least, maybe as long as weeks. But once an attack has been running even for a second, there is a large amount of new data that can be used to trigger further measures.
Spectre attacks exhibit abnormal behavior that would not usually be seen in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters.
Now, the usual problem with using performance metrics to detect Spectre attacks is that there are sometimes false positives. Sometimes, a legitimate program behaves poorly. The runtime cannot shut down every application that has poor performance.
Instead, the runtime chooses to reschedule any Worker with suspicious performance metrics into its own process. As described above, the runtime cannot do this with every Worker because the overhead would be too high. However, it is acceptable to isolate a few Worker processes as a defense mechanism. If the Worker is legitimate, it will keep operating, with a little more overhead. Fortunately, Cloudflare can relocate a Worker into its own process at basically any time.
In fact, elaborate performance-counter based triggering may not even be necessary here. If a Worker uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less because it switches context less often. So, the runtime might as well use process isolation for any Worker that is CPU-hungry.
Once a Worker is isolated, Cloudflare can rely on the operating system’s Spectre defenses, as most desktop web browsers do.
Cloudflare has been working with the experts at Graz Technical University to develop this approach. TU Graz’s team co-discovered Spectre itself and has been responsible for a huge number of the follow-on discoveries since then. Cloudflare has developed the ability to dynamically isolate Workers and has identified metrics which reliably detect attacks.
As mentioned previously, process isolation is not a complete defense. However, because Spectre attacks against Workers are forced to run slowly, Cloudflare has time to reasonably identify malicious actors, and isolating the process further slows down any potential attack.
### Step 3: Periodic whole-memory shuffling
At this point, all known attacks have been prevented. This leaves Workers susceptible to unknown attacks in the future, as with all other CPU-based systems. However, all new attacks will generally be very slow, taking days or longer, leaving Cloudflare with time to prepare a defense.
For example, it is within reason to restart the entire Workers runtime on a daily basis. This will reset the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets. Cloudflare can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited.
In general, because Workers are fundamentally preemptible (unlike containers or VMs), Cloudflare has a lot of freedom to frustrate attacks.
Cloudflare sees this as an ongoing investment — not something that will ever be done.
---
# Cache
URL: https://developers.cloudflare.com/workers/runtime-apis/cache/
## Background
The [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache) allows fine grained control of reading and writing from the [Cloudflare global network](https://www.cloudflare.com/network/) cache.
The Cache API is available globally but the contents of the cache do not replicate outside of the originating data center. A `GET /users` response can be cached in the originating data center, but will not exist in another data center unless it has been explicitly created.
:::caution[Tiered caching]
The `cache.put` method is not compatible with tiered caching. Refer to [Cache API](/workers/reference/how-the-cache-works/#cache-api) for more information. To perform tiered caching, use the [fetch API](/workers/reference/how-the-cache-works/#interact-with-the-cloudflare-cache).
:::
Workers deployed to custom domains have access to functional `cache` operations. So do [Pages functions](/pages/functions/), whether attached to custom domains or `*.pages.dev` domains.
However, any Cache API operations in the Cloudflare Workers dashboard editor and [Playground](/workers/playground/) previews will have no impact. For Workers fronted by [Cloudflare Access](https://www.cloudflare.com/teams/access/), the Cache API is not currently available.
:::note
This individualized zone cache object differs from Cloudflare’s Global CDN. For details, refer to [How the cache works](/workers/reference/how-the-cache-works/).
:::
***
## Accessing Cache
The `caches.default` API is strongly influenced by the web browsers’ Cache API, but there are some important differences. For instance, Cloudflare Workers runtime exposes a single global cache object.
```js
let cache = caches.default;
await cache.match(request);
```
You may create and manage additional Cache instances via the [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) method.
```js
let myCache = await caches.open('custom:cache');
await myCache.match(request);
```
***
## Headers
Our implementation of the Cache API respects the following HTTP headers on the response passed to `put()`:
* `Cache-Control`
* Controls caching directives. This is consistent with [Cloudflare Cache-Control Directives](/cache/concepts/cache-control#cache-control-directives). Refer to [Edge TTL](/cache/how-to/configure-cache-status-code#edge-ttl) for a list of HTTP response codes and their TTL when `Cache-Control` directives are not present.
* `Cache-Tag`
* Allows resource purging by tag(s) later (Enterprise only).
* `ETag`
* Allows `cache.match()` to evaluate conditional requests with `If-None-Match`.
* `Expires` string
* A string that specifies when the resource becomes invalid.
* `Last-Modified`
* Allows `cache.match()` to evaluate conditional requests with `If-Modified-Since`.
This differs from the web browser Cache API, which does not honor any headers on the request or response.
:::note
Responses with `Set-Cookie` headers are never cached, because this sometimes indicates that the response contains unique data. To store a response with a `Set-Cookie` header, either delete that header or set `Cache-Control: private=Set-Cookie` on the response before calling `cache.put()`. The latter stores the response while dropping the `Set-Cookie` header.
:::
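For example, a response carrying `Set-Cookie` can be made storable by deleting the header from a copy before calling `cache.put()` (a minimal sketch; the helper name is illustrative):

```javascript
// Sketch: copy a response and drop Set-Cookie so cache.put() will store it.
function stripSetCookie(response) {
  const headers = new Headers(response.headers);
  headers.delete("Set-Cookie");
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}

// In a Worker: await caches.default.put(request, stripSetCookie(response));
```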
***
## Methods
### Put
```js
cache.put(request, response);
```
* put(request, response) : Promise
* Attempts to add a response to the cache, using the given request as the key. Returns a promise that resolves to `undefined` regardless of whether the cache successfully stored the response.
:::note
The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods.
:::
#### Parameters
* `request` string | Request
* Either a string or a [`Request`](/workers/runtime-apis/request/) object to serve as the key. If a string is passed, it is interpreted as the URL for a new Request object.
* `response` Response
* A [`Response`](/workers/runtime-apis/response/) object to store under the given key.
#### Invalid parameters
`cache.put` will throw an error if:
* The `request` passed is a method other than `GET`.
* The `response` passed has a `status` of [`206 Partial Content`](https://www.webfx.com/web-development/glossary/http-status-codes/what-is-a-206-status-code/).
* The `response` passed contains the header `Vary: *`. The value of the `Vary` header is an asterisk (`*`). Refer to the [Cache API specification](https://w3c.github.io/ServiceWorker/#cache-put) for more information.
#### Errors
`cache.put` returns a `413` error if `Cache-Control` instructs not to cache or if the response is too large.
### `Match`
```js
cache.match(request, options);
```
* match(request, options) : Promise`<Response | undefined>`
* Returns a promise wrapping the response object keyed to that request.
:::note
The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods.
:::
#### Parameters
* `request` string | Request
* The string or [`Request`](/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object.
* `options`
* Can contain one possible property: `ignoreMethod` (Boolean). When `true`, the request is considered to be a `GET` request regardless of its actual method.
Unlike the browser Cache API, Cloudflare Workers do not support the `ignoreSearch` or `ignoreVary` options on `match()`. You can accomplish this behavior by removing query strings or HTTP headers at `put()` time.
Our implementation of the Cache API respects the following HTTP headers on the request passed to `match()`:
* `Range`
* Results in a `206` response if a matching response with a Content-Length header is found. Your Cloudflare cache always respects range requests, even if an `Accept-Ranges` header is on the response.
* `If-Modified-Since`
* Results in a `304` response if a matching response is found with a `Last-Modified` header with a value after the time specified in `If-Modified-Since`.
* `If-None-Match`
* Results in a `304` response if a matching response is found with an `ETag` header with a value that matches a value in `If-None-Match`.
* `cache.match()`
* Never sends a subrequest to the origin. If no matching response is found in cache, the promise that `cache.match()` returns is fulfilled with `undefined`.
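A common pattern built on this behavior is to treat `undefined` as a cache miss and fall back to the origin (a sketch, assuming `cache` is, for example, `caches.default`):

```javascript
// Sketch: serve from cache when possible, otherwise fetch and store a copy.
async function matchOrFetch(cache, request) {
  // cache.match() never contacts the origin; a miss resolves to undefined.
  let response = await cache.match(request);
  if (response === undefined) {
    response = await fetch(request);
    // Store a clone, since a response body can only be read once.
    await cache.put(request, response.clone());
  }
  return response;
}
```

In a real Worker you would typically wrap the write in `ctx.waitUntil(cache.put(...))` rather than awaiting it, so the response is not delayed by the cache write.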
#### Errors
`cache.match` generates a `504` error response when the requested content is missing or expired. The Cache API does not expose this `504` directly to the Worker script, instead returning `undefined`. Nevertheless, the underlying `504` is still visible in Cloudflare Logs.
If you use Cloudflare Logs, you may see these `504` responses with the `RequestSource` of `edgeWorkerCacheAPI`. Again, these are expected if the cached asset was missing or expired. Note that `edgeWorkerCacheAPI` requests are already filtered out in other views, such as Cache Analytics. To filter out these requests or to filter requests by end users of your website only, refer to [Filter end users](/analytics/graphql-api/features/filtering/#filter-end-users).
### `Delete`
```js
cache.delete(request, options);
```
* delete(request, options) : Promise`<boolean>`
Deletes the `Response` object from the cache and returns a `Promise` for a Boolean response:
* `true`: The response was cached but is now deleted.
* `false`: The response was not in the cache at the time of deletion.
:::caution[Global purges]
The `cache.delete` method only purges content of the cache in the data center in which the Worker was invoked. For global purges, refer to [Purging assets stored with the Cache API](/workers/reference/how-the-cache-works/#purge-assets-stored-with-the-cache-api).
:::
#### Parameters
* `request` string | Request
* The string or [`Request`](/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object.
* `options` object
* Can contain one possible property: `ignoreMethod` (Boolean). Consider the request method a GET regardless of its actual value.
***
## Related resources
* [How the cache works](/workers/reference/how-the-cache-works/)
* [Example: Cache using `fetch()`](/workers/examples/cache-using-fetch/)
* [Example: using the Cache API](/workers/examples/cache-api/)
* [Example: caching POST requests](/workers/examples/cache-post-request/)
---
# Console
URL: https://developers.cloudflare.com/workers/runtime-apis/console/
The `console` object provides a set of methods to help you emit logs, warnings, and debug code.
All standard [methods of the `console` API](https://developer.mozilla.org/en-US/docs/Web/API/console) are present on the `console` object in Workers.
However, some methods are no-ops: they can be called without error, but do nothing. This ensures compatibility with libraries which may use these APIs.
The table below enumerates each method, and the extent to which it is supported in Workers.
All methods noted as "✅ supported" have the following behavior:
* They will be written to the console in local dev (`npx wrangler@latest dev`)
* They will appear in live logs, when tailing logs in the dashboard or running [`wrangler tail`](https://developers.cloudflare.com/workers/observability/log-from-workers/#use-wrangler-tail)
* They will create entries in the `logs` field of [Tail Worker](https://developers.cloudflare.com/workers/observability/tail-workers/) events and [Workers Trace Events](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/), which can be pushed to a destination of your choice via [Logpush](https://developers.cloudflare.com/workers/observability/logpush/).
All methods noted as "🟡 partial support" have the following behavior:
* In both production and local development the method can be safely called, but will do nothing (no-op)
* In the [Workers Playground](https://workers.cloudflare.com/playground), Quick Editor in the Workers dashboard, and remote preview mode (`wrangler dev --remote`) calling the method will behave as expected, print to the console, etc.
Refer to [Log from Workers](https://developers.cloudflare.com/workers/observability/log-from-workers/) for more on debugging and adding logs to Workers.
| Method | Behavior |
| -------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| [`console.debug()`](https://developer.mozilla.org/en-US/docs/Web/API/console/debug_static) | ✅ supported |
| [`console.error()`](https://developer.mozilla.org/en-US/docs/Web/API/console/error_static) | ✅ supported |
| [`console.info()`](https://developer.mozilla.org/en-US/docs/Web/API/console/info_static) | ✅ supported |
| [`console.log()`](https://developer.mozilla.org/en-US/docs/Web/API/console/log_static) | ✅ supported |
| [`console.warn()`](https://developer.mozilla.org/en-US/docs/Web/API/console/warn_static) | ✅ supported |
| [`console.clear()`](https://developer.mozilla.org/en-US/docs/Web/API/console/clear_static) | 🟡 partial support |
| [`console.count()`](https://developer.mozilla.org/en-US/docs/Web/API/console/count_static) | 🟡 partial support |
| [`console.group()`](https://developer.mozilla.org/en-US/docs/Web/API/console/group_static) | 🟡 partial support |
| [`console.table()`](https://developer.mozilla.org/en-US/docs/Web/API/console/table_static) | 🟡 partial support |
| [`console.trace()`](https://developer.mozilla.org/en-US/docs/Web/API/console/trace_static) | 🟡 partial support |
| [`console.assert()`](https://developer.mozilla.org/en-US/docs/Web/API/console/assert_static) | ⚪ no op |
| [`console.countReset()`](https://developer.mozilla.org/en-US/docs/Web/API/console/countreset_static) | ⚪ no op |
| [`console.dir()`](https://developer.mozilla.org/en-US/docs/Web/API/console/dir_static) | ⚪ no op |
| [`console.dirxml()`](https://developer.mozilla.org/en-US/docs/Web/API/console/dirxml_static) | ⚪ no op |
| [`console.groupCollapsed()`](https://developer.mozilla.org/en-US/docs/Web/API/console/groupcollapsed_static) | ⚪ no op |
| [`console.groupEnd`](https://developer.mozilla.org/en-US/docs/Web/API/console/groupend_static) | ⚪ no op |
| [`console.profile()`](https://developer.mozilla.org/en-US/docs/Web/API/console/profile_static) | ⚪ no op |
| [`console.profileEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/profileend_static) | ⚪ no op |
| [`console.time()`](https://developer.mozilla.org/en-US/docs/Web/API/console/time_static) | ⚪ no op |
| [`console.timeEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeend_static) | ⚪ no op |
| [`console.timeLog()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timelog_static) | ⚪ no op |
| [`console.timeStamp()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timestamp_static) | ⚪ no op |
| [`console.createTask()`](https://developer.chrome.com/blog/devtools-modern-web-debugging/#linked-stack-traces) | 🔴 Will throw an exception in production, but works in local dev, Quick Editor, and remote preview |
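As a sketch of how these behaviors combine in practice (the handler below is illustrative):

```javascript
const worker = {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    // Fully supported: written to the console in `wrangler dev`, and
    // captured by `wrangler tail`, Tail Workers, and Workers Trace Events.
    console.log("request for", pathname);
    // Partial support: safe to call anywhere, but only produces output in
    // the Playground, Quick Editor, and remote preview mode.
    console.count("requests");
    return new Response("ok");
  },
};
// In a Worker module: export default worker;
```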
---
# Context (ctx)
URL: https://developers.cloudflare.com/workers/runtime-apis/context/
The Context API provides methods to manage the lifecycle of your Worker or Durable Object.
Context is exposed via the following places:
* As the third parameter in all [handlers](/workers/runtime-apis/handlers/), including the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/). (`fetch(request, env, ctx)`)
* As a class property of the [`WorkerEntrypoint` class](/workers/runtime-apis/bindings/service-bindings/rpc)
## `waitUntil`
`ctx.waitUntil()` extends the lifetime of your Worker, allowing you to perform work that does not block the response and that may continue after the response has been returned. It accepts a `Promise`, which the Workers runtime will continue executing, even after a response has been returned by the Worker's [handler](/workers/runtime-apis/handlers/).
`waitUntil` is commonly used to:
* Fire off events to external analytics providers. (note that when you use [Workers Analytics Engine](/analytics/analytics-engine/), you do not need to use `waitUntil`)
* Put items into cache using the [Cache API](/workers/runtime-apis/cache/)
:::note[Alternatives to waitUntil]
If you are using `waitUntil()` to emit logs or exceptions, we recommend using [Tail Workers](/workers/observability/logs/tail-workers/) instead. Even if your Worker throws an uncaught exception, the Tail Worker will execute, ensuring that you can emit logs or exceptions regardless of the Worker's invocation status.
[Cloudflare Queues](/queues/) is purpose-built for performing work out-of-band, without blocking returning a response back to the client Worker.
:::
You can call `waitUntil()` multiple times. Similar to `Promise.allSettled`, even if a promise passed to one `waitUntil` call is rejected, promises passed to other `waitUntil()` calls will still continue to execute.
For example:
```js
export default {
async fetch(request, env, ctx) {
// Forward / proxy original request
let res = await fetch(request);
// Add custom header(s)
res = new Response(res.body, res);
res.headers.set('x-foo', 'bar');
// Cache the response
// NOTE: Does NOT block / wait
ctx.waitUntil(caches.default.put(request, res.clone()));
// Done
return res;
},
};
```
## `passThroughOnException`
:::caution[Reuse of body]
The Workers Runtime uses streaming for request and response bodies. It does not buffer the body. Hence, if an exception occurs after the body has been consumed, `passThroughOnException()` cannot send the body again.
If this causes issues, we recommend cloning the request body and handling exceptions in code. This will protect against uncaught code exceptions. However, some exception types, such as exceeding CPU or memory limits, will not be mitigated.
:::
The `passThroughOnException` method allows a Worker to [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), and pass a request through to an origin server when a Worker throws an unhandled exception. This can be useful when using Workers as a layer in front of an existing service, allowing the service behind the Worker to handle any unexpected error cases that arise in your Worker.
```js
export default {
async fetch(request, env, ctx) {
// Proxy to origin on unhandled/uncaught exceptions
ctx.passThroughOnException();
throw new Error('Oops');
},
};
```
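To guard the request body as the caution above suggests, one approach (a sketch, not the only option) is to clone the request before consuming it and handle exceptions yourself:

```javascript
const worker = {
  async fetch(request, env, ctx) {
    ctx.passThroughOnException();
    // Clone before reading: once the original body stream is consumed,
    // it cannot be replayed to the origin.
    const backup = request.clone();
    try {
      const body = await request.text();
      return new Response(`received ${body.length} bytes`);
    } catch (err) {
      // Handle the failure in code by proxying the unread clone ourselves.
      return fetch(backup);
    }
  },
};
// In a Worker module: export default worker;
```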
---
# EventSource
URL: https://developers.cloudflare.com/workers/runtime-apis/eventsource/
## Background
The [`EventSource`](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) interface is a server-sent event API that allows a server to push events to a client. The `EventSource` object is used to receive server-sent events. It connects to a server over HTTP and receives events in a text-based format.
### Constructor
```js
let eventSource = new EventSource(url, options);
```
* `url` USVString - The URL to which to connect.
* `options` EventSourceInit - An optional dictionary containing any optional settings.
By default, the `EventSource` will use the global `fetch()` function under the
covers to make requests. If you need to use a different fetch implementation as
provided by a Cloudflare Workers binding, you can pass the `fetcher` option:
```js
export default {
async fetch(req, env) {
let eventSource = new EventSource(url, { fetcher: env.MYFETCHER });
// ...
}
};
```
Note that the `fetcher` option is a Cloudflare Workers specific extension.
### Properties
* `eventSource.url` USVString read-only
* The URL of the event source.
* `eventSource.readyState` Number read-only
* The state of the connection.
* `eventSource.withCredentials` Boolean read-only
* A Boolean indicating whether the `EventSource` object was instantiated with cross-origin (CORS) credentials set (`true`), or not (`false`).
### Methods
* `eventSource.close()`
* Closes the connection.
* `eventSource.onopen`
* An event handler called when a connection is opened.
* `eventSource.onmessage`
* An event handler called when a message is received.
* `eventSource.onerror`
* An event handler called when an error occurs.
### Events
* `message`
* Fired when a message is received.
* `open`
* Fired when the connection is opened.
* `error`
* Fired when an error occurs.
### Class Methods
* EventSource.from(readableStream ReadableStream) : EventSource
* This is a Cloudflare Workers specific extension that creates a new `EventSource` object from an existing `ReadableStream`. Such an instance does not initiate a new connection but instead attaches to the provided stream.
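For instance, handlers can be attached to an `EventSource` created this way (a sketch; `stream` is assumed to be an existing `ReadableStream` of server-sent events):

```javascript
// Sketch: wire up handlers on an EventSource instance.
function attachHandlers(eventSource) {
  eventSource.onopen = () => console.log("stream attached");
  eventSource.onmessage = (event) => console.log("received:", event.data);
  eventSource.onerror = () => console.error("stream error");
  return eventSource;
}

// In a Worker: attachHandlers(EventSource.from(stream));
```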
---
# Encoding
URL: https://developers.cloudflare.com/workers/runtime-apis/encoding/
## TextEncoder
### Background
The `TextEncoder` takes a stream of code points as input and emits a stream of bytes. Encoding types passed to the constructor are ignored and a UTF-8 `TextEncoder` is created.
[`TextEncoder()`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder/TextEncoder) returns a newly constructed `TextEncoder` that generates a byte stream with UTF-8 encoding. `TextEncoder` takes no parameters and throws no exceptions.
### Constructor
```js
let encoder = new TextEncoder();
```
### Properties
* `encoder.encoding` DOMString read-only
* The name of the encoder as a string describing the method the `TextEncoder` uses (always `utf-8`).
### Methods
* encode(input USVString) : Uint8Array
* Encodes a string input.
***
## TextDecoder
### Background
The `TextDecoder` interface represents a UTF-8 decoder. Decoders take a stream of bytes as input and emit a stream of code points.
[`TextDecoder()`](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/TextDecoder) returns a newly constructed `TextDecoder` that generates a code-point stream.
### Constructor
```js
let decoder = new TextDecoder();
```
### Properties
* `decoder.encoding` DOMString read-only
* The name of the decoder that describes the method the `TextDecoder` uses.
* `decoder.fatal` boolean read-only
* Indicates if the error mode is fatal.
* `decoder.ignoreBOM` boolean read-only
* Indicates if the byte-order marker is ignored.
### Methods
* `decode()` : DOMString
* Decodes using the method specified in the `TextDecoder` object. Learn more at [MDN’s `TextDecoder` documentation](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/decode).
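Together, the two classes round trip text through UTF-8 bytes:

```javascript
// UTF-8 round trip: encode a string to bytes, then decode it back.
const encoder = new TextEncoder();
const bytes = encoder.encode("héllo"); // Uint8Array; "é" encodes to 2 bytes
console.log(bytes.length); // 6

const decoder = new TextDecoder();
const text = decoder.decode(bytes);
console.log(text); // "héllo"
console.log(encoder.encoding); // "utf-8"
```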
---
# Fetch
URL: https://developers.cloudflare.com/workers/runtime-apis/fetch/
import { TabItem, Tabs } from "~/components"
The [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) provides an interface for asynchronously fetching resources via HTTP requests inside of a Worker.
:::note
Asynchronous tasks such as `fetch` must be executed within a [handler](/workers/runtime-apis/handlers/). If you try to call `fetch()` within [global scope](https://developer.mozilla.org/en-US/docs/Glossary/Global_scope), your Worker will throw an error. Learn more about [the Request context](/workers/runtime-apis/request/#the-request-context).
:::
:::caution[Worker to Worker]
Worker-to-Worker `fetch` requests are possible with [Service bindings](/workers/runtime-apis/bindings/service-bindings/).
:::
## Syntax
```js null {3-7}
export default {
async scheduled(event, env, ctx) {
return await fetch("https://example.com", {
headers: {
"X-Source": "Cloudflare-Workers",
},
});
},
};
```
```js null {8}
addEventListener('fetch', event => {
// NOTE: can’t use fetch here, as we’re not in an async scope yet
event.respondWith(eventHandler(event));
});
async function eventHandler(event) {
// fetch can be awaited here since `event.respondWith()` waits for the Promise it receives to settle
const resp = await fetch(event.request);
return resp;
}
```
* fetch(resource, options optional) : Promise`<Response>`
* Fetch returns a promise to a Response.
### Parameters
* [`resource`](https://developer.mozilla.org/en-US/docs/Web/API/fetch#resource) Request | string | URL
* `options` options optional
  * An object that defines the content and behavior of the request. It may contain:
    * `cache` `undefined | 'no-store'` optional
      * Standard fetch `cache` option. Only `cache: 'no-store'` is supported.
      Any other value will result in a `TypeError` with the message `Unsupported cache mode: `.
      * For all requests this forwards the `Pragma: no-cache` and `Cache-Control: no-cache` headers to the origin.
      * For requests to origins not hosted by Cloudflare, `no-store` bypasses the use of Cloudflare's caches.
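For example, a subrequest can opt out of caching with the only supported mode (a sketch; the helper name and URL are illustrative):

```javascript
// Sketch: bypass Cloudflare's cache for this subrequest.
async function fetchUncached(url) {
  return fetch(url, {
    cache: "no-store", // the only supported value; others throw a TypeError
  });
}
```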
***
## How the `Accept-Encoding` header is handled
When making a subrequest with the `fetch()` API, you can specify which forms of compression to prefer that the server will respond with (if the server supports it) by including the [`Accept-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding) header.
Workers supports both the gzip and brotli compression algorithms. Usually it is not necessary to specify `Accept-Encoding` or `Content-Encoding` headers in the Workers Runtime production environment – brotli or gzip compression is automatically requested when fetching from an origin and applied to the response when returning data to the client, depending on the capabilities of the client and origin server.
To support requesting brotli from the origin, you must enable the [`brotli_content_encoding`](/workers/configuration/compatibility-flags/#brotli-content-encoding-support) compatibility flag in your Worker. Soon, this compatibility flag will be enabled by default for all Workers past an upcoming compatibility date.
### Passthrough behavior
One scenario where the Accept-Encoding header is useful is for passing through compressed data from a server to the client, where the Accept-Encoding allows the worker to directly receive the compressed data stream from the server without it being decompressed beforehand. As long as you do not read the body of the compressed response prior to returning it to the client and keep the `Content-Encoding` header intact, it will "pass through" without being decompressed and then recompressed again. This can be helpful when using Workers in front of origin servers or when fetching compressed media assets, to ensure that the same compression used by the origin server is used in the response that your Worker returns.
In addition to a change in the content encoding, recompression is also needed when a response uses an encoding not supported by the client. As an example, when a Worker requests either brotli or gzip as the encoding but the client only supports gzip, recompression will still be needed if the server returns brotli-encoded data to the Worker (and will be applied automatically). Note that this behavior may also vary based on the [compression rules](/rules/compression-rules/), which can be used to configure what compression should be applied for different types of data on the server side.
```typescript
export default {
async fetch(request) {
// Accept brotli or gzip compression
const headers = new Headers({
'Accept-Encoding': "br, gzip"
});
let response = await fetch("https://developers.cloudflare.com", {method: "GET", headers});
// As long as the original response body is returned and the Content-Encoding header is
// preserved, the same encoded data will be returned without needing to be compressed again.
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: response.headers,
});
}
}
```
## Related resources
* [Example: use `fetch` to respond with another site](/workers/examples/respond-with-another-site/)
* [Example: Fetch HTML](/workers/examples/fetch-html/)
* [Example: Fetch JSON](/workers/examples/fetch-json/)
* [Example: cache using Fetch](/workers/examples/cache-using-fetch/)
* Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.
---
# Headers
URL: https://developers.cloudflare.com/workers/runtime-apis/headers/
## Background
All HTTP request and response headers are available through the [Headers API](https://developer.mozilla.org/en-US/docs/Web/API/Headers).
When a header name has multiple values, those values are concatenated into a single, comma-delimited string value. This means that `Headers.get` will always return either a string or a `null` value. This applies to all header names except for `Set-Cookie`, which requires `Headers.getAll`. This is documented below in [Differences](#differences).
```js
let headers = new Headers();
headers.get('x-foo'); //=> null
headers.set('x-foo', '123');
headers.get('x-foo'); //=> "123"
headers.set('x-foo', 'hello');
headers.get('x-foo'); //=> "hello"
headers.append('x-foo', 'world');
headers.get('x-foo'); //=> "hello, world"
```
## Differences
* Although the `Headers.getAll` method has been made obsolete, Cloudflare still offers it, but only for use with the `Set-Cookie` header. This is because cookies will often contain date strings, which include commas, making it harder to parse multiple values out of a single `Set-Cookie` header. Any attempt to use `Headers.getAll` with other header names will throw an error. A brief history of `Headers.getAll` is available in this [GitHub issue](https://github.com/whatwg/fetch/issues/973).
* Due to [RFC 6265](https://www.rfc-editor.org/rfc/rfc6265) prohibiting folding multiple `Set-Cookie` headers into a single header, the `Headers.append` method will allow you to set multiple `Set-Cookie` response headers instead of appending the value onto the existing header.
```js
const headers = new Headers();
headers.append("Set-Cookie", "cookie1=value_for_cookie_1; Path=/; HttpOnly;");
headers.append("Set-Cookie", "cookie2=value_for_cookie_2; Path=/; HttpOnly;");
console.log(headers.getAll("Set-Cookie"));
// Array(2) [ cookie1=value_for_cookie_1; Path=/; HttpOnly;, cookie2=value_for_cookie_2; Path=/; HttpOnly; ]
```
* In Cloudflare Workers, the `Headers.get` method returns a [`USVString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) instead of the [`ByteString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) specified by the spec. For most scenarios, this should have no noticeable effect. To compare the differences between these two string classes, refer to this [Playground example](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbMutvvsCMALAJx-cAzAHZeANkG8AHAAZOU7t2EBWAEy9eqsXNWdOALg5HjbHv34jxk2fMUr1m7Z12cAsACgAwuioQApr7YACJQAM4w6KFQ0D76JBhYeATEJFRwwH4MAERQNH4AHgB0AFahWaSoUGAB6Zk5eUWlWR7evgEQ2AAqdDB+cXAwMGBQAMYEUD7IxXAAbnChIwiwEADUwOi44H4eHgURSCS4fqhw4BAkAN7uAJDzdFQj8X4QIwAWABQIfgCOIH6hEAAlJcbtdqucxucGCQsoBeDcAXHtZUHgkggCCoKSeAgkaFUPwAdxInQKEAAog8Nn4EO9AYUAiNKe9IYDkc8SPTKbgsVCSABlCBLKgAc0KqAQ6GAnleiG8R3ehQVaIx3JZoIZVFC6GqhTA6CF7yynVeYRIJrgJAAqryAGr8wVCkj46KvEjmyH6LIAGhIzLVPk12t1+sNxtCprD5oAQnR-Hbcg6nRAXW7sT5LZ0AGLYKQe70co5cgiq67XZDIEgACT8cCOCAjXxIoRAg0iflwJAg6EdmAA1iQfGA6I7nSRo7GBfHQt6yGj+yAEKCy6bgEM-BlfOM0yBQv9LTa48LQoUiaHUiSSMM8cOwGASDBBec4Ivy-jEFR466KLOk2FCqzzq81a1mGuIEpWQFUqE7wXDC+ZttgkJZHEcGFucAC+xbXF8EDzlQZ6EgASv8EQan4BpSn4Ix9pQ5xJn4JAAAatAGfgMa6NAdoBJBEeE-r0YBNaQR2XY7vRdFzhAMCzgyK6IGE-qFF6lwkAJwEkBhNxoe4aEeCYelGGYAiWBI0hyAoShqBoWg6HoLQ+P4gQhLxUQxFQcQJDg+CEKQaQZNkGSEF5cDlPEVQ1H5WRkLqZDNF49ntF0PR9K6gzDJCExUFMmpUDs7gXFkwBwLkAD66ybNUSH1EcjRlDp7j6Q1rCGRYogmTY5n2FZTguMwHhAA).
## Cloudflare headers
Cloudflare sets a number of its own custom headers on incoming requests and outgoing responses. While some may be used for its own tracking and bookkeeping, many of these can be useful to your own applications – or Workers – too.
For a list of documented Cloudflare request headers, refer to [Cloudflare HTTP headers](/fundamentals/reference/http-headers/).
## Related resources
* [Logging headers to console](/workers/examples/logging-headers/) - Review how to log headers in the console.
* [Cloudflare HTTP headers](/fundamentals/reference/http-headers/) - Contains a list of specific headers that Cloudflare adds.
---
# HTMLRewriter
URL: https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/
import { Render } from "~/components";
## Background
The `HTMLRewriter` class allows developers to build comprehensive and expressive HTML parsers inside of a Cloudflare Workers application. It can be thought of as a jQuery-like experience directly inside of your Workers application. Leaning on a powerful JavaScript API to parse and transform HTML, `HTMLRewriter` allows developers to build deeply functional applications.
The `HTMLRewriter` class should be instantiated once in your Workers script, with a number of handlers attached using the `on` and `onDocument` functions.
---
## Constructor
```js
new HTMLRewriter()
.on("*", new ElementHandler())
.onDocument(new DocumentHandler());
```
---
## Global types
Throughout the `HTMLRewriter` API, there are a few consistent types that many properties and methods use:
- `Content` string | Response | ReadableStream
- Content inserted in the output stream should be a string, [`Response`](/workers/runtime-apis/response/), or [`ReadableStream`](/workers/runtime-apis/streams/readablestream/).
- `ContentOptions` Object
- `{ html: Boolean }` Controls the way the HTMLRewriter treats inserted content. If the `html` boolean is set to true, content is treated as raw HTML. If the `html` boolean is set to false or not provided, content will be treated as text and proper HTML escaping will be applied to it.
---
## Handlers
There are two handler types that can be used with `HTMLRewriter`: element handlers and document handlers.
### Element Handlers
An element handler responds to any incoming element that matches the selector it was attached with, using the `.on` function of an `HTMLRewriter` instance. An element handler can define `element`, `comments`, and `text` functions. The following example processes `div` elements with an `ElementHandler` class.
```js
class ElementHandler {
element(element) {
// An incoming element, such as `div`
console.log(`Incoming element: ${element.tagName}`);
}
comments(comment) {
// An incoming comment
}
text(text) {
// An incoming piece of text
}
}
async function handleRequest(req) {
const res = await fetch(req);
return new HTMLRewriter().on("div", new ElementHandler()).transform(res);
}
```
### Document Handlers
A document handler represents the incoming HTML document. A number of functions can be defined on a document handler to query and manipulate a document’s `doctype`, `comments`, `text`, and `end`. Unlike an element handler, a document handler’s `doctype`, `comments`, `text`, and `end` functions are not scoped by a particular selector. A document handler's functions are called for all the content on the page including the content outside of the top-level HTML tag:
```js
class DocumentHandler {
doctype(doctype) {
// An incoming doctype, such as <!DOCTYPE html>
}
comments(comment) {
// An incoming comment
}
text(text) {
// An incoming piece of text
}
end(end) {
// The end of the document
}
}
```
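To attach a document handler, use `onDocument` rather than `on`. The following is a minimal sketch; the handler name, comment text, and injected markup are all illustrative:

```javascript
// Hypothetical document handler: strips HTML comments from the page and
// appends a marker after the document ends.
class CleanupHandler {
  comments(comment) {
    // Remove every comment on the page.
    comment.remove();
  }
  end(end) {
    // Append raw HTML at the very end of the document.
    end.append("<!-- served by a Worker -->", { html: true });
  }
}

// Inside a Worker's fetch handler:
async function handleRequest(req) {
  const res = await fetch(req);
  return new HTMLRewriter().onDocument(new CleanupHandler()).transform(res);
}
```

Because document handler functions are not scoped by a selector, `CleanupHandler` above runs for every comment on the page, wherever it appears.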
#### Async Handlers
All functions defined on both element and document handlers can return either `void` or a `Promise`. Making your handler function `async` allows you to access external resources such as an API via fetch, Workers KV, Durable Objects, or the cache.
```js
class UserElementHandler {
async element(element) {
let response = await fetch(new Request("/user"));
// fill in user info using response
}
}
async function handleRequest(req) {
const res = await fetch(req);
// run the user element handler via HTMLRewriter on a div with ID `user_info`
return new HTMLRewriter()
.on("div#user_info", new UserElementHandler())
.transform(res);
}
```
### Element
The `element` argument, used only in element handlers, is a representation of a DOM element. A number of methods exist on an element to query and manipulate it:
#### Properties
- `tagName` string
- The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element’s tag.
- `attributes` Iterator read-only
- A `[name, value]` pair of the tag’s attributes.
- `removed` boolean
- Indicates whether the element has been removed or replaced by one of the previous handlers.
- `namespaceURI` String
- Represents the [namespace URI](https://infra.spec.whatwg.org/#namespaces) of an element.
#### Methods
- getAttribute(name string) : string | null
- Returns the value for a given attribute name on the element, or `null` if it is not found.
- hasAttribute(name string) : boolean
- Returns a boolean indicating whether an attribute exists on the element.
- setAttribute(name string, value string) : Element
- Sets an attribute to a provided value, creating the attribute if it does not exist.
- removeAttribute(name string) : Element
- Removes the attribute.
- before(content Content, contentOptions ContentOptions optional) : Element
- Inserts content before the element.
- after(content Content, contentOptions ContentOptions optional) : Element
- Inserts content right after the element.
- prepend(content Content, contentOptions ContentOptions optional) : Element
- Inserts content right after the start tag of the element.
- append(content Content, contentOptions ContentOptions optional) : Element
- Inserts content right before the end tag of the element.
- replace(content Content, contentOptions ContentOptions optional) : Element
- Removes the element and inserts content in place of it.
- setInnerContent(content Content, contentOptions ContentOptions optional) : Element
- Replaces content of the element.
- remove() : Element
- Removes the element with all its content.
- removeAndKeepContent() : Element
- Removes the start tag and end tag of the element but keeps its inner content intact.
- onEndTag(handler Function) : void
- Registers a handler that is invoked when the end tag of the element is reached.
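A short sketch exercising a few of these methods inside an element handler. The attribute values and handler name are illustrative, not prescribed by the API:

```javascript
// Hypothetical handler: upgrade insecure links and ensure a rel attribute.
class LinkHandler {
  element(element) {
    const href = element.getAttribute("href");
    if (href && href.startsWith("http://")) {
      // setAttribute overwrites the existing value.
      element.setAttribute("href", "https://" + href.slice("http://".length));
    }
    if (!element.hasAttribute("rel")) {
      element.setAttribute("rel", "noopener");
    }
  }
}
```

Attached with `new HTMLRewriter().on("a", new LinkHandler())`, the `element` function above runs once per matching start tag.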
### EndTag
The `endTag` argument, used only in handlers registered with `element.onEndTag`, is a limited representation of a DOM element.
#### Properties
- `name` string
- The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element’s tag.
#### Methods
- before(content Content, contentOptions ContentOptions optional) : EndTag
- Inserts content right before the end tag.
- after(content Content, contentOptions ContentOptions optional) : EndTag
- Inserts content right after the end tag.
- remove() : EndTag
- Removes the element with all its content.
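An `EndTag` is obtained by registering a callback with `element.onEndTag` from an element handler. The following sketch injects markup just before a closing tag; the handler name and script URL are illustrative:

```javascript
// Hypothetical handler: inject a script immediately before </body>.
class BodyHandler {
  element(element) {
    element.onEndTag((endTag) => {
      // `before` on an EndTag inserts content just inside the element.
      endTag.before('<script src="/analytics.js"></script>', { html: true });
    });
  }
}
```

This pattern is useful when content must land at the end of an element whose children are not known until the whole element has streamed through.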
### Text chunks
Since Cloudflare performs zero-copy streaming parsing, text chunks are not the same thing as text nodes in the lexical tree. A lexical tree text node can be represented by multiple chunks, as they arrive over the wire from the origin.
Consider the following markup: `<div>Hey. How are you?</div>`. It is possible that the Workers script will not receive the entire text node from the origin at once; instead, the `text` element handler will be invoked for each received part of the text node. For example, the handler might be invoked with `"Hey. How "`, then `"are you?"`. When the last chunk arrives, the text's `lastInTextNode` property will be set to `true`. Developers should make sure to concatenate these chunks together.
#### Properties
- `removed` boolean
- Indicates whether the element has been removed or replaced by one of the previous handlers.
- `text` string read-only
- The text content of the chunk. Could be empty if the chunk is the last chunk of the text node.
- `lastInTextNode` boolean read-only
- Specifies whether the chunk is the last chunk of the text node.
#### Methods
- before(content Content, contentOptions ContentOptions optional) : Element
- Inserts content before the element.
- after(content Content, contentOptions ContentOptions optional) : Element
- Inserts content right after the element.
- replace(content Content, contentOptions ContentOptions optional) : Element
- Removes the element and inserts content in place of it.
- remove() : Element
- Removes the element with all its content.
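The concatenation advice above can be sketched as a handler that buffers chunks until `lastInTextNode` is set, then rewrites the full text node in one piece. The handler name and replacement string are illustrative:

```javascript
// Hypothetical handler: rewrite a greeting once the whole text node has
// arrived. Earlier chunks are removed; their contents live on in the buffer.
class GreetingRewriter {
  constructor() {
    this.buffer = "";
  }
  text(text) {
    this.buffer += text.text;
    if (text.lastInTextNode) {
      // The complete text node is now buffered; emit it once.
      text.replace(this.buffer.replace("Hey.", "Hello."));
      this.buffer = "";
    } else {
      // Drop this partial chunk from the output stream.
      text.remove();
    }
  }
}
```

Without buffering, a string match like `"Hey."` could be split across two chunks and silently missed.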
### Comments
The `comments` function on an element handler allows developers to query and manipulate HTML comment tags.
```js
class ElementHandler {
comments(comment) {
// An incoming comment element, such as <!-- my comment -->
}
}
```
#### Properties
- `comment.removed` boolean
- Indicates whether the element has been removed or replaced by one of the previous handlers.
- `comment.text` string
- The text of the comment. This property can be assigned different values, to modify comment’s text.
#### Methods
- before(content Content, contentOptions ContentOptions optional) : Element
- Inserts content before the element.
- after(content Content, contentOptions ContentOptions optional) : Element
- Inserts content right after the element.
- replace(content Content, contentOptions ContentOptions optional) : Element
- Removes the element and inserts content in place of it.
- remove() : Element
- Removes the element with all its content.
### Doctype
The `doctype` function on a document handler allows developers to query a document’s [doctype](https://developer.mozilla.org/en-US/docs/Glossary/Doctype).
```js
class DocumentHandler {
doctype(doctype) {
// An incoming doctype element, such as
// <!DOCTYPE html>
}
}
```
#### Properties
- `doctype.name` string | null read-only
- The doctype name.
- `doctype.publicId` string | null read-only
- The quoted string in the doctype after the PUBLIC atom.
- `doctype.systemId` string | null read-only
- The quoted string in the doctype after the SYSTEM atom or immediately after the `publicId`.
### End
The `end` function on a document handler allows developers to append content to the end of a document.
```js
class DocumentHandler {
end(end) {
// The end of the document
}
}
```
#### Methods
- append(content Content, contentOptions ContentOptions optional) : DocumentEnd
- Inserts content after the end of the document.
---
## Selectors
Selectors determine which elements an element handler receives. `HTMLRewriter` supports the following subset of CSS selector syntax:
- `*`
- Any element.
- `E`
- Any element of type E.
- `E:nth-child(n)`
- An E element, the n-th child of its parent.
- `E:first-child`
- An E element, first child of its parent.
- `E:nth-of-type(n)`
- An E element, the n-th sibling of its type.
- `E:first-of-type`
- An E element, first sibling of its type.
- `E:not(s)`
- An E element that does not match the compound selector s.
- `E.warning`
- An E element belonging to the class warning.
- `E#myid`
- An E element with ID equal to myid.
- `E[foo]`
- An E element with a foo attribute.
- `E[foo="bar"]`
- An E element whose foo attribute value is exactly equal to bar.
- `E[foo="bar" i]`
- An E element whose foo attribute value is exactly equal to any (ASCII-range) case-permutation of bar.
- `E[foo="bar" s]`
- An E element whose foo attribute value is exactly and case-sensitively equal to bar.
- `E[foo~="bar"]`
- An E element whose foo attribute value is a list of whitespace-separated values, one of which is exactly equal to bar.
- `E[foo^="bar"]`
- An E element whose foo attribute value begins exactly with the string bar.
- `E[foo$="bar"]`
- An E element whose foo attribute value ends exactly with the string bar.
- `E[foo*="bar"]`
- An E element whose foo attribute value contains the substring bar.
- `E[foo|="en"]`
- An E element whose foo attribute value is a hyphen-separated list of values beginning with en.
- `E F`
- An F element descendant of an E element.
- `E > F`
- An F element child of an E element.
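Selectors compose with handlers through repeated `.on` calls. The following sketch uses a few of the forms above; the handler is a stub, and the class name and attribute values are illustrative:

```javascript
class ElementHandler {
  element(element) {
    // ... inspect or rewrite the matched element
  }
}

// A few selector forms from the table above:
const selectors = [
  "div.warning",         // class selector
  'a[href^="http:"]',    // attribute value prefix match
  "ul > li:first-child", // child combinator + pseudo-class
];

function buildRewriter() {
  let rewriter = new HTMLRewriter();
  for (const selector of selectors) {
    rewriter = rewriter.on(selector, new ElementHandler());
  }
  return rewriter;
}
```

Each `.on` call registers an independent handler, so one transform pass can apply many selector-scoped rewrites in a single stream.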
---
## Errors
If a handler throws an exception, parsing is immediately halted, the transformed response body is errored with the thrown exception, and the untransformed response body is canceled (closed). If the transformed response body was already partially streamed back to the client, the client will see a truncated response.
```js
async function handle(request) {
let oldResponse = await fetch(request);
let newResponse = new HTMLRewriter()
.on("*", {
element(element) {
throw new Error("A really bad error.");
},
})
.transform(oldResponse);
// At this point, an expression like `await newResponse.text()`
// will throw `new Error("A really bad error.")`.
// Thereafter, any use of `newResponse.body` will throw the same error,
// and `oldResponse.body` will be closed.
// Alternatively, this will produce a truncated response to the client:
return newResponse;
}
```
---
## Related resources
- [Introducing `HTMLRewriter`](https://blog.cloudflare.com/introducing-htmlrewriter/)
- [Tutorial: Localize a Website](/pages/tutorials/localize-a-website/)
- [Example: rewrite links](/workers/examples/rewrite-links/)
- [Example: Inject Turnstile](/workers/examples/turnstile-html-rewriter/)
---
# Runtime APIs
URL: https://developers.cloudflare.com/workers/runtime-apis/
import { DirectoryListing } from "~/components";
The Workers runtime is designed to be [JavaScript standards compliant](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable. Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG](https://wintercg.org/) JavaScript runtimes.
[Workers runtime features](/workers/runtime-apis/) are [compatible with a subset of Node.js APIs](/workers/runtime-apis/nodejs) and support setting a [compatibility date or compatibility flag](/workers/configuration/compatibility-dates/).
---
# Performance and timers
URL: https://developers.cloudflare.com/workers/runtime-apis/performance/
## Background
The Workers runtime supports a subset of the [`Performance` API](https://developer.mozilla.org/en-US/docs/Web/API/Performance), used to measure timing and performance, as well as timing of subrequests and other operations.
### `performance.now()`
The [`performance.now()` method](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) returns a timestamp in milliseconds, representing the time elapsed since `performance.timeOrigin`.
When Workers are deployed to Cloudflare, as a security measure to [mitigate against Spectre attacks](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading), APIs that return timers, including [`performance.now()`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) and [`Date.now()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now), only advance or increment after I/O occurs. Consider the following examples:
```typescript title="Time is frozen — start will have the exact same value as end."
const start = performance.now();
for (let i = 0; i < 1e6; i++) {
// do expensive work
}
const end = performance.now();
const timing = end - start; // 0
```
```typescript title="Time advances, because a subrequest has occurred between start and end."
const start = performance.now();
const response = await fetch("https://developers.cloudflare.com/");
const end = performance.now();
const timing = end - start; // duration of the subrequest to developers.cloudflare.com
```
By wrapping a subrequest in calls to `performance.now()` or `Date.now()` APIs, you can measure the timing of a subrequest, fetching a key from KV, an object from R2, or any other form of I/O in your Worker.
In local development, however, timers will increment regardless of whether I/O happens or not. This means that if you need to measure timing of a piece of code that is CPU intensive, that does not involve I/O, you can run your Worker locally, via [Wrangler](/workers/wrangler/), which uses the open-source Workers runtime, [workerd](https://github.com/cloudflare/workerd) — the same runtime that your Worker runs in when deployed to Cloudflare.
### `performance.timeOrigin`
The [`performance.timeOrigin`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin) API is a read-only property that returns a baseline timestamp to base other measurements off of.
In the Workers runtime, the `timeOrigin` property returns 0.
---
# Request
URL: https://developers.cloudflare.com/workers/runtime-apis/request/
import { Type, MetaInfo } from "~/components";
The [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request) interface represents an HTTP request and is part of the [Fetch API](/workers/runtime-apis/fetch/).
## Background
The most common way you will encounter a `Request` object is as a property of an incoming request:
```js null {2}
export default {
async fetch(request, env, ctx) {
return new Response('Hello World!');
},
};
```
You may also want to construct a `Request` yourself when you need to modify a request object, because the incoming `request` parameter that you receive from the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) is immutable.
```js
export default {
async fetch(request, env, ctx) {
const url = "https://example.com";
const modifiedRequest = new Request(url, request);
// ...
},
};
```
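Copying the incoming request also makes its headers mutable, which the original's are not. A minimal sketch, with an illustrative header name:

```javascript
// Clone the immutable incoming request; the copy's headers can be modified.
function withExtraHeader(request, name, value) {
  const modified = new Request(request);
  modified.headers.set(name, value);
  return modified;
}
```

The original request is left untouched, so later code can still read its unmodified headers.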
The [`fetch() handler`](/workers/runtime-apis/handlers/fetch/) invokes the `Request` constructor. The [`RequestInit`](#options) and [`RequestInitCfProperties`](#the-cf-property-requestinitcfproperties) types defined below also describe the valid parameters that can be passed to the [`fetch() handler`](/workers/runtime-apis/handlers/fetch/).
***
## Constructor
```js
let request = new Request(input, options)
```
### Parameters
* `input` string | Request
* Either a string that contains a URL, or an existing `Request` object.
* `options` options optional
* Optional options object that contains settings to apply to the `Request`.
#### `options`
An object containing properties that you want to apply to the request.
* `cache` `undefined | 'no-store'` optional
* The standard fetch `cache` mode. Only `cache: 'no-store'` is supported.
Any other cache mode will result in a `TypeError` with the message `Unsupported cache mode: `.
* `cf` RequestInitCfProperties optional
* Cloudflare-specific properties that can be set on the `Request` that control how Cloudflare’s global network handles the request.
* `method`
* The HTTP request method. The default is `GET`. In Workers, all [HTTP request methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods) are supported, except for [`CONNECT`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT).
* `headers` Headers optional
* A [`Headers` object](https://developer.mozilla.org/en-US/docs/Web/API/Headers).
* `body` string | ReadableStream | FormData | URLSearchParams optional
* The request body, if any.
* Note that a request using the GET or HEAD method cannot have a body.
* `redirect`
* The redirect mode to use: `follow`, `error`, or `manual`. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`.
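The options above can be combined when constructing a request. A sketch, with an illustrative URL and body:

```javascript
// Build a POST request using several of the options described above.
const request = new Request("https://example.com/api", {
  method: "POST",
  headers: new Headers({ "Content-Type": "application/json" }),
  body: JSON.stringify({ hello: "world" }),
  redirect: "follow",
});
```

The resulting object exposes each option back as a read-only property (`request.method`, `request.redirect`, and so on).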
#### The `cf` property (`RequestInitCfProperties`)
An object containing Cloudflare-specific properties that can be set on the `Request` object. For example:
```js
// Disable ScrapeShield for this request.
fetch(event.request, { cf: { scrapeShield: false } })
```
Invalid or incorrectly-named keys in the `cf` object will be silently ignored. Consider using TypeScript and [`@cloudflare/workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types) to ensure proper use of the `cf` object.
* `apps`
* Whether [Cloudflare Apps](https://www.cloudflare.com/apps/) should be enabled for this request. Defaults to `true`.
* `cacheEverything`
* Treats all content as static and caches all [file types](/cache/concepts/default-cache-behavior#default-cached-file-extensions) beyond the Cloudflare default cached content. Respects cache headers from the origin web server. This is equivalent to setting the Page Rule [**Cache Level** (to **Cache Everything**)](/rules/page-rules/reference/settings/). Defaults to `false`.
This option applies to `GET` and `HEAD` request methods only.
* `cacheKey`
* A request’s cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both.
* `cacheTags` `Array<string>` optional
* This option appends additional [**Cache-Tag**](/cache/how-to/purge-cache/purge-by-tags/) headers to the response from the origin server. This allows for purges of cached content based on tags provided by the Worker, without modifications to the origin server. This is performed using the [**Purge by Tag**](/cache/how-to/purge-cache/purge-by-tags/#purge-using-cache-tags) feature, which is currently only available to Enterprise zones. If this option is used in a non-Enterprise zone, the additional headers will not be appended.
* `cacheTtl`
* This option forces Cloudflare to cache the response for this request, regardless of what headers are seen on the response. This is equivalent to setting two Page Rules: [**Edge Cache TTL**](/cache/how-to/edge-browser-cache-ttl/) and [**Cache Level** (to **Cache Everything**)](/rules/page-rules/reference/settings/). The value must be zero or a positive number. A value of `0` indicates that the cached asset expires immediately. This option applies to `GET` and `HEAD` request methods only.
* `cacheTtlByStatus` `{ [key: string]: number }` optional
* This option is a version of the `cacheTtl` feature which chooses a TTL based on the response’s status code. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time and override cache directives sent by the origin. For example: `{ "200-299": 86400, "404": 1, "500-599": 0 }`. The value can be any integer, including zero and negative integers. A value of `0` indicates that the cached asset expires immediately. Any negative value instructs Cloudflare not to cache at all. This option applies to `GET` and `HEAD` request methods only.
* `image` Object | null optional
* Enables [Image Resizing](/images/transform-images/) for this request. The possible values are described in [Transform images via Workers](/images/transform-images/transform-via-workers/) documentation.
* `mirage`
* Whether [Mirage](https://www.cloudflare.com/website-optimization/mirage/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`.
* `polish`
* Sets [Polish](https://blog.cloudflare.com/introducing-polish-automatic-image-optimizati/) mode. The possible values are `lossy`, `lossless` or `off`.
* `resolveOverride`
* Directs the request to an alternate origin server by overriding the DNS lookup. The value of `resolveOverride` specifies an alternate hostname which will be used when determining the origin IP address, instead of using the hostname specified in the URL. The `Host` header of the request will still match what is in the URL. Thus, `resolveOverride` allows a request to be sent to a different server than the URL / `Host` header specifies. However, `resolveOverride` will only take effect if both the URL host and the host specified by `resolveOverride` are within your zone. If either specifies a host from a different zone / domain, then the option will be ignored for security reasons. If you need to direct a request to a host outside your zone (while keeping the `Host` header pointing within your zone), first create a CNAME record within your zone pointing to the outside host, and then set `resolveOverride` to point at the CNAME record. Note that, for security reasons, it is not possible to set the `Host` header to specify a host outside of your zone unless the request is actually being sent to that host.
* `scrapeShield`
* Whether [ScrapeShield](https://blog.cloudflare.com/introducing-scrapeshield-discover-defend-dete/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`.
* `webp`
* Enables or disables [WebP](https://blog.cloudflare.com/a-very-webp-new-year-from-cloudflare/) image format in [Polish](/images/polish/).
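A sketch combining several of the caching options above on a subrequest. These options only take effect when the Worker runs on Cloudflare and the fetch traverses the Cloudflare network; the URL, TTL values, and tag name are illustrative:

```javascript
// Force edge caching on a subrequest, with per-status TTLs and a purge tag.
function fetchWithEdgeCache(url) {
  return fetch(url, {
    cf: {
      cacheEverything: true,
      // Cache 2xx responses for a day, 404s for one second, never cache 5xx.
      cacheTtlByStatus: { "200-299": 86400, "404": 1, "500-599": 0 },
      cacheTags: ["worker-assets"], // Purge by Tag is Enterprise-only
    },
  });
}
```

Remember that invalid or misspelled keys in `cf` are silently ignored, so typed bindings from `@cloudflare/workers-types` are worth using here.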
***
## Properties
All properties of an incoming `Request` object (the request you receive from the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/)) are read-only. To modify the properties of an incoming request, create a new `Request` object and pass the options to modify to its [constructor](#constructor).
* `body` ReadableStream read-only
* Stream of the body contents.
* `bodyUsed` Boolean read-only
* Declares whether the body has been used in a response yet.
* `cf` IncomingRequestCfProperties read-only
* An object containing properties about the incoming request provided by Cloudflare’s global network.
* This property is read-only (unless created from an existing `Request`). To modify its values, pass in the new values on the [`cf` key of the `init` options argument](/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties) when creating a new `Request` object.
* `headers` Headers read-only
* A [`Headers` object](https://developer.mozilla.org/en-US/docs/Web/API/Headers).
* Compared to browsers, Cloudflare Workers imposes very few restrictions on what headers you are allowed to send. For example, a browser will not allow you to set the `Cookie` header, since the browser is responsible for handling cookies itself. Workers, however, has no special understanding of cookies, and treats the `Cookie` header like any other header.
:::caution
If the response is a redirect and the redirect mode is set to `follow` (see below), then all headers will be forwarded to the redirect destination, even if the destination is a different hostname or domain. This includes sensitive headers like `Cookie`, `Authorization`, or any application-specific headers. If this is not the behavior you want, you should set redirect mode to `manual` and implement your own redirect policy. Note that redirect mode defaults to `manual` for requests that originated from the Worker's client, so this warning only applies to `fetch()`es made by a Worker that are not proxying the original request.
:::
* `method` string read-only
* Contains the request’s method, for example, `GET`, `POST`, etc.
* `redirect` string read-only
* The redirect mode to use: `follow`, `error`, or `manual`. The `fetch` method will automatically follow redirects if the redirect mode is set to `follow`. If set to `manual`, the `3xx` redirect response will be returned to the caller as-is. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`.
* `url` string read-only
* Contains the URL of the request.
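As the caution above notes, `redirect: "follow"` forwards all headers to the redirect target. One hedged sketch of a manual policy that follows a single redirect only when it stays on the same origin; the logic is illustrative, not a complete redirect policy:

```javascript
// Follow one redirect, and only if it points back at the same origin.
async function fetchSameOriginRedirect(request) {
  const response = await fetch(request, { redirect: "manual" });
  const location = response.headers.get("Location");
  if (location && response.status >= 300 && response.status < 400) {
    // Resolve relative Location values against the original request URL.
    const target = new URL(location, request.url);
    if (target.origin === new URL(request.url).origin) {
      return fetch(new Request(target, request));
    }
  }
  // Cross-origin redirects are returned to the caller untouched.
  return response;
}
```

A production policy would also bound the number of hops and decide which headers to strip for cross-origin targets.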
### `IncomingRequestCfProperties`
In addition to the properties on the standard [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request) object, the `request.cf` object on an inbound `Request` contains information about the request provided by Cloudflare’s global network.
All plans have access to:
* `asn` Number
* ASN of the incoming request, for example, `395747`.
* `asOrganization` string
* The organization which owns the ASN of the incoming request, for example, `Google Cloud`.
* `botManagement` Object | null
* Only set when using Cloudflare Bot Management. Object with the following properties: `score`, `verifiedBot`, `staticResource`, `ja3Hash`, `ja4`, and `detectionIds`. Refer to [Bot Management Variables](/bots/reference/bot-management-variables/) for more details.
* `clientAcceptEncoding` string | null
* If Cloudflare replaces the value of the `Accept-Encoding` header, the original value is stored in the `clientAcceptEncoding` property, for example, `"gzip, deflate, br"`.
* `colo` string
* The three-letter [`IATA`](https://en.wikipedia.org/wiki/IATA_airport_code) airport code of the data center that the request hit, for example, `"DFW"`.
* `country` string | null
* Country of the incoming request. The two-letter country code in the request. This is the same value as that provided in the `CF-IPCountry` header, for example, `"US"`.
* `isEUCountry` string | null
* If the country of the incoming request is in the EU, this will return `"1"`. Otherwise, this property will be omitted.
* `httpProtocol` string
* HTTP Protocol, for example, `"HTTP/2"`.
* `requestPriority` string | null
* The browser-requested prioritization information in the request object, for example, `"weight=192;exclusive=0;group=3;group-weight=127"`.
* `tlsCipher` string
* The cipher for the connection to Cloudflare, for example, `"AEAD-AES128-GCM-SHA256"`.
* `tlsClientAuth` Object | null
* Only set when using Cloudflare Access or API Shield (mTLS). Object with the following properties: `certFingerprintSHA1`, `certFingerprintSHA256`, `certIssuerDN`, `certIssuerDNLegacy`, `certIssuerDNRFC2253`, `certIssuerSKI`, `certIssuerSerial`, `certNotAfter`, `certNotBefore`, `certPresented`, `certRevoked`, `certSKI`, `certSerial`, `certSubjectDN`, `certSubjectDNLegacy`, `certSubjectDNRFC2253`, `certVerified`.
* `tlsClientHelloLength` string
* The length, in bytes, of the client hello message sent in a [TLS handshake](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/), for example, `"508"`.
* `tlsClientRandom` string
* The value of the 32-byte random value provided by the client in a [TLS handshake](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). Refer to [RFC 8446](https://datatracker.ietf.org/doc/html/rfc8446#section-4.1.2) for more details.
* `tlsVersion` string
* The TLS version of the connection to Cloudflare, for example, `TLSv1.3`.
* `city` string | null
* City of the incoming request, for example, `"Austin"`.
* `continent` string | null
* Continent of the incoming request, for example, `"NA"`.
* `latitude` string | null
* Latitude of the incoming request, for example, `"30.27130"`.
* `longitude` string | null
* Longitude of the incoming request, for example, `"-97.74260"`.
* `postalCode` string | null
* Postal code of the incoming request, for example, `"78701"`.
* `metroCode` string | null
* Metro code (DMA) of the incoming request, for example, `"635"`.
* `region` string | null
* If known, the [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) name for the first level region associated with the IP address of the incoming request, for example, `"Texas"`.
* `regionCode` string | null
* If known, the [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) code for the first-level region associated with the IP address of the incoming request, for example, `"TX"`.
* `timezone` string
* Timezone of the incoming request, for example, `"America/Chicago"`.
:::caution
The `request.cf` object is not available in the Cloudflare Workers dashboard or Playground preview editor.
:::
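As a minimal sketch, a Worker can read a few of the `request.cf` fields listed above. The handler name here is hypothetical, and the fallback guards environments where `request.cf` is undefined (such as the preview editor noted in the caution):

```javascript
// Hypothetical handler: echoes selected request.cf fields as JSON.
// request.cf is only populated on Cloudflare's network, so fall back
// to an empty object when it is absent.
async function handleRequest(request) {
  const { country = null, colo = null, timezone = null } = request.cf ?? {};
  return new Response(JSON.stringify({ country, colo, timezone }), {
    headers: { "content-type": "application/json" },
  });
}

// In a Worker using module syntax, this would be wired up as:
//   export default { fetch: handleRequest };
```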
***
## Methods
### Instance methods
These methods are only available on an instance of a `Request` object or through its prototype.
* `clone()` : Promise\<Request>
* Creates a copy of the `Request` object.
* `arrayBuffer()` : Promise\<ArrayBuffer>
* Returns a promise that resolves with an [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) representation of the request body.
* `formData()` : Promise\<FormData>
* Returns a promise that resolves with a [`FormData`](https://developer.mozilla.org/en-US/docs/Web/API/FormData) representation of the request body.
* `json()` : Promise\<Object>