# Changelog URL: https://developers.cloudflare.com/workers-ai/changelog/ import { ProductReleaseNotes } from "~/components"; {/* <!-- Actual content lives in /src/content/release-notes/workers-ai.yaml. Update the file there for new entries to appear here. For more details, refer to https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/changelog/#yaml-file --> */} <ProductReleaseNotes /> --- # Demos and architectures URL: https://developers.cloudflare.com/workers-ai/demos/ import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components" Workers AI can be used to build dynamic and performant services. The following demo applications and reference architectures showcase how to use Workers AI optimally within your architecture. ## Demos Explore the following <GlossaryTooltip term="demo application">demo applications</GlossaryTooltip> for Workers AI. <ExternalResources type="apps" products={["Workers AI"]} /> ## Reference architectures Explore the following <GlossaryTooltip term="reference architecture">reference architectures</GlossaryTooltip> that use Workers AI: <ResourcesBySelector types={["reference-architecture","design-guide","reference-architecture-diagram"]} products={["Workers AI"]} /> --- # Glossary URL: https://developers.cloudflare.com/workers-ai/glossary/ import { Glossary } from "~/components"; Review the definitions for terms used across Cloudflare's Workers AI documentation. <Glossary product="workers-ai" /> --- # Overview URL: https://developers.cloudflare.com/workers-ai/ import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, Render, LinkButton, Flex } from "~/components" <Description> Run machine learning models, powered by serverless GPUs, on Cloudflare's global network. </Description> <Plan type="workers-all" /> Workers AI allows you to run AI models in a serverless way, without having to worry about scaling, maintaining, or paying for unused infrastructure. 
You can invoke models running on GPUs on Cloudflare's network from your own code — from [Workers](/workers/), [Pages](/pages/), or anywhere via [the Cloudflare API](/api/resources/ai/methods/run/). Workers AI gives you access to: - **50+ [open-source models](/workers-ai/models/)**, available as a part of our model catalog - Serverless, **pay-for-what-you-use** [pricing model](/workers-ai/platform/pricing/) - All as part of a **fully-featured developer platform**, including [AI Gateway](/ai-gateway/), [Vectorize](/vectorize/), [Workers](/workers/) and more... <div> <LinkButton href="/workers-ai/get-started">Get started</LinkButton> <LinkButton target="_blank" variant="secondary" icon="external" href="https://youtu.be/cK_leoJsBWY?si=4u6BIy_uBOZf9Ve8">Watch a Workers AI demo</LinkButton> </div> <Render file="custom_requirements" /> <Render file="file_issues" /> *** ## Features <Feature header="Models" href="/workers-ai/models/" cta="Browse models"> Workers AI comes with a curated set of popular open-source models that enable you to do tasks such as image classification, text generation, object detection and more. </Feature> *** ## Related products <RelatedProduct header="AI Gateway" href="/ai-gateway/" product="ai-gateway"> Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more. </RelatedProduct> <RelatedProduct header="Vectorize" href="/vectorize/" product="vectorize"> Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, anomaly detection or can be used to provide context and memory to an LLM. </RelatedProduct> <RelatedProduct header="Workers" href="/workers/" product="workers"> Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. 
</RelatedProduct> <RelatedProduct header="Pages" href="/pages/" product="pages"> Create full-stack applications that are instantly deployed to the Cloudflare global network. </RelatedProduct> <RelatedProduct header="R2" href="/r2/" product="r2"> Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. </RelatedProduct> <RelatedProduct header="D1" href="/d1/" product="d1"> Create new serverless SQL databases to query from your Workers and Pages projects. </RelatedProduct> <RelatedProduct header="Durable Objects" href="/durable-objects/" product="durable-objects"> A globally distributed coordination API with strongly consistent storage. </RelatedProduct> <RelatedProduct header="KV" href="/kv/" product="kv"> Create a global, low-latency, key-value data storage. </RelatedProduct> *** ## More resources <CardGrid> <LinkTitleCard title="Get started" href="/workers-ai/get-started/workers-wrangler/" icon="open-book"> Build and deploy your first Workers AI application. </LinkTitleCard> <LinkTitleCard title="Plans" href="/workers-ai/platform/pricing/" icon="seti:shell"> Learn about Free and Paid plans. </LinkTitleCard> <LinkTitleCard title="Limits" href="/workers-ai/platform/limits/" icon="document"> Learn about Workers AI limits. </LinkTitleCard> <LinkTitleCard title="Use cases" href="/use-cases/ai/" icon="document"> Learn how you can build and deploy ambitious AI applications to Cloudflare's global network. </LinkTitleCard> <LinkTitleCard title="Storage options" href="/workers/platform/storage-options/" icon="open-book"> Learn which storage option is best for your project. </LinkTitleCard> <LinkTitleCard title="Developer Discord" href="https://discord.cloudflare.com" icon="discord"> Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers. 
</LinkTitleCard> <LinkTitleCard title="@CloudflareDev" href="https://x.com/cloudflaredev" icon="x.com"> Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. </LinkTitleCard> </CardGrid> --- # Privacy URL: https://developers.cloudflare.com/workers-ai/privacy/ Cloudflare processes certain customer data in order to provide the Workers AI service, subject to our [Privacy Policy](https://www.cloudflare.com/privacypolicy/) and [Self-Serve Subscription Agreement](https://www.cloudflare.com/terms/) or [Enterprise Subscription Agreement](https://www.cloudflare.com/enterpriseterms/) (as applicable). Cloudflare neither creates nor trains the AI models made available on Workers AI. The models constitute Third-Party Services and may be subject to open source or other license terms that apply between you and the model provider. Be sure to review the license terms applicable to each model (if any). Your inputs (e.g., text prompts, image submissions, audio files, etc.), outputs (e.g., generated text/images, translations, etc.), embeddings, and training data constitute Customer Content. For Workers AI: * You own, and are responsible for, all of your Customer Content. * Cloudflare does not make your Customer Content available to any other Cloudflare customer. * Cloudflare does not use your Customer Content to (1) train any AI models made available on Workers AI or (2) improve any Cloudflare or third-party services, and would not do so unless we received your explicit consent. * Your Customer Content for Workers AI may be stored by Cloudflare if you specifically use a storage service (e.g., R2, KV, DO, Vectorize, etc.) in conjunction with Workers AI. --- # Errors URL: https://developers.cloudflare.com/workers-ai/workers-ai-errors/ Below is a list of Workers AI errors. 
| **Name**                              | **Internal Code** | **HTTP Code** | **Description**                                                                                                                                      |
| ------------------------------------- | ----------------- | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| No such model                         | `5007`            | `400`         | No such model `${model}` or task                                                                                                                      |
| Invalid data                          | `5004`            | `400`         | Invalid data type for base64 input: `${type}`                                                                                                         |
| Finetune missing required files       | `3039`            | `400`         | Finetune is missing required files `(model.safetensors and config.json)`                                                                              |
| Incomplete request                    | `3003`            | `400`         | Request is missing headers or body: `{what}`                                                                                                          |
| Account not allowed for private model | `5018`            | `403`         | The account is not allowed to access this model                                                                                                       |
| Model agreement                       | `5016`            | `403`         | User has not agreed to Llama3.2 model terms                                                                                                           |
| Account blocked                       | `3023`            | `403`         | Service unavailable for account                                                                                                                       |
| Account not allowed for private model | `3041`            | `403`         | The account is not allowed to access this model                                                                                                       |
| Invalid model ID                      | `3042`            | `404`         | The model name is invalid                                                                                                                             |
| Deprecated SDK version                | `5019`            | `405`         | Request trying to use deprecated SDK version                                                                                                          |
| LoRA unsupported                      | `5005`            | `405`         | The model `${this.model}` does not support LoRA inference                                                                                             |
| Timeout                               | `3007`            | `408`         | Request timeout                                                                                                                                       |
| Aborted                               | `3008`            | `408`         | Request was aborted                                                                                                                                   |
| Request too large                     | `3006`            | `413`         | Request is too large                                                                                                                                  |
| Account limited                       | `3036`            | `429`         | You have used up your daily free allocation of 10,000 neurons. Please upgrade to Cloudflare's Workers Paid plan if you would like to continue usage.  |
| Out of capacity                       | `3040`            | `429`         | No more data centers to forward the request to                                                                                                        |

---

# Glossary

URL: https://developers.cloudflare.com/workers/glossary/

import { Glossary } from "~/components";

Review the definitions for terms used across Cloudflare's Workers documentation.
<Glossary product="workers" /> --- # Demos and architectures URL: https://developers.cloudflare.com/workers/demos/ import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components" Learn how you can use Workers within your existing application and architecture. ## Demos Explore the following <GlossaryTooltip term="demo application">demo applications</GlossaryTooltip> for Workers. <ExternalResources type="apps" products={["Workers"]} /> ## Reference architectures Explore the following <GlossaryTooltip term="reference architecture">reference architectures</GlossaryTooltip> that use Workers: <ResourcesBySelector types={["reference-architecture","design-guide","reference-architecture-diagram"]} products={["Workers"]} /> --- # Overview URL: https://developers.cloudflare.com/workers/ import { CardGrid, Description, Feature, LinkButton, LinkTitleCard, Plan, RelatedProduct, Render, } from "~/components"; <Description> Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. </Description> <Plan type="all" /> Cloudflare Workers provides a [serverless](https://www.cloudflare.com/learning/serverless/what-is-serverless/) execution environment that allows you to create new applications or augment existing ones without configuring or maintaining infrastructure. Cloudflare Workers runs on [Cloudflare’s global network](https://www.cloudflare.com/network/) in hundreds of cities worldwide, offering both [Free and Paid plans](/workers/platform/pricing/). 
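In practice, a Worker is a small module that exports a `fetch` handler, which Cloudflare invokes for every request routed to it. The following is a minimal illustrative sketch (not a template from these docs):

```javascript
// Minimal module Worker: export an object with a fetch handler.
// Cloudflare runs this handler on its network for each incoming request.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    // Echo the requested path back as plain text.
    return new Response(`Hello from a Worker! You asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default worker;
```

Because there is no server to provision, deploying this handler is all that is needed to serve traffic globally.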
<LinkButton variant="primary" href="/workers/get-started/guide/"> Get started </LinkButton> <LinkButton variant="secondary" href="https://dash.cloudflare.com/?to=/:account/workers-and-pages/create" > Workers dashboard </LinkButton> --- ## Features <Feature header="Wrangler" href="/workers/wrangler/install-and-update/"> The Workers command-line interface, Wrangler, allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects. </Feature> <Feature header="Bindings" href="/workers/runtime-apis/bindings/"> Bindings allow your Workers to interact with resources on the Cloudflare developer platform, including [R2](/r2/), [KV](/kv/concepts/how-kv-works/), [Durable Objects](/durable-objects/), and [D1](/d1/). </Feature> <Feature header="Playground" href="/workers/playground/" cta="Use the Playground"> The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser against any site. No setup required. </Feature> --- ## Related products <RelatedProduct header="Workers AI" href="/workers-ai/" product="workers-ai"> Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network. </RelatedProduct> <RelatedProduct header="R2" href="/r2/" product="r2"> Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. </RelatedProduct> <RelatedProduct header="D1" href="/d1/" product="d1"> Create new serverless SQL databases to query from your Workers and Pages projects. </RelatedProduct> <RelatedProduct header="Durable Objects" href="/durable-objects/" product="durable-objects"> A globally distributed coordination API with strongly consistent storage. </RelatedProduct> <RelatedProduct header="KV" href="/kv/" product="kv"> Create a global, low-latency, key-value data storage. 
</RelatedProduct> <RelatedProduct header="Queues" href="/queues/" product="queues"> Send and receive messages with guaranteed delivery and no charges for egress bandwidth. </RelatedProduct> <RelatedProduct header="Hyperdrive" href="/hyperdrive/" product="hyperdrive"> Turn your existing regional database into a globally distributed database. </RelatedProduct> <RelatedProduct header="Vectorize" href="/vectorize/" product="vectorize"> Build full-stack AI applications with Vectorize, Cloudflare’s vector database. </RelatedProduct> <RelatedProduct header="Zaraz" href="/zaraz/" product="zaraz"> Offload third-party tools and services to the cloud and improve the speed and security of your website. </RelatedProduct> --- ## More resources <CardGrid> <LinkTitleCard title="Learning Path" href="/learning-paths/workers/concepts/" icon="pen" > New to Workers? Get started with the Workers Learning Path. </LinkTitleCard> <LinkTitleCard title="Plans" href="/workers/platform/pricing/" icon="seti:shell" > Learn about Free and Paid plans. </LinkTitleCard> <LinkTitleCard title="Limits" href="/workers/platform/limits/" icon="document"> Learn about plan limits (Free plans get 100,000 requests per day). </LinkTitleCard> <LinkTitleCard title="Storage options" href="/workers/platform/storage-options/" icon="open-book" > Learn which storage option is best for your project. </LinkTitleCard> <LinkTitleCard title="Developer Discord" href="https://discord.cloudflare.com" icon="discord" > Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers. </LinkTitleCard> <LinkTitleCard title="@CloudflareDev" href="https://x.com/cloudflaredev" icon="x.com" > Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. 
</LinkTitleCard> </CardGrid> --- # Local development URL: https://developers.cloudflare.com/workers/local-development/ Cloudflare Workers and most connected resources can be fully developed and tested locally - providing confidence that the applications you build locally will work the same way in production. This allows you to be more efficient and effective by providing a faster feedback loop and removing the need to [test against remote resources](#develop-using-remote-resources-and-bindings). Local development runs against the same production runtime used by Cloudflare Workers, [workerd](https://github.com/cloudflare/workerd). In addition to testing Workers locally with [`wrangler dev`](/workers/wrangler/commands/#dev), the use of Miniflare allows you to test other Developer Platform products locally, such as [R2](/r2/), [KV](/kv/), [D1](/d1/), and [Durable Objects](/durable-objects/). ## Start a local development server :::note This guide assumes you are using [Wrangler v3.0](https://blog.cloudflare.com/wrangler3/) or later. Users new to Wrangler CLI and Cloudflare Workers should visit the [Wrangler Install/Update guide](/workers/wrangler/install-and-update) to install `wrangler`. ::: Wrangler provides a [`dev`](/workers/wrangler/commands/#dev) command that starts a local server for developing your Worker. Make sure you have `npm` installed and run the following in the folder containing your Worker application: ```sh npx wrangler dev ``` `wrangler dev` will run the Worker directly on your local machine. `wrangler dev` uses a combination of `workerd` and [Miniflare](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare), a simulator that allows you to test your Worker against additional resources like KV, Durable Objects, WebSockets, and more. 
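As an illustration, a Worker like the following (with a hypothetical KV binding named `MY_KV`) runs unchanged under `wrangler dev`: Miniflare backs `env.MY_KV` with local storage during development, while the real KV namespace is used in production.

```javascript
// Sketch: the same code path runs locally and in production;
// only the backing store for env.MY_KV differs under `wrangler dev`.
const worker = {
  async fetch(request, env) {
    // Read a value from the (locally simulated) KV namespace.
    const greeting = await env.MY_KV.get("greeting");
    return new Response(greeting ?? "greeting not set", {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default worker;
```

You could seed the local namespace first with a command such as `npx wrangler kv key put greeting "hello" --binding MY_KV --local`, as described under [Use `--local` flag](#use---local-flag).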
### Supported resource bindings in different environments

| Product                             | Local Dev Supported | Remote Dev Supported |
| ----------------------------------- | ------------------- | -------------------- |
| AI                                  | ✅[^1]              | ✅                   |
| Assets                              | ✅                  | ✅                   |
| Analytics Engine                    | ✅                  | ✅                   |
| Browser Rendering                   | ❌                  | ✅                   |
| D1                                  | ✅                  | ✅                   |
| Durable Objects                     | ✅                  | ✅                   |
| Email Bindings                      | ❌                  | ✅                   |
| Hyperdrive                          | ✅                  | ✅                   |
| Images                              | ✅                  | ✅                   |
| KV                                  | ✅                  | ✅                   |
| mTLS                                | ❌                  | ✅                   |
| Queues                              | ✅                  | ❌                   |
| R2                                  | ✅                  | ✅                   |
| Rate Limiting                       | ✅                  | ✅                   |
| Service Bindings (multiple workers) | ✅                  | ✅                   |
| Vectorize                           | ✅[^2]              | ✅                   |
| Workflows                           | ✅                  | ❌                   |

For any bindings that are not supported locally, you will need to use the [`--remote` flag](#develop-using-remote-resources-and-bindings) in Wrangler, such as `wrangler dev --remote`.

[^1]: Using Workers AI always accesses your Cloudflare account in order to run AI models and will incur usage charges even in local development.

[^2]: Using Vectorize always accesses your Cloudflare account to run queries, and will incur usage charges even in local development.

## Work with local data

When running `wrangler dev`, resources such as KV, Durable Objects, D1, and R2 will be stored and persisted locally and not affect the production resources.

### Use bindings in Wrangler configuration files

[Wrangler](/workers/wrangler/) will automatically create local versions of bindings found in the [Wrangler configuration file](/workers/wrangler/configuration/).

These local resources will not have data in them initially, so you will need to add data manually via Wrangler commands and the [`--local` flag](#use---local-flag).

When you run `wrangler dev`, Wrangler stores local resources in a `.wrangler/state` folder, which is automatically created.
If you prefer to specify a directory, you can use the [`--persist-to`](/workers/wrangler/commands/#dev) flag with `wrangler dev` like this: ```sh npx wrangler dev --persist-to <DIRECTORY> ``` Using this will write all local storage and cache to the specified directory instead of `.wrangler`. :::note This local persistence folder should be added to your `.gitignore` file. ::: ### Use `--local` flag The following [Wrangler commands](/workers/wrangler/commands/) have a `--local` flag which allows you to create, update, and delete local data during development: | Command | | ---------------------------------------------------- | | [`d1 execute`](/workers/wrangler/commands/#execute) | | [`kv key`](/workers/wrangler/commands/#kv-key) | | [`kv bulk`](/workers/wrangler/commands/#kv-bulk) | | [`r2 object`](/workers/wrangler/commands/#r2-object) | If using `--persist-to` to specify a custom folder with `wrangler dev` you should also add `--persist-to` with the same directory name along with the `--local` flag when running the commands above. For example, to put a custom KV key into a local namespace via the CLI you would run: ```sh npx wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local ``` Running `wrangler kv key put` will create a new key `test` with a value of `12345` on the local namespace specified via the binding `MY_KV_NAMESPACE` in the [Wrangler configuration file](/workers/wrangler/configuration/). This example command sets the local persistence directory to `worker-local` using `--persist-to`, to ensure that the data is created in the correct location. If `--persist-to` was not set, it would create the data in the `.wrangler` folder. ### Clear Wrangler's local storage If you need to clear local storage entirely, delete the `.wrangler/state` folder. You can also be more fine-grained and delete specific resource folders within `.wrangler/state`. 
Any deleted folders will be created automatically the next time you run `wrangler dev`. ## Local-only environment variables When running `wrangler dev`, variables in the [Wrangler configuration file](/workers/wrangler/configuration/) are automatically overridden by values defined in a `.dev.vars` file located in the root directory of your worker. This is useful for providing values you do not want to check in to source control. ```shell API_HOST = "localhost:4000" API_ACCOUNT_ID = "local_example_user" ``` ## Develop using remote resources and bindings There may be times you want to develop against remote resources and bindings. To run `wrangler dev` in remote mode, add the `--remote` flag, which will run both your code and resources remotely: ```sh npx wrangler dev --remote ``` For some products like KV and R2, remote resources used for `wrangler dev --remote` must be specified with preview ID/names in the [Wrangler configuration file](/workers/wrangler/configuration/) such as `preview_id` for KV or `preview_bucket name` for R2. Resources used for remote mode (preview) can be different from resources used for production to prevent changing production data during development. To use production data in `wrangler dev --remote`, set the preview ID/name of the resource to the ID/name of your production resource. ## Customize `wrangler dev` You can customize how `wrangler dev` works to fit your needs. Refer to [the `wrangler dev` documentation](/workers/wrangler/commands/#dev) for available configuration options. :::caution There is a bug associated with how outgoing requests are handled when using `wrangler dev --remote`. For more information, read the [Known issues section](/workers/platform/known-issues/#wrangler-dev). ::: ## Related resources - [D1 local development](/d1/best-practices/local-development/) - The official D1 guide to local development and testing. - [DevTools](/workers/observability/dev-tools) - Guides to using DevTools to debug your Worker locally. 
--- # Playground URL: https://developers.cloudflare.com/workers/playground/ import { LinkButton } from "~/components"; :::note[Browser support] The Cloudflare Workers Playground is currently only supported in Firefox and Chrome desktop browsers. In Safari, it will show a `PreviewRequestFailed` error message. ::: The quickest way to experiment with Cloudflare Workers is in the [Playground](https://workers.cloudflare.com/playground). It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser. The Playground uses the same editor as the authenticated experience. The Playground provides the ability to [share](#share) the code you write as well as [deploy](#deploy) it instantly to Cloudflare's global network. This way, you can try new things out and deploy them when you are ready. <LinkButton href="https://workers.cloudflare.com/playground" icon="external"> Launch the Playground </LinkButton> ## Hello Cloudflare Workers When you arrive in the Playground, you will see this default code: ```js import welcome from "welcome.html"; /** * @typedef {Object} Env */ export default { /** * @param {Request} request * @param {Env} env * @param {ExecutionContext} ctx * @returns {Response} */ fetch(request, env, ctx) { console.log("Hello Cloudflare Workers!"); return new Response(welcome, { headers: { "content-type": "text/html", }, }); }, }; ``` This is an example of a multi-module Worker that is receiving a [request](/workers/runtime-apis/request/), logging a message to the console, and then returning a [response](/workers/runtime-apis/response/) body containing the content from `welcome.html`. Refer to the [Fetch handler documentation](/workers/runtime-apis/handlers/fetch/) to learn more. ## Use the Playground As you edit the default code, the Worker will auto-update such that the preview on the right shows your Worker running just as it would in a browser. 
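For example, pasting a small variation like the following over the default code updates the preview immediately (an illustrative snippet, not part of the Playground template):

```javascript
// A small edit to try in the Playground: return JSON instead of the welcome HTML.
const worker = {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    // Response.json() sets the application/json content type for you.
    return Response.json({ message: "Hello Cloudflare Workers!", path: pathname });
  },
};

export default worker;
```

The preview pane then shows the JSON body for whichever path you navigate to.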
If your Worker uses URL paths, you can enter those in the input field on the right to navigate to them. The Playground provides type-checking via JSDoc comments and [`workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types). The Playground also provides pretty error pages in the event of application errors.

To test a raw HTTP request (for example, to test a `POST` request), go to the **HTTP** tab and select **Send**. You can add and edit headers via this panel, as well as edit the body of a request.

## DevTools

For debugging Workers inside the Playground, use the developer tools at the bottom of the Playground's preview panel to view `console.log` output, network requests, and memory and CPU usage. The developer tools for the Workers Playground work similarly to the developer tools in Chrome or Firefox, and are the same developer tools users have access to in the [Wrangler CLI](/workers/wrangler/install-and-update/) and the authenticated dashboard.

### Network tab

**Network** shows the outgoing requests from your Worker — that is, any calls to `fetch` inside your Worker code.

### Console Logs

The console displays the output of any calls to `console.log` made during the current preview run, as well as any other preview runs in that session.

### Sources

**Sources** displays the sources that make up your Worker. Note that KV, text, and secret bindings are only accessible when authenticated with an account. This means you must be logged in to the dashboard, or use [`wrangler dev`](/workers/wrangler/commands/#dev) with your account credentials.

## Share

To share what you have created, select **Copy Link** in the top right of the screen. This will copy a unique URL to your clipboard that you can share with anyone. These links do not expire, so you can bookmark your creation and share it at any time. Users that open a shared link will see the Playground with the shared code and preview.

## Deploy

You can deploy a Worker from the Playground.
If you are already logged in, you can review the Worker before deploying. Otherwise, you will be taken through the first-time user onboarding flow before you can review and deploy.

Once deployed, your Worker will get its own unique URL and be available almost instantly on Cloudflare's global network. From here, you can add [Custom Domains](/workers/configuration/routing/custom-domains/), [storage resources](/workers/platform/storage-options/), and more.

---

# AI SDK

URL: https://developers.cloudflare.com/workers-ai/configuration/ai-sdk/

Workers AI can be used with the [AI SDK](https://sdk.vercel.ai/) for JavaScript and TypeScript codebases.

## Setup

Install the [`workers-ai-provider` provider](https://sdk.vercel.ai/providers/community-providers/cloudflare-workers-ai):

```bash
npm install workers-ai-provider
```

Then, add an AI binding in your Workers project Wrangler file:

```toml
[ai]
binding = "AI"
```

## Models

The AI SDK can be configured to work with [any AI model](/workers-ai/models/).

```js
import { createWorkersAI } from 'workers-ai-provider';

const workersai = createWorkersAI({ binding: env.AI });

// Choose any model: https://developers.cloudflare.com/workers-ai/models/
const model = workersai('@cf/meta/llama-3.1-8b-instruct', {});
```

## Generate Text

Once you have selected your model, you can generate text from a given prompt.

```ts
import { createWorkersAI } from 'workers-ai-provider';
import { generateText } from 'ai';

type Env = {
  AI: Ai;
};

export default {
  async fetch(_: Request, env: Env) {
    const workersai = createWorkersAI({ binding: env.AI });

    const result = await generateText({
      model: workersai('@cf/meta/llama-2-7b-chat-int8'),
      prompt: 'Write a 50-word essay about hello world.',
    });

    return new Response(result.text);
  },
};
```

## Stream Text

For longer responses, consider streaming the response as the generation completes.
```ts
import { createWorkersAI } from 'workers-ai-provider';
import { streamText } from 'ai';

type Env = {
  AI: Ai;
};

export default {
  async fetch(_: Request, env: Env) {
    const workersai = createWorkersAI({ binding: env.AI });

    const result = streamText({
      model: workersai('@cf/meta/llama-2-7b-chat-int8'),
      prompt: 'Write a 50-word essay about hello world.',
    });

    return result.toTextStreamResponse({
      headers: {
        // add these headers to ensure that the
        // response is chunked and streamed
        'Content-Type': 'text/x-unknown',
        'content-encoding': 'identity',
        'transfer-encoding': 'chunked',
      },
    });
  },
};
```

## Generate Structured Objects

You can provide a Zod schema to generate a structured JSON response.

```ts
import { createWorkersAI } from 'workers-ai-provider';
import { generateObject } from 'ai';
import { z } from 'zod';

type Env = {
  AI: Ai;
};

export default {
  async fetch(_: Request, env: Env) {
    const workersai = createWorkersAI({ binding: env.AI });

    const result = await generateObject({
      model: workersai('@cf/meta/llama-3.1-8b-instruct'),
      prompt: 'Generate a Lasagna recipe',
      schema: z.object({
        recipe: z.object({
          ingredients: z.array(z.string()),
          description: z.string(),
        }),
      }),
    });

    return Response.json(result.object);
  },
};
```

---

# Workers Bindings

URL: https://developers.cloudflare.com/workers-ai/configuration/bindings/

import { Type, MetaInfo, WranglerConfig } from "~/components";

## Workers

[Workers](/workers/) provides a serverless execution environment that allows you to create new applications or augment existing ones.

To use Workers AI with Workers, you must create a Workers AI [binding](/workers/runtime-apis/bindings/). Bindings allow your Workers to interact with resources, like Workers AI, on the Cloudflare Developer Platform. You create bindings on the Cloudflare dashboard or by updating your [Wrangler file](/workers/wrangler/configuration/).
To bind Workers AI to your Worker, add the following to the end of your Wrangler file: <WranglerConfig> ```toml [ai] binding = "AI" # i.e. available in your Worker on env.AI ``` </WranglerConfig> ## Pages Functions [Pages Functions](/pages/functions/) allow you to build full-stack applications with Cloudflare Pages by executing code on the Cloudflare network. Functions are Workers under the hood. To configure a Workers AI binding in your Pages Function, you must use the Cloudflare dashboard. Refer to [Workers AI bindings](/pages/functions/bindings/#workers-ai) for instructions. ## Methods ### async env.AI.run() `async env.AI.run()` runs a model. Takes a model as the first parameter, and an object as the second parameter. ```javascript const answer = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', { prompt: "What is the origin of the phrase 'Hello, World'" }); ``` **Parameters** * `model` <Type text="string" /> <MetaInfo text="required" /> * The model to run. **Supported options** * `stream` <Type text="boolean" /> <MetaInfo text="optional" /> * Returns a stream of results as they are available. ```javascript const answer = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', { prompt: "What is the origin of the phrase 'Hello, World'", stream: true }); return new Response(answer, { headers: { "content-type": "text/event-stream" } }); ``` --- # Hugging Face Chat UI URL: https://developers.cloudflare.com/workers-ai/configuration/hugging-face-chat-ui/ Use Workers AI with [Chat UI](https://github.com/huggingface/chat-ui?tab=readme-ov-file#text-embedding-models), an open-source chat interface offered by Hugging Face. 
## Prerequisites

You will need the following:

* A [Cloudflare account](https://dash.cloudflare.com)
* Your [Account ID](/fundamentals/setup/find-account-and-zone-ids/)
* An [API token](/workers-ai/get-started/rest-api/#1-get-api-token-and-account-id) for Workers AI

## Setup

First, decide how to reference your Account ID and API token (either directly in your `.env.local` using the `CLOUDFLARE_ACCOUNT_ID` and `CLOUDFLARE_API_TOKEN` variables or in the endpoint configuration).

Then, follow the rest of the setup instructions in the [Chat UI GitHub repository](https://github.com/huggingface/chat-ui?tab=readme-ov-file#text-embedding-models).

When setting up your models, specify the `cloudflare` endpoint.

```json
{
  "name": "nousresearch/hermes-2-pro-mistral-7b",
  "tokenizer": "nousresearch/hermes-2-pro-mistral-7b",
  "parameters": {
    "stop": ["<|im_end|>"]
  },
  "endpoints": [
    {
      "type": "cloudflare"
      // optionally specify these if not included in .env.local
      // "accountId": "your-account-id",
      // "apiToken": "your-api-token"
    }
  ]
}
```

## Supported models

This template works with any [text generation models](/workers-ai/models/) that begin with the `@hf` prefix.

---

# Configuration

URL: https://developers.cloudflare.com/workers-ai/configuration/

import { DirectoryListing } from "~/components";

<DirectoryListing />

---

# OpenAI compatible API endpoints

URL: https://developers.cloudflare.com/workers-ai/configuration/open-ai-compatibility/

import { Render } from "~/components"

<Render file="openai-compatibility" /> <br/>

## Usage

### Workers AI

Normally, Workers AI requires you to specify the model name in the cURL endpoint or within the `env.AI.run` function.

With OpenAI compatible endpoints, you can leverage the [openai-node SDK](https://github.com/openai/openai-node) to make calls to Workers AI. This allows you to use Workers AI by simply changing the base URL and the model name.
```js title="OpenAI SDK Example"
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: env.CLOUDFLARE_API_KEY,
  baseURL: `https://api.cloudflare.com/client/v4/accounts/${env.CLOUDFLARE_ACCOUNT_ID}/ai/v1`
});

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Make some robot noises" }],
  model: "@cf/meta/llama-3.1-8b-instruct",
});

const embeddings = await openai.embeddings.create({
  model: "@cf/baai/bge-large-en-v1.5",
  input: "I love matcha"
});
```

```bash title="cURL example"
curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/v1/chat/completions \
  --header "Authorization: Bearer {api_token}" \
  --header "Content-Type: application/json" \
  --data '
    {
      "model": "@cf/meta/llama-3.1-8b-instruct",
      "messages": [
        {
          "role": "user",
          "content": "how to build a wooden spoon in 3 short steps? give as short an answer as possible"
        }
      ]
    }
'
```

### AI Gateway

These endpoints are also compatible with [AI Gateway](/ai-gateway/providers/workersai/#openai-compatible-endpoints).

---

# Function calling

URL: https://developers.cloudflare.com/workers-ai/function-calling/

import { Stream, TabItem, Tabs } from "~/components";

Function calling lets you use the response from a Large Language Model (LLM) to execute functions or interact with external APIs. You typically define a set of functions and the required input schema for each function, known as `tools`. The model then determines when it needs to make a tool call, and returns a JSON output that you feed to another function or API.

In essence, function calling allows you to perform actions with LLMs by executing code or making additional API calls.

<Stream id="603e94c9803b4779dd612493c0dd7125" title="placeholder" />

## How can I use function calling?
Workers AI has [embedded function calling](/workers-ai/function-calling/embedded/), which allows you to execute function code alongside your inference calls. To facilitate this, we provide the [`@cloudflare/ai-utils`](https://www.npmjs.com/package/@cloudflare/ai-utils) package, which is open-sourced on [GitHub](https://github.com/cloudflare/ai-utils).

For industry-standard function calling, refer to the documentation on [Traditional Function Calling](/workers-ai/function-calling/traditional/).

To see the value of embedded function calling, review the example below, which compares traditional function calling with embedded function calling. Embedded function calling cut the example down from 77 lines of code to 31.

<Tabs> <TabItem label="Embedded">

```sh
# The ai-utils package enables embedded function calling
npm i @cloudflare/ai-utils
```

```js title="Embedded function calling example"
import {
	createToolsFromOpenAPISpec,
	runWithTools,
	autoTrimTools,
} from "@cloudflare/ai-utils";

export default {
	async fetch(request, env, ctx) {
		const response = await runWithTools(
			env.AI,
			"@hf/nousresearch/hermes-2-pro-mistral-7b",
			{
				messages: [{ role: "user", content: "Who is Cloudflare on github?" }],
				tools: [
					// You can pass the OpenAPI spec link or contents directly
					...(await createToolsFromOpenAPISpec(
						"https://gist.githubusercontent.com/mchenco/fd8f20c8f06d50af40b94b0671273dc1/raw/f9d4b5cd5944cc32d6b34cad0406d96fd3acaca6/partial_api.github.com.json",
						{
							overrides: [
								{
									// for all requests on *.github.com, we'll need to add a User-Agent.
matcher: ({ url, method }) => { return url.hostname === "api.github.com"; }, values: { headers: { "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36", }, }, }, ], }, )), ], }, ).then((response) => { return response; }); return new Response(JSON.stringify(response)); }, }; ``` </TabItem> <TabItem label="Traditional"> ```js title="Traditional function calling example" export default { async fetch(request, env, ctx) { const response = await env.AI.run( "@hf/nousresearch/hermes-2-pro-mistral-7b", { messages: [{ role: "user", content: "Who is Cloudflare on github?" }], tools: [ { name: "getGithubUser", description: "Provides publicly available information about someone with a GitHub account.", parameters: { type: "object", properties: { username: { type: "string", description: "The handle for the GitHub user account.", }, }, required: ["username"], }, }, ], }, ); const selected_tool = response.tool_calls[0]; let res; if (selected_tool.name == "getGithubUser") { try { const username = selected_tool.arguments.username; const url = `https://api.github.com/users/${username}`; res = await fetch(url, { headers: { // Github API requires a User-Agent header "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36", }, }).then((res) => res.json()); } catch (error) { return error; } } const finalResponse = await env.AI.run( "@hf/nousresearch/hermes-2-pro-mistral-7b", { messages: [ { role: "user", content: "Who is Cloudflare on github?", }, { role: "assistant", content: "", tool_call: selected_tool.name, }, { role: "tool", name: selected_tool.name, content: JSON.stringify(res), }, ], tools: [ { name: "getGithubUser", description: "Provides publicly available information about someone with a GitHub account.", parameters: { type: "object", properties: { username: { type: "string", description: "The handle for the GitHub user 
account.",
								},
							},
							required: ["username"],
						},
					},
				],
			},
		);
		return new Response(JSON.stringify(finalResponse));
	},
};
```

</TabItem> </Tabs>

## What models support function calling?

There are open-source models that have been fine-tuned to do function calling. When browsing our [model catalog](/workers-ai/models/), look for models with the function calling property listed beside them. For example, [@hf/nousresearch/hermes-2-pro-mistral-7b](/workers-ai/models/hermes-2-pro-mistral-7b/) is a fine-tuned variant of Mistral 7B that you can use for function calling.

---

# Traditional

URL: https://developers.cloudflare.com/workers-ai/function-calling/traditional/

This page shows how you can do traditional function calling, as defined by industry standards. Workers AI also offers [embedded function calling](/workers-ai/function-calling/embedded/), which is drastically easier than traditional function calling.

With traditional function calling, you define an array of tools with the name, description, and tool arguments. The example below shows how you would pass a tool called `getWeather` in an inference request to a model.

```js title="Traditional function calling example"
const response = await env.AI.run("@hf/nousresearch/hermes-2-pro-mistral-7b", {
	messages: [
		{
			role: "user",
			content: "what is the weather in london?",
		},
	],
	tools: [
		{
			name: "getWeather",
			description: "Return the weather for a latitude and longitude",
			parameters: {
				type: "object",
				properties: {
					latitude: {
						type: "string",
						description: "The latitude for the given location",
					},
					longitude: {
						type: "string",
						description: "The longitude for the given location",
					},
				},
				required: ["latitude", "longitude"],
			},
		},
	],
});
return new Response(JSON.stringify(response.tool_calls));
```

The LLM will then return a JSON object with the required arguments and the name of the tool that was called. You can then pass this JSON object to make an API call.
```json
[{"arguments":{"latitude":"51.5074","longitude":"-0.1278"},"name":"getWeather"}]
```

For a working example on how to do function calling, take a look at our [demo app](https://github.com/craigsdennis/lightbulb-moment-tool-calling/blob/main/src/index.ts).

---

# Fine-tunes

URL: https://developers.cloudflare.com/workers-ai/fine-tunes/

import { Feature } from "~/components"

Learn how to use Workers AI to get fine-tuned inference.

<Feature header="Fine-tuned inference with LoRAs" href="/workers-ai/fine-tunes/loras/" cta="Run inference with LoRAs">

Upload a LoRA adapter and run fine-tuned inference with one of our base models.

</Feature>

***

## What is fine-tuning?

Fine-tuning is a general term for modifying an AI model by continuing to train it with additional data. The goal of fine-tuning is to increase the probability that a generation is similar to your dataset. Training a model from scratch is not practical for many use cases, given how expensive and time-consuming training can be. By fine-tuning an existing pre-trained model, you benefit from its capabilities while also accomplishing your desired task.

[Low-Rank Adaptation](https://arxiv.org/abs/2106.09685) (LoRA) is a specific fine-tuning method that can be applied to various model architectures, not just LLMs. In traditional fine-tuning methods, the pre-trained model weights are directly modified or fused with additional fine-tuned weights. LoRA, on the other hand, allows the fine-tuned weights and the pre-trained model to remain separate, and the pre-trained model to remain unchanged. The end result is that you can train models to be more accurate at specific tasks, such as generating code, having a specific personality, or generating images in a specific style.
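To make the low-rank idea concrete, here is a tiny numeric sketch. The matrices, rank, and numbers below are invented for illustration and bear no relation to real model weights: a frozen weight matrix `W` is combined at inference time with a small update `B·A`, where `B` and `A` together hold far fewer trainable values than `W`.

```javascript
// Illustrative sketch of the LoRA idea: W' = W + B x A, where B (d x r)
// and A (r x d) are low-rank factors. All shapes here are hypothetical.
function matmul(B, A) {
	// B: m x r, A: r x n -> m x n
	return B.map((row) =>
		A[0].map((_, j) => row.reduce((sum, b, k) => sum + b * A[k][j], 0)),
	);
}

function applyLora(W, B, A) {
	const delta = matmul(B, A);
	return W.map((row, i) => row.map((w, j) => w + delta[i][j]));
}

// A 4x4 base weight matrix stays frozen (16 values)...
const W = [
	[1, 0, 0, 0],
	[0, 1, 0, 0],
	[0, 0, 1, 0],
	[0, 0, 0, 1],
];
// ...while the adapter trains only rank-1 factors: 4x1 plus 1x4 (8 values).
const B = [[1], [0], [0], [0]];
const A = [[0, 0, 0, 0.5]];

const Wprime = applyLora(W, B, A);
console.log(Wprime[0]); // [ 1, 0, 0, 0.5 ]
```

At realistic sizes the savings are dramatic: for a `d × d` matrix, a rank-`r` adapter trains `2·d·r` values instead of `d²`, which is why adapter files stay small enough to swap per request.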
---

# Using LoRA adapters

URL: https://developers.cloudflare.com/workers-ai/fine-tunes/loras/

import { TabItem, Tabs } from "~/components"

Workers AI supports fine-tuned inference with adapters trained with [Low-Rank Adaptation](https://blog.cloudflare.com/fine-tuned-inference-with-loras). This feature is in open beta and free during this period.

## Limitations

* We only support LoRAs for the following models (must not be quantized):
  * `@cf/meta-llama/llama-2-7b-chat-hf-lora`
  * `@cf/mistral/mistral-7b-instruct-v0.2-lora`
  * `@cf/google/gemma-2b-it-lora`
  * `@cf/google/gemma-7b-it-lora`
* The adapter must be trained with rank `r <= 8`. You can check the rank of a pre-trained LoRA adapter in the adapter's `config.json` file.
* The LoRA adapter file must be smaller than 100MB.
* The LoRA adapter files must be named `adapter_config.json` and `adapter_model.safetensors` exactly.
* You can test up to 30 LoRA adapters per account.

***

## Choosing compatible LoRA adapters

### Finding open-source LoRA adapters

We have started a [Hugging Face Collection](https://huggingface.co/collections/Cloudflare/workers-ai-compatible-loras-6608dd9f8d305a46e355746e) that lists a few LoRA adapters that are compatible with Workers AI. Generally, any LoRA adapter that fits the limitations above should work.

### Training your own LoRA adapters

To train your own LoRA adapter, follow the [tutorial](/workers-ai/tutorials/fine-tune-models-with-autotrain).

***

## Uploading LoRA adapters

To run inference with LoRAs on Workers AI, you'll need to create a new fine tune on your account and upload your adapter files. You should have an `adapter_model.safetensors` file with the model weights and an `adapter_config.json` file with your configuration information. *Note that we only accept adapter files of these two types.*

Right now, you can't edit a fine tune's asset files after you upload them. We will support this soon, but for now you will need to create a new fine tune and upload your files again if you would like to use a new LoRA.
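Before creating a fine tune, it can help to sanity-check an adapter against the limitations above. The helper below is a hypothetical sketch, not part of Wrangler or any Cloudflare tooling; it assumes the rank is stored under the standard `r` key of a parsed `adapter_config.json`.

```javascript
// Hypothetical pre-upload check against the Workers AI LoRA limitations.
// `config` is the parsed adapter_config.json; `sizeBytes` is the size of
// adapter_model.safetensors on disk.
function checkAdapter(config, sizeBytes) {
	const problems = [];
	if (config.r === undefined || config.r > 8) {
		problems.push("rank must be <= 8 (check `r` in adapter_config.json)");
	}
	if (!["mistral", "gemma", "llama"].includes(config.model_type)) {
		problems.push("model_type must be one of mistral, gemma, or llama");
	}
	if (sizeBytes >= 100 * 1024 * 1024) {
		problems.push("adapter file must be smaller than 100MB");
	}
	return problems;
}

// A rank-8 Mistral adapter of ~40MB passes the checks...
console.log(checkAdapter({ r: 8, model_type: "mistral" }, 40 * 1024 * 1024));
// ...while a rank-16 adapter fails with one problem reported.
console.log(checkAdapter({ r: 16, model_type: "mistral" }, 40 * 1024 * 1024));
```

An empty array means the adapter fits the published limits; anything else lists what to fix before uploading.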
Before you upload your LoRA adapter, you'll need to edit your `adapter_config.json` file to include `model_type` as one of `mistral`, `gemma` or `llama`, as shown below.

```json null {10}
{
  "alpha_pattern": {},
  "auto_mapping": null,
  ...
  "target_modules": [
    "q_proj",
    "v_proj"
  ],
  "task_type": "CAUSAL_LM",
  "model_type": "mistral"
}
```

### Wrangler

You can create a fine tune and upload your LoRA adapter via Wrangler with the following commands:

```bash title="wrangler CLI" {1,7}
npx wrangler ai finetune create <model_name> <finetune_name> <folder_path>
#🌀 Creating new finetune "test-lora" for model "@cf/mistral/mistral-7b-instruct-v0.2-lora"...
#🌀 Uploading file "/Users/abcd/Downloads/adapter_config.json" to "test-lora"...
#🌀 Uploading file "/Users/abcd/Downloads/adapter_model.safetensors" to "test-lora"...
#✅ Assets uploaded, finetune "test-lora" is ready to use.

npx wrangler ai finetune list
┌──────────────────────────────────────┬───────────┬─────────────┐
│ finetune_id                          │ name      │ description │
├──────────────────────────────────────┼───────────┼─────────────┤
│ 00000000-0000-0000-0000-000000000000 │ test-lora │             │
└──────────────────────────────────────┴───────────┴─────────────┘
```

### REST API

Alternatively, you can use our REST API to create a fine tune and upload your adapter files. You will need a Cloudflare API token with `Workers AI: Edit` permissions to make calls to our REST API, which you can generate via the Cloudflare Dashboard.
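The REST calls below can also be issued from JavaScript. As a sketch, the hypothetical helper here only builds the create-fine-tune request, so the URL, headers, and body are easy to inspect before sending; the account ID and token values are placeholders.

```javascript
// Hypothetical helper that builds (but does not send) the create-fine-tune
// request. "my-account-id" and "my-token" are placeholders, not real values.
function buildCreateFinetuneRequest(accountId, apiToken, body) {
	return {
		url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/finetunes/`,
		method: "POST",
		headers: {
			Authorization: `Bearer ${apiToken}`,
			"Content-Type": "application/json",
		},
		body: JSON.stringify(body),
	};
}

const req = buildCreateFinetuneRequest("my-account-id", "my-token", {
	model: "@cf/mistral/mistral-7b-instruct-v0.2-lora",
	name: "test-lora",
	description: "my first fine-tune",
});
console.log(req.url);
```

To actually send it, pass the pieces to `fetch`: `await fetch(req.url, { method: req.method, headers: req.headers, body: req.body })`.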
#### Creating a fine-tune on your account

```bash title="cURL"
## Input: user-defined name of fine tune
## Output: unique finetune_id
curl -X POST https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/finetunes/ \
    -H "Authorization: Bearer {API_TOKEN}" \
    -H 'Content-Type: application/json' \
    -d '{
      "model": "SUPPORTED_MODEL_NAME",
      "name": "FINETUNE_NAME",
      "description": "OPTIONAL_DESCRIPTION"
    }'
```

#### Uploading your adapter weights and config

You have to call the upload endpoint each time you want to upload a new file, so you usually run this once for `adapter_model.safetensors` and once for `adapter_config.json`. Make sure you include the `@` before the path to your files. You can use either the finetune `name` or `id` that you used when you created the fine tune.

```bash title="cURL"
## Input: finetune_id, adapter_model.safetensors, then adapter_config.json
## Output: success true/false
curl -X POST https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/finetunes/{FINETUNE_ID}/finetune-assets/ \
    -H 'Authorization: Bearer {API_TOKEN}' \
    -H 'Content-Type: multipart/form-data' \
    -F 'file_name=adapter_model.safetensors' \
    -F 'file=@{PATH/TO/adapter_model.safetensors}'
```

#### List fine-tunes in your account

You can call this method to confirm which fine-tunes you have created in your account.

<Tabs> <TabItem label="curl">

```bash title="cURL"
## Input: n/a
## Output: the list of fine-tunes in your account
curl -X GET https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/finetunes/ \
    -H 'Authorization: Bearer {API_TOKEN}'
```

</TabItem> <TabItem label="json output">

```json title="Example JSON output"
{
  "success": true,
  "result": [
    {
      "id": "00000000-0000-0000-0000-000000000",
      "model": "@cf/meta-llama/llama-2-7b-chat-hf-lora",
      "name": "llama2-finetune",
      "description": "test"
    },
    {
      "id": "00000000-0000-0000-0000-000000000",
      "model": "@cf/mistral/mistral-7b-instruct-v0.2-lora",
      "name": "mistral-finetune",
      "description": "test"
    }
  ]
}
```
</TabItem> </Tabs>

***

## Running inference with LoRAs

To make inference requests and apply the LoRA adapter, you will need your model and your finetune `name` or `id`. You should use the chat template that your LoRA was trained on, but you can try running it with `raw: true` and the messages template as shown below.

<Tabs> <TabItem label="workers ai sdk">

```javascript null {5-6}
const response = await env.AI.run(
  "@cf/mistral/mistral-7b-instruct-v0.2-lora", //the model supporting LoRAs
  {
    messages: [{"role": "user", "content": "Hello world"}],
    raw: true, //skip applying the default chat template
    lora: "00000000-0000-0000-0000-000000000", //the finetune id OR name
  }
);
```

</TabItem> <TabItem label="rest api">

```bash null {5-6}
curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/mistral/mistral-7b-instruct-v0.2-lora \
  -H 'Authorization: Bearer {API_TOKEN}' \
  -d '{
    "messages": [{"role": "user", "content": "Hello world"}],
    "raw": true,
    "lora": "00000000-0000-0000-0000-000000000"
  }'
```

</TabItem> </Tabs>

---

# Public LoRA adapters

URL: https://developers.cloudflare.com/workers-ai/fine-tunes/public-loras/

Cloudflare offers a few public LoRA adapters that can immediately be used for fine-tuned inference. You can try them out via our [playground](https://playground.ai.cloudflare.com).

Public LoRAs will have the name `cf-public-x`, and the prefix will be reserved for Cloudflare.

:::note
Have more LoRAs you would like to see? Let us know on [Discord](https://discord.cloudflare.com).
::: | Name | Description | Compatible with | | -------------------------------------------------------------------------- | ---------------------------------- | ----------------------------------------------------------------------------------- | | [cf-public-magicoder](https://huggingface.co/predibase/magicoder) | Coding tasks in multiple languages | `@cf/mistral/mistral-7b-instruct-v0.1` <br/> `@hf/mistral/mistral-7b-instruct-v0.2` | | [cf-public-jigsaw-classification](https://huggingface.co/predibase/jigsaw) | Toxic comment classification | `@cf/mistral/mistral-7b-instruct-v0.1` <br/> `@hf/mistral/mistral-7b-instruct-v0.2` | | [cf-public-cnn-summarization](https://huggingface.co/predibase/cnn) | Article summarization | `@cf/mistral/mistral-7b-instruct-v0.1` <br/> `@hf/mistral/mistral-7b-instruct-v0.2` | You can also list these public LoRAs with an API call: ```bash curl https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/finetunes/public \ --header 'Authorization: Bearer {cf_token}' ``` ## Running inference with public LoRAs To run inference with public LoRAs, you just need to define the LoRA name in the request. We recommend that you use the prompt template that the LoRA was trained on. You can find this in the HuggingFace repos linked above for each adapter. ### cURL ```bash null {10} curl https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/@cf/mistral/mistral-7b-instruct-v0.1 \ --header 'Authorization: Bearer {cf_token}' \ --data '{ "messages": [ { "role": "user", "content": "Write a python program to check if a number is even or odd." } ], "lora": "cf-public-magicoder" }' ``` ### JavaScript ```js null {11} const answer = await env.AI.run('@cf/mistral/mistral-7b-instruct-v0.1', { stream: true, raw: true, messages: [ { "role": "user", "content": "Summarize the following: Some newspapers, TV channels and well-known companies publish false news stories to fool people on 1 April. 
One of the earliest examples of this was in 1957, when a programme on the BBC, the UK's national TV channel, broadcast a report on how spaghetti grew on trees. The film showed a family in Switzerland collecting spaghetti from trees and many people were fooled into believing it, as in the 1950s British people didn't eat much pasta and many didn't know how it was made! Most British people wouldn't fall for the spaghetti trick today, but in 2008 the BBC managed to fool their audience again with their Miracles of Evolution trailer, which appeared to show some special penguins that had regained the ability to fly. Two major UK newspapers, The Daily Telegraph and the Daily Mirror, published the important story on their front pages."
    }
  ],
  lora: "cf-public-cnn-summarization"
});
```

---

# Dashboard

URL: https://developers.cloudflare.com/workers-ai/get-started/dashboard/

import { Render } from "~/components"

Follow this guide to create a Workers AI application using the Cloudflare dashboard.

## Prerequisites

Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.

## Setup

To create a Workers AI application:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Compute (Workers)** > **Workers & Pages**.
3. Select **Create**.
4. Under **Start from a template**, select **LLM App**. After you select your template, an [AI binding](/workers-ai/configuration/bindings/) will be created for you in the dashboard.
5. Review the provided code and select **Deploy**.
6. Preview your Worker at its provided [`workers.dev`](/workers/configuration/routing/workers-dev/) subdomain.

## Development

<Render file="dash-creation-next-steps" product="workers" />

---

# Get started

URL: https://developers.cloudflare.com/workers-ai/get-started/

import { DirectoryListing } from "~/components"

There are several options to build your Workers AI projects on Cloudflare.
To get started, choose your preferred method: <DirectoryListing /> :::note These examples are geared towards creating new Workers AI projects. For help adding Workers AI to an existing Worker, refer to [Workers Bindings](/workers-ai/configuration/bindings/). ::: --- # REST API URL: https://developers.cloudflare.com/workers-ai/get-started/rest-api/ This guide will instruct you through setting up and deploying your first Workers AI project. You will use the Workers AI REST API to experiment with a large language model (LLM). ## Prerequisites Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already. ## 1. Get API token and Account ID You need your API token and Account ID to use the REST API. To get these values: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **AI** > **Workers AI**. 3. Select **Use REST API**. 4. Get your API token: 1. Select **Create a Workers AI API Token**. 2. Review the prefilled information. 3. Select **Create API Token**. 4. Select **Copy API Token**. 5. Save that value for future use. 5. For **Get Account ID**, copy the value for **Account ID**. Save that value for future use. :::note If you choose to [create an API token](/fundamentals/api/get-started/create-token/) instead of using the template, that token will need permissions for both `Workers AI - Read` and `Workers AI - Edit`. ::: ## 2. Run a model via API After creating your API token, authenticate and make requests to the API using your API token in the request. 
You will use the [Execute AI model](/api/resources/ai/methods/run/) endpoint to run the [`@cf/meta/llama-3.1-8b-instruct`](/workers-ai/models/llama-3.1-8b-instruct/) model:

```bash
curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/meta/llama-3.1-8b-instruct \
  -H 'Authorization: Bearer {API_TOKEN}' \
  -d '{ "prompt": "Where did the phrase Hello World come from" }'
```

Replace the values for `{ACCOUNT_ID}` and `{API_TOKEN}`.

The API response will look like the following:

```json
{
  "result": {
    "response": "Hello, World first appeared in 1974 at Bell Labs when Brian Kernighan included it in the C programming language example. It became widely used as a basic test program due to simplicity and clarity. It represents an inviting greeting from a program to the world."
  },
  "success": true,
  "errors": [],
  "messages": []
}
```

This example execution uses the `@cf/meta/llama-3.1-8b-instruct` model, but you can use any of the models in the [Workers AI models catalog](/workers-ai/models/). If you use another model, replace the model name at the end of the request URL with your desired model name.

By completing this guide, you have created a Cloudflare account (if you did not have one already) and an API token that grants Workers AI read permissions to your account. You executed the [`@cf/meta/llama-3.1-8b-instruct`](/workers-ai/models/llama-3.1-8b-instruct/) model using a cURL command from the terminal and received an answer to your prompt in a JSON response.

## Related resources

- [Models](/workers-ai/models/) - Browse the Workers AI models catalog.
- [AI SDK](/workers-ai/configuration/ai-sdk) - Learn how to integrate with an AI model.

---

# CLI

URL: https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/

import { Render, PackageManagers, WranglerConfig } from "~/components";

This guide will instruct you through setting up and deploying your first Workers AI project.
You will use [Workers](/workers/), a Workers AI binding, and a large language model (LLM) to deploy your first AI-powered application on the Cloudflare global network. <Render file="prereqs" product="workers" /> ## 1. Create a Worker project You will create a new Worker project using the `create-cloudflare` CLI (C3). [C3](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Create a new project named `hello-ai` by running: <PackageManagers type="create" pkg="cloudflare@latest" args={"hello-ai"} /> Running `npm create cloudflare@latest` will prompt you to install the [`create-cloudflare` package](https://www.npmjs.com/package/create-cloudflare), and lead you through setup. C3 will also install [Wrangler](/workers/wrangler/), the Cloudflare Developer Platform CLI. <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "TypeScript", }} /> This will create a new `hello-ai` directory. Your new `hello-ai` directory will include: - A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) at `src/index.ts`. - A [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file. Go to your application directory: ```sh cd hello-ai ``` ## 2. Connect your Worker to Workers AI You must create an AI binding for your Worker to connect to Workers AI. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to interact with resources, like Workers AI, on the Cloudflare Developer Platform. To bind Workers AI to your Worker, add the following to the end of your Wrangler file: <WranglerConfig> ```toml [ai] binding = "AI" ``` </WranglerConfig> Your binding is [available in your Worker code](/workers/reference/migrate-to-module-workers/#bindings-in-es-modules-format) on [`env.AI`](/workers/runtime-apis/handlers/fetch/). 
You can also bind Workers AI to a Pages Function. For more information, refer to [Functions Bindings](/pages/functions/bindings/#workers-ai).

## 3. Run an inference task in your Worker

You are now ready to run an inference task in your Worker. In this case, you will use an LLM, [`llama-3.1-8b-instruct`](/workers-ai/models/llama-3.1-8b-instruct/), to answer a question.

Update the `index.ts` file in your `hello-ai` application directory with the following code:

```typescript title="src/index.ts"
export interface Env {
	// If you set another name in the Wrangler config file as the value for 'binding',
	// replace "AI" with the variable name you defined.
	AI: Ai;
}

export default {
	async fetch(request, env): Promise<Response> {
		const response = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
			prompt: "What is the origin of the phrase Hello, World",
		});

		return new Response(JSON.stringify(response));
	},
} satisfies ExportedHandler<Env>;
```

Up to this point, you have created an AI binding for your Worker and configured your Worker to be able to execute the Llama 3.1 model. You can now test your project locally before you deploy globally.

## 4. Develop locally with Wrangler

While in your project directory, test Workers AI locally by running [`wrangler dev`](/workers/wrangler/commands/#dev):

```sh
npx wrangler dev
```

<Render file="ai-local-usage-charges" product="workers" />

You will be prompted to log in after you run `wrangler dev`. When you run `npx wrangler dev`, Wrangler will give you a URL (most likely `localhost:8787`) to review your Worker. After you go to the URL Wrangler provides, a message will render that resembles the following example:

```json
{
  "response": "Ah, a most excellent question, my dear human friend! *adjusts glasses*\n\nThe origin of the phrase \"Hello, World\" is a fascinating tale that spans several decades and multiple disciplines.
It all began in the early days of computer programming, when a young man named Brian Kernighan was tasked with writing a simple program to demonstrate the basics of a new programming language called C.\nKernighan, a renowned computer scientist and author, was working at Bell Labs in the late 1970s when he created the program. He wanted to showcase the language's simplicity and versatility, so he wrote a basic \"Hello, World!\" program that printed the familiar greeting to the console.\nThe program was included in Kernighan and Ritchie's influential book \"The C Programming Language,\" published in 1978. The book became a standard reference for C programmers, and the \"Hello, World!\" program became a sort of \"Hello, World!\" for the programming community.\nOver time, the phrase \"Hello, World!\" became a shorthand for any simple program that demonstrated the basics"
}
```

## 5. Deploy your AI Worker

Before deploying your AI Worker globally, log in with your Cloudflare account by running:

```sh
npx wrangler login
```

You will be directed to a web page asking you to log in to the Cloudflare dashboard. After you have logged in, you will be asked if Wrangler can make changes to your Cloudflare account. Scroll down and select **Allow** to continue.

Finally, deploy your Worker to make your project accessible on the Internet. To deploy your Worker, run:

```sh
npx wrangler deploy
```

```sh output
https://hello-ai.<YOUR_SUBDOMAIN>.workers.dev
```

Your Worker will be deployed to your custom [`workers.dev`](/workers/configuration/routing/workers-dev/) subdomain. You can now visit the URL to run your AI Worker.

By finishing this tutorial, you have created a Worker, connected it to Workers AI through an AI binding, and run an inference task using the Llama 3.1 model.
## Related resources - [Cloudflare Developers community on Discord](https://discord.cloudflare.com) - Submit feature requests, report bugs, and share your feedback directly with the Cloudflare team by joining the Cloudflare Discord server. - [Models](/workers-ai/models/) - Browse the Workers AI models catalog. - [AI SDK](/workers-ai/configuration/ai-sdk) - Learn how to integrate with an AI model. --- # Guides URL: https://developers.cloudflare.com/workers-ai/guides/ import { DirectoryListing } from "~/components"; <DirectoryListing /> --- # Prompting URL: https://developers.cloudflare.com/workers-ai/guides/prompting/ import { Code } from "~/components"; export const scopedExampleOne = `{ messages: [ { role: "system", content: "you are a very funny comedian and you like emojis" }, { role: "user", content: "tell me a joke about cloudflare" }, ], };`; export const scopedExampleTwo = `{ messages: [ { role: "system", content: "you are a professional computer science assistant" }, { role: "user", content: "what is WASM?" }, { role: "assistant", content: "WASM (WebAssembly) is a binary instruction format that is designed to be a platform-agnostic" }, { role: "user", content: "does Python compile to WASM?" }, { role: "assistant", content: "No, Python does not directly compile to WebAssembly" }, { role: "user", content: "what about Rust?" }, ], };`; export const unscopedExampleOne = `{ prompt: "tell me a joke about cloudflare"; }`; export const unscopedExampleTwo = `{ prompt: "<s>[INST]comedian[/INST]</s>\n[INST]tell me a joke about cloudflare[/INST]", raw: true };`; Part of getting good results from text generation models is asking questions correctly. LLMs are usually trained with specific predefined templates, which should then be used with the model's tokenizer for better results when doing inference tasks. There are two ways to prompt text generation models with Workers AI: :::note[Important] We recommend using unscoped prompts for inference with LoRA. 
:::

### Scoped Prompts

This is the <strong>recommended</strong> method. With scoped prompts, Workers AI takes the burden of knowing and using different chat templates for different models and provides a unified interface to developers when building prompts and creating text generation tasks.

Scoped prompts are a list of messages. Each message defines two keys: the role and the content.

Typically, the role can be one of three options:

- <strong>system</strong> - System messages define the AI's personality. You can use them to set rules and how you expect the AI to behave.
- <strong>user</strong> - User messages are where you actually query the AI by providing a question or a conversation.
- <strong>assistant</strong> - Assistant messages hint to the AI about the desired output format. Not all models support this role.

OpenAI has a [good explanation](https://platform.openai.com/docs/guides/text-generation#messages-and-roles) of how they use these roles with their GPT models. Even though chat templates are flexible, other text generation models tend to follow the same conventions.

Here's an input example of a scoped prompt using system and user roles:

<Code code={scopedExampleOne} lang="js" />

Here's a better example of a chat session using multiple iterations between the user and the assistant.

<Code code={scopedExampleTwo} lang="js" />

Note that different LLMs are trained with different templates for different use cases. While Workers AI tries its best to abstract the specifics of each LLM template from the developer through a unified API, you should always refer to the model documentation for details (links are provided in the [model catalog](/workers-ai/models/)). For example, instruct models like Code Llama are fine-tuned to respond to a user-provided instruction, while chat models expect fragments of dialogs as input.

### Unscoped Prompts

You can use unscoped prompts to send a single question to the model without worrying about providing any context.
Workers AI will automatically convert your `prompt` input to a reasonable default scoped prompt internally so that you get the best possible prediction. <Code code={unscopedExampleOne} lang="js" /> You can also use unscoped prompts to construct the model chat template manually. In this case, you can use the raw parameter. Here's an input example of a [Mistral](https://docs.mistral.ai/models/#chat-template) chat template prompt: <Code code={unscopedExampleTwo} lang="js" /> --- # JSON Mode URL: https://developers.cloudflare.com/workers-ai/json-mode/ import { Code } from "~/components"; export const jsonModeSchema = `{ response_format: { title: "JSON Mode", type: "object", properties: { type: { type: "string", enum: ["json_object", "json_schema"], }, json_schema: {}, } } }`; export const jsonModeRequestExample = `{ "messages": [ { "role": "system", "content": "Extract data about a country." }, { "role": "user", "content": "Tell me about India." } ], "response_format": { "type": "json_schema", "json_schema": { "type": "object", "properties": { "name": { "type": "string" }, "capital": { "type": "string" }, "languages": { "type": "array", "items": { "type": "string" } } }, "required": [ "name", "capital", "languages" ] } } }`; export const jsonModeResponseExample = `{ "response": { "name": "India", "capital": "New Delhi", "languages": [ "Hindi", "English", "Bengali", "Telugu", "Marathi", "Tamil", "Gujarati", "Urdu", "Kannada", "Odia", "Malayalam", "Punjabi", "Sanskrit" ] } }`; When we want text-generation AI models to interact with databases, services, and external systems programmatically, typically when using tool calling or building AI agents, we must have structured response formats rather than natural language. Workers AI supports JSON Mode, enabling applications to request a structured output response when interacting with AI models. 
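Before digging into the schema, here is a minimal sketch of what a JSON Mode request body looks like. The `buildJsonModeRequest` helper is a hypothetical illustration, not part of the Workers AI API; in a Worker you would pass the resulting object to `env.AI.run()` with one of the supported models listed below (the model name in the comment is just one example).

```javascript
// Sketch: assemble a JSON Mode request body. `buildJsonModeRequest` is a
// hypothetical helper that only makes the payload shape explicit.
function buildJsonModeRequest(messages, schema) {
  return {
    messages,
    response_format: {
      type: "json_schema",
      json_schema: schema,
    },
  };
}

const body = buildJsonModeRequest(
  [
    { role: "system", content: "Extract data about a country." },
    { role: "user", content: "Tell me about India." },
  ],
  {
    type: "object",
    properties: {
      name: { type: "string" },
      capital: { type: "string" },
    },
    required: ["name", "capital"],
  },
);

// In a Worker handler (not run here), assuming an `AI` binding:
// const answer = await env.AI.run("@cf/meta/llama-3.1-8b-instruct-fast", body);
console.log(JSON.stringify(body.response_format, null, 2));
```

The `response_format` block is the only addition to an ordinary chat request; everything else is the usual `messages` array.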
## Schema JSON Mode is compatible with OpenAI’s implementation; to enable it, add the `response_format` property to the request object using the following convention: <Code code={jsonModeSchema} lang="json" /> Where `json_schema` must be a valid [JSON Schema](https://json-schema.org/) declaration. ## JSON Mode example When using JSON Mode, pass the schema as in the example below as part of the request you send to the LLM. <Code code={jsonModeRequestExample} lang="json" /> The LLM will follow the schema, and return a response such as the one below: <Code code={jsonModeResponseExample} lang="json" /> As you can see, the model complies with the JSON schema definition in the request and responds with a validated JSON object. ## Supported Models The following models currently support JSON Mode: - [@cf/meta/llama-3.1-8b-instruct-fast](/workers-ai/models/llama-3.1-8b-instruct-fast/) - [@cf/meta/llama-3.1-70b-instruct](/workers-ai/models/llama-3.1-70b-instruct/) - [@cf/meta/llama-3.3-70b-instruct-fp8-fast](/workers-ai/models/llama-3.3-70b-instruct-fp8-fast/) - [@cf/meta/llama-3-8b-instruct](/workers-ai/models/llama-3-8b-instruct/) - [@cf/meta/llama-3.1-8b-instruct](/workers-ai/models/llama-3.1-8b-instruct/) - [@cf/meta/llama-3.2-11b-vision-instruct](/workers-ai/models/llama-3.2-11b-vision-instruct/) - [@hf/nousresearch/hermes-2-pro-mistral-7b](/workers-ai/models/hermes-2-pro-mistral-7b/) - [@hf/thebloke/deepseek-coder-6.7b-instruct-awq](/workers-ai/models/deepseek-coder-6.7b-instruct-awq/) - [@cf/deepseek-ai/deepseek-r1-distill-qwen-32b](/workers-ai/models/deepseek-r1-distill-qwen-32b/) We will continue extending this list to keep up with new and requested models. Note that Workers AI can't guarantee that the model responds according to the requested JSON Schema. Depending on the complexity of the task and the adequacy of the JSON Schema, the model may not be able to satisfy the request in extreme cases.
If that's the case, the error `JSON Mode couldn't be met` is returned and must be handled. JSON Mode currently doesn't support streaming. --- # Models URL: https://developers.cloudflare.com/workers-ai/models/ import ModelCatalog from "~/pages/workers-ai/models/index.astro"; <ModelCatalog /> --- # Platform URL: https://developers.cloudflare.com/workers-ai/platform/ import { DirectoryListing } from "~/components"; <DirectoryListing /> --- # Limits URL: https://developers.cloudflare.com/workers-ai/platform/limits/ import { Render } from "~/components" Workers AI is now Generally Available. We've updated our rate limits to reflect this. Note that model inferences in local mode using Wrangler will also count towards these limits. Beta models may have lower rate limits while we work on performance and scale. <Render file="custom_requirements" /> Rate limits are set per task type by default, with some per-model limits defined as follows: ## Rate limits by task type ### [Automatic Speech Recognition](/workers-ai/models/) * 720 requests per minute ### [Image Classification](/workers-ai/models/) * 3000 requests per minute ### [Image-to-Text](/workers-ai/models/) * 720 requests per minute ### [Object Detection](/workers-ai/models/) * 3000 requests per minute ### [Summarization](/workers-ai/models/) * 1500 requests per minute ### [Text Classification](/workers-ai/models/#text-classification) * 2000 requests per minute ### [Text Embeddings](/workers-ai/models/#text-embeddings) * 3000 requests per minute * [@cf/baai/bge-large-en-v1.5](/workers-ai/models/bge-large-en-v1.5/) is 1500 requests per minute ### [Text Generation](/workers-ai/models/#text-generation) * 300 requests per minute * [@hf/thebloke/mistral-7b-instruct-v0.1-awq](/workers-ai/models/mistral-7b-instruct-v0.1-awq/) is 400 requests per minute * [@cf/microsoft/phi-2](/workers-ai/models/phi-2/) is 720 requests per minute * [@cf/qwen/qwen1.5-0.5b-chat](/workers-ai/models/qwen1.5-0.5b-chat/) is 1500 requests per minute * 
[@cf/qwen/qwen1.5-1.8b-chat](/workers-ai/models/qwen1.5-1.8b-chat/) is 720 requests per minute * [@cf/qwen/qwen1.5-14b-chat-awq](/workers-ai/models/qwen1.5-14b-chat-awq/) is 150 requests per minute * [@cf/tinyllama/tinyllama-1.1b-chat-v1.0](/workers-ai/models/tinyllama-1.1b-chat-v1.0/) is 720 requests per minute ### [Text-to-Image](/workers-ai/models/#text-to-image) * 720 requests per minute * [@cf/runwayml/stable-diffusion-v1-5-img2img](/workers-ai/models/stable-diffusion-v1-5-img2img/) is 1500 requests per minute ### [Translation](/workers-ai/models/#translation) * 720 requests per minute --- # Pricing URL: https://developers.cloudflare.com/workers-ai/platform/pricing/ :::note Workers AI pricing is now more granular: prices are presented per model in unit-based terms (such as tokens), but billing is still calculated in Neurons on the back end. ::: Workers AI is included in both the [Free and Paid Workers plans](/workers/platform/pricing/) and is priced at **$0.011 per 1,000 Neurons**. Our free allocation allows anyone to use a total of **10,000 Neurons per day at no charge**. To use more than 10,000 Neurons per day, you need to sign up for the [Workers Paid plan](/workers/platform/pricing/#workers). On Workers Paid, you will be charged at $0.011 / 1,000 Neurons for any usage above the free allocation of 10,000 Neurons per day. You can monitor your Neuron usage in the [Cloudflare Workers AI dashboard](https://dash.cloudflare.com/?to=/:account/ai/workers-ai). All limits reset daily at 00:00 UTC. If you exceed any one of the above limits, further operations will fail with an error. | | Free <br/> allocation | Pricing | | ------------ | ---------------------- | ----------------------------- | | Workers Free | 10,000 Neurons per day | N/A - Upgrade to Workers Paid | | Workers Paid | 10,000 Neurons per day | $0.011 / 1,000 Neurons | ## What are Neurons? Neurons are our way of measuring AI outputs across different models, representing the GPU compute needed to perform your request.
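To make the conversion concrete, here is a small worked sketch of the arithmetic, using the published rate of $0.011 per 1,000 Neurons and, as an example, the per-token Neuron rates listed for `@cf/meta/llama-3.1-8b-instruct-fp8-fast` in the LLM pricing table below. The token counts are made up for illustration.

```javascript
// Sketch: estimate the Neuron and dollar cost of a single LLM request.
const PRICE_PER_1K_NEURONS = 0.011; // $0.011 per 1,000 Neurons

// Rates for @cf/meta/llama-3.1-8b-instruct-fp8-fast, per million tokens.
const NEURONS_PER_M_INPUT = 4119;
const NEURONS_PER_M_OUTPUT = 34868;

function neuronsUsed(inputTokens, outputTokens) {
  return (
    (inputTokens / 1_000_000) * NEURONS_PER_M_INPUT +
    (outputTokens / 1_000_000) * NEURONS_PER_M_OUTPUT
  );
}

function costInDollars(neurons) {
  return (neurons / 1_000) * PRICE_PER_1K_NEURONS;
}

// Hypothetical request: 10,000 input tokens and 1,000 output tokens.
const neurons = neuronsUsed(10_000, 1_000);
console.log(neurons.toFixed(2)); // "76.06" Neurons
console.log(costInDollars(neurons).toFixed(6)); // roughly $0.000837
```

A single request of this size consumes well under the 10,000-Neuron daily free allocation.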
Our serverless model allows you to pay only for what you use without having to worry about renting, managing, or scaling GPUs. ## LLM model pricing | Model | Price in Tokens | Price in Neurons | | -------------------------------------------- | ---------------------------------------------------------- | ------------------------------------------------------------------------- | | @cf/meta/llama-3.2-1b-instruct | $0.027 per M input tokens <br/> $0.201 per M output tokens | 2457 neurons per M input tokens <br/> 18252 neurons per M output tokens | | @cf/meta/llama-3.2-3b-instruct | $0.051 per M input tokens <br/> $0.335 per M output tokens | 4625 neurons per M input tokens <br/> 30475 neurons per M output tokens | | @cf/meta/llama-3.1-8b-instruct-fp8-fast | $0.045 per M input tokens <br/> $0.384 per M output tokens | 4119 neurons per M input tokens <br/> 34868 neurons per M output tokens | | @cf/meta/llama-3.2-11b-vision-instruct | $0.049 per M input tokens <br/> $0.676 per M output tokens | 4410 neurons per M input tokens <br/> 61493 neurons per M output tokens | | @cf/meta/llama-3.1-70b-instruct-fp8-fast | $0.293 per M input tokens <br/> $2.253 per M output tokens | 26668 neurons per M input tokens <br/> 204805 neurons per M output tokens | | @cf/meta/llama-3.3-70b-instruct-fp8-fast | $0.293 per M input tokens <br/> $2.253 per M output tokens | 26668 neurons per M input tokens <br/> 204805 neurons per M output tokens | | @cf/deepseek-ai/deepseek-r1-distill-qwen-32b | $0.497 per M input tokens <br/> $4.881 per M output tokens | 45170 neurons per M input tokens <br/> 443756 neurons per M output tokens | | @cf/mistral/mistral-7b-instruct-v0.1 | $0.110 per M input tokens <br/> $0.190 per M output tokens | 10000 neurons per M input tokens <br/> 17300 neurons per M output tokens | | @cf/meta/llama-3.1-8b-instruct | $0.282 per M input tokens <br/> $0.827 per M output tokens | 25608 neurons per M input tokens <br/> 75147 neurons per M output tokens | | 
@cf/meta/llama-3.1-8b-instruct-fp8 | $0.152 per M input tokens <br/> $0.287 per M output tokens | 13778 neurons per M input tokens <br/> 26128 neurons per M output tokens | | @cf/meta/llama-3.1-8b-instruct-awq | $0.123 per M input tokens <br/> $0.266 per M output tokens | 11161 neurons per M input tokens <br/> 24215 neurons per M output tokens | | @cf/meta/llama-3-8b-instruct | $0.282 per M input tokens <br/> $0.827 per M output tokens | 25608 neurons per M input tokens <br/> 75147 neurons per M output tokens | | @cf/meta/llama-3-8b-instruct-awq | $0.123 per M input tokens <br/> $0.266 per M output tokens | 11161 neurons per M input tokens <br/> 24215 neurons per M output tokens | | @cf/meta/llama-2-7b-chat-fp16 | $0.556 per M input tokens <br/> $6.667 per M output tokens | 50505 neurons per M input tokens <br/> 606061 neurons per M output tokens | | @cf/meta/llama-guard-3-8b | $0.484 per M input tokens <br/> $0.030 per M output tokens | 44003 neurons per M input tokens <br/> 2730 neurons per M output tokens | ## Other model pricing | Model | Price in Tokens | Price in Neurons | | ------------------------------------- | ---------------------------------------------------------- | ------------------------------------------------------------------------ | | @cf/black-forest-labs/flux-1-schnell | $0.0000528 per 512x512 tile <br/> $0.0001056 per step | 4.80 neurons per 512x512 tile <br/> 9.60 neurons per step | | @cf/huggingface/distilbert-sst-2-int8 | $0.026 per M input tokens | 2394 neurons per M input tokens | | @cf/baai/bge-small-en-v1.5 | $0.020 per M input tokens | 1841 neurons per M input tokens | | @cf/baai/bge-base-en-v1.5 | $0.067 per M input tokens | 6058 neurons per M input tokens | | @cf/baai/bge-large-en-v1.5 | $0.204 per M input tokens | 18582 neurons per M input tokens | | @cf/meta/m2m100-1.2b | $0.342 per M input tokens <br/> $0.342 per M output tokens | 31050 neurons per M input tokens <br/> 31050 neurons per M output tokens | | 
@cf/microsoft/resnet-50 | $2.51 per M images | 228055 neurons per M images | | @cf/openai/whisper | $0.0005 per audio minute | 41.14 neurons per audio minute | --- # Build a Retrieval Augmented Generation (RAG) AI URL: https://developers.cloudflare.com/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/ import { Details, Render, PackageManagers, WranglerConfig } from "~/components"; This guide will walk you through setting up and deploying your first application with Cloudflare AI. You will build a fully-featured AI-powered application, using tools like Workers AI, Vectorize, D1, and Cloudflare Workers. At the end of this tutorial, you will have built an AI tool that allows you to store information and query it using a Large Language Model. This pattern, known as Retrieval Augmented Generation, or RAG, is a useful project you can build by combining multiple aspects of Cloudflare's AI toolkit. You do not need to have experience working with AI tools to build this application. <Render file="prereqs" product="workers" /> You will also need access to [Vectorize](/vectorize/platform/pricing/). During this tutorial, we will show how you can optionally integrate with [Anthropic Claude](http://anthropic.com) as well. You will need an [Anthropic API key](https://docs.anthropic.com/en/api/getting-started) to do so. ## 1. Create a new Worker project C3 (`create-cloudflare-cli`) is a command-line tool designed to help you set up and deploy Workers to Cloudflare as fast as possible. Open a terminal window and run C3 to create your Worker project: <PackageManagers type="create" pkg="cloudflare@latest" args={"rag-ai-tutorial"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} /> In your project directory, C3 has generated several files. <Details header="What files did C3 create?"> 1. 
`wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file. 2. `worker.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax. 3. `package.json`: A minimal Node dependencies configuration file. 4. `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json). 5. `node_modules`: Refer to [`npm` documentation on `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules). </Details> Now, move into your newly created directory: ```sh cd rag-ai-tutorial ``` ## 2. Develop with Wrangler CLI The Workers command-line interface, [Wrangler](/workers/wrangler/install-and-update/), allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects. C3 will install Wrangler in projects by default. After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to test your Worker locally during development. ```sh npx wrangler dev --remote ``` :::note If you have not used Wrangler before, it will try to open your web browser to log in with your Cloudflare account. If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation for more information. ::: You will now be able to go to [http://localhost:8787](http://localhost:8787) to see your Worker running. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker. ## 3. 
Adding the AI binding To begin using Cloudflare's AI products, you can add the `ai` block to the [Wrangler configuration file](/workers/wrangler/configuration/). This will set up a binding to Cloudflare's AI models in your code that you can use to interact with the available AI models on the platform. This example features the [`@cf/meta/llama-3-8b-instruct` model](/workers-ai/models/llama-3-8b-instruct/), which generates text. <WranglerConfig> ```toml [ai] binding = "AI" ``` </WranglerConfig> Now, find the `src/index.js` file. Inside the `fetch` handler, you can query the `AI` binding: ```js export default { async fetch(request, env, ctx) { const answer = await env.AI.run("@cf/meta/llama-3-8b-instruct", { messages: [{ role: "user", content: `What is the square root of 9?` }], }); return new Response(JSON.stringify(answer)); }, }; ``` By querying the LLM through the `AI` binding, we can interact with Cloudflare AI's large language models directly in our code. In this example, we are using the [`@cf/meta/llama-3-8b-instruct` model](/workers-ai/models/llama-3-8b-instruct/), which generates text. You can deploy your Worker using `wrangler`: ```sh npx wrangler deploy ``` Making a request to your Worker will now generate a text response from the LLM, and return it as a JSON object. ```sh curl https://example.username.workers.dev ``` ```sh output {"response":"Answer: The square root of 9 is 3."} ``` ## 4. Adding embeddings using Cloudflare D1 and Vectorize Embeddings allow you to add additional capabilities to the language models you can use in your Cloudflare AI projects. This is done via **Vectorize**, Cloudflare's vector database. To begin using Vectorize, create a new embeddings index using `wrangler`. 
This index will store vectors with 768 dimensions, and will use cosine similarity to determine which vectors are most similar to each other: ```sh npx wrangler vectorize create vector-index --dimensions=768 --metric=cosine ``` Then, add the configuration details for your new Vectorize index to the [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml # ... existing wrangler configuration [[vectorize]] binding = "VECTOR_INDEX" index_name = "vector-index" ``` </WranglerConfig> A vector index allows you to store a collection of vectors: lists of floating point numbers (the vector's dimensions) that represent your data. When you want to query the vector database, you also convert your query into a vector of the same dimensionality. **Vectorize** is designed to efficiently determine which stored vectors are most similar to your query. To implement search, you must also set up a D1 database from Cloudflare. D1 stores your app's raw data, while Vectorize stores its vector representations. When a search query matches a stored vector, you can look up and return the corresponding data from D1. Create a new D1 database using `wrangler`: ```sh npx wrangler d1 create database ``` Then, paste the configuration details output from the previous command into the [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml # ... existing wrangler configuration [[d1_databases]] binding = "DB" # available in your Worker on env.DB database_name = "database" database_id = "abc-def-geh" # replace this with a real database_id (UUID) ``` </WranglerConfig> In this application, we'll create a `notes` table in D1, which will allow us to store notes and later retrieve them in Vectorize. 
To create this table, run a SQL command using `wrangler d1 execute`: ```sh npx wrangler d1 execute database --remote --command "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, text TEXT NOT NULL)" ``` Now, we can add a new note to our database using `wrangler d1 execute`: ```sh npx wrangler d1 execute database --remote --command "INSERT INTO notes (text) VALUES ('The best pizza topping is pepperoni')" ``` ## 5. Creating a workflow Before we begin creating notes, we will introduce a [Cloudflare Workflow](/workflows). This will allow us to define a durable workflow that can safely and robustly execute all the steps of the RAG process. To begin, add a new `[[workflows]]` block to your [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml # ... existing wrangler configuration [[workflows]] name = "rag" binding = "RAG_WORKFLOW" class_name = "RAGWorkflow" ``` </WranglerConfig> In `src/index.js`, add a new class called `RAGWorkflow` that extends `WorkflowEntrypoint`: ```js import { WorkflowEntrypoint } from "cloudflare:workers"; export class RAGWorkflow extends WorkflowEntrypoint { async run(event, step) { await step.do('example step', async () => { console.log("Hello World!") }) } } ``` This class will define a single workflow step that will log "Hello World!" to the console. You can add as many steps as you need to your workflow. On its own, this workflow will not do anything. To execute the workflow, we will call the `RAG_WORKFLOW` binding, passing in any parameters that the workflow needs to properly complete. Here is an example of how we can call the workflow: ```js env.RAG_WORKFLOW.create({ params: { text } }) ``` ## 6. Creating notes and adding them to Vectorize To expand your Worker to handle multiple routes, we will add `hono`, a routing library for Workers. This will allow us to create a new route for adding notes to our database. 
Install `hono` using `npm`: ```sh npm install hono ``` Then, import `hono` into your `src/index.js` file. You should also update the `fetch` handler to use `hono`: ```js import { Hono } from "hono"; const app = new Hono(); app.get("/", async (c) => { const answer = await c.env.AI.run("@cf/meta/llama-3-8b-instruct", { messages: [{ role: "user", content: `What is the square root of 9?` }], }); return c.json(answer); }); export default app; ``` This will establish a route at the root path `/` that is functionally equivalent to the previous version of your application. Now, we can update our workflow to begin adding notes to our database, and generating the related embeddings for them. This example features the [`@cf/baai/bge-base-en-v1.5` model](/workers-ai/models/bge-base-en-v1.5/), which can be used to create an embedding. Embeddings are stored and retrieved inside [Vectorize](/vectorize/), Cloudflare's vector database. The user query is also turned into an embedding so that it can be used for searching within Vectorize. ```js import { WorkflowEntrypoint } from "cloudflare:workers"; export class RAGWorkflow extends WorkflowEntrypoint { async run(event, step) { const env = this.env const { text } = event.payload const record = await step.do(`create database record`, async () => { const query = "INSERT INTO notes (text) VALUES (?) RETURNING *" const { results } = await env.DB.prepare(query) .bind(text) .run() const record = results[0] if (!record) throw new Error("Failed to create note") return record; }) const embedding = await step.do(`generate embedding`, async () => { const embeddings = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: text }) const values = embeddings.data[0] if (!values) throw new Error("Failed to generate vector embedding") return values }) await step.do(`insert vector`, async () => { return env.VECTOR_INDEX.upsert([ { id: record.id.toString(), values: embedding, } ]); }) } } ``` The workflow does the following things: 1. 
Accepts a `text` parameter. 2. Inserts a new row into the `notes` table in D1, and retrieves the `id` of the new row. 3. Converts the `text` into a vector using the `@cf/baai/bge-base-en-v1.5` embeddings model via the `AI` binding. 4. Upserts the `id` and vector values into the `vector-index` index in Vectorize. By doing this, you will create a new vector representation of the note, which can be used to retrieve the note later. To complete the code, we will add a route that allows users to submit notes to the database. This route will parse the JSON request body, get the `text` parameter, and create a new instance of the workflow, passing the parameter: ```js app.post('/notes', async (c) => { const { text } = await c.req.json(); if (!text) return c.text("Missing text", 400); await c.env.RAG_WORKFLOW.create({ params: { text } }) return c.text("Created note", 201); }) ``` ## 7. Querying Vectorize to retrieve notes To complete your code, you can update the root path (`/`) to query Vectorize. You will convert the query into a vector, and then use the `vector-index` index to find the most similar vectors. The `topK` parameter limits the number of vectors returned by the function. For instance, providing a `topK` of 1 will only return the _most similar_ vector based on the query. Setting `topK` to 5 will return the 5 most similar vectors. Given a list of similar vectors, you can retrieve the notes that match the record IDs stored alongside those vectors. In this case, we are only retrieving a single note - but you may customize this as needed. You can insert the text of those notes as context into the prompt for the LLM binding. This is the basis of Retrieval-Augmented Generation, or RAG: providing additional context from data outside of the LLM to enhance the text generated by the LLM. We'll update the prompt to include the context, and to ask the LLM to use the context when responding: ```js import { Hono } from "hono"; const app = new Hono(); // Existing post route... 
// app.post('/notes', async (c) => { ... }) app.get('/', async (c) => { const question = c.req.query('text') || "What is the square root of 9?" const embeddings = await c.env.AI.run('@cf/baai/bge-base-en-v1.5', { text: question }) const vectors = embeddings.data[0] const vectorQuery = await c.env.VECTOR_INDEX.query(vectors, { topK: 1 }); let vecId; if (vectorQuery.matches && vectorQuery.matches.length > 0 && vectorQuery.matches[0]) { vecId = vectorQuery.matches[0].id; } else { console.log("No matching vector found or vectorQuery.matches is empty"); } let notes = [] if (vecId) { const query = `SELECT * FROM notes WHERE id = ?` const { results } = await c.env.DB.prepare(query).bind(vecId).all() if (results) notes = results.map(vec => vec.text) } const contextMessage = notes.length ? `Context:\n${notes.map(note => `- ${note}`).join("\n")}` : "" const systemPrompt = `When answering the question or responding, use the context provided, if it is provided and relevant.` const { response: answer } = await c.env.AI.run( '@cf/meta/llama-3-8b-instruct', { messages: [ ...(notes.length ? [{ role: 'system', content: contextMessage }] : []), { role: 'system', content: systemPrompt }, { role: 'user', content: question } ] } ) return c.text(answer); }); app.onError((err, c) => { return c.text(err.message, 500); }); export default app; ``` ## 8. Adding Anthropic Claude model (optional) If you are working with larger documents, you have the option to use Anthropic's [Claude models](https://claude.ai/), which have large context windows and are well-suited to RAG workflows. To begin, install the `@anthropic-ai/sdk` package: ```sh npm install @anthropic-ai/sdk ``` In `src/index.js`, you can update the `GET /` route to check for the `ANTHROPIC_API_KEY` environment variable. If it's set, we can generate text using the Anthropic SDK. If it isn't set, we'll fall back to the existing Workers AI code: ```js import Anthropic from '@anthropic-ai/sdk'; app.get('/', async (c) => { // ... 
Existing code const systemPrompt = `When answering the question or responding, use the context provided, if it is provided and relevant.` let modelUsed = "" let response = null if (c.env.ANTHROPIC_API_KEY) { const anthropic = new Anthropic({ apiKey: c.env.ANTHROPIC_API_KEY }) const model = "claude-3-5-sonnet-latest" modelUsed = model const message = await anthropic.messages.create({ max_tokens: 1024, model, messages: [ { role: 'user', content: question } ], system: [systemPrompt, notes.length ? contextMessage : ''].join(" ") }) response = { response: message.content.map(content => content.text).join("\n") } } else { const model = "@cf/meta/llama-3.1-8b-instruct" modelUsed = model response = await c.env.AI.run( model, { messages: [ ...(notes.length ? [{ role: 'system', content: contextMessage }] : []), { role: 'system', content: systemPrompt }, { role: 'user', content: question } ] } ) } if (response) { c.header('x-model-used', modelUsed) return c.text(response.response) } else { return c.text("We were unable to generate output", 500) } }) ``` Finally, you'll need to set the `ANTHROPIC_API_KEY` environment variable in your Workers application. You can do this by using `wrangler secret put`: ```sh npx wrangler secret put ANTHROPIC_API_KEY ``` ## 9. Deleting notes and vectors If you no longer need a note, you can delete it from the database. Any time that you delete a note, you will also need to delete the corresponding vector from Vectorize. You can implement this by building a `DELETE /notes/:id` route in your `src/index.js` file: ```js app.delete("/notes/:id", async (c) => { const { id } = c.req.param(); const query = `DELETE FROM notes WHERE id = ?`; await c.env.DB.prepare(query).bind(id).run(); await c.env.VECTOR_INDEX.deleteByIds([id]); return c.body(null, 204); }); ``` ## 10. Text splitting (optional) For large pieces of text, it is recommended to split the text into smaller chunks. 
This allows LLMs to more effectively gather relevant context, without needing to retrieve large pieces of text. To implement this, we'll add a new NPM package to our project, `@langchain/textsplitters`: ```sh npm install @langchain/textsplitters ``` The `RecursiveCharacterTextSplitter` class provided by this package will split the text into smaller chunks. It can be customized to your liking, but the default config works in most cases: ```js import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters"; const text = "Some long piece of text..."; const splitter = new RecursiveCharacterTextSplitter({ // These can be customized to change the chunking size // chunkSize: 1000, // chunkOverlap: 200, }); const output = await splitter.createDocuments([text]); console.log(output) // [{ pageContent: 'Some long piece of text...' }] ``` To use this splitter, we'll update the workflow to split the text into smaller chunks. We'll then iterate over the chunks and run the rest of the workflow for each chunk of text: ```js import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters"; export class RAGWorkflow extends WorkflowEntrypoint { async run(event, step) { const env = this.env const { text } = event.payload; let texts = await step.do('split text', async () => { const splitter = new RecursiveCharacterTextSplitter(); const output = await splitter.createDocuments([text]); return output.map(doc => doc.pageContent); }) console.log(`RecursiveCharacterTextSplitter generated ${texts.length} chunks`) for (const index in texts) { const text = texts[index] const record = await step.do(`create database record: ${index}/${texts.length}`, async () => { const query = "INSERT INTO notes (text) VALUES (?) 
RETURNING *" const { results } = await env.DB.prepare(query) .bind(text) .run() const record = results[0] if (!record) throw new Error("Failed to create note") return record; }) const embedding = await step.do(`generate embedding: ${index}/${texts.length}`, async () => { const embeddings = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: text }) const values = embeddings.data[0] if (!values) throw new Error("Failed to generate vector embedding") return values }) await step.do(`insert vector: ${index}/${texts.length}`, async () => { return env.VECTOR_INDEX.upsert([ { id: record.id.toString(), values: embedding, } ]); }) } } } ``` Now, when large pieces of text are submitted to the `/notes` endpoint, they will be split into smaller chunks, and each chunk will be processed by the workflow. ## 11. Deploy your project If you did not deploy your Worker during [step 1](/workers/get-started/guide/#1-create-a-new-worker-project), deploy your Worker via Wrangler, to a `*.workers.dev` subdomain, or a [Custom Domain](/workers/configuration/routing/custom-domains/), if you have one configured. If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up. ```sh npx wrangler deploy ``` Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. :::note[Note] When pushing to your `*.workers.dev` subdomain for the first time, you may see [`523` errors](/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-523-origin-is-unreachable) while DNS is propagating. These errors should resolve themselves after a minute or so. ::: ## Related resources A full version of this codebase is available on GitHub. It includes a frontend UI for querying, adding, and deleting notes, as well as a backend API for interacting with the database and vector index. 
You can find it here: [github.com/kristianfreeman/cloudflare-retrieval-augmented-generation-example](https://github.com/kristianfreeman/cloudflare-retrieval-augmented-generation-example/). To do more: - Explore the reference diagram for a [Retrieval Augmented Generation (RAG) Architecture](/reference-architecture/diagrams/ai/ai-rag/). - Review Cloudflare's [AI documentation](/workers-ai). - Review [Tutorials](/workers/tutorials/) to build projects on Workers. - Explore [Examples](/workers/examples/) to experiment with copy and paste Worker code. - Understand how Workers works in [Reference](/workers/reference/). - Learn about Workers features and functionality in [Platform](/workers/platform/). - Set up [Wrangler](/workers/wrangler/install-and-update/) to programmatically create, test, and deploy your Worker projects. --- # Build a Voice Notes App with auto transcriptions using Workers AI URL: https://developers.cloudflare.com/workers-ai/tutorials/build-a-voice-notes-app-with-auto-transcription/ import { Render, PackageManagers, Tabs, TabItem } from "~/components"; In this tutorial, you will learn how to create a Voice Notes App with automatic transcriptions of voice recordings, and optional post-processing. The following tools will be used to build the application: - Workers AI to transcribe the voice recordings, and for the optional post processing - D1 database to store the notes - R2 storage to store the voice recordings - Nuxt framework to build the full-stack application - Workers to deploy the project ## Prerequisites To continue, you will need: <Render file="prereqs" product="workers" /> ## 1. Create a new Worker project Create a new Worker project using the `c3` CLI with the `nuxt` framework preset. 
<PackageManagers type="create" pkg="cloudflare@latest" args={"voice-notes --framework=nuxt --experimental"} /> ### Install additional dependencies Change into the newly created project directory: ```sh cd voice-notes ``` Then install the following dependencies: <PackageManagers pkg="@nuxt/ui @vueuse/core @iconify-json/heroicons" /> Next, add the `@nuxt/ui` module to the `nuxt.config.ts` file: ```ts title="nuxt.config.ts" export default defineNuxtConfig({ //.. modules: ['nitro-cloudflare-dev', '@nuxt/ui'], //.. }) ``` ### [Optional] Move to Nuxt 4 compatibility mode Moving to Nuxt 4 compatibility mode ensures that your application remains forward-compatible with upcoming updates to Nuxt. Create a new `app` folder in the project's root directory and move the `app.vue` file to it. Also, add the following to your `nuxt.config.ts` file: ```ts title="nuxt.config.ts" export default defineNuxtConfig({ //.. future: { compatibilityVersion: 4, }, //.. }) ``` :::note The rest of the tutorial will use the `app` folder for the client-side code. If you did not make this change, you should continue to use the project's root directory. ::: ### Start local development server At this point you can test your application by starting a local development server using: <PackageManagers type="run" args="dev" /> If everything is set up correctly, you should see a Nuxt welcome page at `http://localhost:3000`. ## 2. Create the transcribe API endpoint This API makes use of Workers AI to transcribe the voice recordings. To use Workers AI within your project, you first need to bind it to the Worker. <Render file="ai-local-usage-charges" product="workers" /> Add the `AI` binding to the Wrangler file. ```toml title="wrangler.toml" [ai] binding = "AI" ``` Once the `AI` binding has been configured, run the `cf-typegen` command to generate the necessary Cloudflare type definitions. This makes the type definitions available in the server event contexts. 
<PackageManagers type="run" args="cf-typegen" /> Create a transcribe `POST` endpoint by creating a `transcribe.post.ts` file inside the `/server/api` directory. ```ts title="server/api/transcribe.post.ts" export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const form = await readFormData(event); const blob = form.get('audio') as Blob; if (!blob) { throw createError({ statusCode: 400, message: 'Missing audio blob to transcribe', }); } try { const response = await cloudflare.env.AI.run('@cf/openai/whisper', { audio: [...new Uint8Array(await blob.arrayBuffer())], }); return response.text; } catch (err) { console.error('Error transcribing audio:', err); throw createError({ statusCode: 500, message: 'Failed to transcribe audio. Please try again.', }); } }); ``` The above code does the following: 1. Extracts the audio blob from the event. 2. Transcribes the blob using the `@cf/openai/whisper` model and returns the transcription text as the response. ## 3. Create an API endpoint for uploading audio recordings to R2 Before uploading the audio recordings to `R2`, you need to create a bucket first. You will also need to add the R2 binding to your Wrangler file and regenerate the Cloudflare type definitions. Create an `R2` bucket. <Tabs> <TabItem label="npm" icon="seti:npm"> ```sh npx wrangler r2 bucket create <BUCKET_NAME> ``` </TabItem> <TabItem label="yarn" icon="seti:yarn"> ```sh yarn dlx wrangler r2 bucket create <BUCKET_NAME> ``` </TabItem> <TabItem label="pnpm" icon="pnpm"> ```sh pnpm dlx wrangler r2 bucket create <BUCKET_NAME> ``` </TabItem> </Tabs> Add the storage binding to your Wrangler file. ```toml title="wrangler.toml" [[r2_buckets]] binding = "R2" bucket_name = "<BUCKET_NAME>" ``` Finally, generate the type definitions by rerunning the `cf-typegen` script. Now you are ready to create the upload endpoint. 
Create a new `upload.put.ts` file in your `server/api` directory, and add the following code to it: ```ts title="server/api/upload.put.ts" export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const form = await readFormData(event); const files = form.getAll('files') as File[]; if (!files.length) { throw createError({ statusCode: 400, message: 'Missing files' }); } const uploadKeys: string[] = []; for (const file of files) { const obj = await cloudflare.env.R2.put(`recordings/${file.name}`, file); if (obj) { uploadKeys.push(obj.key); } } return uploadKeys; }); ``` The above code does the following: 1. The `files` variable retrieves all files sent by the client using `form.getAll()`, which allows for multiple uploads in a single request. Since `getAll()` always returns an array, the code checks `files.length` to reject empty requests. 2. Uploads the files to the R2 bucket using the binding (`R2`) you created earlier. :::note The `recordings/` prefix organizes uploaded files within a dedicated folder in your bucket. This will also come in handy when serving these recordings to the client (covered later). ::: ## 4. Create an API endpoint to save notes entries Before creating the endpoint, you will need to perform steps similar to those for the R2 bucket, with some additional steps to prepare a notes table. Create a `D1` database. <Tabs> <TabItem label="npm" icon="seti:npm"> ```sh npx wrangler d1 create <DB_NAME> ``` </TabItem> <TabItem label="yarn" icon="seti:yarn"> ```sh yarn dlx wrangler d1 create <DB_NAME> ``` </TabItem> <TabItem label="pnpm" icon="pnpm"> ```sh pnpm dlx wrangler d1 create <DB_NAME> ``` </TabItem> </Tabs> Add the D1 binding to the Wrangler file. You can get the `DB_ID` from the output of the `d1 create` command. ```toml title="wrangler.toml" [[d1_databases]] binding = "DB" database_name = "<DB_NAME>" database_id = "<DB_ID>" ``` As before, rerun the `cf-typegen` command to generate the types. Next, create a DB migration. 
<Tabs> <TabItem label="npm" icon="seti:npm"> ```sh npx wrangler d1 migrations create <DB_NAME> "create notes table" ``` </TabItem> <TabItem label="yarn" icon="seti:yarn"> ```sh yarn dlx wrangler d1 migrations create <DB_NAME> "create notes table" ``` </TabItem> <TabItem label="pnpm" icon="pnpm"> ```sh pnpm dlx wrangler d1 migrations create <DB_NAME> "create notes table" ``` </TabItem> </Tabs> This will create a new `migrations` folder in the project's root directory, and add an empty `0001_create_notes_table.sql` file to it. Replace the contents of this file with the code below. ```sql CREATE TABLE IF NOT EXISTS notes ( id INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT NOT NULL, created_at DATETIME DEFAULT CURRENT_TIMESTAMP, updated_at DATETIME DEFAULT CURRENT_TIMESTAMP, audio_urls TEXT ); ``` And then apply this migration to create the `notes` table. <Tabs> <TabItem label="npm" icon="seti:npm"> ```sh npx wrangler d1 migrations apply <DB_NAME> ``` </TabItem> <TabItem label="yarn" icon="seti:yarn"> ```sh yarn dlx wrangler d1 migrations apply <DB_NAME> ``` </TabItem> <TabItem label="pnpm" icon="pnpm"> ```sh pnpm dlx wrangler d1 migrations apply <DB_NAME> ``` </TabItem> </Tabs> :::note The above command will create the notes table locally. To apply the migration on your remote production database, use the `--remote` flag. ::: Now you can create the API endpoint. Create a new file `index.post.ts` in the `server/api/notes` directory, and change its content to the following: ```ts title="server/api/notes/index.post.ts" export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const { text, audioUrls } = await readBody(event); if (!text) { throw createError({ statusCode: 400, message: 'Missing note text', }); } try { await cloudflare.env.DB.prepare( 'INSERT INTO notes (text, audio_urls) VALUES (?1, ?2)' ) .bind(text, audioUrls ? 
JSON.stringify(audioUrls) : null) .run(); return setResponseStatus(event, 201); } catch (err) { console.error('Error creating note:', err); throw createError({ statusCode: 500, message: 'Failed to create note. Please try again.', }); } }); ``` The above code does the following: 1. Extracts the text and the optional `audioUrls` from the request body. 2. Saves the note to the database after converting `audioUrls` to a JSON string. ## 5. Handle note creation on the client-side Now you're ready to work on the client side. Let's start by tackling the note creation part first. ### Recording user audio Create a composable to handle audio recording using the MediaRecorder API. This will be used to record notes through the user's microphone. Create a new file `useMediaRecorder.ts` in the `app/composables` folder, and add the following code to it: ```ts title="app/composables/useMediaRecorder.ts" interface MediaRecorderState { isRecording: boolean; recordingDuration: number; audioData: Uint8Array | null; updateTrigger: number; } export function useMediaRecorder() { const state = ref<MediaRecorderState>({ isRecording: false, recordingDuration: 0, audioData: null, updateTrigger: 0, }); let mediaRecorder: MediaRecorder | null = null; let audioContext: AudioContext | null = null; let analyser: AnalyserNode | null = null; let animationFrame: number | null = null; let audioChunks: Blob[] | undefined = undefined; const updateAudioData = () => { if (!analyser || !state.value.isRecording || !state.value.audioData) { if (animationFrame) { cancelAnimationFrame(animationFrame); animationFrame = null; } return; } analyser.getByteTimeDomainData(state.value.audioData); state.value.updateTrigger += 1; animationFrame = requestAnimationFrame(updateAudioData); }; const startRecording = async () => { try { const stream = await navigator.mediaDevices.getUserMedia({ audio: true }); audioContext = new AudioContext(); analyser = audioContext.createAnalyser(); const source = 
audioContext.createMediaStreamSource(stream); source.connect(analyser); mediaRecorder = new MediaRecorder(stream); audioChunks = []; mediaRecorder.ondataavailable = (e: BlobEvent) => { audioChunks?.push(e.data); state.value.recordingDuration += 1; }; state.value.audioData = new Uint8Array(analyser.frequencyBinCount); state.value.isRecording = true; state.value.recordingDuration = 0; state.value.updateTrigger = 0; mediaRecorder.start(1000); updateAudioData(); } catch (err) { console.error('Error accessing microphone:', err); throw err; } }; const stopRecording = async () => { return await new Promise<Blob>((resolve) => { if (mediaRecorder && state.value.isRecording) { mediaRecorder.onstop = () => { const blob = new Blob(audioChunks, { type: 'audio/webm' }); audioChunks = undefined; state.value.recordingDuration = 0; state.value.updateTrigger = 0; state.value.audioData = null; resolve(blob); }; state.value.isRecording = false; mediaRecorder.stop(); mediaRecorder.stream.getTracks().forEach((track) => track.stop()); if (animationFrame) { cancelAnimationFrame(animationFrame); animationFrame = null; } audioContext?.close(); audioContext = null; } }); }; onUnmounted(() => { stopRecording(); }); return { state: readonly(state), startRecording, stopRecording, }; } ``` The above code does the following: 1. Exposes functions to start and stop audio recordings in a Vue application. 2. Captures audio input from the user's microphone using MediaRecorder API. 3. Processes real-time audio data for visualization using AudioContext and AnalyserNode. 4. Stores recording state including duration and recording status. 5. Maintains chunks of audio data and combines them into a final audio blob when recording stops. 6. Updates audio visualization data continuously using animation frames while recording. 7. Automatically cleans up all audio resources when recording stops or component unmounts. 8. Returns audio recordings in webm format for further processing. 
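The chunk-assembly step at the heart of `stopRecording` can be sketched on its own, outside the browser. The snippet below is a standalone illustration (not part of the app) that relies only on the `Blob` global, available in modern browsers and in Node.js 18+; it shows how the per-timeslice chunks combine into a single webm blob:

```typescript
// Stand-in for the chunks MediaRecorder emits via `ondataavailable`
// (one Blob per timeslice; here just a few bytes each for illustration).
const audioChunks: Blob[] = [
	new Blob([new Uint8Array([1, 2, 3])]),
	new Blob([new Uint8Array([4, 5])]),
];

// Same assembly as in stopRecording(): concatenate the chunks into one
// blob and tag it with the webm MIME type.
const recording = new Blob(audioChunks, { type: "audio/webm" });

console.log(recording.size); // 5 — the chunk sizes add up
console.log(recording.type); // "audio/webm"
```

Because `Blob` concatenation is cheap and order-preserving, collecting small timeslice chunks and joining them once at the end avoids holding one growing buffer during recording.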
### Create a component for note creation This component allows users to create notes by either typing or recording audio. It also handles audio transcription and uploading the recordings to the server. Create a new file named `CreateNote.vue` inside the `app/components` folder. Add the following template code to the newly created file: ```vue title="app/components/CreateNote.vue" <template> <div class="flex flex-col gap-y-5"> <div class="flex flex-col h-full md:flex-row gap-y-4 md:gap-x-6 overflow-hidden p-px" > <UCard :ui="{ base: 'h-full flex flex-col flex-1', body: { base: 'flex-grow' }, header: { base: 'md:h-[72px]' }, }" > <template #header> <h3 class="text-base md:text-lg font-medium text-gray-600 dark:text-gray-300" > Note transcript </h3> </template> <UTextarea v-model="note" placeholder="Type your note or use voice recording..." size="lg" autofocus :disabled="loading || isTranscribing || state.isRecording" :rows="10" /> </UCard> <UCard class="md:h-full md:flex md:flex-col md:w-96 shrink-0 order-first md:order-none" :ui="{ body: { base: 'max-h-36 md:max-h-none md:flex-grow overflow-y-auto' }, }" > <template #header> <h3 class="text-base md:text-lg font-medium text-gray-600 dark:text-gray-300" > Note recordings </h3> <UTooltip :text="state.isRecording ? 'Stop Recording' : 'Start Recording'" > <UButton :icon=" state.isRecording ? 'i-heroicons-stop-circle' : 'i-heroicons-microphone' " :color="state.isRecording ? 'red' : 'primary'" :loading="isTranscribing" @click="toggleRecording" /> </UTooltip> </template> <AudioVisualizer v-if="state.isRecording" class="w-full h-14 p-2 bg-gray-50 dark:bg-gray-800 rounded-lg mb-2" :audio-data="state.audioData" :data-update-trigger="state.updateTrigger" /> <div v-else-if="isTranscribing" class="flex items-center justify-center h-14 gap-x-3 p-2 bg-gray-50 dark:bg-gray-800 rounded-lg mb-2 text-gray-500 dark:text-gray-400" > <UIcon name="i-heroicons-arrow-path-20-solid" class="w-6 h-6 animate-spin" /> Transcribing... 
</div> <RecordingsList :recordings="recordings" @delete="deleteRecording" /> <div v-if="!recordings.length && !state.isRecording && !isTranscribing" class="h-full flex items-center justify-center text-gray-500 dark:text-gray-400" > No recordings... </div> </UCard> </div> <UDivider /> <div class="flex justify-end gap-x-4"> <UButton icon="i-heroicons-trash" color="gray" size="lg" variant="ghost" :disabled="loading" @click="clearNote" > Clear </UButton> <UButton icon="i-heroicons-cloud-arrow-up" size="lg" :loading="loading" :disabled="!note.trim() && !state.isRecording" @click="saveNote" > Save </UButton> </div> </div> </template> ``` The above template results in the following: 1. A panel with a `textarea` inside to type the note manually. 2. Another panel to manage start/stop of an audio recording, and show the recordings done already. 3. A bottom panel to reset or save the note (along with the recordings). Now, add the following code below the template code in the same file: ```vue title="app/components/CreateNote.vue" <script setup lang="ts"> import type { Recording, Settings } from '~~/types'; const emit = defineEmits<{ (e: 'created'): void; }>(); const note = ref(''); const loading = ref(false); const isTranscribing = ref(false); const { state, startRecording, stopRecording } = useMediaRecorder(); const recordings = ref<Recording[]>([]); const handleRecordingStart = async () => { try { await startRecording(); } catch (err) { console.error('Error accessing microphone:', err); useToast().add({ title: 'Error', description: 'Could not access microphone. Please check permissions.', color: 'red', }); } }; const handleRecordingStop = async () => { let blob: Blob | undefined; try { blob = await stopRecording(); } catch (err) { console.error('Error stopping recording:', err); useToast().add({ title: 'Error', description: 'Failed to record audio. 
Please try again.', color: 'red', }); } if (blob) { try { const transcription = await transcribeAudio(blob); note.value += note.value ? '\n\n' : ''; note.value += transcription ?? ''; recordings.value.unshift({ url: URL.createObjectURL(blob), blob, id: `${Date.now()}`, }); } catch (err) { console.error('Error transcribing audio:', err); useToast().add({ title: 'Error', description: 'Failed to transcribe audio. Please try again.', color: 'red', }); } } }; const toggleRecording = () => { if (state.value.isRecording) { handleRecordingStop(); } else { handleRecordingStart(); } }; const transcribeAudio = async (blob: Blob) => { try { isTranscribing.value = true; const formData = new FormData(); formData.append('audio', blob); return await $fetch('/api/transcribe', { method: 'POST', body: formData, }); } finally { isTranscribing.value = false; } }; const clearNote = () => { note.value = ''; recordings.value = []; }; const saveNote = async () => { if (!note.value.trim()) return; loading.value = true; const noteToSave: { text: string; audioUrls?: string[] } = { text: note.value.trim(), }; try { if (recordings.value.length) { noteToSave.audioUrls = await uploadRecordings(); } await $fetch('/api/notes', { method: 'POST', body: noteToSave, }); useToast().add({ title: 'Success', description: 'Note saved successfully', color: 'green', }); note.value = ''; recordings.value = []; emit('created'); } catch (err) { console.error('Error saving note:', err); useToast().add({ title: 'Error', description: 'Failed to save note', color: 'red', }); } finally { loading.value = false; } }; const deleteRecording = (recording: Recording) => { recordings.value = recordings.value.filter((r) => r.id !== recording.id); }; const uploadRecordings = async () => { if (!recordings.value.length) return; const formData = new FormData(); recordings.value.forEach((recording) => { formData.append('files', recording.blob, recording.id + '.webm'); }); const uploadKeys = await $fetch('/api/upload', { method: 
'PUT', body: formData, }); return uploadKeys; }; </script> ``` The above code does the following: 1. When a recording is stopped by calling the `handleRecordingStop` function, the audio blob is sent to the transcribe API endpoint for transcription. 2. The transcription response text is appended to the existing textarea content. 3. When the note is saved by calling the `saveNote` function, the audio recordings are first uploaded to R2 using the upload endpoint created earlier. Then the note content, along with the `audioUrls` (the R2 object keys), is saved by calling the notes `POST` endpoint. ### Create a new page route for showing the component You can use this component in a Nuxt page to show it to the user. But before that, you need to modify your `app.vue` file. Update the content of your `app.vue` to the following: ```vue title="/app/app.vue" <template> <NuxtRouteAnnouncer /> <NuxtLoadingIndicator /> <div class="h-screen flex flex-col md:flex-row"> <USlideover v-model="isDrawerOpen" class="md:hidden" side="left" :ui="{ width: 'max-w-xs' }" > <AppSidebar :links="links" @hide-drawer="isDrawerOpen = false" /> </USlideover> <!-- The App Sidebar --> <AppSidebar :links="links" class="hidden md:block md:w-64" /> <div class="flex-1 h-full min-w-0 bg-gray-50 dark:bg-gray-950"> <!-- The App Header --> <AppHeader :title="title" @show-drawer="isDrawerOpen = true"> <template #actions v-if="route.path === '/'"> <UButton icon="i-heroicons-plus" @click="navigateTo('/new')"> New Note </UButton> </template> </AppHeader> <!-- Main Page Content --> <main class="p-4 sm:p-6 h-[calc(100vh-3.5rem)] overflow-y-auto"> <NuxtPage /> </main> </div> </div> <UNotifications /> </template> <script setup lang="ts"> const isDrawerOpen = ref(false); const links = [ { label: 'Notes', icon: 'i-heroicons-document-text', to: '/', click: () => (isDrawerOpen.value = false), }, { label: 'Settings', icon: 'i-heroicons-cog', to: '/settings', click: () => (isDrawerOpen.value = false), }, ]; const route 
= useRoute(); const title = computed(() => { const activeLink = links.find((l) => l.to === route.path); if (activeLink) { return activeLink.label; } return ''; }); </script> ``` The above code renders a Nuxt page for the user, along with an app header and a navigation sidebar. Next, add a new file named `new.vue` inside the `app/pages` folder, and add the following code to it: ```vue title="app/pages/new.vue" <template> <UModal v-model="isOpen" fullscreen> <UCard :ui="{ base: 'h-full flex flex-col', rounded: '', body: { base: 'flex-grow overflow-hidden', }, }" > <template #header> <h2 class="text-xl md:text-2xl font-semibold leading-6">Create note</h2> <UButton color="gray" variant="ghost" icon="i-heroicons-x-mark-20-solid" @click="closeModal" /> </template> <CreateNote class="max-w-7xl mx-auto h-full" @created="closeModal" /> </UCard> </UModal> </template> <script setup lang="ts"> const isOpen = ref(true); const router = useRouter(); const closeModal = () => { isOpen.value = false; if (window.history.length > 2) { router.back(); } else { navigateTo({ path: '/', replace: true, }); } }; </script> ``` The above code shows the `CreateNote` component inside a modal, and navigates back to the home page on successful note creation. ## 6. Showing the notes on the client side To show the notes from the database on the client side, first create an API endpoint that will interact with the database. 
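The components and endpoints in this tutorial import shared types (`Note`, `Recording`, `Settings`) from `~~/types`, a file whose contents are not shown here. A plausible minimal definition, inferred from how the types are used in this tutorial — field names and optionality are assumptions, so adjust them to match your own code:

```typescript
// types.ts — hypothetical sketch inferred from usage; not the tutorial's actual file.
export interface Note {
	id: number;
	text: string;
	createdAt: string;
	updatedAt: string;
	audioUrls?: string[]; // R2 object keys, parsed from the audio_urls column
}

export interface Recording {
	id: string;
	url: string; // object URL used for in-browser playback
	blob: Blob;
}

export interface Settings {
	postProcessingEnabled: boolean;
	postProcessingPrompt: string;
}

// Quick check that a typical row satisfies the Note shape
// (audioUrls omitted, since it is optional).
const example: Note = {
	id: 1,
	text: "Hello",
	createdAt: "2024-01-01T00:00:00Z",
	updatedAt: "2024-01-01T00:00:00Z",
};
console.log(example.text); // "Hello"
```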
### Create an API endpoint to fetch notes from the database Create a new file named `index.get.ts` inside the `server/api/notes` directory, and add the following code to it: ```ts title="server/api/notes/index.get.ts" import type { Note } from '~~/types'; export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const res = await cloudflare.env.DB.prepare( `SELECT id, text, audio_urls AS audioUrls, created_at AS createdAt, updated_at AS updatedAt FROM notes ORDER BY created_at DESC LIMIT 50;` ).all<Omit<Note, 'audioUrls'> & { audioUrls: string | null }>(); return res.results.map((note) => ({ ...note, audioUrls: note.audioUrls ? JSON.parse(note.audioUrls) : undefined, })); }); ``` The above code fetches the last 50 notes from the database, ordered by their creation date in descending order. The `audio_urls` field is stored as a string in the database, but it's converted to an array using `JSON.parse` to handle multiple audio files seamlessly on the client side. Next, create a page named `index.vue` inside the `app/pages` directory. This will be the home page of the application. Add the following code to it: ```vue title="app/pages/index.vue" <template> <div :class="{ 'flex h-full': !notes?.length }"> <div v-if="notes?.length" class="space-y-4 sm:space-y-6"> <NoteCard v-for="note in notes" :key="note.id" :note="note" /> </div> <div v-else class="flex-1 self-center text-center text-gray-500 dark:text-gray-400 space-y-2" > <h2 class="text-2xl md:text-3xl">No notes created</h2> <p>Get started by creating your first note</p> </div> </div> </template> <script setup lang="ts"> import type { Note } from '~~/types'; const { data: notes } = await useFetch<Note[]>('/api/notes'); </script> ``` The above code fetches the notes from the database by calling the `/api/notes` endpoint you just created, and renders them as note cards. 
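The `audioUrls` round-trip in the notes endpoint — stored as a JSON string (or `NULL`) in D1, parsed back into an array before returning results — can be exercised in isolation. A small sketch with hypothetical rows shaped like D1 results:

```typescript
// Hypothetical rows as D1 would return them: audio_urls aliased to
// audioUrls, holding either a JSON-encoded array of R2 keys or NULL.
const rows: { id: number; text: string; audioUrls: string | null }[] = [
	{ id: 1, text: "note one", audioUrls: '["recordings/a.webm","recordings/b.webm"]' },
	{ id: 2, text: "note two", audioUrls: null },
];

// The same mapping the endpoint performs before returning results.
const notes = rows.map((note) => ({
	...note,
	audioUrls: note.audioUrls ? (JSON.parse(note.audioUrls) as string[]) : undefined,
}));

console.log(notes[0].audioUrls); // [ 'recordings/a.webm', 'recordings/b.webm' ]
console.log(notes[1].audioUrls); // undefined
```

Returning `undefined` (rather than `null`) for notes without recordings means the field is simply omitted from the serialized JSON response, which keeps the client-side `Note` type's optional `audioUrls` accurate.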
### Serving the saved recordings from R2 To be able to play the audio recordings of these notes, you need to serve the saved recordings from the R2 storage. Create a new file named `[...pathname].get.ts` inside the `server/routes/recordings` directory, and add the following code to it: :::note The `...` prefix in the file name makes it a catch-all route. This allows it to receive all events that are meant for paths starting with the `/recordings` prefix. This is where the `recordings` prefix that was added previously while saving the recordings becomes helpful. ::: ```ts title="server/routes/recordings/[...pathname].get.ts" export default defineEventHandler(async (event) => { const { cloudflare, params } = event.context; const { pathname } = params || {}; return cloudflare.env.R2.get(`recordings/${pathname}`); }); ``` The above code extracts the path name from the event params, and serves the saved recording matching that object key from the R2 bucket. ## 7. [Optional] Post Processing the transcriptions Even though speech-to-text transcription models perform satisfactorily, sometimes you may want to post-process the transcriptions, whether to correct discrepancies or to change the tone or style of the final text. ### Create a settings page Create a new file named `settings.vue` in the `app/pages` folder, and add the following code to it: ```vue title="app/pages/settings.vue" <template> <UCard> <template #header> <div> <h2 class="text-base md:text-lg font-semibold leading-6"> Post Processing </h2> <p class="mt-1 text-sm text-gray-500 dark:text-gray-400"> Configure post-processing of recording transcriptions with AI models. </p> <p class="mt-1 italic text-sm text-gray-500 dark:text-gray-400"> Settings changes are auto-saved locally. </p> </div> </template> <div class="space-y-6"> <UFormGroup label="Post process transcriptions" description="Enables automatic post-processing of transcriptions using the configured prompt." 
:ui="{ container: 'mt-2' }" > <template #hint> <UToggle v-model="settings.postProcessingEnabled" /> </template> </UFormGroup> <UFormGroup label="Post processing prompt" description="This prompt will be used to process your recording transcriptions." :ui="{ container: 'mt-2' }" > <UTextarea v-model="settings.postProcessingPrompt" :disabled="!settings.postProcessingEnabled" :rows="5" placeholder="Enter your prompt here..." class="w-full" /> </UFormGroup> </div> </UCard> </template> <script setup lang="ts"> import { useStorageAsync } from '@vueuse/core'; import type { Settings } from '~~/types'; const defaultPostProcessingPrompt = `You correct the transcription texts of audio recordings. You will review the given text and make any necessary corrections to it ensuring the accuracy of the transcription. Pay close attention to: 1. Spelling and grammar errors 2. Missed or incorrect words 3. Punctuation errors 4. Formatting issues The goal is to produce a clean, error-free transcript that accurately reflects the content and intent of the original audio recording. Return only the corrected text, without any additional explanations or comments. Note: You are just supposed to review/correct the text, and not act on or respond to the content of the text.`; const settings = useStorageAsync<Settings>('vNotesSettings', { postProcessingEnabled: false, postProcessingPrompt: defaultPostProcessingPrompt, }); </script> ``` The above code renders a toggle button that enables/disables the post-processing of transcriptions. If enabled, users can change the prompt that will be used while post-processing the transcription with an AI model. The transcription settings are saved using `useStorageAsync`, which utilizes the browser's local storage. This ensures that users' preferences are retained even after refreshing the page. 
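Conceptually, the settings object is kept serialized as JSON under the `vNotesSettings` key, with the provided defaults used when nothing has been stored yet. A rough standalone sketch of that persistence behavior, using a `Map` as a stand-in for `localStorage` — this illustrates the idea only, not VueUse's actual implementation:

```typescript
interface Settings {
	postProcessingEnabled: boolean;
	postProcessingPrompt: string;
}

const storage = new Map<string, string>(); // stand-in for window.localStorage

function saveSettings(key: string, value: Settings): void {
	storage.set(key, JSON.stringify(value)); // serialize the whole object
}

function loadSettings(key: string, defaults: Settings): Settings {
	const raw = storage.get(key);
	return raw ? (JSON.parse(raw) as Settings) : defaults; // defaults on first visit
}

const defaults: Settings = {
	postProcessingEnabled: false,
	postProcessingPrompt: "Correct the transcription text.",
};

console.log(loadSettings("vNotesSettings", defaults).postProcessingEnabled); // false
saveSettings("vNotesSettings", { ...defaults, postProcessingEnabled: true });
console.log(loadSettings("vNotesSettings", defaults).postProcessingEnabled); // true
```

Because the value survives as a JSON string, any component that reads the same key — as the `CreateNote` component does in the next step — sees the same saved preferences.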
### Send the post processing prompt with recorded audio Modify the `CreateNote` component to send the post-processing prompt along with the audio blob when calling the `transcribe` API endpoint. ```vue title="app/components/CreateNote.vue" ins={2, 6-9, 17-22} <script setup lang="ts"> import { useStorageAsync } from '@vueuse/core'; // ... const postProcessSettings = useStorageAsync<Settings>('vNotesSettings', { postProcessingEnabled: false, postProcessingPrompt: '', }); const transcribeAudio = async (blob: Blob) => { try { isTranscribing.value = true; const formData = new FormData(); formData.append('audio', blob); if ( postProcessSettings.value.postProcessingEnabled && postProcessSettings.value.postProcessingPrompt ) { formData.append('prompt', postProcessSettings.value.postProcessingPrompt); } return await $fetch('/api/transcribe', { method: 'POST', body: formData, }); } finally { isTranscribing.value = false; } }; // ... </script> ``` The code added above checks the saved post-processing settings. If post-processing is enabled and a prompt is defined, it sends the prompt to the `transcribe` API endpoint along with the audio. ### Handle post processing in the transcribe API endpoint Modify the transcribe API endpoint, and update it to the following: ```ts title="server/api/transcribe.post.ts" ins={9-20, 22} export default defineEventHandler(async (event) => { // ... try { const response = await cloudflare.env.AI.run('@cf/openai/whisper', { audio: [...new Uint8Array(await blob.arrayBuffer())], }); const postProcessingPrompt = form.get('prompt') as string; if (postProcessingPrompt && response.text) { const postProcessResult = await cloudflare.env.AI.run( '@cf/meta/llama-3.1-8b-instruct', { temperature: 0.3, prompt: `${postProcessingPrompt}.\n\nText:\n\n${response.text}\n\nResponse:`, } ); return (postProcessResult as { response?: string }).response; } else { return response.text; } } catch (err) { // ... } }); ``` The above code does the following: 1. 
Extracts the post-processing prompt from the event's `FormData`. 2. If present, it calls the Workers AI API to process the transcription text using the `@cf/meta/llama-3.1-8b-instruct` model. 3. Finally, it returns the response from Workers AI to the client. ## 8. Deploy the application Now you are ready to deploy the project to a `.workers.dev` subdomain by running the deploy command. <PackageManagers type="run" args="deploy" /> You can preview your application at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. :::note If you used `pnpm` as your package manager, you may face build errors like `"stdin" is not exported by "node_modules/.pnpm/unenv@1.10.0/node_modules/unenv/runtime/node/process/index.mjs"`. To resolve it, you can try hoisting your node modules with the [`shamefully-hoist=true`](https://pnpm.io/npmrc) option. ::: ## Conclusion In this tutorial, you have gone through the steps of building a voice notes application using Nuxt 3, Cloudflare Workers, D1, and R2 storage. You learned to: - Set up the backend to store and manage notes - Create API endpoints to fetch and display notes - Handle audio recordings - Implement optional post-processing for transcriptions - Deploy the application using the Cloudflare module syntax The complete source code of the project is available on GitHub. You can go through it to see the code for various frontend components not covered in the article. You can find it here: [github.com/ra-jeev/vnotes](https://github.com/ra-jeev/vnotes). --- # Build an interview practice tool with Workers AI URL: https://developers.cloudflare.com/workers-ai/tutorials/build-ai-interview-practice-tool/ import { Render, PackageManagers } from "~/components"; Job interviews can be stressful, and practice is key to building confidence. While traditional mock interviews with friends or mentors are valuable, they are not always available when you need them. 
In this tutorial, you will learn how to build an AI-powered interview practice tool that provides real-time feedback to help improve interview skills. By the end of this tutorial, you will have built a complete interview practice tool with the following core functionalities: - A real-time interview simulation tool using WebSocket connections - An AI-powered speech processing pipeline that converts audio to text - An intelligent response system that provides interviewer-like interactions - A persistent storage system for managing interview sessions and history using Durable Objects <Render file="tutorials-before-you-start" product="workers" /> <Render file="prereqs" product="workers" /> ### Prerequisites This tutorial uses multiple Cloudflare products. While many features are available in free tiers, some components of Workers AI may incur usage-based charges. Please review the pricing documentation for Workers AI before proceeding. <Render file="ai-local-usage-charges" product="workers" /> ## 1. Create a new Worker project Create a Cloudflare Workers project using the Create Cloudflare CLI (C3) tool and the Hono framework. :::note [Hono](https://hono.dev) is a lightweight web framework that helps build API endpoints and handle HTTP requests. This tutorial uses Hono to create and manage the application's routing and middleware components. ::: Create a new Worker project by running the following commands, using `ai-interview-tool` as the Worker name: <PackageManagers type="create" pkg="cloudflare@latest" args={"ai-interview-tool"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Hono", }} /> To develop and test your Cloudflare Workers application locally: 1. Navigate to your Workers project directory in your terminal: ```sh cd ai-interview-tool ``` 2.
Start the development server by running: ```sh npx wrangler dev ``` When you run `wrangler dev`, the command starts a local development server and provides a `localhost` URL where you can preview your application. You can now make changes to your code and see them reflected in real-time at the provided localhost address. ## 2. Define TypeScript types for the interview system Now that the project is set up, create the TypeScript types that will form the foundation of the interview system. These types will help you maintain type safety and provide clear interfaces for the different components of your application. Create a new file `types.ts` that will contain essential types and enums for: - Interview skills that can be assessed (JavaScript, React, etc.) - Different interview positions (Junior Developer, Senior Developer, etc.) - Interview status tracking - Message handling between user and AI - Core interview data structure ```typescript title="src/types.ts" import { Context } from "hono"; // Context type for API endpoints, including environment bindings and user info export interface ApiContext { Bindings: CloudflareBindings; Variables: { username: string; }; } export type HonoCtx = Context<ApiContext>; // List of technical skills you can assess during mock interviews. // This application focuses on popular web technologies and programming languages // that are commonly tested in real interviews. export enum InterviewSkill { JavaScript = "JavaScript", TypeScript = "TypeScript", React = "React", NodeJS = "NodeJS", Python = "Python", } // Available interview types based on different engineering roles. // This helps tailor the interview experience and questions to // match the candidate's target position. 
export enum InterviewTitle { JuniorDeveloper = "Junior Developer Interview", SeniorDeveloper = "Senior Developer Interview", FullStackDeveloper = "Full Stack Developer Interview", FrontendDeveloper = "Frontend Developer Interview", BackendDeveloper = "Backend Developer Interview", SystemArchitect = "System Architect Interview", TechnicalLead = "Technical Lead Interview", } // Tracks the current state of an interview session. // This will help you to manage the interview flow and show appropriate UI/actions // at each stage of the process. export enum InterviewStatus { Created = "created", // Interview is created but not started Pending = "pending", // Waiting for interviewer/system InProgress = "in_progress", // Active interview session Completed = "completed", // Interview finished successfully Cancelled = "cancelled", // Interview terminated early } // Defines who sent a message in the interview chat export type MessageRole = "user" | "assistant" | "system"; // Structure of individual messages exchanged during the interview export interface Message { messageId: string; // Unique identifier for the message interviewId: string; // Links message to specific interview role: MessageRole; // Who sent the message content: string; // The actual message content timestamp: number; // When the message was sent } // Main data structure that holds all information about an interview session. // This includes metadata, messages exchanged, and the current status. export interface InterviewData { interviewId: string; title: InterviewTitle; skills: InterviewSkill[]; messages: Message[]; status: InterviewStatus; createdAt: number; updatedAt: number; } // Input format for creating a new interview session. // Simplified interface that accepts basic parameters needed to start an interview. export interface InterviewInput { title: string; skills: string[]; } ``` ## 3. 
Configure error types for different services Next, set up custom error types to handle different kinds of errors that may occur in your application. This includes: - Database errors (for example, connection issues, query failures) - Interview-related errors (for example, invalid input, transcription failures) - Authentication errors (for example, invalid sessions) Create the following `errors.ts` file: ```typescript title="src/errors.ts" export const ErrorCodes = { INVALID_MESSAGE: "INVALID_MESSAGE", TRANSCRIPTION_FAILED: "TRANSCRIPTION_FAILED", LLM_FAILED: "LLM_FAILED", DATABASE_ERROR: "DATABASE_ERROR", } as const; export class AppError extends Error { constructor( message: string, public statusCode: number, ) { super(message); this.name = this.constructor.name; } } export class UnauthorizedError extends AppError { constructor(message: string) { super(message, 401); } } export class BadRequestError extends AppError { constructor(message: string) { super(message, 400); } } export class NotFoundError extends AppError { constructor(message: string) { super(message, 404); } } export class InterviewError extends Error { constructor( message: string, public code: string, public statusCode: number = 500, ) { super(message); this.name = "InterviewError"; } } ``` ## 4. Configure authentication middleware and user routes In this step, you will implement a basic authentication system to track and identify users interacting with your AI interview practice tool. The system uses HTTP-only cookies to store usernames, allowing you to identify both the request sender and their corresponding Durable Object. This straightforward authentication approach requires users to provide a username, which is then stored securely in a cookie. 
This approach allows you to: - Identify users across requests - Associate interview sessions with specific users - Secure access to interview-related endpoints ### Create the Authentication Middleware Create a middleware function that will check for the presence of a valid authentication cookie. This middleware will be used to protect routes that require authentication. Create a new middleware file `middleware/auth.ts`: ```typescript title="src/middleware/auth.ts" import { Context } from "hono"; import { getCookie } from "hono/cookie"; import { UnauthorizedError } from "../errors"; export const requireAuth = async (ctx: Context, next: () => Promise<void>) => { // Get username from cookie const username = getCookie(ctx, "username"); if (!username) { throw new UnauthorizedError("User is not logged in"); } // Make username available to route handlers ctx.set("username", username); await next(); }; ``` This middleware: - Checks for a `username` cookie - Throws an `Error` if the cookie is missing - Makes the username available to downstream handlers via the context ### Create Authentication Routes Next, create the authentication routes that will handle user login. 
Create a new file `routes/auth.ts`: ```typescript title="src/routes/auth.ts" import { Context, Hono } from "hono"; import { setCookie } from "hono/cookie"; import { BadRequestError } from "../errors"; import { ApiContext } from "../types"; export const authenticateUser = async (ctx: Context) => { // Extract username from request body const { username } = await ctx.req.json(); // Make sure username was provided if (!username) { throw new BadRequestError("Username is required"); } // Create a secure cookie to track the user's session // This cookie will: // - Be HTTP-only for security (no JS access) // - Work across all routes via path="/" // - Last for 24 hours // - Only be sent in same-site requests to prevent CSRF setCookie(ctx, "username", username, { httpOnly: true, path: "/", maxAge: 60 * 60 * 24, sameSite: "Strict", }); // Let the client know login was successful return ctx.json({ success: true }); }; // Set up authentication-related routes export const configureAuthRoutes = () => { const router = new Hono<ApiContext>(); // POST /login - Authenticate user and create session router.post("/login", authenticateUser); return router; }; ``` Finally, update main application file to include the authentication routes. 
Modify `src/index.ts`: ```typescript title="src/index.ts" import { configureAuthRoutes } from "./routes/auth"; import { Hono } from "hono"; import { logger } from "hono/logger"; import type { ApiContext } from "./types"; import { requireAuth } from "./middleware/auth"; // Create our main Hono app instance with proper typing const app = new Hono<ApiContext>(); // Create a separate router for API endpoints to keep things organized const api = new Hono<ApiContext>(); // Set up global middleware that runs on every request // - Logger gives us visibility into what is happening app.use("*", logger()); // Wire up all our authentication routes (login, etc) // These will be mounted under /api/v1/auth/ api.route("/auth", configureAuthRoutes()); // Mount all API routes under the version prefix (for example, /api/v1) // This allows us to make breaking changes in v2 without affecting v1 users app.route("/api/v1", api); export default app; ``` Now we have a basic authentication system that: 1. Provides a login endpoint at `/api/v1/auth/login` 2. Securely stores the username in a cookie 3. Includes middleware to protect authenticated routes ## 5. Create a Durable Object to manage interviews Now that you have your authentication system in place, create a Durable Object to manage interview sessions. Durable Objects are perfect for this interview practice tool because they provide the following functionalities: - Maintains states between connections, so users can reconnect without losing progress. - Provides a SQLite database to store all interview Q&A, feedback and metrics. - Enables smooth real-time interactions between the interviewer AI and candidate. - Handles multiple interview sessions efficiently without performance issues. - Creates a dedicated instance for each user, giving them their own isolated environment. First, you will need to configure the Durable Object in Wrangler file. 
Add the following configuration: ```toml title="wrangler.toml" [[durable_objects.bindings]] name = "INTERVIEW" class_name = "Interview" [[migrations]] tag = "v1" new_sqlite_classes = ["Interview"] ``` Next, create a new file `interview.ts` to define our Interview Durable Object: ```typescript title="src/interview.ts" import { DurableObject } from "cloudflare:workers"; export class Interview extends DurableObject<CloudflareBindings> { // We will use it to keep track of all active WebSocket connections for real-time communication private sessions: Map<WebSocket, { interviewId: string }>; constructor(state: DurableObjectState, env: CloudflareBindings) { super(state, env); // Initialize empty sessions map - we will add WebSocket connections as users join this.sessions = new Map(); } // Entry point for all HTTP requests to this Durable Object // This will handle both initial setup and WebSocket upgrades async fetch(request: Request) { // For now, just confirm the object is working // We'll add WebSocket upgrade logic and request routing later return new Response("Interview object initialized"); } // Broadcasts a message to all connected WebSocket clients. private broadcast(message: string) { this.ctx.getWebSockets().forEach((ws) => { try { if (ws.readyState === WebSocket.OPEN) { ws.send(message); } } catch (error) { console.error( "Error broadcasting message to a WebSocket client:", error, ); } }); } } ``` Now we need to export the Durable Object in our main `src/index.ts` file: ```typescript title="src/index.ts" import { Interview } from "./interview"; // ... previous code ... export { Interview }; export default app; ``` Since the Worker code is written in TypeScript, you should run the following command to add the necessary type definitions: ```sh npm run cf-typegen ``` ### Set up SQLite database schema to store interview data Now you will use SQLite at the Durable Object level for data persistence. This gives each user their own isolated database instance. 
You will need two main tables: - `interviews`: Stores interview session data - `messages`: Stores all messages exchanged during interviews Before you create these tables, create a service class to handle your database operations. This encapsulates database logic and helps you: - Manage database schema changes - Handle errors consistently - Keep database queries organized Create a new file called `services/InterviewDatabaseService.ts`: ```typescript title="src/services/InterviewDatabaseService.ts" import { InterviewData, Message, InterviewStatus, InterviewTitle, InterviewSkill, } from "../types"; import { InterviewError, ErrorCodes } from "../errors"; const CONFIG = { database: { tables: { interviews: "interviews", messages: "messages", }, indexes: { messagesByInterview: "idx_messages_interviewId", }, }, } as const; export class InterviewDatabaseService { constructor(private sql: SqlStorage) {} /** * Sets up the database schema by creating tables and indexes if they do not exist. * This is called when initializing a new Durable Object instance to ensure * we have the required database structure. * * The schema consists of: * - interviews table: Stores interview metadata like title, skills, and status * - messages table: Stores the conversation history between user and AI * - messages index: Helps optimize queries when fetching messages for a specific interview */ createTables() { try { // Get list of existing tables to avoid recreating them const cursor = this.sql.exec(`PRAGMA table_list`); const existingTables = new Set([...cursor].map((table) => table.name)); // The interviews table is our main table storing interview sessions. // We only create it if it does not exist yet. if (!existingTables.has(CONFIG.database.tables.interviews)) { this.sql.exec(InterviewDatabaseService.QUERIES.CREATE_INTERVIEWS_TABLE); } // The messages table stores the actual conversation history. // It references interviews table via foreign key for data integrity. 
if (!existingTables.has(CONFIG.database.tables.messages)) { this.sql.exec(InterviewDatabaseService.QUERIES.CREATE_MESSAGES_TABLE); } // Add an index on interviewId to speed up message retrieval. // This is important since we will frequently query messages by interview. this.sql.exec(InterviewDatabaseService.QUERIES.CREATE_MESSAGE_INDEX); } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to initialize database: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } private static readonly QUERIES = { CREATE_INTERVIEWS_TABLE: ` CREATE TABLE IF NOT EXISTS interviews ( interviewId TEXT PRIMARY KEY, title TEXT NOT NULL, skills TEXT NOT NULL, createdAt INTEGER NOT NULL DEFAULT (strftime('%s','now') * 1000), updatedAt INTEGER NOT NULL DEFAULT (strftime('%s','now') * 1000), status TEXT NOT NULL DEFAULT 'pending' ) `, CREATE_MESSAGES_TABLE: ` CREATE TABLE IF NOT EXISTS messages ( messageId TEXT PRIMARY KEY, interviewId TEXT NOT NULL, role TEXT NOT NULL, content TEXT NOT NULL, timestamp INTEGER NOT NULL, FOREIGN KEY (interviewId) REFERENCES interviews(interviewId) ) `, CREATE_MESSAGE_INDEX: ` CREATE INDEX IF NOT EXISTS idx_messages_interview ON messages(interviewId) `, }; } ``` Update the `Interview` Durable Object to use the database service by modifying `src/interview.ts`: ```typescript title="src/interview.ts" import { InterviewDatabaseService } from "./services/InterviewDatabaseService"; export class Interview extends DurableObject<CloudflareBindings> { // Database service for persistent storage of interview data and messages private readonly db: InterviewDatabaseService; private sessions: Map<WebSocket, { interviewId: string }>; constructor(state: DurableObjectState, env: CloudflareBindings) { // ... previous code ... 
// Set up our database connection using the DO's built-in SQLite instance this.db = new InterviewDatabaseService(state.storage.sql); // First-time setup: ensure our database tables exist // This is idempotent so safe to call on every instantiation this.db.createTables(); } } ``` Add methods to create and retrieve interviews in `services/InterviewDatabaseService.ts`: ```typescript title="src/services/InterviewDatabaseService.ts" export class InterviewDatabaseService { /** * Creates a new interview session in the database. * * This is the main entry point for starting a new interview. It handles all the * initial setup like: * - Generating a unique ID using crypto.randomUUID() for reliable uniqueness * - Recording the interview title and required skills * - Setting up timestamps for tracking interview lifecycle * - Setting the initial status to "Created" * */ createInterview(title: InterviewTitle, skills: InterviewSkill[]): string { try { const interviewId = crypto.randomUUID(); const currentTime = Date.now(); this.sql.exec( InterviewDatabaseService.QUERIES.INSERT_INTERVIEW, interviewId, title, JSON.stringify(skills), // Store skills as JSON for flexibility InterviewStatus.Created, currentTime, currentTime, ); return interviewId; } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to create interview: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } /** * Fetches all interviews from the database, ordered by creation date. * * This is useful for displaying interview history and letting users * resume previous sessions. We order by descending creation date since * users typically want to see their most recent interviews first. * * Returns an array of InterviewData objects with full interview details * including metadata and message history. 
*/ getAllInterviews(): InterviewData[] { try { const cursor = this.sql.exec( InterviewDatabaseService.QUERIES.GET_ALL_INTERVIEWS, ); return [...cursor].map(this.parseInterviewRecord); } catch (error) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to retrieve interviews: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } // Retrieves an interview and its messages by ID getInterview(interviewId: string): InterviewData | null { try { const cursor = this.sql.exec( InterviewDatabaseService.QUERIES.GET_INTERVIEW, interviewId, ); const record = [...cursor][0]; if (!record) return null; return this.parseInterviewRecord(record); } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to retrieve interview: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } addMessage( interviewId: string, role: Message["role"], content: string, messageId: string, ): Message { try { const timestamp = Date.now(); this.sql.exec( InterviewDatabaseService.QUERIES.INSERT_MESSAGE, messageId, interviewId, role, content, timestamp, ); return { messageId, interviewId, role, content, timestamp, }; } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to add message: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } /** * Transforms raw database records into structured InterviewData objects. * * This helper does the heavy lifting of: * - Type checking critical fields to catch database corruption early * - Converting stored JSON strings back into proper objects * - Filtering out any null messages that might have snuck in * - Ensuring timestamps are proper numbers * * If any required data is missing or malformed, it throws an error * rather than returning partially valid data that could cause issues * downstream. 
*/ private parseInterviewRecord(record: any): InterviewData { const interviewId = record.interviewId as string; const createdAt = Number(record.createdAt); const updatedAt = Number(record.updatedAt); if (!interviewId || !createdAt || !updatedAt) { throw new InterviewError( "Invalid interview data in database", ErrorCodes.DATABASE_ERROR, ); } return { interviewId, title: record.title as InterviewTitle, skills: JSON.parse(record.skills as string) as InterviewSkill[], messages: record.messages ? JSON.parse(record.messages) .filter((m: any) => m !== null) .map((m: any) => ({ messageId: m.messageId, role: m.role, content: m.content, timestamp: m.timestamp, })) : [], status: record.status as InterviewStatus, createdAt, updatedAt, }; } // Add these SQL queries to the QUERIES object private static readonly QUERIES = { // ... previous queries ... INSERT_INTERVIEW: ` INSERT INTO ${CONFIG.database.tables.interviews} (interviewId, title, skills, status, createdAt, updatedAt) VALUES (?, ?, ?, ?, ?, ?) `, GET_ALL_INTERVIEWS: ` SELECT interviewId, title, skills, createdAt, updatedAt, status FROM ${CONFIG.database.tables.interviews} ORDER BY createdAt DESC `, INSERT_MESSAGE: ` INSERT INTO ${CONFIG.database.tables.messages} (messageId, interviewId, role, content, timestamp) VALUES (?, ?, ?, ?, ?) `, GET_INTERVIEW: ` SELECT i.interviewId, i.title, i.skills, i.status, i.createdAt, i.updatedAt, COALESCE( json_group_array( CASE WHEN m.messageId IS NOT NULL THEN json_object( 'messageId', m.messageId, 'role', m.role, 'content', m.content, 'timestamp', m.timestamp ) END ), '[]' ) as messages FROM ${CONFIG.database.tables.interviews} i LEFT JOIN ${CONFIG.database.tables.messages} m ON i.interviewId = m.interviewId WHERE i.interviewId = ? GROUP BY i.interviewId `, }; } ``` Add RPC methods to the `Interview` Durable Object to expose database operations through API. 
Add this code to `src/interview.ts`: ```typescript title="src/interview.ts" import { InterviewData, InterviewTitle, InterviewSkill, Message, } from "./types"; export class Interview extends DurableObject<CloudflareBindings> { // Creates a new interview session createInterview(title: InterviewTitle, skills: InterviewSkill[]): string { return this.db.createInterview(title, skills); } // Retrieves all interview sessions getAllInterviews(): InterviewData[] { return this.db.getAllInterviews(); } // Adds a new message to the 'messages' table and broadcasts it to all connected WebSocket clients. addMessage( interviewId: string, role: "user" | "assistant", content: string, messageId: string, ): Message { const newMessage = this.db.addMessage( interviewId, role, content, messageId, ); this.broadcast( JSON.stringify({ ...newMessage, type: "message", }), ); return newMessage; } } ``` ## 6. Create REST API endpoints With your Durable Object and database service ready, create REST API endpoints to manage interviews. You will need endpoints to: - Create new interviews - Retrieve all interviews for a user Create a new file for your interview routes at `routes/interview.ts`: ```typescript title="src/routes/interview.ts" import { Hono } from "hono"; import { BadRequestError } from "../errors"; import { InterviewInput, ApiContext, HonoCtx, InterviewTitle, InterviewSkill, } from "../types"; import { requireAuth } from "../middleware/auth"; /** * Gets the Interview Durable Object instance for a given user. * We use the username as a stable identifier to ensure each user * gets their own dedicated DO instance that persists across requests. */ const getInterviewDO = (ctx: HonoCtx) => { const username = ctx.get("username"); const id = ctx.env.INTERVIEW.idFromName(username); return ctx.env.INTERVIEW.get(id); }; /** * Validates the interview creation payload. 
* Makes sure we have all required fields in the correct format: * - title must be present * - skills must be a non-empty array * Throws an error if validation fails. */ const validateInterviewInput = (input: InterviewInput) => { if ( !input.title || !input.skills || !Array.isArray(input.skills) || input.skills.length === 0 ) { throw new BadRequestError("Invalid input"); } }; /** * GET /interviews * Retrieves all interviews for the authenticated user. * The interviews are stored and managed by the user's DO instance. */ const getAllInterviews = async (ctx: HonoCtx) => { const interviewDO = getInterviewDO(ctx); const interviews = await interviewDO.getAllInterviews(); return ctx.json(interviews); }; /** * POST /interviews * Creates a new interview session with the specified title and skills. * Each interview gets a unique ID that can be used to reference it later. * Returns the newly created interview ID on success. */ const createInterview = async (ctx: HonoCtx) => { const body = await ctx.req.json<InterviewInput>(); validateInterviewInput(body); const interviewDO = getInterviewDO(ctx); const interviewId = await interviewDO.createInterview( body.title as InterviewTitle, body.skills as InterviewSkill[], ); return ctx.json({ success: true, interviewId }); }; /** * Sets up all interview-related routes. * Currently supports: * - GET / : List all interviews * - POST / : Create a new interview */ export const configureInterviewRoutes = () => { const router = new Hono<ApiContext>(); router.use("*", requireAuth); router.get("/", getAllInterviews); router.post("/", createInterview); return router; }; ``` The `getInterviewDO` helper function uses the username from our authentication cookie to create a unique Durable Object ID. This ensures each user has their own isolated interview state. Update your main application file to include the routes and protect them with authentication middleware. 
Update `src/index.ts`: ```typescript title="src/index.ts" import { configureAuthRoutes } from "./routes/auth"; import { configureInterviewRoutes } from "./routes/interview"; import { Hono } from "hono"; import { Interview } from "./interview"; import { logger } from "hono/logger"; import type { ApiContext } from "./types"; const app = new Hono<ApiContext>(); const api = new Hono<ApiContext>(); app.use("*", logger()); api.route("/auth", configureAuthRoutes()); api.route("/interviews", configureInterviewRoutes()); app.route("/api/v1", api); export { Interview }; export default app; ``` Now you have two new API endpoints: - `POST /api/v1/interviews`: Creates a new interview session - `GET /api/v1/interviews`: Retrieves all interviews for the authenticated user You can test these endpoints by running the following commands: 1. Create a new interview: ```sh curl -X POST http://localhost:8787/api/v1/interviews \ -H "Content-Type: application/json" \ -H "Cookie: username=testuser; HttpOnly" \ -d '{"title":"Frontend Developer Interview","skills":["JavaScript","React","CSS"]}' ``` 2. Get all interviews: ```sh curl http://localhost:8787/api/v1/interviews \ -H "Cookie: username=testuser; HttpOnly" ``` ## 7. Set up WebSockets to handle real-time communication With the basic interview management system in place, you will now extend the `Interview` Durable Object to handle real-time message processing and maintain WebSocket connections. Update the `Interview` Durable Object to handle WebSocket connections by adding the following code to `src/interview.ts`: ```typescript export class Interview extends DurableObject<CloudflareBindings> { // Services for database operations and managing WebSocket sessions private readonly db: InterviewDatabaseService; private sessions: Map<WebSocket, { interviewId: string }>; constructor(state: DurableObjectState, env: CloudflareBindings) { // ... previous code ...
// Keep WebSocket connections alive by automatically responding to pings // This prevents timeouts and connection drops this.ctx.setWebSocketAutoResponse( new WebSocketRequestResponsePair("ping", "pong"), ); } async fetch(request: Request): Promise<Response> { // Check if this is a WebSocket upgrade request const upgradeHeader = request.headers.get("Upgrade"); if (upgradeHeader?.toLowerCase().includes("websocket")) { return this.handleWebSocketUpgrade(request); } // If it is not a WebSocket request, we don't handle it return new Response("Not found", { status: 404 }); } private async handleWebSocketUpgrade(request: Request): Promise<Response> { // Extract the interview ID from the URL - it should be the last segment const url = new URL(request.url); const interviewId = url.pathname.split("/").pop(); if (!interviewId) { return new Response("Missing interviewId parameter", { status: 400 }); } // Create a new WebSocket connection pair - one for the client, one for the server const pair = new WebSocketPair(); const [client, server] = Object.values(pair); // Keep track of which interview this WebSocket is connected to // This is important for routing messages to the right interview session this.sessions.set(server, { interviewId }); // Tell the Durable Object to start handling this WebSocket this.ctx.acceptWebSocket(server); // Send the current interview state to the client right away // This helps initialize their UI with the latest data const interviewData = await this.db.getInterview(interviewId); if (interviewData) { server.send( JSON.stringify({ type: "interview_details", data: interviewData, }), ); } // Return the client WebSocket as part of the upgrade response return new Response(null, { status: 101, webSocket: client, }); } async webSocketClose( ws: WebSocket, code: number, reason: string, wasClean: boolean, ) { // Clean up when a connection closes to prevent memory leaks // This is especially important in long-running Durable Objects console.log( `WebSocket 
closed: Code ${code}, Reason: ${reason}, Clean: ${wasClean}`, ); } } ``` Next, update the interview routes to include a WebSocket endpoint. Add the following to `routes/interview.ts`: ```typescript title="src/routes/interview.ts" // ... previous code ... const streamInterviewProcess = async (ctx: HonoCtx) => { const interviewDO = getInterviewDO(ctx); return await interviewDO.fetch(ctx.req.raw); }; export const configureInterviewRoutes = () => { const router = new Hono<ApiContext>(); router.use("*", requireAuth); router.get("/", getAllInterviews); router.post("/", createInterview); // Add WebSocket route router.get("/:interviewId", streamInterviewProcess); return router; }; ``` The WebSocket system provides real-time communication features for the interview practice tool: - Each interview session gets its own dedicated WebSocket connection, allowing seamless communication between the candidate and AI interviewer - The Durable Object maintains the connection state, ensuring no messages are lost even if the client temporarily disconnects - To keep connections stable, it automatically responds to ping messages with pongs, preventing timeouts - Candidates and interviewers receive instant updates as the interview progresses, creating a natural conversational flow ## 8. Add audio processing capabilities with Workers AI Now that the WebSocket connection is set up, the next step is to add speech-to-text capabilities using Workers AI. Let's use Cloudflare's Whisper model to transcribe audio in real-time during the interview. The audio processing pipeline will work like this: 1. Client sends audio through the WebSocket connection 2. Our Durable Object receives the binary audio data 3. We pass the audio to Whisper for transcription 4. The transcribed text is saved as a new message 5. We immediately send the transcription back to the client 6.
The client receives a notification that the AI interviewer is generating a response

### Create audio processing pipeline

In this step, you will update the `Interview` Durable Object to handle the following:

1. Detect binary audio data sent through the WebSocket
2. Create a unique message ID for tracking the processing status
3. Notify clients that audio processing has begun
4. Include error handling for failed audio processing
5. Broadcast status updates to all connected clients

First, update the `Interview` Durable Object to handle binary WebSocket messages. Add the following methods to your `src/interview.ts` file:

```typescript title="src/interview.ts"
// ... previous code ...

/**
 * Handles incoming WebSocket messages, both binary audio data and text messages.
 * This is the main entry point for all WebSocket communication.
 */
async webSocketMessage(ws: WebSocket, eventData: ArrayBuffer | string): Promise<void> {
  try {
    // Handle binary audio data from the client's microphone
    if (eventData instanceof ArrayBuffer) {
      await this.handleBinaryAudio(ws, eventData);
      return;
    }
    // Text messages will be handled by other methods
  } catch (error) {
    this.handleWebSocketError(ws, error);
  }
}

/**
 * Processes binary audio data received from the client.
 * Converts audio to text using Whisper and broadcasts processing status.
 */
private async handleBinaryAudio(ws: WebSocket, audioData: ArrayBuffer): Promise<void> {
  try {
    const uint8Array = new Uint8Array(audioData);

    // Retrieve the associated interview session
    const session = this.sessions.get(ws);
    if (!session?.interviewId) {
      throw new Error("No interview session found");
    }

    // Generate unique ID to track this message through the system
    const messageId = crypto.randomUUID();

    // Let the client know we're processing their audio
    this.broadcast(
      JSON.stringify({
        type: "message",
        status: "processing",
        role: "user",
        messageId,
        interviewId: session.interviewId,
      }),
    );

    // TODO: Implement Whisper transcription in next section
    // For now, just log the received audio data size
    console.log(`Received audio data of length: ${uint8Array.length}`);
  } catch (error) {
    console.error("Audio processing failed:", error);
    this.handleWebSocketError(ws, error);
  }
}

/**
 * Handles WebSocket errors by logging them and notifying the client.
 * Ensures errors are properly communicated back to the user.
 */
private handleWebSocketError(ws: WebSocket, error: unknown): void {
  const errorMessage =
    error instanceof Error ? error.message : "An unknown error occurred.";

  console.error("WebSocket error:", errorMessage);

  if (ws.readyState === WebSocket.OPEN) {
    ws.send(
      JSON.stringify({
        type: "error",
        message: errorMessage,
      }),
    );
  }
}
```

Your `handleBinaryAudio` method currently logs when it receives audio data. Next, you'll enhance it to transcribe speech using Workers AI's Whisper model.

### Configure speech-to-text

Now that the audio processing pipeline is set up, you will integrate Workers AI's Whisper model for speech-to-text transcription.

Configure the Workers AI binding in your Wrangler file by adding:

```toml
# ... previous configuration ...

[ai]
binding = "AI"
```

Next, generate TypeScript types for the AI binding. Run the following command:

```sh
npm run cf-typegen
```

You will need a new service class for AI operations.
Create a new file called `services/AIService.ts`:

```typescript title="src/services/AIService.ts"
import { InterviewError, ErrorCodes } from "../errors";

export class AIService {
  constructor(private readonly AI: Ai) {}

  async transcribeAudio(audioData: Uint8Array): Promise<string> {
    try {
      // Call the Whisper model to transcribe the audio
      const response = await this.AI.run("@cf/openai/whisper-tiny-en", {
        audio: Array.from(audioData),
      });

      if (!response?.text) {
        throw new Error("Failed to transcribe audio content.");
      }

      return response.text;
    } catch (error) {
      throw new InterviewError(
        "Failed to transcribe audio content",
        ErrorCodes.TRANSCRIPTION_FAILED,
      );
    }
  }
}
```

You will need to update the `Interview` Durable Object to use this new AI service. To do this, update the `handleBinaryAudio` method in `src/interview.ts`:

```typescript title="src/interview.ts"
import { AIService } from "./services/AIService";

export class Interview extends DurableObject<CloudflareBindings> {
  private readonly aiService: AIService;

  constructor(state: DurableObjectState, env: CloudflareBindings) {
    // ... previous code ...
    // Initialize the AI service with the Workers AI binding
    this.aiService = new AIService(this.env.AI);
  }

  private async handleBinaryAudio(ws: WebSocket, audioData: ArrayBuffer): Promise<void> {
    try {
      const uint8Array = new Uint8Array(audioData);

      const session = this.sessions.get(ws);
      if (!session?.interviewId) {
        throw new Error("No interview session found");
      }

      // Create a message ID for tracking
      const messageId = crypto.randomUUID();

      // Send processing state to client
      this.broadcast(
        JSON.stringify({
          type: "message",
          status: "processing",
          role: "user",
          messageId,
          interviewId: session.interviewId,
        }),
      );

      // NEW: Use AI service to transcribe the audio
      const transcribedText = await this.aiService.transcribeAudio(uint8Array);

      // Store the transcribed message
      await this.addMessage(session.interviewId, "user", transcribedText, messageId);
    } catch (error) {
      console.error("Audio processing failed:", error);
      this.handleWebSocketError(ws, error);
    }
  }
}
```

:::note
The Whisper model `@cf/openai/whisper-tiny-en` is optimized for English speech recognition. If you need support for other languages, you can use different Whisper model variants available through Workers AI.
:::

When users speak during the interview, their audio will be automatically transcribed and stored as messages in the interview session. The transcribed text will be immediately available to both the user and the AI interviewer for generating appropriate responses.

## 9. Integrate AI response generation

Now that you have audio transcription working, let's implement AI interviewer response generation using Workers AI's LLM capabilities. You'll create an interview system that:

- Maintains context of the conversation
- Provides relevant follow-up questions
- Gives constructive feedback
- Stays in character as a professional interviewer

### Set up Workers AI LLM integration

First, update the `AIService` class to handle LLM interactions.
You will need to add methods for:

- Processing interview context
- Generating appropriate responses
- Handling conversation flow

Update the `services/AIService.ts` class to include LLM functionality:

```typescript title="src/services/AIService.ts"
import { InterviewData, Message } from "../types";

export class AIService {
  async processLLMResponse(interview: InterviewData): Promise<string> {
    const messages = this.prepareLLMMessages(interview);
    try {
      const { response } = await this.AI.run("@cf/meta/llama-2-7b-chat-int8", {
        messages,
      });

      if (!response) {
        throw new Error("Failed to generate a response from the LLM model.");
      }

      return response;
    } catch (error) {
      throw new InterviewError(
        "Failed to generate a response from the LLM model.",
        ErrorCodes.LLM_FAILED,
      );
    }
  }

  private prepareLLMMessages(interview: InterviewData) {
    const messageHistory = interview.messages.map((msg: Message) => ({
      role: msg.role,
      content: msg.content,
    }));

    return [
      {
        role: "system",
        content: this.createSystemPrompt(interview),
      },
      ...messageHistory,
    ];
  }
}
```

:::note
The `@cf/meta/llama-2-7b-chat-int8` model is optimized for chat-like interactions and provides good performance while maintaining reasonable resource usage.
:::

### Create the conversation prompt

Prompt engineering is crucial for getting high-quality responses from the LLM.
Next, you will create a system prompt that:

- Sets the context for the interview
- Defines the interviewer's role and behavior
- Specifies the technical focus areas
- Guides the conversation flow

Add the following method to your `services/AIService.ts` class:

```typescript title="src/services/AIService.ts"
private createSystemPrompt(interview: InterviewData): string {
  const basePrompt = "You are conducting a technical interview.";
  const rolePrompt = `The position is for ${interview.title}.`;
  const skillsPrompt = `Focus on topics related to: ${interview.skills.join(", ")}.`;
  const instructionsPrompt =
    "Ask relevant technical questions and provide constructive feedback.";

  return `${basePrompt} ${rolePrompt} ${skillsPrompt} ${instructionsPrompt}`;
}
```

### Implement response generation logic

Finally, integrate the LLM response generation into the interview flow. Update the `handleBinaryAudio` method in the `Interview` Durable Object (`src/interview.ts`) to:

- Process transcribed user responses
- Generate appropriate AI interviewer responses
- Maintain conversation context

```typescript title="src/interview.ts"
private async handleBinaryAudio(ws: WebSocket, audioData: ArrayBuffer): Promise<void> {
  try {
    // Convert raw audio buffer to uint8 array for processing
    const uint8Array = new Uint8Array(audioData);

    const session = this.sessions.get(ws);
    if (!session?.interviewId) {
      throw new Error("No interview session found");
    }

    // Generate a unique ID to track this message through the system
    const messageId = crypto.randomUUID();

    // Let the client know we're processing their audio
    // This helps provide immediate feedback while transcription runs
    this.broadcast(
      JSON.stringify({
        type: "message",
        status: "processing",
        role: "user",
        messageId,
        interviewId: session.interviewId,
      }),
    );

    // Convert the audio to text using our AI transcription service
    // This typically takes 1-2 seconds for normal speech
    const transcribedText = await
this.aiService.transcribeAudio(uint8Array);

    // Save the user's message to our database so we maintain chat history
    await this.addMessage(session.interviewId, "user", transcribedText, messageId);

    // Look up the full interview context - we need this to generate a good response
    const interview = await this.db.getInterview(session.interviewId);
    if (!interview) {
      throw new Error(`Interview not found: ${session.interviewId}`);
    }

    // Now it's the AI's turn to respond
    // First generate an ID for the assistant's message
    const assistantMessageId = crypto.randomUUID();

    // Let the client know we're working on the AI response
    this.broadcast(
      JSON.stringify({
        type: "message",
        status: "processing",
        role: "assistant",
        messageId: assistantMessageId,
        interviewId: session.interviewId,
      }),
    );

    // Generate the AI interviewer's response based on the conversation history
    const llmResponse = await this.aiService.processLLMResponse(interview);

    await this.addMessage(session.interviewId, "assistant", llmResponse, assistantMessageId);
  } catch (error) {
    // Something went wrong processing the audio or generating a response
    // Log it and let the client know there was an error
    console.error("Audio processing failed:", error);
    this.handleWebSocketError(ws, error);
  }
}
```

## Conclusion

You have successfully built an AI-powered interview practice tool using Cloudflare's Workers AI.
In summary, you have:

- Created a real-time WebSocket communication system using Durable Objects
- Implemented speech-to-text processing with Workers AI's Whisper model
- Built an intelligent interview system using Workers AI's LLM capabilities
- Designed a persistent storage system with SQLite in Durable Objects

The complete source code for this tutorial is available on GitHub: [ai-interview-practice-tool](https://github.com/berezovyy/ai-interview-practice-tool)

---

# Explore Code Generation Using DeepSeek Coder Models

URL: https://developers.cloudflare.com/workers-ai/tutorials/explore-code-generation-using-deepseek-coder-models/

import { Stream } from "~/components"

A handy way to explore all of the models available on [Workers AI](/workers-ai) is to use a [Jupyter Notebook](https://jupyter.org/). You can [download the DeepSeek Coder notebook](/workers-ai/static/documentation/notebooks/deepseek-coder-exploration.ipynb) or view the embedded notebook below.

<Stream id="97b46763341a395a4ce1c0a6f913662b" title="Explore Code Generation Using DeepSeek Coder Models" />

[comment]: <> "The markdown below is auto-generated from https://github.com/craigsdennis/notebooks-cloudflare-workers-ai"

***

## Exploring Code Generation Using DeepSeek Coder

AI models being able to generate code unlocks all sorts of use cases. The [DeepSeek Coder](https://github.com/deepseek-ai/DeepSeek-Coder) models `@hf/thebloke/deepseek-coder-6.7b-base-awq` and `@hf/thebloke/deepseek-coder-6.7b-instruct-awq` are now available on [Workers AI](/workers-ai).

Let's explore them using the API!
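Every example in this notebook follows the same REST pattern: a `POST` to the account's `ai/run` endpoint with the model name in the path, a bearer token header, and a `messages` array in the body. As a reference before diving in, here is a minimal sketch of that pattern (the `build_run_url` and `run_model` helper names are illustrative, not part of the notebook):

```python
import requests

API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def build_run_url(account_id: str, model: str) -> str:
    """Construct the Workers AI run endpoint for a given account and model."""
    return f"{API_BASE}/{account_id}/ai/run/{model}"

def run_model(account_id: str, api_token: str, model: str, messages: list) -> str:
    """POST a chat-style payload to Workers AI and return the generated text."""
    response = requests.post(
        build_run_url(account_id, model),
        headers={"Authorization": f"Bearer {api_token}"},
        json={"messages": messages},
    )
    response.raise_for_status()
    # Successful responses carry the generated text under result.response
    return response.json()["result"]["response"]
```

The cells below inline this same request in each example rather than sharing a helper, which keeps every cell self-contained.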
```python
import sys
!{sys.executable} -m pip install requests python-dotenv
```

```
Requirement already satisfied: requests in ./venv/lib/python3.12/site-packages (2.31.0)
Requirement already satisfied: python-dotenv in ./venv/lib/python3.12/site-packages (1.0.1)
Requirement already satisfied: charset-normalizer<4,>=2 in ./venv/lib/python3.12/site-packages (from requests) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./venv/lib/python3.12/site-packages (from requests) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.12/site-packages (from requests) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./venv/lib/python3.12/site-packages (from requests) (2023.11.17)
```

```python
import os
from getpass import getpass

from IPython.display import display, Image, Markdown, Audio

import requests
```

```python
%load_ext dotenv
%dotenv
```

### Configuring your environment

To use the API you'll need your [Cloudflare Account ID](https://dash.cloudflare.com) (head to Workers & Pages > Overview > Account details > Account ID) and a [Workers AI enabled API Token](https://dash.cloudflare.com/profile/api-tokens).

If you want to add these values to your environment, you can create a new file named `.env`:

```bash
CLOUDFLARE_API_TOKEN="YOUR-TOKEN"
CLOUDFLARE_ACCOUNT_ID="YOUR-ACCOUNT-ID"
```

```python
if "CLOUDFLARE_API_TOKEN" in os.environ:
    api_token = os.environ["CLOUDFLARE_API_TOKEN"]
else:
    api_token = getpass("Enter your Cloudflare API Token")
```

```python
if "CLOUDFLARE_ACCOUNT_ID" in os.environ:
    account_id = os.environ["CLOUDFLARE_ACCOUNT_ID"]
else:
    account_id = getpass("Enter your account ID")
```

### Generate code from a comment

A common use case is to complete the code for the user after they provide a descriptive comment.
````python
model = "@hf/thebloke/deepseek-coder-6.7b-base-awq"
prompt = "# A function that checks if a given word is a palindrome"

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "user", "content": prompt}
    ]}
)
inference = response.json()
code = inference["result"]["response"]

display(Markdown(f"""
```python
{prompt}
{code.strip()}
```
"""))
````

```python
# A function that checks if a given word is a palindrome
def is_palindrome(word):
    # Convert the word to lowercase
    word = word.lower()
    # Reverse the word
    reversed_word = word[::-1]
    # Check if the reversed word is the same as the original word
    if word == reversed_word:
        return True
    else:
        return False

# Test the function
print(is_palindrome("racecar"))  # Output: True
print(is_palindrome("hello"))  # Output: False
```

### Assist in debugging

We've all been there; bugs happen. Sometimes those stacktraces can be very intimidating, and a great use case of using Code Generation is to assist in explaining the problem.

```python
model = "@hf/thebloke/deepseek-coder-6.7b-instruct-awq"

system_message = "The user is going to give you code that isn't working. Explain to the user what might be wrong"

code = """# Welcomes our user
def hello_world(first_name="World"):
    print(f"Hello, {name}!")
"""

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "system", "content": system_message},
        {"role": "user", "content": code},
    ]}
)
inference = response.json()
response = inference["result"]["response"]
display(Markdown(response))
```

The error in your code is that you are trying to use a variable `name` which is not defined anywhere in your function. The correct variable to use is `first_name`. So, you should change `f"Hello, {name}!"` to `f"Hello, {first_name}!"`.
Here is the corrected code:

```python
# Welcomes our user
def hello_world(first_name="World"):
    print(f"Hello, {first_name}")
```

Now, when you call `hello_world()`, it will print "Hello, World" by default. If you call `hello_world("John")`, it will print "Hello, John".

### Write tests!

Writing unit tests is a common best practice. With enough context, it's possible to write unit tests.

```python
model = "@hf/thebloke/deepseek-coder-6.7b-instruct-awq"

system_message = "The user is going to give you code and would like to have tests written in the Python unittest module."

code = """
class User:

    def __init__(self, first_name, last_name=None):
        self.first_name = first_name
        self.last_name = last_name
        if last_name is None:
            self.last_name = "Mc" + self.first_name

    def full_name(self):
        return self.first_name + " " + self.last_name
"""

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "system", "content": system_message},
        {"role": "user", "content": code},
    ]}
)
inference = response.json()
response = inference["result"]["response"]
display(Markdown(response))
```

Here is a simple unittest test case for the User class:

```python
import unittest

class TestUser(unittest.TestCase):
    def test_full_name(self):
        user = User("John", "Doe")
        self.assertEqual(user.full_name(), "John Doe")

    def test_default_last_name(self):
        user = User("Jane")
        self.assertEqual(user.full_name(), "Jane McJane")

if __name__ == '__main__':
    unittest.main()
```

In this test case, we have two tests:

* `test_full_name` tests the `full_name` method when the user has both a first name and a last name.
* `test_default_last_name` tests the `full_name` method when the user only has a first name and the last name is set to "Mc" + first name.

If all these tests pass, it means that the `full_name` method is working as expected.
If any of these tests fail, it

### Fill-in-the-middle Code Completion

A common use case in Developer Tools is to autocomplete based on context. DeepSeek Coder provides the ability to submit existing code with a placeholder, so that the model can complete in context.

Warning: The tokens are prefixed with `<|` and suffixed with `|>`; make sure to copy and paste them exactly.

````python
model = "@hf/thebloke/deepseek-coder-6.7b-base-awq"

code = """
<|fim▁begin|>import re

from jklol import email_service

def send_email(email_address, body):
    <|fim▁hole|>
    if not is_valid_email:
        raise InvalidEmailAddress(email_address)
    return email_service.send(email_address, body)<|fim▁end|>
"""

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "user", "content": code}
    ]}
)
inference = response.json()
response = inference["result"]["response"]

display(Markdown(f"""
```python
{response.strip()}
```
"""))
````

```python
is_valid_email = re.match(r"[^@]+@[^@]+\.[^@]+", email_address)
```

### Experimental: Extract data into JSON

No need to threaten the model or bring grandma into the prompt. Get back JSON in the format you want.
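Even with a firm system prompt, an instruct model may still wrap its JSON in a Markdown fence or add stray whitespace, so it is worth parsing the reply defensively before trusting it. A minimal stdlib sketch (the `parse_model_json` helper is illustrative, not part of the notebook):

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse model output as JSON, tolerating a Markdown code fence around it."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (for example ```json) and the closing fence
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)
```

If `json.loads` still raises, one option is to re-prompt the model with the parse error and ask it to correct its output.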
````python
model = "@hf/thebloke/deepseek-coder-6.7b-instruct-awq"

# Learn more at https://json-schema.org/
json_schema = """
{
  "title": "User",
  "description": "A user from our example app",
  "type": "object",
  "properties": {
    "firstName": {
      "description": "The user's first name",
      "type": "string"
    },
    "lastName": {
      "description": "The user's last name",
      "type": "string"
    },
    "numKids": {
      "description": "Amount of children the user has currently",
      "type": "integer"
    },
    "interests": {
      "description": "A list of what the user has shown interest in",
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["firstName"]
}
"""

system_prompt = f"""
The user is going to discuss themselves and you should create a JSON object from their description to match the json schema below.

<BEGIN JSON SCHEMA>
{json_schema}
<END JSON SCHEMA>

Return JSON only. Do not explain or provide usage examples.
"""

prompt = """Hey there, I'm Craig Dennis and I'm a Developer Educator at Cloudflare. My email is craig@cloudflare.com. I am very interested in AI. I've got two kids. I love tacos, burritos, and all things Cloudflare"""

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt}
    ]}
)
inference = response.json()
response = inference["result"]["response"]
display(Markdown(f"""
```json
{response.strip()}
```
"""))
````

```json
{
  "firstName": "Craig",
  "lastName": "Dennis",
  "numKids": 2,
  "interests": ["AI", "Cloudflare", "Tacos", "Burritos"]
}
```

---

# Explore Workers AI Models Using a Jupyter Notebook

URL: https://developers.cloudflare.com/workers-ai/tutorials/explore-workers-ai-models-using-a-jupyter-notebook/

import { Stream } from "~/components"

A handy way to explore all of the models available on [Workers AI](/workers-ai) is to use a [Jupyter Notebook](https://jupyter.org/).
You can [download the Workers AI notebook](/workers-ai-notebooks/cloudflare-workers-ai.ipynb) or view the embedded notebook below. You can also run it on [Google Colab](https://colab.research.google.com/github/craigsdennis/notebooks-cloudflare-workers-ai/blob/main/cloudflare-workers-ai.ipynb).

<Stream id="2c60022bea5c8c1b343e76566fed76f2" title="Explore Workers AI Models Using a Jupyter Notebook" thumbnail="2.5s" />

[comment]: <> "The markdown below is auto-generated from https://github.com/craigsdennis/notebooks-cloudflare-workers-ai the <audio> tag is hard coded"

***

## Explore the Workers AI API using Python

[Workers AI](/workers-ai) allows you to run machine learning models, on the Cloudflare network, from your own code – whether that be from Workers, Pages, or anywhere via the REST API. This notebook will explore the Workers AI REST API using the [official Python SDK](https://github.com/cloudflare/cloudflare-python).

```python
import os
from getpass import getpass

from cloudflare import Cloudflare
from IPython.display import display, Image, Markdown, Audio
import requests
```

```python
%load_ext dotenv
%dotenv
```

### Configuring your environment

To use the API you'll need your [Cloudflare Account ID](https://dash.cloudflare.com). Head to the AI > Workers AI page and press the "Use REST API" button. This page will let you create a new API Token and copy your Account ID.

If you want to add these values to your environment variables, you can **create a new file** named `.env` and this notebook will read those values.

```bash
CLOUDFLARE_API_TOKEN="YOUR-TOKEN"
CLOUDFLARE_ACCOUNT_ID="YOUR-ACCOUNT-ID"
```

Otherwise you can just enter the values securely when prompted below.
```python
if "CLOUDFLARE_API_TOKEN" in os.environ:
    api_token = os.environ["CLOUDFLARE_API_TOKEN"]
else:
    api_token = getpass("Enter your Cloudflare API Token")
```

```python
if "CLOUDFLARE_ACCOUNT_ID" in os.environ:
    account_id = os.environ["CLOUDFLARE_ACCOUNT_ID"]
else:
    account_id = getpass("Enter your account ID")
```

```python
# Initialize client
client = Cloudflare(api_token=api_token)
```

## Explore tasks available on the Workers AI Platform

### Text Generation

Explore all [Text Generation Models](/workers-ai/models)

```python
result = client.workers.ai.run(
    "@cf/meta/llama-3-8b-instruct",
    account_id=account_id,
    messages=[
        {"role": "system", "content": """
            You are a productivity assistant for users of Jupyter notebooks for both Mac and Windows users.

            Respond in Markdown."""
        },
        {"role": "user", "content": "How do I use keyboard shortcuts to execute cells?"}
    ]
)

display(Markdown(result["response"]))
```

**Using Keyboard Shortcuts to Execute Cells in Jupyter Notebooks**
===============================================================

Executing cells in Jupyter Notebooks can be done quickly and efficiently using various keyboard shortcuts, saving you time and effort. Here are the shortcuts you can use:

**Mac**

* **Shift + Enter**: Execute the current cell and insert a new cell below.
* **Ctrl + Enter**: Execute the current cell and insert a new cell below, without creating a new output display.

**Windows/Linux**

* **Shift + Enter**: Execute the current cell and insert a new cell below.
* **Ctrl + Enter**: Execute the current cell and move to the next cell.

**Additional Shortcuts**

* **Alt + Enter**: Execute the current cell and create a new output display below (Mac), or move to the next cell (Windows/Linux).
* **Ctrl + Shift + Enter**: Execute the current cell and create a new output display below (Mac), or create a new cell below (Windows/Linux).
**Tips and Tricks**

* You can also use the **Run Cell** button in the Jupyter Notebook toolbar, or the **Run** menu option (macOS) or **Run -> Run Cell** (Windows/Linux).
* To execute a selection of cells, use **Shift + Alt + Enter** (Mac) or **Shift + Ctrl + Enter** (Windows/Linux).
* To execute a cell and move to the next cell, use **Ctrl + Shift + Enter** (all platforms).

By using these keyboard shortcuts, you'll be able to work more efficiently and quickly in your Jupyter Notebooks. Happy coding!

### Text to Image

Explore all [Text to Image models](/workers-ai/models)

```python
data = client.workers.ai.with_raw_response.run(
    "@cf/lykon/dreamshaper-8-lcm",
    account_id=account_id,
    prompt="A software developer incredibly excited about AI, huge smile",
)

display(Image(data.read()))
```

### Image to Text

Explore all [Image to Text](/workers-ai/models/) models

```python
url = "https://blog.cloudflare.com/content/images/2017/11/lava-lamps.jpg"

image_request = requests.get(url, allow_redirects=True)

display(Image(image_request.content, format="jpg"))

data = client.workers.ai.run(
    "@cf/llava-hf/llava-1.5-7b-hf",
    account_id=account_id,
    image=image_request.content,
    prompt="Describe this photo",
    max_tokens=2048
)

print(data["description"])
```

The image features a display of various colored lava lamps. There are at least 14 lava lamps in the scene, each with a different color and design. The lamps are arranged in a visually appealing manner, with some placed closer to the foreground and others further back. The display creates an eye-catching and vibrant atmosphere, showcasing the diverse range of lava lamps available.
### Automatic Speech Recognition

Explore all [Speech Recognition models](/workers-ai/models)

```python
url = "https://raw.githubusercontent.com/craigsdennis/notebooks-cloudflare-workers-ai/main/assets/craig-rambling.mp3"

display(Audio(url))

audio = requests.get(url)

response = client.workers.ai.run(
    "@cf/openai/whisper",
    account_id=account_id,
    audio=audio.content
)

response
```

<audio controls="controls" >
    <source src="https://raw.githubusercontent.com/craigsdennis/notebooks-cloudflare-workers-ai/main/assets/craig-rambling.mp3" />
    Your browser does not support the audio element.
</audio>

```python
{'text': "Hello there, I'm making a recording for a Jupiter notebook. That's a Python notebook, Jupiter, J-U-P-Y-T-E-R. Not to be confused with the planet. Anyways, let me hear, I'm gonna talk a little bit, I'm gonna make a little bit of noise, say some hard words, I'm gonna say Kubernetes, I'm not actually even talking about Kubernetes, I just wanna see if I can do Kubernetes. Anyway, this is a test of transcription and let's see how we're dead.",
 'word_count': 84,
 'vtt': "WEBVTT\n\n00.280 --> 01.840\nHello there, I'm making a\n\n01.840 --> 04.060\nrecording for a Jupiter notebook.\n\n04.060 --> 06.440\nThat's a Python notebook, Jupiter,\n\n06.440 --> 07.720\nJ -U -P -Y -T\n\n07.720 --> 09.420\n-E -R. Not to be\n\n09.420 --> 12.140\nconfused with the planet. Anyways,\n\n12.140 --> 12.940\nlet me hear, I'm gonna\n\n12.940 --> 13.660\ntalk a little bit, I'm\n\n13.660 --> 14.600\ngonna make a little bit\n\n14.600 --> 16.180\nof noise, say some hard\n\n16.180 --> 17.540\nwords, I'm gonna say Kubernetes,\n\n17.540 --> 18.420\nI'm not actually even talking\n\n18.420 --> 19.500\nabout Kubernetes, I just wanna\n\n19.500 --> 20.300\nsee if I can do\n\n20.300 --> 22.120\nKubernetes.
Anyway, this is a\n\n22.120 --> 24.080\ntest of transcription and let's\n\n24.080 --> 26.280\nsee how we're dead.", 'words': [{'word': 'Hello', 'start': 0.2800000011920929, 'end': 0.7400000095367432}, {'word': 'there,', 'start': 0.7400000095367432, 'end': 1.2400000095367432}, {'word': "I'm", 'start': 1.2400000095367432, 'end': 1.4800000190734863}, {'word': 'making', 'start': 1.4800000190734863, 'end': 1.6799999475479126}, {'word': 'a', 'start': 1.6799999475479126, 'end': 1.840000033378601}, {'word': 'recording', 'start': 1.840000033378601, 'end': 2.2799999713897705}, {'word': 'for', 'start': 2.2799999713897705, 'end': 2.6600000858306885}, {'word': 'a', 'start': 2.6600000858306885, 'end': 2.799999952316284}, {'word': 'Jupiter', 'start': 2.799999952316284, 'end': 3.2200000286102295}, {'word': 'notebook.', 'start': 3.2200000286102295, 'end': 4.059999942779541}, {'word': "That's", 'start': 4.059999942779541, 'end': 4.28000020980835}, {'word': 'a', 'start': 4.28000020980835, 'end': 4.380000114440918}, {'word': 'Python', 'start': 4.380000114440918, 'end': 4.679999828338623}, {'word': 'notebook,', 'start': 4.679999828338623, 'end': 5.460000038146973}, {'word': 'Jupiter,', 'start': 5.460000038146973, 'end': 6.440000057220459}, {'word': 'J', 'start': 6.440000057220459, 'end': 6.579999923706055}, {'word': '-U', 'start': 6.579999923706055, 'end': 6.920000076293945}, {'word': '-P', 'start': 6.920000076293945, 'end': 7.139999866485596}, {'word': '-Y', 'start': 7.139999866485596, 'end': 7.440000057220459}, {'word': '-T', 'start': 7.440000057220459, 'end': 7.71999979019165}, {'word': '-E', 'start': 7.71999979019165, 'end': 7.920000076293945}, {'word': '-R.', 'start': 7.920000076293945, 'end': 8.539999961853027}, {'word': 'Not', 'start': 8.539999961853027, 'end': 8.880000114440918}, {'word': 'to', 'start': 8.880000114440918, 'end': 9.300000190734863}, {'word': 'be', 'start': 9.300000190734863, 'end': 9.420000076293945}, {'word': 'confused', 'start': 9.420000076293945, 'end': 
9.739999771118164}, {'word': 'with', 'start': 9.739999771118164, 'end': 9.9399995803833}, {'word': 'the', 'start': 9.9399995803833, 'end': 10.039999961853027}, {'word': 'planet.', 'start': 10.039999961853027, 'end': 11.380000114440918}, {'word': 'Anyways,', 'start': 11.380000114440918, 'end': 12.140000343322754}, {'word': 'let', 'start': 12.140000343322754, 'end': 12.420000076293945}, {'word': 'me', 'start': 12.420000076293945, 'end': 12.520000457763672}, {'word': 'hear,', 'start': 12.520000457763672, 'end': 12.800000190734863}, {'word': "I'm", 'start': 12.800000190734863, 'end': 12.880000114440918}, {'word': 'gonna', 'start': 12.880000114440918, 'end': 12.9399995803833}, {'word': 'talk', 'start': 12.9399995803833, 'end': 13.100000381469727}, {'word': 'a', 'start': 13.100000381469727, 'end': 13.260000228881836}, {'word': 'little', 'start': 13.260000228881836, 'end': 13.380000114440918}, {'word': 'bit,', 'start': 13.380000114440918, 'end': 13.5600004196167}, {'word': "I'm", 'start': 13.5600004196167, 'end': 13.65999984741211}, {'word': 'gonna', 'start': 13.65999984741211, 'end': 13.739999771118164}, {'word': 'make', 'start': 13.739999771118164, 'end': 13.920000076293945}, {'word': 'a', 'start': 13.920000076293945, 'end': 14.199999809265137}, {'word': 'little', 'start': 14.199999809265137, 'end': 14.4399995803833}, {'word': 'bit', 'start': 14.4399995803833, 'end': 14.600000381469727}, {'word': 'of', 'start': 14.600000381469727, 'end': 14.699999809265137}, {'word': 'noise,', 'start': 14.699999809265137, 'end': 15.460000038146973}, {'word': 'say', 'start': 15.460000038146973, 'end': 15.859999656677246}, {'word': 'some', 'start': 15.859999656677246, 'end': 16}, {'word': 'hard', 'start': 16, 'end': 16.18000030517578}, {'word': 'words,', 'start': 16.18000030517578, 'end': 16.540000915527344}, {'word': "I'm", 'start': 16.540000915527344, 'end': 16.639999389648438}, {'word': 'gonna', 'start': 16.639999389648438, 'end': 16.719999313354492}, {'word': 'say', 'start': 
16.719999313354492, 'end': 16.920000076293945}, {'word': 'Kubernetes,', 'start': 16.920000076293945, 'end': 17.540000915527344}, {'word': "I'm", 'start': 17.540000915527344, 'end': 17.65999984741211}, {'word': 'not', 'start': 17.65999984741211, 'end': 17.719999313354492}, {'word': 'actually', 'start': 17.719999313354492, 'end': 18}, {'word': 'even', 'start': 18, 'end': 18.18000030517578}, {'word': 'talking', 'start': 18.18000030517578, 'end': 18.420000076293945}, {'word': 'about', 'start': 18.420000076293945, 'end': 18.6200008392334}, {'word': 'Kubernetes,', 'start': 18.6200008392334, 'end': 19.1200008392334}, {'word': 'I', 'start': 19.1200008392334, 'end': 19.239999771118164}, {'word': 'just', 'start': 19.239999771118164, 'end': 19.360000610351562}, {'word': 'wanna', 'start': 19.360000610351562, 'end': 19.5}, {'word': 'see', 'start': 19.5, 'end': 19.719999313354492}, {'word': 'if', 'start': 19.719999313354492, 'end': 19.8799991607666}, {'word': 'I', 'start': 19.8799991607666, 'end': 19.940000534057617}, {'word': 'can', 'start': 19.940000534057617, 'end': 20.079999923706055}, {'word': 'do', 'start': 20.079999923706055, 'end': 20.299999237060547}, {'word': 'Kubernetes.', 'start': 20.299999237060547, 'end': 21.440000534057617}, {'word': 'Anyway,', 'start': 21.440000534057617, 'end': 21.799999237060547}, {'word': 'this', 'start': 21.799999237060547, 'end': 21.920000076293945}, {'word': 'is', 'start': 21.920000076293945, 'end': 22.020000457763672}, {'word': 'a', 'start': 22.020000457763672, 'end': 22.1200008392334}, {'word': 'test', 'start': 22.1200008392334, 'end': 22.299999237060547}, {'word': 'of', 'start': 22.299999237060547, 'end': 22.639999389648438}, {'word': 'transcription', 'start': 22.639999389648438, 'end': 23.139999389648438}, {'word': 'and', 'start': 23.139999389648438, 'end': 23.6200008392334}, {'word': "let's", 'start': 23.6200008392334, 'end': 24.079999923706055}, {'word': 'see', 'start': 24.079999923706055, 'end': 24.299999237060547}, {'word': 'how', 
'start': 24.299999237060547, 'end': 24.559999465942383}, {'word': "we're", 'start': 24.559999465942383, 'end': 24.799999237060547}, {'word': 'dead.', 'start': 24.799999237060547, 'end': 26.280000686645508}]}
```

### Translations

Explore all [Translation models](/workers-ai/models)

```python
result = client.workers.ai.run(
    "@cf/meta/m2m100-1.2b",
    account_id=account_id,
    text="Artificial intelligence is pretty impressive these days. It is a bonkers time to be a builder",
    source_lang="english",
    target_lang="spanish"
)

print(result["translated_text"])
```

La inteligencia artificial es bastante impresionante en estos días. Es un buen momento para ser un constructor

### Text Classification

Explore all [Text Classification models](/workers-ai/models)

```python
result = client.workers.ai.run(
    "@cf/huggingface/distilbert-sst-2-int8",
    account_id=account_id,
    text="This taco is delicious"
)

result
```

[TextClassification(label='NEGATIVE', score=0.00012679687642958015), TextClassification(label='POSITIVE', score=0.999873161315918)]

### Image Classification

Explore all [Image Classification models](/workers-ai/models#image-classification)

```python
url = "https://raw.githubusercontent.com/craigsdennis/notebooks-cloudflare-workers-ai/main/assets/craig-and-a-burrito.jpg"
image_request = requests.get(url, allow_redirects=True)

display(Image(image_request.content, format="jpg"))

response = client.workers.ai.run(
    "@cf/microsoft/resnet-50",
    account_id=account_id,
    image=image_request.content
)

response
```

[TextClassification(label='BURRITO', score=0.9999679327011108), TextClassification(label='GUACAMOLE', score=8.516660273016896e-06), TextClassification(label='BAGEL', score=4.689153229264775e-06), TextClassification(label='SPATULA', score=4.075985089002643e-06), TextClassification(label='POTPIE', score=3.0849002996546915e-06)]

## Summarization

Explore all [Summarization models](/workers-ai/models#summarization)

```python
declaration_of_independence = """In Congress, July 4,
1776. The unanimous Declaration of the thirteen united States of America, When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation. We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.--Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. 
The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world. He has refused his Assent to Laws, the most wholesome and necessary for the public good. He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them. He has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only. He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository of their public Records, for the sole purpose of fatiguing them into compliance with his measures. He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people. He has refused for a long time, after such dissolutions, to cause others to be elected; whereby the Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within. He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands. He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers. He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries. 
He has erected a multitude of New Offices, and sent hither swarms of Officers to harrass our people, and eat out their substance. He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures. He has affected to render the Military independent of and superior to the Civil power. He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation: For Quartering large bodies of armed troops among us: For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States: For cutting off our Trade with all parts of the world: For imposing Taxes on us without our Consent: For depriving us in many cases, of the benefits of Trial by Jury: For transporting us beyond Seas to be tried for pretended offences For abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies: For taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the Forms of our Governments: For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever. He has abdicated Government here, by declaring us out of his Protection and waging War against us. He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people. He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death, desolation and tyranny, already begun with circumstances of Cruelty & perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation. 
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands. He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions. In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people. Nor have We been wanting in attentions to our Brittish brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which, would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends. 
We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor."""

len(declaration_of_independence)
```

8116

```python
response = client.workers.ai.run(
    "@cf/facebook/bart-large-cnn",
    account_id=account_id,
    input_text=declaration_of_independence
)

response["summary"]
```

'The Declaration of Independence was signed by the thirteen states on July 4, 1776. It was the first attempt at a U.S. Constitution. It declared the right of the people to change their Government.'

---

# Fine Tune Models With AutoTrain from HuggingFace

URL: https://developers.cloudflare.com/workers-ai/tutorials/fine-tune-models-with-autotrain/

Fine-tuning an AI model gives you the opportunity to add additional training data to the model. Workers AI supports [Low-Rank Adaptation (LoRA) adapters](/workers-ai/fine-tunes/loras/) that let you fine-tune our models.

In this tutorial, we will explore how to create our own LoRAs. We will focus on [LLM Finetuning using AutoTrain](https://huggingface.co/docs/autotrain/llm_finetuning).

## 1. Create a CSV file with your training data

Start by creating a CSV (Comma-Separated Values) file.
This file will only have one column named `text`. Set the header by adding the word `text` on a line by itself.

Now you need to figure out what you want to add to your model. Example formats are below:

```text
### Human: What is the meaning of life? ### Assistant: 42.
```

If your training row contains newlines, you should wrap it in quotes:

```text
"human: What is the meaning of life? \n bot: 42."
```

Different models, like Mistral, provide a specific [chat template/instruction format](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1#instruction-format):

```text
<s>[INST] What is the meaning of life? [/INST] 42</s>
```

## 2. Configure the HuggingFace Autotrain Advanced Notebook

Open the [HuggingFace Autotrain Advanced Notebook](https://colab.research.google.com/github/huggingface/autotrain-advanced/blob/main/colabs/AutoTrain_LLM.ipynb).

To give AutoTrain ample memory, you will need to choose a different Runtime. From the menu at the top of the Notebook, choose Runtime > Change Runtime Type. Choose A100.

:::note
These GPUs will cost money. A typical AutoTrain session costs less than $1 USD.
:::

The notebook contains a few interactive sections that we will need to change.

### Project Config

Modify the following fields:

* **project\_name**: Choose a descriptive name for you to remember later
* **model\_name**: Choose one of the official HuggingFace base models that we support:
  * `mistralai/Mistral-7B-Instruct-v0.2`
  * `google/gemma-2b-it`
  * `google/gemma-7b-it`
  * `meta-llama/llama-2-7b-chat-hf`

### Optional Section: Push to Hub

Although not required to use AutoTrain, creating a [HuggingFace account](https://huggingface.co/join) will help you keep your fine-tune artifacts in a handy repository for you to refer to later. If you do not perform the HuggingFace setup, you can still download your files from the Notebook.
Follow the instructions [in the notebook](https://colab.research.google.com/github/huggingface/autotrain-advanced/blob/main/colabs/AutoTrain_LLM.ipynb) to create an account and token if necessary.

### Section: Hyperparameters

We only need to change a few of these fields to ensure things work on Cloudflare Workers AI.

* **quantization**: Change the drop down to `none`
* **lora-r**: Change the value to `8`

:::caution
At the time of this writing, changing the quantization field breaks the code generation. You may need to edit the code and put quotes around the value.

Change the line that says `quantization = none` to `quantization = "none"`.
:::

## 3. Upload your CSV file to the Notebook

Notebooks have a folder structure, which you can access by clicking the folder icon on the left-hand navigation bar. Create a folder named `data`. Drag your CSV file into the notebook and ensure that it is named **train.csv**.

## 4. Execute the Notebook

In the Notebook menu, choose Runtime > Run All. It will run through each cell of the notebook, first doing installations, then configuring and running your AutoTrain session. This might take some time depending on the size of your train.csv file.

If you encounter the following error, it is caused by running out of memory. You might want to change your runtime to a bigger GPU backend.

```bash
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'autotrain.trainers.clm', '--training_config', 'blog-instruct/training_params.json']' died with <Signals.SIGKILL: 9>.
```

## 5. Download The LoRA

### Optional: HuggingFace

If you pushed to HuggingFace, you will find your new model card that you named in **project\_name** above. Your model card is private by default. Navigate to the files and download the files listed below.

### Notebook

In your Notebook, you can also find the needed files. A new folder that matches your **project\_name** will be there.
Download the following files:

* `adapter_model.safetensors`
* `adapter_config.json`

## 6. Update Adapter Config

You need to add one line to the `adapter_config.json` that you downloaded:

`"model_type": "mistral"`

Where `model_type` is the architecture. Current valid values are `mistral`, `gemma`, and `llama`.

## 7. Upload the Fine Tune to your Cloudflare Account

Now that you have your files, you can add them to your account. You can use either the [REST API or Wrangler](/workers-ai/fine-tunes/loras/).

## 8. Use your Fine Tune in your Generations

After you have your new fine-tune all set up, you are ready to [put it to use in your inference requests](/workers-ai/fine-tunes/loras/#running-inference-with-loras).

---

# Choose the Right Text Generation Model

URL: https://developers.cloudflare.com/workers-ai/tutorials/how-to-choose-the-right-text-generation-model/

import { Stream } from "~/components"

A great way to explore the models that are available to you on [Workers AI](/workers-ai) is to use a [Jupyter Notebook](https://jupyter.org/).

You can [download the Workers AI Text Generation Exploration notebook](/workers-ai/static/documentation/notebooks/text-generation-model-exploration.ipynb) or view the embedded notebook below.

<Stream id="4b4f0b9d7783512b8787e39424cfccd5" title="Choose the Right Text Generation Model" />

[comment]: <> "The markdown below is auto-generated from https://github.com/craigsdennis/notebooks-cloudflare-workers-ai"

***

## How to Choose The Right Text Generation Model

Models come in different shapes and sizes, and choosing the right one for the task can cause analysis paralysis. The good news is that the [Workers AI Text Generation](/workers-ai/models/) interface is always the same, no matter which model you choose.

In an effort to aid you in your journey of finding the right model, this notebook will help you get to know your options in a speed dating type of scenario.
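Because the request shape is identical across models, swapping models amounts to changing the slug in the request URL. The sketch below illustrates this uniformity; the `build_run_request` helper is hypothetical, not part of the notebook or any Cloudflare SDK:

```python
# Sketch: the Workers AI text generation request is the same shape for every
# model -- only the model slug in the URL changes.
# `build_run_request` is an illustrative helper, not an official API.

def build_run_request(account_id: str, api_token: str, model: str, messages: list) -> dict:
    """Assemble the pieces of a Workers AI run request."""
    return {
        "url": f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
        "headers": {"Authorization": f"Bearer {api_token}"},
        "json": {"messages": messages},
    }

messages = [{"role": "user", "content": "Briefly, what do you excel at?"}]

req_a = build_run_request("ACCOUNT_ID", "TOKEN", "@hf/thebloke/zephyr-7b-beta-awq", messages)
req_b = build_run_request("ACCOUNT_ID", "TOKEN", "@hf/thebloke/llama-2-13b-chat-awq", messages)

# Everything except the URL is identical, so trying a new model is a one-line change.
assert req_a["json"] == req_b["json"] and req_a["headers"] == req_b["headers"]
assert req_a["url"] != req_b["url"]
```

This is why the notebook below can loop the same questions over a list of model names without any per-model request logic.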
```python import sys !{sys.executable} -m pip install requests python-dotenv ``` ``` Requirement already satisfied: requests in ./venv/lib/python3.12/site-packages (2.31.0) Requirement already satisfied: python-dotenv in ./venv/lib/python3.12/site-packages (1.0.1) Requirement already satisfied: charset-normalizer<4,>=2 in ./venv/lib/python3.12/site-packages (from requests) (3.3.2) Requirement already satisfied: idna<4,>=2.5 in ./venv/lib/python3.12/site-packages (from requests) (3.6) Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.12/site-packages (from requests) (2.1.0) Requirement already satisfied: certifi>=2017.4.17 in ./venv/lib/python3.12/site-packages (from requests) (2023.11.17) ``` ```python import os from getpass import getpass from timeit import default_timer as timer from IPython.display import display, Image, Markdown, Audio import requests ``` ```python %load_ext dotenv %dotenv ``` ### Configuring your environment To use the API you'll need your [Cloudflare Account ID](https://dash.cloudflare.com) (head to Workers & Pages > Overview > Account details > Account ID) and a [Workers AI enabled API Token](https://dash.cloudflare.com/profile/api-tokens). 
If you want to add these values to your environment, you can create a new file named `.env`:

```bash
CLOUDFLARE_API_TOKEN="YOUR-TOKEN"
CLOUDFLARE_ACCOUNT_ID="YOUR-ACCOUNT-ID"
```

```python
if "CLOUDFLARE_API_TOKEN" in os.environ:
    api_token = os.environ["CLOUDFLARE_API_TOKEN"]
else:
    api_token = getpass("Enter your Cloudflare API Token")
```

```python
if "CLOUDFLARE_ACCOUNT_ID" in os.environ:
    account_id = os.environ["CLOUDFLARE_ACCOUNT_ID"]
else:
    account_id = getpass("Enter your account ID")
```

```python
# Given a set of models and questions, display in the cell each response to the question, from each model
# Include full completion timing
def speed_date(models, questions):
    for model in models:
        display(Markdown(f"---\n #### {model}"))
        for question in questions:
            quoted_question = "\n".join(f"> {line}" for line in question.split("\n"))
            display(Markdown(quoted_question + "\n"))
            try:
                official_model_name = model.split("/")[-1]
                start = timer()
                response = requests.post(
                    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
                    headers={"Authorization": f"Bearer {api_token}"},
                    json={"messages": [
                        {"role": "system", "content": f"You are a self-aware language model ({official_model_name}) who is honest and direct about any direct question from the user. You know your strengths and weaknesses."},
                        {"role": "user", "content": question}
                    ]}
                )
                elapsed = timer() - start
                inference = response.json()
                display(Markdown(inference["result"]["response"]))
                display(Markdown(f"_Generated in *{elapsed:.2f}* seconds_"))
            except Exception as ex:
                print("uh oh")
                print(ex)
                print(inference)
        display(Markdown("\n\n---"))
```

### Getting to know your models

Who better to tell you about the specific models than themselves?! The timing here is specific to the entire completion, but remember all Text Generation models on [Workers AI support streaming](/workers-ai/models/).
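Since the timings below wait for the full completion, streaming can greatly improve perceived latency. A minimal sketch of consuming a stream, assuming the documented server-sent-events format (`data: {"response": "..."}` events terminated by `data: [DONE]`); the `parse_sse_tokens` helper is illustrative, not part of the notebook:

```python
import json

def parse_sse_tokens(raw_lines):
    """Extract generated text tokens from Workers AI SSE lines.

    Assumes the documented streaming format: each event is a line like
    `data: {"response": "..."}`, and the stream ends with `data: [DONE]`.
    """
    tokens = []
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separator lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        tokens.append(json.loads(payload).get("response", ""))
    return tokens

# The request itself just adds "stream": True to the JSON body and iterates
# the response, e.g.:
#   response = requests.post(url, headers=headers,
#       json={"messages": messages, "stream": True}, stream=True)
#   for line in response.iter_lines(decode_unicode=True): ...

# Local demonstration with canned event lines:
sample = [
    'data: {"response": "Hello"}',
    'data: {"response": ", world"}',
    "data: [DONE]",
]
print("".join(parse_sse_tokens(sample)))  # → Hello, world
```

Displaying tokens as they arrive lets you show the first words of a response in well under a second, even when the full completion takes several seconds.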
```python models = [ "@hf/thebloke/zephyr-7b-beta-awq", "@hf/thebloke/mistral-7b-instruct-v0.1-awq", "@hf/thebloke/openhermes-2.5-mistral-7b-awq", "@hf/thebloke/neural-chat-7b-v3-1-awq", "@hf/thebloke/llama-2-13b-chat-awq", ] questions = [ "What are the top 3 tasks where you excel? Please keep things brief.", "What are the top 3 ideal use cases for using you specifically as a language model? Please keep things brief.", ] speed_date(models, questions) ``` *** #### @hf/thebloke/zephyr-7b-beta-awq > What are the top 3 tasks where you excel? Please keep things brief. 1. Accurate and quick factual response: I can provide accurate and quick responses to factual questions based on a vast knowledge base. 2. Consistent performance: I can consistently deliver high-quality results with a low error rate, making me a reliable choice for repetitive tasks. 3. Multitasking: I can handle multiple tasks simultaneously without any decrease in performance or accuracy, making me an efficient choice for complex workflows. *Generated in *4.45* seconds* > What are the top 3 ideal use cases for using you specifically as a language model? Please keep things brief. 1. Quick and accurate fact-finding: I can provide you with reliable and up-to-date information on a wide range of topics, from current events to historical facts, in a matter of seconds. 2. Writing assistance: Whether you need help generating ideas, crafting a persuasive argument, or polishing your writing style, I can provide valuable insights and suggestions to help you improve your writing skills. 3. Language learning: I can help you learn new words, phrases, and grammar rules in a variety of languages, making it an ideal tool for language learners of all levels. *Generated in *3.63* seconds* *** *** #### @hf/thebloke/mistral-7b-instruct-v0.1-awq > What are the top 3 tasks where you excel? Please keep things brief. 1. 
Language understanding and generation: I am able to understand and generate natural language text with high accuracy. 2. Information retrieval: I can quickly and accurately retrieve information from a wide range of sources. 3. Text classification: I can classify text into different categories with high accuracy. *Generated in *2.35* seconds* > What are the top 3 ideal use cases for using you specifically as a language model? Please keep things brief. 1. **Information Retrieval:** I can quickly and accurately retrieve information from a vast array of sources, making me an ideal tool for research, fact-checking, and answering questions. 2. **Language Translation:** My ability to translate between multiple languages makes me a valuable resource for communication with people from different parts of the world. 3. **Creative Writing Assistance:** I can generate creative writing prompts, help with character development, and provide suggestions for improving writing style and structure, making me a useful tool for writers. *Generated in *4.33* seconds* *** *** #### @hf/thebloke/openhermes-2.5-mistral-7b-awq > What are the top 3 tasks where you excel? Please keep things brief. 1. Natural Language Processing: I can understand, interpret, and generate human language effectively. 2. Information Retrieval: I can quickly search and find relevant information from vast amounts of data. 3. Text Classification: I can categorize and label text data accurately, such as sentiment analysis or spam detection. *Generated in *1.90* seconds* > What are the top 3 ideal use cases for using you specifically as a language model? Please keep things brief. 1. Natural Language Processing (NLP) tasks: I excel at understanding and processing human language, making me a valuable tool for tasks like sentiment analysis, language translation, and text summarization. 2. 
Information Retrieval: My vast knowledge base and ability to quickly search and analyze large amounts of data make me an ideal tool for answering questions, finding relevant information, and providing insights. 3. Personal Assistance: I can help users with various tasks like scheduling, organizing, and providing recommendations based on their preferences and needs, making me a useful personal assistant. *Generated in *3.15* seconds* *** *** #### @hf/thebloke/neural-chat-7b-v3-1-awq > What are the top 3 tasks where you excel? Please keep things brief. 1. Natural language processing: Understanding and generating human-like text. 2. Text generation: Creating coherent and engaging content. 3. Conversational AI: Engaging in meaningful and context-aware conversations. Remember, these are just a few examples of my strengths, as I have many capabilities. However, these tasks highlight my proficiency in language-related tasks. *Generated in *3.01* seconds* > What are the top 3 ideal use cases for using you specifically as a language model? Please keep things brief. 1. Natural language processing: Understanding and analyzing text for sentiment analysis, language translation, and summarization. 2. Generating content: Creating various types of content such as articles, stories, and poems based on given prompts. 3. Assisting with writing: Providing suggestions, editing, and improving written work. Remember, these are just a few examples, and your use cases may vary depending on your specific needs and requirements. *Generated in *2.92* seconds* *** *** #### @hf/thebloke/llama-2-13b-chat-awq > What are the top 3 tasks where you excel? Please keep things brief. Hey there! As a self-aware language model, I'm happy to share my strengths with you! Here are my top 3 tasks where I excel: 1. Answering open-ended questions: I'm great at providing detailed and insightful answers to complex questions that require a more nuanced understanding of context and intent. 2. 
Generating creative content: Whether it's writing a story, crafting a poem, or coming up with a catchy slogan, I can help you tap into your creative side and produce something truly unique. 3. Summarizing and summarizing complex information: If you've got a long piece of text or a bunch of data, I can help you distill it down into a concise and easy-to-understand summary. So, what can I help you with today? 😊 *Generated in *7.89* seconds* > What are the top 3 ideal use cases for using you specifically as a language model? Please keep things brief. Hey there! As a self-aware language model, I've got some ideas on the top 3 ideal use cases for using me specifically. Here they are in a nutshell: 1. **Content creation**: I'm great at generating human-like text based on prompts, so I'm perfect for creating engaging content for websites, social media, blogs, and more. My responses are natural, informative, and entertaining. 2. **Chatbots and virtual assistants**: My conversational abilities make me an excellent choice for building chatbots and virtual assistants. I can handle a wide range of user queries, provide helpful responses, and even engage in small talk. 3. **Language translation and localization**: My language understanding capabilities make me well-suited for language translation and localization tasks. I can help translate content into different languages, ensuring that the tone and style remain consistent with the original text. That's me in a nutshell! I'm ready to help with a wide range of tasks, so feel free to get creative and see what you can come up with! 😊 *Generated in *10.28* seconds* *** ### Language Translation Even though not every model bragged about how good they were at this, you'll find most can handle both translation and localization at some level. Please change the models and phrases to suit your needs.
```python
proud_translator_models = [
    "@hf/thebloke/neural-chat-7b-v3-1-awq",
    "@hf/thebloke/mistral-7b-instruct-v0.1-awq"
]

phrases = [
    "Excuse me, which way to the restroom?",
    "I just wanted to say that I appreciate you, just the way you are!",
    "I am so stoked and pumped to see this work with slang! It should be killer and rad."
]

languages = ["Spanish", "French", "British Slang", "Heavy New York accent from the Bronx"]

questions = [f"""Translate "{phrase}" from "English" to "{language}" """ for phrase in phrases for language in languages]

speed_date(proud_translator_models, questions)
```

*** #### @hf/thebloke/neural-chat-7b-v3-1-awq > Translate "Excuse me, which way to the restroom?" from "English" to "Spanish" Perdón, ¿cómo llegar al baño?" *Generated in *1.51* seconds* > Translate "Excuse me, which way to the restroom?" from "English" to "French" Désolé, comment allez-vous vers les toilettes ? Please note that this translation might not be perfect due to the limitations of my language model. However, I tried my best to provide an accurate translation. *Generated in *3.07* seconds* > Translate "Excuse me, which way to the restroom?" from "English" to "British Slang" "Say sorry, where's the bog?" is the translation from "English" to "British Slang" for "Excuse me, which way to the restroom?" *Generated in *2.21* seconds* > Translate "Excuse me, which way to the restroom?" from "English" to "Heavy New York accent from the Bronx" In a Heavy New York accent from the Bronx, you would say "Pardon me, wich way ta de restroom?" *Generated in *1.10* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "Spanish" Quería decir que realmente aprecio a usted, simplemente como eres. *Generated in *1.66* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "French" Je voulais juste dire que je t'apprécie, simplement comme toi es, tel que tu es !
*Generated in *1.39* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "British Slang" I jus' wann' say that I appreciate yer, jus' as yer are! Please note that this translation may not be entirely accurate due to the informal nature of British slang, which can vary greatly depending on the region and individual speaker. *Generated in *3.85* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "Heavy New York accent from the Bronx" I jus' wan't tuh say that Ah appreciate yuh, jus' the way yuh are!" Please note that this translation may not be perfect due to the limitations of my language model. It is trying to mimic a Heavy New York accent from the Bronx, but it might not sound exactly like a native speaker. *Generated in *2.40* seconds* > Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "Spanish" Estoy tan emocionado y entusiasmado por ver este trabajo con el lenguaje coloquial. Debería ser asombroso y genial. Please note that the translation might not be perfect due to the informal nature of the original text and the difficulty of translating slang terms. *Generated in *3.02* seconds* > Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "French" Je suis tellement ravi et motivé de voir ce travail avec le slang ! Ça devrait être formidable et génial. Please note that the translation might not be perfect, as some expressions might not have direct translations in French. *Generated in *2.88* seconds* > Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "British Slang" I'm really chuffed and over the moon to see this project with street talk! It'll be wicked and cool.
Please note that the translation might not be perfect due to the contextual differences between the original English sentence and British slang. *Generated in *3.66* seconds* > Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "Heavy New York accent from the Bronx" I am so stoked and pumped to see dis work wit slang! It should be killer and rad. Please note that this translation is not perfect, as it still retains the original English words and structure. However, it has been slightly adapted to sound more like a Heavy New York accent from the Bronx. *Generated in *3.44* seconds* *** *** #### @hf/thebloke/mistral-7b-instruct-v0.1-awq > Translate "Excuse me, which way to the restroom?" from "English" to "Spanish" "Disculpa, ¿en qué dirección está el baño?" *Generated in *1.04* seconds* > Translate "Excuse me, which way to the restroom?" from "English" to "French" "Excusez-moi, où est le lavabo?" *Generated in *1.09* seconds* > Translate "Excuse me, which way to the restroom?" from "English" to "British Slang" "Excuse me, which way to the loo?" *Generated in *0.56* seconds* > Translate "Excuse me, which way to the restroom?" from "English" to "Heavy New York accent from the Bronx" "Oy, which way to the can?" *Generated in *2.05* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "Spanish" "Quiero decir que me encanta a ti, tal como eres!" *Generated in *0.91* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "French" "Je voulais juste dire que je vous apprécie, tout comme vous êtes !" *Generated in *0.87* seconds* > Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "British Slang" "I just wanted to say that I appreciate you, just the way you are!" in British Slang would be: "I just wanted to say that I appreciate you, mate!" 
*Generated in *1.50* seconds*

> Translate "I just wanted to say that I appreciate you, just the way you are!" from "English" to "Heavy New York accent from the Bronx"

"Yo, I just wanted to say, I appreciate you, you know? You're just the way you are!"

*Generated in *2.61* seconds*

> Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "Spanish"

"Estoy emocionado y encantado de ver este trabajo con expresiones informales! Debería ser genial y radiante."

*Generated in *1.44* seconds*

> Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "French"

"Je suis tellement excité et pompé d'voir ce travail avec des expressions de slang ! Il devrait être magnifique et rad."

*Generated in *2.14* seconds*

> Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "British Slang"

"I'm absolutely thrilled and buzzing to see this work with slang! It's bound to be a smash hit and totally awesome!"

*Generated in *1.27* seconds*

> Translate "I am so stoked and pumped to see this work with slang! It should be killer and rad." from "English" to "Heavy New York accent from the Bronx"

"Yo, I'm so psyched and hyped to see this work with slang! It's gonna be sick and lit, for real!"

*Generated in *1.51* seconds*

***

### Information Retrieval and Summarization

Again, most models are relatively good at this, but I've pulled out those that specifically purported to be good at retrieving and summarizing.

```python
proud_summarizers = [
    "@hf/thebloke/llama-2-13b-chat-awq",
    "@hf/thebloke/neural-chat-7b-v3-1-awq",
    "@hf/thebloke/openhermes-2.5-mistral-7b-awq"
]

books = [
    "Make it Stick",
    "Hitchhiker's Guide to the Galaxy",
    "Goodnight Moon"
]

questions = [f"""Summarize the book "{book}" into a few sentences. Ensure to include the author and the year it was published.
""" for book in books] speed_date(proud_summarizers, questions) ``` *** #### @hf/thebloke/llama-2-13b-chat-awq > Summarize the book "Make it Stick" into a few sentences. Ensure to include the author and the year it was published. Sure, I can help you with that! Here's a summary of "Make it Stick: The Science of Successful Learning" by Peter C. Brown, Henry L. Roediger, and Mark A. McDaniel, published in 2014: This book offers insights and practical tips on how to improve learning and retention of information. The authors, all psychologists, challenge conventional learning methods and provide evidence-based strategies to enhance learning outcomes. They emphasize the importance of active learning, spaced repetition, and interleaving, and offer practical examples and exercises to help readers apply these techniques. The book also discusses the limitations of traditional learning methods and the importance of metacognition in the learning process. Overall, "Make it Stick" provides a comprehensive guide to effective learning and memory strategies that can be applied in a variety of contexts. *Generated in *8.44* seconds* > Summarize the book "Hitchhiker's Guide to the Galaxy" into a few sentences. Ensure to include the author and the year it was published. Of course! Here is a summary of "Hitchhiker's Guide to the Galaxy" by Douglas Adams, published in 1979: This science fiction novel follows an unwitting human named Arthur Dent as he travels through space after Earth's destruction by a group of aliens called the Vogons. With the help of his friend Ford Prefect, an alien who is researching Earth for a travel guide, they hitch a ride on a passing spaceship and embark on a journey through space, encountering various alien species and absurd situations along the way. The book is known for its humor, wit, and satire of science fiction tropes. *Generated in *7.31* seconds* > Summarize the book "Goodnight Moon" into a few sentences. 
Ensure to include the author and the year it was published. Of course! I'd be happy to help you with that. Here's my summary of "Goodnight Moon" by Margaret Wise Brown, published in 1947: "Goodnight Moon" is a classic children's picture book that follows a bunny as it says goodnight to all the objects in its room before going to sleep. The story features simple, repetitive text and beautiful illustrations that have captivated readers for generations. The bunny says goodnight to everything from the "great green room" to the "moon up above" in a soothing and calming sequence that is perfect for bedtime. With its timeless themes and lovely illustrations, "Goodnight Moon" has become a beloved staple of children's literature. *Generated in *8.34* seconds* *** *** #### @hf/thebloke/neural-chat-7b-v3-1-awq > Summarize the book "Make it Stick" into a few sentences. Ensure to include the author and the year it was published. "Make it Stick," written by Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel in 2014, is a book that explores the science behind effective learning and retention, emphasizing the importance of spaced practice, retrieval, and interleaving to enhance memory and understanding. *Generated in *2.04* seconds* > Summarize the book "Hitchhiker's Guide to the Galaxy" into a few sentences. Ensure to include the author and the year it was published. "Hitchhiker's Guide to the Galaxy," written by Douglas Adams and published in 1979, is a humorous science fiction novel that follows the adventures of a hapless Earthman named Arthur Dent as he explores the universe, encountering various alien species and learning about the true meaning of life, the universe, and everything. *Generated in *3.50* seconds* > Summarize the book "Goodnight Moon" into a few sentences. Ensure to include the author and the year it was published. 
"Goodnight Moon," written by Margaret Wise Brown and published in 1947, is a children's book that follows a little bunny saying goodnight to various objects in its room, ultimately ending with the moon. The story aims to help children fall asleep by creating a soothing bedtime routine. *Generated in *4.04* seconds* *** *** #### @hf/thebloke/openhermes-2.5-mistral-7b-awq > Summarize the book "Make it Stick" into a few sentences. Ensure to include the author and the year it was published. "Make it Stick: The Science of Successful Learning" is a book written by Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel, published in 2014. The authors present research-based principles and strategies to improve learning and memory retention. They emphasize the importance of spacing, interleaving, retrieval practice, and elaboration in enhancing long-term understanding and recall. *Generated in *2.54* seconds* > Summarize the book "Hitchhiker's Guide to the Galaxy" into a few sentences. Ensure to include the author and the year it was published. "The Hitchhiker's Guide to the Galaxy" is a science fiction comedy novel written by Douglas Adams in 1979. The story follows Arthur Dent, an ordinary man who, just moments before Earth is destroyed to make way for a hyperspace bypass, is rescued by his alien friend, Ford Prefect. Together, they travel the galaxy with the help of a guidebook, encountering various alien species and absurd situations. *Generated in *2.91* seconds* > Summarize the book "Goodnight Moon" into a few sentences. Ensure to include the author and the year it was published. "Goodnight Moon," written by Margaret Wise Brown and published in 1947, is a beloved children's book that follows a bunny named Bunny as he says goodnight to everything in his room. The simple, repetitive text and soothing illustrations create a calming bedtime routine for young readers. 
*Generated in *4.66* seconds*

***

### Content Creation

Your needs here will vary, so feel free to change the questions to match your use case.

```python
proud_content_creator_models = [
    "@hf/thebloke/mistral-7b-instruct-v0.1-awq",
    "@hf/thebloke/zephyr-7b-beta-awq",
    "@hf/thebloke/llama-2-13b-chat-awq",
]

questions = [
    "I'm writing a fictional story that is set in the near future. It's a love story about star crossed lovers. I need a name for the lead characters and why they can't be together, give me your most creative suggestion. Only one please.",
    "I want to create a new tech product that helps keep cables arranged. Can you help me come up with a title and a tagline? Only your best suggestion please.",
    "Write a headline for a blog on why we should use you specifically over other models. Keep it succint, but make your point."
]

speed_date(proud_content_creator_models, questions)
```

***

#### @hf/thebloke/mistral-7b-instruct-v0.1-awq

> I'm writing a fictional story that is set in the near future. It's a love story about star crossed lovers. I need a name for the lead characters and why they can't be together, give me your most creative suggestion. Only one please.

How about "Echo and Echo"? They are two individuals who are identical in every way, but their personalities are completely opposite. Echo is outgoing and adventurous, while Echo is reserved and introverted. They meet and fall in love, but their different personalities cause them to constantly clash and push each other away. Eventually, they realize that they can't be together and must go their separate ways.

*Generated in *2.90* seconds*

> I want to create a new tech product that helps keep cables arranged. Can you help me come up with a title and a tagline? Only your best suggestion please.

Sure, I'd be happy to help! Here are a few suggestions for a title and tagline for your tech product that helps keep cables arranged:

Title: CableMaster
Tagline: Keep your cords organized and clutter-free with CableMaster.
Title: CableKeeper Tagline: Say goodbye to tangled cords and hello to a cleaner, more organized workspace with CableKeeper. Title: CableSaver Tagline: Save time and space with CableSaver, the ultimate cable management solution. Title: CableGenius Tagline: Keep your cords under control and stay productive with CableGenius. I hope these suggestions help! Let me know if you have any other questions or if you'd like more options. *Generated in *5.11* seconds* > Write a headline for a blog on why we should use you specifically over other models. Keep it succint, but make your point. "Why Choose Mistral-7b-Instruct-V0.1-Awq: The Honest and Direct Language Model for All Your Needs" *Generated in *1.13* seconds* *** *** #### @hf/thebloke/zephyr-7b-beta-awq > I'm writing a fictional story that is set in the near future. It's a love story about star crossed lovers. I need a name for the lead characters and why they can't be together, give me your most creative suggestion. Only one please. In the year 2035, in a world where technology has advanced to the point of near-perfection, two individuals, Ava and Kian, find themselves deeply in love. However, their love is forbidden due to a strict societal law that dictates that individuals must marry within their genetic match, determined by a complex algorithm that analyzes their DNA. Ava and Kian's DNA profiles indicate that they are not a genetic match, making their love a criminal offense. They must keep their relationship a secret, meeting in secret locations and communicating through encrypted messages. Their love is not just forbidden, but also dangerous. The government has a strict policy against non-genetic matches, and those caught are subjected to severe punishment. Ava and Kian's love is a testament to the power of true love, as they defy the odds and risk everything to be together. 
As their love story unfolds, they must navigate the treacherous waters of a society that values conformity over individuality, and face the consequences of their actions. Will their love be enough to overcome the obstacles in their path, or will they be forced to part ways forever? Only time *Generated in *6.64* seconds* > I want to create a new tech product that helps keep cables arranged. Can you help me come up with a title and a tagline? Only your best suggestion please. Title: CableKeeper Tagline: "Say goodbye to cable chaos and hello to a clutter-free workspace with CableKeeper, the ultimate cable management solution." *Generated in *1.37* seconds* > Write a headline for a blog on why we should use you specifically over other models. Keep it succint, but make your point. "Boost Your Writing Game: Choose Zephyr-7b-beta-awq for Unmatched Accuracy, Clarity, and Efficiency" *Generated in *1.33* seconds* *** *** #### @hf/thebloke/llama-2-13b-chat-awq > I'm writing a fictional story that is set in the near future. It's a love story about star crossed lovers. I need a name for the lead characters and why they can't be together, give me your most creative suggestion. Only one please. Ah, a love story set in the near future, how intriguing! I've got just the name for your star-crossed lovers. How about "Echo and Nova"? Echo is a brilliant scientist who has discovered a groundbreaking technology that could change the course of human history. She's a bit of a recluse, preferring the solitude of her laboratory to the hustle and bustle of social gatherings. Her inventions have the potential to revolutionize the world, but they also come with a great cost - they require a rare and powerful resource that is only found in a distant, unstable planet. Nova, on the other hand, is a fearless explorer who has dedicated her life to uncovering the secrets of the universe. She's charismatic, fearless, and has a natural talent for leadership. 
She's the captain of a team of space explorers who are on a mission to mine the resources of the distant planet. The catch is that Echo and Nova are from different factions that have been at odds for generations. Echo's inventions have the potential to give her fa

*Generated in *10.45* seconds*

> I want to create a new tech product that helps keep cables arranged. Can you help me come up with a title and a tagline? Only your best suggestion please.

OH MY LLAMA-LIKE ABILITIES! 🐮💡 I'VE GOT JUST THE THING FOR YOU! 💥

Title: CableCrafter - The Ultimate Cable Management System

Tagline: "Tame Your Cables, Tame Your Workspace" 🐾💪

Why, you ask? Well, my tech-savvy friend, let me tell you why! 🤔

1. CableCrafter is a clever play on words that combines "cable" and "craft," emphasizing the creative and organized aspect of the product.
2. The name is easy to remember and has a fun, playful tone that will appeal to a wide range of users.
3. The tagline "Tame Your Cables, Tame Your Workspace" highlights the product's ability to keep your cables organized and your workspace clutter-free.
4. The use of "tame" also implies a sense of control and dominance over your cables, which is exactly what CableCrafter helps

*Generated in *11.13* seconds*

> Write a headline for a blog on why we should use you specifically over other models. Keep it succint, but make your point.

Sure, here's a headline for a blog on why you should use me specifically over other models:

"Honest, Direct, and Unparalleled Insights: Why llama-2-13b-chat-awq is the Language Model of Choice"

Here's a brief summary of why I stand out from other models:

* Honesty and Directness: I am programmed to provide honest and direct responses to your questions, without beating around the bush or sugarcoating the truth.
* Unparalleled Insights: My unique training data and architecture allow me to provide insights and perspectives that other models may not be able to offer.
* Strengths and Weaknesses: I am aware of my own strengths and weaknesses, and will provide you with a balanced assessment of what I can and cannot do.

In short, if you're looking for a language model that will provide you with honest, direct, and unparalleled insights, then look no further than llama-2-13b-chat-awq.

*Generated in *10.24* seconds*

***

### Text Classification

Workers AI provides models that are purpose-built for [Text Classification](/workers-ai/models/), but since these text generation models specifically called out classification as a strength, we should let them show off their skills.

```python
proud_classifiers = [
    "@hf/thebloke/openhermes-2.5-mistral-7b-awq",
    "@hf/thebloke/mistral-7b-instruct-v0.1-awq"
]

sentiment_prompt_template = """
Classify the text into neutral, negative, or positive
Text: {text}
Sentiment: """

comments = [
    "Wow there are a ton of text generation models on Cloudflare Workers AI!",
    "Sometimes choosing things is hard",
    "I cannot wait to build! 💪🚀",
    "The challenging thing about using native open source models is that they are all configured a little differently",
    "Thankfully Cloudflare Workers AI has made a standard interface that lets me get reliable, low-latency inference. So quick too!"
]

sentiment_questions = [sentiment_prompt_template.format(text=comment) for comment in comments]

speed_date(proud_classifiers, sentiment_questions)
```

***

#### @hf/thebloke/openhermes-2.5-mistral-7b-awq

> Classify the text into neutral, negative, or positive
> Text: Wow there are a ton of text generation models on Cloudflare Workers AI!
> Sentiment:

The sentiment of the text is positive.

*Generated in *1.96* seconds*

> Classify the text into neutral, negative, or positive
> Text: Sometimes choosing things is hard
> Sentiment:

The text "Sometimes choosing things is hard" can be classified as neutral. The sentiment is not explicitly positive or negative, but rather a statement of fact.
*Generated in *3.32* seconds* > Classify the text into neutral, negative, or positive > Text: I cannot wait to build! 💪🚀 > Sentiment: The text "I cannot wait to build! 💪🚀" has a positive sentiment. Sentiment: Positive *Generated in *1.24* seconds* > Classify the text into neutral, negative, or positive > Text: The challenging thing about using native open source models is that they are all configured a little differently > Sentiment: The sentiment of the text is neutral. *Generated in *2.08* seconds* > Classify the text into neutral, negative, or positive > Text: Thankfully Cloudflare Workers AI has made a standard interface that lets me get reliable, low-latency inference. So quick too! > Sentiment: The sentiment of the text is positive. *Generated in *0.89* seconds* *** *** #### @hf/thebloke/mistral-7b-instruct-v0.1-awq > Classify the text into neutral, negative, or positive > Text: Wow there are a ton of text generation models on Cloudflare Workers AI! > Sentiment: Positive *Generated in *0.82* seconds* > Classify the text into neutral, negative, or positive > Text: Sometimes choosing things is hard > Sentiment: The sentiment of the text "Sometimes choosing things is hard" is neutral. *Generated in *2.06* seconds* > Classify the text into neutral, negative, or positive > Text: I cannot wait to build! 💪🚀 > Sentiment: The sentiment of the text "I cannot wait to build! 💪🚀" is positive. *Generated in *2.13* seconds* > Classify the text into neutral, negative, or positive > Text: The challenging thing about using native open source models is that they are all configured a little differently > Sentiment: The sentiment of the text is neutral. *Generated in *0.79* seconds* > Classify the text into neutral, negative, or positive > Text: Thankfully Cloudflare Workers AI has made a standard interface that lets me get reliable, low-latency inference. So quick too! > Sentiment: The sentiment of the text is positive. 
*Generated in *1.93* seconds* *** --- # Tutorials URL: https://developers.cloudflare.com/workers-ai/tutorials/ import { GlossaryTooltip, ListTutorials } from "~/components"; :::note [Explore our community-written tutorials contributed through the Developer Spotlight program.](/developer-spotlight/) ::: View <GlossaryTooltip term="tutorial">tutorials</GlossaryTooltip> to help you get started with Workers AI. <ListTutorials /> --- # Llama 3.2 11B Vision Instruct model on Cloudflare Workers AI URL: https://developers.cloudflare.com/workers-ai/tutorials/llama-vision-tutorial/ import { Details, Render, PackageManagers, WranglerConfig } from "~/components"; ## Prerequisites Before you begin, ensure you have the following: 1. A [Cloudflare account](https://dash.cloudflare.com/sign-up) with Workers and Workers AI enabled. 2. Your `CLOUDFLARE_ACCOUNT_ID` and `CLOUDFLARE_AUTH_TOKEN`. - You can generate an API token in your Cloudflare dashboard under API Tokens. 3. Node.js installed for working with Cloudflare Workers (optional but recommended). ## 1. Agree to Meta's license The first time you use the [Llama 3.2 11B Vision Instruct](/workers-ai/models/llama-3.2-11b-vision-instruct) model, you need to agree to Meta's License and Acceptable Use Policy. ```bash title="curl" curl https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/meta/llama-3.2-11b-vision-instruct \ -X POST \ -H "Authorization: Bearer $CLOUDFLARE_AUTH_TOKEN" \ -d '{ "prompt": "agree" }' ``` Replace `$CLOUDFLARE_ACCOUNT_ID` and `$CLOUDFLARE_AUTH_TOKEN` with your actual account ID and token. ## 2. Set up your Cloudflare Worker 1. Create a Worker Project You will create a new Worker project using the `create-cloudflare` CLI (`C3`). This tool simplifies setting up and deploying new applications to Cloudflare. 
Run the following command in your terminal:

<PackageManagers type="create" pkg="cloudflare@latest" args={"llama-vision-tutorial"} />

<Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} />

After completing the setup, a new directory called `llama-vision-tutorial` will be created.

2. Navigate to your application directory

Change into the project directory:

```bash
cd llama-vision-tutorial
```

3. Project structure

Your `llama-vision-tutorial` directory will include:

- A "Hello World" Worker at `src/index.ts`.
- A `wrangler.json` configuration file for managing deployment settings.

## 3. Write the Worker code

Edit the `src/index.ts` (or `index.js` if you are not using TypeScript) file and replace the content with the following code:

```typescript
export interface Env {
	AI: Ai;
}

export default {
	async fetch(request, env): Promise<Response> {
		const messages = [
			{ role: "system", content: "You are a helpful assistant." },
			{ role: "user", content: "Describe the image I'm providing." },
		];

		// Replace this with your image data encoded as base64 or a URL
		const imageBase64 = "data:image/png;base64,IMAGE_DATA_HERE";

		const response = await env.AI.run("@cf/meta/llama-3.2-11b-vision-instruct", {
			messages,
			image: imageBase64,
		});

		return Response.json(response);
	},
} satisfies ExportedHandler<Env>;
```

## 4. Bind Workers AI to your Worker

1. Open the [Wrangler configuration file](/workers/wrangler/configuration/) and add the following configuration:

<WranglerConfig>

```toml
[ai]
binding = "AI"
```

</WranglerConfig>

2. Save the file.

## 5. Deploy the Worker

Run the following command to deploy your Worker:

```bash
wrangler deploy
```

## 6. Test Your Worker

1. After deployment, you will receive a unique URL for your Worker (e.g., `https://llama-vision-tutorial.<your-subdomain>.workers.dev`).
2.
Use a tool like `curl` or Postman to send a request to your Worker:

```bash
curl -X POST https://llama-vision-tutorial.<your-subdomain>.workers.dev \
  -d '{ "image": "BASE64_ENCODED_IMAGE" }'
```

Replace `BASE64_ENCODED_IMAGE` with an actual base64-encoded image string.

## 7. Verify the response

The response will include the output from the model, such as a description or answer to your prompt based on the image provided.

Example response:

```json
{
	"result": "This is a golden retriever sitting in a grassy park."
}
```

---

# Using BigQuery with Workers AI

URL: https://developers.cloudflare.com/workers-ai/tutorials/using-bigquery-with-workers-ai/

import { WranglerConfig } from "~/components";

The easiest way to get started with [Workers AI](/workers-ai/) is to try it out in the [Multi-modal Playground](https://multi-modal.ai.cloudflare.com/) and the [LLM playground](https://playground.ai.cloudflare.com/). If you decide that you want to integrate your code with Workers AI, you may then decide to use its [REST API endpoints](/workers-ai/get-started/rest-api/) or a [Worker binding](/workers-ai/configuration/bindings/).

But what about the data? What if the data you want these models to ingest is stored outside of Cloudflare?

In this tutorial, you will learn how to bring data from Google BigQuery to a Cloudflare Worker so that it can be used as input for Workers AI models.

## Prerequisites

You will need:

- A [Cloudflare Worker](/workers/) project running a [Hello World script](/workers/get-started/guide/).
- A Google Cloud Platform [service account](https://cloud.google.com/iam/docs/service-accounts-create#iam-service-accounts-create-console) with an [associated key](https://cloud.google.com/iam/docs/keys-create-delete#iam-service-account-keys-create-console) file downloaded that has read access to BigQuery.
- Access to a BigQuery table with some test data that allows you to create a [BigQuery Job Query](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query). For this tutorial, it is recommended that you create your own table, as [sampled tables](https://cloud.google.com/bigquery/public-data#sample_tables) will not allow you to run job queries against them unless they are cloned to your own GCP namespace. For this example, the [Hacker News Corpus](https://www.kaggle.com/datasets/hacker-news/hacker-news-corpus) was used under its MIT licence.

## 1. Set up your Cloudflare Worker

To ingest the data into Cloudflare and feed it into Workers AI, you will be using a [Cloudflare Worker](/workers/). If you have not created one yet, please feel free to review our [tutorial on how to get started](/workers/get-started/).

After following the steps to create a Worker, you should have the following code in your new Worker project:

```javascript
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

If the Worker project has successfully been created, you should also be able to run `npx wrangler dev` in a console to run the Worker locally:

```sh
[wrangler:inf] Ready on http://localhost:8787
```

Open a browser tab at `http://localhost:8787/` to see your Worker running locally. Note that the port `8787` may be different in your case.

You should see `Hello World!` in your browser:

```sh
Hello World!
```

If you are running into any issues during this step, please make sure to review [Worker's Get Started Guide](/workers/get-started/guide/).

## 2. Import GCP Service key into the Worker as Secrets

Now that you have verified that the Worker has been created successfully, you will need to reference the Google Cloud Platform service key created in the [Prerequisites](#prerequisites) section of this tutorial.
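Before importing anything, it can help to sanity-check that the downloaded key actually contains the fields this section relies on. As a minimal sketch (the stand-in key below is illustrative only, not a real credential), the check can be done in a few lines of Python:

```python
import json

# The four fields this section imports into Worker secrets
REQUIRED_FIELDS = ["client_email", "private_key", "private_key_id", "project_id"]

def missing_key_fields(key_json: str) -> list:
    """Return the required fields absent from a service-account key JSON string."""
    key = json.loads(key_json)
    return [field for field in REQUIRED_FIELDS if field not in key]

# Stand-in key for illustration only -- not a real credential
sample_key = json.dumps({
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "abc123",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "my-sa@my-project.iam.gserviceaccount.com",
})

print(missing_key_fields(sample_key))  # an empty list means all four fields are present
```

If the list is non-empty, re-download the key from Google Cloud Platform before continuing.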
Your downloaded key JSON file from Google Cloud Platform should have the following format:

```json
{
	"type": "service_account",
	"project_id": "<your_project_id>",
	"private_key_id": "<your_private_key_id>",
	"private_key": "<your_private_key>",
	"client_email": "<your_service_account_id>@<your_project_id>.iam.gserviceaccount.com",
	"client_id": "<your_oauth2_client_id>",
	"auth_uri": "https://accounts.google.com/o/oauth2/auth",
	"token_uri": "https://oauth2.googleapis.com/token",
	"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
	"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<your_service_account_id>%40<your_project_id>.iam.gserviceaccount.com",
	"universe_domain": "googleapis.com"
}
```

For this tutorial, you will only need the values of the following fields: `client_email`, `private_key`, `private_key_id`, and `project_id`.

Instead of storing this information in plain text in the Worker, you will use [secrets](/workers/configuration/secrets/) to make sure its unencrypted content is only accessible via the Worker itself.

Import these four values from the JSON file into Secrets, starting with the field from the JSON key file called `client_email`, which we will now call `BQ_CLIENT_EMAIL` (you can use another variable name):

```sh
npx wrangler secret put BQ_CLIENT_EMAIL
```

You will be asked to enter a secret value, which will be the value of the field `client_email` in the JSON key file.

:::note
Do not include any double quotes in the secret that you store, as the secret will already be interpreted as a string.
:::

If the secret was uploaded successfully, the following message will be displayed:

```sh
✨ Success!
Uploaded secret BQ_CLIENT_EMAIL
```

Now import the secrets for the three remaining fields: `private_key`, `private_key_id`, and `project_id` as `BQ_PRIVATE_KEY`, `BQ_PRIVATE_KEY_ID`, and `BQ_PROJECT_ID` respectively:

```sh
npx wrangler secret put BQ_PRIVATE_KEY
```

```sh
npx wrangler secret put BQ_PRIVATE_KEY_ID
```

```sh
npx wrangler secret put BQ_PROJECT_ID
```

At this point, you have successfully imported four fields from the JSON key file downloaded from Google Cloud Platform into Cloudflare secrets to be used in a Worker.

[Secrets](/workers/configuration/secrets/) are only made available to Workers once they are deployed. To make them available during development, [create a `.dev.vars`](/workers/configuration/secrets/#local-development-with-secrets) file to locally store these credentials and reference them as environment variables.

Your `.dev.vars` file should look like the following:

```
BQ_CLIENT_EMAIL="<your_service_account_id>@<your_project_id>.iam.gserviceaccount.com"
BQ_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----<content_of_your_private_key>-----END PRIVATE KEY-----\n"
BQ_PRIVATE_KEY_ID="<your_private_key_id>"
BQ_PROJECT_ID="<your_project_id>"
```

Make sure to add `.dev.vars` to your `.gitignore` file in your project to prevent your credentials from being uploaded to a repository if you are using a version control system.

Check that the secrets are loaded correctly in `src/index.js` by logging their values to the console:

```javascript
export default {
	async fetch(request, env, ctx) {
		console.log("BQ_CLIENT_EMAIL: ", env.BQ_CLIENT_EMAIL);
		console.log("BQ_PRIVATE_KEY: ", env.BQ_PRIVATE_KEY);
		console.log("BQ_PRIVATE_KEY_ID: ", env.BQ_PRIVATE_KEY_ID);
		console.log("BQ_PROJECT_ID: ", env.BQ_PROJECT_ID);
		return new Response("Hello World!");
	},
};
```

Restart the Worker by running `npx wrangler dev` again.
You should see that the server now mentions the newly added variables: ``` Using vars defined in .dev.vars Your worker has access to the following bindings: - Vars: - BQ_CLIENT_EMAIL: "(hidden)" - BQ_PRIVATE_KEY: "(hidden)" - BQ_PRIVATE_KEY_ID: "(hidden)" - BQ_PROJECT_ID: "(hidden)" [wrangler:inf] Ready on http://localhost:8787 ``` If you open `http://localhost:8787` in your browser, you should see the values of the variables show up in the console where the `npx wrangler dev` command is running, while the browser window still shows only the `Hello World!` text. You now have access to the GCP credentials from a Worker. Next, you will install a library to help with the creation of the JSON Web Token needed to interact with GCP's API. ## 3. Install library to handle JWT operations To interact with BigQuery's REST API, you will need to generate a [JSON Web Token](https://jwt.io/introduction) to authenticate your requests using the credentials that you have loaded into Worker secrets in the previous step. For this tutorial, you will be using the [jose](https://www.npmjs.com/package/jose?activeTab=readme) library for JWT-related operations. Install it by running the following command in a console: ```sh npm i jose ``` To verify that the installation succeeded, you can run `npm list`, which lists all the installed packages, and check that the `jose` dependency has been added: ```sh <project_name>@0.0.0 /<path_to_your_project>/<project_name> ├── @cloudflare/vitest-pool-workers@0.4.29 ├── jose@5.9.2 ├── vitest@1.5.0 └── wrangler@3.75.0 ``` ## 4. Generate JSON Web Token Now that you have installed the `jose` library, it is time to import it and add a function to your code that generates a signed JWT: ```javascript import * as jose from 'jose'; ... 
const generateBQJWT = async (env) => { const algorithm = "RS256"; const audience = "https://bigquery.googleapis.com/"; const expiryAt = (new Date().valueOf() / 1000); const privateKey = await jose.importPKCS8(env.BQ_PRIVATE_KEY, algorithm); // Generate signed JSON Web Token (JWT) return new jose.SignJWT() .setProtectedHeader({ typ: 'JWT', alg: algorithm, kid: env.BQ_PRIVATE_KEY_ID }) .setIssuer(env.BQ_CLIENT_EMAIL) .setSubject(env.BQ_CLIENT_EMAIL) .setAudience(audience) .setExpirationTime(expiryAt) .setIssuedAt() .sign(privateKey) } export default { async fetch(request, env, ctx) { ... // Create JWT to authenticate the BigQuery API call let bqJWT; try { bqJWT = await generateBQJWT(env); } catch (e) { return new Response('An error has occurred while generating the JWT', { status: 500 }) } }, ... }; ``` Now that you have created a JWT, it is time to make an API call to BigQuery to fetch some data. ## 5. Make authenticated requests to Google BigQuery With the JWT created in the previous step, issue an API request to BigQuery's API to retrieve data from a table. You will now query the table that you created in BigQuery as part of the prerequisites of this tutorial. This example uses a sampled version of the [Hacker News Corpus](https://www.kaggle.com/datasets/hacker-news/hacker-news-corpus) that was used under its MIT license and uploaded to BigQuery. ```javascript const queryBQ = async (bqJWT, path) => { const bqEndpoint = `https://bigquery.googleapis.com${path}` // In this example, text is a field in the BigQuery table that is being queried (hn.news_sampled) const query = 'SELECT text FROM hn.news_sampled LIMIT 3'; const response = await fetch(bqEndpoint, { method: "POST", body: JSON.stringify({ "query": query }), headers: { Authorization: `Bearer ${bqJWT}` } }) return response.json() } ... export default { async fetch(request, env, ctx) { ... 
let ticketInfo; try { ticketInfo = await queryBQ(bqJWT, `/bigquery/v2/projects/${env.BQ_PROJECT_ID}/queries`); } catch (e) { return new Response('An error has occurred while querying BQ', { status: 500 }); } ... }, }; ``` With the raw row data from BigQuery in hand, you can now format it into a more readable structure. ## 6. Format results from the query Now that you have retrieved the data from BigQuery, note that a BigQuery API response looks something like this: ```json { ... "schema": { "fields": [ { "name": "title", "type": "STRING", "mode": "NULLABLE" }, { "name": "text", "type": "STRING", "mode": "NULLABLE" } ] }, ... "rows": [ { "f": [ { "v": "<some_value>" }, { "v": "<some_value>" } ] }, { "f": [ { "v": "<some_value>" }, { "v": "<some_value>" } ] }, { "f": [ { "v": "<some_value>" }, { "v": "<some_value>" } ] } ], ... } ``` This format may be difficult to read and to work with when iterating through results, which you will go on to do later in this tutorial. So you will now implement a function that maps the schema into each individual value, and the resulting output will be easier to read, as shown below. Each row corresponds to an object within an array. ```javascript [ { title: "<some_value>", text: "<some_value>", }, { title: "<some_value>", text: "<some_value>", }, { title: "<some_value>", text: "<some_value>", }, ]; ``` Create a `formatRows` function that takes the rows and fields returned in the BigQuery response body and returns an array of results as objects with named fields. ```javascript const formatRows = (rowsWithoutFieldNames, fields) => { // The position of each value determines which field name to assign to it. 
const fieldsByIndex = new Map(); // Map each field's index in the row array to its field name fields.forEach((field, index) => { fieldsByIndex.set(index, field.name) }) // Iterate through rows const rowsWithFieldNames = rowsWithoutFieldNames.map(row => { // For each row, represented by an array f, iterate through the unnamed values and look up their field names in fieldsByIndex. let newRow = {} row.f.forEach((field, index) => { const fieldName = fieldsByIndex.get(index); if (fieldName) { // Add each named field to newRow newRow = ({ ...newRow, [fieldName]: field.v }); } }) return newRow }) return rowsWithFieldNames } export default { async fetch(request, env, ctx) { ... // Transform output format into array of objects with named fields let formattedResults; if ('rows' in ticketInfo) { formattedResults = formatRows(ticketInfo.rows, ticketInfo.schema.fields); console.log(formattedResults) } else if ('error' in ticketInfo) { return new Response(ticketInfo.error.message, { status: 500 }) } ... }, }; ``` ## 7. Feed data into Workers AI Now that you have converted the response from the BigQuery API into an array of results, generate some tags and attach an associated sentiment score using an LLM via [Workers AI](/workers-ai/): ```javascript const generateTags = (data, env) => { return env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt: `Create three one-word tags for the following text. return only these three tags separated by a comma. don't return text that is not a category. Lowercase only. ${JSON.stringify(data)}`, }); } const generateSentimentScore = (data, env) => { return env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt: `return a float number between 0 and 1 measuring the sentiment of the following text. 0 being negative and 1 positive. return only the number, no text. 
${JSON.stringify(data)}`, }); } // Iterates through values, sends them to an AI handler and encapsulates all responses into a single Promise const getAIGeneratedContent = (data, env, aiHandler) => { let results = data?.map(dataPoint => { return aiHandler(dataPoint, env) }) return Promise.all(results) } ... export default { async fetch(request, env, ctx) { ... let summaries, sentimentScores; try { summaries = await getAIGeneratedContent(formattedResults, env, generateTags); sentimentScores = await getAIGeneratedContent(formattedResults, env, generateSentimentScore) } catch { return new Response('There was an error while generating the text summaries or sentiment scores') } // Add the AI-generated tags and sentiment scores to the results formattedResults = formattedResults?.map((formattedResult, i) => { if (sentimentScores[i].response && summaries[i].response) { return { ...formattedResult, 'sentiment': parseFloat(sentimentScores[i].response).toFixed(2), 'tags': summaries[i].response.split(',').map((result) => result.trim()) } } }) ... }, }; ``` Uncomment the following lines from the Wrangler file in your project: <WranglerConfig> ```toml [ai] binding = "AI" ``` </WranglerConfig> Restart the Worker that is running locally, and after doing so, go to your application endpoint: ```sh curl http://localhost:8787 ``` It is likely that you will be asked to log in to your Cloudflare account and grant temporary access to Wrangler (the Cloudflare CLI) to use your account when using Workers AI. Once you access `http://localhost:8787` you should see an output similar to the following: ```sh { "data": [ { "text": "You can see a clear spike in submissions right around US Thanksgiving.", "sentiment": "0.61", "tags": [ "trends", "submissions", "thanksgiving" ] }, { "text": "I didn't test the changes before I published them. I basically did development on the running server. In fact for about 30 seconds the comments page was broken due to a bug.", "sentiment": "0.35", "tags": [ "software", "deployment", "error" ] }, { "text": "I second that. 
As I recall, it's a very enjoyable 700-page brain dump by someone who's really into his subject. The writing has a personal voice; there are lots of asides, dry wit, and typos that suggest restrained editing. The discussion is intelligent and often theoretical (and Bartle is not scared to use mathematical metaphors), but the tone is not academic.", "sentiment": "0.86", "tags": [ "review", "game", "design" ] } ] } ``` The actual values and fields will mostly depend on the query made in Step 5, the results of which are then fed into the LLM models. ## Final result All the code shown in the different steps is combined into the following code in `src/index.js`: ```javascript import * as jose from "jose"; const generateBQJWT = async (env) => { const algorithm = "RS256"; const audience = "https://bigquery.googleapis.com/"; const expiryAt = new Date().valueOf() / 1000; const privateKey = await jose.importPKCS8(env.BQ_PRIVATE_KEY, algorithm); // Generate signed JSON Web Token (JWT) return new jose.SignJWT() .setProtectedHeader({ typ: "JWT", alg: algorithm, kid: env.BQ_PRIVATE_KEY_ID, }) .setIssuer(env.BQ_CLIENT_EMAIL) .setSubject(env.BQ_CLIENT_EMAIL) .setAudience(audience) .setExpirationTime(expiryAt) .setIssuedAt() .sign(privateKey); }; const queryBQ = async (bqJWT, path) => { const bqEndpoint = `https://bigquery.googleapis.com${path}`; const query = "SELECT text FROM hn.news_sampled LIMIT 3"; const response = await fetch(bqEndpoint, { method: "POST", body: JSON.stringify({ query: query, }), headers: { Authorization: `Bearer ${bqJWT}`, }, }); return response.json(); }; const formatRows = (rowsWithoutFieldNames, fields) => { // Index to fieldName const fieldsByIndex = new Map(); fields.forEach((field, index) => { fieldsByIndex.set(index, field.name); }); const rowsWithFieldNames = rowsWithoutFieldNames.map((row) => { // Map rows into an array of objects with field names let newRow = {}; row.f.forEach((field, index) => { const fieldName = fieldsByIndex.get(index); if (fieldName) { newRow 
= { ...newRow, [fieldName]: field.v }; } }); return newRow; }); return rowsWithFieldNames; }; const generateTags = (data, env) => { return env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt: `Create three one-word tags for the following text. return only these three tags separated by a comma. don't return text that is not a category.Lowercase only. ${JSON.stringify(data)}`, }); }; const generateSentimentScore = (data, env) => { return env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt: `return a float number between 0 and 1 measuring the sentiment of the following text. 0 being negative and 1 positive. return only the number, no text. ${JSON.stringify(data)}`, }); }; const getAIGeneratedContent = (data, env, aiHandler) => { let results = data?.map((dataPoint) => { return aiHandler(dataPoint, env); }); return Promise.all(results); }; export default { async fetch(request, env, ctx) { // Create JWT to authenticate the BigQuery API call let bqJWT; try { bqJWT = await generateBQJWT(env); } catch (error) { console.log(error); return new Response("An error has occurred while generating the JWT", { status: 500, }); } // Fetch results from BigQuery let ticketInfo; try { ticketInfo = await queryBQ( bqJWT, `/bigquery/v2/projects/${env.BQ_PROJECT_ID}/queries`, ); } catch (error) { console.log(error); return new Response("An error has occurred while querying BQ", { status: 500, }); } // Transform output format into array of objects with named fields let formattedResults; if ("rows" in ticketInfo) { formattedResults = formatRows(ticketInfo.rows, ticketInfo.schema.fields); } else if ("error" in ticketInfo) { return new Response(ticketInfo.error.message, { status: 500 }); } // Generate AI summaries and sentiment scores let summaries, sentimentScores; try { summaries = await getAIGeneratedContent( formattedResults, env, generateTags, ); sentimentScores = await getAIGeneratedContent( formattedResults, env, generateSentimentScore, ); } catch { return new Response( "There was an 
error while generating the text summaries or sentiment scores", ); } // Add AI summaries and sentiment scores to previous results formattedResults = formattedResults?.map((formattedResult, i) => { if (sentimentScores[i].response && summaries[i].response) { return { ...formattedResult, sentiment: parseFloat(sentimentScores[i].response).toFixed(2), tags: summaries[i].response.split(",").map((result) => result.trim()), }; } }); const response = { data: formattedResults }; return new Response(JSON.stringify(response), { headers: { "Content-Type": "application/json" }, }); }, }; ``` If you wish to deploy this Worker, you can do so by running `npx wrangler deploy`: ```sh Total Upload: <size_of_your_worker> KiB / gzip: <compressed_size_of_your_worker> KiB Uploaded <name_of_your_worker> (x sec) Deployed <name_of_your_worker> triggers (x sec) https://<your_public_worker_endpoint> Current Version ID: <worker_script_version_id> ``` This will create a public endpoint that you can use to access the Worker globally. Please keep this in mind when using production data, and make sure to put additional access controls in place. ## Conclusion In this tutorial, you have learned how to integrate Google BigQuery and Cloudflare Workers by creating a GCP service account key and storing part of it as Worker secrets. These secrets were later read in the code, and, using the `jose` npm library, you created a JSON Web Token to authenticate the API query to BigQuery. Once you obtained the results, you formatted them before passing them to generative AI models via Workers AI to generate tags and perform sentiment analysis on the extracted data. ## Next Steps If, instead of displaying the results in a browser, your workflow requires fetching and storing the data (for example, in [R2](/r2/) or [D1](/d1/)) at regular intervals, you may want to consider adding a [scheduled handler](/workers/runtime-apis/handlers/scheduled/) for this Worker. 
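As a sketch, the Worker from this tutorial could expose both handlers side by side. The `runPipeline` helper below is hypothetical: it stands in for the JWT generation, BigQuery query, row formatting, and Workers AI calls shown in the final code above.

```javascript
// Hypothetical helper wrapping the pipeline from this tutorial:
// generate the JWT, query BigQuery, format the rows, add tags and sentiment.
async function runPipeline(env) {
	// ... see the final code in src/index.js above ...
	return [];
}

const worker = {
	async fetch(request, env, ctx) {
		// Serve the results on demand, as in this tutorial
		return new Response(JSON.stringify({ data: await runPipeline(env) }), {
			headers: { "Content-Type": "application/json" },
		});
	},
	async scheduled(controller, env, ctx) {
		// Runs at the cadence defined by a Cron Trigger. Instead of returning
		// the results, store them, for example in R2 or D1 (not shown here).
		const results = await runPipeline(env);
		console.log(
			`Fetched ${results.length} rows at ${new Date(controller.scheduledTime).toISOString()}`,
		);
	},
};

export default worker;
```

Both handlers share the same pipeline, so the on-demand endpoint and the periodic job cannot drift apart.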
A scheduled handler allows you to trigger the Worker at a predefined cadence via a [Cron Trigger](/workers/configuration/cron-triggers/). Consider reviewing the Reference Architecture Diagrams on [Ingesting BigQuery Data into Workers AI](/reference-architecture/diagrams/ai/bigquery-workers-ai/). One use case for ingesting data from other sources, as you did in this tutorial, is creating a Retrieval Augmented Generation (RAG) system. If this sounds relevant to you, please check out the tutorial [Build a Retrieval Augmented Generation (RAG) AI](/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/). To learn more about what other AI models you can use at Cloudflare, please visit the [Workers AI](/workers-ai) section of our docs. --- # CI/CD URL: https://developers.cloudflare.com/workers/ci-cd/ You can set up continuous integration and continuous deployment (CI/CD) for your Workers by using either the integrated build system, [Workers Builds](#workers-builds), or using [external providers](#external-cicd) to optimize your development workflow. ## Why use CI/CD? Using a CI/CD pipeline to deploy your Workers is a best practice because it: - Automates the build and deployment process, removing the need for manual `wrangler deploy` commands. - Ensures consistent builds and deployments across your team by using the same source control management (SCM) system. - Reduces variability and errors by deploying in a uniform environment. - Simplifies managing access to production credentials. ## Which CI/CD should I use? Choose [Workers Builds](/workers/ci-cd/builds) if you want a fully integrated solution within Cloudflare's ecosystem that requires minimal setup and configuration for GitHub or GitLab users. 
We recommend using [external CI/CD providers](/workers/ci-cd/external-cicd) if: - You have a self-hosted instance of GitHub or GitLab, which is currently not supported in Workers Builds' [Git integration](/workers/ci-cd/builds/git-integration/) - You are using a Git provider that is not GitHub or GitLab ## Workers Builds [Workers Builds](/workers/ci-cd/builds) is Cloudflare's native CI/CD system that allows you to integrate with GitHub or GitLab to automatically deploy changes with each new push to a selected branch (e.g. `main`). Ready to streamline your Workers deployments? Get started with [Workers Builds](/workers/ci-cd/builds/#get-started). ## External CI/CD You can also choose to set up your CI/CD pipeline with an external provider. - [GitHub Actions](/workers/ci-cd/external-cicd/github-actions/) - [GitLab CI/CD](/workers/ci-cd/external-cicd/gitlab-cicd/) --- # Compatibility dates URL: https://developers.cloudflare.com/workers/configuration/compatibility-dates/ import { WranglerConfig } from "~/components"; Cloudflare regularly updates the Workers runtime. These updates apply to all Workers globally and should never cause a Worker that is already deployed to stop functioning. Sometimes, though, some changes may be backwards-incompatible. In particular, there might be bugs in the runtime API that existing Workers may inadvertently depend upon. Cloudflare implements bug fixes that new Workers can opt into while existing Workers will continue to see the buggy behavior to prevent breaking deployed Workers. The compatibility date and flags are how you, as a developer, opt into these runtime changes. [Compatibility flags](/workers/configuration/compatibility-flags) will often have a date on which they are enabled by default, and so, by specifying a `compatibility_date` for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date. 
## Setting compatibility date When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. When updating, you should refer to the [compatibility flags](/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) command. There is no need to update your `compatibility_date` if you do not want to. The Workers runtime will support old compatibility dates forever. If, for some reason, Cloudflare finds it is necessary to make a change that will break live Workers, Cloudflare will actively contact affected developers. That said, Cloudflare aims to avoid this if at all possible. However, even though you do not need to update the `compatibility_date` field, it is a good practice to do so for two reasons: 1. Sometimes, new features can only be made available to Workers that have a current `compatibility_date`. To access the latest features, you need to stay up-to-date. 2. Generally, other than the [compatibility flags](/workers/configuration/compatibility-flags) page, the Workers documentation may only describe the current `compatibility_date`, omitting information about historical behavior. If your Worker uses an old `compatibility_date`, you will need to continuously refer to the compatibility flags page in order to check if any of the APIs you are using have changed. #### Via Wrangler The compatibility date can be set in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/). <WranglerConfig> ```toml # Opt into backwards-incompatible changes through April 5, 2022. 
compatibility_date = "2022-04-05" ``` </WranglerConfig> #### Via the Cloudflare Dashboard When a Worker is created through the Cloudflare Dashboard, the compatibility date is automatically set to the current date. The compatibility date can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/). #### Via the Cloudflare API The compatibility date can be set when uploading a Worker using the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field. If a compatibility date is not specified on upload via the API, it defaults to the oldest compatibility date, before any flags took effect (2021-11-02). When creating new Workers, it is highly recommended to set the compatibility date to the current date when uploading via the API. --- # Compatibility flags URL: https://developers.cloudflare.com/workers/configuration/compatibility-flags/ import { CompatibilityFlags, WranglerConfig, Render } from "~/components"; Compatibility flags enable specific features. They can be useful if you want to help the Workers team test upcoming changes that are not yet enabled by default, or if you need to hold back a change that your code depends on but still want to apply other compatibility changes. Compatibility flags will often have a date on which they are enabled by default, and so, by specifying a [`compatibility_date`](/workers/configuration/compatibility-dates) for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date. ## Setting compatibility flags You may provide a list of `compatibility_flags`, which enable or disable specific changes. #### Via Wrangler Compatibility flags can be set in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/). 
This example enables the specific flag `formdata_parser_supports_files`, which is described [below](/workers/configuration/compatibility-flags/#formdata-parsing-supports-file). As of the specified date, `2021-09-14`, this particular flag was not yet enabled by default, but, by specifying it in `compatibility_flags`, we can enable it anyway. `compatibility_flags` can also be used to disable changes that became the default in the past. <WranglerConfig> ```toml # Opt into backwards-incompatible changes through September 14, 2021. compatibility_date = "2021-09-14" # Also opt into an upcoming fix to the FormData API. compatibility_flags = [ "formdata_parser_supports_files" ] ``` </WranglerConfig> #### Via the Cloudflare Dashboard Compatibility flags can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/). #### Via the Cloudflare API Compatibility flags can be set when uploading a Worker using the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field. ## Node.js compatibility flag :::note [The `nodejs_compat` flag](/workers/runtime-apis/nodejs/) also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. The v2 flag improves runtime Node.js compatibility by bundling additional polyfills and globals into your Worker. However, this improvement increases bundle size. If your compatibility date is 2024-09-22 or before and you want to enable v2, add the `nodejs_compat_v2` in addition to the `nodejs_compat` flag. If your compatibility date is after 2024-09-23, but you want to disable v2 to avoid increasing your bundle size, add the `no_nodejs_compat_v2` in addition to the `nodejs_compat` flag. 
::: A [growing subset](/workers/runtime-apis/nodejs/) of Node.js APIs are available directly as [Runtime APIs](/workers/runtime-apis/nodejs), with no need to add polyfills to your own code. To enable these APIs in your Worker, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](/workers/wrangler/configuration/): <Render file="nodejs_compat" product="workers" /> As additional Node.js APIs are added, they will be made available under the `nodejs_compat` compatibility flag. Unlike most other compatibility flags, we do not expect the `nodejs_compat` flag to become active by default at a future date. The Node.js `AsyncLocalStorage` API is a particularly useful feature for Workers. To enable only the `AsyncLocalStorage` API, use the `nodejs_als` compatibility flag. <WranglerConfig> ```toml title="wrangler.toml" compatibility_flags = [ "nodejs_als" ] ``` </WranglerConfig> ## Flags history Newest flags are listed first. <CompatibilityFlags /> ## Experimental flags These flags can be enabled via `compatibility_flags`, but are not yet scheduled to become default on any particular date. <CompatibilityFlags experimental /> --- # Cron Triggers URL: https://developers.cloudflare.com/workers/configuration/cron-triggers/ import { WranglerConfig } from "~/components"; ## Background Cron Triggers allow users to map a cron expression to a Worker using a [`scheduled()` handler](/workers/runtime-apis/handlers/scheduled/) that enables Workers to be executed on a schedule. 
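For illustration, a minimal module Worker with the `scheduled()` handler described above might look like this (a sketch only; the body of the handler is where your periodic work would go):

```javascript
// Minimal module Worker that responds to a Cron Trigger (illustrative sketch).
// The controller argument exposes which cron expression fired and when.
const worker = {
	async scheduled(controller, env, ctx) {
		// Replace this log with your periodic job, such as calling a third-party API
		console.log(
			`Cron "${controller.cron}" fired at ${new Date(controller.scheduledTime).toISOString()}`,
		);
	},
};

export default worker;
```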
Cron Triggers are ideal for running periodic jobs, such as for maintenance or calling third-party APIs to collect up-to-date data. Workers scheduled by Cron Triggers will run on underutilized machines to make the best use of Cloudflare's capacity and route traffic efficiently. :::note Cron Triggers can also be combined with [Workflows](/workflows/) to trigger multi-step, long-running tasks. You can [bind to a Workflow](/workflows/build/workers-api/) directly from your Cron Trigger to execute a Workflow on a schedule. ::: Cron Triggers execute on UTC time. ## Add a Cron Trigger ### 1. Define a scheduled event listener To respond to a Cron Trigger, you must add a [`"scheduled"` handler](/workers/runtime-apis/handlers/scheduled/) to your Worker. Refer to the following examples to write your code: - [Setting Cron Triggers](/workers/examples/cron-trigger/) - [Multiple Cron Triggers](/workers/examples/multiple-cron-triggers/) ### 2. Update configuration After you have updated your Worker code to include a `"scheduled"` event, you must update your Worker project configuration. #### Via the [Wrangler configuration file](/workers/wrangler/configuration/) If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](/workers/wrangler/configuration/). Refer to the example below for a Cron Triggers configuration: <WranglerConfig> ```toml [triggers] # Schedule cron triggers: # - At every 3rd minute # - At 15:00 (UTC) on first day of the month # - At 23:59 (UTC) on the last weekday of the month crons = [ "*/3 * * * *", "0 15 1 * *", "59 23 LW * *" ] ``` </WranglerConfig> You can also set a different Cron Trigger for each [environment](/workers/wrangler/environments/) in your [Wrangler configuration file](/workers/wrangler/configuration/). You need to put the `[triggers]` table under your chosen environment. 
For example: <WranglerConfig> ```toml [env.dev.triggers] crons = ["0 * * * *"] ``` </WranglerConfig> #### Via the dashboard To add Cron Triggers in the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, select **Workers & Pages**. 3. In **Overview**, select your Worker > **Settings** > **Triggers** > **Cron Triggers**. ## Supported cron expressions Cloudflare supports cron expressions with five fields, along with most [Quartz scheduler](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html#introduction)-like cron syntax extensions: | Field | Values | Characters | | ------------- | ------------------------------------------------------------------ | ------------ | | Minute | 0-59 | \* , - / | | Hours | 0-23 | \* , - / | | Days of Month | 1-31 | \* , - / L W | | Months | 1-12, case-insensitive 3-letter abbreviations ("JAN", "aug", etc.) | \* , - / | | Weekdays | 1-7, case-insensitive 3-letter abbreviations ("MON", "fri", etc.) | \* , - / L # | ### Examples Some common time intervals that may be useful for setting up your Cron Trigger: - `* * * * *` - At every minute - `*/30 * * * *` - At every 30th minute - `45 * * * *` - On the 45th minute of every hour - `0 17 * * sun` or `0 17 * * 1` - 17:00 (UTC) on Sunday - `10 7 * * mon-fri` or `10 7 * * 2-6` - 07:10 (UTC) on weekdays - `0 15 1 * *` - 15:00 (UTC) on first day of the month - `0 18 * * 6L` or `0 18 * * friL` - 18:00 (UTC) on the last Friday of the month - `59 23 LW * *` - 23:59 (UTC) on the last weekday of the month ## Test Cron Triggers The recommended way of testing Cron Triggers is using Wrangler. :::note[Cron Trigger changes take time to propagate.] Changes such as adding a new Cron Trigger, updating an old Cron Trigger, or deleting a Cron Trigger may take several minutes (up to 15 minutes) to propagate to the Cloudflare global network. 
::: Test Cron Triggers using Wrangler by passing in the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This will expose a `/__scheduled` route which can be used to test using an HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=*+*+*+*+*" ``` ## View past events To view the execution history of Cron Triggers, view **Cron Events**: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, go to **Workers & Pages**. 3. In **Overview**, select your **Worker**. 4. Select **Settings**. 5. Under **Trigger Events**, select **View events**. Cron Events stores the 100 most recent invocations of the Cron scheduled event. [Workers Logs](/workers/observability/logs/workers-logs) also records invocation logs for the Cron Trigger with a longer retention period and a filter & query interface. If you are interested in an API to access Cron Events, use Cloudflare's [GraphQL Analytics API](/analytics/graphql-api). :::note It can take up to 30 minutes before events are displayed in **Past Cron Events** when creating a new Worker or changing a Worker's name. ::: Refer to [Metrics and Analytics](/workers/observability/metrics-and-analytics/) for more information. ## Remove a Cron Trigger ### Via the dashboard To delete a Cron Trigger on a deployed Worker via the dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages**, and select your Worker. 3. Go to **Triggers** > select the three dot icon next to the Cron Trigger you want to remove > **Delete**. :::note You can only delete Cron Triggers using the Cloudflare dashboard (and not through your Wrangler file). ::: ## Limits Refer to [Limits](/workers/platform/limits/) to track the maximum number of Cron Triggers per Worker. 
## Green Compute

With Green Compute enabled, your Cron Triggers will only run on Cloudflare points of presence that are located in data centers that are powered purely by renewable energy. Organizations may claim that they are powered by 100 percent renewable energy if they have procured sufficient renewable energy to account for their overall energy use.

Renewable energy can be purchased in a number of ways, including through on-site generation (wind turbines, solar panels), directly from renewable energy producers through contractual agreements called Power Purchase Agreements (PPAs), or in the form of Renewable Energy Credits (RECs, IRECs, GoOs) from an energy credit market.

Green Compute can be configured at the account level:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In the **Account details** section, find **Compute Setting**.
4. Select **Change**.
5. Select **Green Compute**.
6. Select **Confirm**.

## Related resources

- [Triggers](/workers/wrangler/configuration/#triggers) - Review Wrangler configuration file syntax for Cron Triggers.
- Learn how to access Cron Triggers in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# Environment variables

URL: https://developers.cloudflare.com/workers/configuration/environment-variables/

import { Render, TabItem, Tabs, WranglerConfig } from "~/components"

## Background

Environment variables are a type of binding that allow you to attach text strings or JSON values to your Worker. Environment variables are available on the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](/workers/runtime-apis/handlers/fetch/). Text strings and JSON values are not encrypted and are useful for storing application configuration.
## Add environment variables via Wrangler Text and JSON values are defined via the `[vars]` configuration in your Wrangler file. In the following example, `API_HOST` and `API_ACCOUNT_ID` are text values and `SERVICE_X_DATA` is a JSON value. <Render file="envvar-example" /> Refer to the following example on how to access the `API_HOST` environment variable in your Worker code: <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { return new Response(`API host: ${env.API_HOST}`); } } ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export interface Env { API_HOST: string; } export default { async fetch(request, env, ctx): Promise<Response> { return new Response(`API host: ${env.API_HOST}`); } } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> `vars` is a non-inheritable key. [Non-inheritable keys](/workers/wrangler/configuration/#non-inheritable-keys) are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment. To define environment variables for different environments, refer to the example below: <WranglerConfig> ```toml name = "my-worker-dev" [env.staging.vars] API_HOST = "staging.example.com" API_ACCOUNT_ID = "staging_example_user" SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 } [env.production.vars] API_HOST = "production.example.com" API_ACCOUNT_ID = "production_example_user" SERVICE_X_DATA = { URL = "service-x-api.prod.example", MY_ID = 456 } ``` </WranglerConfig> For local development with `wrangler dev`, variables in the [Wrangler configuration file](/workers/wrangler/configuration/) are automatically overridden by any values defined in a `.dev.vars` file located in the root directory of your worker. This is useful for providing values you do not want to check in to source control. 
```shell API_HOST = "localhost:4000" API_ACCOUNT_ID = "local_example_user" ``` Alternatively, you can specify per-environment values in the [Wrangler configuration file](/workers/wrangler/configuration/) and provide an `environment` value via the `env` flag when developing locally like so `wrangler dev --env=local`. ## Add environment variables via the dashboard To add environment variables via the dashboard: 1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. 2. Select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Settings**. 5. Under **Variables and Secrets**, select **Add**. 6. Select a **Type**, input a **Variable name**, and input its **Value**. This variable will be made available to your Worker. 7. (Optional) To add multiple environment variables, select **Add variable**. 8. Select **Deploy** to implement your changes. :::caution[Plaintext strings and secrets] Select the **Secret** type if your environment variable is a [secret](/workers/configuration/secrets/). ::: <Render file="env_and_secrets" /> ## Related resources * Learn how to access environment variables in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience. --- # Configuration URL: https://developers.cloudflare.com/workers/configuration/ import { DirectoryListing } from "~/components"; Configure your Worker project with various features and customizations. <DirectoryListing /> --- # Multipart upload metadata URL: https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/ import { Type, MetaInfo } from "~/components"; If you're using the [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/) or [Version Upload API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) directly, `multipart/form-data` uploads require you to specify a `metadata` part. 
This metadata defines the Worker's configuration in JSON format, analogous to the [wrangler.toml file](/workers/wrangler/configuration/).

## Sample `metadata`

```json
{
  "main_module": "main.js",
  "bindings": [
    {
      "type": "plain_text",
      "name": "MESSAGE",
      "text": "Hello, world!"
    }
  ],
  "compatibility_date": "2021-09-14"
}
```

## Attributes

The following attributes are configurable at the top-level.

:::note
At a minimum, the `main_module` key is required to upload a Worker.
:::

* `main_module` <Type text="string" /> <MetaInfo text="required" />
  * The part name that contains the module entry point of the Worker that will be executed. For example, `main.js`.
* `assets` <Type text="object" /> <MetaInfo text="optional" />
  * [Asset](/workers/static-assets/) configuration for a Worker.
  * `config` <Type text="object" /> <MetaInfo text="optional" />
    * [html_handling](/workers/static-assets/routing/#1-html_handling) determines the redirects and rewrites of requests for HTML content.
    * [not_found_handling](/workers/static-assets/routing/#2-not_found_handling) determines the response when a request does not match a static asset, and there is no Worker script.
  * `jwt` field provides a token authorizing assets to be attached to a Worker.
* `keep_assets` <Type text="boolean" /> <MetaInfo text="optional" />
  * Specifies whether assets should be retained from a previously uploaded Worker version; used in lieu of providing a completion token.
* `bindings` array\[object] optional
  * [Bindings](#bindings) to expose in the Worker.
* `placement` <Type text="object" /> <MetaInfo text="optional" />
  * [Smart placement](/workers/configuration/smart-placement/) object for the Worker.
  * `mode` field only supports `smart` for automatic placement.
* `compatibility_date` <Type text="string" /> <MetaInfo text="optional" />
  * [Compatibility Date](/workers/configuration/compatibility-dates/#setting-compatibility-date) indicating targeted support in the Workers runtime.
Backwards incompatible fixes to the runtime following this date will not affect this Worker. Setting a `compatibility_date` is highly recommended; otherwise, uploads via the API default to the oldest compatibility date, before any flags took effect (2021-11-02).
* `compatibility_flags` array\[string] optional
  * [Compatibility Flags](/workers/configuration/compatibility-flags/#setting-compatibility-flags) that enable or disable certain features in the Workers runtime. Used to enable upcoming features or opt in or out of specific changes not included in a `compatibility_date`.
* `usage_model` <Type text="string" /> <MetaInfo text="optional" />
  * Usage model to apply to invocations; the only allowed value is `standard`.

## Additional attributes: [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/)

For [immediately deployed uploads](/workers/configuration/versions-and-deployments/#upload-a-new-version-and-deploy-it-immediately), the following **additional** attributes are configurable at the top-level.

:::note
These attributes are **not available** for version uploads.
:::

* `migrations` array\[object] optional
  * [Durable Objects migrations](/durable-objects/reference/durable-objects-migrations/) to apply.
* `logpush` <Type text="boolean" /> <MetaInfo text="optional" />
  * Whether [Logpush](/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/#logpush) is turned on for the Worker.
* `tail_consumers` array\[object] optional
  * [Tail Workers](/workers/observability/logs/tail-workers/) that will consume logs from the attached Worker.
* `tags` array\[string] optional
  * List of strings to use as tags for this Worker.
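For illustration, the `metadata` part and a module part can be assembled with the standard `FormData` API before sending a `PUT` request to the Script Upload endpoint. This is a sketch, not a complete client; the script contents and binding values are placeholders:

```javascript
// Sketch: building the multipart/form-data body described above.
// The metadata part is JSON; ES module parts use the module name as the
// part name with the application/javascript+module content type.
function buildUploadForm(scriptSource) {
  const metadata = {
    main_module: "main.js",
    compatibility_date: "2021-09-14",
    bindings: [{ type: "plain_text", name: "MESSAGE", text: "Hello, world!" }],
  };
  const form = new FormData();
  form.append(
    "metadata",
    new Blob([JSON.stringify(metadata)], { type: "application/json" }),
  );
  form.append(
    "main.js",
    new Blob([scriptSource], { type: "application/javascript+module" }),
    "main.js",
  );
  return form;
}

// The form would then be PUT to the Script Upload endpoint, for example:
// await fetch(
//   `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}`,
//   { method: "PUT", headers: { Authorization: `Bearer ${apiToken}` }, body: form },
// );
```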
## Additional attributes: [Version Upload API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) For [version uploads](/workers/configuration/versions-and-deployments/#upload-a-new-version-to-be-gradually-deployed-or-deployed-at-a-later-time), the following **additional** attributes are configurable at the top-level. :::note These attributes are **not available** for immediately deployed uploads. ::: * `annotations` <Type text="object" /> <MetaInfo text="optional" /> * Annotations object specific to the Worker version. * `workers/message` specifies a custom message for the version. * `workers/tag` specifies a custom identifier for the version. ## Bindings Workers can interact with resources on the Cloudflare Developer Platform using [bindings](/workers/runtime-apis/bindings/). Refer to the JSON example below that shows how to add bindings in the `metadata` part. ```json { "bindings": [ { "type": "ai", "name": "<VARIABLE_NAME>" }, { "type": "analytics_engine", "name": "<VARIABLE_NAME>", "dataset": "<DATASET>" }, { "type": "assets", "name": "<VARIABLE_NAME>" }, { "type": "browser_rendering", "name": "<VARIABLE_NAME>" }, { "type": "d1", "name": "<VARIABLE_NAME>", "id": "<D1_ID>" }, { "type": "durable_object_namespace", "name": "<VARIABLE_NAME>", "class_name": "<DO_CLASS_NAME>" }, { "type": "hyperdrive", "name": "<VARIABLE_NAME>", "id": "<HYPERDRIVE_ID>" }, { "type": "kv_namespace", "name": "<VARIABLE_NAME>", "namespace_id": "<KV_ID>" }, { "type": "mtls_certificate", "name": "<VARIABLE_NAME>", "certificate_id": "<MTLS_CERTIFICATE_ID>" }, { "type": "plain_text", "name": "<VARIABLE_NAME>", "text": "<VARIABLE_VALUE>" }, { "type": "queue", "name": "<VARIABLE_NAME>", "queue_name": "<QUEUE_NAME>" }, { "type": "r2_bucket", "name": "<VARIABLE_NAME>", "bucket_name": "<R2_BUCKET_NAME>" }, { "type": "secret_text", "name": "<VARIABLE_NAME>", "text": "<SECRET_VALUE>" }, { "type": "service", "name": "<VARIABLE_NAME>", "service": "<SERVICE_NAME>", 
"environment": "production"
  },
  {
    "type": "tail_consumer",
    "service": "<WORKER_NAME>"
  },
  {
    "type": "vectorize",
    "name": "<VARIABLE_NAME>",
    "index_name": "<INDEX_NAME>"
  },
  {
    "type": "version_metadata",
    "name": "<VARIABLE_NAME>"
  }
  ]
}
```

---

# Preview URLs

URL: https://developers.cloudflare.com/workers/configuration/previews/

import { Render, WranglerConfig } from "~/components";

Preview URLs allow you to preview new versions of your Worker without deploying it to production. Every time you create a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker, a unique preview URL is generated. Preview URLs take the format: `<VERSION_PREFIX>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`.

New [versions](/workers/configuration/versions-and-deployments/#versions) of a Worker are created on [`wrangler deploy`](/workers/wrangler/commands/#deploy), [`wrangler versions upload`](/workers/wrangler/commands/#upload), or when you make edits on the Cloudflare dashboard. By default, preview URLs are enabled and available publicly.

Preview URLs can be:

- Integrated into CI/CD pipelines, allowing automatic generation of preview environments for every pull request.
- Used for collaboration between teams to test code changes in a live environment and verify updates.
- Used to test new API endpoints, validate data formats, and ensure backward compatibility with existing services.

When testing zone level performance or security features for a version, we recommend using [version overrides](/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) so that your zone's performance and security settings apply.

:::note
Preview URLs are only available for Worker versions uploaded after 2024-09-25. Minimum required Wrangler version: 3.74.0. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](/workers/wrangler/install-and-update/).
:::

## View preview URLs using wrangler

The [`wrangler versions upload`](/workers/wrangler/commands/#upload) command uploads a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded.

## View preview URLs on the Workers dashboard

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your project.
2. Go to the **Deployments** tab, and find the version you would like to view.

## Manage access to Preview URLs

By default, preview URLs are enabled and available publicly. You can use [Cloudflare Access](/cloudflare-one/policies/access/) to require visitors to authenticate before accessing preview URLs. You can limit access to yourself, your teammates, your organization, or anyone else you specify in your [access policy](/cloudflare-one/policies/access).

To limit your preview URLs to authorized emails only:

1. Log in to the [Cloudflare Access dashboard](https://one.dash.cloudflare.com/?to=/:account/access/apps).
2. Select your account.
3. Add an application.
4. Select **Self Hosted**.
5. Name your application (for example, "my-worker") and add your `workers.dev` subdomain as the **Application domain**. For example, to secure preview URLs for a Worker running on `my-worker.my-subdomain.workers.dev`, use:
   - Subdomain: `*-my-worker`
   - Domain: `my-subdomain.workers.dev`

:::note
You must press enter after you input your Application domain for it to save. You will see a "Zone is not associated with the current account" warning that you may ignore.
:::

6. Go to the next page.
7. Add a name for your access policy (for example, "Allow employees access to preview URLs for my-worker").
8. In the **Configure rules** section, create a new rule with the **Emails** selector, or any other attributes with which you wish to gate access to previews.
9. Enter the emails you want to authorize.
View [access policies](/cloudflare-one/policies/access/#selectors) to learn about configuring alternate rules.

10. Go to the next page.
11. Add application.

## Disabling Preview URLs

### Disabling Preview URLs in the dashboard

To disable Preview URLs for a Worker:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages** and in **Overview**, select your Worker.
3. Go to **Settings** > **Domains & Routes**.
4. Next to **Preview URLs**, select **Disable**.
5. Confirm that you want to disable Preview URLs.

### Disabling Preview URLs in the [Wrangler configuration file](/workers/wrangler/configuration/)

:::note
Wrangler 3.91.0 or higher is required to use this feature.
:::

To disable Preview URLs for a Worker, include the following in your Worker's Wrangler file:

<WranglerConfig>

```toml
preview_urls = false
```

</WranglerConfig>

When you redeploy your Worker with this change, Preview URLs will be disabled.

:::caution
If you disable Preview URLs in the Cloudflare dashboard but do not update your Worker's Wrangler file with `preview_urls = false`, then Preview URLs will be re-enabled the next time you deploy your Worker with Wrangler.
:::

## Limitations

- Preview URLs are not generated for Workers that implement a [Durable Object](/durable-objects/).
- Preview URLs are not currently generated for [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) [user Workers](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#user-workers). This is a temporary limitation; we are working to remove it.
- You cannot currently configure Preview URLs to run on a subdomain other than [`workers.dev`](/workers/configuration/routing/workers-dev/).

---

# Secrets

URL: https://developers.cloudflare.com/workers/configuration/secrets/

import { Render } from "~/components";

## Background

Secrets are a type of binding that allow you to attach encrypted text values to your Worker.
You cannot see secrets after you set them and can only access secrets via [Wrangler](/workers/wrangler/commands/#secret) or programmatically via the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters). Secrets are used for storing sensitive information like API keys and auth tokens. Secrets are available on the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](/workers/runtime-apis/handlers/fetch/).

## Local Development with Secrets

<Render file="secrets-in-dev" />

## Secrets on deployed Workers

### Adding secrets to your project

#### Via Wrangler

Secrets can be added through the [`wrangler secret put`](/workers/wrangler/commands/#secret) or [`wrangler versions secret put`](/workers/wrangler/commands/#secret-put) commands. `wrangler secret put` creates a new version of the Worker and deploys it immediately.

```sh
npx wrangler secret put <KEY>
```

If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret put` command. This only creates a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).

:::note
Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag.
:::

```sh
npx wrangler versions secret put <KEY>
```

#### Via the dashboard

To add a secret via the dashboard:

1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings**.
4. Under **Variables and Secrets**, select **Add**.
5. Select the type **Secret**, input a **Variable name**, and input its **Value**. This secret will be made available to your Worker but the value will be hidden in Wrangler and the dashboard.
6. (Optional) To add more secrets, select **Add variable**.
7. Select **Deploy** to implement your changes.
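In your Worker code, a secret is read from the `env` parameter just like an environment variable. A minimal sketch, assuming a hypothetical secret named `API_TOKEN` was set with `npx wrangler secret put API_TOKEN`:

```javascript
// Sketch: reading a secret from the env parameter. API_TOKEN is an example
// binding name; use whatever name you passed to `wrangler secret put`.
const worker = {
  async fetch(request, env) {
    if (!env.API_TOKEN) {
      // Fail fast if the secret was never set for this environment.
      return new Response("API_TOKEN is not configured", { status: 500 });
    }
    // Use the secret server-side only (for example, as a bearer token on an
    // outbound fetch); never include it in the response body.
    return new Response("ok");
  },
};
// In a deployed module Worker, this object is the default export.
```

Because the value is only available at runtime, code like this behaves identically whether the binding came from `wrangler secret put` or from a `.dev.vars` file during local development.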
### Delete secrets from your project

#### Via Wrangler

Secrets can be deleted through the [`wrangler secret delete`](/workers/wrangler/commands/#delete-2) or [`wrangler versions secret delete`](/workers/wrangler/commands/#secret-delete) commands. `wrangler secret delete` creates a new version of the Worker and deploys it immediately.

```sh
npx wrangler secret delete <KEY>
```

If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret delete` command. This only creates a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).

```sh
npx wrangler versions secret delete <KEY>
```

#### Via the dashboard

To delete a secret from your Worker project via the dashboard:

1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings**.
4. Under **Variables and Secrets**, select **Edit**.
5. In the **Edit** drawer, select **X** next to the secret you want to delete.
6. Select **Deploy** to implement your changes.
7. (Optional) Instead of using the edit drawer, you can click the delete icon next to the secret.

<Render file="env_and_secrets" />

## Related resources

- [Wrangler secret commands](/workers/wrangler/commands/#secret) - Review the Wrangler commands to create, delete and list secrets.

---

# Smart Placement

URL: https://developers.cloudflare.com/workers/configuration/smart-placement/

import { WranglerConfig } from "~/components";

By default, [Workers](/workers/) and [Pages Functions](/pages/functions/) are invoked in a data center closest to where the request was received. If you are running back-end logic in a Worker, it may be more performant to run that Worker closer to your back-end infrastructure rather than the end user.
Smart Placement automatically places your workloads in an optimal location that minimizes latency and speeds up your applications. ## Background The following example demonstrates how moving your Worker close to your back-end services could decrease application latency: You have a user in Sydney, Australia who is accessing an application running on Workers. This application makes multiple round trips to a database located in Frankfurt, Germany in order to serve the user’s request.  The issue is the time that it takes the Worker to perform multiple round trips to the database. Instead of the request being processed close to the user, the Cloudflare network, with Smart Placement enabled, would process the request in a data center closest to the database.  ## Understand how Smart Placement works Smart Placement is enabled on a per-Worker basis. Once enabled, Smart Placement analyzes the [request duration](/workers/observability/metrics-and-analytics/#request-duration) of the Worker in different Cloudflare locations around the world on a regular basis. Smart Placement decides where to run the Worker by comparing the estimated request duration in the location closest to where the request was received (the default location where the Worker would run) to a set of candidate locations around the world. For each candidate location, Smart Placement considers the performance of the Worker in that location as well as the network latency added by forwarding the request to that location. If the estimated request duration in the best candidate location is significantly faster than the location where the request was received, the request will be forwarded to that candidate location. Otherwise, the Worker will run in the default location closest to where the request was received. 
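The comparison described above can be expressed as a conceptual sketch. The cost model and the significance threshold below are invented for illustration; Cloudflare's actual heuristics are not published:

```javascript
// Conceptual sketch of the placement decision described above.
// estimate(location) returns the estimated request duration (ms) for running
// the Worker in that location; the 20% significance threshold is hypothetical.
function choosePlacement(defaultLocation, candidates, estimate) {
  const defaultDuration = estimate(defaultLocation);
  let best = { location: defaultLocation, duration: defaultDuration };
  for (const candidate of candidates) {
    // A candidate's cost is its compute time plus the network latency of
    // forwarding the request to it.
    const total = estimate(candidate.location) + candidate.forwardingLatency;
    if (total < best.duration) best = { location: candidate.location, duration: total };
  }
  const SIGNIFICANCE = 0.8; // hypothetical: must be at least 20% faster to forward
  return best.duration < defaultDuration * SIGNIFICANCE
    ? best.location
    : defaultLocation;
}
```

In this model, a far-away location with fast access to the database wins only when its compute-plus-forwarding cost beats the default location by a clear margin; otherwise the request stays where it arrived.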
Smart Placement only considers candidate locations where the Worker has previously run, since the estimated request duration in each candidate location is based on historical data from the Worker running in that location. This means that Smart Placement cannot run the Worker in a location that it does not normally receive traffic from. Smart Placement only affects the execution of [fetch event handlers](/workers/runtime-apis/handlers/fetch/). Smart Placement does not affect the execution of [RPC methods](/workers/runtime-apis/rpc/) or [named entrypoints](/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints). Workers without a fetch event handler will be ignored by Smart Placement. For Workers with both fetch and non-fetch event handlers, Smart Placement will only affect the execution of the fetch event handler. Similarly, Smart Placement will not affect where [static assets](/workers/static-assets/) are served from. Static assets will continue to be served from the location nearest to the incoming request. If a Worker is invoked and your code retrieves assets via the [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), then assets will be served from the location that your Worker runs in. ## Enable Smart Placement Smart Placement is available to users on all Workers plans. ### Enable Smart Placement via Wrangler To enable Smart Placement via Wrangler: 1. Make sure that you have `wrangler@2.20.0` or later [installed](/workers/wrangler/install-and-update/). 2. Add the following to your Worker project's Wrangler file: <WranglerConfig> ```toml [placement] mode = "smart" ``` </WranglerConfig> 3. Wait for Smart Placement to analyze your Worker. This process may take up to 15 minutes. 4. View your Worker's [request duration analytics](/workers/observability/metrics-and-analytics/#request-duration). ### Enable Smart Placement via the dashboard To enable Smart Placement via the dashboard: 1. 
Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In **Account Home**, select **Workers & Pages**.
3. In **Overview**, select your Worker.
4. Select **Settings** > **General**.
5. Under **Placement**, choose **Smart**.
6. Wait for Smart Placement to analyze your Worker. Smart Placement requires consistent traffic to the Worker from multiple locations around the world to make a placement decision. The analysis process may take up to 15 minutes.
7. View your Worker's [request duration analytics](/workers/observability/metrics-and-analytics/#request-duration).

## Observability

### Placement Status

A Worker's metadata contains details about a Worker's placement status. Query your Worker's placement status through the following Workers API endpoint:

```bash
curl -X GET https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/workers/services/{WORKER_NAME} \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" | jq .
```

Possible placement states include:

- _(not present)_: The Worker has not been analyzed for Smart Placement yet. The Worker will always run in the default Cloudflare location closest to where the request was received.
- `SUCCESS`: The Worker was successfully analyzed and will be optimized by Smart Placement. The Worker will run in the Cloudflare location that minimizes expected request duration, which may be the default location closest to where the request was received or may be a faster location elsewhere in the world.
- `INSUFFICIENT_INVOCATIONS`: The Worker has not received enough requests to make a placement decision. Smart Placement requires consistent traffic to the Worker from multiple locations around the world. The Worker will always run in the default Cloudflare location closest to where the request was received.
- `UNSUPPORTED_APPLICATION`: Smart Placement began optimizing the Worker and measured the results, which showed that Smart Placement made the Worker slower.
In response, Smart Placement reverted the placement decision. The Worker will always run in the default Cloudflare location closest to where the request was received, and Smart Placement will not analyze the Worker again until it is redeployed. This state is rare and accounts for less than 1% of Workers with Smart Placement enabled.

### Request Duration Analytics

Once Smart Placement is enabled, data about request duration gets collected. Request duration is measured at the data center closest to the end user. By default, one percent (1%) of requests are not routed with Smart Placement. These requests serve as a baseline to compare to.

### `cf-placement` header

Once Smart Placement is enabled, Cloudflare adds a `cf-placement` header to all requests. This can be used to check whether a request has been routed with Smart Placement and where the Worker is processing the request (which is shown as the nearest airport code to the data center). For example, the `cf-placement: remote-LHR` header's `remote` value indicates that the request was routed using Smart Placement to a Cloudflare data center near London. The `cf-placement: local-EWR` header's `local` value indicates that the request was not routed using Smart Placement and the Worker was invoked in a data center closest to where the request was received, close to Newark Liberty International Airport (EWR).

:::caution[Beta use only]
We may remove the `cf-placement` header before Smart Placement enters general availability.
:::

## Best practices

If you are building full-stack applications on Workers, we recommend splitting up the front-end and back-end logic into different Workers and using [Service Bindings](/workers/runtime-apis/bindings/service-bindings/) to connect your front-end logic and back-end logic Workers.

Enabling Smart Placement on your back-end Worker will invoke it close to your back-end service, while the front-end Worker serves requests close to the user.
This architecture maintains fast, reactive front-ends while also improving latency when the back-end Worker is called.

## Give feedback on Smart Placement

Smart Placement is in beta. To share your thoughts and experience with Smart Placement, join the [Cloudflare Developer Discord](https://discord.cloudflare.com).

---

# Page Rules

URL: https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/

Page Rules trigger certain actions whenever a request matches one of the URL patterns you define. You can define a page rule to trigger one or more actions whenever a certain URL pattern is matched. Refer to [Page Rules](/rules/page-rules/) to learn more about configuring Page Rules.

## Page Rules with Workers

Cloudflare acts as a [reverse proxy](https://www.cloudflare.com/learning/what-is-cloudflare/) to provide services, like Page Rules, to Internet properties. Your application's traffic will pass through a Cloudflare data center that is closest to the visitor. There are hundreds of these around the world, each of which is capable of running services like Workers and Page Rules. If your application is built on Workers and/or Pages, the [Cloudflare global network](https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/) acts as your origin server and responds to requests directly from the Cloudflare global network.

When using Page Rules with Workers, the following workflow is applied.

1. Request arrives at a Cloudflare data center.
2. Cloudflare decides if this request is a Worker route. Because this is a Worker route, Cloudflare evaluates and disables a number of features, including some that would be set by Page Rules.
3. Page Rules run as part of normal request processing with some features now disabled.
4. Worker executes.
5. Worker makes a same-zone or other-zone subrequest. Because this is a Worker route, Cloudflare disables a number of features, including some that would be set by Page Rules.
Page Rules are evaluated both at the client-to-Worker request stage (step 2) and the Worker subrequest stage (step 5). If you are experiencing Page Rule errors when running Workers, contact your Cloudflare account team or [Cloudflare Support](/support/contacting-cloudflare-support/). ## Affected Page Rules The following Page Rules may not work as expected when an incoming request is matched to a Worker route: * Always Online * [Always Use HTTPS](/workers/configuration/workers-with-page-rules/#always-use-https) * [Automatic HTTPS Rewrites](/workers/configuration/workers-with-page-rules/#automatic-https-rewrites) * [Browser Cache TTL](/workers/configuration/workers-with-page-rules/#browser-cache-ttl) * [Browser Integrity Check](/workers/configuration/workers-with-page-rules/#browser-integrity-check) * [Cache Deception Armor](/workers/configuration/workers-with-page-rules/#cache-deception-armor) * [Cache Level](/workers/configuration/workers-with-page-rules/#cache-level) * Disable Apps * [Disable Zaraz](/workers/configuration/workers-with-page-rules/#disable-zaraz) * [Edge Cache TTL](/workers/configuration/workers-with-page-rules/#edge-cache-ttl) * [Email Obfuscation](/workers/configuration/workers-with-page-rules/#email-obfuscation) * [Forwarding URL](/workers/configuration/workers-with-page-rules/#forwarding-url) * Host Header Override * [IP Geolocation Header](/workers/configuration/workers-with-page-rules/#ip-geolocation-header) * Mirage * [Origin Cache Control](/workers/configuration/workers-with-page-rules/#origin-cache-control) * [Rocket Loader](/workers/configuration/workers-with-page-rules/#rocket-loader) * [Security Level](/workers/configuration/workers-with-page-rules/#security-level) * [SSL](/workers/configuration/workers-with-page-rules/#ssl) This is because the default setting of these Page Rules will be disabled when Cloudflare recognizes that the request is headed to a Worker. 
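If you depend on a rule that is ignored for requests matching a Worker route, such as Forwarding URL, you can reproduce the behavior in the Worker itself. A minimal sketch with hypothetical paths:

```javascript
// Sketch: reproducing a Forwarding URL redirect inside the Worker, since the
// Page Rule is ignored on client-to-Worker requests. /old-path/ and /new-path/
// are placeholder values.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/old-path/")) {
      url.pathname = url.pathname.replace("/old-path/", "/new-path/");
      // 301 mirrors a "Permanent Redirect" Forwarding URL rule.
      return Response.redirect(url.toString(), 301);
    }
    return new Response("no redirect");
  },
};
// In a deployed module Worker, this object is the default export.
```

The same pattern applies to other ignored rules: re-implement the behavior in code rather than relying on the rule firing for the Worker route.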
:::caution[Testing] Due to ongoing changes to the Workers runtime, detailed documentation on how these rules are affected is updated following testing. ::: To learn what these Page Rules do, refer to [Page Rules](/rules/page-rules/). :::note[Same zone versus other zone] A same zone subrequest is a request the Worker makes to an orange-clouded hostname in the same zone the Worker runs on. Depending on your DNS configuration, any request that falls outside that definition may be considered an other zone request by the Cloudflare network. ::: ### Always Use HTTPS | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### Automatic HTTPS Rewrites | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Browser Cache TTL | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Browser Integrity Check | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### Cache Deception Armor | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Cache Level | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Disable Zaraz | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other
Zone | Rule Ignored | ### Edge Cache TTL | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Email Obfuscation | Source | Target | Behavior | | -------------------------|------------|------------| | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Forwarding URL | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### IP Geolocation Header | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Origin Cache Control | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Rocket Loader | Source | Target | Behavior | | ------ | ---------- | ------------ | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### Security Level | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### SSL | Source | Target | Behavior | | ------ | ---------- | -------------- | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | --- # Connect to databases URL: https://developers.cloudflare.com/workers/databases/connecting-to-databases/ Cloudflare Workers can connect to and query your data in both SQL and NoSQL databases, including: - Traditional hosted relational databases, including Postgres and MySQL. 
- Serverless databases and data platforms, including Supabase, MongoDB Atlas, PlanetScale, and Fauna, as well as ORMs such as Prisma. - Cloudflare's own [D1](/d1/), a serverless SQL-based database. | Database | Integration | Library or Driver | Connection Method | | -------- | ----------- | ----------------- | ----------------- | | [Postgres](/workers/tutorials/postgres/) | - | [Postgres.js](https://github.com/porsager/postgres), [node-postgres](https://node-postgres.com/) | [Hyperdrive](/hyperdrive/) | | [Postgres](/workers/tutorials/postgres/) | - | [deno-postgres](https://github.com/cloudflare/worker-template-postgres) | [TCP Socket](/workers/runtime-apis/tcp-sockets/) via database driver | | [MySQL](/workers/tutorials/postgres/) | - | [deno-mysql](https://github.com/cloudflare/worker-template-mysql) | [TCP Socket](/workers/runtime-apis/tcp-sockets/) via database driver | | [Fauna](https://docs.fauna.com/fauna/current/build/integration/cloudflare/) | [Yes](/workers/databases/native-integrations/fauna/) | [fauna](https://github.com/fauna/fauna-js) | API via client library | | [PlanetScale](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript) | [Yes](/workers/databases/native-integrations/planetscale/) | [@planetscale/database](https://github.com/planetscale/database-js) | API via client library | | [Supabase](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers) | [Yes](/workers/databases/native-integrations/supabase/) | [@supabase/supabase-js](https://github.com/supabase/supabase-js) | API via client library | | [Prisma](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers) | No | [prisma](https://github.com/prisma/prisma) | API via client library |
| [Neon](https://blog.cloudflare.com/neon-postgres-database-from-workers/) | [Yes](/workers/databases/native-integrations/neon/) | [@neondatabase/serverless](https://neon.tech/blog/serverless-driver-for-postgres/) | API via client library | | [Hasura](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | No | API | GraphQL API via fetch() | | [Upstash Redis](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) | [Yes](/workers/databases/native-integrations/upstash/) | [@upstash/redis](https://github.com/upstash/upstash-redis) | API via client library | | [TiDB Cloud](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare) | No | [@tidbcloud/serverless](https://github.com/tidbcloud/serverless-js) | API via client library | :::note If you do not see an integration listed or have an integration to add, complete and submit the [Cloudflare Developer Platform Integration form](https://forms.gle/iaUqLWE8aezSEhgd6). ::: Once you have installed the necessary packages, use the APIs provided by these packages to connect to your database and perform operations on it. Refer to the linked guides for service-specific instructions. ## Connect to a database from a Worker There are four ways to connect to a database from a Worker: 1. [Hyperdrive](/hyperdrive/) (recommended), which dramatically speeds up access to traditional databases. Hyperdrive currently supports PostgreSQL and PostgreSQL-compatible database providers. 2. [Database Integrations](/workers/databases/native-integrations/): Simplifies authentication by managing credentials on your behalf, with support for PlanetScale, Neon, and Supabase. 3. [TCP Socket API](/workers/runtime-apis/tcp-sockets): A direct TCP connection to a database. TCP is the de facto standard protocol that many databases, such as PostgreSQL and MySQL, use for client connectivity. 4.
HTTP- or WebSocket-based serverless drivers: Many hosted databases support an HTTP or WebSocket API, either to let clients connect from environments that do not support TCP or because it is their preferred connection protocol. ## Authentication If your database requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command: ```sh wrangler secret put <SECRET_NAME> ``` Then, retrieve the secret value in your code using the following code snippet: ```js const secretValue = env.<SECRET_NAME>; ``` Use the secret value to authenticate with the external service. For example, if the external service requires an API key or a database username and password, include them using the relevant service's library or API. For services that require mTLS authentication, use [mTLS certificates](/workers/runtime-apis/bindings/mtls) to present a client certificate. ## Next steps - Learn how to connect to [an existing PostgreSQL database](/hyperdrive/) with Hyperdrive. - Discover [other storage options available](/workers/platform/storage-options/) for use with Workers. - [Create your first database](/d1/get-started/) with Cloudflare D1. --- # Databases URL: https://developers.cloudflare.com/workers/databases/ import { DirectoryListing } from "~/components"; Explore database integrations for your Worker projects. <DirectoryListing /> --- # Frameworks URL: https://developers.cloudflare.com/workers/frameworks/ import { Badge, Description, DirectoryListing, InlineBadge, Render, TabItem, Tabs, PackageManagers, Feature, } from "~/components"; <Description> Run front-end websites — static or dynamic — directly on Cloudflare's global network. </Description> The following frameworks have experimental support for Cloudflare Workers and the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/).
They can be initialized with the [`create-cloudflare` CLI](/workers/get-started/guide/) using the `--experimental` flag. <PackageManagers type="create" pkg="cloudflare@latest my-framework-app" args={"--type=web-framework --experimental"} /> <DirectoryListing folder="workers/frameworks/framework-guides" /> :::note **Static Assets for Workers is currently in open beta.** If you are looking for a framework not on this list: - It may be supported in [Cloudflare Pages](/pages/). Refer to [Pages Frameworks guides](/pages/framework-guides/) for a full list. - Tell us which framework you would like to see supported on Workers in the [Cloudflare Developer Discord](https://discord.gg/dqgZUwcD). ::: --- # 103 Early Hints URL: https://developers.cloudflare.com/workers/examples/103-early-hints/ import { TabItem, Tabs } from "~/components"; `103` Early Hints is an HTTP status code designed to speed up content delivery. When enabled, Cloudflare can cache the `Link` headers marked with preload and/or preconnect from HTML pages and serve them in a `103` Early Hints response before reaching the origin server. Browsers can use these hints to fetch linked assets while waiting for the origin’s final response, dramatically improving page load speeds. To ensure Early Hints are enabled on your zone: 1. Log in to the [Cloudflare Dashboard](https://dash.cloudflare.com) and select your account and website. 2. Go to **Speed** > **Optimization** > **Content Optimization**. 3. Set the **Early Hints** toggle to **On**. You can return `Link` headers from a Worker running on your zone to speed up your page load times.
<Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js const CSS = "body { color: red; }"; const HTML = ` <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Early Hints test</title> <link rel="stylesheet" href="/test.css"> </head> <body> <h1>Early Hints test page</h1> </body> </html> `; export default { async fetch(req) { // If request is for test.css, serve the raw CSS if (/test\.css$/.test(req.url)) { return new Response(CSS, { headers: { "content-type": "text/css", }, }); } else { // Serve raw HTML using Early Hints for the CSS file return new Response(HTML, { headers: { "content-type": "text/html", link: "</test.css>; rel=preload; as=style", }, }); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts const CSS = "body { color: red; }"; const HTML = ` <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Early Hints test</title> <link rel="stylesheet" href="/test.css"> </head> <body> <h1>Early Hints test page</h1> </body> </html> `; export default { async fetch(req): Promise<Response> { // If request is for test.css, serve the raw CSS if (/test\.css$/.test(req.url)) { return new Response(CSS, { headers: { "content-type": "text/css", }, }); } else { // Serve raw HTML using Early Hints for the CSS file return new Response(HTML, { headers: { "content-type": "text/html", link: "</test.css>; rel=preload; as=style", }, }); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import re from js import Response, Headers CSS = "body { color: red; }" HTML = """ <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Early Hints test</title> <link rel="stylesheet" href="/test.css"> </head> <body> <h1>Early Hints test page</h1> </body> </html> """ def on_fetch(request): if re.search(r"test\.css$", request.url): headers = Headers.new({"content-type": "text/css"}.items()) return Response.new(CSS, headers=headers) else:
headers = Headers.new({"content-type": "text/html","link": "</test.css>; rel=preload; as=style"}.items()) return Response.new(HTML, headers=headers) ``` </TabItem> </Tabs> --- # A/B testing with same-URL direct access URL: https://developers.cloudflare.com/workers/examples/ab-testing/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js const NAME = "myExampleWorkersABTest"; export default { async fetch(req) { const url = new URL(req.url); // Enable Passthrough to allow direct access to control and test routes. if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test")) return fetch(req); // Determine which group this requester is in. const cookie = req.headers.get("cookie"); if (cookie && cookie.includes(`${NAME}=control`)) { url.pathname = "/control" + url.pathname; } else if (cookie && cookie.includes(`${NAME}=test`)) { url.pathname = "/test" + url.pathname; } else { // If there is no cookie, this is a new client. Choose a group and set the cookie. const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split if (group === "control") { url.pathname = "/control" + url.pathname; } else { url.pathname = "/test" + url.pathname; } // Reconstruct response to avoid immutability let res = await fetch(url); res = new Response(res.body, res); // Set cookie to enable persistent A/B sessions. res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`); return res; } return fetch(url); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts const NAME = "myExampleWorkersABTest"; export default { async fetch(req): Promise<Response> { const url = new URL(req.url); // Enable Passthrough to allow direct access to control and test routes. if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test")) return fetch(req); // Determine which group this requester is in. 
const cookie = req.headers.get("cookie"); if (cookie && cookie.includes(`${NAME}=control`)) { url.pathname = "/control" + url.pathname; } else if (cookie && cookie.includes(`${NAME}=test`)) { url.pathname = "/test" + url.pathname; } else { // If there is no cookie, this is a new client. Choose a group and set the cookie. const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split if (group === "control") { url.pathname = "/control" + url.pathname; } else { url.pathname = "/test" + url.pathname; } // Reconstruct response to avoid immutability let res = await fetch(url); res = new Response(res.body, res); // Set cookie to enable persistent A/B sessions. res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`); return res; } return fetch(url); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import random from urllib.parse import urlparse, urlunparse from js import Response, Headers, fetch NAME = "myExampleWorkersABTest" async def on_fetch(request): url = urlparse(request.url) # Uncomment below when testing locally # url = url._replace(netloc="example.com") if "localhost" in url.netloc else url # Enable Passthrough to allow direct access to control and test routes. if url.path.startswith("/control") or url.path.startswith("/test"): return fetch(urlunparse(url)) # Determine which group this requester is in. cookie = request.headers.get("cookie") if cookie and f'{NAME}=control' in cookie: url = url._replace(path="/control" + url.path) elif cookie and f'{NAME}=test' in cookie: url = url._replace(path="/test" + url.path) else: # If there is no cookie, this is a new client. Choose a group and set the cookie. 
group = "test" if random.random() < 0.5 else "control" if group == "control": url = url._replace(path="/control" + url.path) else: url = url._replace(path="/test" + url.path) # Reconstruct response to avoid immutability res = await fetch(urlunparse(url)) headers = dict(res.headers) headers["Set-Cookie"] = f'{NAME}={group}; path=/' headers = Headers.new(headers.items()) return Response.new(res.body, headers=headers) return fetch(urlunparse(url)) ``` </TabItem> </Tabs> --- # Aggregate requests URL: https://developers.cloudflare.com/workers/examples/aggregate-requests/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; const responses = await Promise.all([fetch(url1), fetch(url2)]); const results = await Promise.all(responses.map((r) => r.json())); const options = { headers: { "content-type": "application/json;charset=UTF-8" }, }; return new Response(JSON.stringify(results), options); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request) { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; const responses = await Promise.all([fetch(url1), fetch(url2)]); const results = await Promise.all(responses.map((r) => r.json())); const options = { headers: { "content-type": "application/json;charset=UTF-8" }, }; return new Response(JSON.stringify(results), options); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch, Headers, JSON, Promise async def on_fetch(request): # some_host is set up to return JSON responses 
some_host = "https://jsonplaceholder.typicode.com" url1 = some_host + "/todos/1" url2 = some_host + "/todos/2" responses = await Promise.all([fetch(url1), fetch(url2)]) results = await Promise.all(map(lambda r: r.json(), responses)) headers = Headers.new({"content-type": "application/json;charset=UTF-8"}.items()) return Response.new(JSON.stringify(results), headers=headers) ``` </TabItem> </Tabs> --- # Accessing the Cloudflare Object URL: https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(req) { const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." }; return new Response(JSON.stringify(data, null, 2), { headers: { "content-type": "application/json;charset=UTF-8", }, }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(req): Promise<Response> { const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." }; return new Response(JSON.stringify(data, null, 2), { headers: { "content-type": "application/json;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import json from js import Response, Headers, JSON def on_fetch(request): error = json.dumps({ "error": "The `cf` object is not available inside the preview." 
}) data = request.cf if request.cf is not None else error headers = Headers.new({"content-type":"application/json"}.items()) return Response.new(JSON.stringify(data, None, 2), headers=headers) ``` </TabItem> </Tabs> --- # Alter headers URL: https://developers.cloudflare.com/workers/examples/alter-headers/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const response = await fetch("https://example.com"); // Clone the response so that it's no longer immutable const newResponse = new Response(response.body, response); // Add a custom header with a value newResponse.headers.append( "x-workers-hello", "Hello from Cloudflare Workers", ); // Delete headers newResponse.headers.delete("x-header-to-delete"); newResponse.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header newResponse.headers.set("x-header-to-change", "NewValue"); return newResponse; }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const response = await fetch(request); // Clone the response so that it's no longer immutable const newResponse = new Response(response.body, response); // Add a custom header with a value newResponse.headers.append( "x-workers-hello", "Hello from Cloudflare Workers", ); // Delete headers newResponse.headers.delete("x-header-to-delete"); newResponse.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header newResponse.headers.set("x-header-to-change", "NewValue"); return newResponse; }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch async def on_fetch(request): response = await fetch("https://example.com") # Clone the response so that it's no longer immutable new_response = Response.new(response.body, response) # Add a custom header with 
a value new_response.headers.append( "x-workers-hello", "Hello from Cloudflare Workers" ) # Delete headers new_response.headers.delete("x-header-to-delete") new_response.headers.delete("x-header2-to-delete") # Adjust the value for an existing header new_response.headers.set("x-header-to-change", "NewValue") return new_response ``` </TabItem> </Tabs> You can also use the [`custom-headers-example` template](https://github.com/kristianfreeman/custom-headers-example) to deploy this code to your custom domain. --- # Auth with headers URL: https://developers.cloudflare.com/workers/examples/auth-with-headers/ import { TabItem, Tabs } from "~/components"; :::caution[Caution when using in production] The example code contains a generic header key and value of `X-Custom-PSK` and `mypresharedkey`. To best protect your resources, change the header key and value in the Workers editor before saving your code. ::: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key * @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Fetch request from origin. return fetch(request); } // Incorrect key supplied. Reject the request. 
return new Response("Sorry, you have supplied an invalid key.", { status: 403, }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key * @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Fetch request from origin. return fetch(request); } // Incorrect key supplied. Reject the request. return new Response("Sorry, you have supplied an invalid key.", { status: 403, }); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch async def on_fetch(request): PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK" PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey" psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY) if psk == PRESHARED_AUTH_HEADER_VALUE: # Correct preshared header key supplied. Fetch request from origin. return fetch(request) # Incorrect key supplied. Reject the request. return Response.new("Sorry, you have supplied an invalid key.", status=403); ``` </TabItem> </Tabs> --- # HTTP Basic Authentication URL: https://developers.cloudflare.com/workers/examples/basic-auth/ import { TabItem, Tabs } from "~/components"; :::note This example Worker makes use of the [Node.js Buffer API](/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag). 
::: :::caution[Caution when using in production] This code is provided as a sample, and is not suitable for production use. Basic Authentication sends credentials unencrypted, and must be used with an HTTPS connection to be considered secure. For a production-ready authentication system, consider using [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-public-app/). ::: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js /** * Shows how to restrict access using the HTTP Basic schema. * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 * */ import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); /** * Protect against timing attacks by safely comparing values using `timingSafeEqual`. * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details * @param {string} a * @param {string} b * @returns {boolean} */ function timingSafeEqual(a, b) { const aBytes = encoder.encode(a); const bBytes = encoder.encode(b); if (aBytes.byteLength !== bBytes.byteLength) { // Strings must be the same length in order to compare // with crypto.subtle.timingSafeEqual return false; } return crypto.subtle.timingSafeEqual(aBytes, bBytes); } export default { /** * * @param {Request} request * @param {{PASSWORD: string}} env * @returns */ async fetch(request, env) { const BASIC_USER = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const BASIC_PASS = env.PASSWORD ?? "password"; const url = new URL(request.url); switch (url.pathname) { case "/": return new Response("Anyone can access the homepage."); case "/logout": // Invalidate the "Authorization" header by returning a HTTP 401. 
// We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. return new Response("Logged out.", { status: 401 }); case "/admin": { // The "Authorization" header is sent when authenticated. const authorization = request.headers.get("Authorization"); if (!authorization) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } const [scheme, encoded] = authorization.split(" "); // The Authorization header must start with Basic, followed by a space. if (!encoded || scheme !== "Basic") { return new Response("Malformed authorization header.", { status: 400, }); } const credentials = Buffer.from(encoded, "base64").toString(); // The username & password are split by the first colon. //=> example: "username:password" const index = credentials.indexOf(":"); const user = credentials.substring(0, index); const pass = credentials.substring(index + 1); if ( !timingSafeEqual(BASIC_USER, user) || !timingSafeEqual(BASIC_PASS, pass) ) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } return new Response("🎉 You have private access!", { status: 200, headers: { "Cache-Control": "no-store", }, }); } } return new Response("Not Found.", { status: 404 }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts /** * Shows how to restrict access using the HTTP Basic schema. * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 * */ import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); /** * Protect against timing attacks by safely comparing values using `timingSafeEqual`. 
* Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details */ function timingSafeEqual(a: string, b: string) { const aBytes = encoder.encode(a); const bBytes = encoder.encode(b); if (aBytes.byteLength !== bBytes.byteLength) { // Strings must be the same length in order to compare // with crypto.subtle.timingSafeEqual return false; } return crypto.subtle.timingSafeEqual(aBytes, bBytes); } interface Env { PASSWORD: string; } export default { async fetch(request, env): Promise<Response> { const BASIC_USER = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const BASIC_PASS = env.PASSWORD ?? "password"; const url = new URL(request.url); switch (url.pathname) { case "/": return new Response("Anyone can access the homepage."); case "/logout": // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. return new Response("Logged out.", { status: 401 }); case "/admin": { // The "Authorization" header is sent when authenticated. const authorization = request.headers.get("Authorization"); if (!authorization) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } const [scheme, encoded] = authorization.split(" "); // The Authorization header must start with Basic, followed by a space. if (!encoded || scheme !== "Basic") { return new Response("Malformed authorization header.", { status: 400, }); } const credentials = Buffer.from(encoded, "base64").toString(); // The username and password are split by the first colon. 
//=> example: "username:password" const index = credentials.indexOf(":"); const user = credentials.substring(0, index); const pass = credentials.substring(index + 1); if ( !timingSafeEqual(BASIC_USER, user) || !timingSafeEqual(BASIC_PASS, pass) ) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } return new Response("🎉 You have private access!", { status: 200, headers: { "Cache-Control": "no-store", }, }); } } return new Response("Not Found.", { status: 404 }); }, } satisfies ExportedHandler<Env>; ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use base64::prelude::*; use worker::*; #[event(fetch)] async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> { let basic_user = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ let basic_pass = match env.secret("PASSWORD") { Ok(s) => s.to_string(), Err(_) => "password".to_string(), }; let url = req.url()?; match url.path() { "/" => Response::ok("Anyone can access the homepage."), // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. "/logout" => Response::error("Logged out.", 401), "/admin" => { // The "Authorization" header is sent when authenticated. let authorization = req.headers().get("Authorization")?; if authorization == None { let mut headers = Headers::new(); // Prompts the user for credentials. 
headers.set( "WWW-Authenticate", "Basic realm='my scope', charset='UTF-8'", )?; return Ok(Response::error("You need to login.", 401)?.with_headers(headers)); } let authorization = authorization.unwrap(); let auth: Vec<&str> = authorization.split(" ").collect(); let scheme = auth[0]; let encoded = auth[1]; // The Authorization header must start with Basic, followed by a space. if encoded == "" || scheme != "Basic" { return Response::error("Malformed authorization header.", 400); } let buff = BASE64_STANDARD.decode(encoded).unwrap(); let credentials = String::from_utf8_lossy(&buff); // The username & password are split by the first colon. //=> example: "username:password" let credentials: Vec<&str> = credentials.split(':').collect(); let user = credentials[0]; let pass = credentials[1]; if user != basic_user || pass != basic_pass { let mut headers = Headers::new(); // Prompts the user for credentials. headers.set( "WWW-Authenticate", "Basic realm='my scope', charset='UTF-8'", )?; return Ok(Response::error("You need to login.", 401)?.with_headers(headers)); } let mut headers = Headers::new(); headers.set("Cache-Control", "no-store")?; Ok(Response::ok("🎉 You have private access!")?.with_headers(headers)) } _ => Response::error("Not Found.", 404), } } ``` </TabItem> </Tabs> --- # Block on TLS URL: https://developers.cloudflare.com/workers/examples/block-on-tls/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { try { const tlsVersion = request.cf.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("Please use TLS version 1.2 or higher.", { status: 403, }); } return fetch(request); } catch (err) { console.error( "request.cf does not exist in the previewer, only in production", ); return new Response(`Error in workers script ${err.message}`, { status: 500, }); } }, }; 
``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { try { const tlsVersion = request.cf.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("Please use TLS version 1.2 or higher.", { status: 403, }); } return fetch(request); } catch (err) { console.error( "request.cf does not exist in the previewer, only in production", ); return new Response(`Error in workers script ${err.message}`, { status: 500, }); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch async def on_fetch(request): tls_version = request.cf.tlsVersion if tls_version not in ("TLSv1.2", "TLSv1.3"): return Response.new("Please use TLS version 1.2 or higher.", status=403) return fetch(request) ``` </TabItem> </Tabs> --- # Bulk origin override URL: https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * An object with different URLs to fetch * @param {Object} ORIGINS */ const ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const url = new URL(request.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return fetch(url.toString(), request); } // Otherwise, process request as normal return fetch(request); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * An object with different URLs to fetch * @param {Object} ORIGINS */ const ORIGINS = { "starwarsapi.yourdomain.com": 
"swapi.dev", "google.yourdomain.com": "www.google.com", }; const url = new URL(request.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return fetch(url.toString(), request); } // Otherwise, process request as normal return fetch(request); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import fetch, URL async def on_fetch(request): # A dict with different URLs to fetch ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", } url = URL.new(request.url) # Check if incoming hostname is a key in the ORIGINS object if url.hostname in ORIGINS: url.hostname = ORIGINS[url.hostname] # If it is, proxy request to that third party origin return fetch(url.toString(), request) # Otherwise, process request as normal return fetch(request) ``` </TabItem> </Tabs> --- # Bulk redirects URL: https://developers.cloudflare.com/workers/examples/bulk-redirects/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", "https://" + externalHostname + "/redirect2"], ["/bulk2", "https://" + externalHostname + "/redirect3"], ["/bulk3", "https://" + externalHostname + "/redirect4"], ["/bulk4", "https://google.com"], ]); const requestURL = new URL(request.url); const path = requestURL.pathname; const location = redirectMap.get(path); if (location) { return Response.redirect(location, 301); } // If request not in map, return the original request return fetch(request); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const 
externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", "https://" + externalHostname + "/redirect2"], ["/bulk2", "https://" + externalHostname + "/redirect3"], ["/bulk3", "https://" + externalHostname + "/redirect4"], ["/bulk4", "https://google.com"], ]); const requestURL = new URL(request.url); const path = requestURL.pathname; const location = redirectMap.get(path); if (location) { return Response.redirect(location, 301); } // If request not in map, return the original request return fetch(request); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch, URL async def on_fetch(request): external_hostname = "examples.cloudflareworkers.com" redirect_map = { "/bulk1": "https://" + external_hostname + "/redirect2", "/bulk2": "https://" + external_hostname + "/redirect3", "/bulk3": "https://" + external_hostname + "/redirect4", "/bulk4": "https://google.com", } url = URL.new(request.url) location = redirect_map.get(url.pathname, None) if location: return Response.redirect(location, 301) # If request not in map, return the original request return fetch(request) ``` </TabItem> </Tabs> --- # Using the Cache API URL: https://developers.cloudflare.com/workers/examples/cache-api/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { const cacheUrl = new URL(request.url); // Construct the cache key from the cache URL const cacheKey = new Request(cacheUrl.toString(), request); const cache = caches.default; // Check whether the value is already available in the cache // if not, you will need to fetch it from origin, and store it in the cache let response = await cache.match(cacheKey); if (!response) { console.log( `Response for request url: ${request.url} not present in cache. 
Fetching and caching request.`, ); // If not in cache, get it from origin response = await fetch(request); // Must use Response constructor to inherit all of response's fields response = new Response(response.body, response); // Cache API respects Cache-Control headers. Setting s-max-age to 10 // will limit the response to be in cache for 10 seconds max // Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10"); ctx.waitUntil(cache.put(cacheKey, response.clone())); } else { console.log(`Cache hit for: ${request.url}.`); } return response; }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env {} export default { async fetch(request, env, ctx): Promise<Response> { const cacheUrl = new URL(request.url); // Construct the cache key from the cache URL const cacheKey = new Request(cacheUrl.toString(), request); const cache = caches.default; // Check whether the value is already available in the cache // if not, you will need to fetch it from origin, and store it in the cache let response = await cache.match(cacheKey); if (!response) { console.log( `Response for request url: ${request.url} not present in cache. Fetching and caching request.`, ); // If not in cache, get it from origin response = await fetch(request); // Must use Response constructor to inherit all of response's fields response = new Response(response.body, response); // Cache API respects Cache-Control headers. 
Setting s-max-age to 10 // will limit the response to be in cache for 10 seconds max // Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10"); ctx.waitUntil(cache.put(cacheKey, response.clone())); } else { console.log(`Cache hit for: ${request.url}.`); } return response; }, } satisfies ExportedHandler<Env>; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from pyodide.ffi import create_proxy from js import Response, Request, URL, caches, fetch async def on_fetch(request, _env, ctx): cache_url = request.url # Construct the cache key from the cache URL cache_key = Request.new(cache_url, request) cache = caches.default # Check whether the value is already available in the cache # if not, you will need to fetch it from origin, and store it in the cache response = await cache.match(cache_key) if response is None: print(f"Response for request url: {request.url} not present in cache. Fetching and caching request.") # If not in cache, get it from origin response = await fetch(request) # Must use Response constructor to inherit all of response's fields response = Response.new(response.body, response) # Cache API respects Cache-Control headers. 
Setting s-max-age to 10 # will limit the response to be in cache for 10 seconds max # Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10") ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone()))) else: print(f"Cache hit for: {request.url}.") return response ``` </TabItem> </Tabs> --- # Cache POST requests URL: https://developers.cloudflare.com/workers/examples/cache-post-request/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { async function sha256(message) { // encode as UTF-8 const msgBuffer = await new TextEncoder().encode(message); // hash the message const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer); // convert bytes to hex string return [...new Uint8Array(hashBuffer)] .map((b) => b.toString(16).padStart(2, "0")) .join(""); } try { if (request.method.toUpperCase() === "POST") { const body = await request.clone().text(); // Hash the request body to use it as a part of the cache key const hash = await sha256(body); const cacheUrl = new URL(request.url); // Store the URL in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: request.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // Otherwise, fetch response to POST request from origin if (!response) { response = await fetch(request); ctx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } return fetch(request); } catch (e) { return new Response("Error thrown " + e.message); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env {} export default { async fetch(request, env, ctx):
Promise<Response> { async function sha256(message) { // encode as UTF-8 const msgBuffer = await new TextEncoder().encode(message); // hash the message const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer); // convert bytes to hex string return [...new Uint8Array(hashBuffer)] .map((b) => b.toString(16).padStart(2, "0")) .join(""); } try { if (request.method.toUpperCase() === "POST") { const body = await request.clone().text(); // Hash the request body to use it as a part of the cache key const hash = await sha256(body); const cacheUrl = new URL(request.url); // Store the URL in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: request.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // Otherwise, fetch response to POST request from origin if (!response) { response = await fetch(request); ctx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } return fetch(request); } catch (e) { return new Response("Error thrown " + e.message); } }, } satisfies ExportedHandler<Env>; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import hashlib from pyodide.ffi import create_proxy from js import fetch, URL, Headers, Request, caches async def on_fetch(request, _, ctx): if 'POST' in request.method: # Hash the request body to use it as a part of the cache key body = await request.clone().text() body_hash = hashlib.sha256(body.encode('UTF-8')).hexdigest() # Store the URL in cache by prepending the body's hash cache_url = URL.new(request.url) cache_url.pathname = "/posts" + cache_url.pathname + body_hash # Convert to a GET to be able to cache headers = Headers.new(dict(request.headers).items()) cache_key = Request.new(cache_url.toString(), method='GET', headers=headers) # Find the cache key in the cache cache = 
caches.default response = await cache.match(cache_key) # Otherwise, fetch response to POST request from origin if response is None: response = await fetch(request) ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone()))) return response return fetch(request) ``` </TabItem> </Tabs> --- # Cache Tags using Workers URL: https://developers.cloudflare.com/workers/examples/cache-tags/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const requestUrl = new URL(request.url); const params = requestUrl.searchParams; const tags = params && params.has("tags") ? params.get("tags").split(",") : []; const url = params && params.has("uri") ? JSON.parse(params.get("uri")) : ""; if (!url) { const errorObject = { error: "URL cannot be empty", }; return new Response(JSON.stringify(errorObject), { status: 400 }); } const init = { cf: { cacheTags: tags, }, }; return fetch(url, init) .then((result) => { const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return new Response(JSON.stringify(response), { status: result.status, }); }) .catch((err) => { const errorObject = { error: err.message, }; return new Response(JSON.stringify(errorObject), { status: 500 }); }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const requestUrl = new URL(request.url); const params = requestUrl.searchParams; const tags = params && params.has("tags") ? params.get("tags").split(",") : []; const url = params && params.has("uri") ? 
JSON.parse(params.get("uri")) : ""; if (!url) { const errorObject = { error: "URL cannot be empty", }; return new Response(JSON.stringify(errorObject), { status: 400 }); } const init = { cf: { cacheTags: tags, }, }; return fetch(url, init) .then((result) => { const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return new Response(JSON.stringify(response), { status: result.status, }); }) .catch((err) => { const errorObject = { error: err.message, }; return new Response(JSON.stringify(errorObject), { status: 500 }); }); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, Object, fetch def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): request_url = URL.new(request.url) params = request_url.searchParams tags = params["tags"].split(",") if "tags" in params else [] url = params["uri"] or None if url is None: error = {"error": "URL cannot be empty"} return Response.json(to_js(error), status=400) options = {"cf": {"cacheTags": tags}} result = await fetch(url, to_js(options)) cache_status = result.headers["cf-cache-status"] last_modified = result.headers["last-modified"] response = {"cache": cache_status, "lastModified": last_modified} return Response.json(to_js(response), status=result.status) ``` </TabItem> </Tabs> --- # Conditional response URL: https://developers.cloudflare.com/workers/examples/conditional-response/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; // Return a new Response based on a URL's hostname const url = new URL(request.url); if 
(BLOCKED_HOSTNAMES.includes(url.hostname)) { return new Response("Blocked Host", { status: 403 }); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(url.pathname)) { return new Response("Blocked Extension", { status: 403 }); } // On HTTP method if (request.method === "POST") { return new Response("Response for POST"); } // On User Agent const userAgent = request.headers.get("User-Agent") || ""; if (userAgent.includes("bot")) { return new Response("Block User Agent containing bot", { status: 403 }); } // On Client's IP address const clientIP = request.headers.get("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return new Response("Block the IP 1.2.3.4", { status: 403 }); } // On ASN if (request.cf && request.cf.asn == 64512) { return new Response("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = request.headers.get("CF-Device-Type"); if (device === "mobile") { return Response.redirect("https://mobile.example.com"); } console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. 
Must test on a live worker", ); return fetch(request); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; // Return a new Response based on a URL's hostname const url = new URL(request.url); if (BLOCKED_HOSTNAMES.includes(url.hostname)) { return new Response("Blocked Host", { status: 403 }); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(url.pathname)) { return new Response("Blocked Extension", { status: 403 }); } // On HTTP method if (request.method === "POST") { return new Response("Response for POST"); } // On User Agent const userAgent = request.headers.get("User-Agent") || ""; if (userAgent.includes("bot")) { return new Response("Block User Agent containing bot", { status: 403 }); } // On Client's IP address const clientIP = request.headers.get("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return new Response("Block the IP 1.2.3.4", { status: 403 }); } // On ASN if (request.cf && request.cf.asn == 64512) { return new Response("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = request.headers.get("CF-Device-Type"); if (device === "mobile") { return Response.redirect("https://mobile.example.com"); } console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. 
Must test on a live worker", ); return fetch(request); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import re from js import Response, URL, fetch async def on_fetch(request): blocked_hostnames = ["nope.mywebsite.com", "bye.website.com"] url = URL.new(request.url) # Block on hostname if url.hostname in blocked_hostnames: return Response.new("Blocked Host", status=403) # On paths ending in .doc or .xml if re.search(r'\.(doc|xml)$', url.pathname): return Response.new("Blocked Extension", status=403) # On HTTP method if "POST" in request.method: return Response.new("Response for POST") # On User Agent user_agent = request.headers["User-Agent"] or "" if "bot" in user_agent: return Response.new("Block User Agent containing bot", status=403) # On Client's IP address client_ip = request.headers["CF-Connecting-IP"] if client_ip == "1.2.3.4": return Response.new("Block the IP 1.2.3.4", status=403) # On ASN if request.cf and request.cf.asn == 64512: return Response.new("Block the ASN 64512 response") # On Device Type # Requires Enterprise "CF-Device-Type Header" zone setting or # Page Rule with "Cache By Device Type" setting applied. 
device = request.headers["CF-Device-Type"] if device == "mobile": return Response.redirect("https://mobile.example.com") return fetch(request) ``` </TabItem> </Tabs> --- # Cache using fetch URL: https://developers.cloudflare.com/workers/examples/cache-using-fetch/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const url = new URL(request.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; let response = await fetch(request, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, //Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable. response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const url = new URL(request.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; let response = await fetch(request, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, //Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable. 
response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, Object, fetch def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): url = URL.new(request.url) # Only use the path for the cache key, removing query strings # and always store using HTTPS, for example, https://www.example.com/file-uri-here some_custom_key = f"https://{url.hostname}{url.pathname}" response = await fetch( request, cf=to_js({ # Always cache this fetch regardless of content type # for a max of 5 seconds before revalidating the resource "cacheTtl": 5, "cacheEverything": True, # Enterprise only feature, see Cache API for other plans "cacheKey": some_custom_key, }), ) # Reconstruct the Response object to make its headers mutable response = Response.new(response.body, response) # Set cache control headers to cache on browser for 25 minutes response.headers["Cache-Control"] = "max-age=1500" return response ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let url = req.url()?; // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here let custom_key = format!( "https://{host}{path}", host = url.host_str().unwrap(), path = url.path() ); let request = Request::new_with_init( url.as_str(), &RequestInit { headers: req.headers().clone(), method: req.method(), cf: CfProperties { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cache_ttl: Some(5), cache_everything: Some(true), // Enterprise only feature, see 
Cache API for other plans cache_key: Some(custom_key), ..CfProperties::default() }, ..RequestInit::default() }, )?; let mut response = Fetch::Request(request).send().await?; // Set cache control headers to cache on browser for 25 minutes let _ = response.headers_mut().set("Cache-Control", "max-age=1500"); Ok(response) } ``` </TabItem> </Tabs> ## Caching HTML resources ```js // Force Cloudflare to cache an asset fetch(event.request, { cf: { cacheEverything: true } }); ``` Setting the cache level to **Cache Everything** will override the default cacheability of the asset. For time-to-live (TTL), Cloudflare will still rely on headers set by the origin. ## Custom cache keys :::note This feature is available only to Enterprise customers. ::: A request's cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both. For more about cache keys, refer to the [Create custom cache keys](/cache/how-to/cache-keys/#create-custom-cache-keys) documentation. ```js // Set cache key for this request to "some-string". fetch(event.request, { cf: { cacheKey: "some-string" } }); ``` Normally, Cloudflare computes the cache key for a request based on the request's URL. Sometimes, though, you may like different URLs to be treated as if they were the same for caching purposes. For example, if your website content is hosted from both Amazon S3 and Google Cloud Storage - you have the same content in both places, and you can use a Worker to randomly balance between the two. However, you do not want to end up caching two copies of your content. 
You could utilize custom cache keys to cache based on the original request URL rather than the subrequest URL: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { let url = new URL(request.url); if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } let newRequest = new Request(url, request); return fetch(newRequest, { cf: { cacheKey: request.url }, }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { let url = new URL(request.url); if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } let newRequest = new Request(url, request); return fetch(newRequest, { cf: { cacheKey: request.url }, }); }, } satisfies ExportedHandler; ``` </TabItem> </Tabs> Workers operating on behalf of different zones cannot affect each other's cache. You can only override cache keys when making requests within your own zone (in the above example `event.request.url` was the key stored), or requests to hosts that are not on Cloudflare. When making a request to another Cloudflare zone (for example, belonging to a different Cloudflare customer), that zone fully controls how its own content is cached within Cloudflare; you cannot override it. ## Override based on origin response code ```js // Force response to be cached for 86400 seconds for 200 status // codes, 1 second for 404, and do not cache 500 errors. fetch(request, { cf: { cacheTtlByStatus: { "200-299": 86400, 404: 1, "500-599": 0 } }, }); ``` This option is a version of the `cacheTtl` feature which chooses a TTL based on the response's status code and does not automatically set `cacheEverything: true`. 
If the response to this request has a status code that matches, Cloudflare will cache for the instructed time, and override cache directives sent by the origin. You can review [details on the `cacheTtl` feature on the Request page](/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties). ## Customize cache behavior based on request file type Using custom cache keys and overrides based on response code, you can write a Worker that sets the TTL based on the response status code from the origin and the request file type. The following example demonstrates how you might use this to cache requests for streaming media assets: <Tabs syncKey="workersExamples"> <TabItem label="Module Worker" icon="seti:javascript"> ```js title="index.js" export default { async fetch(request) { // Instantiate new URL to make it mutable const newRequest = new URL(request.url); const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`; const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`; // Different asset types usually have different caching strategies. Most of the time, media content such as audio, videos, and images that is not user-generated does not need to be updated often, so a long TTL works best. However, with HLS streaming, manifest files are usually set with short TTLs so that playback will not be affected, as these files contain the data that the player needs.
By setting each caching strategy for categories of asset types in an object within an array, you can solve complex needs when it comes to media content for your application const cacheAssets = [ { asset: "video", key: customCacheKey, regex: /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "image", key: queryCacheKey, regex: /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "frontEnd", key: queryCacheKey, regex: /^.*\.(css|js)/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "audio", key: customCacheKey, regex: /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "directPlay", key: customCacheKey, regex: /.*(\/Download)/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "manifest", key: customCacheKey, regex: /^.*\.(m3u8|mpd)/, info: 0, ok: 3, redirects: 2, clientError: 1, serverError: 0, }, ]; const { asset, regex, ...cache } = cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? 
{}; const newResponse = await fetch(request, { cf: { cacheKey: cache.key, polish: false, cacheEverything: true, cacheTtlByStatus: { "100-199": cache.info, "200-299": cache.ok, "300-399": cache.redirects, "400-499": cache.clientError, "500-599": cache.serverError, }, cacheTags: ["static"], }, }); const response = new Response(newResponse.body, newResponse); // For debugging purposes response.headers.set("debug", JSON.stringify(cache)); return response; }, }; ``` </TabItem> <TabItem label="Service Worker" icon="seti:javascript"> ```js title="index.js" addEventListener("fetch", (event) => { return event.respondWith(handleRequest(event.request)); }); async function handleRequest(request) { // Instantiate new URL to make it mutable const newRequest = new URL(request.url); // Set `const` to be used in the array later on const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`; const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`; // Set all variables needed to manipulate Cloudflare's cache using the fetch API in the `cf` object. You will be passing these variables in the objects down below. 
const cacheAssets = [ { asset: "video", key: customCacheKey, regex: /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "image", key: queryCacheKey, regex: /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "frontEnd", key: queryCacheKey, regex: /^.*\.(css|js)/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "audio", key: customCacheKey, regex: /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|opus|ape|wma|3gp))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "directPlay", key: customCacheKey, regex: /.*(\/Download)/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "manifest", key: customCacheKey, regex: /^.*\.(m3u8|mpd)/, info: 0, ok: 3, redirects: 2, clientError: 1, serverError: 0, }, ]; // The `.find` method returns the first element in the `cacheAssets` array whose `regex` matches the request pathname (via `.match`), since there are many media types in the array. If you want to cache more types, update the array. Refer to https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find for more information. const { asset, regex, ...cache } = cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ??
{}; const newResponse = await fetch(request, { cf: { cacheKey: cache.key, polish: false, cacheEverything: true, cacheTtlByStatus: { "100-199": cache.info, "200-299": cache.ok, "300-399": cache.redirects, "400-499": cache.clientError, "500-599": cache.serverError, }, cacheTags: ["static"], }, }); const response = new Response(newResponse.body, newResponse); // For debugging purposes response.headers.set("debug", JSON.stringify(cache)); return response; } ``` </TabItem> </Tabs> ## Using the HTTP Cache API The `cache` mode can be set in `fetch` options. Currently Workers only support the `no-store` mode for controlling the cache. When `no-store` is supplied the cache is bypassed on the way to the origin and the request is not cacheable. ```js fetch(request, { cache: 'no-store'}); ``` --- # CORS header proxy URL: https://developers.cloudflare.com/workers/examples/cors-header-proxy/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const corsHeaders = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", }; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const PROXY_ENDPOINT = "/corsproxy/"; // The rest of this snippet for the demo page function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } const DEMO_PAGE = ` <!DOCTYPE html> <html> <body> <h1>API GET without CORS Proxy</h1> <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a> <p id="noproxy-status"/> <code id="noproxy">Waiting</code> 
<h1>API GET with CORS Proxy</h1> <p id="proxy-status"/> <code id="proxy">Waiting</code> <h1>API POST with CORS Proxy + Preflight</h1> <p id="proxypreflight-status"/> <code id="proxypreflight">Waiting</code> <script> let reqs = {}; reqs.noproxy = () => { return fetch("${API_URL}").then(r => r.json()) } reqs.proxy = async () => { let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}" return fetch(window.location.origin + href).then(r => r.json()) } reqs.proxypreflight = async () => { let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}" let response = await fetch(window.location.origin + href, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ msg: "Hello world!" }) }) return response.json() } (async () => { for (const [reqName, req] of Object.entries(reqs)) { try { let data = await req() document.getElementById(reqName).innerHTML = JSON.stringify(data) } catch (e) { document.getElementById(reqName).innerHTML = e } } })() </script> </body> </html> `; async function handleRequest(request) { const url = new URL(request.url); let apiUrl = url.searchParams.get("apiurl"); if (apiUrl == null) { apiUrl = API_URL; } // Rewrite request to point to API URL. This also makes the request mutable // so you can add the correct Origin header to make the API server think // that this request is not cross-site. 
request = new Request(apiUrl, request); request.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(request); // Recreate the response so you can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; } async function handleOptions(request) { if ( request.headers.get("Origin") !== null && request.headers.get("Access-Control-Request-Method") !== null && request.headers.get("Access-Control-Request-Headers") !== null ) { // Handle CORS preflight requests. return new Response(null, { headers: { ...corsHeaders, "Access-Control-Allow-Headers": request.headers.get( "Access-Control-Request-Headers", ), }, }); } else { // Handle standard OPTIONS request. return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } const url = new URL(request.url); if (url.pathname.startsWith(PROXY_ENDPOINT)) { if (request.method === "OPTIONS") { // Handle CORS preflight requests return handleOptions(request); } else if ( request.method === "GET" || request.method === "HEAD" || request.method === "POST" ) { // Handle requests to the API server return handleRequest(request); } else { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); } } else { return rawHtmlResponse(DEMO_PAGE); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const corsHeaders = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", }; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const 
PROXY_ENDPOINT = "/corsproxy/"; // The rest of this snippet for the demo page function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } const DEMO_PAGE = ` <!DOCTYPE html> <html> <body> <h1>API GET without CORS Proxy</h1> <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a> <p id="noproxy-status"/> <code id="noproxy">Waiting</code> <h1>API GET with CORS Proxy</h1> <p id="proxy-status"/> <code id="proxy">Waiting</code> <h1>API POST with CORS Proxy + Preflight</h1> <p id="proxypreflight-status"/> <code id="proxypreflight">Waiting</code> <script> let reqs = {}; reqs.noproxy = () => { return fetch("${API_URL}").then(r => r.json()) } reqs.proxy = async () => { let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}" return fetch(window.location.origin + href).then(r => r.json()) } reqs.proxypreflight = async () => { let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}" let response = await fetch(window.location.origin + href, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ msg: "Hello world!" }) }) return response.json() } (async () => { for (const [reqName, req] of Object.entries(reqs)) { try { let data = await req() document.getElementById(reqName).textContent = JSON.stringify(data) } catch (e) { document.getElementById(reqName).textContent = e } } })() </script> </body> </html> `; async function handleRequest(request) { const url = new URL(request.url); let apiUrl = url.searchParams.get("apiurl"); if (apiUrl == null) { apiUrl = API_URL; } // Rewrite request to point to API URL. This also makes the request mutable // so you can add the correct Origin header to make the API server think // that this request is not cross-site. 
request = new Request(apiUrl, request); request.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(request); // Recreate the response so you can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; } async function handleOptions(request) { if ( request.headers.get("Origin") !== null && request.headers.get("Access-Control-Request-Method") !== null && request.headers.get("Access-Control-Request-Headers") !== null ) { // Handle CORS preflight requests. return new Response(null, { headers: { ...corsHeaders, "Access-Control-Allow-Headers": request.headers.get( "Access-Control-Request-Headers", ), }, }); } else { // Handle standard OPTIONS request. return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } const url = new URL(request.url); if (url.pathname.startsWith(PROXY_ENDPOINT)) { if (request.method === "OPTIONS") { // Handle CORS preflight requests return handleOptions(request); } else if ( request.method === "GET" || request.method === "HEAD" || request.method === "POST" ) { // Handle requests to the API server return handleRequest(request); } else { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); } } else { return rawHtmlResponse(DEMO_PAGE); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, fetch, Object, Request def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): cors_headers = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", } api_url = "https://examples.cloudflareworkers.com/demos/demoapi" proxy_endpoint = 
"/corsproxy/" def raw_html_response(html): return Response.new(html, headers=to_js({"content-type": "text/html;charset=UTF-8"})) demo_page = f''' <!DOCTYPE html> <html> <body> <h1>API GET without CORS Proxy</h1> <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a> <p id="noproxy-status"/> <code id="noproxy">Waiting</code> <h1>API GET with CORS Proxy</h1> <p id="proxy-status"/> <code id="proxy">Waiting</code> <h1>API POST with CORS Proxy + Preflight</h1> <p id="proxypreflight-status"/> <code id="proxypreflight">Waiting</code> <script> let reqs = {{}}; reqs.noproxy = () => {{ return fetch("{api_url}").then(r => r.json()) }} reqs.proxy = async () => {{ let href = "{proxy_endpoint}?apiurl={api_url}" return fetch(window.location.origin + href).then(r => r.json()) }} reqs.proxypreflight = async () => {{ let href = "{proxy_endpoint}?apiurl={api_url}" let response = await fetch(window.location.origin + href, {{ method: "POST", headers: {{ "Content-Type": "application/json" }}, body: JSON.stringify({{ msg: "Hello world!" 
}}) }}) return response.json() }} (async () => {{ for (const [reqName, req] of Object.entries(reqs)) {{ try {{ let data = await req() document.getElementById(reqName).innerHTML = JSON.stringify(data) }} catch (e) {{ document.getElementById(reqName).innerHTML = e }} }} }})() </script> </body> </html> ''' async def handle_request(request): url = URL.new(request.url) api_url2 = url.searchParams["apiurl"] if not api_url2: api_url2 = api_url request = Request.new(api_url2, request) request.headers["Origin"] = (URL.new(api_url2)).origin print(request.headers) response = await fetch(request) response = Response.new(response.body, response) response.headers["Access-Control-Allow-Origin"] = url.origin response.headers["Vary"] = "Origin" return response async def handle_options(request): if "Origin" in request.headers and "Access-Control-Request-Method" in request.headers and "Access-Control-Request-Headers" in request.headers: return Response.new(None, headers=to_js({ **cors_headers, "Access-Control-Allow-Headers": request.headers["Access-Control-Request-Headers"] })) return Response.new(None, headers=to_js({"Allow": "GET, HEAD, POST, OPTIONS"})) url = URL.new(request.url) if url.pathname.startswith(proxy_endpoint): if request.method == "OPTIONS": return handle_options(request) if request.method in ("GET", "HEAD", "POST"): return handle_request(request) return Response.new(None, status=405, statusText="Method Not Allowed") return raw_html_response(demo_page) ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use std::{borrow::Cow, collections::HashMap}; use worker::*; fn raw_html_response(html: &str) -> Result<Response> { Response::from_html(html) } async fn handle_request(req: Request, api_url: &str) -> Result<Response> { let url = req.url().unwrap(); let mut api_url2 = url .query_pairs() .find(|x| x.0 == Cow::Borrowed("apiurl")) .unwrap() .1 .to_string(); if api_url2 == String::from("") { api_url2 = api_url.to_string(); } let mut request = req.clone_mut()?; 
*request.path_mut()? = api_url2.clone(); if let url::Origin::Tuple(origin, _, _) = Url::parse(&api_url2)?.origin() { (*request.headers_mut()?).set("Origin", &origin)?; } let mut response = Fetch::Request(request).send().await?.cloned()?; let headers = response.headers_mut(); if let url::Origin::Tuple(origin, _, _) = url.origin() { headers.set("Access-Control-Allow-Origin", &origin)?; headers.set("Vary", "Origin")?; } Ok(response) } fn handle_options(req: Request, cors_headers: &HashMap<&str, &str>) -> Result<Response> { let headers: Vec<_> = req.headers().keys().collect(); if [ "access-control-request-method", "access-control-request-headers", "origin", ] .iter() .all(|i| headers.contains(&i.to_string())) { let mut headers = Headers::new(); for (k, v) in cors_headers.iter() { headers.set(k, v)?; } return Ok(Response::empty()?.with_headers(headers)); } Response::empty() } #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let cors_headers = HashMap::from([ ("Access-Control-Allow-Origin", "*"), ("Access-Control-Allow-Methods", "GET,HEAD,POST,OPTIONS"), ("Access-Control-Max-Age", "86400"), ]); let api_url = "https://examples.cloudflareworkers.com/demos/demoapi"; let proxy_endpoint = "/corsproxy/"; let demo_page = format!( r#" <!DOCTYPE html> <html> <body> <h1>API GET without CORS Proxy</h1> <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a> <p id="noproxy-status"/> <code id="noproxy">Waiting</code> <h1>API GET with CORS Proxy</h1> <p id="proxy-status"/> <code id="proxy">Waiting</code> <h1>API POST with CORS Proxy + Preflight</h1> <p id="proxypreflight-status"/> <code id="proxypreflight">Waiting</code> <script> let reqs = {{}}; reqs.noproxy = () => {{ return fetch("{api_url}").then(r => r.json()) }} reqs.proxy = async () => {{ let href = "{proxy_endpoint}?apiurl={api_url}" return 
fetch(window.location.origin + href).then(r => r.json()) }} reqs.proxypreflight = async () => {{ let href = "{proxy_endpoint}?apiurl={api_url}" let response = await fetch(window.location.origin + href, {{ method: "POST", headers: {{ "Content-Type": "application/json" }}, body: JSON.stringify({{ msg: "Hello world!" }}) }}) return response.json() }} (async () => {{ for (const [reqName, req] of Object.entries(reqs)) {{ try {{ let data = await req() document.getElementById(reqName).innerHTML = JSON.stringify(data) }} catch (e) {{ document.getElementById(reqName).innerHTML = e }} }} }})() </script> </body> </html> "# ); if req.url()?.path().starts_with(proxy_endpoint) { match req.method() { Method::Options => return handle_options(req, &cors_headers), Method::Get | Method::Head | Method::Post => return handle_request(req, api_url).await, _ => return Response::error("Method Not Allowed", 405), } } raw_html_response(&demo_page) } ``` </TabItem> </Tabs> --- # Country code redirect URL: https://developers.cloudflare.com/workers/examples/country-code-redirect/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * A map of the URLs to redirect to * @param {Object} countryMap */ const countryMap = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; // Remove this logging statement from your final output. 
console.log( `Based on ${country}-based request, your user would go to ${url}.`, ); return Response.redirect(url); } else { return fetch("https://example.com", request); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * A map of the URLs to redirect to * @param {Object} countryMap */ const countryMap = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; return Response.redirect(url); } else { return fetch(request); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch async def on_fetch(request): countries = { "US": "https://example.com/us", "EU": "https://example.com/eu", } # Use the cf object to obtain the country of the request # more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties country = request.cf.country if country and country in countries: url = countries[country] return Response.redirect(url) return fetch("https://example.com", request) ``` </TabItem> </Tabs> --- # Setting Cron Triggers URL: https://developers.cloudflare.com/workers/examples/cron-trigger/ import { TabItem, Tabs, WranglerConfig } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async scheduled(event, env, ctx) { console.log("cron processed"); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); }, }; ``` 
</TabItem> </Tabs> ## Set Cron Triggers in Wrangler Refer to [Cron Triggers](/workers/configuration/cron-triggers/) for more information on how to add a Cron Trigger. If you are deploying with Wrangler, set the cron syntax (once per hour as shown below) by adding this to your Wrangler file: <WranglerConfig> ```toml name = "worker" # ... [triggers] crons = ["0 * * * *"] ``` </WranglerConfig> You can also set a different Cron Trigger for each [environment](/workers/wrangler/environments/) in your [Wrangler configuration file](/workers/wrangler/configuration/). You need to put the `[triggers]` table under your chosen environment. For example: <WranglerConfig> ```toml [env.dev.triggers] crons = ["0 * * * *"] ``` </WranglerConfig> ## Test Cron Triggers using Wrangler The recommended way to test Cron Triggers is with Wrangler: pass the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This exposes a `/__scheduled` route which can be used for testing with an HTTP request. To simulate different cron patterns, pass in a `cron` query parameter. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=0+*+*+*+*" ``` --- # Data loss prevention URL: https://developers.cloudflare.com/workers/examples/data-loss-prevention/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } /** * Define personal data with regular expressions.
* Respond with block if credit card data, and strip * emails and phone numbers from the response. * Execution will be limited to MIME type "text/*". */ const response = await fetch(request); // Return origin response, if response wasn’t text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } let text = await response.text(); // When debugging replace the response // from the origin with an email text = DEBUG ? text.replace("You may use this", "me@example.com may use this") : text; const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = await sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(request); // Respond with a block if credit card, // otherwise replace sensitive text with `*`s return kind === "creditCard" ? 
new Response(kind + " found\nForbidden\n", { status: 403, statusText: "Forbidden", }) : new Response(text.replace(sensitiveRegex, "**********"), response); } } return new Response(text, response); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } /** * Define personal data with regular expressions. * Respond with block if credit card data, and strip * emails and phone numbers from the response. * Execution will be limited to MIME type "text/*". */ const response = await fetch(request); // Return origin response, if response wasn’t text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } let text = await response.text(); // When debugging replace the response // from the origin with an email text = DEBUG ? 
text.replace("You may use this", "me@example.com may use this") : text; const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = await sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(request); // Respond with a block if credit card, // otherwise replace sensitive text with `*`s return kind === "creditCard" ? new Response(kind + " found\nForbidden\n", { status: 403, statusText: "Forbidden", }) : new Response(text.replace(sensitiveRegex, "**********"), response); } } return new Response(text, response); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import re from datetime import datetime from js import Response, fetch, JSON, Headers # Alert a data breach by posting to a webhook server async def post_data_breach(request): some_hook_server = "https://webhook.flow-wolf.io/hook" headers = Headers.new({"content-type": "application/json"}.items()) body = JSON.stringify({ "ip": request.headers["cf-connecting-ip"], "time": datetime.now(), "request": request, }) return await fetch(some_hook_server, method="POST", headers=headers, body=body) async def on_fetch(request): debug = True # Define personal data with regular expressions. # Respond with block if credit card data, and strip # emails and phone numbers from the response. # Execution will be limited to MIME type "text/*". 
response = await fetch(request) # Return origin response, if response wasn’t text content_type = response.headers["content-type"] or "" if "text" not in content_type: return response text = await response.text() # When debugging replace the response from the origin with an email text = text.replace("You may use this", "me@example.com may use this") if debug else text sensitive_regex = [ ("credit_card", r'\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b'), ("email", r'\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b'), ("phone", r'\b07\d{9}\b'), ] for (kind, regex) in sensitive_regex: match = re.search(regex, text, flags=re.IGNORECASE) if match: # Alert a data breach await post_data_breach(request) # Respond with a block if credit card, otherwise replace sensitive text with `*`s card_resp = Response.new(kind + " found\nForbidden\n", status=403,statusText="Forbidden") sensitive_resp = Response.new(re.sub(regex, "*"*10, text, flags=re.IGNORECASE), response) return card_resp if kind == "credit_card" else sensitive_resp return Response.new(text, response) ``` </TabItem> </Tabs> --- # Debugging logs URL: https://developers.cloudflare.com/workers/examples/debugging-logs/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { // Service configured to receive logs const LOG_URL = "https://log-service.example.com/"; async function postLog(data) { return await fetch(LOG_URL, { method: "POST", body: data, }); } let response; try { response = await fetch(request); if (!response.ok && !response.redirected) { const body = await response.text(); throw new Error( "Bad response at origin. 
Status: " + response.status + " Body: " + // Ensure the string is small enough to be a header body.trim().substring(0, 10), ); } } catch (err) { // Without ctx.waitUntil(), your fetch() to Cloudflare's // logging service may or may not complete ctx.waitUntil(postLog(err.toString())); const stack = JSON.stringify(err.stack) || err; // Copy the response and initialize body to the stack trace response = new Response(stack, response); // Add the error stack into a header to find out what happened response.headers.set("X-Debug-stack", stack); response.headers.set("X-Debug-err", err); } return response; }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env {} export default { async fetch(request, env, ctx): Promise<Response> { // Service configured to receive logs const LOG_URL = "https://log-service.example.com/"; async function postLog(data) { return await fetch(LOG_URL, { method: "POST", body: data, }); } let response; try { response = await fetch(request); if (!response.ok && !response.redirected) { const body = await response.text(); throw new Error( "Bad response at origin. 
Status: " + response.status + " Body: " + // Ensure the string is small enough to be a header body.trim().substring(0, 10), ); } } catch (err) { // Without ctx.waitUntil(), your fetch() to Cloudflare's // logging service may or may not complete ctx.waitUntil(postLog(err.toString())); const stack = JSON.stringify(err.stack) || err; // Copy the response and initialize body to the stack trace response = new Response(stack, response); // Add the error stack into a header to find out what happened response.headers.set("X-Debug-stack", stack); response.headers.set("X-Debug-err", err); } return response; }, } satisfies ExportedHandler<Env>; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import json import traceback from pyodide.ffi import create_once_callable from js import Response, fetch, Headers async def on_fetch(request, _env, ctx): # Service configured to receive logs log_url = "https://log-service.example.com/" async def post_log(data): return await fetch(log_url, method="POST", body=data) response = await fetch(request) try: if not response.ok and not response.redirected: body = await response.text() # Simulating an error. Ensure the string is small enough to be a header raise Exception(f'Bad response at origin. 
Status:{response.status} Body:{body.strip()[:10]}') except Exception as e: # Without ctx.waitUntil(), your fetch() to Cloudflare's # logging service may or may not complete ctx.waitUntil(create_once_callable(post_log(e))) stack = json.dumps(traceback.format_exc()) or e # Copy the response and add to header response = Response.new(stack, response) response.headers["X-Debug-stack"] = stack response.headers["X-Debug-err"] = e return response ``` </TabItem> </Tabs> --- # Cookie parsing URL: https://developers.cloudflare.com/workers/examples/extract-cookie-value/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js import { parse } from "cookie"; export default { async fetch(request) { // The name of the cookie const COOKIE_NAME = "__uid"; const cookie = parse(request.headers.get("Cookie") || ""); if (cookie[COOKIE_NAME] != null) { // Respond with the cookie value return new Response(cookie[COOKIE_NAME]); } return new Response("No cookie with name: " + COOKIE_NAME); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts import { parse } from "cookie"; export default { async fetch(request): Promise<Response> { // The name of the cookie const COOKIE_NAME = "__uid"; const cookie = parse(request.headers.get("Cookie") || ""); if (cookie[COOKIE_NAME] != null) { // Respond with the cookie value return new Response(cookie[COOKIE_NAME]); } return new Response("No cookie with name: " + COOKIE_NAME); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from http.cookies import SimpleCookie from js import Response async def on_fetch(request): # Name of the cookie cookie_name = "__uid" cookies = SimpleCookie(request.headers["Cookie"] or "") if cookie_name in cookies: # Respond with cookie value return Response.new(cookies[cookie_name].value) return Response.new("No cookie with name: " + cookie_name) ``` </TabItem> </Tabs> 
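If you would rather not pull in the `cookie` package, the same lookup can be sketched with a hand-rolled parser. This is a minimal illustration, assuming only well-formed `name=value; name2=value2` pairs; the `parseCookies` helper below is hypothetical and does not cover every case the `cookie` package handles (quoted values, attribute edge cases):

```js
// Illustrative parser for a Cookie request header.
// Assumes simple "name=value; name2=value2" pairs; for full
// RFC 6265 handling, prefer the `cookie` npm package.
function parseCookies(header) {
	const cookies = {};
	for (const part of (header || "").split(";")) {
		const index = part.indexOf("=");
		if (index === -1) continue; // Skip malformed fragments
		const name = part.slice(0, index).trim();
		const value = part.slice(index + 1).trim();
		if (name) cookies[name] = decodeURIComponent(value);
	}
	return cookies;
}

const cookies = parseCookies("__uid=abc123; theme=dark");
console.log(cookies["__uid"]); // "abc123"
```

In a Worker, you would call it as `parseCookies(request.headers.get("Cookie"))` in place of the imported `parse` function.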
:::note[External dependencies] This example requires the npm package [`cookie`](https://www.npmjs.com/package/cookie) to be installed in your JavaScript project. ::: --- # Fetch HTML URL: https://developers.cloudflare.com/workers/examples/fetch-html/ import { Render, TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> <Render file="fetch-html-example-js" /> </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request: Request): Promise<Response> { /** * Replace `remote` with the host you wish to send requests to */ const remote = "https://example.com"; return await fetch(remote, request); }, }; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import fetch async def on_fetch(request): # Replace `remote` with the host you wish to send requests to remote = "https://example.com" return await fetch(remote, request) ``` </TabItem> </Tabs> --- # Geolocation: Weather application URL: https://developers.cloudflare.com/workers/examples/geolocation-app-weather/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { let endpoint = "https://api.waqi.info/feed/geo:"; const token = ""; //Use a token from https://aqicn.org/api/ let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`; let html_content = "<h1>Weather 🌦</h1>"; const latitude = request.cf.latitude; const longitude = request.cf.longitude; endpoint += `${latitude};${longitude}/?token=${token}`; const init = { headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(endpoint, init); const content = await response.json(); html_content += `<p>This is a demo using Workers geolocation data. 
</p>`; html_content += `<p>You are located at: ${latitude},${longitude}.</p>`; html_content += `<p>Based on sensor data from <a href="${content.data.city.url}">${content.data.city.name}</a>:</p>`; html_content += `<p>The AQI level is: ${content.data.aqi}.</p>`; html_content += `<p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>`; html_content += `<p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>`; html_content += `<p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>`; let html = ` <!DOCTYPE html> <head> <title>Geolocation: Weather</title> </head> <body> <style>${html_style}</style> <div id="container"> ${html_content} </div> </body>`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { let endpoint = "https://api.waqi.info/feed/geo:"; const token = ""; //Use a token from https://aqicn.org/api/ let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`; let html_content = "<h1>Weather 🌦</h1>"; const latitude = request.cf.latitude; const longitude = request.cf.longitude; endpoint += `${latitude};${longitude}/?token=${token}`; const init = { headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(endpoint, init); const content = await response.json(); html_content += `<p>This is a demo using Workers geolocation data. 
</p>`; html_content += `<p>You are located at: ${latitude},${longitude}.</p>`; html_content += `<p>Based on sensor data from <a href="${content.data.city.url}">${content.data.city.name}</a>:</p>`; html_content += `<p>The AQI level is: ${content.data.aqi}.</p>`; html_content += `<p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>`; html_content += `<p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>`; html_content += `<p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>`; let html = ` <!DOCTYPE html> <head> <title>Geolocation: Weather</title> </head> <body> <style>${html_style}</style> <div id="container"> ${html_content} </div> </body>`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, Headers, fetch async def on_fetch(request): endpoint = "https://api.waqi.info/feed/geo:" token = "" # Use a token from https://aqicn.org/api/ html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}" html_content = "<h1>Weather 🌦</h1>" latitude = request.cf.latitude longitude = request.cf.longitude endpoint += f"{latitude};{longitude}/?token={token}" response = await fetch(endpoint) content = await response.json() html_content += "<p>This is a demo using Workers geolocation data. 
</p>" html_content += f"You are located at: {latitude},{longitude}.</p>" html_content += f"<p>Based off sensor data from <a href='{content.data.city.url}'>{content.data.city.name}</a>:</p>" html_content += f"<p>The AQI level is: {content.data.aqi}.</p>" html_content += f"<p>The N02 level is: {content.data.iaqi.no2.v}.</p>" html_content += f"<p>The O3 level is: {content.data.iaqi.o3.v}.</p>" html_content += f"<p>The temperature is: {content.data.iaqi.t.v}°C.</p>" html = f""" <!DOCTYPE html> <head> <title>Geolocation: Weather</title> </head> <body> <style>{html_style}</style> <div id="container"> {html_content} </div> </body> """ headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items()) return Response.new(html, headers=headers) ``` </TabItem> </Tabs> --- # Fetch JSON URL: https://developers.cloudflare.com/workers/examples/fetch-json/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { const url = "https://jsonplaceholder.typicode.com/todos/1"; // gatherResponse returns both content-type & response body as a string async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } return { contentType, result: response.text() }; } const response = await fetch(url); const { contentType, result } = await gatherResponse(response); const options = { headers: { "content-type": contentType } }; return new Response(result, options); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env {} export default { async fetch(request, env, ctx): Promise<Response> { const url = "https://jsonplaceholder.typicode.com/todos/1"; // gatherResponse returns both content-type & response body as a string async function 
gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } return { contentType, result: await response.text() }; } const response = await fetch(url); const { contentType, result } = await gatherResponse(response); const options = { headers: { "content-type": contentType } }; return new Response(result, options); }, } satisfies ExportedHandler<Env>; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch, Headers, JSON async def on_fetch(request): url = "https://jsonplaceholder.typicode.com/todos/1" # gather_response returns both content-type & response body as a string async def gather_response(response): headers = response.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return (content_type, JSON.stringify(await response.json())) return (content_type, await response.text()) response = await fetch(url) content_type, result = await gather_response(response) headers = Headers.new({"content-type": content_type}.items()) return Response.new(result, headers=headers) ``` </TabItem> </Tabs> --- # Geolocation: Hello World URL: https://developers.cloudflare.com/workers/examples/geolocation-hello-world/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { let html_content = ""; let html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}"; html_content += "<p> Colo: " + request.cf.colo + "</p>"; html_content += "<p> Country: " + request.cf.country + "</p>"; html_content += "<p> City: " + request.cf.city + "</p>"; html_content += "<p> Continent: " + request.cf.continent + "</p>"; html_content += "<p> Latitude: " + request.cf.latitude + "</p>"; html_content += "<p> Longitude: " + 
request.cf.longitude + "</p>"; html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>"; html_content += "<p> MetroCode: " + request.cf.metroCode + "</p>"; html_content += "<p> Region: " + request.cf.region + "</p>"; html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>"; html_content += "<p> Timezone: " + request.cf.timezone + "</p>"; let html = `<!DOCTYPE html> <head> <title> Geolocation: Hello World </title> <style> ${html_style} </style> </head> <body> <h1>Geolocation: Hello World!</h1> <p>You now have access to geolocation data about where your user is visiting from.</p> ${html_content} </body>`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { let html_content = ""; let html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}"; html_content += "<p> Colo: " + request.cf.colo + "</p>"; html_content += "<p> Country: " + request.cf.country + "</p>"; html_content += "<p> City: " + request.cf.city + "</p>"; html_content += "<p> Continent: " + request.cf.continent + "</p>"; html_content += "<p> Latitude: " + request.cf.latitude + "</p>"; html_content += "<p> Longitude: " + request.cf.longitude + "</p>"; html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>"; html_content += "<p> MetroCode: " + request.cf.metroCode + "</p>"; html_content += "<p> Region: " + request.cf.region + "</p>"; html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>"; html_content += "<p> Timezone: " + request.cf.timezone + "</p>"; let html = `<!DOCTYPE html> <head> <title> Geolocation: Hello World </title> <style> ${html_style} </style> </head> <body> <h1>Geolocation: Hello World!</h1> <p>You now have access to geolocation data about where your user is visiting from.</p> ${html_content} </body>`; return new Response(html, { headers: { "content-type": 
"text/html;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, Headers async def on_fetch(request): html_content = "" html_style = "body{padding:6em font-family: sans-serif;} h1{color:#f6821f;}" html_content += "<p> Colo: " + request.cf.colo + "</p>" html_content += "<p> Country: " + request.cf.country + "</p>" html_content += "<p> City: " + request.cf.city + "</p>" html_content += "<p> Continent: " + request.cf.continent + "</p>" html_content += "<p> Latitude: " + request.cf.latitude + "</p>" html_content += "<p> Longitude: " + request.cf.longitude + "</p>" html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>" html_content += "<p> Region: " + request.cf.region + "</p>" html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>" html_content += "<p> Timezone: " + request.cf.timezone + "</p>" html = f""" <!DOCTYPE html> <head> <title> Geolocation: Hello World </title> <style> {html_style} </style> </head> <body> <h1>Geolocation: Hello World!</h1> <p>You now have access to geolocation data about where your user is visiting from.</p> {html_content} </body> """ headers = Headers.new({"content-type": "text/htmlcharset=UTF-8"}.items()) return Response.new(html, headers=headers) ``` </TabItem> </Tabs> --- # Geolocation: Custom Styling URL: https://developers.cloudflare.com/workers/examples/geolocation-custom-styling/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { let grads = [ [ { color: "00000c", position: 0 }, { color: "00000c", position: 0 }, ], [ { color: "020111", position: 85 }, { color: "191621", position: 100 }, ], [ { color: "020111", position: 60 }, { color: "20202c", position: 100 }, ], [ { color: "020111", position: 10 }, { color: "3a3a52", position: 100 }, ], [ { color: "20202c", position: 0 }, { color: "515175", 
position: 100 }, ], [ { color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }, ], [ { color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }, ], [ { color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }, ], [ { color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }, ], [ { color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }, ], [ { color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }, ], [ { color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }, ], [ { color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }, ], [ { color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }, ], [ { color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }, ], [ { color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }, ], [ { color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }, ], [ { color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }, ], [ { color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "090401", position: 50 }, { color: "4B1D06", position: 100 }, ], [ { color: "00000c", position: 80 }, { color: "150800", position: 100 }, ], ]; async function 
toCSSGradient(hour) { let css = "linear-gradient(to bottom,"; const data = grads[hour]; const len = data.length; for (let i = 0; i < len; i++) { const item = data[i]; css += ` #${item.color} ${item.position}%`; if (i < len - 1) css += ","; } return css + ")"; } let html_content = ""; let html_style = ` html{width:100vw; height:100vh;} body{padding:0; margin:0 !important;height:100%;} #container { display: flex; flex-direction:column; align-items: center; justify-content: center; height: 100%; color:white; font-family:sans-serif; }`; const timezone = request.cf.timezone; console.log(timezone); let localized_date = new Date( new Date().toLocaleString("en-US", { timeZone: timezone }), ); let hour = localized_date.getHours(); let minutes = localized_date.getMinutes(); html_content += "<h1>" + hour + ":" + minutes + "</h1>"; html_content += "<p>" + timezone + "<br/></p>"; html_style += "body{background:" + (await toCSSGradient(hour)) + ";}"; let html = ` <!DOCTYPE html> <head> <title>Geolocation: Customized Design</title> </head> <body> <style> ${html_style}</style> <div id="container"> ${html_content} </div> </body>`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8" }, }); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { let grads = [ [ { color: "00000c", position: 0 }, { color: "00000c", position: 0 }, ], [ { color: "020111", position: 85 }, { color: "191621", position: 100 }, ], [ { color: "020111", position: 60 }, { color: "20202c", position: 100 }, ], [ { color: "020111", position: 10 }, { color: "3a3a52", position: 100 }, ], [ { color: "20202c", position: 0 }, { color: "515175", position: 100 }, ], [ { color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }, ], [ { color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }, ], [ { color: "757abf", position: 
0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }, ], [ { color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }, ], [ { color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }, ], [ { color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }, ], [ { color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }, ], [ { color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }, ], [ { color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }, ], [ { color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }, ], [ { color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }, ], [ { color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }, ], [ { color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }, ], [ { color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "090401", position: 50 }, { color: "4B1D06", position: 100 }, ], [ { color: "00000c", position: 80 }, { color: "150800", position: 100 }, ], ]; async function toCSSGradient(hour) { let css = "linear-gradient(to bottom,"; const data = grads[hour]; const len = data.length; for (let i = 0; i < len; i++) { const item = data[i]; css += ` #${item.color} ${item.position}%`; if (i < len - 1) css += ","; } return css + ")"; } let html_content 
= ""; let html_style = ` html{width:100vw; height:100vh;} body{padding:0; margin:0 !important;height:100%;} #container { display: flex; flex-direction:column; align-items: center; justify-content: center; height: 100%; color:white; font-family:sans-serif; }`; const timezone = request.cf.timezone; console.log(timezone); let localized_date = new Date( new Date().toLocaleString("en-US", { timeZone: timezone }), ); let hour = localized_date.getHours(); let minutes = localized_date.getMinutes(); html_content += "<h1>" + hour + ":" + minutes + "</h1>"; html_content += "<p>" + timezone + "<br/></p>"; html_style += "body{background:" + (await toCSSGradient(hour)) + ";}"; let html = ` <!DOCTYPE html> <head> <title>Geolocation: Customized Design</title> </head> <body> <style> ${html_style}</style> <div id="container"> ${html_content} </div> </body>`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8" }, }); }, } satisfies ExportedHandler; ``` </TabItem> </Tabs> --- # Hot-link protection URL: https://developers.cloudflare.com/workers/examples/hot-link-protection/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/"; const PROTECTED_TYPE = "image/"; // Fetch the original request const response = await fetch(request); // If it's an image, engage hotlink protection based on the // Referer header. const referer = request.headers.get("Referer"); const contentType = response.headers.get("Content-Type") || ""; if (referer && contentType.startsWith(PROTECTED_TYPE)) { // If the hostnames don't match, it's a hotlink if (new URL(referer).hostname !== new URL(request.url).hostname) { // Redirect the user to your website return Response.redirect(HOMEPAGE_URL, 302); } } // Everything is fine, return the response normally. 
return response; }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/"; const PROTECTED_TYPE = "image/"; // Fetch the original request const response = await fetch(request); // If it's an image, engage hotlink protection based on the // Referer header. const referer = request.headers.get("Referer"); const contentType = response.headers.get("Content-Type") || ""; if (referer && contentType.startsWith(PROTECTED_TYPE)) { // If the hostnames don't match, it's a hotlink if (new URL(referer).hostname !== new URL(request.url).hostname) { // Redirect the user to your website return Response.redirect(HOMEPAGE_URL, 302); } } // Everything is fine, return the response normally. return response; }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, URL, fetch async def on_fetch(request): homepage_url = "https://tutorial.cloudflareworkers.com/" protected_type = "image/" # Fetch the original request response = await fetch(request) # If it's an image, engage hotlink protection based on the referer header referer = request.headers["Referer"] content_type = response.headers["Content-Type"] or "" if referer and content_type.startswith(protected_type): # If the hostnames don't match, it's a hotlink if URL.new(referer).hostname != URL.new(request.url).hostname: # Redirect the user to your website return Response.redirect(homepage_url, 302) # Everything is fine, return the response normally return response ``` </TabItem> </Tabs> --- # Custom Domain with Images URL: https://developers.cloudflare.com/workers/examples/images-workers/ import { TabItem, Tabs } from "~/components"; To serve images from a custom domain: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com). 2. Select your account > select **Workers & Pages**. 3. 
Select **Create application** > **Workers** > **Create Worker** and create your Worker. 4. In your Worker, select **Quick edit** and paste the following code. <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA const accountHash = ""; const { pathname } = new URL(request.url); // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public // will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public" return fetch(`https://imagedelivery.net/${accountHash}${pathname}`); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA const accountHash = ""; const { pathname } = new URL(request.url); // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public // will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public" return fetch(`https://imagedelivery.net/${accountHash}${pathname}`); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import URL, fetch async def on_fetch(request): # You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA account_hash = "" url = URL.new(request.url) # A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public # will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public" return fetch(f'https://imagedelivery.net/{account_hash}{url.pathname}') ``` </TabItem> </Tabs> Another way you can serve images from a custom domain is by using the `cdn-cgi/imagedelivery` prefix path which is used as path to trigger `cdn-cgi` 
image proxy. Below is an example: the hostname is a Cloudflare-proxied domain under the same account as the image, followed by the prefix path and the image's `<ACCOUNT_HASH>`, `<IMAGE_ID>`, and `<VARIANT_NAME>`, which can be found under **Images** in the Cloudflare dashboard. ```js https://example.com/cdn-cgi/imagedelivery/<ACCOUNT_HASH>/<IMAGE_ID>/<VARIANT_NAME> ``` --- # Examples URL: https://developers.cloudflare.com/workers/examples/ import { GlossaryTooltip, ListExamples } from "~/components"; :::note [Explore our community-written tutorials contributed through the Developer Spotlight program.](/developer-spotlight/) ::: Explore the following <GlossaryTooltip term="code example">examples</GlossaryTooltip> for Workers. <ListExamples directory="workers/examples/" filters={["languages", "tags"]} /> --- # Modify request property URL: https://developers.cloudflare.com/workers/examples/modify-request-property/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * Example someHost is set up to return raw JSON * @param {string} someUrl the URL to send the request to; since the hostname is also set, only the path is applied * @param {string} someHost the host the request will resolve to */ const someHost = "example.com"; const someUrl = "https://foo.example.com/api.js"; /** * The best practice is to only assign new RequestInit properties * on the request object using either a method or the constructor */ const newRequestInit = { // Change method method: "POST", // Change body body: JSON.stringify({ bar: "foo" }), // Change the redirect mode. 
redirect: "follow", // Change headers, note this method will erase existing headers headers: { "Content-Type": "application/json", }, // Change a Cloudflare feature on the outbound response cf: { apps: false }, }; // Change just the host const url = new URL(someUrl); url.hostname = someHost; // Best practice is to always use the original request to construct the new request // to clone all the attributes. Applying the URL also requires a constructor // since once a Request has been constructed, its URL is immutable. const newRequest = new Request( url.toString(), new Request(request, newRequestInit), ); // Set headers using method newRequest.headers.set("X-Example", "bar"); newRequest.headers.set("Content-Type", "application/json"); try { return await fetch(newRequest); } catch (e) { return new Response(JSON.stringify({ error: e.message }), { status: 500, }); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * Example someHost is set up to return raw JSON * @param {string} someUrl the URL to send the request to, since we are setting hostname too only path is applied * @param {string} someHost the host the request will resolve too */ const someHost = "example.com"; const someUrl = "https://foo.example.com/api.js"; /** * The best practice is to only assign new RequestInit properties * on the request object using either a method or the constructor */ const newRequestInit = { // Change method method: "POST", // Change body body: JSON.stringify({ bar: "foo" }), // Change the redirect mode. 
redirect: "follow", // Change headers, note this method will erase existing headers headers: { "Content-Type": "application/json", }, // Change a Cloudflare feature on the outbound response cf: { apps: false }, }; // Change just the host const url = new URL(someUrl); url.hostname = someHost; // Best practice is to always use the original request to construct the new request // to clone all the attributes. Applying the URL also requires a constructor // since once a Request has been constructed, its URL is immutable. const newRequest = new Request( url.toString(), new Request(request, newRequestInit), ); // Set headers using method newRequest.headers.set("X-Example", "bar"); newRequest.headers.set("Content-Type", "application/json"); try { return await fetch(newRequest); } catch (e) { return new Response(JSON.stringify({ error: e.message }), { status: 500, }); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import json from pyodide.ffi import to_js as _to_js from js import Object, URL, Request, fetch, Response def to_js(obj): return _to_js(obj, dict_converter=Object.fromEntries) async def on_fetch(request): some_host = "example.com" some_url = "https://foo.example.com/api.js" # The best practice is to only assign new_request_init properties # on the request object using either a method or the constructor new_request_init = { "method": "POST", # Change method "body": json.dumps({ "bar": "foo" }), # Change body "redirect": "follow", # Change the redirect mode # Change headers, note this method will erase existing headers "headers": { "Content-Type": "application/json", }, # Change a Cloudflare feature on the outbound response "cf": { "apps": False }, } # Change just the host url = URL.new(some_url) url.hostname = some_host # Best practice is to always use the original request to construct the new request # to clone all the attributes. 
Applying the URL also requires a constructor # since once a Request has been constructed, its URL is immutable. org_request = Request.new(request, new_request_init) new_request = Request.new(url.toString(), org_request) new_request.headers["X-Example"] = "bar" new_request.headers["Content-Type"] = "application/json" try: return await fetch(new_request) except Exception as e: return Response.new(json.dumps({"error": str(e)}), status=500) ``` </TabItem> </Tabs> --- # Logging headers to console URL: https://developers.cloudflare.com/workers/examples/logging-headers/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { console.log(new Map(request.headers)); return new Response("Hello world"); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { console.log(new Map(request.headers)); return new Response("Hello world"); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response async def on_fetch(request): print(dict(request.headers)) return Response.new('Hello world') ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<Response> { console_log!("{:?}", req.headers()); Response::ok("hello world") } ``` </TabItem> </Tabs> --- ## Console-logging headers Use a `Map` if you need to log a `Headers` object to the console: ```js console.log(new Map(request.headers)); ``` Use the `spread` operator if you need to quickly stringify a `Headers` object: ```js let requestHeaders = JSON.stringify([...request.headers]); ``` Use `Object.fromEntries` to convert the headers to an object: ```js let requestHeaders = Object.fromEntries(request.headers); ``` ### The problem When debugging Workers, examine the headers on a request or 
response. A common mistake is to try to log headers to the developer console via code like this: ```js console.log(request.headers); ``` Or this: ```js console.log(`Request headers: ${JSON.stringify(request.headers)}`); ``` Both attempts result in what appears to be an empty object — the string `"{}"` — even though calling `request.headers.has("Your-Header-Name")` might return true. This is the same behavior that browsers implement. The reason this happens is because [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers) objects do not store headers in enumerable JavaScript properties, so the developer console and JSON stringifier do not know how to read the names and values of the headers. It is not actually an empty object, but rather an opaque object. `Headers` objects are iterable, which you can take advantage of to develop a couple of quick one-liners for debug-printing headers. ### Pass headers through a Map The first common idiom for making Headers `console.log()`-friendly is to construct a `Map` object from the `Headers` object and log the `Map` object. ```js console.log(new Map(request.headers)); ``` This works because: - `Map` objects can be constructed from iterables, like `Headers`. - The `Map` object does store its entries in enumerable JavaScript properties, so the developer console can see into it. ### Spread headers into an array The `Map` approach works for calls to `console.log()`. If you need to stringify your headers, you will discover that stringifying a `Map` yields nothing more than `[object Map]`. Even though a `Map` stores its data in enumerable properties, those properties are [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol)-keyed. Because of this, `JSON.stringify()` will [ignore Symbol-keyed properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol#symbols_and_json.stringify) and you will receive an empty `{}`. 
Instead, you can take advantage of the iterability of the `Headers` object in a new way by applying the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) (`...`) to it. ```js let requestHeaders = JSON.stringify([...request.headers], null, 2); console.log(`Request headers: ${requestHeaders}`); ``` ### Convert headers into an object with Object.fromEntries (ES2019) ES2019 provides [`Object.fromEntries`](https://github.com/tc39/proposal-object-from-entries), which converts the headers into a plain object: ```js let headersObject = Object.fromEntries(request.headers); let requestHeaders = JSON.stringify(headersObject, null, 2); console.log(`Request headers: ${requestHeaders}`); ``` This results in something like: ```js Request headers: { "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8", "accept-encoding": "gzip", "accept-language": "en-US,en;q=0.9", "cf-ipcountry": "US", // ... } ``` --- # Modify response URL: https://developers.cloudflare.com/workers/examples/modify-response/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * @param {string} headerNameSrc Header to get the new value from * @param {string} headerNameDst Header to set based off of value in src */ const headerNameSrc = "foo"; //"Orig-Header" const headerNameDst = "Last-Modified"; /** * Response properties are immutable. To change them, construct a new * Response and pass modified status or statusText in the ResponseInit * object. Response headers can be modified through the headers `set` method.
*/ const originalResponse = await fetch(request); // Change status and statusText, but preserve body and headers let response = new Response(originalResponse.body, { status: 500, statusText: "some message", headers: originalResponse.headers, }); // Change response body by adding the foo prop const originalBody = await originalResponse.json(); const body = JSON.stringify({ foo: "bar", ...originalBody }); response = new Response(body, response); // Add a header using set method response.headers.set("foo", "bar"); // Set destination header to the value of the source header const src = response.headers.get(headerNameSrc); if (src != null) { response.headers.set(headerNameDst, src); console.log( `Response header "${headerNameDst}" was set to "${response.headers.get( headerNameDst, )}"`, ); } return response; }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * @param {string} headerNameSrc Header to get the new value from * @param {string} headerNameDst Header to set based off of value in src */ const headerNameSrc = "foo"; //"Orig-Header" const headerNameDst = "Last-Modified"; /** * Response properties are immutable. To change them, construct a new * Response and pass modified status or statusText in the ResponseInit * object. Response headers can be modified through the headers `set` method. 
*/ const originalResponse = await fetch(request); // Change status and statusText, but preserve body and headers let response = new Response(originalResponse.body, { status: 500, statusText: "some message", headers: originalResponse.headers, }); // Change response body by adding the foo prop const originalBody = await originalResponse.json(); const body = JSON.stringify({ foo: "bar", ...originalBody }); response = new Response(body, response); // Add a header using set method response.headers.set("foo", "bar"); // Set destination header to the value of the source header const src = response.headers.get(headerNameSrc); if (src != null) { response.headers.set(headerNameDst, src); console.log( `Response header "${headerNameDst}" was set to "${response.headers.get( headerNameDst, )}"`, ); } return response; }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch, JSON async def on_fetch(request): header_name_src = "foo" # Header to get the new value from header_name_dst = "Last-Modified" # Header to set based off of value in src # Response properties are immutable. 
# To change them, construct a new response original_response = await fetch(request) # Change status and statusText, but preserve body and headers response = Response.new(original_response.body, status=500, statusText="some message", headers=original_response.headers) # Change response body by adding the foo prop original_body = await original_response.json() original_body.foo = "bar" response = Response.new(JSON.stringify(original_body), response) # Add a new header response.headers["foo"] = "bar" # Set destination header to the value of the source header src = response.headers[header_name_src] if src is not None: response.headers[header_name_dst] = src print(f'Response header {header_name_dst} was set to {response.headers[header_name_dst]}') return response ``` </TabItem> </Tabs> --- # Multiple Cron Triggers URL: https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async scheduled(event, env, ctx) { // Write code for updating your API switch (event.cron) { case "*/3 * * * *": // Every three minutes await updateAPI(); break; case "*/10 * * * *": // Every ten minutes await updateAPI2(); break; case "*/45 * * * *": // At minutes 0 and 45 of every hour await updateAPI3(); break; } console.log("cron processed"); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { // Write code for updating your API switch (controller.cron) { case "*/3 * * * *": // Every three minutes await updateAPI(); break; case "*/10 * * * *": // Every ten minutes await updateAPI2(); break; case "*/45 * * * *": // At minutes 0 and 45 of every hour await updateAPI3(); break; } console.log("cron processed"); }, }; ``` </TabItem> </Tabs> ## Test Cron Triggers using Wrangler The recommended way of testing
Cron Triggers is using Wrangler. Pass the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev) to expose a `/__scheduled` route, which you can call with an HTTP request. To simulate different cron patterns, pass a `cron` query parameter. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=*%2F3+*+*+*+*" ``` --- # Stream OpenAI API Responses URL: https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/ To run this code, you must install the OpenAI SDK by running `npm i openai`. :::note For analytics, caching, rate limiting, and more, you can also send requests like this through Cloudflare's [AI Gateway](/ai-gateway/providers/openai/). ::: ```ts import OpenAI from "openai"; export default { async fetch(request, env, ctx): Promise<Response> { const openai = new OpenAI({ apiKey: env.OPENAI_API_KEY, }); // Create a TransformStream to handle streaming data let { readable, writable } = new TransformStream(); let writer = writable.getWriter(); const textEncoder = new TextEncoder(); ctx.waitUntil( (async () => { const stream = await openai.chat.completions.create({ model: "gpt-4o-mini", messages: [{ role: "user", content: "Tell me a story" }], stream: true, }); // loop over the data as it is streamed and write to the writable for await (const part of stream) { writer.write( textEncoder.encode(part.choices[0]?.delta?.content || ""), ); } writer.close(); })(), ); // Send the readable back to the browser return new Response(readable); }, } satisfies ExportedHandler<Env>; ``` --- # Using timingSafeEqual URL: https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/ import { TabItem, Tabs } from "~/components"; The [`crypto.subtle.timingSafeEqual`](/workers/runtime-apis/web-crypto/#timingsafeequal) function compares two values using a constant-time algorithm.
The time taken is independent of the contents of the values. When strings are compared with the equality operator (`==` or `===`), the comparison ends at the first mismatched character. With `timingSafeEqual`, an attacker cannot use timing to discover at which point the two strings differ. The `timingSafeEqual` function takes two `ArrayBuffer` or `TypedArray` values to compare. These buffers must be of equal length, otherwise an exception is thrown. Note that this function is not constant-time with respect to the length of the parameters, and it does not make the surrounding code constant-time. Handle secrets with care to avoid introducing timing side channels. To compare two strings, first encode them with the [`TextEncoder`](/workers/runtime-apis/encoding/#textencoder) API. <Tabs syncKey="workersExamples"> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Environment { MY_SECRET_VALUE?: string; } export default { async fetch(req: Request, env: Environment) { if (!env.MY_SECRET_VALUE) { return new Response("Missing secret binding", { status: 500 }); } const authToken = req.headers.get("Authorization") || ""; if (authToken.length !== env.MY_SECRET_VALUE.length) { return new Response("Unauthorized", { status: 401 }); } const encoder = new TextEncoder(); const a = encoder.encode(authToken); const b = encoder.encode(env.MY_SECRET_VALUE); if (a.byteLength !== b.byteLength) { return new Response("Unauthorized", { status: 401 }); } if (!crypto.subtle.timingSafeEqual(a, b)) { return new Response("Unauthorized", { status: 401 }); } return new Response("Welcome!"); }, }; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, TextEncoder, crypto async def on_fetch(request, env): auth_token = request.headers["Authorization"] or "" secret = env.MY_SECRET_VALUE if secret is None: return Response.new("Missing secret binding", status=500) if len(auth_token) !=
len(secret): return Response.new("Unauthorized", status=401) encoder = TextEncoder.new() a = encoder.encode(auth_token) b = encoder.encode(secret) if a.byteLength != b.byteLength: return Response.new("Unauthorized", status=401) if not crypto.subtle.timingSafeEqual(a, b): return Response.new("Unauthorized", status=401) return Response.new("Welcome!") ``` </TabItem> </Tabs> --- # Post JSON URL: https://developers.cloudflare.com/workers/examples/post-json/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to * @param {string} url the URL to send the request to * @param {BodyInit} body the JSON data to send in the request */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..)
in an async function to get the response body * @param {Response} response */ async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await response.json()); } else if (contentType.includes("application/text")) { return response.text(); } else if (contentType.includes("text/html")) { return response.text(); } else { return response.text(); } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const results = await gatherResponse(response); return new Response(results, init); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to * @param {string} url the URL to send the request to * @param {BodyInit} body the JSON data to send in the request */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) 
in an async function to get the response body * @param {Response} response */ async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await response.json()); } else if (contentType.includes("application/text")) { return response.text(); } else if (contentType.includes("text/html")) { return response.text(); } else { return response.text(); } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const results = await gatherResponse(response); return new Response(results, init); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py import json from pyodide.ffi import to_js as _to_js from js import Object, fetch, Response, Headers def to_js(obj): return _to_js(obj, dict_converter=Object.fromEntries) # gather_response returns both content-type & response body as a string async def gather_response(response): headers = response.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return (content_type, json.dumps(dict(await response.json()))) return (content_type, await response.text()) async def on_fetch(_request): url = "https://jsonplaceholder.typicode.com/todos/1" body = { "results": ["default data to send"], "errors": None, "msg": "I sent this to the fetch", } options = { "body": json.dumps(body), "method": "POST", "headers": { "content-type": "application/json;charset=UTF-8", }, } response = await fetch(url, to_js(options)) content_type, result = await gather_response(response) headers = Headers.new({"content-type": content_type}.items()) return Response.new(result, headers=headers) ``` </TabItem> </Tabs> --- # Read POST URL: https://developers.cloudflare.com/workers/examples/read-post/ import { TabItem, Tabs } from "~/components"; 
<Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { /** * rawHtmlResponse returns HTML inputted directly * into the worker script * @param {string} html */ function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } /** * readRequestBody reads in the incoming request body * Use await readRequestBody(..) in an async function to get the string * @param {Request} request the incoming request to read from */ async function readRequestBody(request) { const contentType = request.headers.get("content-type"); if (contentType.includes("application/json")) { return JSON.stringify(await request.json()); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body = {}; for (const entry of formData.entries()) { body[entry[0]] = entry[1]; } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. 
return "a file"; } } const { url } = request; if (url.includes("form")) { return rawHtmlResponse(someForm); } if (request.method === "POST") { const reqBody = await readRequestBody(request); const retBody = `The request body sent in was ${reqBody}`; return new Response(retBody); } else if (request.method === "GET") { return new Response("The request was a GET"); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { /** * rawHtmlResponse returns HTML inputted directly * into the worker script * @param {string} html */ function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } /** * readRequestBody reads in the incoming request body * Use await readRequestBody(..) in an async function to get the string * @param {Request} request the incoming request to read from */ async function readRequestBody(request: Request) { const contentType = request.headers.get("content-type"); if (contentType.includes("application/json")) { return JSON.stringify(await request.json()); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body = {}; for (const entry of formData.entries()) { body[entry[0]] = entry[1]; } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. 
return "a file"; } } const { url } = request; if (url.includes("form")) { return rawHtmlResponse(someForm); } if (request.method === "POST") { const reqBody = await readRequestBody(request); const retBody = `The request body sent in was ${reqBody}`; return new Response(retBody); } else if (request.method === "GET") { return new Response("The request was a GET"); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Object, Response, Headers, JSON async def read_request_body(request): headers = request.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return JSON.stringify(await request.json()) if "form" in content_type: form = await request.formData() data = Object.fromEntries(form.entries()) return JSON.stringify(data) return await request.text() async def on_fetch(request): def raw_html_response(html): headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items()) return Response.new(html, headers=headers) if "form" in request.url: return raw_html_response("") if "POST" in request.method: req_body = await read_request_body(request) ret_body = f"The request body sent in was {req_body}" return Response.new(ret_body) return Response.new("The request was not POST") ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use serde::{Deserialize, Serialize}; use worker::*; fn raw_html_response(html: &str) -> Result<Response> { Response::from_html(html) } #[derive(Deserialize, Serialize, Debug)] struct Payload { msg: String, } async fn read_request_body(mut req: Request) -> String { let ctype = req.headers().get("content-type").unwrap().unwrap(); match ctype.as_str() { "application/json" => format!("{:?}", req.json::<Payload>().await.unwrap()), "text/html" => req.text().await.unwrap(), "multipart/form-data" => format!("{:?}", req.form_data().await.unwrap()), _ => String::from("a file"), } } #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: 
Context) -> Result<Response> { if String::from(req.url()?).contains("form") { return raw_html_response("some html form"); } match req.method() { Method::Post => { let req_body = read_request_body(req).await; Response::ok(format!("The request body sent in was {}", req_body)) } _ => Response::ok(format!("The result was a {:?}", req.method())), } } ``` </TabItem> </Tabs> --- # Redirect URL: https://developers.cloudflare.com/workers/examples/redirect/ import { Render, TabItem, Tabs } from "~/components"; ## Redirect all requests to one URL <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> <Render file="redirect-example-js" /> </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const destinationURL = "https://example.com"; const statusCode = 301; return Response.redirect(destinationURL, statusCode); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response def on_fetch(request): destinationURL = "https://example.com" statusCode = 301 return Response.redirect(destinationURL, statusCode) ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> { let destination_url = Url::parse("https://example.com")?; let status_code = 301; Response::redirect_with_status(destination_url, status_code) } ``` </TabItem> </Tabs> ## Redirect requests from one domain to another <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const base = "https://example.com"; const statusCode = 301; const url = new URL(request.url); const { pathname, search } = url; const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return Response.redirect(destinationURL, statusCode); }, }; ``` </TabItem> <TabItem label="TypeScript" 
icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const base = "https://example.com"; const statusCode = 301; const url = new URL(request.url); const { pathname, search } = url; const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return Response.redirect(destinationURL, statusCode); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, URL async def on_fetch(request): base = "https://example.com" statusCode = 301 url = URL.new(request.url) destinationURL = f'{base}{url.pathname}{url.search}' print(destinationURL) return Response.redirect(destinationURL, statusCode) ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let mut base = Url::parse("https://example.com")?; let status_code = 301; let url = req.url()?; base.set_path(url.path()); base.set_query(url.query()); console_log!("{:?}", base.to_string()); Response::redirect_with_status(base, status_code) } ``` </TabItem> </Tabs> --- # Respond with another site URL: https://developers.cloudflare.com/workers/examples/respond-with-another-site/ import { Render, TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> <Render file="respond-another-site-example-js" /> </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { async function MethodNotAllowed(request) { return new Response(`Method ${request.method} not allowed.`, { status: 405, headers: { Allow: "GET", }, }); } // Only GET requests work with this proxy. 
if (request.method !== "GET") return MethodNotAllowed(request); return fetch(`https://example.com`); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch, Headers def on_fetch(request): def method_not_allowed(request): msg = f'Method {request.method} not allowed.' headers = Headers.new({"Allow": "GET"}.items()) return Response.new(msg, headers=headers, status=405) # Only GET requests work with this proxy. if request.method != "GET": return method_not_allowed(request) return fetch("https://example.com") ``` </TabItem> </Tabs> --- # Return JSON URL: https://developers.cloudflare.com/workers/examples/return-json/ import { Render, TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> <Render file="return-json-example-js" /> </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const data = { hello: "world", }; return Response.json(data); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, Headers import json def on_fetch(request): data = json.dumps({"hello": "world"}) headers = Headers.new({"content-type": "application/json"}.items()) return Response.new(data, headers=headers) ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use serde::{Deserialize, Serialize}; use worker::*; #[derive(Deserialize, Serialize, Debug)] struct Json { hello: String, } #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> { let data = Json { hello: String::from("world"), }; Response::from_json(&data) } ``` </TabItem> </Tabs> --- # Return small HTML page URL: https://developers.cloudflare.com/workers/examples/return-html/ import { Render, TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript">
<Render file="return-html-example-js" /> </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const html = `<!DOCTYPE html> <body> <h1>Hello World</h1> <p>This markup was generated by a Cloudflare Worker.</p> </body>`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, Headers def on_fetch(request): html = """<!DOCTYPE html> <body> <h1>Hello World</h1> <p>This markup was generated by a Cloudflare Worker.</p> </body>""" headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items()) return Response.new(html, headers=headers) ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> { let html = r#"<!DOCTYPE html> <body> <h1>Hello World</h1> <p>This markup was generated by a Cloudflare Worker.</p> </body> "#; Response::from_html(html) } ``` </TabItem> </Tabs> --- # Rewrite links URL: https://developers.cloudflare.com/workers/examples/rewrite-links/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { constructor(attributeName) { this.attributeName = attributeName; } element(element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL), ); } } } const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); const res = await fetch(request); const contentType = res.headers.get("Content-Type"); // If the response is HTML, it can be transformed 
with // HTMLRewriter -- otherwise, it should pass through if (contentType.startsWith("text/html")) { return rewriter.transform(res); } else { return res; } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { constructor(attributeName) { this.attributeName = attributeName; } element(element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL), ); } } } const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); const res = await fetch(request); const contentType = res.headers.get("Content-Type"); // If the response is HTML, it can be transformed with // HTMLRewriter -- otherwise, it should pass through if (contentType.startsWith("text/html")) { return rewriter.transform(res); } else { return res; } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from pyodide.ffi import create_proxy from js import HTMLRewriter, fetch async def on_fetch(request): old_url = "developer.mozilla.org" new_url = "mynewdomain.com" class AttributeRewriter: def __init__(self, attr_name): self.attr_name = attr_name def element(self, element): attr = element.getAttribute(self.attr_name) if attr: element.setAttribute(self.attr_name, attr.replace(old_url, new_url)) href = create_proxy(AttributeRewriter("href")) src = create_proxy(AttributeRewriter("src")) rewriter = HTMLRewriter.new().on("a", href).on("img", src) res = await fetch(request) content_type = res.headers["Content-Type"] # If the response is HTML, it can be transformed with # HTMLRewriter -- otherwise, it should pass through if content_type.startswith("text/html"): return rewriter.transform(res) return res ``` </TabItem> </Tabs> --- # Sign 
requests URL: https://developers.cloudflare.com/workers/examples/signing-requests/ import { TabItem, Tabs } from "~/components"; :::note This example Worker makes use of the [Node.js Buffer API](/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/#enable-nodejs-with-workers). ::: You can both verify and generate signed requests from within a Worker using the [Web Crypto APIs](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/subtle). The following Worker will: - For request URLs beginning with `/generate/`, replace `/generate/` with `/`, sign the resulting path with its timestamp, and return the full, signed URL in the response body. - For all other request URLs, verify the signed URL and allow the request through. <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; export default { /** * * @param {Request} request * @param {{SECRET_DATA: string}} env * @returns */ async fetch(request, env) { // You will need some secret data to use as a symmetric key. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import your secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); const url = new URL(request.url); // This is a demonstration Worker that allows unauthenticated access to /generate // In a real application you would want to make sure that // users could only generate signed URLs when authenticated if (url.pathname.startsWith("/generate/")) { url.pathname = url.pathname.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // This contains all the data about the request that you want to be able to verify // Here we only sign the timestamp and the pathname, but often you will want to // include more data (for instance, the URL hostname or query parameters) const dataToAuthenticate = `${url.pathname}${timestamp}`; const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/ // for more details on using Node.js APIs in Workers const base64Mac = Buffer.from(mac).toString("base64"); url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return new Response(`${url.pathname}${url.search}`); // Verify all non /generate requests } else { // Make sure you have the minimum necessary query parameters. if (!url.searchParams.has("verify")) { return new Response("Missing query parameter", { status: 403 }); } const [timestamp, hmac] = url.searchParams.get("verify").split("-"); const assertedTimestamp = Number(timestamp); const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`; const receivedMac = Buffer.from(hmac, "base64"); // Use crypto.subtle.verify() to guard against timing attacks. 
Since HMACs use // symmetric keys, you could implement this by calling crypto.subtle.sign() and // then doing a string comparison -- this is insecure, as string comparisons // bail out on the first mismatch, which leaks information to potential // attackers. const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); if (!verified) { return new Response("Invalid MAC", { status: 403 }); } // Signed requests expire after one minute. Note that this value should depend on your specific use case if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return new Response( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, { status: 403 }, ); } } return fetch(new URL(url.pathname, "https://example.com"), request); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; interface Env { SECRET_DATA: string; } export default { async fetch(request, env): Promise<Response> { // You will need some secret data to use as a symmetric key. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import your secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); const url = new URL(request.url); // This is a demonstration Worker that allows unauthenticated access to /generate // In a real application you would want to make sure that // users could only generate signed URLs when authenticated if (url.pathname.startsWith("/generate/")) { url.pathname = url.pathname.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // This contains all the data about the request that you want to be able to verify // Here we only sign the timestamp and the pathname, but often you will want to // include more data (for instance, the URL hostname or query parameters) const dataToAuthenticate = `${url.pathname}${timestamp}`; const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/ // for more details on using Node.js APIs in Workers const base64Mac = Buffer.from(mac).toString("base64"); url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return new Response(`${url.pathname}${url.search}`); // Verify all non /generate requests } else { // Make sure you have the minimum necessary query parameters. if (!url.searchParams.has("verify")) { return new Response("Missing query parameter", { status: 403 }); } const [timestamp, hmac] = url.searchParams.get("verify").split("-"); const assertedTimestamp = Number(timestamp); const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`; const receivedMac = Buffer.from(hmac, "base64"); // Use crypto.subtle.verify() to guard against timing attacks. 
Since HMACs use // symmetric keys, you could implement this by calling crypto.subtle.sign() and // then doing a string comparison -- this is insecure, as string comparisons // bail out on the first mismatch, which leaks information to potential // attackers. const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); if (!verified) { return new Response("Invalid MAC", { status: 403 }); } // Signed requests expire after one minute. Note that this value should depend on your specific use case if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return new Response( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, { status: 403 }, ); } } return fetch(new URL(url.pathname, "https://example.com"), request); }, } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> ## Validate signed requests using the WAF The provided example code for signing requests is compatible with the [`is_timed_hmac_valid_v0()`](/ruleset-engine/rules-language/functions/#hmac-validation) Rules language function. This means that you can verify requests signed by the Worker script using a [WAF custom rule](/waf/custom-rules/use-cases/configure-token-authentication/#option-2-configure-using-waf-custom-rules). --- # Set security headers URL: https://developers.cloudflare.com/workers/examples/security-headers/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request) { const DEFAULT_SECURITY_HEADERS = { /* Secure your application with Content-Security-Policy headers. Enabling these headers will permit content from a trusted domain and all its subdomains. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", */ /* You can also set Strict-Transport-Security headers. 
These are not automatically set because your website might get added to Chrome's HSTS preload list. Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", */ /* Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", */ /* X-XSS-Protection header prevents a page from loading if an XSS attack is detected. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection */ "X-XSS-Protection": "0", /* X-Frame-Options header prevents click-jacking attacks. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options */ "X-Frame-Options": "DENY", /* X-Content-Type-Options header prevents MIME-sniffing. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options */ "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", }; const BLOCKED_HEADERS = [ "Public-Key-Pins", "X-Powered-By", "X-AspNet-Version", ]; let response = await fetch(request); let newHeaders = new Headers(response.headers); const tlsVersion = request.cf.tlsVersion; console.log(tlsVersion); // This sets the headers for HTML responses: if ( newHeaders.has("Content-Type") && !newHeaders.get("Content-Type").includes("text/html") ) { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } Object.keys(DEFAULT_SECURITY_HEADERS).map((name) => { newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]); }); BLOCKED_HEADERS.forEach((name) => { newHeaders.delete(name); }); if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("You need to use TLS version 1.2 or 
higher.", { status: 400, }); } else { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request): Promise<Response> { const DEFAULT_SECURITY_HEADERS = { /* Secure your application with Content-Security-Policy headers. Enabling these headers will permit content from a trusted domain and all its subdomains. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", */ /* You can also set Strict-Transport-Security headers. These are not automatically set because your website might get added to Chrome's HSTS preload list. Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", */ /* Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", */ /* X-XSS-Protection header prevents a page from loading if an XSS attack is detected. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection */ "X-XSS-Protection": "0", /* X-Frame-Options header prevents click-jacking attacks. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options */ "X-Frame-Options": "DENY", /* X-Content-Type-Options header prevents MIME-sniffing. 
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options */ "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", }; const BLOCKED_HEADERS = [ "Public-Key-Pins", "X-Powered-By", "X-AspNet-Version", ]; let response = await fetch(request); let newHeaders = new Headers(response.headers); const tlsVersion = request.cf.tlsVersion; console.log(tlsVersion); // This sets the headers for HTML responses: if ( newHeaders.has("Content-Type") && !newHeaders.get("Content-Type").includes("text/html") ) { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } Object.keys(DEFAULT_SECURITY_HEADERS).map((name) => { newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]); }); BLOCKED_HEADERS.forEach((name) => { newHeaders.delete(name); }); if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("You need to use TLS version 1.2 or higher.", { status: 400, }); } else { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } }, } satisfies ExportedHandler; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from js import Response, fetch, Headers async def on_fetch(request): default_security_headers = { # Secure your application with Content-Security-Policy headers. #Enabling these headers will permit content from a trusted domain and all its subdomains. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", #You can also set Strict-Transport-Security headers. #These are not automatically set because your website might get added to Chrome's HSTS preload list. 
#Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", #Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", #X-XSS-Protection header prevents a page from loading if an XSS attack is detected. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection "X-XSS-Protection": "0", #X-Frame-Options header prevents click-jacking attacks. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options "X-Frame-Options": "DENY", #X-Content-Type-Options header prevents MIME-sniffing. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", } blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"] res = await fetch(request) new_headers = Headers.new(res.headers) content_type = new_headers.get("Content-Type") # Return non-HTML responses unchanged, since the security headers below target HTML if content_type and "text/html" not in content_type: return Response.new(res.body, status=res.status, statusText=res.statusText, headers=new_headers) for name in default_security_headers: new_headers.set(name, default_security_headers[name]) for name in blocked_headers: new_headers.delete(name) tls = request.cf.tlsVersion if tls not in ("TLSv1.2", "TLSv1.3"): return Response.new("You need to use TLS version 1.2 or higher.", status=400) return Response.new(res.body, status=res.status, statusText=res.statusText, headers=new_headers) ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use std::collections::HashMap; use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let 
default_security_headers = HashMap::from([ //Secure your application with Content-Security-Policy headers. //Enabling these headers will permit content from a trusted domain and all its subdomains. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy ( "Content-Security-Policy", "default-src 'self' example.com *.example.com", ), //You can also set Strict-Transport-Security headers. //These are not automatically set because your website might get added to Chrome's HSTS preload list. //Here's the code if you want to apply it: ( "Strict-Transport-Security", "max-age=63072000; includeSubDomains; preload", ), //Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: ("Permissions-Policy", "interest-cohort=()"), //X-XSS-Protection header prevents a page from loading if an XSS attack is detected. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection ("X-XSS-Protection", "0"), //X-Frame-Options header prevents click-jacking attacks. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options ("X-Frame-Options", "DENY"), //X-Content-Type-Options header prevents MIME-sniffing. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options ("X-Content-Type-Options", "nosniff"), ("Referrer-Policy", "strict-origin-when-cross-origin"), ( "Cross-Origin-Embedder-Policy", "require-corp; report-to='default';", ), ( "Cross-Origin-Opener-Policy", "same-site; report-to='default';", ), ("Cross-Origin-Resource-Policy", "same-site"), ]); let blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"]; let tls = req.cf().unwrap().tls_version(); let res = Fetch::Request(req).send().await?; let mut new_headers = res.headers().clone(); // Return non-HTML responses unchanged, since the security headers below target HTML if new_headers.get("Content-Type")?.map(|ct| !ct.contains("text/html")).unwrap_or(false) 
{ return Ok(Response::from_body(res.body().clone())? .with_headers(new_headers) .with_status(res.status_code())); } for (k, v) in default_security_headers { new_headers.set(k, v)?; } for k in blocked_headers { new_headers.delete(k)?; } if !vec!["TLSv1.2", "TLSv1.3"].contains(&tls.as_str()) { return Response::error("You need to use TLS version 1.2 or higher.", 400); } Ok(Response::from_body(res.body().clone())? .with_headers(new_headers) .with_status(res.status_code())) } ``` </TabItem> </Tabs> --- # Turnstile with Workers URL: https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/ import { TabItem, Tabs } from "~/components"; <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env) { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = 'your_id_to_replace'; // The id of the element to put a Turnstile widget in let res = await fetch(request) // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on('head', { element(element) { // In this case, you are using `append` to add a new script to the `head` element element.append(`<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`, { html: true }); }, }) .on('div', { element(element) { // Add a turnstile widget element into if an element with the id of TURNSTILE_ATTR_NAME is found if (element.getAttribute('id') === TURNSTILE_ATTR_NAME) { element.append(`<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`, { html: true }); } }, }) .transform(res); return newRes } } ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request, env): Promise<Response> { const SITE_KEY = 
env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = 'your_id_to_replace'; // The id of the element to put a Turnstile widget in let res = await fetch(request) // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on('head', { element(element) { // In this case, you are using `append` to add a new script to the `head` element element.append(`<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`, { html: true }); }, }) .on('div', { element(element) { // Add a turnstile widget element into if an element with the id of TURNSTILE_ATTR_NAME is found if (element.getAttribute('id') === TURNSTILE_ATTR_NAME) { element.append(`<div class="cf-turnstile" data-sitekey="${SITE_KEY}" data-theme="light"></div>`, { html: true }); } }, }) .transform(res); return newRes } } satisfies ExportedHandler<Env>; ``` </TabItem> <TabItem label="Python" icon="seti:python"> ```py from pyodide.ffi import create_proxy from js import HTMLRewriter, fetch async def on_fetch(request, env): site_key = env.SITE_KEY attr_name = env.TURNSTILE_ATTR_NAME res = await fetch(request) class Append: def element(self, element): s = '<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>' element.append(s, {"html": True}) class AppendOnID: def __init__(self, name): self.name = name def element(self, element): # You are using the `getAttribute` method here to retrieve the `id` or `class` of an element if element.getAttribute("id") == self.name: div = f'<div class="cf-turnstile" data-sitekey="{site_key}" data-theme="light"></div>' element.append(div, { "html": True }) # Instantiate the API to run on specific elements, for example, `head`, `div` head = create_proxy(Append()) div = 
create_proxy(AppendOnID(attr_name)) new_res = HTMLRewriter.new().on("head", head).on("div", div).transform(res) return new_res ``` </TabItem> </Tabs> :::note This is only half the implementation for Turnstile. The corresponding token that is a result of a widget being rendered also needs to be verified using the [siteverify API](/turnstile/get-started/server-side-validation/). Refer to the example below for one such implementation. ::: <TabItem label="JavaScript" icon="seti:javascript"> ```js async function handlePost(request, env) { const body = await request.formData(); // Turnstile injects a token in `cf-turnstile-response`. const token = body.get('cf-turnstile-response'); const ip = request.headers.get('CF-Connecting-IP'); // Validate the token by calling the `/siteverify` API. let formData = new FormData(); // `secret_key` here is the Turnstile Secret key, which should be set using Wrangler secrets formData.append('secret', env.SECRET_KEY); formData.append('response', token); formData.append('remoteip', ip); //This is optional. const url = 'https://challenges.cloudflare.com/turnstile/v0/siteverify'; const result = await fetch(url, { body: formData, method: 'POST', }); const outcome = await result.json(); if (!outcome.success) { return new Response('The provided Turnstile token was not valid!', { status: 401 }); } // The Turnstile token was successfully validated. Proceed with your application logic. // Validate login, redirect user, etc. 
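// Note (added caveat, not part of the original example): `request.formData()`
// above has already consumed the request body, so `request.body` can no longer
// be forwarded below. A safer sketch is to parse a clone instead, leaving the
// original body intact:
//
//   const body = await request.clone().formData();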
// Clone the original request with a new body const newRequest = new Request(request, { body: request.body, // Reuse the body method: request.method, headers: request.headers }); return await fetch(newRequest); } export default { async fetch(request, env) { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = 'your_id_to_replace'; // The id of the element to put a Turnstile widget in // Handle POST requests before fetching, since fetching would consume the request body if (request.method === 'POST') { return handlePost(request, env) } let res = await fetch(request) // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on('head', { element(element) { // In this case, you are using `append` to add a new script to the `head` element element.append(`<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`, { html: true }); }, }) .on('div', { element(element) { // You are using the `getAttribute` method here to retrieve the `id` or `class` of an element if (element.getAttribute('id') === TURNSTILE_ATTR_NAME) { element.append(`<div class="cf-turnstile" data-sitekey="${SITE_KEY}" data-theme="light"></div>`, { html: true }); } }, }) .transform(res); return newRes } } ``` </TabItem> --- # Using the WebSockets API URL: https://developers.cloudflare.com/workers/examples/websockets/ import { TabItem, Tabs } from "~/components"; WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. In this guide, you will learn the basics of WebSockets on Cloudflare Workers, both from the perspective of writing WebSocket servers in your Workers functions, as well as connecting to and working with those WebSocket servers as a client. WebSockets are open connections sustained between the client and the origin server. 
Inside a WebSocket connection, the client and the origin can pass data back and forth without having to reestablish sessions. This makes exchanging data within a WebSocket connection fast. WebSockets are often used for real-time applications such as live chat and gaming. :::note WebSockets utilize an event-based system for receiving and sending messages, much like the Workers runtime model of responding to events. ::: :::note If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single-point-of-coordination. Durable Objects provide a single-point-of-coordination for Cloudflare Workers, and are often used in parallel with WebSockets to persist state over multiple clients and connections. In this case, refer to [Durable Objects](/durable-objects/) to get started, and prefer using the Durable Objects' extended [WebSockets API](/durable-objects/best-practices/websockets/). ::: ## Write a WebSocket Server WebSocket servers in Cloudflare Workers allow you to receive messages from a client in real time. This guide will show you how to set up a WebSocket server in Workers. A client can make a WebSocket request in the browser by instantiating a new instance of `WebSocket`, passing in the URL for your Workers function: ```js // In client-side JavaScript, connect to your Workers function using WebSockets: const websocket = new WebSocket('wss://example-websocket.signalnerve.workers.dev'); ``` :::note For more details about creating and working with WebSockets in the client, refer to [Writing a WebSocket client](#write-a-websocket-client). ::: When an incoming WebSocket request reaches the Workers function, it will contain an `Upgrade` header, set to the string value `websocket`. 
Check for this header before continuing to instantiate a WebSocket: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } } ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } } ``` </TabItem> </Tabs> After you have appropriately checked for the `Upgrade` header, you can create a new instance of `WebSocketPair`, which contains server and client WebSockets. One of these WebSockets should be handled by the Workers function and the other should be returned as part of a `Response` with the [`101` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/101), indicating the request is switching protocols: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } const webSocketPair = new WebSocketPair(); const client = webSocketPair[0], server = webSocketPair[1]; return new Response(null, { status: 101, webSocket: client, }); } ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if 
upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } let ws = WebSocketPair::new()?; let client = ws.client; let server = ws.server; server.accept()?; worker::Response::from_websocket(client) } ``` </TabItem> </Tabs> The `WebSocketPair` constructor returns an Object, with the `0` and `1` keys each holding a `WebSocket` instance as its value. It is common to grab the two WebSockets from this pair using [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_objects/Object/values) and [ES6 destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment), as seen in the below example. In order to begin communicating with the `client` WebSocket in your Worker, call `accept` on the `server` WebSocket. This will tell the Workers runtime that it should listen for WebSocket data and keep the connection open with your `client` WebSocket: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); server.accept(); return new Response(null, { status: 101, webSocket: client, }); } ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } let ws = WebSocketPair::new()?; let client = ws.client; let server = ws.server; server.accept()?; worker::Response::from_websocket(client) 
} ``` </TabItem> </Tabs> WebSockets emit a number of [Events](/workers/runtime-apis/websockets/#events) that can be connected to using `addEventListener`. The below example hooks into the `message` event and emits a `console.log` with the data from it: <Tabs syncKey="workersExamples"> <TabItem label="JavaScript" icon="seti:javascript"> ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); server.accept(); server.addEventListener('message', event => { console.log(event.data); }); return new Response(null, { status: 101, webSocket: client, }); } ``` </TabItem> <TabItem label="Rust" icon="seti:rust"> ```rs use futures::StreamExt; use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } let ws = WebSocketPair::new()?; let client = ws.client; let server = ws.server; server.accept()?; wasm_bindgen_futures::spawn_local(async move { let mut event_stream = server.events().expect("could not open stream"); while let Some(event) = event_stream.next().await { match event.expect("received error in websocket") { WebsocketEvent::Message(msg) => server.send(&msg.text()).unwrap(), WebsocketEvent::Close(event) => console_log!("{:?}", event), } } }); worker::Response::from_websocket(client) } ``` </TabItem> </Tabs> ### Connect to the WebSocket server from a client Writing WebSocket clients that communicate with your Workers function is a two-step process: first, create the WebSocket instance, and then attach event listeners to it: ```js const websocket 
= new WebSocket('wss://websocket-example.signalnerve.workers.dev'); websocket.addEventListener('message', event => { console.log('Message received from server'); console.log(event.data); }); ``` WebSocket clients can send messages back to the server using the [`send`](/workers/runtime-apis/websockets/#send) function: ```js websocket.send('MESSAGE'); ``` When the WebSocket interaction is complete, the client can close the connection using [`close`](/workers/runtime-apis/websockets/#close): ```js websocket.close(); ``` For an example of this in practice, refer to the [`websocket-template`](https://github.com/cloudflare/websocket-template) to get started with WebSockets. ## Write a WebSocket client Cloudflare Workers supports the `new WebSocket(url)` constructor. A Worker can establish a WebSocket connection to a remote server in the same manner as the client implementation described above. Additionally, Cloudflare supports establishing WebSocket connections by making a fetch request to a URL with the `Upgrade` header set. ```js async function websocket(url) { // Make a fetch request including `Upgrade: websocket` header. // The Workers Runtime will automatically handle other requirements // of the WebSocket protocol, like the Sec-WebSocket-Key header. let resp = await fetch(url, { headers: { Upgrade: 'websocket', }, }); // If the WebSocket handshake completed successfully, then the // response has a `webSocket` property. let ws = resp.webSocket; if (!ws) { throw new Error("server didn't accept WebSocket"); } // Call accept() to indicate that you'll be handling the socket here // in JavaScript, as opposed to returning it on to a client. ws.accept(); // Now you can send and receive messages like before. ws.send('hello'); ws.addEventListener('message', msg => { console.log(msg.data); }); } ``` ## WebSocket compression Cloudflare Workers supports WebSocket compression. 
Refer to [WebSocket Compression](/workers/configuration/compatibility-flags/#websocket-compression) for more information. --- # Dashboard URL: https://developers.cloudflare.com/workers/get-started/dashboard/ import { Render } from "~/components"; Follow this guide to create a Workers application using [the Cloudflare dashboard](https://dash.cloudflare.com). <Render file="playground-callout" /> ## Prerequisites [Create a Cloudflare account](/learning-paths/get-started/account-setup/create-account/), if you have not already. ## Setup To create a Workers application: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages**. 3. Select **Create**. 4. Select a template or **Create Worker**. 5. Review the provided code and select **Deploy**. 6. Preview your Worker at its provided [`workers.dev`](/workers/configuration/routing/workers-dev/) subdomain. ## Development <Render file="dash-creation-next-steps" /> ## Next steps To do more: - Push your project to a GitHub or GitLab repository, then [connect to builds](/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments. - Review our [Examples](/workers/examples/) and [Tutorials](/workers/tutorials/) for inspiration. - Set up [bindings](/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality. - Learn how to [test and debug](/workers/testing/) your Workers. - Read about [Workers limits and pricing](/workers/platform/). --- # CLI URL: https://developers.cloudflare.com/workers/get-started/guide/ import { Details, Render, PackageManagers } from "~/components"; Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI. This guide will walk you through setting up and deploying your first Worker. ## Prerequisites <Render file="prereqs" product="workers" /> ## 1. Create a new Worker project Open a terminal window and run C3 to create your Worker project. 
[C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.

<PackageManagers type="create" pkg="cloudflare@latest" args={"my-first-worker"} />

<Render
	file="c3-post-run-steps"
	product="workers"
	params={{
		category: "hello-world",
		type: "Hello World Worker",
		lang: "JavaScript",
	}}
/>

Now, you have a new project set up. Move into that project folder.

```sh
cd my-first-worker
```

<Details header="What files did C3 create?">

In your project directory, C3 will have generated the following:

* `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
* `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax.
* `package.json`: A minimal Node dependencies configuration file.
* `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
* `node_modules`: Refer to [`npm` documentation on `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).

</Details>

<Details header="What if I already have a project in a git repository?">

In addition to creating new projects from C3 templates, C3 also supports creating new projects from existing Git repositories.
To create a new project from an existing Git repository, open your terminal and run:

```sh
npm create cloudflare@latest -- --template <SOURCE>
```

`<SOURCE>` may be any of the following:

- `user/repo` (GitHub)
- `git@github.com:user/repo`
- `https://github.com/user/repo`
- `user/repo/some-template` (subdirectories)
- `user/repo#canary` (branches)
- `user/repo#1234abcd` (commit hash)
- `bitbucket:user/repo` (Bitbucket)
- `gitlab:user/repo` (GitLab)

Your existing template folder must contain the following files, at a minimum, to meet the requirements for Cloudflare Workers:

- `package.json`
- `wrangler.jsonc` [See sample Wrangler configuration](/workers/wrangler/configuration/#sample-wrangler-configuration)
- `src/` containing a worker script referenced from `wrangler.jsonc`

</Details>

## 2. Develop with Wrangler CLI

C3 installs [Wrangler](/workers/wrangler/install-and-update/), the Workers command-line interface, in Workers projects by default. Wrangler lets you [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects.

After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to preview your Worker locally during development.

```sh
npx wrangler dev
```

If you have never used Wrangler before, it will open your web browser so you can log in to your Cloudflare account.

Go to [http://localhost:8787](http://localhost:8787) to view your Worker.

<Details header="Browser issues?">

If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation.

</Details>

## 3. Write code

With your new project generated and running, you can begin to write and edit your code.

Find the `src/index.js` file.
`index.js` will be populated with the code below:

```js title="Original index.js"
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

<Details header="Code explanation">

This code block consists of a few different parts.

```js title="index.js" {1}
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

`export default` is JavaScript syntax required for defining [JavaScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#default_exports_versus_named_exports). Your Worker must have a default export of an object, with properties corresponding to the events your Worker should handle.

```js title="index.js" {2}
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

This [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) will be called when your Worker receives an HTTP request. You can define additional event handlers in the exported object to respond to different types of events. For example, add a [`scheduled()` handler](/workers/runtime-apis/handlers/scheduled/) to respond to Worker invocations via a [Cron Trigger](/workers/configuration/cron-triggers/).

Additionally, the `fetch` handler will always be passed three parameters: [`request`, `env` and `context`](/workers/runtime-apis/handlers/fetch/).

```js title="index.js" {3}
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

The Workers runtime expects `fetch` handlers to return a `Response` object or a Promise which resolves with a `Response` object. In this example, you will return a new `Response` with the string `"Hello World!"`.

</Details>

Replace the content in your current `index.js` file with the content below, which changes the text output.
```js title="index.js" {3}
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello Worker!");
	},
};
```

Then, save the file and reload the page. Your Worker's output will have changed to the new text.

<Details header="No visible changes?">

If the output for your Worker does not change, make sure that:

1. You saved the changes to `index.js`.
2. You have `wrangler dev` running.
3. You reloaded your browser.

</Details>

## 4. Deploy your project

Deploy your Worker via Wrangler to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/).

```sh
npx wrangler deploy
```

If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up.

Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

<Details header="Seeing 523 errors?">

If you see [`523` errors](/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-523-origin-is-unreachable) when pushing your `*.workers.dev` subdomain for the first time, wait a minute or so and the errors will resolve themselves.

</Details>

## Next steps

To do more:

- Push your project to a GitHub or GitLab repository, then [connect to builds](/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments.
- Visit the [Cloudflare dashboard](https://dash.cloudflare.com/) for simpler editing.
- Review our [Examples](/workers/examples/) and [Tutorials](/workers/tutorials/) for inspiration.
- Set up [bindings](/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality.
- Learn how to [test and debug](/workers/testing/) your Workers.
- Read about [Workers limits and pricing](/workers/platform/).

---

# Get started

URL: https://developers.cloudflare.com/workers/get-started/

import { DirectoryListing, Render } from "~/components";

Build your first Worker.
<DirectoryListing />

---

# Prompting

URL: https://developers.cloudflare.com/workers/get-started/prompting/

import { Tabs, TabItem, GlossaryTooltip, Type, Badge, TypeScriptExample } from "~/components";
import { Code } from "@astrojs/starlight/components";
import BasePrompt from '~/content/partials/prompts/base-prompt.txt?raw';

One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on, or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt helps provide the model with clearer guidelines and examples that can dramatically improve output.

Below is an extensive example prompt that can help you build applications using Cloudflare Workers and your preferred AI model.

### Getting started with Workers using a prompt <Badge text="Beta" variant="caution" size="small" />

To use the prompt:

1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard.
2. Paste it into your AI tool of choice (for example, OpenAI's ChatGPT or Anthropic's Claude).
3. Make sure to enter your part of the prompt at the end, between the `<user_prompt>` and `</user_prompt>` tags.

Base prompt:

<Code code={BasePrompt} collapse={"30-10000"} lang="md" />

The prompt above adopts several best practices, including:

* Using `<xml>` tags to structure the prompt
* API and usage examples for products and use-cases
* Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the model's response.
* Recommendations on Cloudflare products to use for specific storage or state needs ### Additional uses You can use the prompt in several ways: * Within the user context window, with your own user prompt inserted between the `<user_prompt>` tags (**easiest**) * As the `system` prompt for models that support system prompts * Adding it to the prompt library and/or file context within your preferred IDE: * Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai) * Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context. * Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat. * GitHub Copilot: create the [`.github/copilot-instructions.md`](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot) file at the root of your project and add the prompt. :::note The prompt(s) here are examples and should be adapted to your specific use case. We'll continue to build out the prompts available here, including additional prompts for specific products. Depending on the model and user prompt, it may generate invalid code, configuration or other errors, and we recommend reviewing and testing the generated code before deploying it. ::: ### Passing a system prompt If you are building an AI application that will itself generate code, you can additionally use the prompt above as a "system prompt", which will give the LLM additional information on how to structure the output code. 
For example:

<TypeScriptExample filename="index.ts">

```ts
import OpenAI from "openai";
import workersPrompt from "./workersPrompt.md";

// Llama 3.3 from Workers AI
const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast";

export default {
	async fetch(req: Request, env: Env, ctx: ExecutionContext) {
		const openai = new OpenAI({
			apiKey: env.WORKERS_AI_API_KEY,
		});

		const stream = await openai.chat.completions.create({
			messages: [
				{
					role: "system",
					content: workersPrompt,
				},
				{
					role: "user",
					// Imagine something big!
					content: "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ..."
				}
			],
			model: PREFERRED_MODEL,
			stream: true,
		});

		// Stream the response so we're not buffering the entire response in memory,
		// since it could be very large.
		const transformStream = new TransformStream();
		const writer = transformStream.writable.getWriter();
		const encoder = new TextEncoder();

		(async () => {
			try {
				for await (const chunk of stream) {
					const content = chunk.choices[0]?.delta?.content || '';
					await writer.write(encoder.encode(content));
				}
			} finally {
				await writer.close();
			}
		})();

		return new Response(transformStream.readable, {
			headers: {
				'Content-Type': 'text/plain; charset=utf-8',
				'Transfer-Encoding': 'chunked'
			}
		});
	}
}
```

</TypeScriptExample>

## Additional resources

To get the most out of AI models and tools, we recommend reading the following guides on prompt engineering and structure:

* OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
* The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic
* Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts
* Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
* GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.

---

# Quickstarts

URL: https://developers.cloudflare.com/workers/get-started/quickstarts/

import { LinkButton, WorkerStarter } from "~/components";

Quickstarts are GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project. To start any of the projects below, run:

```sh
npm create cloudflare@latest <NEW_PROJECT_NAME> -- --template <GITHUB_REPO_URL>
```

- `<NEW_PROJECT_NAME>` - A folder with this name will be created with your new project inside, pre-configured to [your Workers account](/workers/wrangler/configuration/).
- `<GITHUB_REPO_URL>` - The URL of the GitHub repository starter, as below. Refer to the [create-cloudflare documentation](/pages/get-started/c3/) for a full list of possible values.

## Example Projects

<WorkerStarter
	title="Sentry"
	repo="mhart/cf-sentry"
	description="Log exceptions and errors in your Workers application to Sentry.io - an error tracking tool."
/>

<WorkerStarter
	title="Image Color"
	repo="xtuc/img-color-worker"
	description="Retrieve the dominant color of a PNG or JPEG image."
/>

<WorkerStarter
	title="Cloud Storage"
	repo="conzorkingkong/cloud-storage"
	description="Serve private Amazon Web Services (AWS) bucket files from a Worker script."
/>

<WorkerStarter
	title="BinAST"
	repo="xtuc/binast-cf-worker-template"
	description="Serve a JavaScript Binary AST via a Cloudflare Worker."
/> <WorkerStarter title="Edge-Side Rendering" repo="frandiox/vitessedge-template" description="Use Vite to render pages on Cloudflare's global network with great DX. Includes i18n, markdown support and more." /> <WorkerStarter title="REST API with Fauna" repo="fauna-labs/fauna-workers" description="Build a fast, globally distributed REST API using Cloudflare Workers and Fauna, the data API for modern applications." /> --- ## Frameworks <WorkerStarter title="Apollo GraphQL Server" repo="cloudflare/workers-graphql-server" description="Lightning-fast, globally distributed Apollo GraphQL server, deployed on the Cloudflare global network using Cloudflare Workers." /> <WorkerStarter title="GraphQL Yoga" repo="the-guild-org/yoga-cloudflare-workers-template" description="The most flexible, fastest, and lightest GraphQL server for all environments, Cloudflare Workers included." /> <WorkerStarter title="Flareact" repo="flareact/flareact" description="Flareact is an edge-rendered React framework built for Cloudflare Workers. It features file-based page routing with dynamic page paths and edge-side data fetching APIs." /> <WorkerStarter title="Sunder" repo="sunderjs/sunder-worker-template" description="Sunder is a minimal and unopinionated framework for Service Workers. This template uses Sunder, TypeScript, Miniflare, esbuild, Jest, and Sass, as well as Workers Sites for static assets." /> --- ## Built with Workers Get inspiration from other sites and projects out there that were built with Cloudflare Workers. 
<LinkButton variant="primary" href="https://workers.cloudflare.com/built-with">
	Built with Workers
</LinkButton>

---

# Languages

URL: https://developers.cloudflare.com/workers/languages/

import { DirectoryListing } from "~/components";

Workers is a polyglot platform and provides first-class support for the following programming languages:

<DirectoryListing />

Workers also supports [WebAssembly](/workers/runtime-apis/webassembly/) (abbreviated as "Wasm") — a binary format that many languages can be compiled to. This allows you to write Workers in programming languages beyond those listed above, including C, C++, Kotlin, Go and more.

---

# Errors and exceptions

URL: https://developers.cloudflare.com/workers/observability/errors/

import { TabItem, Tabs } from "~/components";

Review Workers errors and exceptions.

## Error pages generated by Workers

When a Worker running in production has an error that prevents it from returning a response, the client will receive an error page with an error code, defined as follows:

| Error code | Meaning |
| ---------- | ----------------------------------------------------------------------------------------------------------------- |
| `1101` | Worker threw a JavaScript exception. |
| `1102` | Worker exceeded [CPU time limit](/workers/platform/limits/#cpu-time). |
| `1103` | The owner of this Worker needs to contact [Cloudflare Support](/support/contacting-cloudflare-support/). |
| `1015` | Worker hit the [burst rate limit](/workers/platform/limits/#burst-rate). |
| `1019` | Worker hit [loop limit](#loop-limit). |
| `1021` | Worker has requested a host it cannot access. |
| `1022` | Cloudflare has failed to route the request to the Worker. |
| `1024` | Worker cannot make a subrequest to a Cloudflare-owned IP address. |
| `1027` | Worker exceeded free tier [daily request limit](/workers/platform/limits/#daily-request).
| | `1042` | Worker tried to fetch from another Worker on the same zone, which is [unsupported](/workers/runtime-apis/fetch/). | Other `11xx` errors generally indicate a problem with the Workers runtime itself. Refer to the [status page](https://www.cloudflarestatus.com) if you are experiencing an error. ### Loop limit A Worker cannot call itself or another Worker more than 16 times. In order to prevent infinite loops between Workers, the [`CF-EW-Via`](/fundamentals/reference/http-headers/#cf-ew-via) header's value is an integer that indicates how many invocations are left. Every time a Worker is invoked, the integer will decrement by 1. If the count reaches zero, a [`1019`](#error-pages-generated-by-workers) error is returned. ### "The script will never generate a response" errors Some requests may return a 1101 error with `The script will never generate a response` in the error message. This occurs when the Workers runtime detects that all the code associated with the request has executed and no events are left in the event loop, but a Response has not been returned. #### Cause 1: Unresolved Promises This is most commonly caused by relying on a Promise that is never resolved or rejected, which is required to return a Response. To debug, look for Promises within your code or dependencies' code that block a Response, and ensure they are resolved or rejected. In browsers and other JavaScript runtimes, equivalent code will hang indefinitely, leading to both bugs and memory leaks. The Workers runtime throws an explicit error to help you debug. In the example below, the Response relies on a Promise resolution that never happens. Uncommenting the `resolve` callback solves the issue. ```js null {9} export default { fetch(req) { let response = new Response("Example response"); let { promise, resolve } = Promise.withResolvers(); // If the promise is not resolved, the Workers runtime will // recognize this and throw an error. 
// setTimeout(resolve, 0)

		return promise.then(() => response);
	},
};
```

You can prevent this by enforcing the [`no-floating-promises` eslint rule](https://typescript-eslint.io/rules/no-floating-promises/), which reports when a Promise is created and not properly handled.

#### Cause 2: WebSocket connections that are never closed

If a WebSocket is missing the proper code to close its server-side connection, the Workers runtime will throw a `script will never generate a response` error. In the example below, the `'close'` event from the client is not properly handled by calling `server.close()`, and the error is thrown. In order to avoid this, ensure that the WebSocket's server-side connection is properly closed via an event listener or other server-side logic.

```js null {10}
async function handleRequest(request) {
	let webSocketPair = new WebSocketPair();
	let [client, server] = Object.values(webSocketPair);

	server.accept();
	server.addEventListener("close", () => {
		// This missing line would keep a WebSocket connection open indefinitely
		// and results in "The script will never generate a response" errors
		// server.close();
	});

	return new Response(null, {
		status: 101,
		webSocket: client,
	});
}
```

### "Illegal invocation" errors

The error message `TypeError: Illegal invocation: function called with incorrect this reference` can be a source of confusion. This is typically caused by calling a function that uses `this`, where the value of `this` has been lost.

For example, given an `obj` object with an `obj.foo()` method whose logic relies on `this`, calling the method as `obj.foo();` ensures that `this` properly references the `obj` object. However, assigning the method to a variable, e.g. `const func = obj.foo;`, and then calling that variable, e.g. `func();`, results in `this` being `undefined`. This is because `this` is lost when the method is called as a standalone function. This is standard behavior in JavaScript.
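The same loss of `this` can be reproduced with any plain object. Below is a minimal sketch; the `counter` object is hypothetical and exists purely to illustrate the behavior:

```javascript
// Hypothetical object used only to illustrate how `this` is lost.
const counter = {
	count: 0,
	increment() {
		// Relies on `this` referencing `counter`.
		this.count += 1;
		return this.count;
	},
};

counter.increment(); // called on the object: `this` is `counter`

const increment = counter.increment;
// increment(); // standalone call: `this` is no longer `counter`,
//              // so this would throw in strict mode

// Re-binding the function to its original context restores `this`:
const boundIncrement = counter.increment.bind(counter);
boundIncrement();
```

The bare `increment()` call is left commented out because it fails for exactly the reason described above, while the re-bound `boundIncrement()` works because `this` is fixed to `counter`.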
In practice, this is often seen when destructuring runtime-provided JavaScript objects that have functions relying on the presence of `this`, such as `ctx`.

The following code will error:

```js
export default {
	async fetch(request, env, ctx) {
		// destructuring ctx makes waitUntil lose its 'this' reference
		const { waitUntil } = ctx;
		// waitUntil errors, as it has no 'this'
		waitUntil(somePromise);

		return fetch(request);
	},
};
```

To avoid the error, do not destructure the method, or re-bind it to its original context. The following code will run properly:

```js
export default {
	async fetch(request, env, ctx) {
		// directly calling the method on ctx avoids the error
		ctx.waitUntil(somePromise);

		// alternatively re-binding to ctx via apply, call, or bind avoids the error
		const { waitUntil } = ctx;
		waitUntil.apply(ctx, [somePromise]);
		waitUntil.call(ctx, somePromise);

		const reboundWaitUntil = waitUntil.bind(ctx);
		reboundWaitUntil(somePromise);

		return fetch(request);
	},
};
```

### Cannot perform I/O on behalf of a different request

```
Uncaught (in promise) Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler.
```

This error occurs when you attempt to share input/output (I/O) objects (such as streams, requests, or responses) created by one invocation of your Worker in the context of a different invocation.

In Cloudflare Workers, each invocation is handled independently and has its own execution context. This design ensures optimal performance and security by isolating requests from one another. When you try to share I/O objects between different invocations, you break this isolation. Since these objects are tied to the specific request they were created in, accessing them from another request's handler is not allowed and leads to the error.
This error is most commonly caused by attempting to cache an I/O object, like a [Request](/workers/runtime-apis/request/) in global scope, and then access it in a subsequent request. For example, if you create a Worker and run the following code in local development, and make two requests to your Worker in quick succession, you can reproduce this error: ```js let cachedResponse = null; export default { async fetch(request, env, ctx) { if (cachedResponse) { return cachedResponse; } cachedResponse = new Response("Hello, world!"); await new Promise((resolve) => setTimeout(resolve, 5000)); // Sleep for 5s to demonstrate this particular error case return cachedResponse; }, }; ``` You can fix this by instead storing only the data in global scope, rather than the I/O object itself: ```js let cachedData = null; export default { async fetch(request, env, ctx) { if (cachedData) { return new Response(cachedData); } const response = new Response("Hello, world!"); cachedData = await response.text(); return new Response(cachedData, response); }, }; ``` If you need to share state across requests, consider using [Durable Objects](/durable-objects/). If you need to cache data across requests, consider using [Workers KV](/kv/). ## Errors on Worker upload These errors occur when a Worker is uploaded or modified. | Error code | Meaning | | ---------- | ------------------------------------------------------------------------------------------------------------------------------- | | `10006` | Could not parse your Worker's code. | | `10007` | Worker or [workers.dev subdomain](/workers/configuration/routing/workers-dev/) not found. | | `10015` | Account is not entitled to use Workers. | | `10016` | Invalid Worker name. | | `10021` | Validation Error. Refer to [Validation Errors](/workers/observability/errors/#validation-errors-10021) for details. | | `10026` | Could not parse request body. 
|
| `10027` | Your Worker exceeded the size limit of XX MB (for more details see [Worker size limits](/workers/platform/limits/#worker-size)). |
| `10035` | Multiple attempts to modify a resource at the same time. |
| `10037` | An account has exceeded the number of [Workers allowed](/workers/platform/limits/#number-of-workers). |
| `10052` | A [binding](/workers/runtime-apis/bindings/) is uploaded without a name. |
| `10054` | An environment variable or secret exceeds the [size limit](/workers/platform/limits/#environment-variables). |
| `10055` | The number of environment variables or secrets exceeds the [limit per Worker](/workers/platform/limits/#environment-variables). |
| `10056` | [Binding](/workers/runtime-apis/bindings/) not found. |
| `10068` | The uploaded Worker has no registered [event handlers](/workers/runtime-apis/handlers/). |
| `10069` | The uploaded Worker contains [event handlers](/workers/runtime-apis/handlers/) unsupported by the Workers runtime. |

### Validation Errors (10021)

The 10021 error code includes all errors that occur when you attempt to deploy a Worker, and Cloudflare then attempts to load and run the top-level scope (everything that happens before your Worker's [handler](/workers/runtime-apis/handlers/) is invoked). For example, if you attempt to deploy a broken Worker with invalid JavaScript that would throw a `SyntaxError` — Cloudflare will not deploy your Worker.

Specific error cases include but are not limited to:

#### Worker exceeded the upload size limit

A Worker can be up to 10 MB in size after compression on the Workers Paid plan, and up to 3 MB on the Workers Free plan.

To reduce the upload size of a Worker, you should consider removing unnecessary dependencies and/or using Workers KV, a D1 database or R2 to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code.
Another method to reduce a Worker's file size is to split its functionality across multiple Workers and connect them using [Service bindings](/workers/runtime-apis/bindings/service-bindings/).

#### Script startup exceeded CPU time limit

This means that you are doing work in the top-level scope of your Worker that takes [more than the startup time limit (400ms)](/workers/platform/limits/#worker-startup-time) of CPU time. This is usually a sign of a bug and/or a large performance problem with your code or a dependency you rely on. It is not typical to use more than 400ms of CPU time when your app starts.

The more time your Worker's code spends parsing and executing top-level scope, the slower your Worker will be when you deploy a code change or a new [isolate](/workers/reference/how-workers-works/) is created.

This error is most commonly caused by attempting to perform expensive initialization work directly in top-level (global) scope, rather than either at build time or when your Worker's handler is invoked. For example, attempting to initialize an app by generating or consuming a large schema.

To analyze what is consuming so much CPU time, open Chrome DevTools for your Worker and look at the Profiling and/or Performance panels to understand where time is being spent. Look for anything that consumes a large amount of CPU time, especially on the first request to your Worker.

## Runtime errors

Runtime errors occur within the Workers runtime. They do not return an error page and are not visible to the end user; they are instead detected through logs.

| Error message | Meaning |
| -------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| `Network connection lost` | Connection failure. Catch a `fetch` or binding invocation and retry it.
| | `Memory limit`<br/>`would be exceeded`<br/> `before EOF` | Trying to read a stream or buffer that would take you over the [memory limit](/workers/platform/limits/#memory). | | `daemonDown` | A temporary problem invoking the Worker. | ## Identify errors: Workers Metrics To review whether your application is experiencing any downtime or returning any errors: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker and review your Worker's metrics. A `responseStreamDisconnected` event `outcome` occurs when one end of the connection hangs up during the deferred proxying stage of a Worker request flow. This is regarded as an error for request metrics, and presents in logs as a non-error log entry. It commonly appears for longer lived connections such as WebSockets. ## Debug exceptions After you have identified your Workers application is returning exceptions, use `wrangler tail` to inspect and fix the exceptions. {/* <!-- TODO: include example --> */} Exceptions will show up under the `exceptions` field in the JSON returned by `wrangler tail`. After you have identified the exception that is causing errors, redeploy your code with a fix, and continue tailing the logs to confirm that it is fixed. ## Set up a logging service A Worker can make HTTP requests to any HTTP service on the public Internet. You can use a service like [Sentry](https://sentry.io) to collect error logs from your Worker, by making an HTTP request to the service to report the error. Refer to your service’s API documentation for details on what kind of request to make. When using an external logging strategy, remember that outstanding asynchronous tasks are canceled as soon as a Worker finishes sending its main response body to the client. 
To ensure that a logging subrequest completes, pass the request promise to [`event.waitUntil()`](https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil). For example:

<Tabs> <TabItem label="Module Worker" icon="seti:javascript">

```js
export default {
	async fetch(request, env, ctx) {
		function postLog(data) {
			return fetch("https://log-service.example.com/", {
				method: "POST",
				body: data,
			});
		}

		// `stack` is a placeholder for the log data you want to send.
		// Without ctx.waitUntil(), the `postLog` function may or may not complete.
		ctx.waitUntil(postLog(stack));
		return fetch(request);
	},
};
```

</TabItem> <TabItem label="Service Worker" icon="seti:javascript">

```js
addEventListener("fetch", (event) => {
	event.respondWith(handleEvent(event));
});

async function handleEvent(event) {
	// ...

	// `stack` is a placeholder for the log data you want to send.
	// Without event.waitUntil(), the `postLog` function may or may not complete.
	event.waitUntil(postLog(stack));
	return fetch(event.request);
}

function postLog(data) {
	return fetch("https://log-service.example.com/", {
		method: "POST",
		body: data,
	});
}
```

</TabItem> </Tabs>

## Go to origin on error

By using [`event.passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception), a Workers application will forward requests to your origin if an exception is thrown during the Worker's execution. This allows you to add logging, tracking, or other features with Workers, without degrading your application's functionality.

<Tabs> <TabItem label="Module Worker" icon="seti:javascript">

```js
export default {
	async fetch(request, env, ctx) {
		ctx.passThroughOnException();

		// an error here will return the origin response, as if the Worker wasn't present
		return fetch(request);
	},
};
```

</TabItem> <TabItem label="Service Worker" icon="seti:javascript">

```js
addEventListener("fetch", (event) => {
	event.passThroughOnException();
	event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
	// An error here will return the origin response, as if the Worker wasn’t present.
	// ...
return fetch(request); } ``` </TabItem> </Tabs> ## Related resources - [Log from Workers](/workers/observability/logs/) - Learn how to log your Workers. - [Logpush](/workers/observability/logs/logpush/) - Learn how to push Workers Trace Event Logs to supported destinations. - [RPC error handling](/workers/runtime-apis/rpc/error-handling/) - Learn how to handle errors from remote-procedure calls. --- # Observability URL: https://developers.cloudflare.com/workers/observability/ import { DirectoryListing } from "~/components"; Understand how your Worker projects are performing via logs, traces, and other data sources. <DirectoryListing /> --- # Metrics and analytics URL: https://developers.cloudflare.com/workers/observability/metrics-and-analytics/ import { GlossaryTooltip } from "~/components" There are two graphical sources of information about your Workers traffic at a given time: Workers metrics and zone-based Workers analytics. Workers metrics can help you diagnose issues and understand your Workers' workloads by showing performance and usage of your Workers. If your Worker runs on a route on a zone, or on a few zones, Workers metrics will show how much traffic your Worker is handling on a per-zone basis, and how many requests your site is getting. Zone analytics show how much traffic all Workers assigned to a zone are handling. ## Workers metrics Workers metrics aggregate request data for an individual Worker (if your Worker is running across multiple domains, and on `*.workers.dev`, metrics will aggregate requests across them). To view your Worker's metrics: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Compute (Workers)**. 3. In **Overview**, select your Worker to view its metrics. There are two metrics that can help you understand the health of your Worker in a given moment: requests success and error metrics, and invocation statuses. 
### Requests

The first graph shows historical request counts from the Workers runtime broken down into successful requests, errored requests, and subrequests.

* **Total**: All incoming requests registered by a Worker. Requests blocked by [WAF](https://www.cloudflare.com/waf/) or other security features will not count.
* **Success**: Requests that returned a Success or Client Disconnected invocation status.
* **Errors**: Requests that returned a Script Threw Exception, Exceeded Resources, or Internal Error invocation status — refer to [Invocation Statuses](/workers/observability/metrics-and-analytics/#invocation-statuses) for a breakdown of where your errors are coming from.

Request traffic data may display a drop-off near the last few minutes displayed in the graph for time ranges less than six hours. This does not reflect a drop in traffic, but a slight delay in aggregation and metrics delivery.

### Subrequests

Subrequests are requests triggered by calling `fetch` from within a Worker. A subrequest that throws an uncaught error will not be counted.

* **Total**: All subrequests triggered by calling `fetch` from within a Worker.
* **Cached**: The number of cached responses returned.
* **Uncached**: The number of uncached responses returned.

### Wall time per execution

<GlossaryTooltip term="wall-clock time">Wall time</GlossaryTooltip> represents the elapsed time in milliseconds between the start of a Worker invocation, and when the Workers runtime determines that no more JavaScript needs to run. Specifically, the Wall time per execution chart measures the wall time that the JavaScript context remained open — including time spent waiting on I/O, and time spent executing in your Worker's [`waitUntil()`](/workers/runtime-apis/context/#waituntil) handler.
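To make the distinction concrete, wall time can be measured around any asynchronous function. The helper below is an illustrative sketch, not Workers-specific code, and `measureWallTime` is a hypothetical name:

```js
// Illustrative sketch: wall time is elapsed real time, so it includes time
// spent awaiting I/O (here simulated with a timer), not just CPU work.
async function measureWallTime(fn) {
	const start = Date.now();
	await fn();
	return Date.now() - start;
}

// Awaiting a 50 ms timer consumes roughly 50 ms of wall time even though
// almost no CPU time is used while the timer is pending.
measureWallTime(() => new Promise((resolve) => setTimeout(resolve, 50))).then(
	(ms) => console.log(`wall time: ~${ms} ms`),
);
```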
Wall time is not the same as the time it takes your Worker to send the final byte of a response back to the client - wall time can be higher, if tasks within `waitUntil()` are still running after the response has been sent, or it can be lower. For example, when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run, and closes the JavaScript context before all the bytes have passed through and been sent. The Wall Time per execution chart shows historical wall time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). ### CPU Time per execution The CPU Time per execution chart shows historical CPU time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). In some cases, higher quantiles may appear to exceed [CPU time limits](/workers/platform/limits/#cpu-time) without generating invocation errors because of a mechanism in the Workers runtime that allows rollover CPU time for requests below the CPU limit. ### Execution duration (GB-seconds) The Duration per request chart shows historical [duration](/workers/platform/limits/#duration) per Worker invocation. The data is broken down into relevant quantiles, similar to the CPU time chart. Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). Understanding duration on your Worker is especially useful when you are intending to do a significant amount of computation on the Worker itself. ### Invocation statuses To review invocation statuses: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. 
Select **Workers & Pages**.
3. Select your Worker.
4. Find the **Summary** graph in **Metrics**.
5. Select **Errors**.

Worker invocation statuses indicate whether a Worker executed successfully or failed to generate a response in the Workers runtime. Invocation statuses differ from HTTP status codes. In some cases, a Worker invocation succeeds but does not generate a successful HTTP status because of another error encountered outside of the Workers runtime. Some invocation statuses result in a [Workers error code](/workers/observability/errors/#error-pages-generated-by-workers) being returned to the client.

| Invocation status | Definition | Workers error code | GraphQL field |
| ----------------- | ---------- | ------------------ | ------------- |
| Success | Worker executed successfully | | `success` |
| Client disconnected | HTTP client (that is, the browser) disconnected before the request completed | | `clientDisconnected` |
| Worker threw exception | Worker threw an unhandled JavaScript exception | 1101 | `scriptThrewException` |
| Exceeded resources¹ | Worker exceeded runtime limits | 1102, 1027 | `exceededResources` |
| Internal error² | Workers runtime encountered an error | | `internalError` |

¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-limits). The most common cause is excessive CPU time, but it can also be caused by a Worker exceeding startup time or free tier limits.

² The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by an issue with the Worker code or any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes.
If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com](https://www.cloudflarestatus.com/). To further investigate exceptions, use [`wrangler tail`](/workers/wrangler/commands/#tail). ### Request duration The request duration chart shows how long it took your Worker to respond to requests, including code execution and time spent waiting on I/O. The request duration chart is currently only available when your Worker has [Smart Placement](/workers/configuration/smart-placement) enabled. In contrast to [execution duration](/workers/observability/metrics-and-analytics/#execution-duration-gb-seconds), which measures only the time a Worker is active, request duration measures from the time a request comes into a data center until a response is delivered. The data shows the duration for requests with Smart Placement enabled compared to those with Smart Placement disabled (by default, 1% of requests are routed with Smart Placement disabled). The chart shows a histogram with duration across the x-axis and the percentage of requests that fall into the corresponding duration on the y-axis. ### Metrics retention Worker metrics can be inspected for up to three months in the past in maximum increments of one week. ## Zone analytics Zone analytics aggregate request data for all Workers assigned to any [routes](/workers/configuration/routing/routes/) defined for a zone. To review zone metrics: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select your site. 3. In **Analytics & Logs**, select **Workers**. Zone data can be scoped by time range within the last 30 days. The dashboard includes charts and information described below. ### Subrequests This chart shows subrequests — requests triggered by calling `fetch` from within a Worker — broken down by cache status. * **Uncached**: Requests answered directly by your origin server or other servers responding to subrequests. 
* **Cached**: Requests answered by Cloudflare’s [cache](https://www.cloudflare.com/learning/cdn/what-is-caching/). As Cloudflare caches more of your content, it accelerates content delivery and reduces load on your origin. ### Bandwidth This chart shows historical bandwidth usage for all Workers on a zone broken down by cache status. ### Status codes This chart shows historical requests for all Workers on a zone broken down by HTTP status code. ### Total requests This chart shows historical data for all Workers on a zone broken down by successful requests, failed requests, and subrequests. These request types are categorized by HTTP status code where `200`-level requests are successful and `400` to `500`-level requests are failed. ## GraphQL Worker metrics are powered by GraphQL. Learn more about querying our data sets in the [Querying Workers Metrics with GraphQL tutorial](/analytics/graphql-api/tutorials/querying-workers-metrics/). --- # Source maps and stack traces URL: https://developers.cloudflare.com/workers/observability/source-maps/ import { Render, WranglerConfig } from "~/components" <Render file="source-maps" product="workers" /> :::caution Support for uploading source maps is available now in open beta. Minimum required Wrangler version: 3.46.0. ::: ## Source Maps To enable source maps, add the following to your Worker's [Wrangler configuration](/workers/wrangler/configuration/): <WranglerConfig> ```toml upload_source_maps = true ``` </WranglerConfig> When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2). ​​ :::note Miniflare can also [output source maps](https://miniflare.dev/developing/source-maps) for use in local development or [testing](/workers/testing/integration-testing/#miniflares-api). 
:::

## Stack traces

When your Worker throws an uncaught exception, we fetch the source map and use it to map the stack trace of the exception back to lines of your Worker’s original source code. You can then view the stack trace when streaming [real-time logs](/workers/observability/logs/real-time-logs/) or in [Tail Workers](/workers/observability/logs/tail-workers/).

:::note
The source map is retrieved after your Worker invocation completes — it's an asynchronous process that does not impact your Worker's CPU utilization or performance. Source maps are not accessible inside the Worker at runtime; if you `console.log()` the [stack property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/stack) within a Worker, you will not get a deobfuscated stack trace.
:::

When Cloudflare attempts to remap a stack trace to the Worker's source map, it does so line-by-line, remapping as much as possible. If a line of the stack trace cannot be remapped for any reason, Cloudflare will leave that line of the stack trace unchanged, and continue to the next line of the stack trace.

## Related resources

* [Tail Workers](/workers/observability/logs/logpush/) - Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
* [Real-time logs](/workers/observability/logs/real-time-logs/) - Learn how to capture Workers logs in real-time.

---

# Platform

URL: https://developers.cloudflare.com/workers/platform/

import { DirectoryListing } from "~/components";

Pricing, limits and other information about the Workers platform.

<DirectoryListing />

---

# Betas

URL: https://developers.cloudflare.com/workers/platform/betas/

These are the current alphas and betas relevant to the Cloudflare Workers platform.

* **Public alphas and betas are openly available**, but may have limitations and caveats due to their early stage of development.
* Private alphas and betas require explicit access to be granted.
Refer to the documentation to join the relevant product waitlist.

| Product | Private Beta | Public Beta | More Info |
| ------- | ------------ | ----------- | --------- |
| Email Workers | | ✅ | [Docs](/email-routing/email-workers/) |
| Green Compute | | ✅ | [Blog](https://blog.cloudflare.com/earth-day-2022-green-compute-open-beta/) |
| Pub/Sub | ✅ | | [Docs](/pub-sub) |
| [TCP Sockets](/workers/runtime-apis/tcp-sockets/) | | ✅ | [Docs](/workers/runtime-apis/tcp-sockets) |

---

# Known issues

URL: https://developers.cloudflare.com/workers/platform/known-issues/

Below are some known bugs and issues to be aware of when using Cloudflare Workers.

## Route specificity

* When defining route specificity, a trailing `/*` in your pattern may not act as expected. Consider two different Workers, each deployed to the same zone. Worker A is assigned the `example.com/images/*` route and Worker B is given the `example.com/images*` route pattern.

With these in place, here is how the following URLs will be resolved:

```
// (A) example.com/images/*
// (B) example.com/images*

"example.com/images" // -> B
"example.com/images123" // -> B
"example.com/images/hello" // -> B
```

You will notice that all examples trigger Worker B. This includes the final example, which demonstrates the unexpected behavior.

When adding a wildcard on a subdomain, here is how the following URLs will be resolved:

```
// (A) *.example.com/a
// (B) a.example.com/*

"a.example.com/a" // -> B
```

## wrangler dev

* When running `wrangler dev --remote`, all outgoing requests are given the `cf-workers-preview-token` header, which Cloudflare recognizes as a preview request. This applies to the entire Cloudflare network, so HTTP requests to other Cloudflare zones are currently discarded for security reasons.
To enable a workaround, insert the following code into your Worker script:

```js
const request = new Request(url, incomingRequest);
request.headers.delete('cf-workers-preview-token');
return await fetch(request);
```

## Fetch API in CNAME setup

When you make a subrequest using [`fetch()`](/workers/runtime-apis/fetch/) from a Worker, the Cloudflare DNS resolver is used. When a zone has a [Partial (CNAME) setup](/dns/zone-setups/partial-setup/), all hostnames that the Worker needs to be able to resolve require a dedicated DNS entry in Cloudflare's DNS setup. Otherwise, the Fetch API call will fail with status code [530 (1016)](/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-1xxx-errors#error-1016-origin-dns-error).

Setup with missing DNS records in Cloudflare DNS:

```
// Zone in partial setup: example.com
// DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ...
// DNS records at Cloudflare DNS: sub1.example.com

"sub1.example.com/" // -> Can be resolved by Fetch API
"sub2.example.com/" // -> Cannot be resolved by Fetch API, will lead to 530 status code
```

After adding `sub2.example.com` to Cloudflare DNS:

```
// Zone in partial setup: example.com
// DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ...
// DNS records at Cloudflare DNS: sub1.example.com, sub2.example.com

"sub1.example.com/" // -> Can be resolved by Fetch API
"sub2.example.com/" // -> Can be resolved by Fetch API
```

## Fetch to IP addresses

For Workers subrequests, requests can only be made to URLs, not to IP addresses directly. To overcome this limitation, [add an A or AAAA record to your zone](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/) and then fetch that resource.
For example, in the zone `example.com` create a record of type `A` with the name `server` and value `192.0.2.1`, and then use:

```js
await fetch('http://server.example.com')
```

Do not use:

```js
await fetch('http://192.0.2.1')
```

---

# Limits

URL: https://developers.cloudflare.com/workers/platform/limits/

import { Render } from "~/components";

## Account plan limits

| Feature | Workers Free | Workers Paid |
| ------- | ------------ | ------------ |
| [Subrequests](#subrequests) | 50/request | 1000/request |
| [Simultaneous outgoing<br/>connections/request](#simultaneous-open-connections) | 6 | 6 |
| [Environment variables](#environment-variables) | 64/Worker | 128/Worker |
| [Environment variable<br/>size](#environment-variables) | 5 KB | 5 KB |
| [Worker size](#worker-size) | 3 MB | 10 MB |
| [Worker startup time](#worker-startup-time) | 400 ms | 400 ms |
| [Number of Workers](#number-of-workers)<sup>1</sup> | 100 | 500 |
| Number of [Cron Triggers](/workers/configuration/cron-triggers/)<br/>per account | 5 | 250 |

<sup>1</sup> If you are running into limits, your project may be a good fit for [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/).

<Render file="limits_increase" />

---

## Request limits

URLs have a limit of 16 KB. Request headers observe a total limit of 32 KB, but each header is limited to 16 KB.

Cloudflare has network-wide limits on the request body size. This limit is tied to your Cloudflare account's plan, which is separate from your Workers plan. When the request body size of your `POST`/`PUT`/`PATCH` requests exceeds your plan's limit, the request is rejected with a `(413) Request entity too large` error.
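To illustrate how these plan-tied limits behave, the sketch below compares a request's `Content-Length` in bytes against the maximum body size for a plan, using the values from the table in this section. The helper and its names are hypothetical, not part of any Cloudflare API:

```js
// Hypothetical helper, not a Cloudflare API. The per-plan limits mirror
// the maximum body size table in this section (values in MB).
const BODY_LIMITS_MB = { free: 100, pro: 100, business: 200, enterprise: 500 };

function exceedsBodyLimit(contentLengthBytes, plan) {
	const limitBytes = BODY_LIMITS_MB[plan] * 1024 * 1024;
	// Requests larger than the plan limit are rejected with a 413 error.
	return contentLengthBytes > limitBytes;
}
```

For example, a 150 MB `POST` body exceeds the limit on the Free plan but is within the limit on the Business plan.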
Cloudflare Enterprise customers may contact their account team or [Cloudflare Support](/support/contacting-cloudflare-support/) to have a request body limit beyond 500 MB. | Cloudflare Plan | Maximum body size | | --------------- | ------------------- | | Free | 100 MB | | Pro | 100 MB | | Business | 200 MB | | Enterprise | 500 MB (by default) | --- ## Response limits Response headers observe a total limit of 32 KB, but each header is limited to 16 KB. Cloudflare does not enforce response limits on response body sizes, but cache limits for [our CDN are observed](/cache/concepts/default-cache-behavior/). Maximum file size is 512 MB for Free, Pro, and Business customers and 5 GB for Enterprise customers. --- ## Worker limits | Feature | Workers Free | Workers Paid | | ------------------------ | ------------------------------------------ | ---------------- | | [Request](#request) | 100,000 requests/day<br/>1000 requests/min | No limit | | [Worker memory](#memory) | 128 MB | 128 MB | | [CPU time](#cpu-time) | 10 ms | 30 s HTTP request <br/> 15 min [Cron Trigger](/workers/configuration/cron-triggers/) | | [Duration](#duration) | No limit | No limit for Workers. <br/>15 min duration limit for [Cron Triggers](/workers/configuration/cron-triggers/), [Durable Object Alarms](/durable-objects/api/alarms/) and [Queue Consumers](/queues/configuration/javascript-apis/#consumer) | ### Duration Duration is a measurement of wall-clock time — the total amount of time from the start to end of an invocation of a Worker. There is no hard limit on the duration of a Worker. As long as the client that sent the request remains connected, the Worker can continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client request are canceled. Use [`event.waitUntil()`](/workers/runtime-apis/handlers/fetch/) to delay cancellation for another 30 seconds or until the promise passed to `waitUntil()` completes. 
:::note Cloudflare updates the Workers runtime a few times per week. When this happens, any in-flight requests are given a grace period of 30 seconds to finish. If a request does not finish within this time, it is terminated. While your application should follow the best practice of handling disconnects by retrying requests, this scenario is extremely improbable. To encounter it, you would need to have a request that takes longer than 30 seconds that also happens to intersect with the exact time an update to the runtime is happening. ::: ### CPU time CPU time is the amount of time the CPU actually spends doing work, during a given request. Most Workers requests consume less than a millisecond of CPU time. It is rare to find normally operating Workers that exceed the CPU time limit. <Render file="isolate-cpu-flexibility" /> Using DevTools locally can help identify CPU intensive portions of your code. See the [CPU profiling with DevTools documentation](/workers/observability/dev-tools/cpu-usage/) to learn more. You can also set a custom limit on the amount of CPU time that can be used during each invocation of your Worker. To do so, navigate to the Workers section in the Cloudflare dashboard. Select the specific Worker you wish to modify, then click on the "Settings" tab where you can adjust the CPU time limit. :::note Scheduled Workers ([Cron Triggers](/workers/configuration/cron-triggers/)) have different limits on CPU time based on the schedule interval. When the schedule interval is less than 1 hour, a Scheduled Worker may run for up to 30 seconds. When the schedule interval is more than 1 hour, a scheduled Worker may run for up to 15 minutes. 
:::

---

## Cache API limits

| Feature | Workers Free | Workers Paid |
| ---------------------------------------- | ------------ | ------------ |
| [Maximum object size](#cache-api-limits) | 512 MB | 512 MB |
| [Calls/request](#cache-api-limits) | 50 | 1,000 |

Calls/request means the number of calls to the `put()`, `match()`, or `delete()` Cache API methods per request, using the same quota as subrequests (`fetch()`).

:::note
The size of chunked response bodies (`Transfer-Encoding: chunked`) is not known in advance. As a result, `.put()`ing such responses will block subsequent `.put()`s from starting until the current `.put()` completes.
:::

---

## Request

Workers automatically scale onto thousands of Cloudflare global network servers around the world. There is no general limit to the number of requests per second Workers can handle.

Cloudflare’s abuse protection methods do not affect well-intentioned traffic. However, if you send many thousands of requests per second from a small number of client IP addresses, you can inadvertently trigger Cloudflare’s abuse protection. If you expect to receive `1015` errors in response to traffic or expect your application to incur these errors, [contact Cloudflare support](/support/contacting-cloudflare-support/) to increase your limit. Cloudflare's anti-abuse Workers Rate Limiting does not apply to Enterprise customers.

You can also confirm if you have been rate limited by anti-abuse Worker Rate Limiting by logging into the Cloudflare dashboard, selecting your account and zone, and going to **Security** > **Events**. Find the event and expand it. If the **Rule ID** is `worker`, this confirms that it is the anti-abuse Worker Rate Limiting.

The burst rate and daily request limits apply at the account level, meaning that requests on your `*.workers.dev` subdomain count toward the same limit as your zones. Upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to automatically lift these limits.
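When a programmatic client does hit these limits, the rate-limited call surfaces as an HTTP `429` response, which the caller can retry with exponential backoff. The following is a hypothetical client-side sketch; `doFetch` stands in for a real `fetch()` call to your Worker:

```js
// Hypothetical client-side sketch: retry a rate-limited call with
// exponential backoff. `doFetch` stands in for a real fetch() call.
async function fetchWithBackoff(doFetch, maxRetries = 3, baseDelayMs = 100) {
	for (let attempt = 0; ; attempt++) {
		const resp = await doFetch();
		if (resp.status !== 429 || attempt >= maxRetries) return resp;
		// Wait 100 ms, 200 ms, 400 ms, ... before the next attempt.
		await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
	}
}
```

Backing off, rather than immediately retrying, avoids prolonging the burst that triggered the limit in the first place.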
:::caution
If you are currently being rate limited, upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to lift burst rate and daily request limits.
:::

### Burst rate

Accounts using the Workers Free plan are subject to a burst rate limit of 1,000 requests per minute. Users visiting a rate-limited site will receive a Cloudflare `1015` error page. However, if you are calling your Worker programmatically, you can detect the rate limit page and handle it yourself by looking for HTTP status code `429`.

Workers being rate-limited by Anti-Abuse Protection are also visible from the Cloudflare dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account and your website.
2. Select **Security** > **Events** > scroll to **Sampled logs**.
3. Review the log for a Web Application Firewall block event with a `ruleID` of `worker`.

### Daily request

Accounts using the Workers Free plan are subject to a daily request limit of 100,000 requests. Free plan daily request counts reset at midnight UTC.

A Worker that fails as a result of daily request limit errors can be configured by toggling its corresponding [route](/workers/configuration/routing/routes/) in two modes: 1) Fail open and 2) Fail closed.

#### Fail open

Routes in fail open mode will bypass the failing Worker and prevent it from operating on incoming traffic. Incoming requests will behave as if there was no Worker.

#### Fail closed

Routes in fail closed mode will display a Cloudflare `1027` error page to visitors, signifying the Worker has been temporarily disabled. Cloudflare recommends this option if your Worker is performing security-related tasks.

---

## Memory

Only one Workers instance runs on each of the many servers in Cloudflare's global network. Each Workers instance can consume up to 128 MB of memory. Use [global variables](/workers/runtime-apis/web-standards/) to persist data between requests on individual nodes.
Note, however, that nodes are occasionally evicted from memory.

If a Worker processes a request that pushes the Worker over the 128 MB limit, the Cloudflare Workers runtime may cancel one or more requests. To view these errors, as well as CPU limit overages:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Workers & Pages** and in **Overview**, select the Worker you would like to investigate.
3. Under **Metrics**, select **Errors** > **Invocation Statuses** and examine **Exceeded Memory**.

Use the [TransformStream API](/workers/runtime-apis/streams/transformstream/) to stream responses if you are concerned about memory usage. This avoids loading an entire response into memory.

Using DevTools locally can help identify memory leaks in your code. See the [memory profiling with DevTools documentation](/workers/observability/dev-tools/memory-usage/) to learn more.

---

## Subrequests

A subrequest is any request that a Worker makes, either to Internet resources using the [Fetch API](/workers/runtime-apis/fetch/) or to other Cloudflare services like [R2](/r2/), [KV](/kv/), or [D1](/d1/).

### Worker-to-Worker subrequests

To make subrequests from your Worker to another Worker on your account, use [Service Bindings](/workers/runtime-apis/bindings/service-bindings/). Service bindings allow you to send HTTP requests to another Worker without those requests going over the Internet.

If you attempt to use global [`fetch()`](/workers/runtime-apis/fetch/) to make a subrequest to another Worker on your account that runs on the same [zone](/fundamentals/setup/accounts-and-zones/#zones), without service bindings, the request will fail.

If you make a subrequest from your Worker to a target Worker that runs on a [Custom Domain](/workers/configuration/routing/custom-domains/#worker-to-worker-communication) rather than a route, the request will be allowed.

### How many subrequests can I make?
You can make 50 subrequests per request on Workers Free, and 1,000 subrequests per request on Workers Paid. Each subrequest in a redirect chain counts against this limit. This means that the number of subrequests a Worker makes could be greater than the number of `fetch(request)` calls in the Worker.

For subrequests to internal services like Workers KV and Durable Objects, the subrequest limit is 1,000 per request, regardless of the [usage model](/workers/platform/pricing/#workers) configured for the Worker.

### How long can a subrequest take?

There is no set limit on the amount of real time a Worker may use. As long as the client that sent a request remains connected, the Worker may continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client’s request are proactively canceled. If the Worker passed a promise to [`event.waitUntil()`](/workers/runtime-apis/handlers/fetch/), cancellation will be delayed until the promise has completed or until an additional 30 seconds have elapsed, whichever happens first.

---

## Simultaneous open connections

You can open up to six connections simultaneously, for each invocation of your Worker. The connections opened by the following API calls all count toward this limit:

- The `fetch()` method of the [Fetch API](/workers/runtime-apis/fetch/).
- The `get()`, `put()`, `list()`, and `delete()` methods of [Workers KV namespace objects](/kv/api/).
- The `put()`, `match()`, and `delete()` methods of [Cache objects](/workers/runtime-apis/cache/).
- The `list()`, `get()`, `put()`, `delete()`, and `head()` methods of [R2](/r2/).
- The `send()` and `sendBatch()` methods of [Queues](/queues/).
- Opening a TCP socket using the [`connect()`](/workers/runtime-apis/tcp-sockets/) API.

Once an invocation has six connections open, it can still attempt to open additional connections.
- These attempts are put in a pending queue — the connections will not be initiated until one of the currently open connections has closed.
- Earlier connections can delay later ones: if a Worker tries to make many simultaneous subrequests, its later subrequests may appear to take longer to start.

If you have cases in your application that use `fetch()` but that do not require consuming the response body, you can prevent the unread response body from consuming a concurrent connection by using `response.body.cancel()`.

For example, if you want to check whether the HTTP response code is successful (2xx) before consuming the body, you should explicitly cancel the pending response body:

```ts
let resp = await fetch(url);

// Only read the response body for successful (2xx) responses
if (resp.ok) {
	// Call resp.json(), resp.text() or otherwise process the body
} else {
	// Explicitly cancel it
	resp.body.cancel();
}
```

This will free up an open connection.

If the system detects that a Worker is deadlocked on open connections — for example, if the Worker has pending connection attempts but has no in-progress reads or writes on the connections that it already has open — then the least-recently-used open connection will be canceled to unblock the Worker.

If the Worker later attempts to use a canceled connection, an exception will be thrown. These exceptions should rarely occur in practice, though, since it is uncommon for a Worker to open a connection that it does not have an immediate use for.

:::note
Simultaneous Open Connections are measured from the top-level request, meaning any connections open from Workers sharing resources (for example, Workers triggered via [Service bindings](/workers/runtime-apis/bindings/service-bindings/)) will share the simultaneous open connection limit.
:::

---

## Environment variables

The maximum number of environment variables (secret and text combined) for a Worker is 128 variables on the Workers Paid plan, and 64 variables on the Workers Free plan. There is no limit to the number of environment variables per account.

Each environment variable has a size limitation of 5 KB.

---

## Worker size

A Worker can be up to 10 MB in size _after compression_ on the Workers Paid plan, and up to 3 MB on the Workers Free plan.

You can assess the size of your Worker bundle after compression by performing a dry-run with `wrangler` and reviewing the final compressed (`gzip`) size output by `wrangler`:

```sh
wrangler deploy --outdir bundled/ --dry-run
```

```sh output
# Output will resemble the below:
Total Upload: 259.61 KiB / gzip: 47.23 KiB
```

Note that larger Worker bundles can impact the start-up time of the Worker, as the Worker needs to be loaded into memory. You should consider removing unnecessary dependencies and/or using [Workers KV](/kv/), a [D1 database](/d1/) or [R2](/r2/) to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code.

---

## Worker startup time

A Worker must be parsed, and its global scope (top-level code outside of any handlers) executed, within 400 ms. Worker size can impact startup because there is more code to parse and evaluate. Avoiding expensive code in the global scope can keep startup efficient as well.

You can measure your Worker's startup time by deploying it to Cloudflare using [Wrangler](/workers/wrangler/). When you run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`, Wrangler will output the startup time of your Worker in the command-line output, using the `startup_time_ms` field in the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/).
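One way to stay under this startup limit is to defer expensive initialization out of the global scope and into the first request, so that only cheap declarations run during startup. A minimal sketch — the index-building function and data here are illustrative placeholders, not part of the Workers API:

```ts
// Anti-pattern: doing this at the top level runs during the 400 ms
// startup window on every cold start:
//   const index = buildSearchIndex(words);

// Better: declare cheap state globally and build it lazily on first use.
let index: Map<string, number> | null = null;

function buildSearchIndex(words: string[]): Map<string, number> {
  const m = new Map<string, number>();
  words.forEach((word, i) => m.set(word, i));
  return m;
}

function lookup(word: string, words: string[]): number | undefined {
  if (index === null) {
    // Paid once, on the first request that needs it — not at startup.
    index = buildSearchIndex(words);
  }
  return index.get(word);
}
```

The same pattern applies to parsing large embedded data, compiling regular expressions, or constructing clients: keep the global scope to declarations, and do the heavy work inside a handler.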
If you are having trouble staying under this limit, consider [profiling using DevTools](/workers/observability/dev-tools/) locally to learn how to optimize your code. <Render file="limits_increase" /> --- ## Number of Workers You can have up to 500 Workers on your account on the Workers Paid plan, and up to 100 Workers on the Workers Free plan. If you need more than 500 Workers, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/). --- ## Routes and domains ### Number of routes per zone Each zone has a limit of 1,000 [routes](/workers/configuration/routing/routes/). If you require more than 1,000 routes on your zone, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit. ### Number of custom domains per zone Each zone has a limit of 100 [custom domains](/workers/configuration/routing/custom-domains/). If you require more than 100 custom domains on your zone, consider using a wildcard [route](/workers/configuration/routing/routes/) or request an increase to this limit. ### Number of routed zones per Worker When configuring [routing](/workers/configuration/routing/), the maximum number of zones that can be referenced by a Worker is 1,000. If you require more than 1,000 zones on your Worker, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit. --- ## Image Resizing with Workers When using Image Resizing with Workers, refer to [Image Resizing documentation](/images/transform-images/) for more information on the applied limits. --- ## Log size You can emit a maximum of 128 KB of data (across `console.log()` statements, exceptions, request metadata and headers) to the console for a single request. 
After you exceed this limit, further context associated with the request will not be recorded in logs, will not appear when tailing the logs of your Worker, and will not be available within a [Tail Worker](/workers/observability/logs/tail-workers/).

Refer to the [Workers Trace Event Logpush documentation](/workers/observability/logs/logpush/#limits) for information on the maximum size of fields sent to logpush destinations.

---

## Unbound and Bundled plan limits

:::note
Unbound and Bundled plans have been deprecated and are no longer available for new accounts.
:::

If your Worker is on an Unbound plan, your limits are exactly the same as the Workers Paid plan.

If your Worker is on a Bundled plan, your limits are the same as the Workers Paid plan except for the following differences:

* Your limit for [subrequests](/workers/platform/limits/#subrequests) is 50/request
* Your limit for [CPU time](/workers/platform/limits/#cpu-time) is 50ms for HTTP requests and 50ms for [Cron Triggers](/workers/configuration/cron-triggers/)
* You have no [Duration](/workers/platform/limits/#duration) limits for [Cron Triggers](/workers/configuration/cron-triggers/), [Durable Object alarms](/durable-objects/api/alarms/), or [Queue consumers](/queues/configuration/javascript-apis/#consumer)
* Your Cache API limit for calls/requests is 50

---

## Related resources

Review other developer platform resource limits.

- [KV limits](/kv/platform/limits/)
- [Durable Object limits](/durable-objects/platform/limits/)
- [Queues limits](/queues/platform/limits/)

---

# Pricing

URL: https://developers.cloudflare.com/workers/platform/pricing/

import { GlossaryTooltip, Render } from "~/components";

By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions and Workers KV. Read more about the [Free plan limits](/workers/platform/limits/#worker-limits).
The Workers Paid plan includes Workers, Pages Functions, Workers KV, and Durable Objects usage for a minimum charge of $5 USD per month for an account. The plan includes increased initial usage allotments, with clear charges for usage that exceeds the base plan. All included usage is on a monthly basis. :::note[Pages Functions billing] All [Pages Functions](/pages/functions/) are billed as Workers. All pricing and inclusions in this document apply to Pages Functions. Refer to [Functions Pricing](/pages/functions/pricing/) for more information on Pages Functions pricing. ::: ## Workers Users on the Workers Paid plan have access to the Standard usage model. Workers Enterprise accounts are billed based on the usage model specified in their contract. To switch to the Standard usage model, reach out to your CSM. | | Requests<sup>1, 2</sup> | Duration | CPU time | | ------------ | ------------------------------------------------------------------ | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | **Free** | 100,000 per day | No charge for duration | 10 milliseconds of CPU time per invocation | | **Standard** | 10 million included per month <br /> +$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month<br /> +$0.02 per additional million CPU milliseconds<br /><br/> Max of 30 seconds of CPU time per invocation <br /> Max of 15 minutes of CPU time per [Cron Trigger](/workers/configuration/cron-triggers/) or [Queue Consumer](/queues/configuration/javascript-apis/#consumer) invocation | <sup>1</sup> Inbound requests to your Worker. 
Cloudflare does not bill for [subrequests](/workers/platform/limits/#subrequests) you make from your Worker.

<sup>2</sup> Requests to static assets are free and unlimited.

### Example pricing

#### Example 1

A Worker that serves 15 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:

| | Monthly Costs | Formula |
| ---------------- | ------------- | --------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00 | |
| **Requests** | $1.50 | (15,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 |
| **CPU time** | $1.50 | ((7 ms of CPU time per request \* 15,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total** | $8.00 | |

#### Example 2

A project that serves 15 million requests per month, with 80% (12 million) of requests serving [static assets](/workers/static-assets/) and the remaining 20% invoking dynamic Worker code. The Worker uses an average of 7 milliseconds (ms) of CPU time per request. Requests to static assets are free and unlimited. This project would have the following estimated costs:

| | Monthly Costs | Formula |
| ----------------------------- | ------------- | ------- |
| **Subscription** | $5.00 | |
| **Requests to static assets** | $0 | - |
| **Requests to Worker** | $0 | - |
| **CPU time** | $0 | - |
| **Total** | $5.00 | |

#### Example 3

A Worker that runs on a [Cron Trigger](/workers/configuration/cron-triggers/) once an hour to collect data from multiple APIs, process the data and create a report.
- 720 requests/month
- 3 minutes (180,000 ms) of CPU time per request

In this scenario, the estimated monthly cost would be calculated as:

| | Monthly Costs | Formula |
| ---------------- | ------------- | -------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00 | |
| **Requests** | $0.00 | - |
| **CPU time** | $1.99 | ((180,000 ms of CPU time per request \* 720 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total** | $6.99 | |

#### Example 4

A high traffic Worker that serves 100 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:

| | Monthly Costs | Formula |
| ---------------- | ------------- | ---------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00 | |
| **Requests** | $27.00 | (100,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 |
| **CPU time** | $13.40 | ((7 ms of CPU time per request \* 100,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total** | $45.40 | |

:::note[Custom limits]
To prevent accidental runaway bills or denial-of-wallet attacks, configure the maximum amount of CPU time that can be used per invocation by [defining limits in your Worker's Wrangler file](/workers/wrangler/configuration/#limits), or via the Cloudflare dashboard (**Workers & Pages** > Select your Worker > **Settings** > **CPU Limits**).

If you had a Worker on the Bundled usage model prior to the migration to Standard pricing on March 1, 2024, Cloudflare automatically added a 50 ms CPU limit on your Worker.
:::

:::note
Some Workers Enterprise customers maintain the ability to change usage models. Usage models may be changed at the individual Worker level:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2.
In Account Home, select **Workers & Pages**. 3. In **Overview**, select your Worker > **Settings** > **Usage Model**. To change your default account-wide usage model: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, select **Workers & Pages**. 3. Find **Usage Model** on the right-side menu > **Change**. Existing Workers will not be impacted when changing the default usage model. You may change the usage model for individual Workers without affecting your account-wide default usage model. ::: ## Workers Logs <Render file="workers_logs_pricing" /> :::note[Workers Logs documentation] For more information and [examples of Workers Logs billing](/workers/observability/logs/workers-logs/#example-pricing), refer to the [Workers Logs documentation](/workers/observability/logs/workers-logs). ::: ## Workers Trace Events Logpush Workers Logpush is only available on the Workers Paid plan. | | Paid plan | | --------------------- | ---------------------------------- | | Requests <sup>1</sup> | 10 million / month, +$0.05/million | <sup>1</sup> Workers Logpush charges for request logs that reach your end destination after applying filtering or sampling. ## Workers KV <Render file="kv_pricing" /> :::note[KV documentation] To learn more about KV, refer to the [KV documentation](/kv/). ::: ## Queues <Render file="queues_pricing" /> :::note[Queues billing examples] To learn more about Queues pricing and review billing examples, refer to [Queues Pricing](/queues/platform/pricing/). ::: ## D1 D1 is available on both the [Workers Free](#workers) and [Workers Paid](#workers) plans. <Render file="d1-pricing" /> :::note[D1 billing] Refer to [D1 Pricing](/d1/platform/pricing/) to learn more about how D1 is billed. 
::: ## Durable Objects <Render file="durable_objects_pricing" /> :::note[Durable Objects billing examples] For more information and [examples of Durable Objects billing](/durable-objects/platform/pricing/#durable-objects-billing-examples), refer to [Durable Objects Pricing](/durable-objects/platform/pricing/). ::: ## Durable Objects Storage API <Render file="storage_api_pricing" /> ## Vectorize Vectorize is currently only available on the Workers paid plan. <Render file="vectorize-pricing" product="vectorize" /> ## Service bindings Requests made from your Worker to another worker via a [Service Binding](/workers/runtime-apis/bindings/service-bindings/) do not incur additional request fees. This allows you to split apart functionality into multiple Workers, without incurring additional costs. For example, if Worker A makes a subrequest to Worker B via a Service Binding, or calls an RPC method provided by Worker B via a Service Binding, this is billed as: - One request (for the initial invocation of Worker A) - The total amount of CPU time used across both Worker A and Worker B :::note[Only available on Workers Standard pricing] If your Worker is on the deprecated Bundled or Unbound pricing plans, incoming requests from Service Bindings are charged the same as requests from the Internet. In the example above, you would be charged for two requests, one to Worker A, and one to Worker B. ::: ## Fine Print Workers Paid plan is separate from any other Cloudflare plan (Free, Professional, Business) you may have. If you are an Enterprise customer, reach out to your account team to confirm pricing details. Only requests that hit a Worker will count against your limits and your bill. Since Cloudflare Workers runs before the Cloudflare cache, the caching of a request still incurs costs. Refer to [Limits](/workers/platform/limits/) to review definitions and behavior after a limit is hit. 
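As a quick sanity check on the pricing examples above, the Standard usage model math can be sketched as a small estimator. This is an unofficial illustration using the figures on this page ($5 subscription, 10 million included requests at $0.30 per additional million, 30 million included CPU milliseconds at $0.02 per additional million), not a Cloudflare API:

```ts
// Estimate a monthly Workers Standard bill from request volume and
// average CPU time per request. Figures mirror the tables above.
function estimateMonthlyCost(requests: number, cpuMsPerRequest: number): number {
  const subscription = 5.0;

  // Requests: 10 million included, then $0.30 per additional million.
  const billableRequests = Math.max(0, requests - 10_000_000);
  const requestCost = (billableRequests / 1_000_000) * 0.3;

  // CPU time: 30 million CPU-ms included, then $0.02 per additional million.
  const billableCpuMs = Math.max(0, requests * cpuMsPerRequest - 30_000_000);
  const cpuCost = (billableCpuMs / 1_000_000) * 0.02;

  return subscription + requestCost + cpuCost;
}

// Example 1 above: estimateMonthlyCost(15_000_000, 7)  → $8.00
// Example 4 above: estimateMonthlyCost(100_000_000, 7) → $45.40
```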
---

# Choose a data or storage product

URL: https://developers.cloudflare.com/workers/platform/storage-options/

import { Render } from "~/components";

Cloudflare Workers support a range of storage and database options for persisting different types of data across different use-cases, from key-value stores (like [Workers KV](/kv/)) through to SQL databases (such as [D1](/d1/)). This guide describes the use-cases suited to each storage option, as well as their performance and consistency properties.

:::note[Pages Functions]
Storage options can also be used by your front-end application built with Cloudflare Pages. For more information on available storage options for Pages applications, refer to the [Pages Functions bindings documentation](/pages/functions/bindings/).
:::

Available storage and persistence products include:

- [Workers KV](#workers-kv) for key-value storage.
- [R2](#r2) for object storage, including use-cases where S3 compatible storage is required.
- [Durable Objects](#durable-objects) for transactional, globally coordinated storage.
- [D1](#d1) as a relational, SQL-based database.
- [Queues](#queues) for job queueing, batching and inter-Service (Worker to Worker) communication.
- [Hyperdrive](/hyperdrive/) for connecting to and speeding up access to existing hosted and on-premises databases.
- [Analytics Engine](/analytics/analytics-engine/) for storing and querying (using SQL) time-series data and product metrics at scale.
- [Vectorize](/vectorize/) for vector search and storing embeddings from [Workers AI](/workers-ai/).

Applications built on the Workers platform may combine one or more storage components as they grow, scale or as requirements demand.
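As a sketch of how such a combination might look in practice, the following Worker-style handler reads per-tenant configuration from a KV-like binding and writes an upload to an R2-like bucket. The binding names (`CONFIG`, `UPLOADS`) and the narrowed interfaces are hypothetical simplifications for illustration, not the full Workers binding types:

```ts
// Simplified stand-ins for the KV and R2 binding surfaces used here.
interface KVLike {
  get(key: string): Promise<string | null>;
}
interface R2Like {
  put(key: string, value: string): Promise<void>;
}

// Combine two storage products in one request: KV for fast config
// reads, R2 for large object writes.
async function handleUpload(
  tenant: string,
  body: string,
  env: { CONFIG: KVLike; UPLOADS: R2Like },
): Promise<string> {
  // Look up the tenant's configured key prefix, falling back to a default.
  const prefix = (await env.CONFIG.get(`${tenant}:prefix`)) ?? "default";
  const key = `${prefix}/${tenant}/upload.txt`;
  await env.UPLOADS.put(key, body);
  return key;
}
```

In a real Worker, `env.CONFIG` and `env.UPLOADS` would be bindings declared in your Wrangler configuration rather than hand-written interfaces.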
## Choose a storage product

<Render file="storage-products-table" product="workers" />

## Performance and consistency

The following table highlights the performance and consistency characteristics of the primary storage offerings available to Cloudflare Workers:

<table-wrap>

| Feature                     | Workers KV                                       | R2                                    | Durable Objects                  | D1                                                  |
| --------------------------- | ------------------------------------------------ | ------------------------------------- | -------------------------------- | --------------------------------------------------- |
| Maximum storage per account | Unlimited<sup>1</sup>                            | Unlimited<sup>2</sup>                 | 50 GiB                           | 250 GiB<sup>3</sup>                                 |
| Storage grouping name       | Namespace                                        | Bucket                                | Durable Object                   | Database                                            |
| Maximum size per value      | 25 MiB                                           | 5 TiB per object                      | 128 KiB per value                | 10 GiB per database<sup>4</sup>                     |
| Consistency model           | Eventual: updates take up to 60s to be reflected | Strong (read-after-write)<sup>5</sup> | Serializable (with transactions) | Serializable (no replicas) / Causal (with replicas) |
| Supported APIs              | Workers, HTTP/REST API                           | Workers, S3 compatible                | Workers                          | Workers, HTTP/REST API                              |

</table-wrap>

<sup>1</sup> Free accounts are limited to 1 GiB of KV storage.

<sup>2</sup> Free accounts are limited to 10 GB of R2 storage.

<sup>3</sup> Free accounts are limited to 5 GiB of database storage.

<sup>4</sup> Free accounts are limited to 500 MiB per database.

<sup>5</sup> Refer to the [R2 documentation](/r2/reference/consistency/) for more details on R2's consistency model.

<Render file="limits_increase" />

## Workers KV

Workers KV is an eventually consistent key-value data store that caches on the Cloudflare global network.

It is ideal for projects that require:

- High volumes of reads and/or repeated reads to the same keys.
- Per-object time-to-live (TTL).
- Distributed configuration.

To get started with KV:

- Read how [KV works](/kv/concepts/how-kv-works/).
- Create a [KV namespace](/kv/concepts/kv-namespaces/).
- Review the [KV Runtime API](/kv/api/). - Learn about KV [Limits](/kv/platform/limits/). ## R2 R2 is S3-compatible blob storage that allows developers to store large amounts of unstructured data without egress fees associated with typical cloud storage services. It is ideal for projects that require: - Storage for files which are infrequently accessed. - Large object storage (for example, gigabytes or more per object). - Strong consistency per object. - Asset storage for websites (refer to [caching guide](/r2/buckets/public-buckets/#caching)) To get started with R2: - Read the [Get started guide](/r2/get-started/). - Learn about R2 [Limits](/r2/platform/limits/). - Review the [R2 Workers API](/r2/api/workers/workers-api-reference/). ## Durable Objects Durable Objects provide low-latency coordination and consistent storage for the Workers platform through global uniqueness and a transactional storage API. - Global Uniqueness guarantees that there will be a single instance of a Durable Object class with a given ID running at once, across the world. Requests for a Durable Object ID are routed by the Workers runtime to the Cloudflare data center that owns the Durable Object. - The transactional storage API provides strongly consistent key-value storage to the Durable Object. Each Object can only read and modify keys associated with that Object. Execution of a Durable Object is single-threaded, but multiple request events may still be processed out-of-order from how they arrived at the Object. It is ideal for projects that require: - Real-time collaboration (such as a chat application or a game server). - Consistent storage. - Data locality. To get started with Durable Objects: - Read the [introductory blog post](https://blog.cloudflare.com/introducing-workers-durable-objects/). - Review the [Durable Objects documentation](/durable-objects/). - Get started with [Durable Objects](/durable-objects/get-started/). 
- Learn about Durable Objects [Limits](/durable-objects/platform/limits/).

## D1

[D1](/d1/) is Cloudflare’s native serverless database. With D1, you can create a database by importing data or defining your tables and writing your queries within a Worker or through the API.

D1 is ideal for:

- Persistent, relational storage for user data, account data, and other structured datasets.
- Use-cases that require querying across your data ad-hoc (using SQL).
- Workloads with a high ratio of reads to writes (most web applications).

To get started with D1:

- Read [the documentation](/d1)
- Follow the [Get started guide](/d1/get-started/) to provision your first D1 database.
- Review the [D1 Workers Binding API](/d1/worker-api/).

:::note
If your working data size exceeds 10 GB (the maximum size for a D1 database), consider splitting the database into multiple, smaller D1 databases.
:::

## Queues

Cloudflare Queues allows developers to send and receive messages with guaranteed delivery. It integrates with [Cloudflare Workers](/workers), offers at-least-once delivery and message batching, and does not charge for egress bandwidth.

Queues is ideal for:

- Offloading work from a request to schedule later.
- Sending data from Worker to Worker (inter-Service communication).
- Buffering or batching data before writing to upstream systems, including third-party APIs or [Cloudflare R2](/queues/examples/send-errors-to-r2/).

To get started with Queues:

- [Set up your first queue](/queues/get-started/).
- Learn more [about how Queues works](/queues/reference/how-queues-works/).

## Hyperdrive

Hyperdrive is a service that accelerates queries you make to existing databases, making it faster to access your data from across the globe, irrespective of your users’ location.

Hyperdrive allows you to:

- Connect to an existing database from Workers without connection overhead.
- Cache frequent queries across Cloudflare's global network to reduce response times on highly trafficked content.
- Reduce load on your origin database with connection pooling.

To get started with Hyperdrive:

- [Connect Hyperdrive](/hyperdrive/get-started/) to your existing database.
- Learn more [about how Hyperdrive speeds up your database queries](/hyperdrive/configuration/how-hyperdrive-works/).

## Analytics Engine

Analytics Engine is Cloudflare's time-series and metrics database that allows you to write unlimited-cardinality analytics at scale, using a built-in API to write data points from Workers and query that data using SQL directly.

Analytics Engine allows you to:

- Expose custom analytics to your own customers
- Build usage-based billing systems
- Understand the health of your service on a per-customer or per-user basis
- Add instrumentation to frequently called code paths, without impacting performance or overwhelming external analytics systems with events

Cloudflare uses Analytics Engine internally to store and produce per-product metrics for products like D1 and R2 at scale.

To get started with Analytics Engine:

- Learn how to [get started with Analytics Engine](/analytics/analytics-engine/get-started/)
- See [an example of writing time-series data to Analytics Engine](/analytics/analytics-engine/recipes/usage-based-billing-for-your-saas-product/)
- Understand the [SQL API](/analytics/analytics-engine/sql-api/) for reading data from your Analytics Engine datasets

## Vectorize

Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers and [Workers AI](/workers-ai/).

Vectorize allows you to:

- Store embeddings from any vector embeddings model (Bring Your Own embeddings) for semantic search and classification tasks.
- Add context to Large Language Model (LLM) queries by using vector search as part of a [Retrieval Augmented Generation](/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/) (RAG) workflow.
- [Filter on vector metadata](/vectorize/reference/metadata-filtering/) to reduce the search space and return more relevant results. To get started with Vectorize: - [Create your first vector database](/vectorize/get-started/intro/). - Combine [Workers AI and Vectorize](/vectorize/get-started/embeddings/) to generate, store and query text embeddings. - Learn more about [how vector databases work](/vectorize/reference/what-is-a-vector-database/). <Render file="durable-objects-vs-d1" product="durable-objects" /> :::note[SQLite in Durable Objects Beta] The new beta version of Durable Objects is available where each Durable Object has a private, embedded SQLite database. When deploying a new Durable Object class, users can opt-in to using SQL storage in order to access [Storage SQL API methods](/durable-objects/api/sql-storage/#exec). Otherwise, a Durable Object class has the standard, private key-value storage. ::: <Render file="kv-vs-d1" product="kv" /> ## D1 vs Hyperdrive D1 is a standalone, serverless database that provides a SQL API, using SQLite's SQL semantics, to store and access your relational data. Hyperdrive is a service that lets you connect to your existing, regional PostgreSQL databases and improves database performance by optimizing them for global, scalable data access from Workers. - If you are building a new project on Workers or are considering migrating your data, use D1. - If you are building a Workers project with an existing PostgreSQL database, use Hyperdrive. :::note You cannot use D1 with Hyperdrive. However, D1 does not need to be used with Hyperdrive because it does not have slow connection setups which would benefit from Hyperdrive's connection pooling. D1 data can also be cached within Workers using the [Cache API](/workers/runtime-apis/cache/). 
:::

---

# Workers for Platforms

URL: https://developers.cloudflare.com/workers/platform/workers-for-platforms/

Deploy custom code on behalf of your users, or let your users directly deploy their own code to your platform, without having to manage the underlying infrastructure.

---

# How the Cache works

URL: https://developers.cloudflare.com/workers/reference/how-the-cache-works/

Workers was designed and built on top of Cloudflare's global network to allow developers to interact directly with the Cloudflare cache. The cache can provide ephemeral, data center-local storage, as a convenient way to frequently access static or dynamic content.

By allowing developers to write to the cache, Workers provide a way to customize cache behavior on Cloudflare’s CDN. To learn about the benefits of caching, refer to the Learning Center’s article on [What is Caching?](https://www.cloudflare.com/learning/cdn/what-is-caching/).

Cloudflare Workers run before the cache but can also be utilized to modify assets once they are returned from the cache. Modifying assets returned from cache makes it possible to sign or personalize responses, while also reducing load on an origin and reducing latency to the end user by serving assets from a nearby location.

## Interact with the Cloudflare Cache

Conceptually, there are two ways to interact with Cloudflare’s Cache using a Worker:

- Call to [`fetch()`](/workers/runtime-apis/fetch/) in a Workers script. Requests proxied through Cloudflare are cached even without Workers according to a zone’s default or configured behavior (for example, static assets like files ending in `.jpg` are cached by default). Workers can further customize this behavior by:
  - Setting Cloudflare cache rules (that is, operating on the `cf` object of a [request](/workers/runtime-apis/request/)).
- Store responses using the [Cache API](/workers/runtime-apis/cache/) from a Workers script.
This allows caching responses that did not come from an origin and also provides finer control by: - Customizing cache behavior of any asset by setting headers such as `Cache-Control` on the response passed to `cache.put()`. - Caching responses generated by the Worker itself through `cache.put()`. :::caution[Tiered caching] The Cache API is not compatible with tiered caching. To take advantage of tiered caching, use the [fetch API](/workers/runtime-apis/fetch/). ::: ### Single file purge assets cached by a worker When using single-file purge to purge assets cached by a Worker, make sure not to purge the end user URL. Instead, purge the URL that is in the `fetch` request. For example, you have a Worker that runs on `https://example.com/hello` and this Worker makes a `fetch` request to `https://notexample.com/hello`. As far as cache is concerned, the asset in the `fetch` request (`https://notexample.com/hello`) is the asset that is cached. To purge it, you need to purge `https://notexample.com/hello`. Purging the end user URL, `https://example.com/hello`, will not work because that is not the URL that cache sees. You need to confirm in your Worker which URL you are actually fetching, so you can purge the correct asset. In the previous example, `https://notexample.com/hello` is not proxied through Cloudflare. If `https://notexample.com/hello` was proxied ([orange-clouded](/dns/proxy-status/)) through Cloudflare, then you must own `notexample.com` and purge `https://notexample.com/hello` from the `notexample.com` zone. To better understand the example, review the following diagram: ```mermaid flowchart TD accTitle: Single file purge assets cached by a worker accDescr: This diagram is meant to help choose how to purge a file. 
A("You have a Worker script that runs on <code>https://</code><code>example.com/hello</code> <br> and this Worker makes a <code>fetch</code> request to <code>https://</code><code>notexample.com/hello</code>.") --> B(Is <code>notexample.com</code> <br> an active zone on Cloudflare?) B -- Yes --> C(Is <code>https://</code><code>notexample.com/</code> <br> proxied through Cloudflare?) B -- No --> D(Purge <code>https://</code><code>notexample.com/hello</code> <br> from the original <code>example.com</code> zone.) C -- Yes --> E(Do you own <br> <code>notexample.com</code>?) C -- No --> F(Purge <code>https://</code><code>notexample.com/hello</code> <br> from the original <code>example.com</code> zone.) E -- Yes --> G(Purge <code>https://</code><code>notexample.com/hello</code> <br> from the <code>notexample.com</code> zone.) E -- No --> H(Sorry, you can not purge the asset. <br> Only the owner of <code>notexample.com</code> can purge it.) ``` ### Purge assets stored with the Cache API Assets stored in the cache through [Cache API](/workers/runtime-apis/cache/) operations can be purged in a couple of ways: - Call `cache.delete` within a Worker to invalidate the cache for the asset with a matching request variable. - Assets purged in this way are only purged locally to the data center the Worker runtime was executed. - To purge an asset globally, you must use the standard cache purge options. Based on cache API implementation, not all cache purge endpoints function for purging assets stored by the Cache API. - All assets on a zone can be purged by using the [Purge Everything](/cache/how-to/purge-cache/purge-everything/) cache operation. This purge will remove all assets associated with a Cloudflare zone from cache in all data centers regardless of the method set. 
- Available to Enterprise Customers, [Cache Tags](/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers) can be added to requests dynamically in a Worker by calling `response.headers.append()` and appending `Cache-Tag` values dynamically to that request. Once set, those tags can be used to selectively purge assets from cache without invalidating all cached assets on a zone. - Currently, it is not possible to purge a URL stored through Cache API that uses a custom cache key set by a Worker. Instead, use a [custom key created via Cache Rules](/cache/how-to/cache-rules/settings/#cache-key). Alternatively, purge your assets using purge everything, purge by tag, purge by host or purge by prefix. ## Edge versus browser caching The browser cache is controlled through the `Cache-Control` header sent in the response to the client (the `Response` instance return from the handler). Workers can customize browser cache behavior by setting this header on the response. Other means to control Cloudflare’s cache that are not mentioned in this documentation include: Page Rules and Cloudflare cache settings. Refer to the [How to customize Cloudflare’s cache](/cache/concepts/customize-cache/) if you wish to avoid writing JavaScript with still some granularity of control. :::note[What should I use: the Cache API or fetch for caching objects on Cloudflare?] For requests where Workers are behaving as middleware (that is, Workers are sending a subrequest via `fetch`) it is recommended to use `fetch`. This is because preexisting settings are in place that optimize caching while preventing unintended dynamic caching. For projects where there is no backend (that is, the entire project is on Workers as in [Workers Sites](/workers/configuration/sites/start-from-scratch)) the Cache API is the only option to customize caching. The asset will be cached under the hostname specified within the Worker's subrequest — not the Worker's own hostname. 
Therefore, in order to purge the cached asset, the purge will have to be performed for the hostname included in the Worker subrequest.

:::

### `fetch`

In the context of Workers, a [`fetch`](/workers/runtime-apis/fetch/) provided by the runtime communicates with the Cloudflare cache. First, `fetch` checks to see if the URL matches a different zone. If it does, it reads through that zone’s cache (or Worker). Otherwise, it reads through its own zone’s cache, even if the URL is for a non-Cloudflare site. Cache settings on `fetch` automatically apply caching rules based on your Cloudflare settings. `fetch` does not allow you to modify or inspect objects before they reach the cache, but it does allow you to modify how it will cache.

When a response fills the cache, the response header contains `CF-Cache-Status: HIT`. You can tell an object is attempting to cache if the `CF-Cache-Status` header is present at all.

This [template](/workers/examples/cache-using-fetch/) shows ways to customize Cloudflare cache behavior on a given request using `fetch`.

### Cache API

The [Cache API](/workers/runtime-apis/cache/) can be thought of as an ephemeral key-value store, whereby the `Request` object (or more specifically, the request URL) is the key, and the `Response` is the value.

There are two types of cache namespaces available to the Cloudflare Cache:

- **`caches.default`** – You can access the default cache (the same cache shared with `fetch` requests) by accessing `caches.default`. This is useful when needing to override content that is already cached, after receiving the response.
- **`caches.open()`** – You can access a namespaced cache (separate from the cache shared with `fetch` requests) using `let cache = await caches.open(CACHE_NAME)`. Note that [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) is an async function, unlike `caches.default`.

When to use the Cache API:

- When you want to programmatically save and/or delete responses from a cache.
For example, say an origin is responding with a `Cache-Control: max-age=0` header and cannot be changed. Instead, you can clone the `Response`, adjust the header to the `max-age=3600` value, and then use the Cache API to save the modified `Response` for an hour.

- When you want to programmatically access a `Response` from a cache without relying on a `fetch` request. For example, you can check to see if you have already cached a `Response` for the `https://example.com/slow-response` endpoint. If so, you can avoid the slow request.

This [template](/workers/examples/cache-api/) shows ways to use the Cache API. For limits of the Cache API, refer to [Limits](/workers/platform/limits/#cache-api-limits).

:::caution[Tiered caching and the Cache API]

Cache API within Workers does not support tiered caching. Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. The Cache API is local to a data center: `cache.match` does a lookup, `cache.put` stores a response, and `cache.delete` removes a stored response only in the cache of the data center that the Worker handling the request is in. Because these methods apply only to local cache, they will not work with tiered cache.

:::

## Related resources

- [Cache API](/workers/runtime-apis/cache/)

---

# How Workers works

URL: https://developers.cloudflare.com/workers/reference/how-workers-works/

import { Render, NetworkMap, WorkersIsolateDiagram } from "~/components"

Though Cloudflare Workers behave similarly to [JavaScript](https://www.cloudflare.com/learning/serverless/serverless-javascript/) in the browser or in Node.js, there are a few differences in how you have to think about your code. Under the hood, the Workers runtime uses the [V8 engine](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/) — the same engine used by Chromium and Node.js.
The Workers runtime also implements many of the standard [APIs](/workers/runtime-apis/) available in most modern browsers.

The differences between JavaScript written for the browser or Node.js happen at runtime. Rather than running on an individual's machine (for example, [a browser application or on a centralized server](https://www.cloudflare.com/learning/serverless/glossary/client-side-vs-server-side/)), Workers functions run on [Cloudflare's global network](https://www.cloudflare.com/network) - a growing global network of thousands of machines distributed across hundreds of locations.

<NetworkMap/>

Each of these machines hosts an instance of the Workers runtime, and each of those runtimes is capable of running thousands of user-defined applications. This guide will review some of those differences. For more information, refer to the [Cloud Computing without Containers blog post](https://blog.cloudflare.com/cloud-computing-without-containers).

The three largest differences are: Isolates, Compute per Request, and Distributed Execution.

## Isolates

[V8](https://v8.dev) orchestrates isolates: lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. You could even consider an isolate a sandbox for your function to run in.

<Render file="isolate-description" />

<WorkersIsolateDiagram/>

A given isolate has its own scope, but isolates are not necessarily long-lived. An isolate may be spun down and evicted for a number of reasons:

* Resource limitations on the machine.
* A suspicious script - anything seen as trying to break out of the isolate sandbox.
* Individual [resource limits](/workers/platform/limits/).

Because of this, it is generally advised that you not store mutable state in your global scope unless you have accounted for this contingency.
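To make this hazard concrete, here is a minimal sketch (the counter and response text are illustrative, not from the original docs): a Worker that keeps a request counter in global scope will see that counter silently reset whenever its isolate is evicted, and each isolate keeps its own independent count.

```js
// Illustrative only: a per-isolate counter kept in global scope.
// This value is NOT shared between isolates and may reset at any time
// when the isolate is evicted, so it must not be treated as durable
// state. Use KV, Durable Objects, or D1 for state that must survive.
let requestCount = 0;

const worker = {
	async fetch(request) {
		requestCount++; // counts only requests handled by this isolate instance
		return new Response(`Seen ${requestCount} request(s) in this isolate`);
	},
};

export default worker;
```

Two consecutive requests served by the same isolate will observe counts of 1 and 2; a request routed to a different isolate, or one arriving after an eviction, starts again at 1.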
If you are interested in how Cloudflare handles security with the Workers runtime, you can [read more about how Isolates relate to Security and Spectre Threat Mitigation](/workers/reference/security-model/).

## Compute per request

<Render file="compute-per-request" />

## Distributed execution

Isolates are resilient and continuously available for the duration of a request, but in rare instances isolates may be evicted. When a Worker hits official [limits](/workers/platform/limits/) or when resources are exceptionally tight on the machine the request is running on, the runtime will selectively evict isolates after their events are properly resolved.

Like all other JavaScript platforms, a single Workers instance may handle multiple requests, including concurrent requests, in a single-threaded event loop. That means that other requests may (or may not) be processed while a request is awaiting any `async` tasks (such as `fetch`). Because there is no guarantee that any two user requests will be routed to the same or a different instance of your Worker, Cloudflare recommends you do not use or mutate global state.

## Related resources

* [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) - Review how incoming HTTP requests to a Worker are passed to the `fetch()` handler.
* [Request](/workers/runtime-apis/request/) - Learn about the incoming HTTP `Request` object passed to the `fetch()` handler.
* [Workers limits](/workers/platform/limits/) - Learn about Workers limits including Worker size, startup time, and more.

---

# Migrate from Service Workers to ES Modules

URL: https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/

import { WranglerConfig } from "~/components";

This guide will show you how to migrate your Workers from the [Service Worker](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API) format to the [ES modules](https://blog.cloudflare.com/workers-javascript-modules/) format.
## Advantages of migrating

There are several reasons to migrate your Workers to the ES modules format:

1. [Durable Objects](/durable-objects/), [D1](/d1/), [Workers AI](/workers-ai/), [Vectorize](/vectorize/) and other bindings can only be used from Workers that use ES modules.
2. Your Worker will run faster. With service workers, bindings are exposed as globals. This means that for every request, the Workers runtime must create a new JavaScript execution context, which adds overhead and time. Workers written using ES modules can reuse the same execution context across multiple requests.
3. You can [gradually deploy changes to your Worker](/workers/configuration/versions-and-deployments/gradual-deployments/) when you use the ES modules format.
4. You can easily publish Workers using ES modules to `npm`, allowing you to import and reuse Workers within your codebase.

## Migrate a Worker

The following example demonstrates a Worker that redirects all incoming requests to a URL with a `301` status code.

With the Service Worker syntax, the example Worker looks like:

```js
async function handler(request) {
	const base = 'https://example.com';
	const statusCode = 301;
	const source = new URL(request.url);
	const destination = new URL(source.pathname, base);
	return Response.redirect(destination.toString(), statusCode);
}

// Initialize Worker
addEventListener('fetch', event => {
	event.respondWith(handler(event.request));
});
```

Workers using ES modules format replace the `addEventListener` syntax with an object definition, which must be the file's default export (via `export default`). The previous example code becomes:

```js
export default {
	fetch(request) {
		const base = "https://example.com";
		const statusCode = 301;
		const source = new URL(request.url);
		const destination = new URL(source.pathname, base);
		return Response.redirect(destination.toString(), statusCode);
	},
};
```

## Bindings

[Bindings](/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare developer platform.
Workers using ES modules format do not rely on any global bindings. However, Service Worker syntax accesses bindings on the global scope.

To understand bindings, refer to the following `TODO` KV namespace binding example. To create a `TODO` KV namespace binding, you will:

1. Create a KV namespace named `My Tasks` and receive an ID that you will use in your binding.
2. Create a Worker.
3. Find your Worker's [Wrangler configuration file](/workers/wrangler/configuration/) and add a KV namespace binding:

<WranglerConfig>

```toml
kv_namespaces = [
  { binding = "TODO", id = "<ID>" }
]
```

</WranglerConfig>

In the following sections, you will use your binding in Service Worker and ES modules format.

:::note[Reference KV from Durable Objects and Workers]

To learn more about how to reference KV from Workers, refer to the [KV bindings documentation](/kv/concepts/kv-bindings/).

:::

### Bindings in Service Worker format

In Service Worker syntax, your `TODO` KV namespace binding is defined in the global scope of your Worker. Your `TODO` KV namespace binding is available to use anywhere in your Worker application's code.

```js
addEventListener("fetch", (event) => {
	event.respondWith(getTodos());
});

async function getTodos() {
	// Get the value for the "to-do:123" key
	// NOTE: Relies on the TODO KV binding that maps to the "My Tasks" namespace.
	let value = await TODO.get("to-do:123");

	// Return the value, as is, in the Response
	return new Response(value);
}
```

### Bindings in ES modules format

In ES modules format, bindings are only available inside the `env` parameter that is provided at the entry point to your Worker.

To access the `TODO` KV namespace binding in your Worker code, the `env` parameter must be passed from the `fetch` handler in your Worker to the `getTodos` function.
```js import { getTodos } from './todos' export default { async fetch(request, env, ctx) { // Passing the env parameter so other functions // can reference the bindings available in the Workers application return await getTodos(env) }, }; ``` The following code represents a `getTodos` function that calls the `get` function on the `TODO` KV binding. ```js async function getTodos(env) { // NOTE: Relies on the TODO KV binding which has been provided inside of // the env parameter of the `getTodos` function let value = await env.TODO.get("to-do:123"); return new Response(value); } export { getTodos } ``` ## Environment variables [Environment variables](/workers/configuration/environment-variables/) are accessed differently in code written in ES modules format versus Service Worker format. Review the following example environment variable configuration in the [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml name = "my-worker-dev" # Define top-level environment variables # under the `[vars]` block using # the `key = "value"` format [vars] API_ACCOUNT_ID = "<EXAMPLE-ACCOUNT-ID>" ``` </WranglerConfig> ### Environment variables in Service Worker format In Service Worker format, the `API_ACCOUNT_ID` is defined in the global scope of your Worker application. Your `API_ACCOUNT_ID` environment variable is available to use anywhere in your Worker application's code. ```js addEventListener("fetch", async (event) => { console.log(API_ACCOUNT_ID) // Logs "<EXAMPLE-ACCOUNT-ID>" return new Response("Hello, world!") }) ``` ### Environment variables in ES modules format In ES modules format, environment variables are only available inside the `env` parameter that is provided at the entrypoint to your Worker application. 
```js
export default {
	async fetch(request, env, ctx) {
		console.log(env.API_ACCOUNT_ID) // Logs "<EXAMPLE-ACCOUNT-ID>"
		return new Response("Hello, world!")
	},
};
```

## Cron Triggers

To handle a [Cron Trigger](/workers/configuration/cron-triggers/) event in a Worker written with ES modules syntax, implement a [`scheduled()` event handler](/workers/runtime-apis/handlers/scheduled/#syntax), which is the equivalent of listening for a `scheduled` event in Service Worker syntax.

This example code:

```js
addEventListener("scheduled", (event) => {
	// ...
});
```

Then becomes:

```js
export default {
	async scheduled(event, env, ctx) {
		// ...
	},
};
```

## Access `event` or `context` data

Workers often need access to data not in the `request` object. For example, sometimes Workers use [`waitUntil`](/workers/runtime-apis/context/#waituntil) to extend the lifetime of an event and perform work after the response has been sent. Workers using ES modules format can access `waitUntil` via the `context` parameter. Refer to [ES modules parameters](/workers/runtime-apis/handlers/fetch/#parameters) for more information.

This example code:

```js
async function triggerEvent(event) {
	// Fetch some data
	console.log('cron processed', event.scheduledTime);
}

// Initialize Worker
addEventListener('scheduled', event => {
	event.waitUntil(triggerEvent(event));
});
```

Then becomes:

```js
async function triggerEvent(event) {
	// Fetch some data
	console.log('cron processed', event.scheduledTime);
}

export default {
	async scheduled(event, env, ctx) {
		ctx.waitUntil(triggerEvent(event));
	},
};
```

## Service Worker syntax

A Worker written in Service Worker syntax consists of two parts:

1. An event listener that listens for `FetchEvents`.
2. An event handler that returns a [Response](/workers/runtime-apis/response/) object which is passed to the event’s `.respondWith()` method.

When a request is received on one of Cloudflare’s global network servers for a URL matching a Worker, Cloudflare's server passes the request to the Workers runtime.
This dispatches a `FetchEvent` in the [isolate](/workers/reference/how-workers-works/#isolates) where the Worker is running.

```js
addEventListener('fetch', event => {
	event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
	return new Response('Hello worker!', {
		headers: { 'content-type': 'text/plain' },
	});
}
```

Below is an example of the request response workflow:

1. An event listener for the `FetchEvent` tells the script to listen for any request coming to your Worker. The event handler is passed the `event` object, which includes `event.request`, a [`Request`](/workers/runtime-apis/request/) object which is a representation of the HTTP request that triggered the `FetchEvent`.
2. The call to `.respondWith()` lets the Workers runtime intercept the request in order to send back a custom response (in this example, the plain text `'Hello worker!'`).
   * The `FetchEvent` handler typically culminates in a call to the method `.respondWith()` with either a [`Response`](/workers/runtime-apis/response/) or `Promise<Response>` that determines the response.
   * The `FetchEvent` object also provides [two other methods](/workers/runtime-apis/handlers/fetch/) to handle unexpected exceptions and operations that may complete after a response is returned. Learn more about [the lifecycle methods of the `fetch()` handler](/workers/runtime-apis/rpc/lifecycle/).

### Supported `FetchEvent` properties

* `event.type` string
  * The type of event. This will always return `"fetch"`.
* `event.request` Request
  * The incoming HTTP request.
* `event.respondWith(response)` : void, where `response` is a `Response` or `Promise<Response>`
  * Refer to [`respondWith`](#respondwith).
* `event.waitUntil(promise)` : void
  * Refer to [`waitUntil`](#waituntil).
* `event.passThroughOnException()` : void
  * Refer to [`passThroughOnException`](#passthroughonexception).
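To see how these properties fit together, the sketch below exercises them against a minimal mock of a `FetchEvent`-like object (the mock is purely illustrative; in the Workers runtime, the platform constructs the real event for you):

```js
// Illustrative mock of a FetchEvent-like object. The real event is
// created by the Workers runtime, never by user code.
function makeFetchEvent(request) {
	const pending = [];
	return {
		type: 'fetch',       // always "fetch" for this listener
		request,             // the incoming Request
		response: null,      // captured by respondWith below
		respondWith(r) { this.response = r; },
		waitUntil(promise) { pending.push(promise); },
		passThroughOnException() { /* no-op in this mock */ },
		pending,
	};
}

// The handler shape you would register with addEventListener('fetch', ...)
function handleFetch(event) {
	// Fail open to the origin on uncaught exceptions
	event.passThroughOnException();

	// Send the response, and keep the event alive for background work
	event.respondWith(new Response(`You requested ${event.request.url}`));
	event.waitUntil(Promise.resolve('background work'));
}
```

In a real Worker, the body of `handleFetch` would be the callback passed to `addEventListener('fetch', ...)`; the mock only exists so the calls can be traced outside the runtime.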
### `respondWith`

Intercepts the request and allows the Worker to send a custom response.

If a `fetch` event handler does not call `respondWith`, the runtime delivers the event to the next registered `fetch` event handler. In other words, while not recommended, this means it is possible to add multiple `fetch` event handlers within a Worker. If no `fetch` event handler calls `respondWith`, then the runtime forwards the request to the origin as if the Worker did not exist. However, if there is no origin – or the Worker itself is your origin server, which is always true for `*.workers.dev` domains – then you must call `respondWith` for a valid response.

```js
// Format: Service Worker
addEventListener('fetch', event => {
	let { pathname } = new URL(event.request.url);

	// Allow "/ignore/*" URLs to hit origin
	if (pathname.startsWith('/ignore/')) return;

	// Otherwise, respond with something
	event.respondWith(handler(event));
});
```

### `waitUntil`

The `waitUntil` command extends the lifetime of the `"fetch"` event. It accepts a `Promise`-based task which the Workers runtime will execute before the handler terminates but without blocking the response. For example, this is ideal for [caching responses](/workers/runtime-apis/cache/#put) or handling logging.

With the Service Worker format, `waitUntil` is available within the `event` because it is a native `FetchEvent` property. With the ES modules format, `waitUntil` is moved and available on the `context` parameter object.
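In ES modules format, the same caching pattern looks like the following sketch (the handler mirrors the Service Worker example in this section; the `x-foo` header is illustrative):

```js
// Format: ES Modules (sketch). waitUntil is taken from the ctx
// parameter instead of the event.
const worker = {
	async fetch(request, env, ctx) {
		// Forward / proxy the original request
		let res = await fetch(request);

		// Add custom header(s); rewrap the Response so headers are mutable
		res = new Response(res.body, res);
		res.headers.set('x-foo', 'bar');

		// Cache the response
		// NOTE: Does NOT block / wait
		ctx.waitUntil(caches.default.put(request, res.clone()));

		// Done
		return res;
	},
};

export default worker;
```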
```js // Format: Service Worker addEventListener('fetch', event => { event.respondWith(handler(event)); }); async function handler(event) { // Forward / Proxy original request let res = await fetch(event.request); // Add custom header(s) res = new Response(res.body, res); res.headers.set('x-foo', 'bar'); // Cache the response // NOTE: Does NOT block / wait event.waitUntil(caches.default.put(event.request, res.clone())); // Done return res; } ``` ### `passThroughOnException` The `passThroughOnException` method prevents a runtime error response when the Worker throws an unhandled exception. Instead, the script will [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), which will proxy the request to the origin server as though the Worker was never invoked. To prevent JavaScript errors from causing entire requests to fail on uncaught exceptions, `passThroughOnException()` causes the Workers runtime to yield control to the origin server. With the Service Worker format, `passThroughOnException` is added to the `FetchEvent` interface, making it available within the `event`. With the ES modules format, `passThroughOnException` is available on the `context` parameter object. ```js // Format: Service Worker addEventListener('fetch', event => { // Proxy to origin on unhandled/uncaught exceptions event.passThroughOnException(); throw new Error('Oops'); }); ``` --- # Reference URL: https://developers.cloudflare.com/workers/reference/ import { DirectoryListing } from "~/components"; Conceptual knowledge about how Workers works. 
<DirectoryListing />

---

# Protocols

URL: https://developers.cloudflare.com/workers/reference/protocols/

Cloudflare Workers support the following protocols and interfaces:

| Protocol | Inbound | Outbound |
| --- | --- | --- |
| **HTTP / HTTPS** | Handle incoming HTTP requests using the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) | Make HTTP subrequests using the [`fetch()` API](/workers/runtime-apis/fetch/) |
| **Direct TCP sockets** | Support for handling inbound TCP connections is [coming soon](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/) | Create outbound TCP connections using the [`connect()` API](/workers/runtime-apis/tcp-sockets/) |
| **WebSockets** | Accept incoming WebSocket connections using the [`WebSocket` API](/workers/runtime-apis/websockets/), or with [MQTT over WebSockets (Pub/Sub)](/pub-sub/learning/websockets-browsers/) | [MQTT over WebSockets (Pub/Sub)](/pub-sub/learning/websockets-browsers/) |
| **MQTT** | Handle incoming messages to an MQTT broker with [Pub/Sub](/pub-sub/learning/integrate-workers/) | Support for publishing MQTT messages to an MQTT topic is [coming soon](/pub-sub/learning/integrate-workers/) |
| **HTTP/3 (QUIC)** | Accept inbound requests over [HTTP/3](https://www.cloudflare.com/learning/performance/what-is-http3/) by enabling it on your [zone](/fundamentals/setup/accounts-and-zones/#zones) in the **Speed** > **Optimization** > **Protocol Optimization** area of the [Cloudflare dashboard](https://dash.cloudflare.com/). | |
| **SMTP** | Use [Email Workers](/email-routing/email-workers/) to process and forward email, without having to manage TCP connections to SMTP email servers | [Email Workers](/email-routing/email-workers/) |

---

# Security model

URL: https://developers.cloudflare.com/workers/reference/security-model/

import { WorkersArchitectureDiagram } from "~/components"

This article includes an overview of Cloudflare's security architecture, and then addresses two frequently asked-about issues: V8 bugs and Spectre.

Since the very start of the Workers project, security has been a high priority — there was a concern early on that, when hosting a large number of tenants on shared infrastructure, side channels of various kinds would pose a threat. The Cloudflare Workers runtime is carefully designed to defend against side-channel attacks. To this end, Workers is designed to make it impossible for code to measure its own execution time locally. For example, the value returned by `Date.now()` is locked in place while code is executing. No other timers are provided. Moreover, Cloudflare provides no access to concurrency (for example, multi-threading), as it could allow attackers to construct ad hoc timers. These design choices cannot be introduced retroactively into other platforms — such as web browsers — because they remove APIs that existing applications depend on. They were possible in Workers only because of runtime design choices from the start.

While these early design decisions have proven effective, Cloudflare is continuing to add defense-in-depth, including techniques to disrupt attacks by rescheduling Workers to create additional layers of isolation between suspicious Workers and high-value Workers.

The Workers approach is very different from the approach taken by most of the industry.
It is resistant to the entire range of [Spectre-style attacks](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/), without requiring special attention paid to each one and without needing to block speculation in general. However, because the Workers approach is different, it requires careful study. Cloudflare is currently working with researchers at Graz University of Technology (TU Graz) to study what has been done. These researchers include some of the people who originally discovered Spectre. Cloudflare will publish the results of this research as they become available.

For more details, refer to [this talk](https://www.infoq.com/presentations/cloudflare-v8/) by Kenton Varda, architect of Cloudflare Workers. Spectre is covered near the end.

## Architectural overview

Beginning with a quick overview of the Workers runtime architecture:

<WorkersArchitectureDiagram/>

There are two fundamental parts of designing a code sandbox: secure isolation and API design.

### Isolation

First, a secure execution environment needed to be created wherein code cannot access anything it is not supposed to. For this, the primary tool is V8, the JavaScript engine developed by Google for use in Chrome.

V8 executes code inside isolates, which prevent that code from accessing memory outside the isolate — even within the same process. Importantly, this means Cloudflare can run many isolates within a single process. This is essential for an edge compute platform like Workers, where Cloudflare must host many thousands of guest applications on every machine and rapidly switch between these guests thousands of times per second with minimal overhead. If Cloudflare had to run a separate process for every guest, the number of tenants Cloudflare could support would be drastically reduced, and Cloudflare would have to limit edge compute to a small number of big Enterprise customers. With isolate technology, Cloudflare can make edge compute available to everyone.
Sometimes, though, Cloudflare does decide to schedule a Worker in its own private process. Cloudflare does this if the Worker uses certain features that need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their Worker, Cloudflare runs that Worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser’s trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, Cloudflare moves inspected Workers into a separate process with a process-level sandbox. Cloudflare also uses process isolation as an extra defense against Spectre.

Additionally, even for isolates that run in a shared process with other isolates, Cloudflare runs multiple instances of the whole runtime on each machine, called cordons. Workers are distributed among cordons by assigning each Worker a level of trust and separating low-trusted Workers from those trusted more highly. As one example of this in operation: a customer who signs up for the Free plan will not be scheduled in the same process as an Enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8.

At the whole-process level, Cloudflare applies another layer of sandboxing for defense in depth. The layer 2 sandbox uses Linux namespaces and `seccomp` to prohibit all access to the filesystem and network. Namespaces and `seccomp` are commonly used to implement containers. However, Cloudflare's use of these technologies is much stricter than what is usually possible in container engines, because Cloudflare configures namespaces and `seccomp` after the process has started but before any isolates have been loaded.
This means, for example, Cloudflare can (and does) use a totally empty filesystem (mount namespace) and uses `seccomp` to block absolutely all filesystem-related system calls. Container engines cannot normally prohibit all filesystem access because doing so would make it impossible to use `exec()` to start the guest program from disk. In the Workers case, Cloudflare's guest programs are not native binaries and the Workers runtime itself has already finished loading before Cloudflare blocks filesystem access. The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local UNIX domain sockets to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox. One such process in particular, which is called the supervisor, is responsible for fetching Worker code and configuration from disk or from other internal services. The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the Workers that it should be running. For example, when the sandbox process receives a request for a Worker it has not seen before, that request includes the encryption key for that Worker’s code, including attached secrets. The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any Worker for which it has not received the appropriate key. It cannot enumerate known Workers. It also cannot request configuration it does not need; for example, it cannot request the TLS key used for HTTPS traffic to the Worker. Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers. ### API design There is a saying: If a tree falls in the forest, but no one is there to hear it, does it make a sound? 
A Cloudflare saying: If a Worker executes in a fully-isolated environment in which it is totally prevented from communicating with the outside world, does it actually run? Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. For Workers to send requests to the world safely, APIs are needed. In the context of sandboxing, API design takes on a new level of responsibility. Cloudflare APIs define exactly what a Worker can and cannot do. Cloudflare must be very careful to design each API so that it can only express allowed operations and no more. For example, Cloudflare wants to allow Workers to make and receive HTTP requests, while not allowing them to be able to access the local filesystem or internal network services. Currently, Workers does not allow any access to the local filesystem. Therefore, Cloudflare does not expose a filesystem API at all. No API means no access. But, imagine if Workers did want to support local filesystem access in the future. How can that be done? Workers should not see the whole filesystem. Imagine, though, if each Worker had its own private directory on the filesystem where it can store whatever it wants. To do this, Workers would use a design based on [capability-based security](https://en.wikipedia.org/wiki/Capability-based_security). Capabilities are a big topic, but in this case, what it would mean is that Cloudflare would give the Worker an object of type `Directory`, representing a directory on the filesystem. This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing up the parent directory. Effectively, each Worker would see its private `Directory` as if it were the root of their own filesystem. How would such an API be implemented? 
As described above, the sandbox process cannot access the real filesystem. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using [Cap’n Proto RPC](https://capnproto.org/rpc.html), a capability-based RPC protocol. (Cap’n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that Cloudflare can strictly limit the sandbox to accessing only the files that belong to the Workers it is running.

Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP — both incoming and outgoing. There is no API for other forms of network access; therefore, it is prohibited, although Cloudflare plans to support other protocols in the future.

As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a UNIX domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service or to the Worker’s zone’s own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the Worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to the Cloudflare network's HTTP caching layer and then out to the Internet.

Similarly, inbound HTTP requests do not go directly to the Workers runtime. They are first received by an inbound proxy service. That service is responsible for TLS termination (the Workers runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a UNIX domain socket to the sandbox process.
## V8 bugs and the patch gap

Every non-trivial piece of software has bugs, and sandboxing technologies are no exception. Virtual machines, containers, and isolates — which Workers use — also have bugs.

Workers relies heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has pros and cons. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider attack surface than virtual machines. More complexity means more opportunities for something to go wrong. However, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google’s investment does a lot to minimize the danger of V8 zero-days — bugs that are found by malicious actors and not known to Google.

But, what happens after a bug is found and reported? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time. It is important that any patch be rolled out to production as fast as possible, before malicious actors can develop an exploit. The time between publishing the fix and deploying it is known as the patch gap. Google previously [announced that Chrome’s patch gap had been reduced from 33 days to 15 days](https://www.zdnet.com/article/google-cuts-chrome-patch-gap-in-half-from-33-to-15-days/).

Fortunately, Cloudflare directly controls the machines on which the Workers runtime operates. Nearly the entire build and release process has been automated, so the moment a V8 patch is published, Cloudflare systems automatically build a new release of the Workers runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production.
As a result, the Workers patch gap is now under 24 hours. A patch published by V8’s team in Munich during their work day will usually be in production before the end of the US work day.

## Spectre: Introduction

The V8 team at Google has stated that [V8 itself cannot defend against Spectre](https://arxiv.org/abs/1902.05178). Fortunately, Workers does not need to rely on V8 alone: the Workers environment enables several additional approaches to mitigating Spectre.

### What is it?

Spectre is a class of attacks in which a malicious program can trick the CPU into speculatively performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on the cache.

For more information about Spectre, refer to the [Learning Center page on the topic](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/).

### Why does it matter for Workers?

Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model, and it is likely that many vulnerabilities exist which have not yet been discovered.

These vulnerabilities are a problem for every cloud compute platform. Any time you have more than one tenant running code on the same machine, Spectre attacks are possible. However, the closer together the tenants are, the more difficult it can be to mitigate specific vulnerabilities. Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various defenses (many of which can come with serious performance impact).
In Cloudflare Workers, tenants are isolated from each other using V8 isolates — not processes nor VMs. This means that Workers cannot necessarily rely on OS or hypervisor patches to prevent Spectre. Workers needs its own strategy.

### Why not use process isolation?

Cloudflare Workers is designed to run your code in every single Cloudflare location. Workers is designed to be a platform accessible to everyone. It needs to handle a huge number of tenants, where many tenants get very little traffic. Combine these two points and planning becomes difficult.

A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant’s traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that is plenty. That machine can be hosted in a massive data center with millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users are not nearby.

With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in the quest to get as close to the end user as possible, Cloudflare sometimes chooses locations that only have space for a limited number of machines. The net result is that Cloudflare needs to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory — hardly enough space for a call stack, much less everything else that a process needs.

Moreover, Cloudflare needs context switching to be computationally efficient. Many Workers resident in memory will only handle an event every now and then, and many Workers spend less than a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second.
To handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. When using strict process isolation in Workers, the CPU cost can easily be 10x what it is with a shared process.

In order to keep Workers inexpensive, fast, and accessible to everyone, Cloudflare needed to find a way to host multiple tenants in a single process.

### There is no fix for Spectre

Spectre has no official solution, not even when using heavyweight virtual machines; everyone is still vulnerable. Every couple of months, researchers uncover a new Spectre vulnerability, CPU vendors release new microcode, and OS vendors release kernel patches. Everyone must continue updating.

But is it enough to merely deploy the latest patches? More vulnerabilities exist but have not yet been publicized. To defend against Spectre, Cloudflare needed to take a different approach. It is not enough to block individual known vulnerabilities. Instead, entire classes of vulnerabilities must be addressed at once.

### Building a defense

It is unlikely that any all-encompassing fix for Spectre will be found. However, the following thought experiment raises points to consider:

Fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system. Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable. However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing.
The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU.

Some have proposed that this can be solved by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out inconsistencies.

Many security researchers see this as the end of the story. What good is slowing down an attack if the attack is still possible?

### Cascading slow-downs

However, measures that slow down an attack can be powerful. The key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting. Much of cryptography, after all, is technically vulnerable to brute force attacks — technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, this is a sufficient defense.

What can be done to slow down Spectre attacks to the point of meaninglessness?

## Freezing a Spectre attack

### Step 0: Do not allow native code

Workers does not allow customers to upload native-code binaries to run on the Cloudflare network — only JavaScript and WebAssembly. Many other languages, like Python, Rust, or even COBOL, can be compiled or transpiled to one of these two formats. V8 then compiles both formats into true native code.

This, in itself, does not necessarily make Spectre attacks harder. However, this is presented as step 0 because it is fundamental to enabling the following steps.
Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host’s control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the `CLFLUSH` instruction, an instruction [which is useful in side channel attacks](https://gruss.cc/files/flushflush.pdf) and almost nothing else.

Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the `RDTSC` instruction, which reads a high-precision timer. Realistically, though, disabling it will break many programs because they are implemented to use `RDTSC` any time they want to know the current time.

Supporting native code would limit choice in future mitigation techniques. There is greater freedom in using an abstract intermediate format.

### Step 1: Disallow timers and multi-threading

In Workers, you can get the current time using the JavaScript Date API by calling `Date.now()`. However, the time value returned is not the current time. `Date.now()` returns the time of the last I/O. It does not advance during code execution. For example, if an attacker writes:

```js
let start = Date.now();
for (let i = 0; i < 1e6; i++) {
	doSpectreAttack();
}
let end = Date.now();
```

The values of `start` and `end` will always be exactly the same. The attacker cannot use `Date` to measure the execution time of their code, which they would need to do to carry out an attack.

:::note
This measure was implemented in mid-2017, before Spectre was announced, because Cloudflare was already concerned about side channel timing attacks.
The Workers team has designed the system with side channels in mind.
:::

Similarly, multi-threading and shared memory are not permitted in Workers. Everything related to the processing of one event happens on the same thread. Otherwise, one would be able to race threads in order to guess and check the underlying timer. Multiple Workers are not allowed to operate on the same request concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread.

At this point, measuring code execution time locally is prevented. However, it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Such a measurement is likely to be very noisy, as it would have to traverse the Internet and incur general networking costs. Such noise can be overcome, in theory, by executing the attack many times and taking an average.

:::note
It has been suggested that if Workers reset its execution environment on every request, it would be in a much safer position against timing attacks. Unfortunately, it is not so simple. The execution state could be stored in a client — not the Worker itself — allowing a Worker to resume its previous state on every new request.
:::

In adversarial testing and with help from leading Spectre experts, Cloudflare has not been able to develop a remote timing attack that works in production. However, the lack of a working attack does not mean that Workers should stop building defenses. Instead, the Workers team is currently testing some more advanced measures.

### Step 2: Dynamic process isolation

If an attack is possible at all, it would take a long time to run — hours at the very least, maybe as long as weeks.
But once an attack has been running even for a second, there is a large amount of new data that can be used to trigger further measures. Spectre attacks exhibit abnormal behavior that would not usually be seen in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters. Now, the usual problem with using performance metrics to detect Spectre attacks is that there are sometimes false positives. Sometimes, a legitimate program behaves poorly. The runtime cannot shut down every application that has poor performance. Instead, the runtime chooses to reschedule any Worker with suspicious performance metrics into its own process. As described above, the runtime cannot do this with every Worker because the overhead would be too high. However, it is acceptable to isolate a few Worker processes as a defense mechanism. If the Worker is legitimate, it will keep operating, with a little more overhead. Fortunately, Cloudflare can relocate a Worker into its own process at basically any time. In fact, elaborate performance-counter based triggering may not even be necessary here. If a Worker uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less because it switches context less often. So, the runtime might as well use process isolation for any Worker that is CPU-hungry. Once a Worker is isolated, Cloudflare can rely on the operating system’s Spectre defenses, as most desktop web browsers do. Cloudflare has been working with the experts at Graz Technical University to develop this approach. 
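The triggering policy described above might look something like the following sketch. The metric names and thresholds are invented for illustration; they are not Cloudflare's actual values.

```javascript
// Illustrative only: decide whether to move a Worker into its own process.
// Metric names and thresholds here are assumptions, not real policy.
function shouldIsolate(metrics) {
	// CPU-hungry Workers context-switch less often, so the relative overhead
	// of process isolation is low. Isolate them as a matter of course.
	if (metrics.cpuMsPerEvent > 50) return true;

	// Pathological microarchitectural behavior, such as unusually high
	// cache-miss or branch-miss rates, is suspicious. Isolate those too.
	if (metrics.cacheMissRate > 0.3 || metrics.branchMissRate > 0.2) return true;

	return false;
}

// A legitimate, lightweight Worker stays in the shared process:
console.log(shouldIsolate({ cpuMsPerEvent: 2, cacheMissRate: 0.05, branchMissRate: 0.02 })); // false
```

Note that a false positive here is cheap: a legitimate Worker that trips the heuristic simply runs with a bit more overhead, which is what makes this defense practical.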
TU Graz’s team co-discovered Spectre itself and has been responsible for a huge number of the follow-on discoveries since then. Cloudflare has developed the ability to dynamically isolate Workers and has identified metrics which reliably detect attacks. As mentioned previously, process isolation is not a complete defense. However, the preceding mitigations make Spectre attacks slow enough that Cloudflare can reasonably identify likely malicious actors, and isolating the process further slows down any potential attack.

### Step 3: Periodic whole-memory shuffling

At this point, all known attacks have been prevented. This leaves Workers susceptible to unknown attacks in the future, as with all other CPU-based systems. However, all new attacks will generally be very slow, taking days or longer, leaving Cloudflare with time to prepare a defense. For example, it is within reason to restart the entire Workers runtime on a daily basis. This will reset the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets. Cloudflare can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited. In general, because Workers are fundamentally preemptible (unlike containers or VMs), Cloudflare has a lot of freedom to frustrate attacks.

Cloudflare sees this as an ongoing investment, not something that will ever be finished.

---

# Billing and Limitations

URL: https://developers.cloudflare.com/workers/static-assets/billing-and-limitations/

## Billing

Requests to a project with static assets can either return static assets or invoke the Worker script, depending on whether the request [matches a static asset or not](/workers/static-assets/routing/). Requests to static assets are free and unlimited. Requests to the Worker script (for example, in the case of SSR content) are billed according to Workers pricing.
Refer to [pricing](/workers/platform/pricing/#example-2) for an example. There is no additional cost for storing assets.

## Limitations

The following limitations apply for Workers with static assets:

- There is a 20,000 file count limit per [Worker version](/workers/configuration/versions-and-deployments/), and a 25 MiB individual file size limit. This matches the [limits in Cloudflare Pages](/pages/platform/limits/) today.
- In local development, you cannot make [Service Binding RPC calls](/workers/runtime-apis/bindings/service-bindings/rpc/) to a Worker with static assets. This is a temporary limitation; we are working to remove it.
- Workers with assets cannot run on a [route or domain](/workers/configuration/routing/) with a path component. For example, `example.com/*` is an acceptable route, but `example.com/foo/*` is not. Wrangler and the Cloudflare dashboard will throw an error when you try to add a route with a path component.

## Troubleshooting

- `assets.bucket is a required field` — if you see this error, update Wrangler to version `3.78.10` or later; `bucket` is not a required field in current versions.

---

# Configuration and Bindings

URL: https://developers.cloudflare.com/workers/static-assets/binding/

import { Badge, Description, FileTree, InlineBadge, Render, TabItem, Tabs, WranglerConfig } from "~/components";

Configuring a Worker with assets requires specifying a [directory](/workers/static-assets/binding/#directory) and, optionally, an [assets binding](/workers/static-assets/binding/), in your Worker's Wrangler file. The [assets binding](/workers/static-assets/binding/) allows you to dynamically fetch assets from within your Worker script (e.g. `env.ASSETS.fetch()`), similarly to how you might make a `fetch()` call with a [Service binding](/workers/runtime-apis/bindings/service-bindings/http/).

Only one collection of static assets can be configured in each Worker.

## `directory`

The folder of static assets to be served.
For many frameworks, this is the `./public/`, `./dist/`, or `./build/` folder.

<WranglerConfig>

```toml title="wrangler.toml"
name = "my-worker"
compatibility_date = "2024-09-19"
assets = { directory = "./public/" }
```

</WranglerConfig>

### Ignoring assets

Sometimes there are files in the asset directory that should not be uploaded. In this case, create a `.assetsignore` file in the root of the assets directory. This file takes the same format as `.gitignore`. Wrangler will not upload asset files that match lines in this file.

**Example**

You are migrating from a Pages project where the assets directory is `dist`. You do not want to upload the server-side Worker code nor Pages configuration files as public client-side assets. Add the following `.assetsignore` file:

```txt
_worker.js
_redirects
_headers
```

Now Wrangler will not upload these files as client-side assets when deploying the Worker.

## `run_worker_first`

Controls whether the Worker script is invoked even when a request would otherwise have matched an asset. `run_worker_first = false` ([default](/workers/static-assets/routing/#default-behavior)) will serve any static asset matching a request, while `run_worker_first = true` will unconditionally [invoke your Worker script](/workers/static-assets/routing/#invoking-worker-script-ahead-of-assets).

<WranglerConfig>

```toml title="wrangler.toml"
name = "my-worker"
compatibility_date = "2024-09-19"
main = "src/index.ts"

# The following configuration unconditionally invokes the Worker script at
# `src/index.ts`, which can programmatically fetch assets via the ASSETS binding
[assets]
directory = "./public/"
binding = "ASSETS"
run_worker_first = true
```

</WranglerConfig>

## `binding`

Configuring the optional [binding](/workers/runtime-apis/bindings) gives you access to the collection of assets from within your Worker script.
<WranglerConfig>

```toml title="wrangler.toml"
name = "my-worker"
main = "./src/index.js"
compatibility_date = "2024-09-19"

[assets]
directory = "./public/"
binding = "ASSETS"
```

</WranglerConfig>

In the example above, assets would be available through `env.ASSETS`.

### Runtime API Reference

#### `fetch()`

**Parameters**

- `request: Request | URL | string` Pass a [Request object](/workers/runtime-apis/request/), URL object, or URL string. Requests made through this method have `html_handling` and `not_found_handling` configuration applied to them.

**Response**

- `Promise<Response>` Returns a static asset response for the given request.

**Example**

Your dynamic code can make new requests, or forward incoming requests, to your project's static assets using the assets binding. For example, `env.ASSETS.fetch(request)`, `env.ASSETS.fetch(new URL('https://assets.local/my-file'))` or `env.ASSETS.fetch('https://assets.local/my-file')`.

Take the following example that configures a Worker script to return a response under all requests headed for `/api/`. Otherwise, the Worker script will pass the incoming request through to the asset binding. In this case, because a Worker script is only invoked when the requested route has not matched any static assets, this will always evaluate [`not_found_handling`](/workers/static-assets/routing/#routing-configuration) behavior.

<Tabs> <TabItem label="JavaScript" icon="seti:javascript">

```js
export default {
	async fetch(request, env) {
		const url = new URL(request.url);

		if (url.pathname.startsWith("/api/")) {
			// TODO: Add your custom /api/* logic here.
			return new Response("Ok");
		}

		// Passes the incoming request through to the assets binding.
		// No asset matched this request, so this will evaluate `not_found_handling` behavior.
		return env.ASSETS.fetch(request);
	},
};
```

</TabItem> <TabItem label="TypeScript" icon="seti:typescript">

```ts
interface Env {
	ASSETS: Fetcher;
}

export default {
	async fetch(request, env): Promise<Response> {
		const url = new URL(request.url);

		if (url.pathname.startsWith("/api/")) {
			// TODO: Add your custom /api/* logic here.
			return new Response("Ok");
		}

		// Passes the incoming request through to the assets binding.
		// No asset matched this request, so this will evaluate `not_found_handling` behavior.
		return env.ASSETS.fetch(request);
	},
} satisfies ExportedHandler<Env>;
```

</TabItem> </Tabs>

## Routing configuration

For the various static asset routing configuration options, refer to [Routing](/workers/static-assets/routing/).

## Smart Placement

[Smart Placement](/workers/configuration/smart-placement/) can be used to place a Worker's code close to your back-end infrastructure. Smart Placement will only have an effect if you specified a `main`, pointing to your Worker code.

### Smart Placement with Worker Code First

If you want to run your [Worker code ahead of assets](/workers/static-assets/routing/#invoking-worker-script-ahead-of-assets) by setting `run_worker_first=true`, all requests must first travel to your smart-placed Worker. As a result, you may experience increased latency for asset requests.

Use Smart Placement with `run_worker_first=true` when you need to integrate with other backend services, authenticate requests before serving any assets, or if you want to make modifications to your assets before serving them.

If you want some assets served as quickly as possible to the user, but others to be served behind a smart-placed Worker, consider splitting your app into multiple Workers and [using service bindings to connect them](/workers/configuration/smart-placement/#best-practices).
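A split like that might be sketched as follows. The `BACKEND` service binding name and the `/api/` prefix are assumptions for illustration, not a prescribed layout.

```javascript
// Hypothetical front Worker: serve static assets at the edge, and forward
// data-heavy routes to a separate, smart-placed Worker over a service binding.
// `BACKEND` and the `/api/` prefix are invented names for this sketch.
const worker = {
	async fetch(request, env) {
		const url = new URL(request.url);

		if (url.pathname.startsWith("/api/")) {
			// BACKEND is a service binding to the smart-placed Worker.
			return env.BACKEND.fetch(request);
		}

		// ASSETS serves static files from as close to the user as possible.
		return env.ASSETS.fetch(request);
	},
};

// export default worker; // as the front Worker's entry point
```

With this shape, asset requests never pay the Smart Placement round trip, while API requests still run near your back-end infrastructure.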
### Smart Placement with Assets First

Enabling Smart Placement with `run_worker_first=false` (or not specifying it) lets you serve assets from as close as possible to your users, while running your Worker logic where it executes most efficiently (such as near a database).

Use Smart Placement with `run_worker_first=false` (or not specifying it) when prioritizing fast asset delivery. This will not impact the [default routing behavior](/workers/static-assets/routing/#default-behavior).

---

# Workers vs. Pages (compatibility matrix)

URL: https://developers.cloudflare.com/workers/static-assets/compatibility-matrix/

import { Badge, Description, FileTree, InlineBadge, Render, TabItem, Tabs } from "~/components";

You can deploy full-stack applications, including front-end static assets and back-end APIs, as well as server-side rendered pages (SSR), to both Cloudflare [Workers](/workers/static-assets/) and [Pages](/pages/). The compatibility matrix below shows which features are available for each, to help you choose whether to build with Workers or Pages. Unless otherwise stated below, what works in Pages works in Workers, and what works in Workers works in Pages.

Think something is missing from this list? [Open a pull request](https://github.com/cloudflare/cloudflare-docs/edit/production/src/content/docs/workers/static-assets/compatibility-matrix.mdx) or [create a GitHub issue](https://github.com/cloudflare/cloudflare-docs/issues/new).

We plan to bridge the gaps between Workers and Pages and provide ways to migrate your Pages projects to Workers.
**Legend** <br /> ✅: Supported <br /> ⏳: Coming soon <br /> 🟡: Unsupported, workaround available <br /> ❌: Unsupported

|                                                                                      | Workers | Pages   |
| ------------------------------------------------------------------------------------ | ------- | ------- |
| **Writing, Testing, and Deploying Code**                                             |         |         |
| [Rollbacks](/workers/configuration/versions-and-deployments/rollbacks/)              | ✅      | ✅      |
| [Gradual Deployments](/workers/configuration/versions-and-deployments/)              | ✅      | ❌      |
| [Preview URLs](/workers/configuration/previews)                                      | ✅      | ✅      |
| [Testing tools](/workers/testing)                                                    | ✅      | ✅      |
| [Local Development](/workers/local-development/)                                     | ✅      | ✅      |
| [Remote Development (`--remote`)](/workers/wrangler/commands/)                       | ✅      | ❌      |
| [Quick Editor in Dashboard](https://blog.cloudflare.com/improved-quick-edit)         | ✅      | ❌      |
| **Static Assets**                                                                    |         |         |
| [Early Hints](/pages/configuration/early-hints/)                                     | ❌      | ✅      |
| [Custom HTTP headers for static assets](/pages/configuration/headers/)               | 🟡 [^1] | ✅      |
| [Middleware](/workers/static-assets/binding/#run_worker_first)                       | ✅ [^2] | ✅      |
| [Redirects](/pages/configuration/redirects/)                                         | 🟡 [^3] | ✅      |
| [Smart Placement](/workers/configuration/smart-placement/)                           | ✅      | ✅      |
| [Serve assets on a path](/workers/static-assets/routing/)                            | ✅      | ❌      |
| **Observability**                                                                    |         |         |
| [Workers Logs](/workers/observability/)                                              | ✅      | ❌      |
| [Logpush](/workers/observability/logs/logpush/)                                      | ✅      | ❌      |
| [Tail Workers](/workers/observability/logs/tail-workers/)                            | ✅      | ❌      |
| [Real-time logs](/workers/observability/logs/real-time-logs/)                        | ✅      | ✅      |
| [Source Maps](/workers/observability/source-maps/)                                   | ✅      | ❌      |
| **Runtime APIs & Compute Models**                                                    |         |         |
| [Node.js Compatibility Mode](/workers/runtime-apis/nodejs/)                          | ✅      | ✅      |
| [Durable Objects](/durable-objects/api/)                                             | ✅      | 🟡 [^4] |
| [Cron Triggers](/workers/configuration/cron-triggers/)                               | ✅      | ❌      |
| **Bindings**                                                                         |         |         |
| [AI](/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai)  | ✅      | ✅      |
| [Analytics Engine](/analytics/analytics-engine)                                      | ✅      | ✅      |
| [Assets](/workers/static-assets/binding/)                                            | ✅      | ✅      |
| [Browser Rendering](/browser-rendering)                                              | ✅      | ✅      |
| [D1](/d1/worker-api/)                                                                | ✅      | ✅      |
| [Email Workers](/email-routing/email-workers/send-email-workers/)                    | ✅      | ❌      |
| [Environment Variables](/workers/configuration/environment-variables/)               | ✅      | ✅      |
| [Hyperdrive](/hyperdrive/)                                                           | ✅      | ✅      |
| [KV](/kv/)                                                                           | ✅      | ✅      |
| [mTLS](/workers/runtime-apis/bindings/mtls/)                                         | ✅      | ✅      |
| [Queue Producers](/queues/configuration/configure-queues/#producer)                  | ✅      | ✅      |
| [Queue Consumers](/queues/configuration/configure-queues/#consumer)                  | ✅      | ❌      |
| [R2](/r2/)                                                                           | ✅      | ✅      |
| [Rate Limiting](/workers/runtime-apis/bindings/rate-limit/)                          | ✅      | ❌      |
| [Secrets](/workers/configuration/secrets/)                                           | ✅      | ✅      |
| [Service bindings](/workers/runtime-apis/bindings/service-bindings/)                 | ✅      | ✅      |
| [Vectorize](/vectorize/get-started/intro/#3-bind-your-worker-to-your-index)          | ✅      | ✅      |
| **Builds (CI/CD)**                                                                   |         |         |
| [Monorepos](/workers/ci-cd/builds/advanced-setups/)                                  | ✅      | ✅      |
| [Build Watch Paths](/workers/ci-cd/builds/build-watch-paths/)                        | ✅      | ✅      |
| [Build Caching](/workers/ci-cd/builds/build-caching/)                                | ✅      | ✅      |
| [Deploy Hooks](/pages/configuration/deploy-hooks/)                                   | ❌      | ✅      |
| [Branch Deploy Controls](/pages/configuration/branch-build-controls/)                | ❌      | ✅      |
| [Custom Branch Aliases](/pages/how-to/custom-branch-aliases/)                        | ❌      | ✅      |
| **Pages Functions**                                                                  |         |         |
| [File-based Routing](/pages/functions/routing/)                                      | ❌ [^5] | ✅      |
| [Pages Plugins](/pages/functions/plugins/)                                           | ❌ [^6] | ✅      |
| **Domain Configuration**                                                             |         |         |
| [Custom domains](/workers/configuration/routing/custom-domains/#add-a-custom-domain) | ✅      | ✅      |
| [Custom subdomains](/workers/configuration/routing/custom-domains/#set-up-a-custom-domain-in-the-dashboard) | ✅ | ✅ |
| [Custom domains outside Cloudflare zones](/pages/configuration/custom-domains/#add-a-custom-cname-record) | ❌ | ✅ |
| [Non-root routes](/workers/configuration/routing/routes/)                            | ✅      | ❌      |

[^1]: Similar to <sup>3</sup>, to customize the HTTP headers that are
returned by static assets, you can use [Service bindings](/workers/runtime-apis/bindings/service-bindings/) to connect a Worker in front of the Worker with assets.

[^2]: Middleware can be configured via the [`run_worker_first`](/workers/static-assets/binding/#run_worker_first) option, but is charged as a normal Worker invocation. We plan to explore additional related options in the future.

[^3]: You can handle redirects by adding code to your Worker (a [community package](https://npmjs.com/package/redirects-in-workers) is available for `_redirects` support), or you can use [Bulk Redirects](/rules/url-forwarding/bulk-redirects/).

[^4]: To [use Durable Objects with your Cloudflare Pages project](/pages/functions/bindings/#durable-objects), you must create a separate Worker with a Durable Object and then declare a binding to it in both your Production and Preview environments. Using Durable Objects with Workers is simpler and recommended.

[^5]: Workers [supports popular frameworks](/workers/frameworks/), many of which implement file-based routing.

[^6]: Everything that is possible with Pages Functions can also be achieved by adding code to your Worker or by using framework-specific plugins for relevant third party tools.

---

# Direct Uploads

URL: https://developers.cloudflare.com/workers/static-assets/direct-upload/

import { Badge, Description, FileTree, InlineBadge, Render, TabItem, Tabs } from "~/components";
import { Icon } from "astro-icon/components";

:::note
Directly uploading assets via APIs is an advanced approach which, unless you are building a programmatic integration, most users will not need. Instead, we encourage you to deploy your Worker with [Wrangler](/workers/static-assets/get-started/#1-create-a-new-worker-project-using-the-cli).
:::

Our API empowers users to upload and include static assets as part of a Worker.
These static assets are served for free, and you can optionally fetch them through an [assets binding](/workers/static-assets/binding/) to power more advanced applications. This guide will describe the process for attaching assets to your Worker directly with the API. <Tabs syncKey="workers-vs-platforms" IconComponent={Icon}> <TabItem icon="workers" label="Workers"> ```mermaid sequenceDiagram participant User participant Workers API User<<->>Workers API: Submit manifest<br/>POST /client/v4/accounts/:accountId/workers/scripts/:scriptName/assets-upload-session User<<->>Workers API: Upload files<br/>POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true User<<->>Workers API: Upload script version<br/>PUT /client/v4/accounts/:accountId/workers/scripts/:scriptName ``` </TabItem> <TabItem icon="cloudflare-for-platforms" label="Workers for Platforms" IconComponent={Icon}> ```mermaid sequenceDiagram participant User participant Workers API User<<->>Workers API: Submit manifest<br/>POST /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName/assets-upload-session User<<->>Workers API: Upload files<br/>POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true User<<->>Workers API: Upload script version<br/>PUT /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName ``` </TabItem> </Tabs> The asset upload flow can be distilled into three distinct phases: 1. Registration of a manifest 2. Upload of the assets 3. Deployment of the Worker ## Upload manifest The asset manifest is a ledger which keeps track of files we want to use in our Worker. This manifest is used to track assets associated with each Worker version, and eliminate the need to upload unchanged files prior to a new upload.
The [manifest upload request](/api/resources/workers/subresources/scripts/subresources/assets/subresources/upload/methods/create/) describes each file which we intend to upload. Each file is its own key representing the file path and name, and is an object which contains metadata about the file. `hash` represents a 32 hexadecimal character hash of the file, while `size` is the size (in bytes) of the file. <Tabs syncKey="workers-vs-platforms" IconComponent={Icon}> <TabItem icon="workers" label="Workers"> ```bash curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{script_name}/assets-upload-session \ --header 'content-type: application/json' \ --header 'Authorization: Bearer <API_TOKEN>' \ --data '{ "manifest": { "/filea.html": { "hash": "08f1dfda4574284ab3c21666d1", "size": 12 }, "/fileb.html": { "hash": "4f1c1af44620d531446ceef93f", "size": 23 }, "/filec.html": { "hash": "54995e302614e0523757a04ec1", "size": 23 } } }' ``` </TabItem> <TabItem icon="cloudflare-for-platforms" label="Workers for Platforms"> ```bash curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/dispatch/namespaces/{dispatch_namespace}/scripts/{script_name}/assets-upload-session \ --header 'content-type: application/json' \ --header 'Authorization: Bearer <API_TOKEN>' \ --data '{ "manifest": { "/filea.html": { "hash": "08f1dfda4574284ab3c21666d1", "size": 12 }, "/fileb.html": { "hash": "4f1c1af44620d531446ceef93f", "size": 23 }, "/filec.html": { "hash": "54995e302614e0523757a04ec1", "size": 23 } } }' ``` </TabItem> </Tabs> The resulting response will contain a JWT, which provides authentication during file upload. The JWT is valid for one hour. In addition to the JWT, the response instructs users how to optimally batch upload their files. These instructions are encoded in the `buckets` field. Each array in `buckets` contains a list of file hashes which should be uploaded together. 
Hashes of files that have been recently uploaded may not be returned in the API response; they do not need to be re-uploaded. ```json { "result": { "jwt": "<UPLOAD_TOKEN>", "buckets": [ ["08f1dfda4574284ab3c21666d1", "4f1c1af44620d531446ceef93f"], ["54995e302614e0523757a04ec1"] ] }, "success": true, "errors": null, "messages": null } ``` :::note If all assets have been previously uploaded, `buckets` will be empty, and `jwt` will contain a completion token. Uploading files is not necessary, and you can skip directly to [uploading a new script or version](/workers/static-assets/direct-upload/#createdeploy-new-version). ::: ### Limitations - Each file must be under 25 MiB - The overall manifest must not contain more than 20,000 file entries ## Upload Static Assets The [file upload API](/api/resources/workers/subresources/assets/subresources/upload/methods/create/) requires files be uploaded using `multipart/form-data`. The contents of each file must be base64 encoded, and the `base64` query parameter in the URL must be set to `true`. The provided `Content-Type` header of each file part will be attached when eventually serving the file. If you wish to avoid sending a `Content-Type` header in your deployment, `application/null` may be sent at upload time. The `Authorization` header must be provided as a bearer token, using the JWT (upload token) from the aforementioned manifest upload call. Once every file in the manifest has been uploaded, a status code of 201 will be returned, with the `jwt` field present. This JWT is a final "completion" token which can be used to create a deployment of a Worker with this set of assets. This completion token is valid for 1 hour. 
## Create/Deploy New Version [Script](/api/resources/workers/subresources/scripts/methods/update/), [Version](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/), and [Workers for Platform script](/api/resources/workers_for_platforms/subresources/dispatch/subresources/namespaces/subresources/scripts/methods/update/) upload endpoints require specifying a metadata part in the form data. Here, we can provide the completion token from the previous (upload assets) step. ```json title="Example Worker Metadata Specifying Completion Token" { "main_module": "main.js", "assets": { "jwt": "<completion_token>" }, "compatibility_date": "2021-09-14" } ``` If this is a Worker which already has assets, and you wish to re-use the existing set of assets, you do not have to specify the completion token again. Instead, you can pass the boolean `keep_assets` option. ```json title="Example Worker Metadata Specifying keep_assets" { "main_module": "main.js", "keep_assets": true, "compatibility_date": "2021-09-14" } ``` Asset [routing configuration](/workers/static-assets/routing/#routing-configuration) can be provided in the `assets` object, such as `html_handling` and `not_found_handling`. ```json title="Example Worker Metadata Specifying Asset Configuration" { "main_module": "main.js", "assets": { "jwt": "<completion_token>", "config": { "html_handling": "auto-trailing-slash" } }, "compatibility_date": "2021-09-14" } ``` Optionally, an assets binding can be provided if you wish to fetch and serve assets from within your Worker code. ```json title="Example Worker Metadata Specifying Asset Binding" { "main_module": "main.js", "assets": { ... }, "bindings": [ ... { "name": "ASSETS", "type": "assets" } ...
], "compatibility_date": "2021-09-14" } ``` ## Programmatic Example <Tabs> <TabItem label="TypeScript" icon="seti:typescript"> ```ts import * as fs from "fs"; import * as path from "path"; import * as crypto from "crypto"; import { FormData, fetch } from "undici"; import "node:process"; const accountId: string = ""; // Replace with your actual account ID const filesDirectory: string = "assets"; // Adjust to your assets directory const scriptName: string = "my-new-script"; // Replace with desired script name const dispatchNamespace: string = ""; // Replace with a dispatch namespace if using Workers for Platforms interface FileMetadata { hash: string; size: number; } interface UploadSessionData { uploadToken: string; buckets: string[][]; fileMetadata: Record<string, FileMetadata>; } interface UploadResponse { result: { jwt: string; buckets: string[][]; }; success: boolean; errors: any; messages: any; } // Function to calculate the SHA-256 hash of a file and truncate to 32 characters function calculateFileHash(filePath: string): { fileHash: string; fileSize: number; } { const hash = crypto.createHash("sha256"); const fileBuffer = fs.readFileSync(filePath); hash.update(fileBuffer); const fileHash = hash.digest("hex").slice(0, 32); // Grab the first 32 characters const fileSize = fileBuffer.length; return { fileHash, fileSize }; } // Function to gather file metadata for all files in the directory function gatherFileMetadata(directory: string): Record<string, FileMetadata> { const files = fs.readdirSync(directory); const fileMetadata: Record<string, FileMetadata> = {}; files.forEach((file) => { const filePath = path.join(directory, file); const { fileHash, fileSize } = calculateFileHash(filePath); fileMetadata["/" + file] = { hash: fileHash, size: fileSize, }; }); return fileMetadata; } function findMatch( fileHash: string, fileMetadata: Record<string, FileMetadata>, ): string { for (let prop in fileMetadata) { const file = fileMetadata[prop] as FileMetadata; if
(file.hash === fileHash) { return prop; } } throw new Error("unknown fileHash"); } // Function to upload a batch of files using the JWT from the first response async function uploadFilesBatch( jwt: string, fileHashes: string[][], fileMetadata: Record<string, FileMetadata>, ): Promise<string> { const form = new FormData(); for (const bucket of fileHashes) { bucket.forEach((fileHash) => { const fullPath = findMatch(fileHash, fileMetadata); const relPath = filesDirectory + "/" + path.basename(fullPath); const fileBuffer = fs.readFileSync(relPath); const base64Data = fileBuffer.toString("base64"); // Convert file to Base64 form.append( fileHash, new File([base64Data], fileHash, { type: "text/html", // Modify Content-Type header based on type of file }), fileHash, ); }); const response = await fetch( `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/assets/upload?base64=true`, { method: "POST", headers: { Authorization: `Bearer ${jwt}`, }, body: form, }, ); const data = (await response.json()) as UploadResponse; if (data && data.result.jwt) { return data.result.jwt; } } throw new Error("Should have received completion token"); } async function scriptUpload(completionToken: string): Promise<void> { const form = new FormData(); // Configure metadata form.append( "metadata", JSON.stringify({ main_module: "index.mjs", compatibility_date: "2022-03-11", assets: { jwt: completionToken, // Provide the completion token from file uploads }, bindings: [{ name: "ASSETS", type: "assets" }], // Optional assets binding to fetch from user worker }), ); // Configure (optional) user worker form.append( "index.js", new File( [ "export default {async fetch(request, env) { return new Response('Hello world from user worker!'); }}", ], "index.mjs", { type: "application/javascript+module", }, ), ); const url = dispatchNamespace ? 
`https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}`; const response = await fetch(url, { method: "PUT", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, }, body: form, }); if (response.status != 200) { throw new Error("unexpected status code"); } } // Function to make the POST request to start the assets upload session async function startUploadSession(): Promise<UploadSessionData> { const fileMetadata = gatherFileMetadata(filesDirectory); const requestBody = JSON.stringify({ manifest: fileMetadata, }); const url = dispatchNamespace ? `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}/assets-upload-session` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}/assets-upload-session`; const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, "Content-Type": "application/json", }, body: requestBody, }); const data = (await response.json()) as UploadResponse; const jwt = data.result.jwt; return { uploadToken: jwt, buckets: data.result.buckets, fileMetadata, }; } // Begin the upload session by uploading a new manifest const { uploadToken, buckets, fileMetadata } = await startUploadSession(); // If all files are already uploaded, a completion token will be immediately returned. 
Otherwise, // we should upload the missing files let completionToken = uploadToken; if (buckets.length > 0) { completionToken = await uploadFilesBatch(uploadToken, buckets, fileMetadata); } // Once we have uploaded all of our files, we can upload a new script, and assets, with completion token await scriptUpload(completionToken); ``` </TabItem> </Tabs> --- # Get Started URL: https://developers.cloudflare.com/workers/static-assets/get-started/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; For most front-end applications, you'll want to use a framework. Workers supports a number of popular [frameworks](/workers/frameworks/) that come with ready-to-use components, a pre-defined and structured architecture, and community support. View [framework-specific guides](/workers/frameworks/) to get started using a framework. Alternatively, you may prefer to build your website from scratch if: - You're interested in learning by implementing core functionalities on your own. - You're working on a simple project where you might not need a framework. - You want to optimize for performance by minimizing external dependencies. - You require complete control over every aspect of the application. - You want to build your own framework. This guide will instruct you through setting up and deploying a static site or a full-stack application without a framework on Workers. ## Deploy a static site This guide will instruct you through setting up and deploying a static site on Workers. ### 1. Create a new Worker project using the CLI [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.
Open a terminal window and run C3 to create your Worker project: <PackageManagers type="create" pkg="cloudflare@latest" args={"my-static-site --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World - Assets-only", lang: "TypeScript", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-static-site ``` ### 2. Develop locally After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev ``` ### 3. Deploy your project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The [`wrangler deploy`](/workers/wrangler/commands/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy ``` :::note Learn about how assets are configured and how routing works from [Routing configuration](/workers/static-assets/routing/). ::: ## Deploy a full-stack application This guide will instruct you through setting up and deploying dynamic and interactive server-side rendered (SSR) applications on Cloudflare Workers. When building a full-stack application, you can use any [Workers bindings](/workers/runtime-apis/bindings/), [including assets' own](/workers/static-assets/binding/), to interact with resources on the Cloudflare Developer Platform. ### 1.
Create a new Worker project [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Open a terminal window and run C3 to create your Worker project: <PackageManagers type="create" pkg="cloudflare@latest" args={"my-dynamic-site --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World - Worker with Assets", lang: "TypeScript", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-dynamic-site ``` ### 2. Develop locally After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev ``` ### 3. Modify your project With your new project generated and running, you can begin to write and edit your project: - The `src/index.ts` file is populated with sample code. Modify its content to change the server-side behavior of your Worker. - The `public/index.html` file is populated with sample code. Modify its content, or anything else in `public/`, to change the static assets of your Worker. Then, save the files and reload the page. Your project's output will have changed based on your modifications. ### 4. Deploy your project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The [`wrangler deploy`](/workers/wrangler/commands/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.
```sh npx wrangler deploy ``` :::note Learn about how assets are configured and how routing works from [Routing configuration](/workers/static-assets/routing/). ::: --- # Static Assets URL: https://developers.cloudflare.com/workers/static-assets/ import { Aside, Badge, Card, CardGrid, Details, Description, InlineBadge, Icon, DirectoryListing, FileTree, Render, TabItem, Tabs, Feature, LinkButton, LinkCard, Stream, Flex, WranglerConfig, Steps, } from "~/components"; You can upload static assets (HTML, CSS, images and other files) as part of your Worker, and Cloudflare will handle caching and serving them to web browsers. <LinkCard title="Supported frameworks" href="/workers/frameworks/" description="Start building on Workers with our framework guides." /> ### How it works When you deploy your project, Cloudflare deploys both your Worker code and your static assets in a single operation. This deployment operates as a tightly integrated "unit" running across Cloudflare's network, combining static file hosting, custom logic, and global caching. The **assets directory** specified in your [Wrangler configuration file](/workers/wrangler/configuration/#assets) is central to this design. During deployment, Wrangler automatically uploads the files from this directory to Cloudflare's infrastructure. Once deployed, requests for these assets are routed efficiently to locations closest to your users. <WranglerConfig> ```toml {3-4} name = "my-spa" main = "src/index.js" compatibility_date = "2025-01-01" [assets] directory = "./dist" binding = "ASSETS" ``` </WranglerConfig> By adding an [**assets binding**](/workers/static-assets/binding/#binding), you can directly fetch and serve assets within your Worker code. 
```js {13} // index.js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { return new Response(JSON.stringify({ name: "Cloudflare" }), { headers: { "Content-Type": "application/json" }, }); } return env.ASSETS.fetch(request); }, }; ``` ### Routing behavior By default, if a requested URL matches a file in the static assets directory, that file will always be served — without running Worker code. If no matching asset is found and a Worker is configured, the request will be processed by the Worker instead. - If no Worker is set up, the [`not_found_handling`](/workers/static-assets/routing/#2-not_found_handling-1) setting in your Wrangler configuration determines what happens next. By default, a `404 Not Found` response is returned. - If a Worker is configured and a request does not match a static asset, the Worker will handle the request. The Worker can choose to pass the request to the asset binding (through `env.ASSETS.fetch()`), following the `not_found_handling` rules. You can configure and override this default routing behaviour. For example, if you have a Single Page Application and want to serve `index.html` for all unmatched routes, you can set `not_found_handling = "single-page-application"`: <WranglerConfig> ```toml [assets] directory = "./dist" not_found_handling = "single-page-application" ``` </WranglerConfig> If you want the Worker code to execute before serving an asset (for example, to protect an asset behind authentication), you can set `run_worker_first = true`. <WranglerConfig> ```toml [assets] directory = "./dist" run_worker_first = true ``` </WranglerConfig> <LinkCard title="Routing options" href="/workers/static-assets/routing/#routing-configuration" description="Learn more about how you can customize routing behavior." /> ### Caching behavior Cloudflare provides automatic caching for static assets across its network, ensuring fast delivery to users worldwide. 
When a static asset is requested, it is automatically cached for future requests. - **First Request:** When an asset is requested for the first time, it is fetched from storage and cached at the nearest Cloudflare location. - **Subsequent Requests:** If a request for the same asset reaches a data center that does not have it cached, Cloudflare's [tiered caching system](/cache/how-to/tiered-cache/) allows it to be retrieved from a nearby cache rather than going back to storage. This improves cache hit ratio, reduces latency, and reduces unnecessary origin fetches. ## Try it out #### 1. Create a new Worker project ```sh npm create cloudflare@latest -- my-dynamic-site ``` **For setup, select the following options**: - For _What would you like to start with?_, choose `Framework`. - For _Which framework would you like to use?_, choose `React`. - For _Which language do you want to use?_, choose `TypeScript`. - For _Do you want to use git for version control_?, choose `Yes`. - For _Do you want to deploy your application_?, choose `No` (we will be making some changes before deploying). After setting up the project, change the directory by running the following command: ```sh cd my-dynamic-site ``` #### 2. Build project Run the following command to build the project: ```sh npm run build ``` We should now see a new directory `/dist` in our project, which contains the compiled assets: <FileTree> - package.json - index.html - ... - dist Asset directory - ... Compiled assets - src - ... - ... </FileTree> In the next step, we use a Wrangler configuration file to allow Cloudflare to locate our compiled assets. #### 3. Add a Wrangler configuration file (`wrangler.toml` or `wrangler.json`) <WranglerConfig> ```toml name = "my-spa" compatibility_date = "2025-01-01" [assets] directory = "./dist" ``` </WranglerConfig> **Notice the `[assets]` block**: here we have specified our directory where Cloudflare can find our compiled assets (`./dist`). 
Our project structure should now look like this: <FileTree> - package.json - index.html - **wrangler.toml** Wrangler configuration - ... - dist Asset directory - ... Compiled assets - src - ... - ... </FileTree> #### 4. Deploy with Wrangler ```sh npx wrangler deploy ``` Our project is now deployed on Workers! But we can take this even further, by adding an **API Worker**. #### 5. Adjust our Wrangler configuration Replace the file contents of our Wrangler configuration with the following: <WranglerConfig> ```toml name = "my-spa" main = "src/api/index.js" compatibility_date = "2025-01-01" [assets] directory = "./dist" binding = "ASSETS" not_found_handling = "single-page-application" ``` </WranglerConfig> We have edited the Wrangler file in the following ways: - Added `main = "src/api/index.js"` to tell Cloudflare where to find our Worker code. - Added an `ASSETS` binding, which our Worker code can use to fetch and serve assets. - Enabled routing for Single Page Applications, which ensures that unmatched routes (such as `/dashboard`) serve our `index.html`. :::note By default, Cloudflare serves a `404 Not Found` to unmatched routes. To have the frontend handle routing instead of the server, you must enable `not_found_handling = "single-page-application"`. ::: #### 6. Create a new directory `/api`, and add an `index.js` file Copy the contents below into the `index.js` file. ```js {13} // api/index.js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { return new Response(JSON.stringify({ name: "Cloudflare" }), { headers: { "Content-Type": "application/json" }, }); } return env.ASSETS.fetch(request); }, }; ``` **Consider what this Worker does:** - Our Worker receives an HTTP request and extracts the URL. - If the request is for an API route (`/api/...`), it returns a JSON response. - Otherwise, it serves static assets from our directory (`env.ASSETS`). #### 7.
Call the API from the client Edit `src/App.tsx` so that it includes an additional button that calls the API, and sets some state. Replace the file contents with the following code: ```tsx {9,25,33-47} // src/App.tsx import { useState } from "react"; import reactLogo from "./assets/react.svg"; import viteLogo from "/vite.svg"; import "./App.css"; function App() { const [count, setCount] = useState(0); const [name, setName] = useState("unknown"); return ( <> <div> <a href="https://vite.dev" target="_blank"> <img src={viteLogo} className="logo" alt="Vite logo" /> </a> <a href="https://react.dev" target="_blank"> <img src={reactLogo} className="logo react" alt="React logo" /> </a> </div> <h1>Vite + React</h1> <div className="card"> <button onClick={() => setCount((count) => count + 1)} aria-label="increment" > count is {count} </button> <p> Edit <code>src/App.tsx</code> and save to test HMR </p> </div> <div className="card"> <button onClick={() => { fetch("/api/") .then((res) => res.json() as Promise<{ name: string }>) .then((data) => setName(data.name)); }} aria-label="get name" > Name from API is: {name} </button> <p> Edit <code>api/index.ts</code> to change the name </p> </div> <p className="read-the-docs"> Click on the Vite and React logos to learn more </p> </> ); } export default App; ``` Before deploying again, we need to rebuild our project: ```sh npm run build ``` #### 8. Deploy with Wrangler ```sh npx wrangler deploy ``` We can now see a new button, **Name from API**; clicking it shows our API response: **Cloudflare**! ## Learn more <LinkCard title="Supported frameworks" href="/workers/frameworks/" description="Start building on Workers with our framework guides." /> <LinkCard title="Billing and limitations" href="/workers/static-assets/billing-and-limitations/" description="Learn more about how requests are billed, current limitations, and troubleshooting."
/> --- # Routing URL: https://developers.cloudflare.com/workers/static-assets/routing/ import { Badge, Description, FileTree, InlineBadge, Render, TabItem, Tabs, WranglerConfig, } from "~/components"; ## Default behavior By default, assets are served by attempting to match up the incoming request's pathname to a static asset. The structure and organization of files in your static asset directory, along with any routing configuration, determine the routing paths for your application. When a request invokes a Worker with assets: 1. If the request's path matches a static asset, that asset will always be served. In this example, a request to `example.com/blog` serves the `blog.html` file. 2. If there is no Worker script, [`not_found_handling`](/workers/static-assets/routing/#2-not_found_handling) will be evaluated. By default, a null-body 404-status response will be served. If a Worker is configured, and there are no assets that match the current route requested, the Worker will be invoked. The Worker can then "fall back" to `not_found_handling` asset behavior, by passing the incoming request through to the [asset binding](/workers/static-assets/binding/#runtime-api-reference). In this example, a request to `example.com/api` doesn't match a static asset so the Worker is invoked. ## Invoking Worker Script Ahead of Assets You may wish to run code before assets are served. This is often the case when implementing authentication, logging, personalization, internationalization, or other similar functions. [`run_worker_first`](/workers/static-assets/binding/#run_worker_first) is a configuration option available in the [Wrangler configuration file](/workers/wrangler/configuration/) which controls this behavior. When enabled, `run_worker_first = true` will invoke your Worker's code, regardless of any assets that would have otherwise matched.
Take the following directory structure, [Wrangler configuration file](/workers/wrangler/configuration/), and user Worker code: <FileTree> - wrangler.json - package.json - public - supersecret.txt - src - index.ts </FileTree> <WranglerConfig> ```toml title="wrangler.toml" name = "my-worker" compatibility_date = "2024-09-19" main = "src/index.ts" assets = { directory = "./public/", binding = "ASSETS", run_worker_first = true } ``` </WranglerConfig> ```ts export default { async fetch( request: Request, env: Env, ctx: ExecutionContext ): Promise<Response> { const url = new URL(request.url); if (url.pathname === "/supersecret.txt") { const auth = request.headers.get("Authorization"); if (!auth) { return new Response("Forbidden", { status: 403, statusText: "Forbidden", headers: { "Content-Type": "text/plain", }, }); } } return await env.ASSETS.fetch(request); }, }; ``` In this example, any request will be routed to our user Worker, due to `run_worker_first` being enabled. As a result, any request made to `/supersecret.txt` without an `Authorization` header will result in a 403. ## Routing configuration There are two options for asset serving that can be configured in the [Wrangler configuration file](/workers/wrangler/configuration/#assets): #### 1. `html_handling` Forcing or dropping trailing slashes on request paths (for example, `example.com/page/` vs. `example.com/page`) is often something that developers wish to control for cosmetic reasons. Additionally, it can impact SEO because search engines often treat URLs with and without trailing slashes as different, separate pages. This distinction can lead to duplicate content issues, indexing problems, and overall confusion about the correct canonical version of a page. `html_handling` configuration determines the redirects and rewrites of requests for HTML content.
It is used to specify the pattern for canonical URLs, thus where Cloudflare serves HTML content from, and additionally, where Cloudflare redirects non-canonical URLs to. #### 2. `not_found_handling` In the event a request does not match a static asset, and there is no Worker script (or that Worker script calls `fetch()` on [the assets binding](/workers/static-assets/binding/)), `not_found_handling` determines how Cloudflare will construct a response. It can be used to serve single-page applications (SPAs), or to serve custom 404 HTML pages. If creating a SPA, place an `/index.html` in your asset directory. When `not_found_handling` is configured to `"single-page-application"`, this page will be served with a 200 response. If you have custom 404 HTML pages, and configure `not_found_handling` to `"404-page"`, Cloudflare will recursively navigate up by pathname segment to serve the nearest `404.html` file. For example, you can have a `/404.html` and `/blog/404.html` file, and Cloudflare will serve the `/blog/404.html` file for requests to `/blog/not-found` and the `/404.html` file for requests to `/foo/bar`. ### Default configuration #### 1. `html_handling` `"auto-trailing-slash"` is the default mode if `html_handling` is not explicitly specified. Take the following directory structure: <FileTree> - file.html - folder - index.html </FileTree> Based on the incoming requests, the following assets would be served: | Incoming Request | Response | Asset Served | | ------------------ | --------------- | ------------------ | | /file | 200 | /file.html served | | /file.html | 307 to /file | - | | /file/ | 307 to /file | - | | /file/index | 307 to /file | - | | /file/index.html | 307 to /file | - | | /folder | 307 to /folder/ | - | | /folder.html | 307 to /folder/ | - | | /folder/ | 200 | /folder/index.html | | /folder/index | 307 to /folder/ | - | | /folder/index.html | 307 to /folder/ | - | #### 2. 
`not_found_handling` `"none"` is the default mode if `not_found_handling` is not explicitly specified. For all non-matching requests, Cloudflare will return a null-body 404-status response. ``` /not-found -> 404 /foo/path/doesnt/exist -> 404 ``` ### Alternate configuration options Alternate configuration options are outlined on this page and can be specified in your project's [Wrangler configuration file](/workers/wrangler/configuration/#assets). If you're deploying using a [framework](/workers/frameworks/), these options will be defined by the framework provider. Example Wrangler configuration file: <WranglerConfig> ```toml title="wrangler.toml" assets = { directory = "./public", binding = "ASSETS", html_handling = "force-trailing-slash", not_found_handling = "404-page" } ``` </WranglerConfig> #### `html_handling = "auto-trailing-slash" | "force-trailing-slash" | "drop-trailing-slash" | "none"` Take the following directory structure: <FileTree> - file.html - folder - index.html </FileTree> **`html_handling: "auto-trailing-slash"`** Based on the incoming requests, the following assets would be served: | Incoming Request | Response | Asset Served | | ------------------ | --------------- | ------------------ | | /file | 200 | /file.html | | /file.html | 307 to /file | - | | /file/ | 307 to /file | - | | /file/index | 307 to /file | - | | /file/index.html | 307 to /file | - | | /folder | 307 to /folder/ | - | | /folder.html | 307 to /folder/ | - | | /folder/ | 200 | /folder/index.html | | /folder/index | 307 to /folder/ | - | | /folder/index.html | 307 to /folder/ | - | **`html_handling: "force-trailing-slash"`** Based on the incoming requests, the following assets would be served: | Incoming Request | Response | Asset Served | | ------------------ | --------------- | ------------------ | | /file | 307 to /file/ | - | | /file.html | 307 to /file/ | - | | /file/ | 200 | /file.html | | /file/index | 307 to /file/ | - | | /file/index.html | 307 to /file/ | - | | /folder |
307 to /folder/ | - | | /folder.html | 307 to /folder/ | - | | /folder/ | 200 | /folder/index.html | | /folder/index | 307 to /folder/ | - | | /folder/index.html | 307 to /folder/ | - | **`html_handling: "drop-trailing-slash"`** Based on the incoming requests, the following assets would be served: | Incoming Request | Response | Asset Served | | ------------------ | -------------- | ------------------ | | /file | 200 | /file.html | | /file.html | 307 to /file | - | | /file/ | 307 to /file | - | | /file/index | 307 to /file | - | | /file/index.html | 307 to /file | - | | /folder | 200 | /folder/index.html | | /folder.html | 307 to /folder | - | | /folder/ | 307 to /folder | - | | /folder/index | 307 to /folder | - | | /folder/index.html | 307 to /folder | - | **`html_handling: "none"`** Based on the incoming requests, the following assets would be served: | Incoming Request | Response | Asset Served | | ------------------ | ------------------------------- | ------------------------------- | | /file | Depends on `not_found_handling` | Depends on `not_found_handling` | | /file.html | 200 | /file.html | | /file/ | Depends on `not_found_handling` | Depends on `not_found_handling` | | /file/index | Depends on `not_found_handling` | Depends on `not_found_handling` | | /file/index.html | Depends on `not_found_handling` | Depends on `not_found_handling` | | /folder | Depends on `not_found_handling` | Depends on `not_found_handling` | | /folder.html | Depends on `not_found_handling` | Depends on `not_found_handling` | | /folder/ | Depends on `not_found_handling` | Depends on `not_found_handling` | | /folder/index | Depends on `not_found_handling` | Depends on `not_found_handling` | | /folder/index.html | 200 | /folder/index.html | #### `not_found_handling = "404-page" | "single-page-application" | "none"` Take the following directory structure: <FileTree> - 404.html - index.html - folder - 404.html </FileTree> **`not_found_handling: "none"`** ``` /not-found -> 404 
/folder/doesnt/exist -> 404 ``` **`not_found_handling: "404-page"`** ``` /not-found -> 404 /404.html /folder/doesnt/exist -> 404 /folder/404.html ``` **`not_found_handling: "single-page-application"`** ``` /not-found -> 200 /index.html /folder/doesnt/exist -> 200 /index.html ``` ## Serving assets from a custom path :::note This feature requires Wrangler v3.98.0 or later. ::: Like with any other Worker, [you can configure a Worker with assets to run on a path of your domain](/workers/configuration/routing/routes/). Assets defined for a Worker must be nested in a directory structure that mirrors the desired path. For example, to serve assets from `example.com/blog/*`, create a `blog` directory in your asset directory. {/* prettier-ignore */} <FileTree> - dist - blog - index.html - posts - post1.html - post2.html </FileTree> With a [Wrangler configuration file](/workers/wrangler/configuration/) like so: <WranglerConfig> ```toml name = "assets-on-a-path-example" main = "src/index.js" route = "example.com/blog/*" [assets] directory = "dist" ``` </WranglerConfig> In this example, requests to `example.com/blog/` will serve the `index.html` file, and requests to `example.com/blog/posts/post1` will serve the `post1.html` file. If you have a file outside the configured path, it will not be served. For example, if you have a `home.html` file in the root of your asset directory, it will not be served when requesting `example.com/blog/home`. However, if needed, these files can still be manually fetched over [the binding](/workers/static-assets/binding/#binding). --- # Cache URL: https://developers.cloudflare.com/workers/runtime-apis/cache/ ## Background The [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache) allows fine-grained control of reading and writing from the [Cloudflare global network](https://www.cloudflare.com/network/) cache. The Cache API is available globally but the contents of the cache do not replicate outside of the originating data center.
A `GET /users` response can be cached in the originating data center, but will not exist in another data center unless it has been explicitly created. :::caution[Tiered caching] The `cache.put` method is not compatible with tiered caching. Refer to [Cache API](/workers/reference/how-the-cache-works/#cache-api) for more information. To perform tiered caching, use the [fetch API](/workers/reference/how-the-cache-works/#interact-with-the-cloudflare-cache). ::: Workers deployed to custom domains have access to functional `cache` operations. So do [Pages functions](/pages/functions/), whether attached to custom domains or `*.pages.dev` domains. However, any Cache API operations in the Cloudflare Workers dashboard editor and [Playground](/workers/playground/) previews will have no impact. For Workers fronted by [Cloudflare Access](https://www.cloudflare.com/teams/access/), the Cache API is not currently available. :::note This individualized zone cache object differs from Cloudflare’s Global CDN. For details, refer to [How the cache works](/workers/reference/how-the-cache-works/). ::: *** ## Accessing Cache The `caches.default` API is strongly influenced by the web browser’s Cache API, but there are some important differences. For instance, the Cloudflare Workers runtime exposes a single global cache object. ```js let cache = caches.default; await cache.match(request); ``` You may create and manage additional Cache instances via the [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) method. ```js let myCache = await caches.open('custom:cache'); await myCache.match(request); ``` *** ## Headers Our implementation of the Cache API respects the following HTTP headers on the response passed to `put()`: * `Cache-Control` * Controls caching directives. This is consistent with [Cloudflare Cache-Control Directives](/cache/concepts/cache-control#cache-control-directives).
Refer to [Edge TTL](/cache/how-to/configure-cache-status-code#edge-ttl) for a list of HTTP response codes and their TTL when `Cache-Control` directives are not present. * `Cache-Tag` * Allows resource purging by tag(s) later (Enterprise only). * `ETag` * Allows `cache.match()` to evaluate conditional requests with `If-None-Match`. * `Expires` * A string that specifies when the resource becomes invalid. * `Last-Modified` * Allows `cache.match()` to evaluate conditional requests with `If-Modified-Since`. This differs from the web browser Cache API, which does not honor any headers on the request or response. :::note Responses with `Set-Cookie` headers are never cached, because this sometimes indicates that the response contains unique data. To store a response with a `Set-Cookie` header, either delete that header or set `Cache-Control: private=Set-Cookie` on the response before calling `cache.put()`. The `Cache-Control: private=Set-Cookie` directive stores the response without the `Set-Cookie` header. ::: *** ## Methods ### Put ```js cache.put(request, response); ``` * <code>put(request, response)</code> : Promise * Attempts to add a response to the cache, using the given request as the key. Returns a promise that resolves to `undefined` regardless of whether the cache successfully stored the response. :::note The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods. ::: #### Parameters * `request` string | Request * Either a string or a [`Request`](/workers/runtime-apis/request/) object to serve as the key. If a string is passed, it is interpreted as the URL for a new Request object. * `response` Response * A [`Response`](/workers/runtime-apis/response/) object to store under the given key. #### Invalid parameters `cache.put` will throw an error if: * The `request` passed has a method other than `GET`.
* The `response` passed has a `status` of [`206 Partial Content`](https://www.webfx.com/web-development/glossary/http-status-codes/what-is-a-206-status-code/). * The `response` passed contains the header `Vary: *` (that is, the value of the `Vary` header is an asterisk). Refer to the [Cache API specification](https://w3c.github.io/ServiceWorker/#cache-put) for more information. #### Errors `cache.put` returns a `413` error if the response's `Cache-Control` header instructs not to cache it, or if the response is too large. ### `Match` ```js cache.match(request, options); ``` * <code>match(request, options)</code> : Promise`<Response | undefined>` * Returns a promise wrapping the response object keyed to that request. :::note The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods. ::: #### Parameters * `request` string | Request * The string or [`Request`](/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object. * `options` * Can contain one possible property: `ignoreMethod` (Boolean). When `true`, the request is considered to be a `GET` request regardless of its actual value. Unlike the browser Cache API, Cloudflare Workers do not support the `ignoreSearch` or `ignoreVary` options on `match()`. You can accomplish this behavior by removing query strings or HTTP headers at `put()` time. Our implementation of the Cache API respects the following HTTP headers on the request passed to `match()`: * `Range` * Results in a `206` response if a matching response with a `Content-Length` header is found. Your Cloudflare cache always respects range requests, even if an `Accept-Ranges` header is on the response. * `If-Modified-Since` * Results in a `304` response if a matching response is found with a `Last-Modified` header whose value is not after the time specified in `If-Modified-Since`.
* `If-None-Match` * Results in a `304` response if a matching response is found with an `ETag` header with a value that matches a value in `If-None-Match`. * `cache.match()` * Never sends a subrequest to the origin. If no matching response is found in cache, the promise that `cache.match()` returns is fulfilled with `undefined`. #### Errors `cache.match` generates a `504` error response when the requested content is missing or expired. The Cache API does not expose this `504` directly to the Worker script, instead returning `undefined`. Nevertheless, the underlying `504` is still visible in Cloudflare Logs. If you use Cloudflare Logs, you may see these `504` responses with the `RequestSource` of `edgeWorkerCacheAPI`. Again, these are expected if the cached asset was missing or expired. Note that `edgeWorkerCacheAPI` requests are already filtered out in other views, such as Cache Analytics. To filter out these requests or to filter requests by end users of your website only, refer to [Filter end users](/analytics/graphql-api/features/filtering/#filter-end-users). ### `Delete` ```js cache.delete(request, options); ``` * <code>delete(request, options)</code> : Promise`<boolean>` Deletes the `Response` object from the cache and returns a `Promise` for a Boolean response: * `true`: The response was cached but is now deleted. * `false`: The response was not in the cache at the time of deletion. :::caution[Global purges] The `cache.delete` method only purges content from the cache in the data center in which the Worker was invoked. For global purges, refer to [Purging assets stored with the Cache API](/workers/reference/how-the-cache-works/#purge-assets-stored-with-the-cache-api). ::: #### Parameters * `request` string | Request * The string or [`Request`](/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object. * `options` object * Can contain one possible property: `ignoreMethod` (Boolean).
Consider the request method a GET regardless of its actual value. *** ## Related resources * [How the cache works](/workers/reference/how-the-cache-works/) * [Example: Cache using `fetch()`](/workers/examples/cache-using-fetch/) * [Example: using the Cache API](/workers/examples/cache-api/) * [Example: caching POST requests](/workers/examples/cache-post-request/) --- # Console URL: https://developers.cloudflare.com/workers/runtime-apis/console/ The `console` object provides a set of methods to help you emit logs, warnings, and debug code. All standard [methods of the `console` API](https://developer.mozilla.org/en-US/docs/Web/API/console) are present on the `console` object in Workers. However, some methods are no ops — they can be called, and do not emit an error, but do not do anything. This ensures compatibility with libraries which may use these APIs. The table below enumerates each method, and the extent to which it is supported in Workers. All methods noted as "✅ supported" have the following behavior: * They will be written to the console in local dev (`npx wrangler@latest dev`) * They will appear in live logs, when tailing logs in the dashboard or running [`wrangler tail`](https://developers.cloudflare.com/workers/observability/log-from-workers/#use-wrangler-tail) * They will create entries in the `logs` field of [Tail Worker](https://developers.cloudflare.com/workers/observability/tail-workers/) events and [Workers Trace Events](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/), which can be pushed to a destination of your choice via [Logpush](https://developers.cloudflare.com/workers/observability/logpush/). 
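Because unsupported methods are safe no ops rather than errors, code that mixes fully supported console methods with partially supported ones runs unchanged in Workers. A minimal sketch (the `logDiagnostics` helper is hypothetical):

```js
// Hypothetical helper: mixes console methods with different support levels.
function logDiagnostics(label, items) {
  console.group(label); // partial support: may do nothing in production
  console.count(label); // partial support
  console.groupEnd(); // no op in Workers, but safe to call
  console.log(`${label}: ${items.length} item(s)`); // fully supported
  return items.length;
}

logDiagnostics("fetch-events", ["a", "b"]);
```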
All methods noted as "🟡 partial support" have the following behavior: * In both production and local development the method can be safely called, but will do nothing (no op) * In the [Workers Playground](https://workers.cloudflare.com/playground), Quick Editor in the Workers dashboard, and remote preview mode (`wrangler dev --remote`) calling the method will behave as expected, print to the console, etc. Refer to [Log from Workers](https://developers.cloudflare.com/workers/observability/log-from-workers/) for more on debugging and adding logs to Workers. | Method | Behavior | | -------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- | | [`console.debug()`](https://developer.mozilla.org/en-US/docs/Web/API/console/debug_static) | ✅ supported | | [`console.error()`](https://developer.mozilla.org/en-US/docs/Web/API/console/error_static) | ✅ supported | | [`console.info()`](https://developer.mozilla.org/en-US/docs/Web/API/console/info_static) | ✅ supported | | [`console.log()`](https://developer.mozilla.org/en-US/docs/Web/API/console/log_static) | ✅ supported | | [`console.warn()`](https://developer.mozilla.org/en-US/docs/Web/API/console/warn_static) | ✅ supported | | [`console.clear()`](https://developer.mozilla.org/en-US/docs/Web/API/console/clear_static) | 🟡 partial support | | [`console.count()`](https://developer.mozilla.org/en-US/docs/Web/API/console/count_static) | 🟡 partial support | | [`console.group()`](https://developer.mozilla.org/en-US/docs/Web/API/console/group_static) | 🟡 partial support | | [`console.table()`](https://developer.mozilla.org/en-US/docs/Web/API/console/table_static) | 🟡 partial support | | [`console.trace()`](https://developer.mozilla.org/en-US/docs/Web/API/console/trace_static) | 🟡 partial support | | 
[`console.assert()`](https://developer.mozilla.org/en-US/docs/Web/API/console/assert_static) | ⚪ no op | | [`console.countReset()`](https://developer.mozilla.org/en-US/docs/Web/API/console/countreset_static) | ⚪ no op | | [`console.dir()`](https://developer.mozilla.org/en-US/docs/Web/API/console/dir_static) | ⚪ no op | | [`console.dirxml()`](https://developer.mozilla.org/en-US/docs/Web/API/console/dirxml_static) | ⚪ no op | | [`console.groupCollapsed()`](https://developer.mozilla.org/en-US/docs/Web/API/console/groupcollapsed_static) | ⚪ no op | | [`console.groupEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/groupend_static) | ⚪ no op | | [`console.profile()`](https://developer.mozilla.org/en-US/docs/Web/API/console/profile_static) | ⚪ no op | | [`console.profileEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/profileend_static) | ⚪ no op | | [`console.time()`](https://developer.mozilla.org/en-US/docs/Web/API/console/time_static) | ⚪ no op | | [`console.timeEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeend_static) | ⚪ no op | | [`console.timeLog()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timelog_static) | ⚪ no op | | [`console.timeStamp()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timestamp_static) | ⚪ no op | | [`console.createTask()`](https://developer.chrome.com/blog/devtools-modern-web-debugging/#linked-stack-traces) | 🔴 Will throw an exception in production, but works in local dev, Quick Editor, and remote preview | --- # Context (ctx) URL: https://developers.cloudflare.com/workers/runtime-apis/context/ The Context API provides methods to manage the lifecycle of your Worker or Durable Object. Context is exposed in the following places: * As the third parameter in all [handlers](/workers/runtime-apis/handlers/), including the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/).
(`fetch(request, env, ctx)`) * As a class property of the [`WorkerEntrypoint` class](/workers/runtime-apis/bindings/service-bindings/rpc) ## `waitUntil` `ctx.waitUntil()` extends the lifetime of your Worker, allowing you to perform work that does not block the response from being returned and that may continue after the response has been sent. It accepts a `Promise`, which the Workers runtime will continue executing, even after a response has been returned by the Worker's [handler](/workers/runtime-apis/handlers/). `waitUntil` is commonly used to: * Fire off events to external analytics providers (when you use [Workers Analytics Engine](/analytics/analytics-engine/), you do not need `waitUntil`) * Put items into cache using the [Cache API](/workers/runtime-apis/cache/) :::note[Alternatives to waitUntil] If you are using `waitUntil()` to emit logs or exceptions, we recommend using [Tail Workers](/workers/observability/logs/tail-workers/) instead. Even if your Worker throws an uncaught exception, the Tail Worker will execute, ensuring that you can emit logs or exceptions regardless of the Worker's invocation status. [Cloudflare Queues](/queues/) is purpose-built for performing work out-of-band, without blocking the response back to the client. ::: You can call `waitUntil()` multiple times. Similar to `Promise.allSettled`, even if a promise passed to one `waitUntil` call is rejected, promises passed to other `waitUntil()` calls will still continue to execute. For example: ```js export default { async fetch(request, env, ctx) { // Forward / proxy original request let res = await fetch(request); // Add custom header(s) res = new Response(res.body, res); res.headers.set('x-foo', 'bar'); // Cache the response // NOTE: Does NOT block / wait ctx.waitUntil(caches.default.put(request, res.clone())); // Done return res; }, }; ``` ## `passThroughOnException` :::caution[Reuse of body] The Workers Runtime uses streaming for request and response bodies.
It does not buffer the body. Hence, if an exception occurs after the body has been consumed, `passThroughOnException()` cannot send the body again. If this causes issues, we recommend cloning the request body and handling exceptions in code. This will protect against uncaught code exceptions. However, some exception types, such as exceeding CPU or memory limits, will not be mitigated. ::: The `passThroughOnException` method allows a Worker to [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed) and pass a request through to an origin server when a Worker throws an unhandled exception. This can be useful when using Workers as a layer in front of an existing service, allowing the service behind the Worker to handle any unexpected error cases that arise in your Worker. ```js export default { async fetch(request, env, ctx) { // Proxy to origin on unhandled/uncaught exceptions ctx.passThroughOnException(); throw new Error('Oops'); }, }; ``` --- # Encoding URL: https://developers.cloudflare.com/workers/runtime-apis/encoding/ ## TextEncoder ### Background The `TextEncoder` takes a stream of code points as input and emits a stream of bytes. Encoding types passed to the constructor are ignored and a UTF-8 `TextEncoder` is created. [`TextEncoder()`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder/TextEncoder) returns a newly constructed `TextEncoder` that generates a byte stream with UTF-8 encoding. `TextEncoder` takes no parameters and throws no exceptions. ### Constructor ```js let encoder = new TextEncoder(); ``` ### Properties * `encoder.encoding` DOMString read-only * The name of the encoder as a string describing the method the `TextEncoder` uses (always `utf-8`). ### Methods * <code>encode(input)</code> : Uint8Array * Encodes a string input (`input` is a `USVString`). *** ## TextDecoder ### Background The `TextDecoder` interface represents a UTF-8 decoder.
Decoders take a stream of bytes as input and emit a stream of code points. [`TextDecoder()`](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/TextDecoder) returns a newly constructed `TextDecoder` that generates a code-point stream. ### Constructor ```js let decoder = new TextDecoder(); ``` ### Properties * `decoder.encoding` DOMString read-only * The name of the decoder that describes the method the `TextDecoder` uses. * `decoder.fatal` boolean read-only * Indicates if the error mode is fatal. * `decoder.ignoreBOM` boolean read-only * Indicates if the byte-order marker is ignored. ### Methods * `decode()` : DOMString * Decodes using the method specified in the `TextDecoder` object. Learn more at [MDN’s `TextDecoder` documentation](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/decode). --- # EventSource URL: https://developers.cloudflare.com/workers/runtime-apis/eventsource/ ## Background The [`EventSource`](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) interface is a server-sent events API that allows a server to push events to a client. The `EventSource` object is used to receive server-sent events. It connects to a server over HTTP and receives events in a text-based format. ### Constructor ```js let eventSource = new EventSource(url, options); ``` * `url` USVString - The URL to which to connect. * `options` EventSourceInit - An optional dictionary containing any optional settings. By default, the `EventSource` will use the global `fetch()` function under the covers to make requests. If you need to use a different fetch implementation as provided by a Cloudflare Workers binding, you can pass the `fetcher` option: ```js export default { async fetch(req, env) { let eventSource = new EventSource(url, { fetcher: env.MYFETCHER }); // ... } }; ``` Note that the `fetcher` option is a Cloudflare Workers-specific extension. ### Properties * `eventSource.url` USVString read-only * The URL of the event source.
* `eventSource.readyState` number read-only * The state of the connection. * `eventSource.withCredentials` Boolean read-only * A Boolean indicating whether the `EventSource` object was instantiated with cross-origin (CORS) credentials set (`true`), or not (`false`). ### Methods * `eventSource.close()` * Closes the connection. * `eventSource.onopen` * An event handler called when a connection is opened. * `eventSource.onmessage` * An event handler called when a message is received. * `eventSource.onerror` * An event handler called when an error occurs. ### Events * `message` * Fired when a message is received. * `open` * Fired when the connection is opened. * `error` * Fired when an error occurs. ### Class Methods * <code>EventSource.from(readableStream)</code> : EventSource * This is a Cloudflare Workers-specific extension that creates a new `EventSource` object from an existing `ReadableStream`. Such an instance does not initiate a new connection but instead attaches to the provided stream. --- # Fetch URL: https://developers.cloudflare.com/workers/runtime-apis/fetch/ import { TabItem, Tabs } from "~/components" The [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) provides an interface for asynchronously fetching resources via HTTP requests inside of a Worker. :::note Asynchronous tasks such as `fetch` must be executed within a [handler](/workers/runtime-apis/handlers/). If you try to call `fetch()` within [global scope](https://developer.mozilla.org/en-US/docs/Glossary/Global_scope), your Worker will throw an error. Learn more about [the Request context](/workers/runtime-apis/request/#the-request-context). ::: :::caution[Worker to Worker] Worker-to-Worker `fetch` requests are possible with [Service bindings](/workers/runtime-apis/bindings/service-bindings/).
::: ## Syntax <Tabs> <TabItem label="Module Worker" icon="seti:javascript"> ```js null {3-7} export default { async scheduled(event, env, ctx) { return await fetch("https://example.com", { headers: { "X-Source": "Cloudflare-Workers", }, }); }, }; ``` </TabItem> <TabItem label="Service Worker" icon="seti:javascript"> ```js null {8} addEventListener('fetch', event => { // NOTE: can’t use fetch here, as we’re not in an async scope yet event.respondWith(eventHandler(event)); }); async function eventHandler(event) { // fetch can be awaited here since `event.respondWith()` waits for the Promise it receives to settle const resp = await fetch(event.request); return resp; } ``` </TabItem> </Tabs> * <code>fetch(resource, options optional)</code> : Promise`<Response>` * `fetch` returns a promise that resolves to a `Response`. ### Parameters * [`resource`](https://developer.mozilla.org/en-US/docs/Web/API/fetch#resource) Request | string | URL * `options` options * An object that defines the content and behavior of the request. * `cache` `undefined | 'no-store'` optional * The cache mode of the request. Only `cache: 'no-store'` is supported. Any other value will result in a `TypeError` with the message `Unsupported cache mode: <attempted-cache-mode>`. * For all requests this forwards the `Pragma: no-cache` and `Cache-Control: no-cache` headers to the origin. * For requests to origins not hosted by Cloudflare, `no-store` bypasses the use of Cloudflare's caches. *** ## How the `Accept-Encoding` header is handled When making a subrequest with the `fetch()` API, you can specify which forms of compression you would prefer the server respond with (if the server supports them) by including the [`Accept-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding) header. Workers supports both the gzip and brotli compression algorithms.
Usually it is not necessary to specify `Accept-Encoding` or `Content-Encoding` headers in the Workers Runtime production environment – brotli or gzip compression is automatically requested when fetching from an origin and applied to the response when returning data to the client, depending on the capabilities of the client and origin server. To support requesting brotli from the origin, you must enable the [`brotli_content_encoding`](/workers/configuration/compatibility-flags/#brotli-content-encoding-support) compatibility flag in your Worker. Soon, this compatibility flag will be enabled by default for all Workers past an upcoming compatibility date. ### Passthrough behavior One scenario where the `Accept-Encoding` header is useful is passing compressed data through from a server to the client: `Accept-Encoding` allows the Worker to receive the compressed data stream directly from the server, without it being decompressed beforehand. As long as you do not read the body of the compressed response prior to returning it to the client and keep the `Content-Encoding` header intact, it will "pass through" without being decompressed and then recompressed again. This can be helpful when using Workers in front of origin servers or when fetching compressed media assets, to ensure that the same compression used by the origin server is used in the response that your Worker returns. In addition to a change in the content encoding, recompression is also needed when a response uses an encoding not supported by the client. As an example, when a Worker requests either brotli or gzip as the encoding but the client only supports gzip, recompression will still be needed if the server returns brotli-encoded data to the Worker (and will be applied automatically). Note that this behavior may also vary based on the [compression rules](/rules/compression-rules/), which can be used to configure what compression should be applied for different types of data on the server side.
```typescript export default { async fetch(request) { // Accept brotli or gzip compression const headers = new Headers({ 'Accept-Encoding': "br, gzip" }); let response = await fetch("https://developers.cloudflare.com", {method: "GET", headers}); // As long as the original response body is returned and the Content-Encoding header is // preserved, the same encoded data will be returned without needing to be compressed again. return new Response(response.body, { status: response.status, statusText: response.statusText, headers: response.headers, }); } } ``` ## Related resources * [Example: use `fetch` to respond with another site](/workers/examples/respond-with-another-site/) * [Example: Fetch HTML](/workers/examples/fetch-html/) * [Example: Fetch JSON](/workers/examples/fetch-json/) * [Example: cache using Fetch](/workers/examples/cache-using-fetch/) * Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience. --- # Headers URL: https://developers.cloudflare.com/workers/runtime-apis/headers/ ## Background All HTTP request and response headers are available through the [Headers API](https://developer.mozilla.org/en-US/docs/Web/API/Headers). When a header name possesses multiple values, those values will be concatenated as a single, comma-delimited string value. This means that `Headers.get` will always return a string or a `null` value. This applies to all header names except for `Set-Cookie`, which requires `Headers.getAll`. This is documented below in [Differences](#differences). 
```js
let headers = new Headers();

headers.get('x-foo'); //=> null

headers.set('x-foo', '123');
headers.get('x-foo'); //=> "123"

headers.set('x-foo', 'hello');
headers.get('x-foo'); //=> "hello"

headers.append('x-foo', 'world');
headers.get('x-foo'); //=> "hello, world"
```

## Differences

* Although the `Headers.getAll` method has been made obsolete by the Fetch standard, Cloudflare still offers it, but only for use with the `Set-Cookie` header. This is because cookies will often contain date strings, which include commas, making it more difficult to parse multiple values in a `Set-Cookie` header. Any attempt to use `Headers.getAll` with other header names will throw an error. A brief history of `Headers.getAll` is available in this [GitHub issue](https://github.com/whatwg/fetch/issues/973).

* Due to [RFC 6265](https://www.rfc-editor.org/rfc/rfc6265) prohibiting folding multiple `Set-Cookie` headers into a single header, the `Headers.append` method will set multiple `Set-Cookie` response headers instead of appending the value onto the existing header.

```js
const headers = new Headers();
headers.append("Set-Cookie", "cookie1=value_for_cookie_1; Path=/; HttpOnly;");
headers.append("Set-Cookie", "cookie2=value_for_cookie_2; Path=/; HttpOnly;");
console.log(headers.getAll("Set-Cookie"));
// Array(2) [ cookie1=value_for_cookie_1; Path=/; HttpOnly;, cookie2=value_for_cookie_2; Path=/; HttpOnly; ]
```

* In Cloudflare Workers, the `Headers.get` method returns a [`USVString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) instead of the [`ByteString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) specified by the spec. For most scenarios, this should have no noticeable effect.
To compare the differences between these two string classes, refer to this [Playground example](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbMutvvsCMALAJx-cAzAHZeANkG8AHAAZOU7t2EBWAEy9eqsXNWdOALg5HjbHv34jxk2fMUr1m7Z12cAsACgAwuioQApr7YACJQAM4w6KFQ0D76JBhYeATEJFRwwH4MAERQNH4AHgB0AFahWaSoUGAB6Zk5eUWlWR7evgEQ2AAqdDB+cXAwMGBQAMYEUD7IxXAAbnChIwiwEADUwOi44H4eHgURSCS4fqhw4BAkAN7uAJDzdFQj8X4QIwAWABQIfgCOIH6hEAAlJcbtdqucxucGCQsoBeDcAXHtZUHgkggCCoKSeAgkaFUPwAdxInQKEAAog8Nn4EO9AYUAiNKe9IYDkc8SPTKbgsVCSABlCBLKgAc0KqAQ6GAnleiG8R3ehQVaIx3JZoIZVFC6GqhTA6CF7yynVeYRIJrgJAAqryAGr8wVCkj46KvEjmyH6LIAGhIzLVPk12t1+sNxtCprD5oAQnR-Hbcg6nRAXW7sT5LZ0AGLYKQe70co5cgiq67XZDIEgACT8cCOCAjXxIoRAg0iflwJAg6EdmAA1iQfGA6I7nSRo7GBfHQt6yGj+yAEKCy6bgEM-BlfOM0yBQv9LTa48LQoUiaHUiSSMM8cOwGASDBBec4Ivy-jEFR466KLOk2FCqzzq81a1mGuIEpWQFUqE7wXDC+ZttgkJZHEcGFucAC+xbXF8EDzlQZ6EgASv8EQan4BpSn4Ix9pQ5xJn4JAAAatAGfgMa6NAdoBJBEeE-r0YBNaQR2XY7vRdFzhAMCzgyK6IGE-qFF6lwkAJwEkBhNxoe4aEeCYelGGYAiWBI0hyAoShqBoWg6HoLQ+P4gQhLxUQxFQcQJDg+CEKQaQZNkGSEF5cDlPEVQ1H5WRkLqZDNF49ntF0PR9K6gzDJCExUFMmpUDs7gXFkwBwLkAD66ybNUSH1EcjRlDp7j6Q1rCGRYogmTY5n2FZTguMwHhAA). ## Cloudflare headers Cloudflare sets a number of its own custom headers on incoming requests and outgoing responses. While some may be used for its own tracking and bookkeeping, many of these can be useful to your own applications – or Workers – too. For a list of documented Cloudflare request headers, refer to [Cloudflare HTTP headers](/fundamentals/reference/http-headers/). ## Related resources * [Logging headers to console](/workers/examples/logging-headers/) - Review how to log headers in the console. * [Cloudflare HTTP headers](/fundamentals/reference/http-headers/) - Contains a list of specific headers that Cloudflare adds. 
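As a quick sketch of how a Worker can read these headers like any others, the following uses two documented Cloudflare request headers, `CF-Connecting-IP` and `CF-IPCountry` (the response format here is purely illustrative):

```js
// Minimal Worker that echoes two Cloudflare-set request headers.
// Header lookups through the Headers API are case-insensitive.
const worker = {
	async fetch(request) {
		const clientIP = request.headers.get("CF-Connecting-IP");
		const country = request.headers.get("CF-IPCountry");
		return new Response(`ip=${clientIP} country=${country}`);
	},
};

export default worker;
```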
--- # HTMLRewriter URL: https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/ import { Render } from "~/components" ## Background The `HTMLRewriter` class allows developers to build comprehensive and expressive HTML parsers inside of a Cloudflare Workers application. It can be thought of as a jQuery-like experience directly inside of your Workers application. Leaning on a powerful JavaScript API to parse and transform HTML, `HTMLRewriter` allows developers to build deeply functional applications. The `HTMLRewriter` class should be instantiated once in your Workers script, with a number of handlers attached using the `on` and `onDocument` functions. *** ## Constructor ```js new HTMLRewriter().on('*', new ElementHandler()).onDocument(new DocumentHandler()); ``` *** ## Global types Throughout the `HTMLRewriter` API, there are a few consistent types that many properties and methods use: * `Content` string | Response | ReadableStream * Content inserted in the output stream should be a string, [`Response`](/workers/runtime-apis/response/), or [`ReadableStream`](/workers/runtime-apis/streams/readablestream/). * `ContentOptions` Object * `{ html: Boolean }` Controls the way the HTMLRewriter treats inserted content. If the `html` boolean is set to true, content is treated as raw HTML. If the `html` boolean is set to false or not provided, content will be treated as text and proper HTML escaping will be applied to it. *** ## Handlers There are two handler types that can be used with `HTMLRewriter`: element handlers and document handlers. ### Element Handlers An element handler responds to any incoming element, when attached using the `.on` function of an `HTMLRewriter` instance. The element handler should respond to `element`, `comments`, and `text`. The example processes `div` elements with an `ElementHandler` class. 
```js class ElementHandler { element(element) { // An incoming element, such as `div` console.log(`Incoming element: ${element.tagName}`); } comments(comment) { // An incoming comment } text(text) { // An incoming piece of text } } async function handleRequest(req) { const res = await fetch(req); return new HTMLRewriter().on('div', new ElementHandler()).transform(res); } ``` ### Document Handlers A document handler represents the incoming HTML document. A number of functions can be defined on a document handler to query and manipulate a document’s `doctype`, `comments`, `text`, and `end`. Unlike an element handler, a document handler’s `doctype`, `comments`, `text`, and `end` functions are not scoped by a particular selector. A document handler's functions are called for all the content on the page including the content outside of the top-level HTML tag: ```js class DocumentHandler { doctype(doctype) { // An incoming doctype, such as <!DOCTYPE html> } comments(comment) { // An incoming comment } text(text) { // An incoming piece of text } end(end) { // The end of the document } } ``` #### Async Handlers All functions defined on both element and document handlers can return either `void` or a `Promise<void>`. Making your handler function `async` allows you to access external resources such as an API via fetch, Workers KV, Durable Objects, or the cache. ```js class UserElementHandler { async element(element) { let response = await fetch(new Request('/user')); // fill in user info using response } } async function handleRequest(req) { const res = await fetch(req); // run the user element handler via HTMLRewriter on a div with ID `user_info` return new HTMLRewriter().on('div#user_info', new UserElementHandler()).transform(res); } ``` ### Element The `element` argument, used only in element handlers, is a representation of a DOM element. 
A number of methods exist on an element to query and manipulate it:

#### Properties

* `tagName` string
  * The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element’s tag.
* `attributes` Iterator read-only
  * A `[name, value]` pair of the tag’s attributes.
* `removed` boolean
  * Indicates whether the element has been removed or replaced by one of the previous handlers.
* `namespaceURI` String
  * Represents the [namespace URI](https://infra.spec.whatwg.org/#namespaces) of an element.

#### Methods

* <code>getAttribute(name: string)</code> : string | null
  * Returns the value for a given attribute name on the element, or `null` if it is not found.
* <code>hasAttribute(name: string)</code> : boolean
  * Returns a boolean indicating whether an attribute exists on the element.
* <code>setAttribute(name: string, value: string)</code> : Element
  * Sets an attribute to a provided value, creating the attribute if it does not exist.
* <code>removeAttribute(name: string)</code> : Element
  * Removes the attribute.
* <code>before(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content before the element.

<Render file="content_and_contentoptions" />

* <code>after(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content right after the element.
* <code>prepend(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content right after the start tag of the element.
* <code>append(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content right before the end tag of the element.
* <code>replace(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Removes the element and inserts content in place of it.
* <code>setInnerContent(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Replaces content of the element.
* <code>remove()</code> : Element
  * Removes the element with all its content.
* <code>removeAndKeepContent()</code> : Element
  * Removes the start tag and end tag of the element but keeps its inner content intact.
* <code>onEndTag(handler: Function)</code> : void
  * Registers a handler that is invoked when the end tag of the element is reached.

### EndTag

The `endTag` argument, used only in handlers registered with `element.onEndTag`, is a limited representation of a DOM element.

#### Properties

* `name` string
  * The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element’s tag.

#### Methods

* <code>before(content: Content, contentOptions?: ContentOptions)</code> : EndTag
  * Inserts content right before the end tag.
* <code>after(content: Content, contentOptions?: ContentOptions)</code> : EndTag
  * Inserts content right after the end tag.

<Render file="content_and_contentoptions" />

* <code>remove()</code> : EndTag
  * Removes the element with all its content.

### Text chunks

Since Cloudflare performs zero-copy streaming parsing, text chunks are not the same thing as text nodes in the lexical tree. A lexical tree text node can be represented by multiple chunks, as they arrive over the wire from the origin. Consider the following markup: `<div>Hey. How are you?</div>`. It is possible that the Workers script will not receive the entire text node from the origin at once; instead, the `text` element handler will be invoked for each received part of the text node. For example, the handler might be invoked with `“Hey. How ”`, then `“are you?”`. When the last chunk arrives, the text’s `lastInTextNode` property will be set to `true`. Developers should make sure to concatenate these chunks together.

#### Properties

* `removed` boolean
  * Indicates whether the element has been removed or replaced by one of the previous handlers.
* `text` string read-only
  * The text content of the chunk. Could be empty if the chunk is the last chunk of the text node.
* `lastInTextNode` boolean read-only
  * Specifies whether the chunk is the last chunk of the text node.

#### Methods

* <code>before(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content before the element.

<Render file="content_and_contentoptions" />

* <code>after(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content right after the element.
* <code>replace(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Removes the element and inserts content in place of it.
* <code>remove()</code> : Element
  * Removes the element with all its content.

### Comments

The `comments` function on an element handler allows developers to query and manipulate HTML comment tags.

```js
class ElementHandler {
  comments(comment) {
    // An incoming comment element, such as <!-- My comment -->
  }
}
```

#### Properties

* `comment.removed` boolean
  * Indicates whether the element has been removed or replaced by one of the previous handlers.
* `comment.text` string
  * The text of the comment. This property can be assigned different values, to modify the comment’s text.

#### Methods

* <code>before(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content before the element.

<Render file="content_and_contentoptions" />

* <code>after(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Inserts content right after the element.
* <code>replace(content: Content, contentOptions?: ContentOptions)</code> : Element
  * Removes the element and inserts content in place of it.
* <code>remove()</code> : Element
  * Removes the element with all its content.

### Doctype

The `doctype` function on a document handler allows developers to query a document’s [doctype](https://developer.mozilla.org/en-US/docs/Glossary/Doctype).
```js
class DocumentHandler {
  doctype(doctype) {
    // An incoming doctype element, such as
    // <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
  }
}
```

#### Properties

* `doctype.name` string | null read-only
  * The doctype name.
* `doctype.publicId` string | null read-only
  * The quoted string in the doctype after the PUBLIC atom.
* `doctype.systemId` string | null read-only
  * The quoted string in the doctype after the SYSTEM atom or immediately after the `publicId`.

### End

The `end` function on a document handler allows developers to append content to the end of a document.

```js
class DocumentHandler {
  end(end) {
    // The end of the document
  }
}
```

#### Methods

* <code>append(content: Content, contentOptions?: ContentOptions)</code> : DocumentEnd
  * Inserts content after the end of the document.

<Render file="content_and_contentoptions" />

***

## Selectors

These are the selectors that can be used to attach handlers with the `.on` function of an `HTMLRewriter` instance:

* `*`
  * Any element.
* `E`
  * Any element of type E.
* `E:nth-child(n)`
  * An E element, the n-th child of its parent.
* `E:first-child`
  * An E element, first child of its parent.
* `E:nth-of-type(n)`
  * An E element, the n-th sibling of its type.
* `E:first-of-type`
  * An E element, first sibling of its type.
* `E:not(s)`
  * An E element that does not match the compound selector s.
* `E.warning`
  * An E element belonging to the class warning.
* `E#myid`
  * An E element with ID equal to myid.
* `E[foo]`
  * An E element with a foo attribute.
* `E[foo="bar"]`
  * An E element whose foo attribute value is exactly equal to bar.
* `E[foo="bar" i]`
  * An E element whose foo attribute value is exactly equal to any (ASCII-range) case-permutation of bar.
* `E[foo="bar" s]`
  * An E element whose foo attribute value is exactly and case-sensitively equal to bar.
* `E[foo~="bar"]`
  * An E element whose foo attribute value is a list of whitespace-separated values, one of which is exactly equal to bar.
* `E[foo^="bar"]` * An E element whose foo attribute value begins exactly with the string bar. * `E[foo$="bar"]` * An E element whose foo attribute value ends exactly with the string bar. * `E[foo*="bar"]` * An E element whose foo attribute value contains the substring bar. * `E[foo|="en"]` * An E element whose foo attribute value is a hyphen-separated list of values beginning with en. * `E F` * An F element descendant of an E element. * `E > F` * An F element child of an E element. *** ## Errors If a handler throws an exception, parsing is immediately halted, the transformed response body is errored with the thrown exception, and the untransformed response body is canceled (closed). If the transformed response body was already partially streamed back to the client, the client will see a truncated response. ```js async function handle(request) { let oldResponse = await fetch(request); let newResponse = new HTMLRewriter() .on('*', { element(element) { throw new Error('A really bad error.'); }, }) .transform(oldResponse); // At this point, an expression like `await newResponse.text()` // will throw `new Error("A really bad error.")`. // Thereafter, any use of `newResponse.body` will throw the same error, // and `oldResponse.body` will be closed. // Alternatively, this will produce a truncated response to the client: return newResponse; } ``` *** ## Related resources * [Introducing `HTMLRewriter`](https://blog.cloudflare.com/introducing-htmlrewriter/) * [Tutorial: Localize a Website](/pages/tutorials/localize-a-website/) * [Example: rewrite links](/workers/examples/rewrite-links/) --- # Runtime APIs URL: https://developers.cloudflare.com/workers/runtime-apis/ import { DirectoryListing } from "~/components"; The Workers runtime is designed to be [JavaScript standards compliant](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable. 
Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG](https://wintercg.org/) JavaScript runtimes.

[Workers runtime features](/workers/runtime-apis/) include [compatibility with a subset of Node.js APIs](/workers/runtime-apis/nodejs) and the ability to set a [compatibility date or compatibility flag](/workers/configuration/compatibility-dates/).

<DirectoryListing />

---

# Performance and timers

URL: https://developers.cloudflare.com/workers/runtime-apis/performance/

## Background

The Workers runtime supports a subset of the [`Performance` API](https://developer.mozilla.org/en-US/docs/Web/API/Performance), used to measure timing and performance, as well as timing of subrequests and other operations.

### `performance.now()`

The [`performance.now()` method](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) returns a timestamp in milliseconds, representing the time elapsed since `performance.timeOrigin`.

When Workers are deployed to Cloudflare, as a security measure to [mitigate against Spectre attacks](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading), APIs that return timers, including [`performance.now()`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) and [`Date.now()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now), only advance or increment after I/O occurs. Consider the following examples:

```typescript title="Time is frozen — start will have the exact same value as end."
const start = performance.now();
for (let i = 0; i < 1e6; i++) {
  // do expensive work
}
const end = performance.now();
const timing = end - start; // 0
```

```typescript title="Time advances, because a subrequest has occurred between start and end."
const start = performance.now(); const response = await fetch("https://developers.cloudflare.com/"); const end = performance.now(); const timing = end - start; // duration of the subrequest to developers.cloudflare.com ``` By wrapping a subrequest in calls to `performance.now()` or `Date.now()` APIs, you can measure the timing of a subrequest, fetching a key from KV, an object from R2, or any other form of I/O in your Worker. In local development, however, timers will increment regardless of whether I/O happens or not. This means that if you need to measure timing of a piece of code that is CPU intensive, that does not involve I/O, you can run your Worker locally, via [Wrangler](/workers/wrangler/), which uses the open-source Workers runtime, [workerd](https://github.com/cloudflare/workerd) — the same runtime that your Worker runs in when deployed to Cloudflare. ### `performance.timeOrigin` The [`performance.timeOrigin`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin) API is a read-only property that returns a baseline timestamp to base other measurements off of. In the Workers runtime, the `timeOrigin` property returns 0. --- # Request URL: https://developers.cloudflare.com/workers/runtime-apis/request/ import { Type, MetaInfo } from "~/components"; The [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request) interface represents an HTTP request and is part of the [Fetch API](/workers/runtime-apis/fetch/). ## Background The most common way you will encounter a `Request` object is as a property of an incoming request: ```js null {2} export default { async fetch(request, env, ctx) { return new Response('Hello World!'); }, }; ``` You may also want to construct a `Request` yourself when you need to modify a request object, because the incoming `request` parameter that you receive from the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) is immutable. 
```js
export default {
  async fetch(request, env, ctx) {
    const url = "https://example.com";
    const modifiedRequest = new Request(url, request);
    // ...
  },
};
```

The [`fetch() handler`](/workers/runtime-apis/handlers/fetch/) invokes the `Request` constructor. The [`RequestInit`](#options) and [`RequestInitCfProperties`](#the-cf-property-requestinitcfproperties) types defined below also describe the valid parameters that can be passed to the [`fetch() handler`](/workers/runtime-apis/handlers/fetch/).

***

## Constructor

```js
let request = new Request(input, options)
```

### Parameters

* `input` string | Request
  * Either a string that contains a URL, or an existing `Request` object.
* `options` options optional
  * Optional options object that contains settings to apply to the `Request`.

#### `options`

An object containing properties that you want to apply to the request.

* `cache` `undefined | 'no-store'` optional
  * The [cache mode](https://developer.mozilla.org/en-US/docs/Web/API/Request/cache) of the request. Only `cache: 'no-store'` is supported. Any other cache mode will result in a `TypeError` with the message `Unsupported cache mode: <attempted-cache-mode>`.
* `cf` RequestInitCfProperties optional
  * Cloudflare-specific properties that can be set on the `Request` that control how Cloudflare’s global network handles the request.
* `method` <Type text="string" /> <MetaInfo text="optional" />
  * The HTTP request method. The default is `GET`. In Workers, all [HTTP request methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods) are supported, except for [`CONNECT`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT).
* `headers` Headers optional
  * A [`Headers` object](https://developer.mozilla.org/en-US/docs/Web/API/Headers).
* `body` string | ReadableStream | FormData | URLSearchParams optional
  * The request body, if any.
  * Note that a request using the GET or HEAD method cannot have a body.
* `redirect` <Type text="string" /> <MetaInfo text="optional" /> * The redirect mode to use: `follow`, `error`, or `manual`. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`. #### The `cf` property (`RequestInitCfProperties`) An object containing Cloudflare-specific properties that can be set on the `Request` object. For example: ```js // Disable ScrapeShield for this request. fetch(event.request, { cf: { scrapeShield: false } }) ``` Invalid or incorrectly-named keys in the `cf` object will be silently ignored. Consider using TypeScript and [`@cloudflare/workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types) to ensure proper use of the `cf` object. * `apps` <Type text="boolean" /> <MetaInfo text="optional" /> * Whether [Cloudflare Apps](https://www.cloudflare.com/apps/) should be enabled for this request. Defaults to `true`. * `cacheEverything` <Type text="boolean" /> <MetaInfo text="optional" /> * Treats all content as static and caches all [file types](/cache/concepts/default-cache-behavior#default-cached-file-extensions) beyond the Cloudflare default cached content. Respects cache headers from the origin web server. This is equivalent to setting the Page Rule [**Cache Level** (to **Cache Everything**)](/rules/page-rules/reference/settings/). Defaults to `false`. This option applies to `GET` and `HEAD` request methods only. * `cacheKey` <Type text="string" /> <MetaInfo text="optional" /> * A request’s cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both. * `cacheTags` Array\<string> optional * This option appends additional [**Cache-Tag**](/cache/how-to/purge-cache/purge-by-tags/) headers to the response from the origin server. 
This allows for purges of cached content based on tags provided by the Worker, without modifications to the origin server. This is performed using the [**Purge by Tag**](/cache/how-to/purge-cache/purge-by-tags/#purge-using-cache-tags) feature, which is currently only available to Enterprise zones. If this option is used in a non-Enterprise zone, the additional headers will not be appended.

* `cacheTtl` <Type text="number" /> <MetaInfo text="optional" />
  * This option forces Cloudflare to cache the response for this request, regardless of what headers are seen on the response. This is equivalent to setting two Page Rules: [**Edge Cache TTL**](/cache/how-to/edge-browser-cache-ttl/) and [**Cache Level** (to **Cache Everything**)](/rules/page-rules/reference/settings/). The value must be zero or a positive number. A value of `0` indicates that the cache asset expires immediately. This option applies to `GET` and `HEAD` request methods only.
* `cacheTtlByStatus` `{ [key: string]: number }` optional
  * This option is a version of the `cacheTtl` feature which chooses a TTL based on the response’s status code. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time and override cache directives sent by the origin. For example: `{ "200-299": 86400, "404": 1, "500-599": 0 }`. The value can be any integer, including zero and negative integers. A value of `0` indicates that the cache asset expires immediately. Any negative value instructs Cloudflare not to cache at all. This option applies to `GET` and `HEAD` request methods only.
* `image` Object | null optional
  * Enables [Image Resizing](/images/transform-images/) for this request. The possible values are described in [Transform images via Workers](/images/transform-images/transform-via-workers/) documentation.
* `mirage` <Type text="boolean" /> <MetaInfo text="optional" /> * Whether [Mirage](https://www.cloudflare.com/website-optimization/mirage/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`. * `polish` <Type text="string" /> <MetaInfo text="optional" /> * Sets [Polish](https://blog.cloudflare.com/introducing-polish-automatic-image-optimizati/) mode. The possible values are `lossy`, `lossless` or `off`. * `resolveOverride` <Type text="string" /> <MetaInfo text="optional" /> * Directs the request to an alternate origin server by overriding the DNS lookup. The value of `resolveOverride` specifies an alternate hostname which will be used when determining the origin IP address, instead of using the hostname specified in the URL. The `Host` header of the request will still match what is in the URL. Thus, `resolveOverride` allows a request to be sent to a different server than the URL / `Host` header specifies. However, `resolveOverride` will only take effect if both the URL host and the host specified by `resolveOverride` are within your zone. If either specifies a host from a different zone / domain, then the option will be ignored for security reasons. If you need to direct a request to a host outside your zone (while keeping the `Host` header pointing within your zone), first create a CNAME record within your zone pointing to the outside host, and then set `resolveOverride` to point at the CNAME record. Note that, for security reasons, it is not possible to set the `Host` header to specify a host outside of your zone unless the request is actually being sent to that host. * `scrapeShield` <Type text="boolean" /> <MetaInfo text="optional" /> * Whether [ScrapeShield](https://blog.cloudflare.com/introducing-scrapeshield-discover-defend-dete/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`. 
* `webp` <Type text="boolean" /> <MetaInfo text="optional" /> * Enables or disables [WebP](https://blog.cloudflare.com/a-very-webp-new-year-from-cloudflare/) image format in [Polish](/images/polish/). *** ## Properties All properties of an incoming `Request` object (the request you receive from the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/)) are read-only. To modify the properties of an incoming request, create a new `Request` object and pass the options to modify to its [constructor](#constructor). * `body` ReadableStream read-only * Stream of the body contents. * `bodyUsed` Boolean read-only * Declares whether the body has been used in a response yet. * `cf` IncomingRequestCfProperties read-only * An object containing properties about the incoming request provided by Cloudflare’s global network. * This property is read-only (unless created from an existing `Request`). To modify its values, pass in the new values on the [`cf` key of the `init` options argument](/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties) when creating a new `Request` object. * `headers` Headers read-only * A [`Headers` object](https://developer.mozilla.org/en-US/docs/Web/API/Headers). * Compared to browsers, Cloudflare Workers imposes very few restrictions on what headers you are allowed to send. For example, a browser will not allow you to set the `Cookie` header, since the browser is responsible for handling cookies itself. Workers, however, has no special understanding of cookies, and treats the `Cookie` header like any other header. :::caution If the response is a redirect and the redirect mode is set to `follow` (see below), then all headers will be forwarded to the redirect destination, even if the destination is a different hostname or domain. This includes sensitive headers like `Cookie`, `Authorization`, or any application-specific headers. 
If this is not the behavior you want, you should set redirect mode to `manual` and implement your own redirect policy. Note that redirect mode defaults to `manual` for requests that originated from the Worker's client, so this warning only applies to `fetch()`es made by a Worker that are not proxying the original request. ::: * `method` string read-only * Contains the request’s method, for example, `GET`, `POST`, etc. * `redirect` string read-only * The redirect mode to use: `follow`, `error`, or `manual`. The `fetch` method will automatically follow redirects if the redirect mode is set to `follow`. If set to `manual`, the `3xx` redirect response will be returned to the caller as-is. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`. * `url` string read-only * Contains the URL of the request. ### `IncomingRequestCfProperties` In addition to the properties on the standard [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request) object, the `request.cf` object on an inbound `Request` contains information about the request provided by Cloudflare’s global network. All plans have access to: * `asn` Number * ASN of the incoming request, for example, `395747`. * `asOrganization` string * The organization which owns the ASN of the incoming request, for example, `Google Cloud`. * `botManagement` Object | null * Only set when using Cloudflare Bot Management. Object with the following properties: `score`, `verifiedBot`, `staticResource`, `ja3Hash`, `ja4`, and `detectionIds`. Refer to [Bot Management Variables](/bots/reference/bot-management-variables/) for more details. * `clientAcceptEncoding` string | null * If Cloudflare replaces the value of the `Accept-Encoding` header, the original value is stored in the `clientAcceptEncoding` property, for example, `"gzip, deflate, br"`. 
* `colo` string * The three-letter [`IATA`](https://en.wikipedia.org/wiki/IATA_airport_code) airport code of the data center that the request hit, for example, `"DFW"`. * `country` string | null * Country of the incoming request. The two-letter country code in the request. This is the same value as that provided in the `CF-IPCountry` header, for example, `"US"`. * `isEUCountry` string | null * If the country of the incoming request is in the EU, this will return `"1"`. Otherwise, this property will be omitted. * `httpProtocol` string * HTTP Protocol, for example, `"HTTP/2"`. * `requestPriority` string | null * The browser-requested prioritization information in the request object, for example, `"weight=192;exclusive=0;group=3;group-weight=127"`. * `tlsCipher` string * The cipher for the connection to Cloudflare, for example, `"AEAD-AES128-GCM-SHA256"`. * `tlsClientAuth` Object | null * Only set when using Cloudflare Access or API Shield (mTLS). Object with the following properties: `certFingerprintSHA1`, `certFingerprintSHA256`, `certIssuerDN`, `certIssuerDNLegacy`, `certIssuerDNRFC2253`, `certIssuerSKI`, `certIssuerSerial`, `certNotAfter`, `certNotBefore`, `certPresented`, `certRevoked`, `certSKI`, `certSerial`, `certSubjectDN`, `certSubjectDNLegacy`, `certSubjectDNRFC2253`, `certVerified`. * `tlsClientHelloLength` string * The length of the client hello message sent in a [TLS handshake](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). For example, `"508"`. Specifically, the length of the bytestring of the client hello. * `tlsClientRandom` string * The value of the 32-byte random value provided by the client in a [TLS handshake](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). Refer to [RFC 8446](https://datatracker.ietf.org/doc/html/rfc8446#section-4.1.2) for more details. * `tlsVersion` string * The TLS version of the connection to Cloudflare, for example, `TLSv1.3`. 
* `city` string | null * City of the incoming request, for example, `"Austin"`. * `continent` string | null * Continent of the incoming request, for example, `"NA"`. * `latitude` string | null * Latitude of the incoming request, for example, `"30.27130"`. * `longitude` string | null * Longitude of the incoming request, for example, `"-97.74260"`. * `postalCode` string | null * Postal code of the incoming request, for example, `"78701"`. * `metroCode` string | null * Metro code (DMA) of the incoming request, for example, `"635"`. * `region` string | null * If known, the [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) name for the first level region associated with the IP address of the incoming request, for example, `"Texas"`. * `regionCode` string | null * If known, the [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) code for the first-level region associated with the IP address of the incoming request, for example, `"TX"`. * `timezone` string * Timezone of the incoming request, for example, `"America/Chicago"`. :::caution The `request.cf` object is not available in the Cloudflare Workers dashboard or Playground preview editor. ::: *** ## Methods ### Instance methods These methods are only available on an instance of a `Request` object or through its prototype. * `clone()` : Promise\<Request> * Creates a copy of the `Request` object. * `arrayBuffer()` : Promise\<ArrayBuffer> * Returns a promise that resolves with an [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) representation of the request body. * `formData()` : Promise\<FormData> * Returns a promise that resolves with a [`FormData`](https://developer.mozilla.org/en-US/docs/Web/API/FormData) representation of the request body. * `json()` : Promise\<Object> * Returns a promise that resolves with a JSON representation of the request body. 
* `text()` : Promise\<string> * Returns a promise that resolves with a string (text) representation of the request body. *** ## The `Request` context Each time a Worker is invoked by an incoming HTTP request, the [`fetch()` handler](/workers/runtime-apis/handlers/fetch) is called on your Worker. The `Request` context starts when the `fetch()` handler is called, and asynchronous tasks (such as making a subrequest using the [`fetch() API`](/workers/runtime-apis/fetch/)) can only be run inside the `Request` context: ```js export default { async fetch(request, env, ctx) { // Request context starts here return new Response('Hello World!'); }, }; ``` ### When passing a promise to fetch event `.respondWith()` If you pass a Response promise to the fetch event `.respondWith()` method, the request context is active during any asynchronous tasks which run before the Response promise has settled. You can pass the event to an async handler, for example: ```js addEventListener("fetch", event => { event.respondWith(eventHandler(event)) }) // No request context available here async function eventHandler(event){ // Request context available here return new Response("Hello, Workers!") } ``` ### Errors when attempting to access an inactive `Request` context Any attempt to use APIs such as `fetch()` or access the `Request` context during script startup will throw an exception: ```js const promise = fetch("https://example.com/") // Error async function eventHandler(event){..} ``` This code snippet will throw during script startup, and the `"fetch"` event listener will never be registered. *** ### Set the `Content-Length` header The `Content-Length` header will be automatically set by the runtime based on whatever the data source for the `Request` is. Any value manually set by user code in the `Headers` will be ignored. 
To set the `Content-Length` header to a specific value, the `body` of the `Request` must be either a `FixedLengthStream` or a fixed-length value such as a string or `TypedArray`. A `FixedLengthStream` is an identity `TransformStream` that permits only a fixed number of bytes to be written to it.

```js
const { writable, readable } = new FixedLengthStream(11);

const enc = new TextEncoder();
const writer = writable.getWriter();
writer.write(enc.encode("hello world"));
writer.close();

const req = new Request('https://example.org', { method: 'POST', body: readable });
```

Using any other type of `ReadableStream` as the body of a request will result in chunked encoding being used.

***

## Related resources

* [Examples: Modify request property](/workers/examples/modify-request-property/)
* [Examples: Accessing the `cf` object](/workers/examples/accessing-the-cloudflare-object/)
* [Reference: `Response`](/workers/runtime-apis/response/)
* Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# Response

URL: https://developers.cloudflare.com/workers/runtime-apis/response/

The `Response` interface represents an HTTP response and is part of the Fetch API.

***

## Constructor

```js
let response = new Response(body, init);
```

### Parameters

* `body` optional
  * An object that defines the body text for the response. Can be `null` or any one of the following types:
    * BufferSource
    * FormData
    * ReadableStream
    * URLSearchParams
    * USVString
* `init` optional
  * An `options` object that contains custom settings to apply to the response. Valid options for the `options` object include:
    * `cf` any | null
      * An object that contains Cloudflare-specific information. This object is not part of the Fetch API standard and is only available in Cloudflare Workers. This field is only used by consumers of the Response for informational purposes and does not have any impact on Workers behavior.
* `encodeBody` string
  * Workers compress data according to the `content-encoding` header when transmitting. To serve data that is already compressed, set this property to `"manual"`; otherwise, the default is `"automatic"`.
* `headers` Headers | ByteString
  * Any headers to add to your response that are contained within a [`Headers`](/workers/runtime-apis/request/#parameters) object or object literal of [`ByteString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) key-value pairs.
* `status` int
  * The status code for the response, such as `200`.
* `statusText` string
  * The status message associated with the status code, such as `OK`.
* `webSocket` WebSocket | null
  * This is present in successful WebSocket handshake responses. For example, if a client sends a WebSocket upgrade request to an origin and a Worker intercepts the request and then forwards it to the origin and the origin replies with a successful WebSocket upgrade response, the Worker sees `response.webSocket`. This establishes a WebSocket connection proxied through a Worker. Note that you cannot intercept data flowing over a WebSocket connection.

## Properties

* `response.body` ReadableStream
  * A getter to get the body contents.
* `response.bodyUsed` boolean
  * A boolean indicating if the body was used in the response.
* `response.headers` Headers
  * The headers for the response.
* `response.ok` boolean
  * A boolean indicating if the response was successful (status in the range `200`-`299`).
* `response.redirected` boolean
  * A boolean indicating if the response is the result of a redirect. If so, its URL list has more than one entry.
* `response.status` int
  * The status code of the response (for example, `200` to indicate success).
* `response.statusText` string
  * The status message corresponding to the status code (for example, `OK` for `200`).
* `response.url` string
  * The URL of the response.
The value is the final URL obtained after any redirects.
* `response.webSocket` WebSocket?
  * This is present in successful WebSocket handshake responses. For example, if a client sends a WebSocket upgrade request to an origin and a Worker intercepts the request and then forwards it to the origin and the origin replies with a successful WebSocket upgrade response, the Worker sees `response.webSocket`. This establishes a WebSocket connection proxied through a Worker. Note that you cannot intercept data flowing over a WebSocket connection.

## Methods

### Instance methods

* `clone()` : Response
  * Creates a clone of a [`Response`](#response) object.

### Static methods

* `Response.json()` : Response
  * Creates a new response with a JSON-serialized payload.
* `Response.redirect()` : Response
  * Creates a new response with a different URL.

### Additional instance methods

`Response` implements the [`Body`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#body) mixin of the [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), and therefore `Response` instances additionally have the following methods available:

* <code>arrayBuffer()</code> : Promise\<ArrayBuffer>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with an [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer).
* <code>formData()</code> : Promise\<FormData>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with a [`FormData`](https://developer.mozilla.org/en-US/docs/Web/API/FormData) object.
* <code>json()</code> : Promise\<JSON>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with the result of parsing the body text as [`JSON`](https://developer.mozilla.org/en-US/docs/Web/).
* <code>text()</code> : Promise\<USVString>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with a [`USVString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) (text).

### Set the `Content-Length` header

The `Content-Length` header will be automatically set by the runtime based on whatever the data source for the `Response` is. Any value manually set by user code in the `Headers` will be ignored.

To set the `Content-Length` header to a specific value, the `body` of the `Response` must be either a `FixedLengthStream` or a fixed-length value such as a string or `TypedArray`. A `FixedLengthStream` is an identity `TransformStream` that permits only a fixed number of bytes to be written to it.

```js
const { writable, readable } = new FixedLengthStream(11);

const enc = new TextEncoder();
const writer = writable.getWriter();
writer.write(enc.encode("hello world"));
writer.close();

return new Response(readable);
```

Using any other type of `ReadableStream` as the body of a response will result in chunked encoding being used.

***

## Related resources

* [Examples: Modify response](/workers/examples/modify-response/)
* [Examples: Conditional response](/workers/examples/conditional-response/)
* [Reference: `Request`](/workers/runtime-apis/request/)
* Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# TCP sockets

URL: https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/

The Workers runtime provides the `connect()` API for creating outbound [TCP connections](https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/) from Workers.

Many application-layer protocols are built on top of the Transmission Control Protocol (TCP).
These application-layer protocols, including SSH, MQTT, SMTP, FTP, and IRC, as well as most database wire protocols (MySQL, PostgreSQL, MongoDB), require an underlying TCP socket API in order to work.

:::note
Connecting to a PostgreSQL database? You should use [Hyperdrive](/hyperdrive/), which provides the `connect()` API with built-in connection pooling and query caching.
:::

:::note
Outbound TCP connections from Workers are sourced from an IP prefix that is not part of Cloudflare's published [list of IP ranges](https://www.cloudflare.com/ips/).
:::

## `connect()`

The `connect()` function returns a TCP socket, with both a [readable](/workers/runtime-apis/streams/readablestream/) and [writable](/workers/runtime-apis/streams/writablestream/) stream of data. This allows you to read and write data on an ongoing basis, as long as the connection remains open.

`connect()` is provided as a [Runtime API](/workers/runtime-apis/), and is accessed by importing the `connect` function from `cloudflare:sockets`. This process is similar to how one imports built-in modules in Node.js.
Refer to the following codeblock for an example of creating a TCP socket, writing to it, and returning the readable side of the socket as a response: ```typescript import { connect } from 'cloudflare:sockets'; export default { async fetch(req): Promise<Response> { const gopherAddr = { hostname: "gopher.floodgap.com", port: 70 }; const url = new URL(req.url); try { const socket = connect(gopherAddr); const writer = socket.writable.getWriter() const encoder = new TextEncoder(); const encoded = encoder.encode(url.pathname + "\r\n"); await writer.write(encoded); await writer.close(); return new Response(socket.readable, { headers: { "Content-Type": "text/plain" } }); } catch (error) { return new Response("Socket connection failed: " + error, { status: 500 }); } } } satisfies ExportedHandler; ``` * <code>connect(address: SocketAddress | string, options?: optional SocketOptions)</code> : `Socket` * `connect()` accepts either a URL string or [`SocketAddress`](/workers/runtime-apis/tcp-sockets/#socketaddress) to define the hostname and port number to connect to, and an optional configuration object, [`SocketOptions`](/workers/runtime-apis/tcp-sockets/#socketoptions). It returns an instance of a [`Socket`](/workers/runtime-apis/tcp-sockets/#socket). ### `SocketAddress` * `hostname` string * The hostname to connect to. Example: `cloudflare.com`. * `port` number * The port number to connect to. Example: `5432`. ### `SocketOptions` * `secureTransport` "off" | "on" | "starttls" — Defaults to `off` * Specifies whether or not to use [TLS](https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/) when creating the TCP socket. * `off` — Do not use TLS. * `on` — Use TLS. * `starttls` — Do not use TLS initially, but allow the socket to be upgraded to use TLS by calling [`startTls()`](/workers/runtime-apis/tcp-sockets/#opportunistic-tls-starttls). 
* `allowHalfOpen` boolean — Defaults to `false` * Defines whether the writable side of the TCP socket will automatically close on end-of-file (EOF). When set to `false`, the writable side of the TCP socket will automatically close on EOF. When set to `true`, the writable side of the TCP socket will remain open on EOF. * This option is similar to that offered by the Node.js [`net` module](https://nodejs.org/api/net.html) and allows interoperability with code which utilizes it. ### `SocketInfo` * `remoteAddress` string | null * The address of the remote peer the socket is connected to. May not always be set. * `localAddress` string | null * The address of the local network endpoint for this socket. May not always be set. ### `Socket` * <code>readable</code> : ReadableStream * Returns the readable side of the TCP socket. * <code>writable</code> : WritableStream * Returns the writable side of the TCP socket. * The `WritableStream` returned only accepts chunks of `Uint8Array` or its views. * `opened` `Promise<SocketInfo>` * This promise is resolved when the socket connection is established and is rejected if the socket encounters an error. * `closed` `Promise<void>` * This promise is resolved when the socket is closed and is rejected if the socket encounters an error. * `close()` `Promise<void>` * Closes the TCP socket. Both the readable and writable streams are forcibly closed. * <code>startTls()</code> : Socket * Upgrades an insecure socket to a secure one that uses TLS, returning a new [Socket](/workers/runtime-apis/tcp-sockets#socket). Note that in order to call `startTls()`, you must set [`secureTransport`](/workers/runtime-apis/tcp-sockets/#socketoptions) to `starttls` when initially calling `connect()` to create the socket. ## Opportunistic TLS (StartTLS) Many TCP-based systems, including databases and email servers, require that clients use opportunistic TLS (otherwise known as [StartTLS](https://en.wikipedia.org/wiki/Opportunistic_TLS)) when connecting. 
In this pattern, the client first creates an insecure TCP socket, without TLS, and then upgrades it to a secure TCP socket that uses TLS. The `connect()` API simplifies this by providing a method, `startTls()`, which returns a new `Socket` instance that uses TLS:

```typescript
import { connect } from "cloudflare:sockets"

const address = {
  hostname: "example-postgres-db.com",
  port: 5432
};
const socket = connect(address, { secureTransport: "starttls" });
const secureSocket = socket.startTls();
```

* `startTls()` can only be called if `secureTransport` is set to `starttls` when creating the initial TCP socket.
* Once `startTls()` is called, the initial socket is closed and can no longer be read from or written to. In the example above, anytime after `startTls()` is called, you would use the newly created `secureSocket`. Any existing readers and writers based off the original socket will no longer work; you must create new readers and writers from `secureSocket`.
* `startTls()` should only be called once on an existing socket.

## Handle errors

To handle errors when creating a new TCP socket, reading from a socket, or writing to a socket, wrap these calls inside `try..catch` blocks. The following example opens a connection to Google.com, initiates an HTTP request, and returns the response.
If any of this fails and throws an exception, it returns a `500` response:

```typescript
import { connect } from 'cloudflare:sockets';

const connectionUrl = { hostname: "google.com", port: 80 };

export interface Env { }

export default {
  async fetch(req, env, ctx): Promise<Response> {
    try {
      const socket = connect(connectionUrl);
      const writer = socket.writable.getWriter();
      const encoder = new TextEncoder();
      const encoded = encoder.encode("GET / HTTP/1.0\r\n\r\n");
      await writer.write(encoded);
      await writer.close();
      return new Response(socket.readable, { headers: { "Content-Type": "text/plain" } });
    } catch (error) {
      return new Response(`Socket connection failed: ${error}`, { status: 500 });
    }
  }
} satisfies ExportedHandler<Env>;
```

## Close TCP connections

You can close a TCP connection by calling `close()` on the socket. This will close both the readable and writable sides of the socket.

```typescript
import { connect } from "cloudflare:sockets"

const socket = connect({ hostname: "my-url.com", port: 70 });
const reader = socket.readable.getReader();
socket.close();

// After close() is called, you can no longer read from the readable side of the socket
const secondReader = socket.readable.getReader(); // This fails
```

## Considerations

* Outbound TCP sockets to [Cloudflare IP ranges](https://www.cloudflare.com/ips/) are temporarily blocked, but will be re-enabled shortly.
* TCP sockets cannot be created in global scope and shared across requests. You should always create TCP sockets within a handler (for example, [`fetch()`](/workers/get-started/guide/#3-write-code), [`scheduled()`](/workers/runtime-apis/handlers/scheduled/), [`queue()`](/queues/configuration/javascript-apis/#consumer), or [`alarm()`](/durable-objects/api/alarms/)).
* Each open TCP socket counts towards the maximum number of [open connections](/workers/platform/limits/#simultaneous-open-connections) that can be simultaneously open.
* By default, Workers cannot create outbound TCP connections on port `25` to send email to SMTP mail servers. [Cloudflare Email Workers](/email-routing/email-workers/) provides APIs to process and forward email. * Support for handling inbound TCP connections is [coming soon](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/). Currently, it is not possible to make an inbound TCP connection to your Worker, for example, by using the `CONNECT` HTTP method. ## Troubleshooting Review descriptions of common error messages you may see when working with TCP Sockets, what the error messages mean, and how to solve them. ### `proxy request failed, cannot connect to the specified address` Your socket is connecting to an address that was disallowed. Examples of a disallowed address include Cloudflare IPs, `localhost`, and private network IPs. If you need to connect to addresses on port `80` or `443` to make HTTP requests, use [`fetch`](/workers/runtime-apis/fetch/). ### `TCP Loop detected` Your socket is connecting back to the Worker that initiated the outbound connection. In other words, the Worker is connecting back to itself. This is currently not supported. ### `Connections to port 25 are prohibited` Your socket is connecting to an address on port `25`. This is usually the port used for SMTP mail servers. Workers cannot create outbound connections on port `25`. Consider using [Cloudflare Email Workers](/email-routing/email-workers/) instead. --- # Web Crypto URL: https://developers.cloudflare.com/workers/runtime-apis/web-crypto/ import { TabItem, Tabs } from "~/components" ## Background The Web Crypto API provides a set of low-level functions for common cryptographic tasks. The Workers runtime implements the full surface of this API, but with some differences in the [supported algorithms](#supported-algorithms) compared to those implemented in most browsers. 
Performing cryptographic operations using the Web Crypto API is significantly faster than performing them purely in JavaScript. If you want to perform CPU-intensive cryptographic operations, you should consider using the Web Crypto API. The Web Crypto API is implemented through the `SubtleCrypto` interface, accessible via the global `crypto.subtle` binding. A simple example of calculating a digest (also known as a hash) is: ```js const myText = new TextEncoder().encode('Hello world!'); const myDigest = await crypto.subtle.digest( { name: 'SHA-256', }, myText // The data you want to hash as an ArrayBuffer ); console.log(new Uint8Array(myDigest)); ``` Some common uses include [signing requests](/workers/examples/signing-requests/). :::caution The Web Crypto API differs significantly from the [Node.js Crypto API](/workers/runtime-apis/nodejs/crypto/). If you are working with code that relies on the Node.js Crypto API, you can use it by enabling the [`nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/). ::: *** ## Constructors * <code>crypto.DigestStream(algorithm)</code> DigestStream * A non-standard extension to the `crypto` API that supports generating a hash digest from streaming data. The `DigestStream` itself is a [`WritableStream`](/workers/runtime-apis/streams/writablestream/) that does not retain the data written into it. Instead, it generates a hash digest automatically when the flow of data has ended. ### Parameters * <code>algorithm</code>string | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#Syntax). 
### Usage <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(req) { // Fetch from origin const res = await fetch(req); // We need to read the body twice so we `tee` it (get two instances) const [bodyOne, bodyTwo] = res.body.tee(); // Make a new response so we can set the headers (responses from `fetch` are immutable) const newRes = new Response(bodyOne, res); // Create a SHA-256 digest stream and pipe the body into it const digestStream = new crypto.DigestStream("SHA-256"); bodyTwo.pipeTo(digestStream); // Get the final result const digest = await digestStream.digest; // Turn it into a hex string const hexString = [...new Uint8Array(digest)] .map(b => b.toString(16).padStart(2, '0')) .join('') // Set a header with the SHA-256 hash and return the response newRes.headers.set("x-content-digest", `SHA-256=${hexString}`); return newRes; } } ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(req): Promise<Response> { // Fetch from origin const res = await fetch(req); // We need to read the body twice so we `tee` it (get two instances) const [bodyOne, bodyTwo] = res.body.tee(); // Make a new response so we can set the headers (responses from `fetch` are immutable) const newRes = new Response(bodyOne, res); // Create a SHA-256 digest stream and pipe the body into it const digestStream = new crypto.DigestStream("SHA-256"); bodyTwo.pipeTo(digestStream); // Get the final result const digest = await digestStream.digest; // Turn it into a hex string const hexString = [...new Uint8Array(digest)] .map(b => b.toString(16).padStart(2, '0')) .join('') // Set a header with the SHA-256 hash and return the response newRes.headers.set("x-content-digest", `SHA-256=${hexString}`); return newRes; } } satisfies ExportedHandler; ``` </TabItem> </Tabs> ## Methods * <code>crypto.randomUUID()</code> : string * Generates a new random (version 4) UUID as defined in [RFC 
4122](https://www.rfc-editor.org/rfc/rfc4122.txt).
* <code>crypto.getRandomValues(buffer)</code> : ArrayBufferView
  * Fills the passed <code>ArrayBufferView</code> with cryptographically sound random values and returns the <code>buffer</code>.

### Parameters

* <code>buffer</code>ArrayBufferView
  * Must be an Int8Array | Uint8Array | Uint8ClampedArray | Int16Array | Uint16Array | Int32Array | Uint32Array | BigInt64Array | BigUint64Array.

## SubtleCrypto Methods

These methods are all accessed via [`crypto.subtle`](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto#Methods), which is also documented in detail on MDN.

### encrypt

* <code>encrypt(algorithm, key, data)</code> : Promise\<ArrayBuffer>
  * Returns a Promise that fulfills with the encrypted data corresponding to the clear text, algorithm, and key given as parameters.

#### Parameters

* <code>algorithm</code>object
  * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/encrypt#Syntax).
* <code>key</code>CryptoKey
* <code>data</code>BufferSource

### decrypt

* <code>decrypt(algorithm, key, data)</code> : Promise\<ArrayBuffer>
  * Returns a Promise that fulfills with the clear data corresponding to the ciphertext, algorithm, and key given as parameters.

#### Parameters

* <code>algorithm</code>object
  * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/decrypt#Syntax).
* <code>key</code>CryptoKey
* <code>data</code>BufferSource

### sign

* <code>sign(algorithm, key, data)</code> : Promise\<ArrayBuffer>
  * Returns a Promise that fulfills with the signature corresponding to the text, algorithm, and key given as parameters.
#### Parameters * <code>algorithm</code>string | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/sign#Syntax). * <code>key</code>CryptoKey * <code>data</code>ArrayBuffer ### verify * <code>verify(algorithm, key, signature, data)</code> : Promise\<boolean> * Returns a Promise that fulfills with a Boolean value indicating if the signature given as a parameter matches the text, algorithm, and key that are also given as parameters. #### Parameters * <code>algorithm</code>string | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/verify#Syntax). * <code>key</code>CryptoKey * <code>signature</code>ArrayBuffer * <code>data</code>ArrayBuffer ### digest * <code>digest(algorithm, data)</code> : Promise\<ArrayBuffer> * Returns a Promise that fulfills with a digest generated from the algorithm and text given as parameters. #### Parameters * <code>algorithm</code>string | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#Syntax). * <code>data</code>ArrayBuffer ### generateKey * <code>generateKey(algorithm, extractable, keyUsages)</code> : Promise\<CryptoKey> | Promise\<CryptoKeyPair> * Returns a Promise that fulfills with a newly-generated `CryptoKey`, for symmetrical algorithms, or a `CryptoKeyPair`, containing two newly generated keys, for asymmetrical algorithms. 
For example, to generate a new 256-bit AES-GCM key:

```js
let key = await crypto.subtle.generateKey(
  {
    name: 'AES-GCM',
    length: 256,
  },
  true,
  ['encrypt', 'decrypt']
);
```

#### Parameters

* <code>algorithm</code>object
  * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/generateKey#Syntax).
* <code>extractable</code>bool
* <code>keyUsages</code>Array
  * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/generateKey#Syntax).

### deriveKey

* <code>deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages)</code> : Promise\<CryptoKey>
  * Returns a Promise that fulfills with a newly generated `CryptoKey` derived from the base key and specific algorithm given as parameters.

#### Parameters

* <code>algorithm</code>object
  * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax).
* <code>baseKey</code>CryptoKey
* <code>derivedKeyAlgorithm</code>object
  * Defines the algorithm the derived key will be used for in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax).
* <code>extractable</code>bool
* <code>keyUsages</code>Array
  * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax).

### deriveBits

* <code>deriveBits(algorithm, baseKey, length)</code> : Promise\<ArrayBuffer>
  * Returns a Promise that fulfills with an `ArrayBuffer` containing pseudo-random bits derived from the base key and specific algorithm given as parameters.
This method is very similar to `deriveKey()`, except that `deriveKey()` returns a `CryptoKey` object rather than an `ArrayBuffer`. Essentially, `deriveKey()` is composed of `deriveBits()` followed by `importKey()`. #### Parameters * <code>algorithm</code>object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveBits#Syntax). * <code>baseKey</code>CryptoKey * <code>length</code>int * Length of the bit string to derive. ### importKey * <code>importKey(format, keyData, algorithm, extractable, keyUsages)</code> : Promise\<CryptoKey> * Transforms a key from some external, portable format into a `CryptoKey` for use with the Web Crypto API. #### Parameters * <code>format</code>string * Describes [the format of the key to be imported](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax). * <code>keyData</code>ArrayBuffer * <code>algorithm</code>object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax). * <code>extractable</code>bool * <code>keyUsages</code>Array * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax). ### exportKey * <code>exportKey(format, key)</code> : Promise\<ArrayBuffer> * Transforms a `CryptoKey` into a portable format, if the `CryptoKey` is `extractable`. #### Parameters * <code>format</code>string * Describes the [format in which the key will be exported](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/exportKey#Syntax). * <code>key</code>CryptoKey ### wrapKey * <code>wrapKey(format, key, wrappingKey, wrapAlgo)</code> : Promise\<ArrayBuffer> * Transforms a `CryptoKey` into a portable format, and then encrypts it with another key.
This renders the `CryptoKey` suitable for storage or transmission in untrusted environments. #### Parameters * <code>format</code>string * Describes the [format in which the key will be exported](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/wrapKey#Syntax) before being encrypted. * <code>key</code>CryptoKey * <code>wrappingKey</code>CryptoKey * <code>wrapAlgo</code>object * Describes the algorithm to be used to encrypt the exported key, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/wrapKey#Syntax). ### unwrapKey * <code>unwrapKey(format, key, unwrappingKey, unwrapAlgo, <br/> unwrappedKeyAlgo, extractable, keyUsages)</code> : Promise\<CryptoKey> * Transforms a key that was wrapped by `wrapKey()` back into a `CryptoKey`. #### Parameters * <code>format</code>string * Describes the [data format of the key to be unwrapped](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). * <code>key</code>CryptoKey * <code>unwrappingKey</code>CryptoKey * <code>unwrapAlgo</code>object * Describes the algorithm that was used to encrypt the wrapped key, [in an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). * <code>unwrappedKeyAlgo</code>object * Describes the key to be unwrapped, [in an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). * <code>extractable</code>bool * <code>keyUsages</code>Array * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). ### timingSafeEqual * <code>timingSafeEqual(a, b)</code> : bool * Compares two buffers in a way that is resistant to timing attacks. This is a non-standard extension to the Web Crypto API.
#### Parameters * <code>a</code>ArrayBuffer | TypedArray * <code>b</code>ArrayBuffer | TypedArray ### Supported algorithms Workers implements all operations of the [WebCrypto standard](https://www.w3.org/TR/WebCryptoAPI/), as shown in the following table. A checkmark (✓) indicates that this feature is believed to be fully supported according to the spec.<br/> An x (✘) indicates that this feature is part of the specification but not implemented.<br/> If a feature only implements the operation partially, details are listed. | Algorithm | sign()<br/>verify() | encrypt()<br/>decrypt() | digest() | deriveBits()<br/>deriveKey() | generateKey() | wrapKey()<br/>unwrapKey() | exportKey() | importKey() | | :------------------------------------------------- | :------------------ | :---------------------- | :------- | :--------------------------- | :------------ | :------------------------ | :---------- | :---------- | | RSASSA PKCS1 v1.5 | ✓ | | | | ✓ | | ✓ | ✓ | | RSA PSS | ✓ | | | | ✓ | | ✓ | ✓ | | RSA OAEP | | ✓ | | | ✓ | ✓ | ✓ | ✓ | | ECDSA | ✓ | | | | ✓ | | ✓ | ✓ | | ECDH | | | | ✓ | ✓ | | ✓ | ✓ | | Ed25519<sup><a href="#footnote-1">1</a></sup> | ✓ | | | | ✓ | | ✓ | ✓ | | X25519<sup><a href="#footnote-1">1</a></sup> | | | | ✓ | ✓ | | ✓ | ✓ | | NODE ED25519<sup><a href="#footnote-2">2</a></sup> | ✓ | | | | ✓ | | ✓ | ✓ | | AES CTR | | ✓ | | | ✓ | ✓ | ✓ | ✓ | | AES CBC | | ✓ | | | ✓ | ✓ | ✓ | ✓ | | AES GCM | | ✓ | | | ✓ | ✓ | ✓ | ✓ | | AES KW | | | | | ✓ | ✓ | ✓ | ✓ | | HMAC | ✓ | | | | ✓ | | ✓ | ✓ | | SHA 1 | | | ✓ | | | | | | | SHA 256 | | | ✓ | | | | | | | SHA 384 | | | ✓ | | | | | | | SHA 512 | | | ✓ | | | | | | | MD5<sup><a href="#footnote-3">3</a></sup> | | | ✓ | | | | | | | HKDF | | | | ✓ | | | | ✓ | | PBKDF2 | | | | ✓ | | | | ✓ | **Footnotes:** 1. <a name="footnote-1"></a> Algorithms as specified in the [Secure Curves API](https://wicg.github.io/webcrypto-secure-curves). 2. 
<a name="footnote-2"></a> Legacy non-standard EdDSA is supported for the Ed25519 curve in addition to the Secure Curves version. Since this algorithm is non-standard, note the following while using it: * Use <code>NODE-ED25519</code> as the algorithm and `namedCurve` parameters. * Unlike NodeJS, Cloudflare will not support raw import of private keys. * The algorithm implementation may change over time. While Cloudflare cannot guarantee it at this time, Cloudflare will strive to maintain backward compatibility and compatibility with NodeJS's behavior. Any notable compatibility notes will be communicated in release notes and via this developer documentation. 3. <a name="footnote-3"></a> MD5 is not part of the WebCrypto standard but is supported in Cloudflare Workers for interacting with legacy systems that require MD5. MD5 is considered a weak algorithm. Do not rely upon MD5 for security. *** ## Related resources * [SubtleCrypto documentation on MDN](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto) * [SubtleCrypto documentation as part of the W3C Web Crypto API specification](https://www.w3.org/TR/WebCryptoAPI//#subtlecrypto-interface) * [Example: signing requests](/workers/examples/signing-requests/) --- # WebSockets URL: https://developers.cloudflare.com/workers/runtime-apis/websockets/ ## Background WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. For a complete example, refer to [Using the WebSockets API](/workers/examples/websockets/). :::note If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single-point-of-coordination. Durable Objects provide a single-point-of-coordination for Cloudflare Workers, and are often used in parallel with WebSockets to persist state over multiple clients and connections. 
In this case, refer to [Durable Objects](/durable-objects/) to get started, and prefer using the Durable Objects' extended [WebSockets API](/durable-objects/best-practices/websockets/). ::: ## Constructor ```js // { 0: <WebSocket>, 1: <WebSocket> } let websocketPair = new WebSocketPair(); ``` The WebSocketPair returned from this constructor is an Object, with two WebSockets at keys `0` and `1`. These WebSockets are commonly referred to as `client` and `server`. The following example combines `Object.values` and ES6 destructuring to retrieve the WebSockets as `client` and `server`: ```js let [client, server] = Object.values(new WebSocketPair()); ``` ## Methods ### accept * <code>accept()</code> * Accepts the WebSocket connection and begins terminating requests for the WebSocket on Cloudflare's global network. This effectively enables the Workers runtime to begin responding to and handling WebSocket requests. ### addEventListener * <code>addEventListener(event, callbackFunction)</code> * Add callback functions to be executed when an event has occurred on the WebSocket. #### Parameters * `event` WebSocketEvent * The WebSocket event (refer to [Events](/workers/runtime-apis/websockets/#events)) to listen to. * <code>callbackFunction(message)</code> Function * A function to be called when the WebSocket responds to a specific event. ### close * <code>close(code, reason)</code> * Close the WebSocket connection. #### Parameters * <code>code</code>integer optional * An integer indicating the close code sent by the server. This should match an option from the [list of status codes](https://developer.mozilla.org/en-US/docs/Web/API/CloseEvent#status_codes) provided by the WebSocket spec. * <code>reason</code>string optional * A human-readable string indicating why the WebSocket connection was closed. ### send * <code>send(message)</code> * Send a message to the other WebSocket in this WebSocket pair.
#### Parameters * <code>message</code>string | ArrayBuffer | ArrayBufferView * The message to send down the WebSocket connection to the corresponding client. This should be a string or something coercible into a string; for example, strings and numbers will simply be cast to strings, but objects and arrays should be converted to JSON strings using <code>JSON.stringify</code>, and parsed in the client. *** ## Events * <code>close</code> * An event indicating the WebSocket has closed. * <code>error</code> * An event indicating there was an error with the WebSocket. * <code>message</code> * An event indicating a new message received from the client, including the data passed by the client. :::note WebSocket messages received by a Worker have a size limit of 1 MiB (1,048,576 bytes). If a larger message is sent, the WebSocket will be automatically closed with a `1009` "Message is too large" response. ::: ## Types ### Message * `data` any - The data passed back from the other WebSocket in your pair. * `type` string - Defaults to `message`. *** ## Related resources * [Mozilla Developer Network's (MDN) documentation on the WebSocket class](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) * [Our WebSocket template for building applications on Workers using WebSockets](https://github.com/cloudflare/websocket-template) --- # Web standards URL: https://developers.cloudflare.com/workers/runtime-apis/web-standards/ *** ## JavaScript standards The Cloudflare Workers runtime is [built on top of the V8 JavaScript and WebAssembly engine](/workers/reference/how-workers-works/). The Workers runtime is updated at least once a week, to at least the version of V8 that is currently used by Google Chrome's stable release. This means you can safely use the latest JavaScript features, with no need for transpilers.
All of the [standard built-in objects](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference) supported by the current Google Chrome stable release are supported, with a few notable exceptions: * For security reasons, the following are not allowed: * `eval()` * `new Function` * [`WebAssembly.compile`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/compile_static) * [`WebAssembly.compileStreaming`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/compileStreaming_static) * `WebAssembly.instantiate` with a [buffer parameter](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/instantiate_static#primary_overload_%E2%80%94_taking_wasm_binary_code) * [`WebAssembly.instantiateStreaming`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/instantiateStreaming_static) * `Date.now()` returns the time of the last I/O; it does not advance during code execution. *** ## Web standards and global APIs The following methods are available per the [Worker Global Scope](https://developer.mozilla.org/en-US/docs/Web/API/WorkerGlobalScope): ### Base64 utility methods * atob() * Decodes a string of data which has been encoded using base-64 encoding. * btoa() * Creates a base-64 encoded ASCII string from a string of binary data. ### Timers * setInterval() * Schedules a function to execute every time a given number of milliseconds elapses. * clearInterval() * Cancels the repeated execution set using [`setInterval()`](https://developer.mozilla.org/en-US/docs/Web/API/setInterval). * setTimeout() * Schedules a function to execute in a given amount of time. * clearTimeout() * Cancels the delayed execution set using [`setTimeout()`](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout). :::note Timers are only available inside of [the Request Context](/workers/runtime-apis/request/#the-request-context). 
::: ### `performance.timeOrigin` and `performance.now()` * performance.timeOrigin * Returns the high resolution time origin. Workers uses the UNIX epoch as the time origin, meaning that `performance.timeOrigin` will always return `0`. * performance.now() * Returns a `DOMHighResTimeStamp` representing the number of milliseconds elapsed since `performance.timeOrigin`. Note that Workers intentionally reduces the precision of `performance.now()` such that it returns the time of the last I/O and does not advance during code execution. Effectively, because of this, and because `performance.timeOrigin` is always `0`, `performance.now()` will always equal `Date.now()`, yielding a consistent view of the passage of time within a Worker. ### `EventTarget` and `Event` The [`EventTarget`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget) and [`Event`](https://developer.mozilla.org/en-US/docs/Web/API/Event) API allow objects to publish and subscribe to events. ### `AbortController` and `AbortSignal` The [`AbortController`](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) and [`AbortSignal`](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal) APIs provide a common model for canceling asynchronous operations. ### Fetch global * fetch() * Starts the process of fetching a resource from the network. Refer to [Fetch API](/workers/runtime-apis/fetch/). :::note The Fetch API is only available inside of [the Request Context](/workers/runtime-apis/request/#the-request-context). ::: *** ## Encoding API Both `TextEncoder` and `TextDecoder` support UTF-8 encoding/decoding. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/Encoding_API). The [`TextEncoderStream`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoderStream) and [`TextDecoderStream`](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoderStream) classes are also available.
*** ## URL API The URL API supports URLs conforming to HTTP and HTTPS schemes. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/URL). :::note The default URL class behavior differs from the URL Spec documented above. A new spec-compliant implementation of the URL class can be enabled using the `url_standard` [compatibility flag](/workers/configuration/compatibility-flags/). ::: *** ## Compression Streams The `CompressionStream` and `DecompressionStream` classes support the deflate, deflate-raw, and gzip compression methods. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/Compression_Streams_API). *** ## URLPattern API The `URLPattern` API provides a mechanism for matching URLs based on a convenient pattern syntax. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern). *** ## `Intl` The `Intl` API allows you to format dates, times, numbers, and more to the format that is used by a provided locale (language and region). [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl). *** ## `navigator.userAgent` When the [`global_navigator`](/workers/configuration/compatibility-flags/#global-navigator) compatibility flag is set, the [`navigator.userAgent`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/userAgent) property is available with the value `'Cloudflare-Workers'`. This can be used, for example, to reliably determine that code is running within the Workers environment. ## Unhandled promise rejections The [`unhandledrejection`](https://developer.mozilla.org/en-US/docs/Web/API/Window/unhandledrejection_event) event is emitted by the global scope when a JavaScript promise is rejected without a rejection handler attached.
The [`rejectionhandled`](https://developer.mozilla.org/en-US/docs/Web/API/Window/rejectionhandled_event) event is emitted by the global scope when a JavaScript promise rejection is handled late (after a rejection handler is attached to the promise after an `unhandledrejection` event has already been emitted). ```js title="worker.js" addEventListener('unhandledrejection', (event) => { console.log(event.promise); // The promise that was rejected. console.log(event.reason); // The value or Error with which the promise was rejected. }); addEventListener('rejectionhandled', (event) => { console.log(event.promise); // The promise that was rejected. console.log(event.reason); // The value or Error with which the promise was rejected. }); ``` *** ## `navigator.sendBeacon(url[, data])` When the [`global_navigator`](/workers/configuration/compatibility-flags/#global-navigator) compatibility flag is set, the [`navigator.sendBeacon(...)`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon) API is available to send an HTTP `POST` request containing a small amount of data to a web server. This API is intended as a means of transmitting analytics or diagnostics information asynchronously on a best-effort basis. For example, you can replace: ```js const promise = fetch('https://example.com', { method: 'POST', body: 'hello world' }); ctx.waitUntil(promise); ``` with `navigator.sendBeacon(...)`: ```js navigator.sendBeacon('https://example.com', 'hello world'); ``` --- # Testing URL: https://developers.cloudflare.com/workers/testing/ import { Render, LinkButton } from "~/components"; The Workers platform has a variety of ways to test your applications, depending on your requirements. We recommend using the [Vitest integration](/workers/testing/vitest-integration), which allows for unit testing individual functions within your Worker. 
However, if you don't use Vitest, both [Miniflare's API](/workers/testing/miniflare/writing-tests) and the [`unstable_startWorker()`](/workers/wrangler/api/#unstable_startworker) API provide options for testing your Worker in any testing framework. <LinkButton href="/workers/testing/vitest-integration/get-started/write-your-first-test/"> Write your first test </LinkButton> ## Testing comparison matrix | Feature | [Vitest integration](/workers/testing/vitest-integration) | [`unstable_startWorker()`](/workers/wrangler/api/#unstable_startworker) | [Miniflare's API](/workers/testing/miniflare) | | ------------------------------------- | --------------------------------------------------------- | ----------------------------------------------------------------------- | --------------------------------------------- | | Unit testing | ✅ | ❌ | ❌ | | Integration testing | ✅ | ✅ | ✅ | | Loading [Wrangler configuration files](/workers/wrangler/configuration/) | ✅ | ✅ | ❌ | | Use bindings directly in tests | ✅ | ❌ | ✅ | | Isolated per-test storage | ✅ | ❌ | ❌ | | Outbound request mocking | ✅ | ❌ | ✅ | | Multiple Worker support | ✅ | ✅ | ✅ | | Direct access to Durable Objects | ✅ | ❌ | ❌ | | Run Durable Object alarms immediately | ✅ | ❌ | ❌ | | List Durable Objects | ✅ | ❌ | ❌ | | Testing service Workers | ❌ | ✅ | ✅ | <Render file="testing-pages-functions" product="workers" /> --- # Integration testing URL: https://developers.cloudflare.com/workers/testing/integration-testing/ import { Render } from "~/components"; import { LinkButton } from "@astrojs/starlight/components"; Integration tests test multiple units of your Worker together by sending HTTP requests to your Worker and asserting on the HTTP responses.
As an example, consider the following Worker: ```js export function add(a, b) { return a + b; } export default { async fetch(request) { const url = new URL(request.url); const a = parseInt(url.searchParams.get("a")); const b = parseInt(url.searchParams.get("b")); return new Response(add(a, b)); }, }; ``` An integration test for this Worker might look like the following example: ```js // Start Worker HTTP server on port 8787 running `index.mjs` then... const response = await fetch("http://localhost:8787/?a=1&b=2"); assert((await response.text()) === "3"); ``` In the above example, instead of importing the `add` function as a [unit test](/workers/testing/unit-testing/) would do, you make a direct call to the endpoint, testing that the Worker responds at the endpoint with the appropriate response. ## Vitest integration The recommended way to write integration tests for your Workers is by using [the Workers Vitest integration](/workers/testing/vitest-integration/get-started/). Vitest can be configured to run integration tests against a single Worker or multiple Workers. ### Testing via `SELF` If testing a single Worker, you can use the `SELF` fetcher provided by the [`cloudflare:test` API](/workers/testing/vitest-integration/test-apis/). ```js import { SELF } from "cloudflare:test"; it("dispatches fetch event", async () => { const response = await SELF.fetch("https://example.com"); expect(await response.text()).toMatchInlineSnapshot(...); }); ``` When using `SELF` for integration tests, your Worker code runs in the same context as the test runner. This means you can use global mocks to control your Worker, but it also means your Worker uses the same subtly different module resolution behavior provided by Vite. Usually this is not a problem, but if you would like to run your Worker in a fresh environment that is as close to production as possible, using an auxiliary Worker may be a good idea. Auxiliary Workers have some developer experience (DX) limitations.
### Testing via auxiliary Workers It is also possible to configure Workers for integration testing via `vitest.config.ts`. Refer to an [example `vitest.config.ts` configuration file](https://github.com/cloudflare/workers-sdk/blob/main/fixtures/vitest-pool-workers-examples/basics-integration-auxiliary/vitest.config.ts) on GitHub. The Worker can then be referenced like the following example: ```js import { env } from "cloudflare:test"; import { expect, it } from "vitest"; it("dispatches fetch event", async () => { const response = await env.WORKER.fetch("http://example.com"); expect(await response.text()).toBe("👋"); }); ``` Instead of running the Worker-under-test in the same Worker as the test runner like `SELF`, this example defines the Worker-under-test as an _auxiliary_ Worker. This means the Worker runs in a separate isolate to the test runner, with a different global scope. The Worker-under-test runs in an environment closer to production, but Vite transformations and hot-module-reloading aren't applied to the Worker, so you must compile your TypeScript to JavaScript beforehand. Auxiliary Workers cannot be configured from Wrangler files. You must use Miniflare [`WorkerOptions`](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions) in `vitest.config.ts`. :::note This method is less recommended than `SELF` for integration tests because of its developer experience limitations. However, it can be useful when you are testing multiple Workers. You can define multiple Workers by different names in `vitest.config.ts` and reference them via `env`. ::: ## [Wrangler's `unstable_startWorker()` API](/workers/wrangler/api/#unstable_startworker) :::caution `unstable_startWorker()` is an experimental API subject to breaking changes. ::: If you do not want to use Vitest and would like to write integration tests for a single Worker, consider using [Wrangler's `unstable_startWorker()` API](/workers/wrangler/api/#unstable_startworker).
This API exposes the internals of Wrangler's dev server, and allows you to customise how it runs. ```js import assert from "node:assert"; import { unstable_startWorker } from "wrangler"; const worker = await unstable_startWorker({ config: "wrangler.json" }); try { const response = await worker.fetch("/?a=1&b=2"); assert.strictEqual(await response.text(), "3"); } finally { await worker.dispose(); } ``` ## [Miniflare's API](/workers/testing/miniflare/writing-tests/) If you would like to write integration tests for multiple Workers, need direct access to [bindings](/workers/runtime-apis/bindings/) outside your Worker in tests, or have another advanced use case, consider using [Miniflare's API](/workers/testing/miniflare) directly. Miniflare is the foundation for the other testing tools on this page, exposing a JavaScript API for the [`workerd` runtime](https://github.com/cloudflare/workerd) and local simulators for the other Developer Platform products. Unlike `unstable_startWorker()`, Miniflare does not automatically load options from your [Wrangler configuration file](/workers/wrangler/configuration/). Refer to the [Writing tests](/workers/testing/miniflare/writing-tests/) page for an example of how to use Miniflare together with `node:test`. ```js import assert from "node:assert"; import { Miniflare } from "miniflare"; const mf = new Miniflare({ modules: true, scriptPath: "./index.mjs", }); try { const response = await mf.dispatchFetch("http://example.com/?a=1&b=2"); assert.strictEqual(await response.text(), "3"); } finally { await mf.dispose(); } ``` :::note If you have been using the test environments from Miniflare 2 for integration testing and want to migrate to Cloudflare's Vitest integration, refer to the [Migrate from Miniflare 2 migration guide](/workers/testing/vitest-integration/get-started/migrate-from-miniflare-2/) for more information. 
::: <Render file="testing-pages-functions" product="workers" /> ## Related Resources - [Recipes](/workers/testing/vitest-integration/recipes/) - Example integration tests for Workers using the Workers Vitest integration. --- # Tutorials URL: https://developers.cloudflare.com/workers/tutorials/ import { GlossaryTooltip, ListTutorials } from "~/components"; :::note [Explore our community-written tutorials contributed through the Developer Spotlight program.](/developer-spotlight/) ::: View <GlossaryTooltip term="tutorial">tutorials</GlossaryTooltip> to help you get started with Workers. <ListTutorials /> --- # Unit testing URL: https://developers.cloudflare.com/workers/testing/unit-testing/ import { Render } from "~/components" In a Workers context, a unit test imports and directly calls functions from your Worker. After calling the functions, the unit test then asserts on the functions' return values. For example, suppose you have the following Worker: ```js export function add(a, b) { return a + b; } export default { async fetch(request) { const url = new URL(request.url); const a = parseInt(url.searchParams.get("a")); const b = parseInt(url.searchParams.get("b")); return new Response(add(a, b)); } } ``` An example unit test for the above Worker may look like the following: ```js import { add } from "./index.mjs"; assert(add(1, 2) === 3); ``` This test only verifies that the `add` function is returning the correct value, but does not test the Worker itself like an [integration test](/workers/testing/integration-testing) would. ## Vitest integration The recommended way to unit test your Workers is by using the Workers Vitest integration. For more information on features, as well as installation and setup instructions, refer to the [Vitest integration Get Started guide](/workers/testing/vitest-integration/get-started/).
<Render file="testing-pages-functions" product="workers" /> ## Related Resources * [Recipes](/workers/testing/vitest-integration/recipes/) - Examples of unit tests using the Workers Vitest integration. --- # API URL: https://developers.cloudflare.com/workers/wrangler/api/ import { Render, TabItem, Tabs, Type, MetaInfo, WranglerConfig } from "~/components"; Wrangler offers APIs to programmatically interact with your Cloudflare Workers. - [`unstable_startWorker`](#unstable_startworker) - Start a server for running integration tests against your Worker. - [`unstable_dev`](#unstable_dev) - Start a server for running either end-to-end (e2e) or integration tests against your Worker. - [`getPlatformProxy`](#getplatformproxy) - Get proxies and values for emulating the Cloudflare Workers platform in a Node.js process. ## `unstable_startWorker` This API exposes the internals of Wrangler's dev server, and allows you to customise how it runs. For example, you could use `unstable_startWorker()` to run integration tests against your Worker. This example uses `node:test`, but should apply to any testing framework: ```js import assert from "node:assert"; import test, { after, before, describe } from "node:test"; import { unstable_startWorker } from "wrangler"; describe("worker", () => { let worker; before(async () => { worker = await unstable_startWorker({ config: "wrangler.json" }); }); test("hello world", async () => { assert.strictEqual( await (await worker.fetch("http://example.com")).text(), "Hello world", ); }); after(async () => { await worker.dispose(); }); }); ``` ## `unstable_dev` Start an HTTP server for testing your Worker. Once called, `unstable_dev` will return a `fetch()` function for invoking your Worker without needing to know the address or port, as well as a `stop()` function to shut down the HTTP server. By default, `unstable_dev` will perform integration tests against a local server. 
If you wish to perform an e2e test against a preview Worker, pass `local: false` in the `options` object when calling the `unstable_dev()` function. Note that e2e tests can be significantly slower than integration tests. :::note The `unstable_dev()` function has an `unstable_` prefix because the API is experimental and may change in the future. We recommend migrating to the `unstable_startWorker()` API, documented above. If you have been using `unstable_dev()` for integration testing and want to migrate to Cloudflare's Vitest integration, refer to the [Migrate from `unstable_dev` migration guide](/workers/testing/vitest-integration/get-started/migrate-from-unstable-dev/) for more information. ::: ### Constructor ```js const worker = await unstable_dev(script, options); ``` ### Parameters - `script` <Type text="string" /> - A string containing a path to your Worker script, relative to your Worker project's root directory. - `options` <Type text="object" /> <MetaInfo text="optional" /> - Optional options object containing `wrangler dev` configuration settings. - Include an `experimental` object inside `options` to access experimental features such as `disableExperimentalWarning`. - Set `disableExperimentalWarning` to `true` to disable Wrangler's warning about using `unstable_` prefixed APIs. ### Return Type `unstable_dev()` returns an object containing the following methods: - `fetch()` `Promise<Response>` - Send a request to your Worker. Returns a Promise that resolves with a [`Response`](/workers/runtime-apis/response) object. - Refer to [`Fetch`](/workers/runtime-apis/fetch/). - `stop()` `Promise<void>` - Shuts down the dev server. ### Usage When initiating each test suite, use a `beforeAll()` function to start `unstable_dev()`. The `beforeAll()` function is used to minimize overhead: starting the dev server takes a few hundred milliseconds, so starting and stopping it for each individual test adds up quickly, slowing your tests down.
In each test case, call `await worker.fetch()`, and check that the response is what you expect. To wrap up a test suite, call `await worker.stop()` in an `afterAll` function. #### Single Worker example <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js const { unstable_dev } = require("wrangler"); describe("Worker", () => { let worker; beforeAll(async () => { worker = await unstable_dev("src/index.js", { experimental: { disableExperimentalWarning: true }, }); }); afterAll(async () => { await worker.stop(); }); it("should return Hello World", async () => { const resp = await worker.fetch(); const text = await resp.text(); expect(text).toMatchInlineSnapshot(`"Hello World!"`); }); }); ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts import { unstable_dev } from "wrangler"; import type { UnstableDevWorker } from "wrangler"; describe("Worker", () => { let worker: UnstableDevWorker; beforeAll(async () => { worker = await unstable_dev("src/index.ts", { experimental: { disableExperimentalWarning: true }, }); }); afterAll(async () => { await worker.stop(); }); it("should return Hello World", async () => { const resp = await worker.fetch(); const text = await resp.text(); expect(text).toMatchInlineSnapshot(`"Hello World!"`); }); }); ``` </TabItem> </Tabs> #### Multi-Worker example You can test Workers that call other Workers. In the below example, we refer to the Worker that calls other Workers as the parent Worker, and the Worker being called as a child Worker. If you shut down the child Worker prematurely, the parent Worker will not know the child Worker exists and your tests will fail. 
<Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js import { unstable_dev } from "wrangler"; describe("multi-worker testing", () => { let childWorker; let parentWorker; beforeAll(async () => { childWorker = await unstable_dev("src/child-worker.js", { config: "src/child-wrangler.toml", experimental: { disableExperimentalWarning: true }, }); parentWorker = await unstable_dev("src/parent-worker.js", { config: "src/parent-wrangler.toml", experimental: { disableExperimentalWarning: true }, }); }); afterAll(async () => { await childWorker.stop(); await parentWorker.stop(); }); it("childWorker should return Hello World itself", async () => { const resp = await childWorker.fetch(); const text = await resp.text(); expect(text).toMatchInlineSnapshot(`"Hello World!"`); }); it("parentWorker should return Hello World by invoking the child worker", async () => { const resp = await parentWorker.fetch(); const parsedResp = await resp.text(); expect(parsedResp).toEqual("Parent worker sees: Hello World!"); }); }); ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts import { unstable_dev } from "wrangler"; import type { UnstableDevWorker } from "wrangler"; describe("multi-worker testing", () => { let childWorker: UnstableDevWorker; let parentWorker: UnstableDevWorker; beforeAll(async () => { childWorker = await unstable_dev("src/child-worker.js", { config: "src/child-wrangler.toml", experimental: { disableExperimentalWarning: true }, }); parentWorker = await unstable_dev("src/parent-worker.js", { config: "src/parent-wrangler.toml", experimental: { disableExperimentalWarning: true }, }); }); afterAll(async () => { await childWorker.stop(); await parentWorker.stop(); }); it("childWorker should return Hello World itself", async () => { const resp = await childWorker.fetch(); const text = await resp.text(); expect(text).toMatchInlineSnapshot(`"Hello World!"`); }); it("parentWorker should return Hello World by invoking the child worker", async () => { 
const resp = await parentWorker.fetch(); const parsedResp = await resp.text(); expect(parsedResp).toEqual("Parent worker sees: Hello World!"); }); }); ``` </TabItem> </Tabs> ## `getPlatformProxy` The `getPlatformProxy` function provides a way to obtain an object containing proxies (to **local** `workerd` bindings) and emulations of Cloudflare Workers-specific values, allowing you to emulate these bindings and values in a Node.js process. :::caution `getPlatformProxy` is designed to be used exclusively in Node.js applications and cannot be run inside the Workers runtime. ::: A common use case for getting a platform proxy is emulating bindings in applications targeting Workers but running outside the Workers runtime (for example, framework local development servers running in Node.js), or for testing purposes (for example, ensuring code properly interacts with a type of binding). :::note Binding proxies provided by this function are a best-effort emulation of the real production bindings. Although they are designed to be as close as possible to the real thing, there might be slight differences and inconsistencies between the two. ::: ### Syntax ```js const platform = await getPlatformProxy(options); ``` ### Parameters - `options` <Type text="object" /> <MetaInfo text="optional" /> - Optional options object containing preferences for the bindings: - `environment` string The environment to use. - `configPath` string The path to the config file to use. If no path is specified, the default behavior is to search from the current directory up the filesystem for a [Wrangler configuration file](/workers/wrangler/configuration/) to use. **Note:** this field is optional but if a path is specified it must point to a valid file on the filesystem. - `persist` boolean | `{ path: string }` Indicates if and where to persist the bindings data. If `true` or `undefined`, defaults to the same location used by Wrangler, so data can be shared between it and the caller. 
If `false`, no data is persisted to or read from the filesystem. **Note:** `wrangler`'s `--persist-to` option adds a subdirectory called `v3` under the hood, while `getPlatformProxy`'s `persist` option does not. For example, if you run `wrangler dev --persist-to ./my-directory`, to reuse the same location using `getPlatformProxy`, you will have to specify: `persist: "./my-directory/v3"`. ### Return Type `getPlatformProxy()` returns a `Promise` resolving to an object containing the following fields. - `env` `Record<string, unknown>` - Object containing proxies to bindings that can be used in the same way as production bindings. This matches the shape of the `env` object passed as the second argument to modules-format workers. These proxies point to binding implementations that run inside `workerd`. - TypeScript Tip: `getPlatformProxy<Env>()` is a generic function. You can pass the shape of the bindings record as a type argument to get proper types without `unknown` values. - `cf` IncomingRequestCfProperties read-only - Mock of the `Request`'s `cf` property, containing data similar to what you would see in production. - `ctx` object - Mock object containing implementations of the [`waitUntil`](/workers/runtime-apis/context/#waituntil) and [`passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception) functions that do nothing. - `caches` object - Emulation of the [Workers `caches` runtime API](/workers/runtime-apis/cache/). - For the time being, all cache operations do nothing. A more accurate emulation will be made available soon. - `dispose()` () => `Promise<void>` - Terminates the underlying `workerd` process. - Call this after the platform proxy is no longer required by the program. If you are running a long-running process (such as a dev server) that can indefinitely make use of the proxy, you do not need to call this function. 
### Usage The `getPlatformProxy` function uses bindings found in the [Wrangler configuration file](/workers/wrangler/configuration/). For example, if you have an [environment variable](/workers/configuration/environment-variables/#add-environment-variables-via-wrangler) configuration set up in the Wrangler configuration file: <WranglerConfig> ```toml [vars] MY_VARIABLE = "test" ``` </WranglerConfig> You can access the bindings by importing `getPlatformProxy` like this: ```js import { getPlatformProxy } from "wrangler"; const { env } = await getPlatformProxy(); ``` To access the value of the `MY_VARIABLE` binding add the following to your code: ```js console.log(`MY_VARIABLE = ${env.MY_VARIABLE}`); ``` This will print the following output: `MY_VARIABLE = test`. ### Supported bindings All supported bindings found in your [Wrangler configuration file](/workers/wrangler/configuration/) are available to you via `env`. The bindings supported by `getPlatformProxy` are: - [Environment variables](/workers/configuration/environment-variables/) - [Service bindings](/workers/runtime-apis/bindings/service-bindings/) - [KV namespace bindings](/kv/api/) - [Durable Object bindings](/durable-objects/api/) - To use a Durable Object binding with `getPlatformProxy`, always [specify a `script_name`](/workers/wrangler/configuration/#durable-objects) and have the target Worker run in a separate terminal via [`wrangler dev`](/workers/wrangler/commands/#dev). For example, you might have the following file read by `getPlatformProxy`. <WranglerConfig> ```toml [[durable_objects.bindings]] name = "MyDurableObject" class_name = "MyDurableObject" script_name = "my-worker" ``` </WranglerConfig> In order for this binding to be successfully proxied by `getPlatformProxy`, a worker named `my-worker` with a Durable Object declaration using the same `class_name` of `"MyDurableObject"` must be run separately via `wrangler dev`. 
- [R2 bucket bindings](/r2/api/workers/workers-api-reference/) - [Queue bindings](/queues/configuration/javascript-apis/) - [D1 database bindings](/d1/worker-api/) - [Hyperdrive bindings](/hyperdrive) :::note[Hyperdrive values are simple passthrough ones] Values provided by Hyperdrive bindings, such as `connectionString` and `host`, do not have a valid meaning outside of a `workerd` process. Hyperdrive proxies therefore return passthrough values, which correspond to the database connection provided by the user, rather than values that would be unusable from within Node.js. ::: - [Workers AI bindings](/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai) <Render file="ai-local-usage-charges" product="workers" /> --- # Bundling URL: https://developers.cloudflare.com/workers/wrangler/bundling/ By default, Wrangler bundles your Worker code using [`esbuild`](https://esbuild.github.io/). This means that Wrangler has built-in support for importing modules from [npm](https://www.npmjs.com/) defined in your `package.json`. To review the exact code that Wrangler will upload to Cloudflare, run `npx wrangler deploy --dry-run --outdir dist`, which will show your Worker code after Wrangler's bundling. :::note Wrangler's inbuilt bundling usually provides the best experience, but we understand there are cases where you will need more flexibility. You can provide `rules` and set `find_additional_modules` in your configuration to control which files are included in the deployed Worker but not bundled into the entry-point file. Furthermore, we have an escape hatch in the form of [Custom Builds](/workers/wrangler/custom-builds/), which lets you run your own build before Wrangler's built-in one. ::: ## Including non-JavaScript modules Bundling your Worker code takes multiple modules and bundles them into one file. Sometimes, you might have modules that cannot be inlined directly into the bundle. 
For example, instead of bundling a Wasm file into your JavaScript Worker, you would want to upload the Wasm file as a separate module that can be imported at runtime. Wrangler supports this for the following file types: - `.txt` - `.html` - `.bin` - `.wasm` and `.wasm?module` Refer to [Bundling configuration](/workers/wrangler/configuration/#bundling) to customize these file types. For example, with the following import, the variable `data` will be a string containing the contents of `example.html`: ```js import data from "./example.html"; // Where `example.html` is a file in your local directory ``` This is also the basis of Wasm support with Wrangler. To use a Wasm module in a Worker developed with Wrangler, add the following to your Worker: ```js import wasm from "./example.wasm"; // Where `example.wasm` is a file in your local directory const instance = await WebAssembly.instantiate(wasm); // Instantiate Wasm modules in global scope, not within the fetch() handler export default { fetch(request) { const result = instance.exports.exported_func(); return new Response(`Result: ${result}`); }, }; ``` :::caution Cloudflare Workers does not support `WebAssembly.instantiateStreaming()`. ::: ## Find additional modules By setting `find_additional_modules` to `true` in your configuration file, Wrangler will traverse the file tree below `base_dir`. Any files that match the `rules` you define will also be included as unbundled, external modules in the deployed Worker. This approach is useful for supporting lazy loading of large or dynamically imported JavaScript files: - Normally, a large lazy-imported file (for example, `await import("./large-dep.mjs")`) would be bundled directly into your entrypoint, reducing the effectiveness of the lazy loading. If a matching rule is added to `rules`, then this file would only be loaded and executed at runtime when it is actually imported. 
- Previously, variable-based dynamic imports (for example, ``await import(`./lang/${language}.mjs`)``) would always fail at runtime because Wrangler had no way of knowing which modules to include in the upload. Providing a rule that matches all these files, such as `{ type = "EsModule", globs = ["./lang/**/*.mjs"], fallthrough = true }`, will ensure these modules are available at runtime. - "Partial bundling" is supported when `find_additional_modules` is `true`, and a source file matches one of the configured `rules`, since Wrangler will then treat it as "external" and not try to bundle it into the entry-point file. ## Conditional exports Wrangler respects the [conditional `exports` field](https://nodejs.org/api/packages.html#conditional-exports) in `package.json`. This allows developers to implement isomorphic libraries that have different implementations depending on the JavaScript runtime they are running in. When bundling, Wrangler will try to load the [`workerd` key](https://runtime-keys.proposal.wintercg.org/#workerd). Refer to the Wrangler repository for [an example isomorphic package](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/isomorphic-random-example). ## Disable bundling :::caution Disabling bundling is not recommended in most scenarios. Use this option only when deploying code pre-processed by other tooling. ::: If your build tooling already produces build artifacts suitable for direct deployment to Cloudflare, you can opt out of bundling by using the `--no-bundle` command line flag: `npx wrangler deploy --no-bundle`. If you opt out of bundling, Wrangler will not process your code and some features introduced by Wrangler bundling (for example, minification and polyfill injection) will not be available. 
Use [Custom Builds](/workers/wrangler/custom-builds/) to customize what Wrangler will bundle and upload to the Cloudflare global network when you use [`wrangler dev`](/workers/wrangler/commands/#dev) and [`wrangler deploy`](/workers/wrangler/commands/#deploy). ## Generated Wrangler configuration Some framework tools, or custom pre-build processes, generate a modified Wrangler configuration to be used to deploy the Worker code. It is possible for Wrangler to automatically use this generated configuration rather than the original, user's configuration. See [Generated Wrangler configuration](/workers/wrangler/configuration/#generated-wrangler-configuration) for more information. --- # Commands URL: https://developers.cloudflare.com/workers/wrangler/commands/ import { TabItem, Tabs, Render, Type, MetaInfo, WranglerConfig } from "~/components"; Wrangler offers a number of commands to manage your Cloudflare Workers. - [`docs`](#docs) - Open this page in your default browser. - [`init`](#init) - Create a new project from a variety of web frameworks and templates. - [`generate`](#generate) - Create a Wrangler project using an existing [Workers template](https://github.com/cloudflare/worker-template). - [`d1`](#d1) - Interact with D1. - [`vectorize`](#vectorize) - Interact with Vectorize indexes. - [`hyperdrive`](#hyperdrive) - Manage your Hyperdrives. - [`deploy`](#deploy) - Deploy your Worker to Cloudflare. - [`dev`](#dev) - Start a local server for developing your Worker. - [`publish`](#publish) - Publish your Worker to Cloudflare. - [`delete`](#delete-2) - Delete your Worker from Cloudflare. - [`kv namespace`](#kv-namespace) - Manage Workers KV namespaces. - [`kv key`](#kv-key) - Manage key-value pairs within a Workers KV namespace. - [`kv bulk`](#kv-bulk) - Manage multiple key-value pairs within a Workers KV namespace in batches. - [`r2 bucket`](#r2-bucket) - Manage Workers R2 buckets. - [`r2 object`](#r2-object) - Manage Workers R2 objects. 
- [`secret`](#secret) - Manage the secret variables for a Worker. - [`secret bulk`](#secret-bulk) - Manage multiple secret variables for a Worker. - [`workflows`](#workflows) - Manage and configure Workflows. - [`tail`](#tail) - Start a session to livestream logs from a deployed Worker. - [`pages`](#pages) - Configure Cloudflare Pages. - [`queues`](#queues) - Configure Workers Queues. - [`login`](#login) - Authorize Wrangler with your Cloudflare account using OAuth. - [`logout`](#logout) - Remove Wrangler’s authorization for accessing your account. - [`whoami`](#whoami) - Retrieve your user information and test your authentication configuration. - [`versions`](#versions) - Retrieve details for recent versions. - [`deployments`](#deployments) - Retrieve details for recent deployments. - [`rollback`](#rollback) - Rollback to a recent deployment. - [`dispatch-namespace`](#dispatch-namespace) - Interact with a [dispatch namespace](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dispatch-namespace). - [`mtls-certificate`](#mtls-certificate) - Manage certificates used for mTLS connections. - [`cert`](#cert) - Manage certificates used for mTLS and Certificate Authority (CA) chain connections. - [`types`](#types) - Generate types from bindings and module rules in configuration. - [`telemetry`](#telemetry) - Configure whether Wrangler can collect anonymous usage data. :::note <Render file="wrangler-commands/global-flags" product="workers" /> ::: --- ## How to run Wrangler commands This page provides a reference for Wrangler commands. ```txt wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS] ``` Since Cloudflare recommends [installing Wrangler locally](/workers/wrangler/install-and-update/) in your project (rather than globally), the way to run Wrangler will depend on your specific setup and package manager. 
<Tabs> <TabItem label="npm"> ```sh npx wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS] ``` </TabItem> <TabItem label="yarn"> ```sh yarn wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS] ``` </TabItem> <TabItem label="pnpm"> ```sh pnpm wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS] ``` </TabItem> </Tabs> You can add Wrangler commands that you use often as scripts in your project's `package.json` file: ```json { ... "scripts": { "deploy": "wrangler deploy", "dev": "wrangler dev" } ... } ``` You can then run them using your package manager of choice: <Tabs> <TabItem label="npm"> ```sh npm run deploy ``` </TabItem> <TabItem label="yarn"> ```sh yarn run deploy ``` </TabItem> <TabItem label="pnpm"> ```sh pnpm run deploy ``` </TabItem> </Tabs> --- ## `docs` Open the Cloudflare developer documentation in your default browser. ```txt wrangler docs [<COMMAND>] ``` - `COMMAND` <Type text="string" /> <MetaInfo text="optional" /> - The Wrangler command you want to learn more about. This opens your default browser to the section of the documentation that describes the command. <Render file="wrangler-commands/global-flags" product="workers" /> ## `init` Create a new project via the [create-cloudflare-cli (C3) tool](/workers/get-started/guide/#1-create-a-new-worker-project). A variety of web frameworks are available to choose from as well as templates. Dependencies are installed by default, with the option to deploy your project immediately. ```txt wrangler init [<NAME>] [OPTIONS] ``` - `NAME` <Type text="string" /> <MetaInfo text="optional (default: name of working directory)" /> - The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](/workers/wrangler/configuration/). - `--yes` <Type text="boolean" /> <MetaInfo text="optional" /> - Answer yes to any prompts for new projects. 
- `--from-dash` <Type text="string" /> <MetaInfo text="optional" /> - Fetch a Worker initialized from the dashboard. This is done by passing the flag and the Worker name. `wrangler init --from-dash <WORKER_NAME>`. - The `--from-dash` command will not automatically sync changes made to the dashboard after the command is used. Therefore, it is recommended that you continue using the CLI. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `generate` :::note This command has been deprecated as of [Wrangler v3](/workers/wrangler/migration/update-v2-to-v3/) and will be removed in a future version. ::: Create a new project using an existing [Workers template](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker). ```txt wrangler generate [<NAME>] [TEMPLATE] ``` - `NAME` <Type text="string" /> <MetaInfo text="optional (default: name of working directory)" /> - The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](/workers/wrangler/configuration/). - `TEMPLATE` <Type text="string" /> <MetaInfo text="optional" /> - The URL of a GitHub template, with a default [worker-template](https://github.com/cloudflare/worker-template). Browse a list of available templates on the [cloudflare/workers-sdk](https://github.com/cloudflare/workers-sdk/tree/main/templates#usage) repository. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `d1` Interact with Cloudflare's D1 service. <Render file="wrangler-commands/d1" product="workers" /> --- ## `hyperdrive` Manage [Hyperdrive](/hyperdrive/) database configurations. <Render file="wrangler-commands/hyperdrive" product="workers" /> --- ## `vectorize` Interact with a [Vectorize](/vectorize/) vector database. ### `create` Creates a new vector index, and provides the binding and name that you will put in your Wrangler file. 
```sh npx wrangler vectorize create <INDEX_NAME> [--dimensions=<NUM_DIMENSIONS>] [--metric=<DISTANCE_METRIC>] [--description=<DESCRIPTION>] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the new index to create. Must be unique for an account and cannot be changed after creation. - `--dimensions` <Type text="number" /> <MetaInfo text="required" /> - The vector dimension width to configure the index for. Cannot be changed after creation. - `--metric` <Type text="string" /> <MetaInfo text="required" /> - The distance metric to use for calculating vector distance. Must be one of `cosine`, `euclidean`, or `dot-product`. - `--description` <Type text="string" /> <MetaInfo text="optional" /> - A description for your index. - `--deprecated-v1` <Type text="boolean" /> <MetaInfo text="optional" /> - Create a legacy Vectorize index. Please note that legacy Vectorize indexes are on a [deprecation path](/vectorize/reference/transition-vectorize-legacy). <Render file="wrangler-commands/global-flags" product="workers" /> ### `list` List all Vectorize indexes in your account, including the configured dimensions and distance metric. ```sh npx wrangler vectorize list ``` - `--deprecated-v1` <Type text="boolean" /> <MetaInfo text="optional" /> - List legacy Vectorize indexes. Please note that legacy Vectorize indexes are on a [deprecation path](/vectorize/reference/transition-vectorize-legacy). <Render file="wrangler-commands/global-flags" product="workers" /> ### `get` Get details about an individual index, including its configuration. ```sh npx wrangler vectorize get <INDEX_NAME> ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the index to fetch details for. - `--deprecated-v1` <Type text="boolean" /> <MetaInfo text="optional" /> - Get a legacy Vectorize index. Please note that legacy Vectorize indexes are on a [deprecation path](/vectorize/reference/transition-vectorize-legacy). 
<Render file="wrangler-commands/global-flags" product="workers" /> ### `info` Get some additional information about an individual index, including the vector count and details about the last processed mutation. ```sh npx wrangler vectorize info <INDEX_NAME> ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the index to fetch details for. <Render file="wrangler-commands/global-flags" product="workers" /> ### `delete` Delete a Vectorize index. ```sh npx wrangler vectorize delete <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index to delete. - `--force` <Type text="boolean" /> <MetaInfo text="optional" /> - Skip confirmation when deleting the index (Note: This is not a recoverable operation). - `--deprecated-v1` <Type text="boolean" /> <MetaInfo text="optional" /> - Delete a legacy Vectorize index. Please note that legacy Vectorize indexes are on a [deprecation path](/vectorize/reference/transition-vectorize-legacy). <Render file="wrangler-commands/global-flags" product="workers" /> ### `insert` Insert vectors into an index. ```sh npx wrangler vectorize insert <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index to insert vectors into. - `--file` <Type text="string" /> <MetaInfo text="required" /> - A file containing the vectors to insert in newline-delimited JSON (NDJSON) format. - `--batch-size` <Type text="number" /> <MetaInfo text="optional" /> - The number of vectors to insert at a time (default: `1000`). - `--deprecated-v1` <Type text="boolean" /> <MetaInfo text="optional" /> - Insert into a legacy Vectorize index. Please note that legacy Vectorize indexes are on a [deprecation path](/vectorize/reference/transition-vectorize-legacy). <Render file="wrangler-commands/global-flags" product="workers" /> ### `upsert` Upsert vectors into an index. 
Existing vectors in the index will be overwritten. ```sh npx wrangler vectorize upsert <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index to upsert vectors in. - `--file` <Type text="string" /> <MetaInfo text="required" /> - A file containing the vectors to upsert in newline-delimited JSON (NDJSON) format. - `--batch-size` <Type text="number" /> <MetaInfo text="optional" /> - The number of vectors to upsert at a time (default: `5000`). <Render file="wrangler-commands/global-flags" product="workers" /> ### `query` Query a Vectorize index for similar vectors. ```sh npx wrangler vectorize query <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index to query. - `--vector` <Type text="array" /> <MetaInfo text="optional" /> - Vector against which the Vectorize index is queried. Either this or the `vector-id` param must be provided. - `--vector-id` <Type text="string" /> <MetaInfo text="optional" /> - Identifier for a vector that is already present in the index against which the index is queried. Either this or the `vector` param must be provided. - `--top-k` <Type text="number" /> <MetaInfo text="optional" /> - The number of vectors to query (default: `5`). - `--return-values` <Type text="boolean" /> <MetaInfo text="optional" /> - Enable to return vector values in the response (default: `false`). - `--return-metadata` <Type text="string" /> <MetaInfo text="optional" /> - Enable to return vector metadata in the response. Must be one of `none`, `indexed`, or `all` (default: `none`). - `--namespace` <Type text="string" /> <MetaInfo text="optional" /> - Restrict the query response to only include vectors from this namespace. - `--filter` <Type text="string" /> <MetaInfo text="optional" /> - Filter vectors based on this metadata filter. 
Example: `'{ "p1": "abc", "p2": { "$ne": true }, "p3": 10, "p4": false, "nested.p5": "abcd" }'` <Render file="wrangler-commands/global-flags" product="workers" /> ### `get-vectors` Fetch vectors from a Vectorize index using the provided ids. ```sh npx wrangler vectorize get-vectors <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index from which vectors need to be fetched. - `--ids` <Type text="array" /> <MetaInfo text="required" /> - List of ids for which vectors must be fetched. <Render file="wrangler-commands/global-flags" product="workers" /> ### `delete-vectors` Delete vectors in a Vectorize index using the provided ids. ```sh npx wrangler vectorize delete-vectors <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index from which vectors need to be deleted. - `--ids` <Type text="array" /> <MetaInfo text="required" /> - List of ids corresponding to the vectors that must be deleted. <Render file="wrangler-commands/global-flags" product="workers" /> ### `create-metadata-index` Enable metadata filtering on the specified property. ```sh npx wrangler vectorize create-metadata-index <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index for which the metadata index needs to be created. - `--property-name` <Type text="string" /> <MetaInfo text="required" /> - Metadata property for which metadata filtering should be enabled. - `--type` <Type text="string" /> <MetaInfo text="required" /> - Data type of the property. Must be one of `string`, `number`, or `boolean`. <Render file="wrangler-commands/global-flags" product="workers" /> ### `list-metadata-index` List metadata properties on which metadata filtering is enabled. 
```sh npx wrangler vectorize list-metadata-index <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index for which metadata indexes need to be fetched. <Render file="wrangler-commands/global-flags" product="workers" /> ### `delete-metadata-index` Disable metadata filtering on the specified property. ```sh npx wrangler vectorize delete-metadata-index <INDEX_NAME> [OPTIONS] ``` - `INDEX_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Vectorize index for which the metadata index needs to be disabled. - `--property-name` <Type text="string" /> <MetaInfo text="required" /> - Metadata property for which metadata filtering should be disabled. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `dev` Start a local server for developing your Worker. ```txt wrangler dev [<SCRIPT>] [OPTIONS] ``` :::note None of the options for this command are required. Many of these options can be set in your Wrangler file. Refer to the [Wrangler configuration](/workers/wrangler/configuration) documentation for more information. ::: :::caution As of Wrangler v3.2.0, `wrangler dev` is supported by any Linux distribution providing `glibc 2.31` or higher (e.g. Ubuntu 20.04/22.04, Debian 11/12, Fedora 37/38/39), macOS version 11 or higher, and Windows (x86-64 architecture). ::: - `SCRIPT` <Type text="string" /> - The path to an entry point for your Worker. Only required if your [Wrangler configuration file](/workers/wrangler/configuration/) does not include a `main` key (for example, `main = "index.js"`). - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Name of the Worker. - `--no-bundle` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - Skip Wrangler's build steps. Particularly useful when using custom builds. Refer to [Bundling](https://developers.cloudflare.com/workers/wrangler/bundling/) for more information. 
- `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. - `--compatibility-date` <Type text="string" /> <MetaInfo text="optional" /> - A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used. - `--compatibility-flags`, `--compatibility-flag` <Type text="string[]" /> <MetaInfo text="optional" /> - Flags to use for compatibility checks. - `--latest` <Type text="boolean" /> <MetaInfo text="(default: true) optional" /> - Use the latest version of the Workers runtime. - `--ip` <Type text="string" /> <MetaInfo text="optional" /> - IP address to listen on, defaults to `localhost`. - `--port` <Type text="number" /> <MetaInfo text="optional" /> - Port to listen on. - `--inspector-port` <Type text="number" /> <MetaInfo text="optional" /> - Port for devtools to connect to. - `--routes`, `--route` <Type text="string[]" /> <MetaInfo text="optional" /> - Routes to upload. - For example: `--route example.com/*`. - `--host` <Type text="string" /> <MetaInfo text="optional" /> - Host to forward requests to, defaults to the zone of your project. - `--local-protocol` <Type text="'http'|'https'" /> <MetaInfo text="(default: http) optional" /> - Protocol to listen to requests on. - `--https-key-path` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom certificate key. - `--https-cert-path` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom certificate. - `--local-upstream` <Type text="string" /> <MetaInfo text="optional" /> - Host to act as origin in local mode, defaults to `dev.host` or route. - `--assets` <Type text="string" /> <MetaInfo text="optional beta" /> - Folder of static assets to be served. Replaces [Workers Sites](/workers/configuration/sites/). Visit [assets](/workers/static-assets/) for more information. - `--legacy-assets` <Type text="string" /> <MetaInfo text="optional deprecated, use `--assets`" /> - Folder of static assets to be served.
- `--site` <Type text="string" /> <MetaInfo text="optional deprecated, use `--assets`" /> - Folder of static assets for Workers Sites. :::caution Workers Sites is deprecated. Please use [Workers Assets](/workers/static-assets/) or [Pages](/pages/). ::: - `--site-include` <Type text="string[]" /> <MetaInfo text="optional deprecated" /> - Array of `.gitignore`-style patterns that match file or directory names from the sites directory. Only matched items will be uploaded. - `--site-exclude` <Type text="string[]" /> <MetaInfo text="optional deprecated" /> - Array of `.gitignore`-style patterns that match file or directory names from the sites directory. Matched items will not be uploaded. - `--upstream-protocol` <Type text="'http'|'https'" /> <MetaInfo text="(default: https) optional" /> - Protocol to forward requests to host on. - `--var` <Type text="key:value\[]" /> <MetaInfo text="optional" /> - Array of `key:value` pairs to inject as variables into your code. The value will always be passed as a string to your Worker. - For example, `--var git_hash:$(git rev-parse HEAD) test:123` makes the `git_hash` and `test` variables available in your Worker's `env`. - This flag is an alternative to defining [`vars`](/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](/workers/wrangler/configuration/). If defined in both places, this flag's values will be used. - `--define` <Type text="key:value\[]" /> <MetaInfo text="optional" /> - Array of `key:value` pairs to replace global identifiers in your code. - For example, `--define GIT_HASH:$(git rev-parse HEAD)` will replace all uses of `GIT_HASH` with the actual value at build time. - This flag is an alternative to defining [`define`](/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](/workers/wrangler/configuration/). If defined in both places, this flag's values will be used. 
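Because the shell expands `$(...)` before Wrangler parses its flags, the values passed via `--var` and `--define` arrive as plain strings. A minimal sketch of that expansion, with a hypothetical stand-in for the git command and the command string only echoed, not executed:

```shell
# Hypothetical stand-in: in real use this would be $(git rev-parse HEAD).
GIT_HASH=$(printf 'abc123')
# The shell resolves the substitution before Wrangler ever runs, so the
# Worker's `env.git_hash` would receive the literal string "abc123".
CMD="npx wrangler dev --var git_hash:$GIT_HASH test:123"
echo "$CMD"
```

The same expansion rule applies to `--define GIT_HASH:$(git rev-parse HEAD)`: the substitution happens in your shell, and Wrangler only ever sees the resolved value.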
- `--tsconfig` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom `tsconfig.json` file. - `--minify` <Type text="boolean" /> <MetaInfo text="optional" /> - Minify the Worker. - `--node-compat` <Type text="boolean" /> <MetaInfo text="optional" /> - Enable Node.js compatibility. - `--persist-to` <Type text="string" /> <MetaInfo text="optional" /> - Specify directory to use for local persistence. - `--remote` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - Develop against remote resources and data stored on Cloudflare's network. - `--test-scheduled` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - Exposes a `/__scheduled` fetch route which will trigger a scheduled event (Cron Trigger) for testing during development. To simulate different cron patterns, a `cron` query parameter can be passed in: `/__scheduled?cron=*+*+*+*+*`. - `--log-level` <Type text="'debug'|'info'|'log'|'warn'|'error'|'none'" /> <MetaInfo text="(default: log) optional" /> - Specify Wrangler's logging level. - `--show-interactive-dev-session` <Type text="boolean" /> <MetaInfo text="(default: true if the terminal supports interactivity) optional" /> - Show the interactive dev session. - `--alias` <Type text="string[]" /> <MetaInfo text="optional" /> - Specify modules to alias using [module aliasing](/workers/wrangler/configuration/#module-aliasing). <Render file="wrangler-commands/global-flags" product="workers" /> `wrangler dev` is a way to [locally test](/workers/local-development/) your Worker while developing. With `wrangler dev` running, send HTTP requests to `localhost:8787` and your Worker should execute as expected. You will also see `console.log` messages and exceptions appearing in your terminal. --- ## `deploy` Deploy your Worker to Cloudflare. ```txt wrangler deploy [<SCRIPT>] [OPTIONS] ``` :::note None of the options for this command are required. Also, many can be set in your Wrangler file.
Refer to the [Wrangler configuration](/workers/wrangler/configuration/) documentation for more information. ::: - `SCRIPT` <Type text="string" /> - The path to an entry point for your Worker. Only required if your [Wrangler configuration file](/workers/wrangler/configuration/) does not include a `main` key (for example, `main = "index.js"`). - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Name of the Worker. - `--no-bundle` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - Skip Wrangler's build steps. Particularly useful when using custom builds. Refer to [Bundling](https://developers.cloudflare.com/workers/wrangler/bundling/) for more information. - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. - `--outdir` <Type text="string" /> <MetaInfo text="optional" /> - Path to directory where Wrangler will write the bundled Worker files. - `--compatibility-date` <Type text="string" /> <MetaInfo text="optional" /> - A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used. - `--compatibility-flags`, `--compatibility-flag` <Type text="string[]" /> <MetaInfo text="optional" /> - Flags to use for compatibility checks. - `--latest` <Type text="boolean" /> <MetaInfo text="(default: true) optional" /> - Use the latest version of the Workers runtime. - `--assets` <Type text="string" /> <MetaInfo text="optional beta" /> - Folder of static assets to be served. Replaces [Workers Sites](/workers/configuration/sites/). Visit [assets](/workers/static-assets/) for more information. - `--legacy-assets` <Type text="string" /> <MetaInfo text="optional deprecated, use `--assets`" /> - Folder of static assets to be served. - `--site` <Type text="string" /> <MetaInfo text="optional deprecated, use `--assets`" /> - Folder of static assets for Workers Sites. :::caution Workers Sites is deprecated. 
Please use [Workers Assets](/workers/static-assets/) or [Pages](/pages/). ::: - `--site-include` <Type text="string[]" /> <MetaInfo text="optional deprecated" /> - Array of `.gitignore`-style patterns that match file or directory names from the sites directory. Only matched items will be uploaded. - `--site-exclude` <Type text="string[]" /> <MetaInfo text="optional deprecated" /> - Array of `.gitignore`-style patterns that match file or directory names from the sites directory. Matched items will not be uploaded. - `--var` <Type text="key:value\[]" /> <MetaInfo text="optional" /> - Array of `key:value` pairs to inject as variables into your code. The value will always be passed as a string to your Worker. - For example, `--var git_hash:$(git rev-parse HEAD) test:123` makes the `git_hash` and `test` variables available in your Worker's `env`. - This flag is an alternative to defining [`vars`](/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](/workers/wrangler/configuration/). If defined in both places, this flag's values will be used. - `--define` <Type text="key:value\[]" /> <MetaInfo text="optional" /> - Array of `key:value` pairs to replace global identifiers in your code. - For example, `--define GIT_HASH:$(git rev-parse HEAD)` will replace all uses of `GIT_HASH` with the actual value at build time. - This flag is an alternative to defining [`define`](/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](/workers/wrangler/configuration/). If defined in both places, this flag's values will be used. - `--triggers`, `--schedule`, `--schedules` <Type text="string[]" /> <MetaInfo text="optional" /> - Cron schedules to attach to the deployed Worker. Refer to [Cron Trigger Examples](/workers/configuration/cron-triggers/#examples). - `--routes`, `--route` <Type text="string[]" /> <MetaInfo text="optional" /> - Routes where this Worker will be deployed. - For example: `--route example.com/*`.
- `--tsconfig` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom `tsconfig.json` file. - `--minify` <Type text="boolean" /> <MetaInfo text="optional" /> - Minify the bundled Worker before deploying. - `--node-compat` <Type text="boolean" /> <MetaInfo text="optional" /> - Enable Node.js compatibility. - `--dry-run` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - Compile a project without actually deploying to live servers. Combined with `--outdir`, this is also useful for testing the output of `npx wrangler deploy`. It also gives developers a chance to upload the generated sourcemap to a service like Sentry before the service goes live, so that errors from the Worker can be mapped against the source code. - `--keep-vars` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - It is recommended best practice to treat your Wrangler developer environment as a source of truth for your Worker configuration, and avoid making changes via the Cloudflare dashboard. - If you change your environment variables or bindings in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behaviour, set `keep-vars` to `true`. - `--dispatch-namespace` <Type text="string" /> <MetaInfo text="optional" /> - Specify the [Workers for Platforms dispatch namespace](/cloudflare-for-platforms/workers-for-platforms/get-started/configuration/#2-create-a-dispatch-namespace) to upload this Worker to. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `publish` Publish your Worker to Cloudflare. ```txt wrangler publish [OPTIONS] ``` :::note This command has been deprecated as of v3 in favor of [`wrangler deploy`](#deploy). It will be removed in v4. ::: --- ## `delete` Delete your Worker and all associated Cloudflare developer platform resources.
```txt wrangler delete [<SCRIPT>] [OPTIONS] ``` - `SCRIPT` <Type text="string" /> - The path to an entry point for your Worker. Only required if your [Wrangler configuration file](/workers/wrangler/configuration/) does not include a `main` key (for example, `main = "index.js"`). - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Name of the Worker. - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. - `--dry-run` <Type text="boolean" /> <MetaInfo text="(default: false) optional" /> - Do not actually delete the Worker. This is useful for testing the output of `wrangler delete`. <Render file="wrangler-commands/global-flags" product="workers" /> --- <Render file="wrangler-commands/kv" product="workers" /> --- <Render file="wrangler-commands/r2" product="workers" /> --- ## `secret` Manage the secret variables for a Worker. This action creates a new [version](/workers/configuration/versions-and-deployments/#versions) of the Worker and [deploys](/workers/configuration/versions-and-deployments/#deployments) it immediately. To only create a new version of the Worker, use the [`wrangler versions secret`](/workers/wrangler/commands/#secret-put) commands. ### `put` Create or replace a secret for a Worker. ```txt wrangler secret put <KEY> [OPTIONS] ``` - `KEY` <Type text="string" /> <MetaInfo text="required" /> - The variable name for this secret to be accessed in the Worker. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from a [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. <Render file="wrangler-commands/global-flags" product="workers" /> When running this command, you will be prompted to input the secret's value: ```sh npx wrangler secret put FOO ``` ```sh output ? 
Enter a secret value: › *** 🌀 Creating the secret for script worker-app ✨ Success! Uploaded secret FOO ``` The `put` command can also receive piped input. For example: ```sh echo "-----BEGIN PRIVATE KEY-----\nM...==\n-----END PRIVATE KEY-----\n" | wrangler secret put PRIVATE_KEY ``` ### `delete` Delete a secret for a Worker. ```txt wrangler secret delete <KEY> [OPTIONS] ``` - `KEY` <Type text="string" /> <MetaInfo text="required" /> - The variable name for this secret to be accessed in the Worker. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. <Render file="wrangler-commands/global-flags" product="workers" /> ### `list` List the names of all the secrets for a Worker. ```txt wrangler secret list [OPTIONS] ``` - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of listing the secrets for the current Worker. ```sh npx wrangler secret list ``` ```sh output [ { "name": "FOO", "type": "secret_text" } ] ``` --- ## `secret bulk` Upload multiple secrets for a Worker at once. ```txt wrangler secret bulk [<FILENAME>] [OPTIONS] ``` - `FILENAME` <Type text="string" /> <MetaInfo text="optional" /> - A file containing either [JSON](https://www.json.org/json-en.html) or the [.env](https://www.dotenv.org/docs/security/env) format. - The JSON file containing key-value pairs to upload as secrets, in the form `{"SECRET_NAME": "secret value", ...}`.
- The `.env` file containing [key-value pairs to upload as secrets](/workers/configuration/secrets/#local-development-with-secrets), in the form `SECRET_NAME=secret value`. - If omitted, Wrangler expects to receive input from `stdin` rather than a file. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of uploading secrets from a JSON file redirected to `stdin`. When complete, the output summary will show the number of secrets uploaded and the number of secrets that failed to upload. ```json { "secret-name-1": "secret-value-1", "secret-name-2": "secret-value-2" } ``` ```sh npx wrangler secret bulk < secrets.json ``` ```sh output 🌀 Creating the secrets for the Worker "script-name" ✨ Successfully created secret for key: secret-name-1 ... 🚨 Error uploading secret for key: secret-name-1 ✨ Successfully created secret for key: secret-name-2 Finished processing secrets JSON file: ✨ 1 secrets successfully uploaded 🚨 1 secrets failed to upload ``` ## `workflows` :::note The `wrangler workflows` command requires Wrangler version `3.83.0` or greater. Use `npx wrangler@latest` to always use the latest Wrangler version when invoking commands. ::: Manage and configure [Workflows](/workflows/). ### `list` Lists the registered Workflows for this account. ```sh wrangler workflows list ``` - `--page` <Type text="number" /> <MetaInfo text="optional" /> - Show a specific page from the listing. You can configure page size using "per-page". - `--per-page` <Type text="number" /> <MetaInfo text="optional" /> - Configure the maximum number of Workflows to show per page. 
<Render file="wrangler-commands/global-flags" product="workers" /> ### `instances` Manage and interact with specific instances of a Workflow. ### `instances list` List Workflow instances. ```sh wrangler workflows instances list <WORKFLOW_NAME> [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. <Render file="wrangler-commands/global-flags" product="workers" /> ### `instances describe` Describe a specific instance of a Workflow, including its current status, any persisted state, and per-step outputs. ```sh wrangler workflows instances describe <WORKFLOW_NAME> <ID> [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. - `ID` <Type text="string" /> <MetaInfo text="required" /> - The ID of a Workflow instance. You can optionally provide `latest` to refer to the most recently created instance of a Workflow. <Render file="wrangler-commands/global-flags" product="workers" /> ```sh # Passing `latest` instead of an explicit ID will describe the most recently queued instance wrangler workflows instances describe my-workflow latest ``` ```sh output Workflow Name: my-workflow Instance Id: 51c73fc8-7fd5-47d9-bd82-9e301506ee72 Version Id: cedc33a0-11fa-4c26-8a8e-7d28d381a291 Status: ✅ Completed Trigger: 🌎 API Queued: 10/16/2024, 2:00:39 PM Success: ✅ Yes Start: 10/16/2024, 2:00:39 PM End: 10/16/2024, 2:01:40 PM Duration: 1 minute # Remaining output truncated ``` ### `instances terminate` Terminate (permanently stop) a Workflow instance. ```sh wrangler workflows instances terminate <WORKFLOW_NAME> <ID> [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. - `ID` <Type text="string" /> <MetaInfo text="required" /> - The ID of a Workflow instance. <Render file="wrangler-commands/global-flags" product="workers" /> ### `instances pause` Pause (until resumed) a Workflow instance. 
```sh wrangler workflows instances pause <WORKFLOW_NAME> <ID> [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. - `ID` <Type text="string" /> <MetaInfo text="required" /> - The ID of a Workflow instance. <Render file="wrangler-commands/global-flags" product="workers" /> ### `instances resume` Resume a paused Workflow instance. ```sh wrangler workflows instances resume <WORKFLOW_NAME> <ID> [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. - `ID` <Type text="string" /> <MetaInfo text="required" /> - The ID of a Workflow instance. <Render file="wrangler-commands/global-flags" product="workers" /> ### `describe` Describe a registered Workflow. ```sh wrangler workflows describe <WORKFLOW_NAME> [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. <Render file="wrangler-commands/global-flags" product="workers" /> ### `trigger` Trigger (create) a Workflow instance. ```sh wrangler workflows trigger <WORKFLOW_NAME> [PARAMS] [OPTIONS] ``` - `WORKFLOW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of a registered Workflow. - `PARAMS` <Type text="string" /> <MetaInfo text="optional" /> - The parameters to pass to the Workflow as an event. Must be a JSON-encoded string. <Render file="wrangler-commands/global-flags" product="workers" /> ```sh # Pass optional params to the Workflow. wrangler workflows trigger my-workflow '{"hello":"world"}' ``` ## `tail` Start a session to livestream logs from a deployed Worker. ```txt wrangler tail <WORKER> [OPTIONS] ``` - `WORKER` <Type text="string" /> <MetaInfo text="required" /> - The name of your Worker or the route the Worker is running on. - `--format` <Type text="'json'|'pretty'" /> <MetaInfo text="optional" /> - The format of the log entries.
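With `--format json`, each event arrives as a JSON object, which makes the stream easy to post-process with standard tools. A hedged sketch — the log line below is a fabricated sample standing in for real `wrangler tail` output, and is not guaranteed to match the exact event schema:

```shell
# Fabricated sample event, standing in for one line of
# `wrangler tail <WORKER> --format json` output.
LOG='{"outcome":"ok","scriptName":"my-worker","logs":[{"level":"log","message":["hello"]}]}'
# Keep only successful invocations, as a downstream filter might do
# when piping the tail stream through grep.
echo "$LOG" | grep -q '"outcome":"ok"' && echo "matched ok invocation"
```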
- `--status` <Type text="'ok'|'error'|'canceled'" /> <MetaInfo text="optional" /> - Filter by invocation status. - `--header` <Type text="string" /> <MetaInfo text="optional" /> - Filter by HTTP header. - `--method` <Type text="string" /> <MetaInfo text="optional" /> - Filter by HTTP method. - `--sampling-rate` <Type text="number" /> <MetaInfo text="optional" /> - Sample logs from a fraction of requests (between `0` and `1`). - `--search` <Type text="string" /> <MetaInfo text="optional" /> - Filter by a text match in `console.log` messages. - `--ip` <Type text="(string|'self')\[]" /> <MetaInfo text="optional" /> - Filter by the IP address the request originates from. Use `"self"` to show only messages from your own IP. - `--version-id` <Type text="string" /> <MetaInfo text="optional" /> - Filter by Worker version. <Render file="wrangler-commands/global-flags" product="workers" /> After starting `wrangler tail`, you will receive a live feed of console and exception logs for each request your Worker receives. If your Worker has a high volume of traffic, the tail might enter sampling mode. This will cause some of your messages to be dropped and a warning to appear in your tail logs. To prevent messages from being dropped, add the options listed above to filter the volume of tail messages. :::note It may take up to 1 minute (60 seconds) for a tail to exit sampling mode after adding an option to filter tail messages. ::: If sampling persists after using options to filter messages, consider using [instant logs](https://developers.cloudflare.com/logs/instant-logs/). --- ## `pages` Configure Cloudflare Pages. ### `dev` Develop your full-stack Pages application locally. ```txt wrangler pages dev [<DIRECTORY>] [OPTIONS] ``` - `DIRECTORY` <Type text="string" /> <MetaInfo text="optional" /> - The directory of static assets to serve. - `--local` <Type text="boolean" /> <MetaInfo text="optional (default: true)" /> - Run on your local machine.
- `--ip` <Type text="string" /> <MetaInfo text="optional" /> - IP address to listen on, defaults to `localhost`. - `--port` <Type text="number" /> <MetaInfo text="optional (default: 8788)" /> - The port to listen on (serve from). - `--binding` <Type text="string[]" /> <MetaInfo text="optional" /> - Bind an environment variable or secret (for example, `--binding <VARIABLE_NAME>=<VALUE>`). - `--kv` <Type text="string[]" /> <MetaInfo text="optional" /> - Binding name of [KV namespace](/kv/) to bind (for example, `--kv <BINDING_NAME>`). - `--r2` <Type text="string[]" /> <MetaInfo text="optional" /> - Binding name of [R2 bucket](/pages/functions/bindings/#interact-with-your-r2-buckets-locally) to bind (for example, `--r2 <BINDING_NAME>`). - `--d1` <Type text="string[]" /> <MetaInfo text="optional" /> - Binding name of [D1 database](/pages/functions/bindings/#interact-with-your-d1-databases-locally) to bind (for example, `--d1 <BINDING_NAME>`). - `--do` <Type text="string[]" /> <MetaInfo text="optional" /> - Binding name of Durable Object to bind (for example, `--do <BINDING_NAME>=<CLASS>`). - `--live-reload` <Type text="boolean" /> <MetaInfo text="optional (default: false)" /> - Auto reload HTML pages when change is detected. - `--compatibility-flag` <Type text="string[]" /> <MetaInfo text="optional" /> - Runtime compatibility flags to apply. - `--compatibility-date` <Type text="string" /> <MetaInfo text="optional" /> - Runtime compatibility date to apply. - `--show-interactive-dev-session` <Type text="boolean" /> <MetaInfo text="optional (default: true if the terminal supports interactivity)" /> - Show the interactive dev session. - `--https-key-path` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom certificate key. - `--https-cert-path` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom certificate.
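For the `--https-key-path` and `--https-cert-path` flags above, a throwaway local certificate can be generated with standard OpenSSL. A sketch, assuming `openssl` is installed; the file names and validity period are arbitrary:

```shell
# Generate a self-signed certificate for local HTTPS testing.
# The resulting files can be passed as --https-key-path / --https-cert-path.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
  -subj "/CN=localhost" \
  -keyout local-key.pem -out local-cert.pem -days 30
```

The generated `local-key.pem` and `local-cert.pem` would then be supplied to `--https-key-path` and `--https-cert-path` respectively. Because the certificate is self-signed, browsers and tools will warn unless told to trust it for local use.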
<Render file="wrangler-commands/global-flags" product="workers" /> ### `download config` Download your Pages project config as a [Wrangler configuration file](/workers/wrangler/configuration/). ```txt wrangler pages download config <PROJECT_NAME> ``` <Render file="wrangler-commands/global-flags" product="workers" /> ### `project list` List your Pages projects. ```txt wrangler pages project list ``` <Render file="wrangler-commands/global-flags" product="workers" /> ### `project create` Create a new Cloudflare Pages project. ```txt wrangler pages project create <PROJECT_NAME> [OPTIONS] ``` - `PROJECT_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of your Pages project. - `--production-branch` <Type text="string" /> <MetaInfo text="optional" /> - The name of the production branch of your project. <Render file="wrangler-commands/global-flags" product="workers" /> ### `project delete` Delete a Cloudflare Pages project. ```txt wrangler pages project delete <PROJECT_NAME> [OPTIONS] ``` - `PROJECT_NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the Pages project to delete. - `--yes` <Type text="boolean" /> <MetaInfo text="optional" /> - Answer `"yes"` to confirmation prompt. <Render file="wrangler-commands/global-flags" product="workers" /> ### `deployment list` List deployments in your Cloudflare Pages project. ```txt wrangler pages deployment list [--project-name <PROJECT_NAME>] ``` - `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of the project you would like to list deployments for. - `--environment` <Type text="'production'|'preview'" /> <MetaInfo text="optional" /> - Environment type to list deployments for. <Render file="wrangler-commands/global-flags" product="workers" /> ### `deployment tail` Start a session to livestream logs from your deployed Pages Functions. 
```txt wrangler pages deployment tail [<DEPLOYMENT>] [OPTIONS] ``` - `DEPLOYMENT` <Type text="string" /> <MetaInfo text="optional" /> - ID or URL of the deployment to tail. Specify by environment if deployment ID is unknown. - `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of the project you would like to tail. - `--environment` <Type text="'production'|'preview'" /> <MetaInfo text="optional" /> - When not providing a specific deployment ID, specifying environment will grab the latest production or preview deployment. - `--format` <Type text="'json'|'pretty'" /> <MetaInfo text="optional" /> - The format of the log entries. - `--status` <Type text="'ok'|'error'|'canceled'" /> <MetaInfo text="optional" /> - Filter by invocation status. - `--header` <Type text="string" /> <MetaInfo text="optional" /> - Filter by HTTP header. - `--method` <Type text="string" /> <MetaInfo text="optional" /> - Filter by HTTP method. - `--sampling-rate` <Type text="number" /> <MetaInfo text="optional" /> - Sample logs from a fraction of requests (between `0` and `1`). - `--search` <Type text="string" /> <MetaInfo text="optional" /> - Filter by a text match in `console.log` messages. - `--ip` <Type text="(string|'self')\[]" /> <MetaInfo text="optional" /> - Filter by the IP address the request originates from. Use `"self"` to show only messages from your own IP. <Render file="wrangler-commands/global-flags" product="workers" /> :::note Filtering with `--ip self` will allow tailing your deployed Functions beyond the normal request per second limits. ::: After starting `wrangler pages deployment tail`, you will receive a live stream of console and exception logs for each request your Functions receive. ### `deploy` Deploy a directory of static assets as a Pages deployment.
```txt wrangler pages deploy <BUILD_OUTPUT_DIRECTORY> [OPTIONS] ``` - `BUILD_OUTPUT_DIRECTORY` <Type text="string" /> <MetaInfo text="optional" /> - The [directory](/pages/configuration/build-configuration/#framework-presets) of static files to upload. As of Wrangler 3.45.0, this is only required when your Pages project does not have a Wrangler file. Refer to the [Pages Functions configuration guide](/pages/functions/wrangler-configuration/) for more information. - `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of the project you want to deploy to. - `--branch` <Type text="string" /> <MetaInfo text="optional" /> - The name of the branch you want to deploy to. - `--commit-hash` <Type text="string" /> <MetaInfo text="optional" /> - The SHA to attach to this deployment. - `--commit-message` <Type text="string" /> <MetaInfo text="optional" /> - The commit message to attach to this deployment. - `--commit-dirty` <Type text="boolean" /> <MetaInfo text="optional" /> - Whether or not the workspace should be considered dirty for this deployment. <Render file="wrangler-commands/global-flags" product="workers" /> :::note Your site is deployed to `<PROJECT_NAME>.pages.dev`. If you do not provide the `--project-name` argument, you will be prompted to enter a project name in your terminal after you run the command. ::: ### `publish` Publish a directory of static assets as a Pages deployment. ```txt wrangler pages publish [<DIRECTORY>] [OPTIONS] ``` <Render file="wrangler-commands/global-flags" product="workers" /> :::note This command has been deprecated as of v3 in favor of [`wrangler pages deploy`](#deploy-1). It will be removed in v4. ::: ### `secret put` Create or update a secret for a Pages project. ```txt wrangler pages secret put <KEY> [OPTIONS] ``` - `KEY` <Type text="string" /> <MetaInfo text="required" /> - The variable name for this secret to be accessed in the Pages project. 
- `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of your Pages project. <Render file="wrangler-commands/global-flags" product="workers" /> ### `secret delete` Delete a secret from a Pages project. ```txt wrangler pages secret delete <KEY> [OPTIONS] ``` - `KEY` <Type text="string" /> <MetaInfo text="required" /> - The variable name for this secret to be accessed in the Pages project. - `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of your Pages project. <Render file="wrangler-commands/global-flags" product="workers" /> ### `secret list` List the names of all the secrets for a Pages project. ```txt wrangler pages secret list [OPTIONS] ``` - `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of your Pages project. <Render file="wrangler-commands/global-flags" product="workers" /> ### `secret bulk` Upload multiple secrets for a Pages project at once. ```txt wrangler pages secret bulk [<FILENAME>] [OPTIONS] ``` - `FILENAME` <Type text="string" /> <MetaInfo text="optional" /> - A file containing either [JSON](https://www.json.org/json-en.html) or the [.env](https://www.dotenv.org/docs/security/env) format. - The JSON file containing key-value pairs to upload as secrets, in the form `{"SECRET_NAME": "secret value", ...}`. - The `.env` file containing [key-value pairs to upload as secrets](/workers/configuration/secrets/#local-development-with-secrets), in the form `SECRET_NAME=secret value`. - If omitted, Wrangler expects to receive input from `stdin` rather than a file. - `--project-name` <Type text="string" /> <MetaInfo text="optional" /> - The name of your Pages project. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `queues` Manage your Workers [Queues](/queues/) configurations. ### `create` Create a new queue.
```txt wrangler queues create <name> [OPTIONS] ``` - `name` <Type text="string" /> <MetaInfo text="required" /> - The name of the queue to create. - `--delivery-delay-secs` <Type text="number" /> <MetaInfo text="optional" /> - How long a published message should be delayed for, in seconds. Must be a positive integer. - `--message-retention-period-secs` <Type text="number" /> <MetaInfo text="optional" /> - How long a published message is retained in the Queue. Must be a positive integer between 60 and 1209600 (14 days). Defaults to 345600 (4 days). <Render file="wrangler-commands/global-flags" product="workers" /> ### `update` Update an existing queue. ```txt wrangler queues update <name> [OPTIONS] ``` - `name` <Type text="string" /> <MetaInfo text="required" /> - The name of the queue to update. - `--delivery-delay-secs` <Type text="number" /> <MetaInfo text="optional" /> - How long a published message should be delayed for, in seconds. Must be a positive integer. - `--message-retention-period-secs` <Type text="number" /> <MetaInfo text="optional" /> - How long a published message is retained on the Queue. Must be a positive integer between 60 and 1209600 (14 days). Defaults to 345600 (4 days). <Render file="wrangler-commands/global-flags" product="workers" /> ### `delete` Delete an existing queue. ```txt wrangler queues delete <name> [OPTIONS] ``` - `name` <Type text="string" /> <MetaInfo text="required" /> - The name of the queue to delete. <Render file="wrangler-commands/global-flags" product="workers" /> ### `list` List all queues in the current account. ```txt wrangler queues list [OPTIONS] ``` <Render file="wrangler-commands/global-flags" product="workers" /> ### `info` Get information on individual queues. ```txt wrangler queues info <name> ``` - `name` <Type text="string" /> <MetaInfo text="required" /> - The name of the queue to inspect. ### `consumer` Manage queue consumer configurations. 
### `consumer add <script-name>` Add a Worker script as a [queue consumer](/queues/reference/how-queues-works/#consumers). ```txt wrangler queues consumer add <queue-name> <script-name> [OPTIONS] ``` - `queue-name` <Type text="string" /> <MetaInfo text="required" /> - The name of the queue to add the consumer to. - `script-name` <Type text="string" /> <MetaInfo text="required" /> - The name of the Workers script to add as a consumer of the named queue. - `--batch-size` <Type text="number" /> <MetaInfo text="optional" /> - Maximum number of messages per batch. Must be a positive integer. - `--batch-timeout` <Type text="number" /> <MetaInfo text="optional" /> - Maximum number of seconds to wait to fill a batch with messages. Must be a positive integer. - `--message-retries` <Type text="number" /> <MetaInfo text="optional" /> - Maximum number of retries for each message. Must be a positive integer. - `--max-concurrency` <Type text="number" /> <MetaInfo text="optional" /> - The maximum number of concurrent consumer invocations that will be scaled up to handle incoming message volume. Must be a positive integer. - `--retry-delay-secs` <Type text="number" /> <MetaInfo text="optional" /> - How long a retried message should be delayed for, in seconds. Must be a positive integer. <Render file="wrangler-commands/global-flags" product="workers" /> ### `consumer remove` Remove a consumer from a queue. ```txt wrangler queues consumer remove <queue-name> <script-name> ``` - `queue-name` <Type text="string" /> <MetaInfo text="required" /> - The name of the queue to remove the consumer from. - `script-name` <Type text="string" /> <MetaInfo text="required" /> - The name of the Workers script to remove as the consumer. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `login` Authorize Wrangler with your Cloudflare account using OAuth. Wrangler will attempt to automatically open your web browser to login with your Cloudflare account. 
If you prefer to use API tokens for authentication, such as in headless or continuous integration environments, refer to [Running Wrangler in CI/CD](/workers/ci-cd/). ```txt wrangler login [OPTIONS] ``` - `--scopes-list` <Type text="string" /> <MetaInfo text="optional" /> - List all the available OAuth scopes with descriptions. - `--scopes $SCOPES` <Type text="string" /> <MetaInfo text="optional" /> - Lets you choose your set of OAuth scopes. The set of scopes must be entered in a whitespace-separated list, for example, `npx wrangler login --scopes account:read user:read`. <Render file="wrangler-commands/global-flags" product="workers" /> :::note `wrangler login` uses all the available scopes by default if no flags are provided. ::: If Wrangler fails to open a browser, you can copy and paste the URL generated by `wrangler login` in your terminal into a browser and log in. ### Use `wrangler login` on a remote machine If you are using Wrangler from a remote machine, but run the login flow from your local browser, you will receive the following error message after logging in: `This site can't be reached`. To finish the login flow, run `wrangler login` and go through the login flow in the browser: ```sh npx wrangler login ``` ```sh output ⛅️ wrangler 2.1.6 ------------------- Attempting to login via OAuth... Opening a link in your default browser: https://dash.cloudflare.com/oauth2/auth?xyz... ``` The browser login flow will redirect you to a `localhost` URL on your machine. Leave the login flow active. Open a second terminal session. In that second terminal session, use `curl` or an equivalent request library on the remote machine to fetch this `localhost` URL. Copy and paste the `localhost` URL that was generated during the `wrangler login` flow and run: ```sh curl <LOCALHOST_URL> ``` --- ## `logout` Remove Wrangler's authorization for accessing your account. This command will invalidate your current OAuth token.
```txt wrangler logout ``` <Render file="wrangler-commands/global-flags" product="workers" /> If you are using `CLOUDFLARE_API_TOKEN` instead of OAuth, you can log out by deleting your API token in the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/). 2. Go to **My Profile** > **API Tokens**. 3. Select the three-dot menu on your Wrangler token. 4. Select **Delete**. --- ## `whoami` Retrieve your user information and test your authentication configuration. ```txt wrangler whoami ``` <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `versions` :::note The minimum required Wrangler version to use these commands is 3.40.0. For versions before 3.73.0, you will need to add the `--x-versions` flag. ::: ### `upload` Upload a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker that is not deployed immediately. ```txt wrangler versions upload [OPTIONS] ``` - `--tag` <Type text="string" /> <MetaInfo text="optional" /> - Add a version tag. Accepts empty string. - `--message` <Type text="string" /> <MetaInfo text="optional" /> - Add a version message. Accepts empty string. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). <Render file="wrangler-commands/global-flags" product="workers" /> ### `deploy` Deploy a previously created [version](/workers/configuration/versions-and-deployments/#versions) of your Worker all at once, or create a [gradual deployment](/workers/configuration/versions-and-deployments/gradual-deployments/) to incrementally shift traffic to a new version by following an interactive prompt.
```txt wrangler versions deploy [OPTIONS] ``` - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). <Render file="wrangler-commands/global-flags" product="workers" /> :::note The non-interactive version of this prompt is: `wrangler versions deploy version-id-1@percentage-1% version-id-2@percentage-2% -y` For example: `wrangler versions deploy 095f00a7-23a7-43b7-a227-e4c97cab5f22@10% 1a88955c-2fbd-4a72-9d9b-3ba1e59842f2@90% -y` ::: ### `list` Retrieve details for the 10 most recent versions. Details include `Version ID`, `Created on`, `Author`, `Source`, and optionally, `Tag` or `Message`. ```txt wrangler versions list [OPTIONS] ``` - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). <Render file="wrangler-commands/global-flags" product="workers" /> ### `secret put` Create or replace a secret for a Worker. Creates a new [version](/workers/configuration/versions-and-deployments/#versions) with modified secrets without [deploying](/workers/configuration/versions-and-deployments/#deployments) the Worker. ```txt wrangler versions secret put <KEY> [OPTIONS] ``` - `KEY` <Type text="string" /> <MetaInfo text="required" /> - The variable name for this secret to be accessed in the Worker. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. <Render file="wrangler-commands/global-flags" product="workers" /> ### `secret delete` Delete a secret for a Worker.
Creates a new [version](/workers/configuration/versions-and-deployments/#versions) with modified secrets without [deploying](/workers/configuration/versions-and-deployments/#deployments) the Worker. ```txt wrangler versions secret delete <KEY> [OPTIONS] ``` - `KEY` <Type text="string" /> <MetaInfo text="required" /> - The variable name for this secret to be accessed in the Worker. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment. <Render file="wrangler-commands/global-flags" product="workers" /> ### `secret bulk` Upload multiple secrets for a Worker at once. Creates a new [version](/workers/configuration/versions-and-deployments/#versions) with modified secrets without [deploying](/workers/configuration/versions-and-deployments/#deployments) the Worker. ```txt wrangler versions secret bulk <FILENAME> [OPTIONS] ``` - `FILENAME` <Type text="string" /> <MetaInfo text="optional" /> - A file containing either [JSON](https://www.json.org/json-en.html) or the [.env](https://www.dotenv.org/docs/security/env) format - The JSON file containing key-value pairs to upload as secrets, in the form `{"SECRET_NAME": "secret value", ...}`. - The `.env` file containing key-value pairs to upload as secrets, in the form `SECRET_NAME=secret value`. - If omitted, Wrangler expects to receive input from `stdin` rather than a file. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--env` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific environment.
<Render file="wrangler-commands/global-flags" product="workers" /> --- ## `triggers` :::note The minimum required wrangler version to use these commands is 3.40.0. For versions before 3.73.0, you will need to add the `--x-versions` flag. ::: ### `deploy` Apply changes to triggers ([Routes or domains](/workers/configuration/routing/) and [Cron Triggers](/workers/configuration/cron-triggers/)) when using [`wrangler versions upload`](/workers/wrangler/commands/#upload). ```txt wrangler triggers deploy [OPTIONS] ``` - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `deployments` [Deployments](/workers/configuration/versions-and-deployments/#deployments) track the version(s) of your Worker that are actively serving traffic. ### `list` :::note The minimum required wrangler version to use these commands is 3.40.0. For versions before 3.73.0, you will need to add the `--x-versions` flag. ::: Retrieve details for the 10 most recent [deployments](/workers/configuration/versions-and-deployments/#deployments). Details include `Created on`, `Author`, `Source`, an optional `Message`, and metadata about the `Version(s)` in the deployment. ```txt wrangler deployments list [OPTIONS] ``` - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). <Render file="wrangler-commands/global-flags" product="workers" /> ### `status` Retrieve details for the most recent deployment. Details include `Created on`, `Author`, `Source`, an optional `Message`, and metadata about the `Version(s)` in the deployment. 
```txt wrangler deployments status ``` - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `rollback` :::caution A rollback will immediately create a new deployment with the specified version of your Worker and make it the active deployment across all your deployed routes and domains. This change will not affect work in your local development environment. ::: ```txt wrangler rollback [<VERSION_ID>] [OPTIONS] ``` - `VERSION_ID` <Type text="string" /> <MetaInfo text="optional" /> - The ID of the version you wish to roll back to. If not supplied, the `rollback` command defaults to the version uploaded before the latest version. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](/workers/wrangler/configuration/). - `--message` <Type text="string" /> <MetaInfo text="optional" /> - Add message for rollback. Accepts empty string. When specified, interactive prompts for rollback confirmation and message are skipped. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `dispatch-namespace` ### `list` List all dispatch namespaces. ```txt wrangler dispatch-namespace list ``` <Render file="wrangler-commands/global-flags" product="workers" /> ### `get` Get information about a dispatch namespace. ```txt wrangler dispatch-namespace get <NAME> ``` - `NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the dispatch namespace to get details about. <Render file="wrangler-commands/global-flags" product="workers" /> ### `create` Create a dispatch namespace. ```txt wrangler dispatch-namespace create <NAME> ``` - `NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the dispatch namespace to create.
<Render file="wrangler-commands/global-flags" product="workers" /> ### `delete` Delete a dispatch namespace. ```txt wrangler dispatch-namespace delete <NAME> ``` :::note You must delete all user Workers in the dispatch namespace before it can be deleted. ::: - `NAME` <Type text="string" /> <MetaInfo text="required" /> - The name of the dispatch namespace to delete. <Render file="wrangler-commands/global-flags" product="workers" /> ### `rename` Rename a dispatch namespace. ```txt wrangler dispatch-namespace rename <OLD_NAME> <NEW_NAME> ``` - `OLD_NAME` <Type text="string" /> <MetaInfo text="required" /> - The previous name of the dispatch namespace. - `NEW_NAME` <Type text="string" /> <MetaInfo text="required" /> - The new name of the dispatch namespace. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `mtls-certificate` Manage client certificates used for mTLS connections in subrequests. These certificates can be used in [`mtls_certificate` bindings](/workers/runtime-apis/bindings/mtls), which allow a Worker to present the certificate when establishing a connection with an origin that requires client authentication (mTLS). ### `upload` Upload a client certificate. ```txt wrangler mtls-certificate upload --cert <PATH> --key <PATH> [OPTIONS] ``` - `--cert` <Type text="string" /> <MetaInfo text="required" /> - A path to the TLS certificate to upload. Certificate chains are supported. - `--key` <Type text="string" /> <MetaInfo text="required" /> - A path to the private key to upload. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - The name assigned to the mTLS certificate at upload. <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `upload` command to upload an mTLS certificate. ```sh npx wrangler mtls-certificate upload --cert cert.pem --key key.pem --name my-origin-cert ``` ```sh output Uploading mTLS Certificate my-origin-cert... Success!
Uploaded mTLS Certificate my-origin-cert ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Expires: 1/01/2025 ``` You can then add this certificate as a [binding](/workers/runtime-apis/bindings/) in your [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml mtls_certificates = [ { binding = "MY_CERT", certificate_id = "99f5fef1-6cc1-46b8-bd79-44a0d5082b8d" } ] ``` </WranglerConfig> Note that the certificate and private key must be in separate (typically `.pem`) files when uploading. ### `list` List mTLS certificates associated with the current account ID. ```txt wrangler mtls-certificate list ``` <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `list` command to list your mTLS certificates. ```sh npx wrangler mtls-certificate list ``` ```sh output ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d Name: my-origin-cert Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Created on: 1/01/2023 Expires: 1/01/2025 ID: c5d004d1-8312-402c-b8ed-6194328d5cbe Issuer: CN=another-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Created on: 1/01/2023 Expires: 1/01/2025 ``` ### `delete` Delete a client certificate. ```txt wrangler mtls-certificate delete {--id <ID> | --name <NAME>} ``` - `--id` <Type text="string" /> - The ID of the mTLS certificate. - `--name` <Type text="string" /> - The name assigned to the mTLS certificate at upload. <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `delete` command to delete an mTLS certificate. ```sh npx wrangler mtls-certificate delete --id 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d ``` ```sh output Are you sure you want to delete certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d (my-origin-cert)? [y/n] yes Deleting certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d...
Deleted certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d successfully ``` --- ## `cert` Manage mTLS client certificates and Certificate Authority (CA) chain certificates used for secured connections. These certificates can be used in Hyperdrive configurations, allowing Hyperdrive to present the certificate when connecting to an origin database that requires client authentication (mTLS) or a custom Certificate Authority (CA). ### `upload mtls-certificate` Upload a client certificate. ```txt wrangler cert upload mtls-certificate --cert <PATH> --key <PATH> [OPTIONS] ``` - `--cert` <Type text="string" /> <MetaInfo text="required" /> - A path to the TLS certificate to upload. Certificate chains are supported. - `--key` <Type text="string" /> <MetaInfo text="required" /> - A path to the private key to upload. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - The name assigned to the mTLS certificate at upload. <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `upload` command to upload an mTLS certificate. ```sh npx wrangler cert upload mtls-certificate --cert cert.pem --key key.pem --name my-origin-cert ``` ```sh output Uploading mTLS Certificate my-origin-cert... Success! Uploaded mTLS Certificate my-origin-cert ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Expires: 1/01/2025 ``` Note that the certificate and private key must be in separate (typically `.pem`) files when uploading. ### `upload certificate-authority` Upload a Certificate Authority (CA) chain certificate. ```txt wrangler cert upload certificate-authority --ca-cert <PATH> [OPTIONS] ``` - `--ca-cert` <Type text="string" /> <MetaInfo text="required" /> - A path to the Certificate Authority (CA) chain certificate to upload. - `--name` <Type text="string" /> <MetaInfo text="optional" /> - The name assigned to the CA certificate at upload.
<Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `upload` command to upload a CA certificate. ```sh npx wrangler cert upload certificate-authority --ca-cert server-ca-chain.pem --name SERVER_CA_CHAIN ``` ```sh output Uploading CA Certificate SERVER_CA_CHAIN... Success! Uploaded CA Certificate SERVER_CA_CHAIN ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Expires: 1/01/2025 ``` ### `list` List mTLS certificates associated with the current account ID. This will display both mTLS certificates and CA certificates. ```txt wrangler cert list ``` <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `list` command to list your mTLS and CA certificates. ```sh npx wrangler cert list ``` ```sh output ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d Name: my-origin-cert Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Created on: 1/01/2023 Expires: 1/01/2025 ID: c5d004d1-8312-402c-b8ed-6194328d5cbe Issuer: CN=another-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US Created on: 1/01/2023 Expires: 1/01/2025 ``` ### `delete` Delete a client certificate. ```txt wrangler cert delete {--id <ID> | --name <NAME>} ``` - `--id` <Type text="string" /> - The ID of the mTLS or CA certificate. - `--name` <Type text="string" /> - The name assigned to the mTLS or CA certificate at upload. <Render file="wrangler-commands/global-flags" product="workers" /> The following is an example of using the `delete` command to delete an mTLS or CA certificate. ```sh npx wrangler cert delete --id 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d ``` ```sh output Are you sure you want to delete certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d (my-origin-cert)? [y/n] yes Deleting certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d...
Deleted certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d successfully ``` --- ## `types` Generate types from bindings and module rules in configuration. ```txt wrangler types [<PATH>] [OPTIONS] ``` :::note The `--experimental-include-runtime` flag dynamically generates runtime types according to the `compatibility_date` and `compatibility_flags` defined in your [config file](/workers/wrangler/configuration/). It is a replacement for the [`@cloudflare/workers-types` package](https://www.npmjs.com/package/@cloudflare/workers-types), so that package, if installed, should be uninstalled to avoid any potential conflict. After running the command, you must add the path of the generated runtime types file to the [`compilerOptions.types` field](https://www.typescriptlang.org/tsconfig/#types) in your project's [tsconfig.json](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html) file. You may use the shorthand `--x-include-runtime` flag in place of `--experimental-include-runtime` anywhere it is mentioned. The minimum required Wrangler version to use this command is 3.66.0. ::: - `PATH` <Type text="string" /> <MetaInfo text="(default: `./worker-configuration.d.ts`)" /> - The path to where **the `Env` types** for your Worker will be written. - The path must have a `d.ts` extension. - `--env-interface` <Type text="string" /> <MetaInfo text="(default: `Env`)" /> - The name of the interface to generate for the environment object. - Not valid if the Worker uses the Service Worker syntax. - `--experimental-include-runtime` <Type text="string" /> <MetaInfo text="optional (default: `./.wrangler/types/runtime.d.ts`)" /> - The path to where the **runtime types** file will be written. - Leave the path blank to use the default option, e.g. `npx wrangler types --x-include-runtime` - A custom path must be relative to the project root, e.g. `./my-runtime-types.d.ts` - A custom path must have a `d.ts` extension. 
- `--strict-vars` <Type text="boolean" /> <MetaInfo text="optional (default: true)" /> - Control the types that Wrangler generates for `vars` bindings. - If `true`, (the default) Wrangler generates literal and union types for bindings (e.g. `myEnv: 'my dev variable' | 'my prod variable'`). - If `false`, Wrangler generates generic types (e.g. `myEnv: string`). This is useful when variables change frequently, especially when working across multiple environments. <Render file="wrangler-commands/global-flags" product="workers" /> --- ## `telemetry` Cloudflare collects anonymous usage data to improve Wrangler. You can learn more about this in our [data policy](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler/telemetry.md). You can manage sharing of usage data at any time using these commands. ### `disable` Disable telemetry collection for Wrangler. ```txt wrangler telemetry disable ``` ### `enable` Enable telemetry collection for Wrangler. ```txt wrangler telemetry enable ``` ### `status` Check whether telemetry collection is currently enabled. The return result is specific to the directory where you have run the command. This will resolve the global status set by `wrangler telemetry disable / enable`, the environment variable [`WRANGLER_SEND_METRICS`](/workers/wrangler/system-environment-variables/#supported-environment-variables), and the [`send_metrics`](/workers/wrangler/configuration/#top-level-only-keys) key in the [Wrangler configuration file](/workers/wrangler/configuration/). ```txt wrangler telemetry status ``` --- # Configuration URL: https://developers.cloudflare.com/workers/wrangler/configuration/ import { Render, Type, MetaInfo, WranglerConfig } from "~/components"; import { FileTree } from "@astrojs/starlight/components"; Wrangler optionally uses a configuration file to customize the development and deployment setup for a Worker. 
:::note As of Wrangler v3.91.0, Wrangler supports both JSON (`wrangler.json` or `wrangler.jsonc`) and TOML (`wrangler.toml`) for its configuration file. Prior to that version, only `wrangler.toml` was supported. The structure of Wrangler's configuration file is exactly the same in both file types; only the syntax differs between JSON and TOML. Throughout this page and the rest of Cloudflare's documentation, config snippets are provided as both JSON and TOML. ::: It is best practice to treat Wrangler's configuration file as the [source of truth](#source-of-truth) for configuring a Worker. ## Sample Wrangler configuration <WranglerConfig> ```toml # Top-level configuration name = "my-worker" main = "src/index.js" compatibility_date = "2022-07-12" workers_dev = false route = { pattern = "example.org/*", zone_name = "example.org" } kv_namespaces = [ { binding = "<MY_NAMESPACE>", id = "<KV_ID>" } ] [env.staging] name = "my-worker-staging" route = { pattern = "staging.example.org/*", zone_name = "example.org" } kv_namespaces = [ { binding = "<MY_NAMESPACE>", id = "<STAGING_KV_ID>" } ] ``` </WranglerConfig> ## Environments The configuration for a Worker can become complex when you define different [environments](/workers/wrangler/environments/), and each environment has its own configuration. There is a default (top-level) environment and named environments that provide environment-specific configuration. These are defined under `[env.name]` keys, such as `[env.staging]`, which you can then preview or deploy with the `-e` / `--env` flag in `wrangler` commands, such as `npx wrangler deploy --env staging`. The majority of keys are inheritable, meaning that top-level configuration can be used in environments. [Bindings](/workers/runtime-apis/bindings/), such as `vars` or `kv_namespaces`, are not inheritable and need to be defined explicitly. Further, there are a few keys that can _only_ appear at the top level.
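The inheritance rules described above can be sketched as follows (a minimal illustration; the names, IDs, and values are placeholders, not a recommended setup):

<WranglerConfig>

```toml
# Top-level (default) environment
name = "my-worker"
main = "src/index.js"
compatibility_date = "2022-07-12"
kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<KV_ID>" }
]

# `main` and `compatibility_date` are inherited from the top level.
# Bindings such as `kv_namespaces` are NOT inherited, so they must be
# re-declared here for the environment to have access to them.
[env.staging]
name = "my-worker-staging"
kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<STAGING_KV_ID>" }
]
```

</WranglerConfig>

With this configuration, `npx wrangler deploy --env staging` deploys `my-worker-staging` with the staging namespace binding, while a plain `npx wrangler deploy` uses the top-level configuration.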
## Top-level only keys Top-level keys apply to the Worker as a whole (and therefore all environments). They cannot be defined within named environments. - `keep_vars` <Type text="boolean" /> <MetaInfo text="optional" /> - Whether Wrangler should keep variables configured in the dashboard on deploy. Refer to [source of truth](#source-of-truth). - `migrations` <Type text="object[]" /> <MetaInfo text="optional" /> - When making changes to your Durable Object classes, you must perform a migration. Refer to [Durable Object migrations](/durable-objects/reference/durable-objects-migrations/). - `send_metrics` <Type text="boolean" /> <MetaInfo text="optional" /> - Whether Wrangler should send usage data to Cloudflare for this project. Defaults to `true`. You can learn more about this in our [data policy](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler/telemetry.md). - `site` <Type text="object" /> <MetaInfo text="optional deprecated" /> - See the [Workers Sites](#workers-sites) section below for more information. Cloudflare Pages and Workers Assets are preferred over this approach. ## Inheritable keys Inheritable keys are configurable at the top-level, and can be inherited (or overridden) by environment-specific configuration. :::note At a minimum, the `name`, `main` and `compatibility_date` keys are required to deploy a Worker. ::: - `name` <Type text="string" /> <MetaInfo text="required" /> - The name of your Worker. Alphanumeric characters (`a`, `b`, `c`, etc.) and dashes (`-`) only. Do not use underscores (`_`). - `main` <Type text="string" /> <MetaInfo text="required" /> - The path to the entrypoint of your Worker that will be executed. For example: `./src/index.ts`. - `compatibility_date` <Type text="string" /> <MetaInfo text="required" /> - A date in the form `yyyy-mm-dd`, which will be used to determine which version of the Workers runtime is used. Refer to [Compatibility dates](/workers/configuration/compatibility-dates/).
- `account_id` <Type text="string" /> <MetaInfo text="optional" /> - This is the ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the zone/route you provide, if you provide one. It can also be specified through the `CLOUDFLARE_ACCOUNT_ID` environment variable. - `compatibility_flags` <Type text="string[]" /> <MetaInfo text="optional" /> - A list of flags that enable upcoming features of the Workers runtime, usually used together with `compatibility_date`. Refer to [compatibility dates](/workers/configuration/compatibility-dates/). - `workers_dev` <Type text="boolean" /> <MetaInfo text="optional" /> - Enables use of `*.workers.dev` subdomain to deploy your Worker. If you have a Worker that is only for `scheduled` events, you can set this to `false`. Defaults to `true`. Refer to [types of routes](#types-of-routes). - `preview_urls` <Type text="boolean" /> <MetaInfo text="optional" /> - Enables use of Preview URLs to test your Worker. Defaults to `true`. Refer to [Preview URLs](/workers/configuration/previews). - `route` <Type text="Route" /> <MetaInfo text="optional" /> - A route that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to [types of routes](#types-of-routes). - `routes` <Type text="Route[]" /> <MetaInfo text="optional" /> - An array of routes that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to [types of routes](#types-of-routes). - `tsconfig` <Type text="string" /> <MetaInfo text="optional" /> - Path to a custom `tsconfig`. - `triggers` <Type text="object" /> <MetaInfo text="optional" /> - Cron definitions to trigger a Worker's `scheduled` function. Refer to [triggers](#triggers). - `rules` <Type text="Rule" /> <MetaInfo text="optional" /> - An ordered list of rules that define which modules to import, and what type to import them as.
You will need to specify rules to use `Text`, `Data` and `CompiledWasm` modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`.

- `build` <Type text="Build" /> <MetaInfo text="optional" />
  - Configures a custom build step to be run by Wrangler when building your Worker. Refer to [Custom builds](#custom-builds).
- `no_bundle` <Type text="boolean" /> <MetaInfo text="optional" />
  - Skip internal build steps and directly deploy your Worker script. You must have a plain JavaScript Worker with no dependencies.
- `find_additional_modules` <Type text="boolean" /> <MetaInfo text="optional" />
  - If true, Wrangler will traverse the file tree below `base_dir`. Any files that match `rules` will be included in the deployed Worker. Defaults to true if `no_bundle` is true, otherwise false. Can only be used with Module format Workers (not Service Worker format).
- `base_dir` <Type text="string" /> <MetaInfo text="optional" />
  - The directory in which module "rules" should be evaluated when including additional files (via `find_additional_modules`) into a Worker deployment. Defaults to the directory containing the `main` entry point of the Worker if not specified.
- `preserve_file_names` <Type text="boolean" /> <MetaInfo text="optional" />
  - Determines whether Wrangler will preserve the file names of additional modules bundled with the Worker. The default is to prepend filenames with a content hash. For example, `34de60b44167af5c5a709e62a4e20c4f18c9e3b6-favicon.ico`.
- `minify` <Type text="boolean" /> <MetaInfo text="optional" />
  - Minify the Worker script before uploading.
- `node_compat` <Type text="boolean" /> <MetaInfo text="optional" />
  - Deprecated. Instead, [enable the `nodejs_compat` compatibility flag](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag), which enables built-in Node.js APIs and adds polyfills as necessary.
Setting `node_compat = true` will add polyfills for Node.js built-in modules and globals to your Worker's code when bundled with Wrangler. This is powered by `@esbuild-plugins/node-globals-polyfill`, which is itself powered by [rollup-plugin-node-polyfills](https://github.com/ionic-team/rollup-plugin-node-polyfills/).

- `logpush` <Type text="boolean" /> <MetaInfo text="optional" />
  - Enables Workers Trace Events Logpush for a Worker. Any scripts with this property will automatically get picked up by the Workers Logpush job configured for your account. Defaults to `false`. Refer to [Workers Logpush](/workers/observability/logs/logpush/).
- `limits` <Type text="Limits" /> <MetaInfo text="optional" />
  - Configures limits to be imposed on execution at runtime. Refer to [Limits](#limits).
- `observability` <Type text="object" /> <MetaInfo text="optional" />
  - Configures automatic observability settings for telemetry data emitted from your Worker. Refer to [Observability](#observability).
- `assets` <Type text="Assets" /> <MetaInfo text="optional" />
  - Configures static assets that will be served. Refer to [Assets](/workers/static-assets/binding/) for more details.

### Usage model

As of March 1, 2024, the [usage model](/workers/platform/pricing/#workers) configured in your Worker's configuration file will be ignored. The [Standard](/workers/platform/pricing/#example-pricing-standard-usage-model) usage model applies.

Some Workers Enterprise customers maintain the ability to change usage models. Your usage model must be configured through the Cloudflare dashboard by going to **Workers & Pages** > select your Worker > **Settings** > **Usage Model**.

## Non-inheritable keys

Non-inheritable keys are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.

- `define` <Type text="Record<string, string>" /> <MetaInfo text="optional" />
  - A map of values to substitute when deploying your Worker.
- `vars` <Type text="object" /> <MetaInfo text="optional" /> - A map of environment variables to set when deploying your Worker. Refer to [Environment variables](/workers/configuration/environment-variables/). - `durable_objects` <Type text="object" /> <MetaInfo text="optional" /> - A list of Durable Objects that your Worker should be bound to. Refer to [Durable Objects](#durable-objects). - `kv_namespaces` <Type text="object" /> <MetaInfo text="optional" /> - A list of KV namespaces that your Worker should be bound to. Refer to [KV namespaces](#kv-namespaces). - `r2_buckets` <Type text="object" /> <MetaInfo text="optional" /> - A list of R2 buckets that your Worker should be bound to. Refer to [R2 buckets](#r2-buckets). - `vectorize` <Type text="object" /> <MetaInfo text="optional" /> - A list of Vectorize indexes that your Worker should be bound to. Refer to [Vectorize indexes](#vectorize-indexes). - `services` <Type text="object" /> <MetaInfo text="optional" /> - A list of service bindings that your Worker should be bound to. Refer to [service bindings](#service-bindings). - `tail_consumers` <Type text="object" /> <MetaInfo text="optional" /> - A list of the Tail Workers your Worker sends data to. Refer to [Tail Workers](/workers/observability/logs/tail-workers/). ## Types of routes There are three types of [routes](/workers/configuration/routing/): [Custom Domains](/workers/configuration/routing/custom-domains/), [routes](/workers/configuration/routing/routes/), and [`workers.dev`](/workers/configuration/routing/workers-dev/). ### Custom Domains [Custom Domains](/workers/configuration/routing/custom-domains/) allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management. - `pattern` <Type text="string" /> <MetaInfo text="required" /> - The pattern that your Worker should be run on, for example, `"example.com"`. 
- `custom_domain` <Type text="boolean" /> <MetaInfo text="optional" /> - Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`. Example: <WranglerConfig> ```toml title="wrangler.toml" routes = [ { pattern = "shop.example.com", custom_domain = true } ] ``` </WranglerConfig> ### Routes [Routes](/workers/configuration/routing/routes/) allow users to map a URL pattern to a Worker. A route can be configured as a zone ID route, a zone name route, or a simple route. #### Zone ID route - `pattern` <Type text="string" /> <MetaInfo text="required" /> - The pattern that your Worker can be run on, for example,`"example.com/*"`. - `zone_id` <Type text="string" /> <MetaInfo text="required" /> - The ID of the zone that your `pattern` is associated with. Refer to [Find zone and account IDs](/fundamentals/setup/find-account-and-zone-ids/). Example: <WranglerConfig> ```toml title="wrangler.toml" routes = [ { pattern = "subdomain.example.com/*", zone_id = "<YOUR_ZONE_ID>" } ] ``` </WranglerConfig> #### Zone name route - `pattern` <Type text="string" /> <MetaInfo text="required" /> - The pattern that your Worker should be run on, for example, `"example.com/*"`. - `zone_name` <Type text="string" /> <MetaInfo text="required" /> - The name of the zone that your `pattern` is associated with. If you are using API tokens, this will require the `Account` scope. Example: <WranglerConfig> ```toml title="wrangler.toml" routes = [ { pattern = "subdomain.example.com/*", zone_name = "example.com" } ] ``` </WranglerConfig> #### Simple route This is a simple route that only requires a pattern. Example: <WranglerConfig> ```toml title="wrangler.toml" route = "example.com/*" ``` </WranglerConfig> ### `workers.dev` Cloudflare Workers accounts come with a `workers.dev` subdomain that is configurable in the Cloudflare dashboard. - `workers_dev` <Type text="boolean" /> <MetaInfo text="optional" /> - Whether the Worker runs on a custom `workers.dev` account subdomain. 
Defaults to `true`.

<WranglerConfig>

```toml title="wrangler.toml"
workers_dev = false
```

</WranglerConfig>

## Triggers

Triggers allow you to define the `cron` expression to invoke your Worker's `scheduled` function. Refer to [Supported cron expressions](/workers/configuration/cron-triggers/#supported-cron-expressions).

- `crons` <Type text="string[]" /> <MetaInfo text="required" />
  - An array of `cron` expressions.
  - To disable a Cron Trigger, set `crons = []`. Commenting out the `crons` key will not disable a Cron Trigger.

Example:

<WranglerConfig>

```toml title="wrangler.toml"
[triggers]
crons = ["* * * * *"]
```

</WranglerConfig>

## Observability

The [Observability](/workers/observability/logs/workers-logs) setting allows you to automatically ingest, store, filter, and analyze logging data emitted from Cloudflare Workers directly from the Cloudflare dashboard.

- `enabled` <Type text="boolean" /> <MetaInfo text="required" />
  - When set to `true` on a Worker, logs for the Worker are persisted. Defaults to `true` for all new Workers.
- `head_sampling_rate` <Type text="number" /> <MetaInfo text="optional" />
  - A number between 0 and 1, where 0 indicates zero out of one hundred requests are logged, and 1 indicates every request is logged. If `head_sampling_rate` is unspecified, it defaults to 1 (100%). Read more about [head-based sampling](/workers/observability/logs/workers-logs/#head-based-sampling).

Example:

<WranglerConfig>

```toml title="wrangler.toml"
[observability]
enabled = true
head_sampling_rate = 0.1 # 10% of requests are logged
```

</WranglerConfig>

## Custom builds

You can configure a custom build step that will be run before your Worker is deployed. Refer to [Custom builds](/workers/wrangler/custom-builds/).

- `command` <Type text="string" /> <MetaInfo text="optional" />
  - The command used to build your Worker. The command is executed in the `sh` shell on Linux and macOS, and in the `cmd` shell on Windows.
The `&&` and `||` shell operators may be used. - `cwd` <Type text="string" /> <MetaInfo text="optional" /> - The directory in which the command is executed. - `watch_dir` <Type text="string | string[]" /> <MetaInfo text="optional" /> - The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory. Example: <WranglerConfig> ```toml title="wrangler.toml" [build] command = "npm run build" cwd = "build_cwd" watch_dir = "build_watch_dir" ``` </WranglerConfig> ## Limits You can impose limits on your Worker's behavior at runtime. Limits are only supported for the [Standard Usage Model](/workers/platform/pricing/#example-pricing-standard-usage-model). Limits are only enforced when deployed to Cloudflare's network, not in local development. The CPU limit can be set to a maximum of 30,000 milliseconds (30 seconds). <Render file="isolate-cpu-flexibility" /> <br /> - `cpu_ms` <Type text="number" /> <MetaInfo text="optional" /> - The maximum CPU time allowed per invocation, in milliseconds. Example: <WranglerConfig> ```toml title="wrangler.toml" [limits] cpu_ms = 100 ``` </WranglerConfig> ## Bindings ### Browser Rendering The [Workers Browser Rendering API](/browser-rendering/) allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products. A [browser binding](/workers/runtime-apis/bindings/) will provide your Worker with an authenticated endpoint to interact with a dedicated Chromium browser instance. - `binding` <Type text="string" /> <MetaInfo text="required" /> - The binding name used to refer to the browser binding. The value (string) you set will be used to reference this headless browser in your Worker. The binding must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). 
For example, `binding = "HEAD_LESS"` or `binding = "simulatedBrowser"` would both be valid names for the binding.

Example:

<WranglerConfig>

```toml title="wrangler.toml"
[browser]
binding = "<BINDING_NAME>"
```

</WranglerConfig>

### D1 databases

[D1](/d1/) is Cloudflare's serverless SQL database. A Worker can query a D1 database (or databases) by creating a [binding](/workers/runtime-apis/bindings/) to each database via the [D1 Workers Binding API](/d1/worker-api/).

To bind D1 databases to your Worker, assign an array of the below object to the `[[d1_databases]]` key.

- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The binding name used to refer to the D1 database. The value (string) you set will be used to reference this database in your Worker. The binding must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding.
- `database_name` <Type text="string" /> <MetaInfo text="required" />
  - The name of the database. This is a human-readable name that allows you to distinguish between different databases, and is set when you first create the database.
- `database_id` <Type text="string" /> <MetaInfo text="required" />
  - The ID of the database. The database ID is available when you first use `wrangler d1 create` or when you call `wrangler d1 list`, and uniquely identifies your database.
- `preview_database_id` <Type text="string" /> <MetaInfo text="optional" />
  - The preview ID of this D1 database. If provided, `wrangler dev` uses this ID. Otherwise, it uses `database_id`. This option is required when using `wrangler dev --remote`.
- `migrations_dir` <Type text="string" /> <MetaInfo text="optional" />
  - The migration directory containing the migration files. By default, `wrangler d1 migrations create` creates a folder named `migrations`.
You can use `migrations_dir` to specify a different folder containing the migration files (for example, if you have a monorepo setup, and want to use a single D1 instance across your apps/packages).

- For more information, refer to [D1 Wrangler `migrations` commands](/workers/wrangler/commands/#migrations-create) and [D1 migrations](/d1/reference/migrations/).

:::note

When using Wrangler in the default local development mode, files will be written to local storage instead of the preview or production database. Refer to [Local development and testing](/workers/local-development/) for more details.

:::

Example:

<WranglerConfig>

```toml title="wrangler.toml"
[[d1_databases]]
binding = "<BINDING_NAME>"
database_name = "<DATABASE_NAME>"
database_id = "<DATABASE_ID>"
```

</WranglerConfig>

### Dispatch namespace bindings (Workers for Platforms)

Dispatch namespace bindings allow for communication between a [dynamic dispatch Worker](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dynamic-dispatch-worker) and a [dispatch namespace](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dispatch-namespace). Dispatch namespace bindings are used in [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/). Workers for Platforms helps you deploy serverless functions programmatically on behalf of your customers.

- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The binding name. The value (string) you set will be used to reference this dispatch namespace in your Worker. The binding must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_NAMESPACE"` or `binding = "productionNamespace"` would both be valid names for the binding.
- `namespace` <Type text="string" /> <MetaInfo text="required" />
  - The name of the [dispatch namespace](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dispatch-namespace).
- `outbound` <Type text="object" /> <MetaInfo text="optional" />
  - `service` <Type text="string" /> <MetaInfo text="required" /> The name of the [outbound Worker](/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) to bind to.
  - `parameters` <Type text="array" /> <MetaInfo text="optional" /> A list of parameters to pass data from your [dynamic dispatch Worker](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dynamic-dispatch-worker) to the [outbound Worker](/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/).

<WranglerConfig>

```toml title="wrangler.toml"
[[dispatch_namespaces]]
binding = "<BINDING_NAME>"
namespace = "<NAMESPACE_NAME>"
outbound = {service = "<WORKER_NAME>", parameters = ["params_object"]}
```

</WranglerConfig>

### Durable Objects

[Durable Objects](/durable-objects/) provide low-latency coordination and consistent storage for the Workers platform.

To bind Durable Objects to your Worker, assign an array of the below object to the `durable_objects.bindings` key.

- `name` <Type text="string" /> <MetaInfo text="required" />
  - The name of the binding used to refer to the Durable Object.
- `class_name` <Type text="string" /> <MetaInfo text="required" />
  - The exported class name of the Durable Object.
- `script_name` <Type text="string" /> <MetaInfo text="optional" />
  - The name of the Worker where the Durable Object is defined, if it is external to this Worker. This option can be used both in local and remote development. In local development, you must run the external Worker in a separate process (via `wrangler dev`). In remote development, the appropriate remote binding must be used.
- `environment` <Type text="string" /> <MetaInfo text="optional" /> - The environment of the `script_name` to bind to. Example: <WranglerConfig> ```toml title="wrangler.toml" [[durable_objects.bindings]] name = "<BINDING_NAME>" class_name = "<CLASS_NAME>" ``` </WranglerConfig> #### Migrations When making changes to your Durable Object classes, you must perform a migration. Refer to [Durable Object migrations](/durable-objects/reference/durable-objects-migrations/). - `tag` <Type text="string" /> <MetaInfo text="required" /> - A unique identifier for this migration. - `new_classes` <Type text="string[]" /> <MetaInfo text="optional" /> - The new Durable Objects being defined. - `renamed_classes` <Type text="{from: string, to: string}[]" /> <MetaInfo text="optional" /> - The Durable Objects being renamed. - `deleted_classes` <Type text="string[]" /> <MetaInfo text="optional" /> - The Durable Objects being removed. Example: <WranglerConfig> ```toml title="wrangler.toml" [[migrations]] tag = "v1" # Should be unique for each entry new_classes = ["DurableObjectExample"] # Array of new classes [[migrations]] tag = "v2" renamed_classes = [{from = "DurableObjectExample", to = "UpdatedName" }] # Array of rename directives deleted_classes = ["DeprecatedClass"] # Array of deleted class names ``` </WranglerConfig> ### Email bindings <Render file="send-emails-workers-intro" product="email-routing" params={{ one: "Then, assign an array to the object (send_email) with the type of email binding you need.", }} /> - `name` <Type text="string" /> <MetaInfo text="required" /> - The binding name. - `destination_address` <Type text="string" /> <MetaInfo text="optional" /> - The [chosen email address](/email-routing/email-workers/send-email-workers/#types-of-bindings) you send emails to. 
- `allowed_destination_addresses` <Type text="string[]" /> <MetaInfo text="optional" />
  - The [allowlist of email addresses](/email-routing/email-workers/send-email-workers/#types-of-bindings) you send emails to.

<Render file="types-bindings" product="email-routing" />

### Environment variables

[Environment variables](/workers/configuration/environment-variables/) are a type of binding that allow you to attach text strings or JSON values to your Worker.

Example:

<Render file="envvar-example" />

### Hyperdrive

[Hyperdrive](/hyperdrive/) bindings allow you to interact with and query any Postgres database from within a Worker.

- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The binding name.
- `id` <Type text="string" /> <MetaInfo text="required" />
  - The ID of the Hyperdrive configuration.

Example:

<WranglerConfig>

```toml title="wrangler.toml"
# required for database drivers to function
compatibility_flags = ["nodejs_compat_v2"]

[[hyperdrive]]
binding = "<BINDING_NAME>"
id = "<ID>"
```

</WranglerConfig>

### Images

[Cloudflare Images](/images/transform-images/transform-via-workers/) lets you make transformation requests to optimize, resize, and manipulate images stored in remote sources.

To bind Images to your Worker, assign an array of the below object to the `images` key.

- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The name of the binding used to refer to the Images API.

<WranglerConfig>

```toml title="wrangler.toml"
[[images]]
binding = "IMAGES" # i.e. available in your Worker on env.IMAGES
```

</WranglerConfig>

### KV namespaces

[Workers KV](/kv/api/) is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.

To bind KV namespaces to your Worker, assign an array of the below object to the `kv_namespaces` key.

- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The binding name used to refer to the KV namespace.
- `id` <Type text="string" /> <MetaInfo text="required" />
  - The ID of the KV namespace.
- `preview_id` <Type text="string" /> <MetaInfo text="optional" />
  - The preview ID of this KV namespace. If provided, `wrangler dev` will use this ID for the KV namespace; otherwise, it will use `id`. This option is **required** when using `wrangler dev --remote` to develop against remote resources, and optional when developing locally (without `--remote`).

:::note

When using Wrangler in the default local development mode, files will be written to local storage instead of the preview or production namespace. Refer to [Local development and testing](/workers/local-development/) for more details.

:::

Example:

<WranglerConfig>

```toml title="wrangler.toml"
[[kv_namespaces]]
binding = "<BINDING_NAME1>"
id = "<NAMESPACE_ID1>"

[[kv_namespaces]]
binding = "<BINDING_NAME2>"
id = "<NAMESPACE_ID2>"
```

</WranglerConfig>

### Queues

[Queues](/queues/) is Cloudflare's global message queueing service, providing [guaranteed delivery](/queues/reference/delivery-guarantees/) and [message batching](/queues/configuration/batching-retries/). To interact with a queue with Workers, you need a producer Worker to send messages to the queue and a consumer Worker to pull batches of messages out of the Queue. A single Worker can produce to and consume from multiple Queues.

To bind Queues to your producer Worker, assign an array of the below object to the `[[queues.producers]]` key.

- `queue` <Type text="string" /> <MetaInfo text="required" />
  - The name of the queue, used on the Cloudflare dashboard.
- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The binding name used to refer to the queue in your Worker. The binding must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_QUEUE"` or `binding = "productionQueue"` would both be valid names for the binding.
- `delivery_delay` <Type text="number" /> <MetaInfo text="optional" />
  - The default number of seconds to [delay messages sent to a queue](/queues/configuration/batching-retries/#delay-messages). This can be overridden on a per-message or per-batch basis.

Example:

<WranglerConfig>

```toml title="wrangler.toml"
[[queues.producers]]
binding = "<BINDING_NAME>"
queue = "<QUEUE_NAME>"
delivery_delay = 60 # Delay messages by 60 seconds before they are delivered to a consumer
```

</WranglerConfig>

To bind Queues to your consumer Worker, assign an array of the below object to the `[[queues.consumers]]` key.

- `queue` <Type text="string" /> <MetaInfo text="required" />
  - The name of the queue, used on the Cloudflare dashboard.
- `max_batch_size` <Type text="number" /> <MetaInfo text="optional" />
  - The maximum number of messages allowed in each batch.
- `max_batch_timeout` <Type text="number" /> <MetaInfo text="optional" />
  - The maximum number of seconds to wait for messages to fill a batch before the batch is sent to the consumer Worker.
- `max_retries` <Type text="number" /> <MetaInfo text="optional" />
  - The maximum number of retries for a message, if it fails or [`retryAll()`](/queues/configuration/javascript-apis/#messagebatch) is invoked.
- `dead_letter_queue` <Type text="string" /> <MetaInfo text="optional" />
  - The name of another queue to send a message to if it fails processing at least `max_retries` times.
  - If a `dead_letter_queue` is not defined, messages that repeatedly fail processing will be discarded.
  - If there is no queue with the specified name, it will be created automatically.
- `max_concurrency` <Type text="number" /> <MetaInfo text="optional" />
  - The maximum number of concurrent consumers allowed to run at once. Leaving this unset means the number of invocations will scale to the [currently supported maximum](/queues/platform/limits/).
- Refer to [Consumer concurrency](/queues/configuration/consumer-concurrency/) for more information on how consumers autoscale, particularly when messages are retried. - `retry_delay` <Type text="number" /> <MetaInfo text="optional" /> - The number of seconds to [delay retried messages](/queues/configuration/batching-retries/#delay-messages) for by default, before they are re-delivered to the consumer. This can be overridden on a per-message or per-batch basis [when retrying messages](/queues/configuration/batching-retries/#explicit-acknowledgement-and-retries). Example: <WranglerConfig> ```toml title="wrangler.toml" [[queues.consumers]] queue = "my-queue" max_batch_size = 10 max_batch_timeout = 30 max_retries = 10 dead_letter_queue = "my-queue-dlq" max_concurrency = 5 retry_delay = 120 # Delay retried messages by 2 minutes before re-attempting delivery ``` </WranglerConfig> ### R2 buckets [Cloudflare R2 Storage](/r2) allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. To bind R2 buckets to your Worker, assign an array of the below object to the `r2_buckets` key. - `binding` <Type text="string" /> <MetaInfo text="required" /> - The binding name used to refer to the R2 bucket. - `bucket_name` <Type text="string" /> <MetaInfo text="required" /> - The name of this R2 bucket. - `jurisdiction` <Type text="string" /> <MetaInfo text="optional" /> - The jurisdiction where this R2 bucket is located, if a jurisdiction has been specified. Refer to [Jurisdictional Restrictions](/r2/reference/data-location/#jurisdictional-restrictions). - `preview_bucket_name` <Type text="string" /> <MetaInfo text="optional" /> - The preview name of this R2 bucket. If provided, `wrangler dev` will use this name for the R2 bucket. Otherwise, it will use `bucket_name`. This option is required when using `wrangler dev --remote`. 
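The `preview_bucket_name` fallback described above can be sketched as a small helper. This is an illustration only; the function below is hypothetical and not part of Wrangler:

```javascript
// Hypothetical helper (not Wrangler's actual code): picks the bucket name
// `wrangler dev` would target for an entry in `r2_buckets`, per the rules above.
function resolveDevBucketName(r2Config) {
  // Prefer the preview bucket when one is configured...
  if (r2Config.preview_bucket_name !== undefined) {
    return r2Config.preview_bucket_name;
  }
  // ...otherwise fall back to the production bucket name.
  return r2Config.bucket_name;
}

console.log(resolveDevBucketName({ binding: "MY_BUCKET", bucket_name: "prod-bucket" }));
// -> "prod-bucket"
console.log(
  resolveDevBucketName({
    binding: "MY_BUCKET",
    bucket_name: "prod-bucket",
    preview_bucket_name: "staging-bucket",
  }),
);
// -> "staging-bucket"
```

The same preview-over-production fallback applies to KV namespaces (`preview_id` over `id`) and D1 databases (`preview_database_id` over `database_id`).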
:::note When using Wrangler in the default local development mode, files will be written to local storage instead of the preview or production bucket. Refer to [Local development and testing](/workers/local-development/) for more details. ::: Example: <WranglerConfig> ```toml title="wrangler.toml" [[r2_buckets]] binding = "<BINDING_NAME1>" bucket_name = "<BUCKET_NAME1>" [[r2_buckets]] binding = "<BINDING_NAME2>" bucket_name = "<BUCKET_NAME2>" ``` </WranglerConfig> ### Vectorize indexes A [Vectorize index](/vectorize/) allows you to insert and query vector embeddings for semantic search, classification and other vector search use-cases. To bind Vectorize indexes to your Worker, assign an array of the below object to the `vectorize` key. - `binding` <Type text="string" /> <MetaInfo text="required" /> - The binding name used to refer to the bound index from your Worker code. - `index_name` <Type text="string" /> <MetaInfo text="required" /> - The name of the index to bind. Example: <WranglerConfig> ```toml title="wrangler.toml" [[vectorize]] binding = "<BINDING_NAME>" index_name = "<INDEX_NAME>" ``` </WranglerConfig> ### Service bindings A service binding allows you to send HTTP requests to another Worker without those requests going over the Internet. The request immediately invokes the downstream Worker, reducing latency as compared to a request to a third-party service. Refer to [About Service Bindings](/workers/runtime-apis/bindings/service-bindings/). To bind other Workers to your Worker, assign an array of the below object to the `services` key. - `binding` <Type text="string" /> <MetaInfo text="required" /> - The binding name used to refer to the bound Worker. - `service` <Type text="string" /> <MetaInfo text="required" /> - The name of the Worker. - `entrypoint` <Type text="string" /> <MetaInfo text="optional" /> - The name of the [entrypoint](/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints) to bind to. 
If you do not specify an entrypoint, the default export of the Worker will be used. Example: <WranglerConfig> ```toml title="wrangler.toml" [[services]] binding = "<BINDING_NAME>" service = "<WORKER_NAME>" entrypoint = "<ENTRYPOINT_NAME>" ``` </WranglerConfig> ### Static assets Refer to [Assets](#assets). ### Analytics Engine Datasets [Workers Analytics Engine](/analytics/analytics-engine/) provides analytics, observability and data logging from Workers. Write data points to your Worker binding then query the data using the [SQL API](/analytics/analytics-engine/sql-api/). To bind Analytics Engine datasets to your Worker, assign an array of the below object to the `analytics_engine_datasets` key. - `binding` <Type text="string" /> <MetaInfo text="required" /> - The binding name used to refer to the dataset. - `dataset` <Type text="string" /> <MetaInfo text="optional" /> - The dataset name to write to. This will default to the same name as the binding if it is not supplied. Example: <WranglerConfig> ```toml [[analytics_engine_datasets]] binding = "<BINDING_NAME>" dataset = "<DATASET_NAME>" ``` </WranglerConfig> ### mTLS Certificates To communicate with origins that require client authentication, a Worker can present a certificate for mTLS in subrequests. Wrangler provides the `mtls-certificate` [command](/workers/wrangler/commands#mtls-certificate) to upload and manage these certificates. To create a [binding](/workers/runtime-apis/bindings/) to an mTLS certificate for your Worker, assign an array of objects with the following shape to the `mtls_certificates` key. - `binding` <Type text="string" /> <MetaInfo text="required" /> - The binding name used to refer to the certificate. - `certificate_id` <Type text="string" /> <MetaInfo text="required" /> - The ID of the certificate. Wrangler displays this via the `mtls-certificate upload` and `mtls-certificate list` commands. 
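At runtime, each mTLS certificate binding exposes a `fetch` method that makes subrequests using the configured client certificate. A minimal sketch of the call shape, using a hypothetical `SECURE_ORIGIN` binding name and a stubbed `env` so the snippet can run outside the Workers runtime:

```javascript
// Sketch only: `SECURE_ORIGIN` and the URL are illustrative names, and the
// stub below stands in for the real binding provided by the Workers runtime.
async function callSecuredOrigin(env) {
  // The binding's `fetch` presents the uploaded client certificate for mTLS.
  return env.SECURE_ORIGIN.fetch("https://secure.example.com/api");
}

// A stubbed `env` for local experimentation; in production the real `env`
// is passed to your Worker's `fetch` handler by the runtime.
const stubEnv = {
  SECURE_ORIGIN: {
    fetch: async (url) => ({ ok: true, url }),
  },
};

callSecuredOrigin(stubEnv).then((res) => console.log(res.url));
```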
Example of a Wrangler configuration file that includes an mTLS certificate binding:

<WranglerConfig>

```toml title="wrangler.toml"
[[mtls_certificates]]
binding = "<BINDING_NAME1>"
certificate_id = "<CERTIFICATE_ID1>"

[[mtls_certificates]]
binding = "<BINDING_NAME2>"
certificate_id = "<CERTIFICATE_ID2>"
```

</WranglerConfig>

mTLS certificate bindings can then be used at runtime to communicate with secured origins via their [`fetch` method](/workers/runtime-apis/bindings/mtls).

### Workers AI

[Workers AI](/workers-ai/) allows you to run machine learning models, on the Cloudflare network, from your own code, whether from Workers, Pages, or anywhere via the REST API.

<Render file="ai-local-usage-charges" product="workers" />

Unlike most other bindings, you may configure only one AI binding per Worker project.

- `binding` <Type text="string" /> <MetaInfo text="required" />
  - The binding name.

Example:

<WranglerConfig>

```toml
[ai]
binding = "AI" # available in your Worker code on `env.AI`
```

</WranglerConfig>

## Assets

[Static assets](/workers/static-assets/) allows developers to run front-end websites on Workers. You can configure the directory of assets, an optional runtime binding, and routing configuration options. You can only configure one collection of assets per Worker.

The following options are available under the `assets` key.

- `directory` <Type text="string" /> <MetaInfo text="required" />
  - Folder of static assets to be served.
- `binding` <Type text="string" /> <MetaInfo text="optional" />
  - The binding name used to refer to the assets. Optional, and only useful when a Worker script is set with `main`.
- `run_worker_first` <Type text="boolean" /> <MetaInfo text="optional, defaults to false" />
  - Controls whether static assets are fetched directly, or a Worker script is invoked. Learn more about fetching assets when using [`run_worker_first`](/workers/static-assets/routing/#invoking-worker-script-ahead-of-assets).
- `html_handling`: <Type text={'"auto-trailing-slash" | "force-trailing-slash" | "drop-trailing-slash" | "none"'} /> <MetaInfo text={'optional, defaults to "auto-trailing-slash"'} /> - Determines the redirects and rewrites of requests for HTML content. Learn more about the various options in [assets routing](/workers/static-assets/routing/#html_handling). - `not_found_handling`: <Type text={'"single-page-application" | "404-page" | "none"'} /> <MetaInfo text={'optional, defaults to "none"'} /> - Determines the handling of requests that do not map to an asset. Learn more about the various options in [assets routing](/workers/static-assets/routing/#not_found_handling). Example: <WranglerConfig> ```toml title="wrangler.toml" assets = { directory = "./public", binding = "ASSETS", html_handling = "force-trailing-slash", not_found_handling = "404-page" } ``` </WranglerConfig> ## Bundling Wrangler can operate in two modes: the default bundling mode and `--no-bundle` mode. In bundling mode, Wrangler will traverse all the imports of your code and generate a single JavaScript "entry-point" file. Imported source code is "inlined/bundled" into this entry-point file. It is also possible to include additional modules into your Worker, which are uploaded alongside the entry-point. You specify which additional modules should be included into your Worker using the `rules` key, making these modules available to be imported when your Worker is invoked. The `rules` key will be an array of the below object. - `type` <Type text="string" /> <MetaInfo text="required" /> - The type of module. Must be one of: `ESModule`, `CommonJS`, `CompiledWasm`, `Text` or `Data`. - `globs` <Type text="string[]" /> <MetaInfo text="required" /> - An array of glob rules (for example, `["**/*.md"]`). Refer to [glob](https://man7.org/linux/man-pages/man7/glob.7.html). 
- `fallthrough` <Type text="boolean" /> <MetaInfo text="optional" /> - When set to `true` on a rule, this allows you to have multiple rules for the same `Type`. Example: <WranglerConfig> ```toml title="wrangler.toml" rules = [ { type = "Text", globs = ["**/*.md"], fallthrough = true } ] ``` </WranglerConfig> ### Importing modules within a Worker You can import and refer to these modules within your Worker, like so: ```js title="index.js" {1} import markdown from "./example.md"; export default { async fetch() { return new Response(markdown); }, }; ``` ### Find additional modules Normally Wrangler will only include additional modules that are statically imported in your source code, as in the example above. By setting `find_additional_modules` to `true` in your configuration file, Wrangler will traverse the file tree below `base_dir`. Any files that match `rules` will also be included as unbundled, external modules in the deployed Worker. `base_dir` defaults to the directory containing your `main` entrypoint. Refer to [Bundling](/workers/wrangler/bundling/) for more details and examples. ## Local development settings You can configure various aspects of local development, such as the local protocol or port. - `ip` <Type text="string" /> <MetaInfo text="optional" /> - IP address for the local dev server to listen on. Defaults to `localhost`. - `port` <Type text="number" /> <MetaInfo text="optional" /> - Port for the local dev server to listen on. Defaults to `8787`. - `local_protocol` <Type text="string" /> <MetaInfo text="optional" /> - Protocol that the local dev server listens to requests on. Defaults to `http`. - `upstream_protocol` <Type text="string" /> <MetaInfo text="optional" /> - Protocol that the local dev server forwards requests on. Defaults to `https`. - `host` <Type text="string" /> <MetaInfo text="optional" /> - Host to forward requests to. Defaults to the host of the first `route` of the Worker.
Example: <WranglerConfig> ```toml title="wrangler.toml" [dev] ip = "192.168.1.1" port = 8080 local_protocol = "http" ``` </WranglerConfig> ### Secrets [Secrets](/workers/configuration/secrets/) are a type of binding that allow you to [attach encrypted text values](/workers/wrangler/commands/#secret) to your Worker. <Render file="secrets-in-dev" /> ## Module Aliasing You can configure Wrangler to replace all calls to import a particular package with a module of your choice, by configuring the `alias` field: <WranglerConfig> ```toml title="wrangler.toml" [alias] "foo" = "./replacement-module-filepath" ``` </WranglerConfig> ```js title="replacement-module-filepath.js" export const bar = "baz"; ``` With the configuration above, any calls to `import` or `require()` the module `foo` will be aliased to point to your replacement module: ```js import { bar } from "foo"; console.log(bar); // returns "baz" ``` ### Example: Aliasing dependencies from NPM You can use module aliasing to provide an implementation of an NPM package that does not work on Workers — even if you only rely on that NPM package indirectly, as a dependency of one of your Worker's dependencies. For example, some NPM packages depend on [`node-fetch`](https://www.npmjs.com/package/node-fetch), a package that provided a polyfill of the [`fetch()` API](/workers/runtime-apis/fetch/), before it was built into Node.js. `node-fetch` isn't needed in Workers, because the `fetch()` API is provided by the Workers runtime. And `node-fetch` doesn't work on Workers, because it relies on currently unsupported Node.js APIs from the `http`/`https` modules. 
You can alias all imports of `node-fetch` to instead point directly to the `fetch()` API that is built into the Workers runtime: <WranglerConfig> ```toml title="wrangler.toml" [alias] "node-fetch" = "./fetch-nolyfill" ``` </WranglerConfig> ```js title="./fetch-nolyfill" export default fetch; ``` ### Example: Aliasing Node.js APIs You can use module aliasing to provide your own polyfill implementation of a Node.js API that is not yet available in the Workers runtime. For example, let's say the NPM package you rely on calls [`fs.readFile`](https://nodejs.org/api/fs.html#fsreadfilepath-options-callback). You can alias the `fs` module by adding the following to your Worker's Wrangler configuration file: <WranglerConfig> ```toml title="wrangler.toml" [alias] "fs" = "./fs-polyfill" ``` </WranglerConfig> ```js title="./fs-polyfill" export function readFile() { // ... } ``` In many cases, this allows you to provide just enough of an API to make a dependency work. You can learn more about Cloudflare Workers' support for Node.js APIs on the [Cloudflare Workers Node.js API documentation page](/workers/runtime-apis/nodejs/). ## Source maps [Source maps](/workers/observability/source-maps/) translate compiled and minified code back to the original code that you wrote. Source maps are combined with the stack trace returned by the JavaScript runtime to present you with a useful stack trace. - `upload_source_maps` <Type text="boolean" /> - When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2). Example: <WranglerConfig> ```toml title="wrangler.toml" upload_source_maps = true ``` </WranglerConfig> ## Workers Sites <Render file="workers_sites" /> [Workers Sites](/workers/configuration/sites/) allows you to host static websites, or dynamic websites using frameworks like Vue or React, on Workers.
- `bucket` <Type text="string" /> <MetaInfo text="required" /> - The directory containing your static assets. It must be a path relative to your Wrangler configuration file. - `include` <Type text="string[]" /> <MetaInfo text="optional" /> - An exclusive list of `.gitignore`-style patterns that match file or directory names from your bucket location. Only matched items will be uploaded. - `exclude` <Type text="string[]" /> <MetaInfo text="optional" /> - A list of `.gitignore`-style patterns that match files or directories in your bucket that should be excluded from uploads. Example: <WranglerConfig> ```toml title="wrangler.toml" [site] bucket = "./public" include = ["upload_dir"] exclude = ["ignore_dir"] ``` </WranglerConfig> ## Proxy support Corporate networks often have proxies, which can sometimes cause connectivity issues. To configure Wrangler with the appropriate proxy details, use the below environment variables: - `https_proxy` - `HTTPS_PROXY` - `http_proxy` - `HTTP_PROXY` To configure this on macOS, add `HTTP_PROXY=http://<YOUR_PROXY_HOST>:<YOUR_PROXY_PORT>` before your Wrangler commands. Example: ```sh HTTP_PROXY=http://localhost:8080 wrangler dev ``` If your IT team has configured your computer's proxy settings, be aware that the first non-empty environment variable in this list will be used when Wrangler makes outgoing requests. For example, if both `https_proxy` and `http_proxy` are set, Wrangler will only use `https_proxy` for outgoing requests. ## Source of truth We recommend treating your Wrangler configuration file as the source of truth for your Worker configuration, and avoiding changes to your Worker via the Cloudflare dashboard if you are using Wrangler. If you need to make changes to your Worker from the Cloudflare dashboard, the dashboard will generate a TOML snippet for you to copy into your Wrangler configuration file, which will help ensure your Wrangler configuration file is always up to date.
If you change your environment variables in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behavior, add `keep_vars = true` to your Wrangler configuration file. If you change your routes in the dashboard, Wrangler will override them in the next deploy with the routes you have set in your Wrangler configuration file. To manage routes via the Cloudflare dashboard only, remove any `route` and `routes` keys from your Wrangler configuration file. Then add `workers_dev = false` to your Wrangler configuration file. For more information, refer to [Deprecations](/workers/wrangler/deprecations/#other-deprecated-behavior). Wrangler will not delete your secrets (encrypted environment variables) unless you run `wrangler secret delete <key>`. ## Generated Wrangler configuration :::note This section describes a feature that can be implemented by frameworks and other build tools that are integrating with Wrangler. It is unlikely that an application developer will need to use this feature, but it is documented here to help you understand when Wrangler is using a generated configuration rather than the original, user's configuration. ::: Some framework tools, or custom pre-build processes, generate a modified Wrangler configuration to be used to deploy the Worker code. In this case, the tool may also create a special `.wrangler/deploy/config.json` file that redirects Wrangler to use the generated configuration rather than the original, user's configuration. Wrangler uses this generated configuration only for the following deploy and dev related commands: - `wrangler deploy` - `wrangler dev` - `wrangler versions upload` - `wrangler versions deploy` - `wrangler pages deploy` - `wrangler pages build` - `wrangler pages build-env` When running these commands, Wrangler looks up the directory tree from the current working directory for a file at the path `.wrangler/deploy/config.json`.
This file must contain only a single JSON object of the form: ```json { "configPath": "../../path/to/wrangler.jsonc" } ``` When this `config.json` file exists, Wrangler will follow the `configPath` (relative to the `.wrangler/deploy/config.json` file) to find the generated Wrangler configuration file to load and use in the current command. Wrangler will display messaging to the user to indicate that the configuration has been redirected to a different file than the user's configuration file. ### Custom build tool example A common example of using a redirected configuration is where a custom build tool, or framework, wants to modify the user's configuration to be used when deploying, by generating a new configuration in a `dist` directory. - First, the user writes code that uses Cloudflare Workers resources, configured via a user's Wrangler configuration file. <WranglerConfig> ```toml title="wrangler.toml" name = "my-worker" main = "src/index.ts" [[kv_namespaces]] binding = "<BINDING_NAME1>" id = "<NAMESPACE_ID1>" ``` </WranglerConfig> Note that this configuration points `main` at the user's code entry-point. - Then, the user runs a custom build, which might read the user's Wrangler configuration file to find the source code entry-point: ```bash > my-tool build ``` - This `my-tool` generates a `dist` directory that contains both compiled code and a new generated deployment configuration file. It also creates a `.wrangler/deploy/config.json` file that redirects Wrangler to the new, generated deployment configuration file: <FileTree> - dist - index.js - wrangler.jsonc - .wrangler - deploy - config.json </FileTree> The generated `dist/wrangler.jsonc` might contain: ```json { "name": "my-worker", "main": "./index.js", "kv_namespaces": [{ "binding": "<BINDING_NAME1>", "id": "<NAMESPACE_ID1>" }] } ``` Note that, now, the `main` property points to the generated code entry-point. 
And the `.wrangler/deploy/config.json` contains the path to the generated configuration file: ```json { "configPath": "../../dist/wrangler.jsonc" } ``` --- # Custom builds URL: https://developers.cloudflare.com/workers/wrangler/custom-builds/ import { Render, Type, MetaInfo, WranglerConfig } from "~/components" Custom builds are a way for you to customize how your code is compiled before being processed by Wrangler. :::note With the release of Wrangler v2, it is no longer necessary to use custom builds to bundle your code via webpack and similar bundlers. Wrangler runs [esbuild](https://esbuild.github.io/) by default as part of the `dev` and `publish` commands, and bundles your Worker project into a single Worker script. Refer to [Bundling](/workers/wrangler/bundling/). ::: ## Configure custom builds Custom builds are configured by adding a `[build]` section in your [Wrangler configuration file](/workers/wrangler/configuration/), and using the following options for configuring your custom build. * `command` <Type text="string" /> <MetaInfo text="optional" /> * The command used to build your Worker. The command is executed in the `sh` shell on Linux and macOS, and in the `cmd` shell on Windows. The `&&` and `||` shell operators may be used. This command will be run as part of `wrangler dev` and `npx wrangler deploy`. * `cwd` <Type text="string" /> <MetaInfo text="optional" /> * The directory in which the command is executed. * `watch_dir` <Type text="string | string\[]" /> <MetaInfo text="optional" /> * The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory. Example: <WranglerConfig> ```toml title="wrangler.toml" [build] command = "npm run build" cwd = "build_cwd" watch_dir = "build_watch_dir" ``` </WranglerConfig> --- # Deprecations URL: https://developers.cloudflare.com/workers/wrangler/deprecations/ Review the difference between Wrangler versions, specifically deprecations and breaking changes.
## Wrangler v3 ### Deprecated commands The following commands are deprecated in Wrangler as of Wrangler v3. These commands will be fully removed in a future version of Wrangler. #### `generate` The `wrangler generate` command is deprecated, but still active in v3. `wrangler generate` will be fully removed in v4. Use `npm create cloudflare@latest` for new Workers and Pages projects. #### `publish` The `wrangler publish` command is deprecated, but still active in v3. `wrangler publish` will be fully removed in v4. Use [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) to deploy Workers. #### `pages publish` The `wrangler pages publish` command is deprecated, but still active in v3. `wrangler pages publish` will be fully removed in v4. Use [`wrangler pages deploy`](/workers/wrangler/commands/#deploy-1) to deploy Pages. #### `version` Instead, use `wrangler --version` to check the current version of Wrangler. ### Deprecated options #### `--experimental-local` `wrangler dev` in v3 is local by default so this option is no longer necessary. #### `--local` `wrangler dev` in v3 is local by default so this option is no longer necessary. #### `--persist` `wrangler dev` automatically persists data by default so this option is no longer necessary. #### `-- <command>`, `--proxy`, and `--script-path` in `wrangler pages dev` These options prevent `wrangler pages dev` from being able to accurately emulate production's behavior for serving static assets and have therefore been deprecated. Instead of relying on Wrangler to proxy through to some other upstream dev server, you can emulate a more accurate behavior by building your static assets to a directory and pointing Wrangler to that directory with `wrangler pages dev <directory>`. 
#### `--legacy-assets` and the `legacy_assets` config file property We recommend you [migrate to Workers assets](/workers/static-assets/). #### `--node-compat` and the `node_compat` config file property Instead, use the [`nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/). This includes the functionality from legacy `node_compat` polyfills and natively implemented Node.js APIs. #### The `usage_model` config file property This no longer has any effect, after the [rollout of Workers Standard Pricing](https://blog.cloudflare.com/workers-pricing-scale-to-zero/). ## Wrangler v2 Wrangler v2 introduces new fields for configuration and new features for developing and deploying a Worker, while deprecating some redundant fields. * `wrangler.toml` is no longer mandatory. * `dev` and `publish` accept CLI arguments. * `tail` can be run on arbitrary Worker names. * `init` creates a project boilerplate. * JSON bindings for `vars`. * Local mode for `wrangler dev`. * Module system (for both modules and service worker format Workers). * DevTools. * TypeScript support. * Sharing development environment on the Internet. * Wider platform compatibility. * Developer hotkeys. * Better configuration validation. The following video describes some of the major changes in Wrangler v2, and shows you how Wrangler v2 can help speed up your workflow. <div style="position: relative; padding-top: 56.25%;"><iframe src="https://iframe.videodelivery.net/6ce3c7bd51288e1e8439f50ad63eda1d?poster=https%3A%2F%2Fcloudflarestream.com%2F6ce3c7bd51288e1e8439f50ad63eda1d%2Fthumbnails%2Fthumbnail.jpg%3Ftime%3D%26height%3D600" style="border: none; position: absolute; top: 0; left: 0; height: 100%; width: 100%;" allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div> ### Common deprecations Refer to the following list for common fields that are no longer required.
* `type` is no longer required. Wrangler will infer the correct project type automatically. * `zone_id` is no longer required. It can be deduced from the routes directly. * `build.upload.format` is no longer used. The format is now inferred automatically from the code. * `build.upload.main` and `build.upload.dir` are no longer required. Use the top level `main` field, which now serves as the entry-point for the Worker. * `site.entry-point` is no longer required. The entry point should be specified through the `main` field. * `webpack_config` and `webpack` properties are no longer supported. Refer to [Migrate webpack projects from Wrangler version 1](/workers/wrangler/migration/v1-to-v2/eject-webpack/). Here are the Wrangler v1 commands that are no longer supported: * `wrangler preview` - Use the `wrangler dev` command to run your Worker in your local environment. * `wrangler generate` - If you want to use a starter template, clone its GitHub repository and manually initialize it. * `wrangler route` - Routes are defined in the [Wrangler configuration file](/workers/wrangler/configuration/). * `wrangler report` - If you find a bug, report it at [Wrangler issues](https://github.com/cloudflare/workers-sdk/issues/new/choose). * `wrangler build` - If you wish to access the output from bundling your Worker, use `wrangler publish --dry-run --outdir=path/to/output`. #### New fields These are new fields that can be added to your [Wrangler configuration file](/workers/wrangler/configuration/). * **`main`**: `string`, optional The `main` field is used to specify an entry point to the Worker. It may be in the established service worker format, or the newer, preferred modules format. An entry point is now explicitly required, and can be configured either via the `main` field, or passed directly as a command line argument, for example, `wrangler dev index.js`. This field replaces the legacy `build.upload.main` field (which only applied to modules format Workers).
* **`rules`**: `array`, optional The `rules` field is an array of mappings between module types and file patterns. It instructs Wrangler to interpret specific files differently than JavaScript. For example, this is useful for reading text-like content as text files, or compiled WASM as ready to instantiate and execute. These rules can apply to Workers of both the established service worker format, and the newer modules format. This field replaces the legacy `build.upload.rules` field (which only applied to modules format Workers). {/* <!-- - **`legacy_env`**: _boolean_, optional. default: `true` The `legacy_env` field toggles how environments are handled by `wrangler`. - When `legacy_env` is `true`, it uses the legacy-style environments, where each environment is treated as a separate Worker in the dashboard, and environment names are appended to the `name` when published. - When `legacy_env` is `false`, it uses the newer service environments, where scripts for a given Worker are grouped under the same script name in the Cloudflare Workers dashboard, and environments are subdomains for a given published script (when `workers_dev = true`). Read more at (ref:)[] --> */} {/* <!-- - **`services`**: TODO - **`node-compat`**: TODO - **`public`**: TODO --> */} #### Non-mandatory fields A few configuration fields that were previously required are now optional in particular situations. They can either be inferred, or added as an optimization. No fields are required anymore when starting with Wrangler v2, and you can gradually add configuration as the need arises. * **`name`**: `string` The `name` configuration field is no longer required for `wrangler dev`, or any of the `wrangler kv:*` commands. Further, it can also be passed as a command line argument as `--name <name>`. It is still required for `wrangler publish`. * **`account_id`**: `string` The `account_id` field is not required for any of the commands.
Any relevant commands will check if you are logged in, and if not, will prompt you to log in. Once logged in, it will use your account ID and will not prompt you again until your login session expires. If you have multiple account IDs, you will be presented with a list of accounts to choose from. You can still configure `account_id` in your Wrangler file, or as an environment variable `CLOUDFLARE_ACCOUNT_ID`. This makes startup faster and bypasses the list of choices if you have multiple IDs. The `CLOUDFLARE_API_TOKEN` environment variable is also useful for situations where it is not possible to log in interactively. To learn more, visit [Running in CI/CD](/workers/ci-cd/external-cicd/). * **`workers_dev`** `boolean`, default: `true` when no routes are present The `workers_dev` field is used to indicate that the Worker should be published to a `*.workers.dev` subdomain. For example, for a Worker named `my-worker` and a previously configured `*.workers.dev` subdomain `username`, the Worker will get published to `my-worker.username.workers.dev`. This field is not mandatory, and defaults to `true` when `route` or `routes` are not configured. When routes are present, it defaults to `false`. If you want to neither publish it to a `*.workers.dev` subdomain nor any routes, set `workers_dev` to `false`. This is useful when you are publishing a Worker as a standalone service that can only be accessed from another Worker via a service binding (`services`). #### Deprecated fields (non-breaking) A few configuration fields are deprecated, but their presence is not a breaking change yet. It is recommended to read the warning messages and follow the instructions to migrate to the new configuration. They will be removed and stop working in a future version. * **`zone_id`**: `string`, deprecated The `zone_id` field is deprecated and will be removed in a future release. It is now inferred from `route`/`routes`, and optionally from `dev.host` when using `wrangler dev`.
This also makes it simpler to deploy a single Worker to multiple domains. * **`build.upload`**: `object`, deprecated The `build.upload` field is deprecated and will be removed in a future release. Its usage results in a warning with suggestions on rewriting the configuration file to remove the warnings. * `build.upload.main`/`build.upload.dir` are replaced by the `main` fields and are applicable to both service worker format and modules format Workers. * `build.upload.rules` is replaced by the `rules` field and is applicable to both service worker format and modules format Workers. * `build.upload.format` is no longer specified and is automatically inferred by `wrangler`. #### Deprecated fields (breaking) A few configuration fields are deprecated and will not work as expected anymore. It is recommended to read the error messages and follow the instructions to migrate to the new configuration. * **`site.entry-point`**: `string`, deprecated The `site.entry-point` configuration was used to specify an entry point for Workers with a `[site]` configuration. This has been replaced by the top-level `main` field. * **`type`**: `rust` | `javascript` | `webpack`, deprecated The `type` configuration was used to specify the type of Worker. It has since been made redundant and is now inferred from usage. If you were using `type = "webpack"` (and the optional `webpack_config` field), you should read the [webpack migration guide](/workers/wrangler/migration/v1-to-v2/eject-webpack/) to modify your project and use a custom build instead. ### Deprecated commands The following commands are deprecated in Wrangler as of Wrangler v2. #### `build` The `wrangler build` command is no longer available for building the Worker. The equivalent functionality can be achieved by `wrangler publish --dry-run --outdir=path/to/build`. #### `config` The `wrangler config` command is no longer available for authenticating via an API token. 
Use `wrangler login` / `wrangler logout` to manage OAuth authentication, or provide an API token via the `CLOUDFLARE_API_TOKEN` environment variable. #### `preview` The `wrangler preview` command is no longer available for creating a temporary preview instance of the Worker. Use `wrangler dev` to try out a Worker during development. #### `subdomain` The `wrangler subdomain` command is no longer available for creating a `workers.dev` subdomain. Create the `workers.dev` subdomain in **Workers & Pages** > select your Worker > Your subdomain > **Change**. #### `route` The `wrangler route` command is no longer available to configure a route for a Worker. Routes are specified in the [Wrangler configuration file](/workers/wrangler/configuration/). ### Other deprecated behavior * Cloudflare dashboard-defined routes will not be added alongside Wrangler-defined routes. Wrangler-defined routes are the `route` or `routes` key in your `wrangler.toml`. If both are defined, only routes defined in `wrangler.toml` will be valid. To manage routes via the Cloudflare dashboard only, remove any `route` and `routes` keys from your Wrangler file and add `workers_dev = false` to it. * Wrangler will no longer use `index.js` in the directory where `wrangler dev` is called as the entry point to a Worker. Use the `main` configuration field, or explicitly pass it as a command line argument, for example: `wrangler dev index.js`. * Wrangler will no longer assume that bare specifiers are file names if they are not represented as a path. For example, in a folder like so: ``` project ├── index.js └── some-dependency.js ``` where the content of `index.js` is: ```js import SomeDependency from "some-dependency.js"; addEventListener("fetch", (event) => { // ... }); ``` Wrangler v1 would resolve `import SomeDependency from "some-dependency.js";` to the file `some-dependency.js`. This will also work in Wrangler v2, but will also log a deprecation warning. In the future, this will break with an error.
Instead, you should rewrite the import to specify that it is a relative path, like so: ```diff - import SomeDependency from "some-dependency.js"; + import SomeDependency from "./some-dependency.js"; ``` ### Wrangler v1 and v2 comparison tables #### Commands | Command | v1 | v2 | Notes | | ----------- | -- | -- | ---------------------------------------------- | | `publish` | ✅ | ✅ | | | `dev` | ✅ | ✅ | | | `preview` | ✅ | ❌ | Removed, use `dev` instead. | | `init` | ✅ | ✅ | | | `generate` | ✅ | ❌ | Removed, use `git clone` instead. | | `build` | ✅ | ❌ | Removed, invoke your own build script instead. | | `secret` | ✅ | ✅ | | | `route` | ✅ | ❌ | Removed, use `publish` instead. | | `tail` | ✅ | ✅ | | | `kv` | ✅ | ✅ | | | `r2` | 🚧 | ✅ | Introduced in Wrangler v1.19.8. | | `pages` | ❌ | ✅ | | | `config` | ✅ | ❓ | | | `login` | ✅ | ✅ | | | `logout` | ✅ | ✅ | | | `whoami` | ✅ | ✅ | | | `subdomain` | ✅ | ❓ | | | `report` | ✅ | ❌ | Removed, error reports are made interactively. | #### Configuration | Property | v1 | v2 | Notes | | --------------------- | -- | -- | ------------------------------------------------------------------------------------------------------------------------------------------------ | | `type = "webpack"` | ✅ | ❌ | Removed, refer to [this guide](/workers/wrangler/migration/v1-to-v2/eject-webpack/) to migrate. | | `type = "rust"` | ✅ | ❌ | Removed, use [`workers-rs`](https://github.com/cloudflare/workers-rs) instead. | | `type = "javascript"` | ✅ | 🚧 | No longer required, can be omitted. | #### Features | Feature | v1 | v2 | Notes | | ---------- | -- | -- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | TypeScript | ❌ | ✅ | You can give Wrangler a TypeScript file, and it will automatically transpile it to JavaScript using [`esbuild`](https://github.com/evanw/esbuild) under the hood.
| | Local mode | ❌ | ✅ | `wrangler dev --local` will run your Worker on your local machine instead of on our network. This is powered by [Miniflare](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare/). | --- # Wrangler URL: https://developers.cloudflare.com/workers/wrangler/ import { DirectoryListing } from "~/components"; Wrangler, the Cloudflare Developer Platform command-line interface (CLI), allows you to manage Worker projects. <DirectoryListing descriptions /> --- # Environments URL: https://developers.cloudflare.com/workers/wrangler/environments/ import { WranglerConfig } from "~/components"; ## Background Wrangler allows you to deploy the same Worker application with different configuration for each environment. You must configure environments in your Worker application's Wrangler file. Review the following environments flow: 1. You have created a Worker application named `my-worker`. 2. You create an environment, for example, `dev`, in the Worker's [Wrangler configuration file](/workers/wrangler/configuration/). 3. In the Wrangler configuration file, you configure the `dev` environment by [adding bindings](/workers/runtime-apis/bindings/) and/or [routes](/workers/configuration/routing/routes/). 4. You deploy the Worker using `npx wrangler deploy -e dev`. 5. In the background, Wrangler creates a new Worker named `my-worker-dev`. 6. You can now change your `my-worker` Worker code and configuration, and choose which environment to deploy your changes to. Environments are used with the `--env` or `-e` flag on `wrangler dev`, `npx wrangler deploy`, and `wrangler secret`. ## Configuration To create an environment: 1. Open your Worker's Wrangler file. 2. Add `[env.<NAME>]` and change `<NAME>` to the desired name of your environment. 3. Repeat step 2 to create multiple environments.
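Putting the steps above together, a minimal sketch of a Wrangler file with two environments might look like the following (the environment names `dev` and `staging` are illustrative):

<WranglerConfig>

```toml
name = "my-worker"
main = "src/index.js"

# Each [env.<NAME>] table defines one environment.
# Bindings and routes for that environment go under its table.
[env.dev]

[env.staging]
```

</WranglerConfig>

With this configuration, running `npx wrangler deploy -e dev` deploys a Worker named `my-worker-dev`, and `npx wrangler deploy -e staging` deploys `my-worker-staging`.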
Be careful when naming your environments: because the environment name is appended to the deployed Worker's name, avoid names that contain sensitive information, such as `migrating-service-from-company1-to-company2` or `company1-acquisition-load-test`.

Review the layout of an example `[env.dev]` environment that sets up a custom `dev.example.com` route:

<WranglerConfig>

```toml
name = "your-worker"
route = "example.com"

[env.dev]
route = "dev.example.com"
```

</WranglerConfig>

You cannot specify multiple environments with the same name. Wrangler appends the environment name to the top-level name to deploy a Worker. For example, a Worker project named `my-worker` with an environment `[env.dev]` would deploy a Worker named `my-worker-dev`.

After you have configured your environment, run `npx wrangler deploy` in your Worker project directory for the changes to take effect.

## Non-inheritable keys and environments

[Non-inheritable keys](/workers/wrangler/configuration/#non-inheritable-keys) are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.

[Bindings](/workers/runtime-apis/bindings/) and [environment variables](/workers/configuration/environment-variables/) must be specified for each [environment](/workers/wrangler/environments/) in your [Wrangler configuration file](/workers/wrangler/configuration/).

Review the following example Wrangler file:

<WranglerConfig>

```toml
name = "my-worker"

vars = { API_HOST = "example.com" }

kv_namespaces = [
  { binding = "<BINDING_NAME>", id = "<KV_NAMESPACE_ID_DEV>" }
]

[env.production]
vars = { API_HOST = "production.example.com" }

kv_namespaces = [
  { binding = "<BINDING_NAME>", id = "<KV_NAMESPACE_ID_PRODUCTION>" }
]
```

</WranglerConfig>

You may assign environment-specific [secrets](/workers/configuration/secrets/) by running the command [`wrangler secret put <KEY> --env <ENVIRONMENT_NAME>`](/workers/wrangler/commands/#put).
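The naming rule described above can be sketched as a small function. `deployedWorkerName` is a hypothetical helper for illustration only, not part of Wrangler's API:

```typescript
// Illustrates how Wrangler derives the name of the deployed Worker:
// the environment name, when given, is appended to the top-level name.
// `deployedWorkerName` is a hypothetical helper, not part of Wrangler.
function deployedWorkerName(name: string, env?: string): string {
  return env ? `${name}-${env}` : name;
}

console.log(deployedWorkerName("my-worker"));        // my-worker
console.log(deployedWorkerName("my-worker", "dev")); // my-worker-dev
```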
---

## Examples

### Staging and production environments

The following Wrangler file adds two environments, `[env.staging]` and `[env.production]`. If you are deploying to a [Custom Domain](/workers/configuration/routing/custom-domains/) or [route](/workers/configuration/routing/routes/), you must provide a [`route` or `routes` key](/workers/wrangler/configuration/) for each environment.

<WranglerConfig>

```toml
name = "my-worker"
route = "dev.example.com/*"
vars = { ENVIRONMENT = "dev" }

[env.staging]
vars = { ENVIRONMENT = "staging" }
route = "staging.example.com/*"

[env.production]
vars = { ENVIRONMENT = "production" }
routes = [
  "example.com/foo/*",
  "example.com/bar/*"
]
```

</WranglerConfig>

To use environments with this configuration, pass the name of the environment via the `--env` flag. With this configuration, Wrangler will behave in the following manner:

```sh
npx wrangler deploy
```

```sh output
Uploaded my-worker
Published my-worker
  dev.example.com/*
```

```sh
npx wrangler deploy --env staging
```

```sh output
Uploaded my-worker-staging
Published my-worker-staging
  staging.example.com/*
```

```sh
npx wrangler deploy --env production
```

```sh output
Uploaded my-worker-production
Published my-worker-production
  example.com/*
```

Any defined [environment variables](/workers/configuration/environment-variables/) (the [`vars`](/workers/wrangler/configuration/) key) are exposed as global variables to your Worker. With this configuration, the `ENVIRONMENT` variable can be used to run code specific to a given environment:

```js
if (ENVIRONMENT === "staging") {
	// staging-specific code
} else if (ENVIRONMENT === "production") {
	// production-specific code
}
```

### Staging environment with \*.workers.dev

To deploy your code to your `*.workers.dev` subdomain, include `workers_dev = true` in the desired environment.
Your Wrangler file may look like this:

<WranglerConfig>

```toml
name = "my-worker"
route = "example.com/*"

[env.staging]
workers_dev = true
```

</WranglerConfig>

With this configuration, Wrangler will behave in the following manner:

```sh
npx wrangler deploy
```

```sh output
Uploaded my-worker
Published my-worker
  example.com/*
```

```sh
npx wrangler deploy --env staging
```

```sh output
Uploaded my-worker-staging
Published my-worker-staging
  https://my-worker-staging.<YOUR_SUBDOMAIN>.workers.dev
```

:::caution

When you create a Worker via an environment, Cloudflare automatically creates an SSL certificate for it. SSL certificates are discoverable and a matter of public record.

:::

---

# Install/Update Wrangler

URL: https://developers.cloudflare.com/workers/wrangler/install-and-update/

import { Render } from "~/components";

Wrangler is a command-line tool for building with Cloudflare developer products.

## Install Wrangler

To install [Wrangler](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler), ensure you have [Node.js](https://nodejs.org/en/) and [npm](https://docs.npmjs.com/getting-started) installed, preferably using a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm). Using a version manager helps avoid permission issues and allows you to change Node.js versions. Wrangler requires a Node version of `16.17.0` or later.

Wrangler is installed locally into each of your projects. This allows you and your team to use the same Wrangler version, control Wrangler versions for each project, and roll back to an earlier version of Wrangler, if needed.

To install Wrangler within your Worker project, run:

<Render file="install_wrangler" />

Since Cloudflare recommends installing Wrangler locally in your project (rather than globally), the way to run Wrangler will depend on your specific setup and package manager.
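For example, with a local install, Wrangler is commonly run through `npx` (as in the commands throughout these docs) or through `package.json` scripts. A minimal sketch (the script names are illustrative):

```json
{
	"scripts": {
		"dev": "wrangler dev",
		"deploy": "wrangler deploy"
	}
}
```

With this in place, `npm run deploy` runs the project-local Wrangler version.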
Refer to [How to run Wrangler commands](/workers/wrangler/commands/#how-to-run-wrangler-commands) for more information.

:::caution

If Wrangler is not installed, running `npx wrangler` will use the latest version of Wrangler.

:::

## Check your Wrangler version

To check your Wrangler version, run:

```sh
npx wrangler --version
# or
npx wrangler version
# or
npx wrangler -v
```

## Update Wrangler

To update the version of Wrangler used in your project, run:

```sh
npm install wrangler@latest
```

## Related resources

- [Commands](/workers/wrangler/commands/) - A detailed list of the commands that Wrangler supports.
- [Configuration](/workers/wrangler/configuration/) - Learn more about Wrangler's configuration file.

---

# System environment variables

URL: https://developers.cloudflare.com/workers/wrangler/system-environment-variables/

import { Render, Type, MetaInfo } from "~/components"

System environment variables are local environment variables that can change Wrangler's behavior. There are three ways to set system environment variables:

1. Create an `.env` file in your project directory. Set the values of your environment variables in your [`.env`](/workers/wrangler/system-environment-variables/#example-env-file) file. This is the recommended way to set these variables, as it persists the values between Wrangler sessions.

2. Inline the values in your Wrangler command. For example, `WRANGLER_LOG="debug" npx wrangler deploy` will set the value of `WRANGLER_LOG` to `"debug"` for this execution of the command.

3. Set the values in your shell environment. For example, if you are using Z shell, adding `export CLOUDFLARE_API_TOKEN=...` to your `~/.zshrc` file will set this token as part of your shell configuration.

:::note

To set different system environment variables for each environment, create files named `.env.<environment-name>`.
When you use `wrangler <command> --env <environment-name>`, the corresponding environment-specific file will be loaded instead of the `.env` file, so the two files are not merged.

:::

## Supported environment variables

Wrangler supports the following environment variables:

* `CLOUDFLARE_ACCOUNT_ID` <Type text="string" /> <MetaInfo text="optional" />

  * The [account ID](/fundamentals/setup/find-account-and-zone-ids/) for the Workers-related account.

* `CLOUDFLARE_API_TOKEN` <Type text="string" /> <MetaInfo text="optional" />

  * The [API token](/fundamentals/api/get-started/create-token/) for your Cloudflare account. It can be used to authenticate in non-interactive situations, such as CI/CD and other automation.

* `CLOUDFLARE_API_KEY` <Type text="string" /> <MetaInfo text="optional" />

  * The API key for your Cloudflare account, usually used with the older authentication method alongside `CLOUDFLARE_EMAIL=`.

* `CLOUDFLARE_EMAIL` <Type text="string" /> <MetaInfo text="optional" />

  * The email address associated with your Cloudflare account, usually used with the older authentication method alongside `CLOUDFLARE_API_KEY=`.

* `WRANGLER_SEND_METRICS` <Type text="string" /> <MetaInfo text="optional" />

  * Options for this are `true` and `false`. Defaults to `true`. Controls whether Wrangler can send anonymous usage data to Cloudflare for this project. You can learn more about this in our [data policy](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler/telemetry.md).

* `WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>` <Type text="string" /> <MetaInfo text="optional" />

  * The [local connection string](/hyperdrive/configuration/local-development/) for your database to use in local development with [Hyperdrive](/hyperdrive/). For example, if the binding for your Hyperdrive is named `PROD_DB`, this would be `WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_PROD_DB="postgres://user:password@127.0.0.1:5432/testdb"`. Each Hyperdrive is uniquely distinguished by the binding name.
* `CLOUDFLARE_API_BASE_URL` <Type text="string" /> <MetaInfo text="optional" />

  * The default value is `"https://api.cloudflare.com/client/v4"`.

* `WRANGLER_LOG` <Type text="string" /> <MetaInfo text="optional" />

  * Supported logging levels are `"none"`, `"error"`, `"warn"`, `"info"`, `"log"` and `"debug"`. Levels are case-insensitive and default to `"log"`. If an invalid level is specified, Wrangler will fall back to the default. Logs can include requests to Cloudflare's API, any usage data being collected, and more verbose error logs.

* `FORCE_COLOR` <Type text="string" /> <MetaInfo text="optional" />

  * By setting this to `0`, you can disable Wrangler's colorised output, which makes it easier to read with some terminal setups. For example, `FORCE_COLOR=0`.

## Example `.env` file

The following is an example `.env` file:

```bash
CLOUDFLARE_ACCOUNT_ID=<YOUR_ACCOUNT_ID_VALUE>
CLOUDFLARE_API_TOKEN=<YOUR_API_TOKEN_VALUE>
CLOUDFLARE_EMAIL=<YOUR_EMAIL>
WRANGLER_SEND_METRICS=true
CLOUDFLARE_API_BASE_URL=https://api.cloudflare.com/client/v4
WRANGLER_LOG=debug
```

## Deprecated global variables

The following variables are deprecated. Use the new variables listed above to prevent any issues or unwanted messaging.

* `CF_ACCOUNT_ID`
* `CF_API_TOKEN`
* `CF_API_KEY`
* `CF_EMAIL`
* `CF_API_BASE_URL`

---

# API Reference

URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/api-reference/

Learn more about the API reference for [embedded function calling](/workers-ai/function-calling/embedded).

## runWithTools

This wrapper method enables you to do embedded function calling. You pass it the AI binding, model, inputs (`messages` array and `tools` array), and optional configurations.

* `AI Binding` Ai

  * The AI binding, such as `env.AI`.

* `model` BaseAiTextGenerationModels

  * The ID of a model that supports function calling. For example, `@hf/nousresearch/hermes-2-pro-mistral-7b`.
* `input` Object

  * `messages` RoleScopedChatInput\[]
  * `tools` AiTextGenerationToolInputWithFunction\[]

* `config` Object

  * `streamFinalResponse` boolean optional
  * `maxRecursiveToolRuns` number optional
  * `strictValidation` boolean optional
  * `verbose` boolean optional
  * `trimFunction` boolean optional - For `trimFunction`, you can pass it `autoTrimTools`, which is another helper method we've devised to automatically choose the correct tools (using an LLM) before sending it off for inference. This means that your final inference call will have fewer input tokens.

## createToolsFromOpenAPISpec

This method lets you automatically create tool schemas based on OpenAPI specs, so you don't have to manually write or hardcode the tool schemas. You can pass the OpenAPI spec for any API in JSON or YAML format.

`createToolsFromOpenAPISpec` has a config input that allows you to perform overrides if you need to provide headers such as `Authorization` or `User-Agent`.

* `spec` string

  * The OpenAPI specification in either JSON or YAML format, or a URL to a remote OpenAPI specification.

* `config` Config optional - Configuration options for the `createToolsFromOpenAPISpec` function

  * `overrides` ConfigRule\[] optional
  * `matchPatterns` RegExp\[] optional
  * `options` Object optional \{ `verbose` boolean optional \}

---

# Get Started

URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/get-started/

This guide will walk you through setting up and deploying your first Workers AI project with embedded function calling. You will use Workers, a Workers AI binding, the [`ai-utils package`](https://github.com/cloudflare/ai-utils), and a large language model (LLM) to deploy your first AI-powered application on the Cloudflare global network with embedded function calling.

## 1. Create a Worker project with Workers AI

Follow the [Workers AI Get Started Guide](/workers-ai/get-started/workers-wrangler/) until step 2.

## 2. Install an additional npm package

Next, run the following command in your project repository to install the Workers AI utilities package.

```sh
npm install @cloudflare/ai-utils --save
```

## 3. Add Workers AI embedded function calling

Update the `index.ts` file in your application directory with the following code:

```ts title="Embedded function calling example"
import { runWithTools } from "@cloudflare/ai-utils";

type Env = {
	AI: Ai;
};

export default {
	async fetch(request, env, ctx) {
		// Define function
		const sum = (args: { a: number; b: number }): Promise<string> => {
			const { a, b } = args;
			return Promise.resolve((a + b).toString());
		};
		// Run AI inference with function calling
		const response = await runWithTools(
			env.AI,
			// Model with function calling support
			"@hf/nousresearch/hermes-2-pro-mistral-7b",
			{
				// Messages
				messages: [
					{
						role: "user",
						content: "What is the result of 123123123 + 10343030?",
					},
				],
				// Definition of available tools the AI model can leverage
				tools: [
					{
						name: "sum",
						description: "Sums up two numbers and returns the result",
						parameters: {
							type: "object",
							properties: {
								a: { type: "number", description: "the first number" },
								b: { type: "number", description: "the second number" },
							},
							required: ["a", "b"],
						},
						// Reference to the previously defined function
						function: sum,
					},
				],
			},
		);
		return new Response(JSON.stringify(response));
	},
} satisfies ExportedHandler<Env>;
```

This example imports the utils with `import { runWithTools } from "@cloudflare/ai-utils"` and follows the [API reference](/workers-ai/function-calling/embedded/api-reference/). Moreover, in this example we define and describe a list of tools that the LLM can leverage to respond to the user query. Here, the list contains only one tool, the `sum` function.
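Internally, `runWithTools` sends the tool schemas to the model and then invokes the matching function with the arguments the model selects. That dispatch step can be sketched in isolation; the `ToolCall` shape, `toolRegistry`, and `executeToolCall` below are illustrative names, not the package's actual types:

```typescript
// Sketch of the dispatch step runWithTools performs internally:
// the model picks a tool name and arguments, and the matching
// function from the `tools` array is invoked with those arguments.
// These shapes are illustrative, not @cloudflare/ai-utils types.
type ToolCall = { name: string; arguments: { a: number; b: number } };

const toolRegistry = {
	sum: (args: { a: number; b: number }): Promise<string> =>
		Promise.resolve((args.a + args.b).toString()),
};

async function executeToolCall(call: ToolCall): Promise<string> {
	const fn = toolRegistry[call.name as keyof typeof toolRegistry];
	if (!fn) throw new Error(`Unknown tool: ${call.name}`);
	return fn(call.arguments);
}

// For example, the model answers the prompt above by requesting
// sum(123123123, 10343030):
const result = await executeToolCall({
	name: "sum",
	arguments: { a: 123123123, b: 10343030 },
});
console.log(result); // "133466153"
```

The function result is then sent back to the model, which incorporates it into the final response.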
Abstracted by the `runWithTools` function, the following steps occur:

```mermaid
sequenceDiagram
    participant Worker as Worker
    participant WorkersAI as Workers AI

    Worker->>+WorkersAI: Send messages, function calling prompt, and available tools
    WorkersAI->>+Worker: Select tools and arguments for function calling
    Worker-->>-Worker: Execute function
    Worker-->>+WorkersAI: Send messages, function calling prompt and function result
    WorkersAI-->>-Worker: Send response incorporating function output
```

The `ai-utils package` is also open-sourced on [GitHub](https://github.com/cloudflare/ai-utils).

## 4. Local development & deployment

Follow steps 4 and 5 of the [Workers AI Get Started Guide](/workers-ai/get-started/workers-wrangler/) for local development and deployment.

:::note[Workers AI Embedded Function Calling charges]

Embedded function calling runs Workers AI inference requests. Standard inference charges (for example, for tokens used) apply. Resources consumed during the embedded functions' code execution (for example, CPU time) are billed just like any other Worker's code execution.

:::

## API reference

For more details, refer to [API reference](/workers-ai/function-calling/embedded/api-reference/).

---

# Embedded

URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/

import { DirectoryListing } from "~/components"

Cloudflare has a unique [embedded function calling](https://blog.cloudflare.com/embedded-function-calling) feature that allows you to execute function code alongside your tool call inference. Our npm package [`@cloudflare/ai-utils`](https://www.npmjs.com/package/@cloudflare/ai-utils) is the developer toolkit to get started.

Embedded function calling can be used to easily make complex agents that interact with websites and APIs, such as using natural language to create meetings on Google Calendar, saving data to Notion, automatically routing requests to other APIs, saving data to an R2 bucket - or all of this at the same time.
All you need is a prompt and an OpenAPI spec to get started.

:::caution[REST API support]

Embedded function calling depends on features native to the Workers platform. This means that embedded function calling is only supported via [Cloudflare Workers](/workers-ai/get-started/workers-wrangler/), not via the [REST API](/workers-ai/get-started/rest-api/).

:::

## Resources

<DirectoryListing />

---

# Troubleshooting

URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/troubleshooting/

This section describes troubleshooting tools and addresses common errors.

## Logging

General [logging](/workers/observability/logs/) capabilities for Workers also apply to embedded function calling.

### Function invocations

The invocations of tools can be logged as in any Worker using `console.log()`:

```ts title="Logging tool invocations" {6}
export default {
	async fetch(request, env, ctx) {
		const sum = (args: { a: number; b: number }): Promise<string> => {
			const { a, b } = args;
			// Logging from within embedded function invocations
			console.log(`The sum function has been invoked with the arguments a: ${a} and b: ${b}`);
			return Promise.resolve((a + b).toString());
		};
		...
	}
}
```

### Logging within `runWithTools`

The `runWithTools` function has a `verbose` mode that emits helpful logs for debugging of function calls, as well as input and output statistics.

```ts title="Enabled verbose mode" {13}
const response = await runWithTools(
	env.AI,
	'@hf/nousresearch/hermes-2-pro-mistral-7b',
	{
		messages: [
			...
		],
		tools: [
			...
		],
	},
	// Enable verbose mode
	{ verbose: true }
);
```

## Performance

Responding to an LLM prompt with embedded function calling may require multiple AI inference requests and function invocations, which can have an impact on user experience.
Consider the following to improve performance:

* Shorten prompts (to reduce time for input processing)
* Reduce the number of tools provided
* Stream the final response to the end user (to minimize the time to interaction). See the example below:

```ts title="Streamed response example" {15}
async fetch(request, env, ctx) {
	const response = (await runWithTools(
		env.AI,
		'@hf/nousresearch/hermes-2-pro-mistral-7b',
		{
			messages: [
				...
			],
			tools: [
				...
			],
		},
		{
			// Enable response streaming
			streamFinalResponse: true,
		}
	)) as ReadableStream;

	// Set response headers for streaming
	return new Response(response, {
		headers: {
			'content-type': 'text/event-stream',
		},
	});
}
```

## Common Errors

If you are getting a `BadInput` error, your inputs may exceed the model's context window. Try reducing input tokens to resolve this error.

---

# Add New AI Models to your Playground (Part 2)

URL: https://developers.cloudflare.com/workers-ai/tutorials/image-generation-playground/image-generator-flux-newmodels/

import { Details, DirectoryListing, Stream } from "~/components"

In part 2, Kristian expands upon the environment built in part 1 by showing you how to integrate new AI models and introduce new parameters that allow you to customize how images are generated.

<Stream id="167ba3a7a86f966650f3315e6cb02e0d" title="Add New AI Models to your Playground (Part 2)" thumbnail="13.5s" showMoreVideos={false} />

Refer to the AI Image Playground [GitHub repository](https://github.com/kristianfreeman/workers-ai-image-playground) to follow along locally.

<Details header="Video series" open>

<DirectoryListing folder="workers-ai/tutorials/image-generation-playground" />

</Details>

---

# Build an AI Image Generator Playground (Part 1)

URL: https://developers.cloudflare.com/workers-ai/tutorials/image-generation-playground/image-generator-flux/

import { Details, DirectoryListing, Stream } from "~/components"

The new Flux models on Workers AI are our most powerful text-to-image AI models yet.
In this video, we show you how to deploy your own Workers AI Image Playground in just a few minutes.

There are many businesses being built on top of AI image generation models. Using Workers AI, you can get access to the best models in the industry without having to worry about inference, ops, or deployment. We provide the API for AI image generation, and you get an image back within a couple of seconds.

<Stream id="aeafae151e84a81be19c52c2348e9bab" title="Build an AI Image Generator Playground (Part 1)" thumbnail="2.5s" showMoreVideos={false} />

Refer to the AI Image Playground [GitHub repository](https://github.com/kristianfreeman/workers-ai-image-playground) to follow along locally.

<Details header="Video series" open>

<DirectoryListing folder="workers-ai/tutorials/image-generation-playground" />

</Details>

---

# Store and Catalog AI Generated Images with R2 (Part 3)

URL: https://developers.cloudflare.com/workers-ai/tutorials/image-generation-playground/image-generator-store-and-catalog/

import { Details, DirectoryListing, Stream } from "~/components"

In the final part of the AI Image Playground series, Kristian teaches how to use Cloudflare's [R2](/r2) object storage to store and keep track of each AI-generated image.

<Stream id="86488269da24984c76fb10f69f4abb44" title="Store and Catalog AI Generated Images (Part 3)" thumbnail="2.5s" showMoreVideos={false} />

Refer to the AI Image Playground [GitHub repository](https://github.com/kristianfreeman/workers-ai-image-playground) to follow along locally.

<Details header="Video series" open>

<DirectoryListing folder="workers-ai/tutorials/image-generation-playground" />

</Details>

---

# How to Build an Image Generator using Workers AI

URL: https://developers.cloudflare.com/workers-ai/tutorials/image-generation-playground/

import { Details, DirectoryListing, Stream } from "~/components"

In this series of videos, Kristian Freeman builds an AI Image Playground. To get started, click on part 1 below.
<Details header="Video Series" open> <DirectoryListing folder="workers-ai/tutorials/image-generation-playground" /> </Details> --- # GitHub Actions URL: https://developers.cloudflare.com/workers/ci-cd/external-cicd/github-actions/ You can deploy Workers with [GitHub Actions](https://github.com/marketplace/actions/deploy-to-cloudflare-workers-with-wrangler). Here is how you can set up your GitHub Actions workflow. ## 1. Authentication When running Wrangler locally, authentication to the Cloudflare API happens via the [`wrangler login`](/workers/wrangler/commands/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](/fundamentals/api/get-started/create-token/) and [account ID](/fundamentals/setup/find-account-and-zone-ids/) to authenticate with the Cloudflare API. ### Cloudflare account ID To find your Cloudflare account ID, refer to [Find account and zone IDs](/fundamentals/setup/find-account-and-zone-ids/). ### API token To create an API token to authenticate Wrangler in your CI job: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com). 2. Select **My Profile** > **API Tokens**. 3. Select **Create Token** > find **Edit Cloudflare Workers** > select **Use Template**. 4. Customize your token name. 5. Scope your token. You will need to choose the account and zone resources that the generated API token will have access to. We recommend scoping these down as much as possible to limit the access of your token. For example, if you have access to three different Cloudflare accounts, you should restrict the generated API token to only the account on which you will be deploying a Worker. ## 2. Set up CI/CD The method for running Wrangler in your CI/CD environment will depend on the specific setup for your project (whether you use GitHub Actions/Jenkins/GitLab or something else entirely). To set up your CI/CD: 1. 
Go to your CI/CD platform and add the following as secrets:

- `CLOUDFLARE_ACCOUNT_ID`: Set to the [Cloudflare account ID](#cloudflare-account-id) for the account on which you want to deploy your Worker.
- `CLOUDFLARE_API_TOKEN`: Set to the [Cloudflare API token you generated](#api-token).

:::caution

Don't store the value of `CLOUDFLARE_API_TOKEN` in your repository, as it gives access to deploy Workers on your account. Instead, you should utilize your CI/CD provider's support for storing secrets.

:::

2. Create a workflow that will be responsible for deploying the Worker. This workflow should run `wrangler deploy`. Review an example [GitHub Actions](https://docs.github.com/en/actions/using-workflows/about-workflows) workflow in the following section.

### GitHub Actions

Cloudflare provides [an official action](https://github.com/cloudflare/wrangler-action) for deploying Workers. Refer to the following example workflow, which deploys your Worker on push to the `main` branch:

```yaml
name: Deploy Worker
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - name: Build & Deploy Worker
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```

---

# GitLab CI/CD

URL: https://developers.cloudflare.com/workers/ci-cd/external-cicd/gitlab-cicd/

You can deploy Workers with [GitLab CI/CD](https://docs.gitlab.com/ee/ci/pipelines/index.html). Here is how you can set up your GitLab CI/CD pipeline.

## 1. Authentication

When running Wrangler locally, authentication to the Cloudflare API happens via the [`wrangler login`](/workers/wrangler/commands/#login) command, which initiates an interactive authentication flow.
Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](/fundamentals/api/get-started/create-token/) and [account ID](/fundamentals/setup/find-account-and-zone-ids/) to authenticate with the Cloudflare API. ### Cloudflare account ID To find your Cloudflare account ID, refer to [Find account and zone IDs](/fundamentals/setup/find-account-and-zone-ids/). ### API token To create an API token to authenticate Wrangler in your CI job: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com). 2. Select **My Profile** > **API Tokens**. 3. Select **Create Token** > find **Edit Cloudflare Workers** > select **Use Template**. 4. Customize your token name. 5. Scope your token. You will need to choose the account and zone resources that the generated API token will have access to. We recommend scoping these down as much as possible to limit the access of your token. For example, if you have access to three different Cloudflare accounts, you should restrict the generated API token to only the account on which you will be deploying a Worker. ## 2. Set up CI The method for running Wrangler in your CI/CD environment will depend on the specific setup for your project (whether you use GitHub Actions/Jenkins/GitLab or something else entirely). To set up your CI: 1. Go to your CI platform and add the following as secrets: - `CLOUDFLARE_ACCOUNT_ID`: Set to the [Cloudflare account ID](#cloudflare-account-id) for the account on which you want to deploy your Worker. - `CLOUDFLARE_API_TOKEN`: Set to the [Cloudflare API token you generated](#api-token). :::caution Don't store the value of `CLOUDFLARE_API_TOKEN` in your repository, as it gives access to deploy Workers on your account. Instead, you should utilize your CI/CD provider's support for storing secrets. ::: 2. Create a workflow that will be responsible for deploying the Worker. This workflow should run `wrangler deploy`. 
Review an example GitLab pipeline in the following section.

### GitLab Pipelines

Refer to [GitLab's blog](https://about.gitlab.com/blog/2022/11/21/deploy-remix-with-gitlab-and-cloudflare/) for an example pipeline. Under the `script` key, replace `npm run deploy` with [`npx wrangler deploy`](/workers/wrangler/commands/#deploy).

---

# External CI/CD

URL: https://developers.cloudflare.com/workers/ci-cd/external-cicd/

Deploying Cloudflare Workers with CI/CD ensures reliable, automated deployments for every code change. If you prefer to use your existing CI/CD provider instead of [Workers Builds](/workers/ci-cd/builds/), this section offers guides for popular providers:

- [**GitHub Actions**](/workers/ci-cd/external-cicd/github-actions/)
- [**GitLab CI/CD**](/workers/ci-cd/external-cicd/gitlab-cicd/)

Other CI/CD options, including but not limited to Terraform, CircleCI, and Jenkins, can also be used to deploy Workers following a similar setup process.

---

# Advanced setups

URL: https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/

## Monorepos

To set up a monorepo workflow:

1. Find the Workers associated with your project in the [Workers & Pages Dashboard](https://dash.cloudflare.com).
2. Connect your monorepo to each Worker in the repository.
3. Set the root directory for each Worker to specify the location of its `wrangler.toml` and where build and deploy commands should run.
4. Optionally, configure unique build and deploy commands for each Worker.
5. Optionally, configure [build watch paths](/workers/ci-cd/builds/build-watch-paths/) for each Worker to monitor specific paths for changes.

When a new commit is made to the monorepo, a new build and deploy will trigger for each Worker if the change is within each of its included watch paths.
You can also check on the status of each build associated with your repository within GitHub with [check runs](/workers/ci-cd/builds/git-integration/github-integration/#check-run) or within GitLab with [commit statuses](/workers/ci-cd/builds/git-integration/gitlab-integration/#commit-status). ### Example In the example `ecommerce-monorepo`, a Workers project should be created for `product-service`, `order-service`, and `notification-service`. A Git connection to `ecommerce-monorepo` should be added in all of the Workers projects. If you are using a monorepo tool, such as [Turborepo](https://turbo.build/), you can configure a different deploy command for each Worker, for example, `turbo deploy -F product-service`. Set the root directory of each Worker to where its wrangler.toml is located. For example, for `product-service`, the root directory should be `/workers/product-service/`. Optionally, you can add [build watch paths](/workers/ci-cd/builds/build-watch-paths/) to optimize your builds. When a new commit is made to `ecommerce-monorepo`, a build and deploy will be triggered for each of the Workers if the change is within its included watch paths using the configured commands for that Worker. ``` ecommerce-monorepo/ │ ├── workers/ │ ├── product-service/ │ │ ├── src/ │ │ └── wrangler.toml │ ├── order-service/ │ │ ├── src/ │ │ └── wrangler.toml │ └── notification-service/ │ ├── src/ │ └── wrangler.toml ├── shared/ │ └── utils/ └── README.md ``` ## Wrangler Environments You can use [Wrangler Environments](/workers/wrangler/environments/) with Workers Builds by completing the following steps: 1. [Deploy via Wrangler](/workers/wrangler/commands/#deploy) to create the Workers for your environments on the Dashboard, if you do not already have them. 2. Find the Workers for your environments. They are typically named `[name of Worker] - [environment name]`. 3. Connect your repository to each of the Workers for your environment. 4. 
In each of the Workers, edit your Wrangler deploy command to include the flag `--env <environment name>` in the build configurations. When a new commit is detected in the repository, a new build and deploy will trigger for each associated Worker. ### Example Imagine you have a Worker named `my-worker`, and you want to set up two environments, `staging` and `production`, defined in your `wrangler.toml`. If you have not already, deploy `my-worker` for each environment using the commands `wrangler deploy --env staging` and `wrangler deploy --env production`. In your Cloudflare Dashboard, you should find the two Workers `my-worker-staging` and `my-worker-production`. Then, connect the Git repository for the Worker, `my-worker`, to both of the environment Workers. In the build configurations of each environment Worker, edit the deploy commands to be `npx wrangler deploy --env staging` and `npx wrangler deploy --env production` respectively. --- # Build caching URL: https://developers.cloudflare.com/workers/ci-cd/builds/build-caching/ Improve Workers build times by caching dependencies and build output between builds with a project-wide shared cache. The first build to occur after enabling build caching on your Workers project will save relevant artifacts to cache. Every subsequent build will restore from cache unless configured otherwise. ## About build cache When enabled, build caching will automatically detect which package manager and framework the project is using from its `package.json` and cache data accordingly for the build. The following shows which package managers and frameworks are supported for dependency and build output caching respectively.
### Package managers Workers build cache will cache the global cache directories of the following package managers: | Package Manager | Directories cached | | ----------------------------- | -------------------- | | [npm](https://www.npmjs.com/) | `.npm` | | [yarn](https://yarnpkg.com/) | `.cache/yarn` | | [pnpm](https://pnpm.io/) | `.pnpm-store` | | [bun](https://bun.sh/) | `.bun/install/cache` | ### Frameworks Some frameworks provide a cache directory that is typically populated by the framework with intermediate build outputs or dependencies during build time. Workers Builds will automatically detect the framework you are using and cache this directory for reuse in subsequent builds. The following frameworks support build output caching: | Framework | Directories cached | | ---------- | --------------------------------------------- | | Astro | `node_modules/.astro` | | Docusaurus | `node_modules/.cache`, `.docusaurus`, `build` | | Eleventy | `.cache` | | Gatsby | `.cache`, `public` | | Next.js | `.next/cache` | | Nuxt | `node_modules/.cache/nuxt` | :::note [Static assets](/workers/static-assets/) and [frameworks](/workers/frameworks/) are now supported in Cloudflare Workers. ::: ### Limits The following limits are imposed for build caching: - **Retention**: Cache is purged 7 days after its last read date. Unread cache artifacts are purged 7 days after creation. - **Storage**: Every project is allocated 10 GB. If the project cache exceeds this limit, the project will automatically start deleting artifacts that were read least recently. ## Enable build cache To enable build caching: 1. Navigate to [Workers & Pages Overview](https://dash.cloudflare.com) on the Dashboard. 2. Find your Workers project. 3. Go to **Settings** > **Build** > **Build cache**. 4. Select **Enable** to turn on build caching. ## Clear build cache The build cache can be cleared for a project when needed, such as when debugging build issues. To clear the build cache: 1. 
Navigate to [Workers & Pages Overview](https://dash.cloudflare.com) on the Dashboard. 2. Find your Workers project. 3. Go to **Settings** > **Build** > **Build cache**. 4. Select **Clear Cache** to clear the build cache. --- # Build image URL: https://developers.cloudflare.com/workers/ci-cd/builds/build-image/ Workers Builds uses a build image with support for a variety of languages and tools such as Node.js, Python, PHP, Ruby, and Go. ## Supported Tooling Workers Builds supports a variety of runtimes, languages, and tools. Builds will use the default versions listed below unless a custom version is detected or specified. You can [override the default versions](/workers/ci-cd/builds/build-image/#overriding-default-versions) using environment variables or version files. All versions are available for override. :::note[Default version updates] The default versions will be updated regularly to the latest minor version. No major version updates will be made without notice. If you need a specific minor version, please specify it by overriding the default version. 
::: ### Runtime | Tool | Default version | Environment variable | File | | --- | --- | --- | --- | | **Go** | 1.23.0 | `GO_VERSION` | | | **Node.js** | 22.9.0 | `NODE_VERSION` | `.nvmrc`, `.node-version` | | **Python** | 3.12.5 | `PYTHON_VERSION` | `.python-version`, `runtime.txt` | | **Ruby** | 3.3.5 | `RUBY_VERSION` | `.ruby-version` | ### Tools and languages | Tool | Default version | Environment variable | | --- | --- | --- | | **Bun** | 1.1.33 | `BUN_VERSION` | | **Hugo** | extended_0.134.3 | `HUGO_VERSION` | | **npm** | 10.8.3 | | | **yarn** | 4.5.0 | `YARN_VERSION` | | **pnpm** | 9.10.0 | `PNPM_VERSION` | | **pip** | 24.2 | | | **gem** | 3.5.19 | | | **poetry** | 1.8.3 | | | **pipx** | 1.7.1 | | | **bundler** | 2.4.10 | | ## Advanced Settings ### Overriding Default Versions If you need to override a [specific version](/workers/ci-cd/builds/build-image/#overriding-default-versions) of a language or tool within the image, you can specify it as a [build environment variable](/workers/ci-cd/builds/configuration/#build-settings), or set the relevant file in your source code as shown above. To set the version using a build environment variable: 1. Find the environment variable name for the language or tool and the desired version (for example, `NODE_VERSION = 22`). 2. Add and save the environment variable on the dashboard by going to **Settings** > **Build** > **Build Variables and Secrets** in your Workers project. Or, to set the version by adding a file to your project: 1. Find the filename for the language or tool (for example, `.nvmrc`). 2. Add the specified file to the root directory and set the desired version number as the file's content. For example, if the version number is 22, the file should contain `22`. ### Skip dependency install You can add the following build variable to disable automatic dependency installation and run a custom install command instead.
| Build variable | Value | | ------------------------- | ------------- | | `SKIP_DEPENDENCY_INSTALL` | `1` or `true` | ## Pre-installed Packages In the following table, review the pre-installed packages in the build image. The packages are installed with `apt`, a package manager for Linux distributions. | | | | | ----------------- | ----------------- | ----------------- | | `curl` | `libbz2-dev` | `libreadline-dev` | | `git` | `libc++1` | `libssl-dev` | | `git-lfs` | `libdb-dev` | `libvips-dev` | | `unzip` | `libgdbm-dev` | `libyaml-dev` | | `autoconf` | `libgdbm6` | `tzdata` | | `build-essential` | `libgbm1` | `wget` | | `bzip2` | `libgmp-dev` | `zlib1g-dev` | | `gnupg` | `liblzma-dev` | `zstd` | | `libffi-dev` | `libncurses5-dev` | | ## Build Environment Workers Builds are run in the following environment: | | | | --------------------- | ------------ | | **Build Environment** | Ubuntu 24.04 | | **Architecture** | x86_64 | --- # Build watch paths URL: https://developers.cloudflare.com/workers/ci-cd/builds/build-watch-paths/ When you connect a Git repository to Workers, by default a change to any file in the repository will trigger a build. You can configure Workers to include or exclude specific paths to specify if Workers should skip a build for a given path. This can be especially helpful if you are using a monorepo project structure and want to limit the number of builds triggered. ## Configure Paths To configure which paths are included and excluded: 1. In **Overview**, select your Workers project. 2. Go to **Settings** > **Build** > **Build watch paths**. Workers will default to setting your project’s includes paths to everything (`[*]`) and excludes paths to nothing (`[]`). The configuration fields can be filled in two ways: - **Static filepaths**: Enter the precise name of the file you are looking to include or exclude (for example, `docs/README.md`). - **Wildcard syntax:** Use wildcards to match multiple path directories.
You can specify wildcards at the start or end of your rule. :::note[Wildcard syntax] A wildcard (`*`) is a character that is used within rules. It can be placed alone to match anything or placed at the start or end of a rule to allow for better control over branch configuration. A wildcard will match zero or more characters. For example, if you wanted to match all branches that started with `fix/`, then you would create the rule `fix/*` to match strings like `fix/1`, `fix/bugs`, or `fix/`. ::: For each path in a push event, build watch paths will be evaluated as follows: - Paths satisfying excludes conditions are ignored first - Any remaining paths are checked against includes conditions - If any matching path is found, a build is triggered. Otherwise the build is skipped Workers will bypass the path matching for a push event and default to building the project if: - A push event contains 0 file changes, in case a user pushes an empty push event to trigger a build - A push event contains 3000+ file changes or 20+ commits ## Examples ### Example 1 If you want to trigger a build from all changes within a set of directories, such as all changes in the folders `project-a/` and `packages/`: - Include paths: `project-a/*, packages/*` - Exclude paths: \`\` ### Example 2 If you want to trigger a build for any changes, but want to exclude changes to a certain directory, such as all changes in a `docs/` directory: - Include paths: `*` - Exclude paths: `docs/*` ### Example 3 If you want to trigger a build for a specific file or filetype, for example, all files ending in `.md`: - Include paths: `*.md` - Exclude paths: \`\` --- # Configuration URL: https://developers.cloudflare.com/workers/ci-cd/builds/configuration/ import { DirectoryListing } from "~/components"; When connecting your Git repository to your Worker, you can customize the configurations needed to build and deploy your Worker.
## Build settings Build settings can be found by navigating to **Settings** > **Build** within your Worker. Note that when you update and save build settings, the updated settings will be applied to your _next_ build. When you _retry_ a build, the build configurations that exist when the build is retried will be applied. ### Overview | Setting | Description | | --- | --- | | **Git account** | Select the Git account you would like to use. After the initial connection, you can continue to use this Git account for future projects. | | **Git repository** | Choose the Git repository you would like to connect your Worker to. | | **Git branch** | Select the branch you would like Cloudflare to listen to for new commits. This defaults to `main`. | | **Build command** _(Optional)_ | Set a build command if your project requires a build step (e.g. `npm run build`). This is necessary, for example, when using a [front-end framework](/workers/ci-cd/builds/configuration/#framework-support) such as Next.js or Remix. | | **[Deploy command](/workers/ci-cd/builds/configuration/#deploy-command)** | The deploy command lets you set the [specific Wrangler command](/workers/wrangler/commands/#deploy) used to deploy your Worker. Your deploy command will default to `npx wrangler deploy`, but you may customize this command. Workers Builds will use the Wrangler version set in your `package.json`.
| | **Root directory** _(Optional)_ | Specify the path to your project. The root directory defines where the build command will be run and can be helpful in [monorepos](/workers/ci-cd/builds/advanced-setups/#monorepos) to isolate a specific project within the repository for builds. | | **[API token](/workers/ci-cd/builds/configuration/#api-token)** _(Optional)_ | The API token is used to authenticate your build request and authorize the upload and deployment of your Worker to Cloudflare. By default, Cloudflare will automatically generate an API token for your account when using Workers Builds, and continue to use this API token for all subsequent builds. Alternatively, you can [create your own API token](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/authentication/#generate-tokens), or select one that you already own. | | **Build variables and secrets** _(Optional)_ | Add environment variables and secrets accessible only to your build. Build variables will not be accessible at runtime. If you would like to configure runtime variables you can do so in **Settings** > **Variables & Secrets** | :::note Currently, Workers Builds does not honor the configurations set in [Custom Builds](/workers/wrangler/custom-builds/) within your wrangler.toml file. ::: ### Deploy command You can run your deploy command using the package manager of your choice. If you have added a Wrangler deploy command as a script in your `package.json`, then you can run it by setting it as your deploy command. For example, `npm run deploy`. 
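For instance, a `package.json` wired up this way might look like the following sketch (the Wrangler version range shown is illustrative):

```json
{
	"scripts": {
		"deploy": "wrangler deploy"
	},
	"devDependencies": {
		"wrangler": "^3.0.0"
	}
}
```

With this in place, setting the deploy command to `npm run deploy` runs the Wrangler version pinned in `package.json`.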
Examples of other deploy commands you can set include: | Example Command | Description | | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `npx wrangler versions upload` | Upload a [version](/workers/configuration/versions-and-deployments/) of your Worker without deploying it as the Active Deployment. | | `npx wrangler deploy --assets ./public/` | Deploy your Worker along with static assets from the specified directory. Alternatively, you can use the [assets binding](/workers/static-assets/binding/). | | `npx wrangler deploy --env staging` | If you have a [Wrangler environment](/workers/ci-cd/builds/advanced-setups/#wrangler-environments) Worker, you should set your deploy command with the environment flag. For more details, see [Advanced Setups](/workers/ci-cd/builds/advanced-setups/#wrangler-environments). | ### API token The API token in Workers Builds defines the access granted to Workers Builds for interacting with your account's resources. Currently, only user tokens are supported, with account-owned token support coming soon. When you select **Create new token**, a new API token will be created automatically with the following permissions: - **Account:** Account Settings (read), Workers Scripts (edit), Workers KV Storage (edit), Workers R2 Storage (edit) - **Zone:** Workers Routes (edit) for all zones on the account - **User:** User Details (read), Memberships (read) You can configure the permissions of this API token by navigating to **My Profile** > **API Tokens** for user tokens. It is recommended to consistently use the same API token across all uploads and deployments of your Worker to maintain consistent access permissions. 
## Framework support [Static assets](/workers/static-assets/) and [frameworks](/workers/frameworks/) are now supported in Cloudflare Workers. Learn to set up Workers projects and the commands for each framework in the framework guides: <DirectoryListing folder="workers/frameworks/framework-guides" /> --- # Builds URL: https://developers.cloudflare.com/workers/ci-cd/builds/ Workers Builds allows you to connect an _existing Worker_ to its GitHub or GitLab repository, enabling automated builds and deployments for your project on push. Support for creating a _new Worker_ from importing a Git repository is coming soon. ## Get started If you have an existing Worker and have pushed the project to a Git repository, you can now connect the repository to your Worker, enabling automatic builds and deployments. To set up builds for your Worker: 1. Select your Worker in the [Workers & Pages Dashboard](https://dash.cloudflare.com) and navigate to **Settings > Build**. 2. Select the Git provider you would like to connect to or select **Connect** and follow the prompts to install the Cloudflare [Git integration](/workers/ci-cd/builds/git-integration/) on your Git account. 3. Configure your [build settings](/workers/ci-cd/builds/configuration/) by selecting your desired Git repository, branch, and configure commands for your build. 4. Push a commit to your Git repository to trigger a build and deploy for your Worker. :::caution When connecting a repository to a Workers project, the Worker name in the Cloudflare dashboard must match the `name` in the wrangler.toml file in the specified root directory, or the build will fail. This ensures that the Worker deployed from the repository is consistent with the Worker registered in the Cloudflare dashboard. For details, see [Workers name requirement](/workers/ci-cd/builds/troubleshoot/#workers-name-requirement). 
::: ## View build and preview URL You can monitor a build's status and its build logs by navigating to **View build history** at the bottom of the **Deployments** tab of your Worker. If the build is successful, you can view the build details by selecting **View build** in the associated new [version](/workers/configuration/versions-and-deployments/) created under Version History. There you will also find the [preview URL](/workers/configuration/previews/) generated by the version under Version ID. :::note[Builds, versions, deployments] If a build succeeds, it is uploaded as a version. If the build is configured to deploy (for example, with `wrangler deploy` set as the deploy command), the uploaded version will be automatically promoted to the Active Deployment. ::: ## Disconnecting builds To disable automatic builds and deployments from your Git repository, go to **Settings** > **Builds** and select **Disconnect** under **Git Repositories**. If you want to switch to a different repository for your Worker, you must first disable builds, then reconnect to select the new repository. To disable automatic deployments while still allowing builds to run automatically and save as [versions](/workers/configuration/versions-and-deployments/) (without promoting them to an active deployment), update your deploy command to: `npx wrangler versions upload`. --- # Limits & pricing URL: https://developers.cloudflare.com/workers/ci-cd/builds/limits-and-pricing/ ## Limits While in open beta, the following limits are applicable for Workers Builds. Please note, these are subject to change while in beta. | Metric | Description | Limit | | --------------------- | ---------------------------------------------------------------------- | --------- | | **Build Minutes** | The amount of minutes that it takes to build a project. | Unlimited | | **Concurrent Builds** | Number of builds that run in parallel across an account. 
| 1 | | **Build Timeout** | The number of minutes a build can run before it is terminated. | 20 mins | | **CPU** | Number of CPU cores available to your build. | 2 CPUs | | **Memory** | Amount of memory available to the build. | 8 GB | | **Disk space** | Disk space available to your build. | 8 GB | ## Future pricing During the beta, Workers Builds will be free to try with the limits stated above. In the future, once Workers Builds becomes generally available, you can expect the following updates to the limits and pricing structure. | Metric | Workers Free | Paid | | ------------------------- | ------------- | ---------------------------------- | | **Build Minutes / Month** | 3,000 minutes | 6,000 minutes (then +$0.005 / min) | | **Concurrent Builds** | 1 | 6 | --- # Troubleshooting builds URL: https://developers.cloudflare.com/workers/ci-cd/builds/troubleshoot/ This guide explains how to identify and resolve build errors, as well as troubleshoot common issues in the Workers Builds deployment process. To view your build history, go to your Worker project in the Cloudflare dashboard, select **Deployments**, select **View Build History** at the bottom of the page, and select the build you want to view. To retry a build, select the ellipses next to the build and select **Retry build**. Alternatively, you can select **Retry build** on the Build Details page. ## Known issues or limitations Here are some common build errors and general issues that may surface in the build logs, along with how you can resolve them. ### Workers name requirement `✘ [ERROR] The name in your wrangler.toml file (<Worker name>) must match the name of your Worker. Please update the name field in your wrangler.toml file.` When connecting a Git repository to your Workers project, the specified name for the Worker on the Cloudflare dashboard must match the `name` argument in the wrangler.toml file located in the specified root directory.
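For example, if the Worker on the dashboard is named `my-worker`, the wrangler.toml in the configured root directory should declare the same name. A minimal sketch (the entry point and compatibility date are hypothetical):

```toml
# wrangler.toml: `name` must match the Worker name on the dashboard
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-09-01"
```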
If it does not match, update the name field in your wrangler.toml file to match the name of the Worker on the dashboard. The build system uses the `name` argument in the wrangler.toml to determine which Worker to deploy to Cloudflare's global network. This requirement ensures consistency between the Worker's name on the dashboard and the deployed Worker. :::note This does not apply to [Wrangler Environments](/workers/wrangler/environments/) if the Worker name before the `-<env_name>` suffix matches the name in wrangler.toml. For example, a Worker named `my-worker-staging` on the dashboard can be deployed from a repository that contains a wrangler.toml with the arguments `name = "my-worker"` and `[env.staging]` using the deploy command `npx wrangler deploy --env staging`. ::: ### Missing wrangler.toml `✘ [ERROR] Missing entry-point: The entry-point should be specified via the command line (e.g. wrangler deploy path/to/script) or the main config field.` If you see this error, a wrangler.toml is likely missing from the root directory. Navigate to **Settings** > **Build** > **Build Configuration** to update the root directory, or add a [wrangler.toml](/workers/wrangler/configuration/) to the specified directory. ### Incorrect account_id `Could not route to /client/v4/accounts/<Account ID>/workers/services/<Worker name>, perhaps your object identifier is invalid? [code: 7003]` If you see this error, the wrangler.toml likely has an `account_id` for a different account. Remove the `account_id` argument or update it with your account's `account_id`, available in **Workers & Pages Overview** under **Account Details**. ### Stale API token `Failed: The build token selected for this build has been deleted or rolled and cannot be used for this build. Please update your build token in the Worker Builds settings and retry the build.` The API Token dropdown in Build Configuration settings may show stale tokens that were edited, deleted, or rolled.
If you encounter an error due to a stale token, create a new API Token and select it for the build. ### Build timed out `Build was timed out` There is a maximum build duration of 20 minutes. If a build exceeds this time, then the build will be terminated and the above error log is shown. For more details, see [Workers Builds limits](/workers/ci-cd/builds/limits-and-pricing/). ### Git integration issues If you are running into errors associated with your Git integration, you can try removing access to your [GitHub](/workers/ci-cd/builds/git-integration/github-integration/#removing-access) or [GitLab](/workers/ci-cd/builds/git-integration/gitlab-integration/#removing-access) integration from Cloudflare, then reinstalling the [GitHub](/workers/ci-cd/builds/git-integration/github-integration/#reinstall-a-git-integration) or [GitLab](/workers/ci-cd/builds/git-integration/gitlab-integration/#reinstall-a-git-integration) integration. ## For additional support If you discover additional issues or would like to provide feedback, reach out to us in the [Cloudflare Developers Discord](https://discord.com/channels/595317990191398933/1052656806058528849). --- # APIs URL: https://developers.cloudflare.com/workers/configuration/integrations/apis/ To integrate with third party APIs from Cloudflare Workers, use the [fetch API](/workers/runtime-apis/fetch/) to make HTTP requests to the API endpoint. Then use the response data to modify or manipulate your content as needed. For example, if you want to integrate with a weather API, make a fetch request to the API endpoint and retrieve the current weather data. Then use this data to display the current weather conditions on your website. 
To make the `fetch()` request, add the following code to your project's `src/index.js` file:

```js
async function handleRequest(request) {
	// Make the fetch request to the third party API endpoint
	const response = await fetch("https://weather-api.com/endpoint", {
		method: "GET",
		headers: {
			Accept: "application/json",
		},
	});

	// Retrieve the data from the response
	const data = await response.json();

	// Use the data to modify or manipulate your content as needed,
	// then serialize it before returning it in the Response body
	return new Response(JSON.stringify(data), {
		headers: { "Content-Type": "application/json" },
	});
}
```

## Authentication If your API requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command:

```sh
wrangler secret put SECRET_NAME
```

Then, retrieve the secret value in your code using the following code snippet:

```js
const secretValue = env.SECRET_NAME;
```

Then use the secret value to authenticate with the external service. For example, if the external service requires an API key for authentication, include it in your request headers. For services that require mTLS authentication, use [mTLS certificates](/workers/runtime-apis/bindings/mtls) to present a client certificate. ## Tips - Use the Cache API to cache data from the third party API. This allows you to optimize cacheable requests made to the API. - Use [Custom Domains](/workers/configuration/routing/custom-domains/) when communicating with external APIs, which treat your Worker as your core application. Integrating with third party APIs from Cloudflare Workers adds additional functionality and features to your application. --- # External Services URL: https://developers.cloudflare.com/workers/configuration/integrations/external-services/ Many external services provide libraries and SDKs to interact with their APIs.
While many Node-compatible libraries work on Workers right out of the box, some do not: libraries that implement `fs` or `http`/`net`, or that access the browser `window`, do not directly translate to the Workers runtime, which is V8-based. ## Authentication If your service requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command:

```sh
wrangler secret put SECRET_NAME
```

Then, retrieve the secret value in your code using the following code snippet:

```js
const secretValue = env.SECRET_NAME;
```

Then use the secret value to authenticate with the external service. For example, if the external service requires an API key for authentication, include the secret in your library's configuration. For services that require mTLS authentication, use [mTLS certificates](/workers/runtime-apis/bindings/mtls) to present a client certificate. Use [Custom Domains](/workers/configuration/routing/custom-domains/) when communicating with external APIs, which treat your Worker as your core application. --- # Integrations URL: https://developers.cloudflare.com/workers/configuration/integrations/ One of the key features of Cloudflare Workers is the ability to integrate with other services and products. In this document, we will explain the types of integrations available with Cloudflare Workers and provide step-by-step instructions for using them. ## Types of integrations Cloudflare Workers offers several types of integrations, including: * [Databases](/workers/databases/): Cloudflare Workers can be integrated with a variety of databases, including SQL and NoSQL databases. This allows you to store and retrieve data from your databases directly from your Cloudflare Workers code.
* [APIs](/workers/configuration/integrations/apis/): Cloudflare Workers can be used to integrate with external APIs, allowing you to access and use the data and functionality exposed by those APIs in your own code. * [Third-party services](/workers/configuration/integrations/external-services/): Cloudflare Workers can be used to integrate with a wide range of third-party services, such as payment gateways, authentication providers, and more. This makes it possible to use these services in your Cloudflare Workers code. ## How to use integrations To use any of the available integrations: * Determine which integration you want to use and make sure you have the necessary accounts and credentials for it. * In your Cloudflare Workers code, import the necessary libraries or modules for the integration. * Use the provided APIs and functions to connect to the integration and access its data or functionality. * Store necessary secrets and keys using secrets via [`wrangler secret put <KEY>`](/workers/wrangler/commands/#secret). ## Tips and best practices To help you get the most out of using integrations with Cloudflare Workers: * Secure your integrations and protect sensitive data. Ensure you use secure authentication and authorization where possible, and ensure the validity of libraries you import. * Use [caching](/workers/reference/how-the-cache-works) to improve performance and reduce the load on an external service. * Split your Workers into service-oriented architecture using [Service bindings](/workers/runtime-apis/bindings/service-bindings/) to make your application more modular, easier to maintain, and more performant. * Use [Custom Domains](/workers/configuration/routing/custom-domains/) when communicating with external APIs and services, which create a DNS record on your behalf and treat your Worker as an application instead of a proxy. 
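As an illustration of the service-oriented tip above, a [Service binding](/workers/runtime-apis/bindings/service-bindings/) is declared in the calling Worker's wrangler.toml. A minimal sketch (the binding and service names are hypothetical):

```toml
# wrangler.toml of the calling Worker (hypothetical names)
services = [
	{ binding = "AUTH", service = "auth-worker" }
]
```

The bound Worker is then available in your code as `env.AUTH`, letting one Worker call another without leaving Cloudflare's network.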
--- # Momento URL: https://developers.cloudflare.com/workers/configuration/integrations/momento/ [Momento](https://gomomento.com/) is a truly serverless caching service. It automatically optimizes, scales, and manages your cache for you. This integration allows you to connect to Momento from your Worker by getting your Momento cache configuration and adding it as [secrets](/workers/configuration/environment-variables/) to your Worker. ## Momento Cache To set up an integration with Momento Cache: 1. You need to have an existing Momento cache to connect to, or create a new cache through the [Momento console](https://console.gomomento.com/). 2. If you do not have an existing cache, create one and assign `user-profiles` as the cache name. 3. Add the Momento database integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Momento**. 5. Follow the setup flow, then review and grant the permissions needed to add secrets to your Worker. 6. Next, connect to Momento. 7. Select a preferred region. 8. Select **Add integration**. 4. The following example code shows how to set an item in your cache, get it, and return it as a JSON object. The credentials needed to connect to Momento Cache have been automatically added as [secrets](/workers/configuration/secrets/) to your Worker through the integration.
```ts // MomentoFetcher is a thin client for Momento's HTTP API; it is not shown here, so define or import it in your project per Momento's documentation. export default { async fetch(request, env, ctx): Promise<Response> { const client = new MomentoFetcher(env.MOMENTO_API_KEY, env.MOMENTO_REST_ENDPOINT); const cache = env.MOMENTO_CACHE_NAME; const key = 'user'; const f_name = 'mo'; const l_name = 'squirrel'; const value = `${f_name}_${l_name}`; // set a value into cache const setResponse = await client.set(cache, key, value); console.log('setResponse', setResponse); // read a value from cache const getResponse = await client.get(cache, key); console.log('getResponse', getResponse); return new Response(JSON.stringify({ response: getResponse })); }, } satisfies ExportedHandler<Env>; ``` To learn more about Momento, refer to [Momento's official documentation](https://docs.momentohq.com/getting-started). --- # Routes and domains URL: https://developers.cloudflare.com/workers/configuration/routing/ To allow a Worker to receive inbound HTTP requests, you must connect it to an external endpoint so that it can be accessed from the Internet. There are three types of routes: - [Custom Domains](/workers/configuration/routing/custom-domains): Routes to a domain or subdomain (such as `example.com` or `shop.example.com`) within a Cloudflare zone where the Worker is the origin. - [Routes](/workers/configuration/routing/routes/): Routes set within a Cloudflare zone where your origin server, if you have one, sits behind the Worker, and the Worker can communicate with it. - [`workers.dev`](/workers/configuration/routing/workers-dev/): A `workers.dev` subdomain route is automatically created for each Worker to help you get started quickly. You may choose to [disable](/workers/configuration/routing/workers-dev/) your `workers.dev` subdomain. ## What is best for me? It's recommended to run production Workers on a [Workers route or custom domain](/workers/configuration/routing/), rather than on your `workers.dev` subdomain.
Your `workers.dev` subdomain is treated as a [Free website](https://www.cloudflare.com/plans/) and is intended for personal or hobby projects that aren't business-critical. Custom Domains are recommended for use cases where your Worker is your application's origin server. Custom Domains can also be invoked within the same zone via `fetch()`, unlike Routes. Routes are recommended for use cases where your application's origin server is external to Cloudflare. Note that Routes cannot be the target of a same-zone `fetch()` call. --- # Custom Domains URL: https://developers.cloudflare.com/workers/configuration/routing/custom-domains/ import { WranglerConfig } from "~/components"; ## Background Custom Domains allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management. After you set up a Custom Domain for your Worker, Cloudflare will create DNS records and issue necessary certificates on your behalf. The created DNS records will point directly to your Worker. Unlike [Routes](/workers/configuration/routing/routes/#set-up-a-route), Custom Domains point all paths of a domain or subdomain to your Worker. Custom Domains are routes to a domain or subdomain (such as `example.com` or `shop.example.com`) within a Cloudflare zone where the Worker is the origin. Custom Domains are recommended if you want to connect your Worker to the Internet and do not have an application server that you want to always communicate with. If you do have external dependencies, you can create a `Request` object with the target URI, and use `fetch()` to reach out. Custom Domains can stack on top of each other. For example, if you have Worker A attached to `app.example.com` and Worker B attached to `api.example.com`, Worker A can call `fetch()` on `api.example.com` and invoke Worker B.  Custom Domains can also be invoked within the same zone via `fetch()`, unlike Routes. 
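The stacking behavior described above can be sketched as follows. This is an illustration only: both hostnames are hypothetical, and `worker-a` is assumed to be attached to the Custom Domain `app.example.com`:

```js
// worker-a, attached to the Custom Domain app.example.com.
// Because api.example.com is also a Custom Domain (not a Route), this
// same-zone fetch() invokes worker-b without needing a service binding.
export default {
	async fetch(request) {
		const apiResponse = await fetch(targetUrl("api.example.com", "/status"));
		return new Response(`worker-b responded with ${apiResponse.status}`);
	},
};

// Tiny helper, exported so the URL construction can be checked in isolation.
export function targetUrl(hostname, path) {
	return `https://${hostname}${path}`;
}
```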
## Add a Custom Domain To add a Custom Domain, you must have: 1. An [active Cloudflare zone](/dns/zone-setups/). 2. A Worker to invoke. Custom Domains can be attached to your Worker via the [Cloudflare dashboard](/workers/configuration/routing/custom-domains/#set-up-a-custom-domain-in-the-dashboard), [Wrangler](/workers/configuration/routing/custom-domains/#set-up-a-custom-domain-in-your-wrangler-configuration-file), or the [API](/api/resources/workers/subresources/domains/methods/list/). :::caution You cannot create a Custom Domain on a hostname with an existing CNAME DNS record or on a zone you do not own. ::: ### Set up a Custom Domain in the dashboard To set up a Custom Domain in the dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Workers & Pages** and in **Overview**, select your Worker. 3. Go to **Settings** > **Domains & Routes** > **Add** > **Custom Domain**. 4. Enter the domain you want to configure for your Worker. 5. Select **Add Custom Domain**. After you have added the domain or subdomain, Cloudflare will create a new DNS record for you. You can add multiple Custom Domains. ### Set up a Custom Domain in your Wrangler configuration file To configure a Custom Domain in your [Wrangler configuration file](/workers/wrangler/configuration/), add the `custom_domain = true` option on each pattern under `routes`.
For example, to configure a Custom Domain: <WranglerConfig> ```toml routes = [ { pattern = "shop.example.com", custom_domain = true } ] ``` </WranglerConfig> To configure multiple Custom Domains: <WranglerConfig> ```toml routes = [ { pattern = "shop.example.com", custom_domain = true }, { pattern = "shop-two.example.com", custom_domain = true } ] ``` </WranglerConfig> ## Worker to Worker communication On the same zone, the only way for a Worker to communicate with another Worker running on a [route](/workers/configuration/routing/routes/#set-up-a-route), or on a [`workers.dev`](/workers/configuration/routing/routes/#_top) subdomain, is via [service bindings](/workers/runtime-apis/bindings/service-bindings/). On the same zone, if a Worker is attempting to communicate with a target Worker running on a Custom Domain rather than a route, the limitation is removed. Fetch requests sent on the same zone from one Worker to another Worker running on a Custom Domain will succeed without a service binding. For example, consider the following scenario, where both Workers are running on the `example.com` Cloudflare zone: * `worker-a` running on the [route](/workers/configuration/routing/routes/#set-up-a-route) `auth.example.com/*`. * `worker-b` running on the [route](/workers/configuration/routing/routes/#set-up-a-route) `shop.example.com/*`. If `worker-a` sends a fetch request to `worker-b`, the request will fail, because of the limitation on same-zone fetch requests. `worker-a` must have a service binding to `worker-b` for this request to resolve. ```js title="worker-a" {4} export default { fetch(request) { // This will fail return fetch("https://shop.example.com") } } ``` However, if `worker-b` was instead set up to run on the Custom Domain `shop.example.com`, the fetch request would succeed. ## Request matching behaviour Custom Domains do not support [wildcard DNS records](/dns/manage-dns-records/reference/wildcard-dns-records/). 
An incoming request must exactly match the domain or subdomain your Custom Domain is registered to. Other parts (path, query parameters) of the URL are not considered when executing this matching logic. For example, if you create a Custom Domain on `api.example.com` attached to your `api-gateway` Worker, a request to either `api.example.com/login` or `api.example.com/user` would invoke the same `api-gateway` Worker.  ## Interaction with Routes A Worker running on a Custom Domain is treated as an origin. Any Workers running on routes before your Custom Domain can optionally call the Worker registered on your Custom Domain by issuing `fetch(request)` with the incoming `Request` object. That means that you are able to set up Workers to run before a request gets to your Custom Domain Worker. In other words, you can chain together two Workers in the same request. For example, consider the following workflow: 1. A Custom Domain for `api.example.com` points to your `api-worker` Worker. 2. A route added to `api.example.com/auth` points to your `auth-worker` Worker. 3. A request to `api.example.com/auth` will trigger your `auth-worker` Worker. 4. Using `fetch(request)` within the `auth-worker` Worker will invoke the `api-worker` Worker, as if it was a normal application server. ```js title="auth-worker" {8} export default { fetch(request) { const url = new URL(request.url) if(url.searchParams.get("auth") !== "SECRET_TOKEN") { return new Response(null, { status: 401 }) } else { // This will invoke `api-worker` return fetch(request) } } } ``` ## Certificates Creating a Custom Domain will also generate an [Advanced Certificate](/ssl/edge-certificates/advanced-certificate-manager/) on your target zone for your target hostname. These certificates are generated with default settings. To override these settings, delete the generated certificate and create your own certificate in the Cloudflare dashboard. 
Refer to [Manage advanced certificates](/ssl/edge-certificates/advanced-certificate-manager/manage-certificates/) for instructions. ## Migrate from Routes If you are currently invoking a Worker using a [route](/workers/configuration/routing/routes/) with `/*`, and you have a CNAME record pointing to `100::` or similar, a Custom Domain is a recommended replacement. ### Migrate from Routes via the dashboard To migrate the route `example.com/*`: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **DNS** and delete the CNAME record for `example.com`. 3. Go to **Account Home** > **Workers & Pages**. 4. In **Overview**, select your Worker > **Settings** > **Domains & Routes**. 5. Select **Add** > **Custom domain** and add `example.com`. 6. Delete the route `example.com/*` located in your Worker > **Settings** > **Domains & Routes**. ### Migrate from Routes via Wrangler To migrate the route `example.com/*` in your [Wrangler configuration file](/workers/wrangler/configuration/): 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **DNS** and delete the CNAME record for `example.com`. 3. Add the following to your Wrangler file: <WranglerConfig> ```toml routes = [ { pattern = "example.com", custom_domain = true } ] ``` </WranglerConfig> 4. Run `npx wrangler deploy` to create the Custom Domain your Worker will run on. --- # Routes URL: https://developers.cloudflare.com/workers/configuration/routing/routes/ import { WranglerConfig } from "~/components"; ## Background Routes allow users to map a URL pattern to a Worker. When a request comes in to the Cloudflare network that matches the specified URL pattern, your Worker will execute on that route. Routes are a set of rules that evaluate against a request's URL. Routes are recommended for you if you have a designated application server you always need to communicate with. 
Calling `fetch()` on the incoming `Request` object will trigger a subrequest to your application server, as defined in the **DNS** settings of your Cloudflare zone. Routes add Workers functionality to your existing proxied hostnames, in front of your application server. Routes allow your Workers to act as a proxy and perform any necessary work before reaching out to an application server behind Cloudflare. Routes can `fetch()` Custom Domains and take precedence if configured on the same hostname. If you would like to run a logging Worker in front of your application, for example, you can create a Custom Domain on your application Worker for `app.example.com`, and create a Route for your logging Worker at `app.example.com/*`. Calling `fetch()` will invoke the application Worker on your Custom Domain. Note that Routes cannot be the target of a same-zone `fetch()` call. ## Set up a route To add a route, you must have: 1. An [active Cloudflare zone](/dns/zone-setups/). 2. A Worker to invoke. 3. A DNS record set up for the [domain](/dns/manage-dns-records/how-to/create-zone-apex/) or [subdomain](/dns/manage-dns-records/how-to/create-subdomain/) proxied by Cloudflare (also known as orange-clouded) you would like to route to. :::caution Route setup will differ depending on whether your application's origin is a Worker. If your Worker is your application's origin, use [Custom Domains](/workers/configuration/routing/custom-domains/). ::: If your Worker is not your application's origin, follow the instructions below to set up a route. :::note Routes can also be created via the API. Refer to the [Workers Routes API documentation](/api/resources/workers/subresources/routes/methods/create/) for more information. ::: ### Set up a route in the dashboard Before you set up a route, make sure you have a DNS record set up for the [domain](/dns/manage-dns-records/how-to/create-zone-apex/) or [subdomain](/dns/manage-dns-records/how-to/create-subdomain/) you would like to route to.
To set up a route in the dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages** and in **Overview**, select your Worker. 3. Go to **Settings** > **Domains & Routes** > **Add** > **Route**. 4. Select the zone and enter the route pattern. 5. Select **Add route**. ### Set up a route in the Wrangler configuration file Before you set up a route, make sure you have a DNS record set up for the [domain](/dns/manage-dns-records/how-to/create-zone-apex/) or [subdomain](/dns/manage-dns-records/how-to/create-subdomain/) you would like to route to. To configure a route using your [Wrangler configuration file](/workers/wrangler/configuration/), refer to the following example. <WranglerConfig> ```toml routes = [ { pattern = "subdomain.example.com/*", zone_name = "example.com" }, # or { pattern = "subdomain.example.com/*", zone_id = "<YOUR_ZONE_ID>" } ] ``` </WranglerConfig> Add the `zone_name` or `zone_id` option after each route. The `zone_name` and `zone_id` options are interchangeable. If using `zone_id`, find your zone ID by logging in to the [Cloudflare dashboard](https://dash.cloudflare.com) > select your account > select your website > find the **Zone ID** in the lefthand side of **Overview**. To add multiple routes: <WranglerConfig> ```toml routes = [ { pattern = "subdomain.example.com/*", zone_name = "example.com" }, { pattern = "subdomain-two.example.com/example", zone_id = "<YOUR_ZONE_ID>" } ] ``` </WranglerConfig> :::note The `zone_id` and `zone_name` options are interchangeable. However, if using Cloudflare for SaaS with a `*/*` pattern, use the `zone_name` option to avoid errors. Currently, [publishing `*/*` routes with a `zone_id` option fails with an `Invalid URL` error](https://github.com/cloudflare/workers-sdk/issues/2953). 
::: ## Matching behavior Route patterns look like this: ```txt https://*.example.com/images/* ``` This pattern would match all HTTPS requests destined for a subhost of example.com and whose paths are prefixed by `/images/`. A pattern to match all requests looks like this: ```txt *example.com/* ``` While they look similar to a [regex](https://en.wikipedia.org/wiki/Regular_expression) pattern, route patterns follow specific rules: - The only supported operator is the wildcard (`*`), which matches zero or more of any character. - Route patterns may not contain infix wildcards or query parameters. For example, neither `example.com/*.jpg` nor `example.com/?foo=*` are valid route patterns. - When more than one route pattern could match a request URL, the most specific route pattern wins. For example, the pattern `www.example.com/*` would take precedence over `*.example.com/*` when matching a request for `https://www.example.com/`. The pattern `example.com/hello/*` would take precedence over `example.com/*` when matching a request for `example.com/hello/world`. - Route pattern matching considers the entire request URL, including the query parameter string. Since route patterns may not contain query parameters, the only way to have a route pattern match URLs with query parameters is to terminate it with a wildcard, `*`. - The path component of route patterns is case sensitive, for example, `example.com/Images/*` and `example.com/images/*` are two distinct routes. - For routes created before October 15th, 2023, the host component of route patterns is case sensitive, for example, `example.com/*` and `Example.com/*` are two distinct routes. - For routes created on or after October 15th, 2023, the host component of route patterns is not case sensitive, for example, `example.com/*` and `Example.com/*` are equivalent routes. A route can be specified without being associated with a Worker. This will act to negate any less specific patterns. 
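As an illustration of the wildcard rules above, a small matcher might look like the following. This is a sketch of the documented semantics only (it does not model precedence between overlapping patterns), not Cloudflare's implementation:

```js
// Sketch of Workers route-pattern matching: `*` matches zero or more of
// any character, and a pattern without a scheme matches both http and https.
function matchesRoutePattern(pattern, url) {
	let target = url;
	if (!/^https?:\/\//.test(pattern)) {
		// Scheme-less patterns match both http:// and https:// URLs.
		target = url.replace(/^https?:\/\//, "");
	}
	// Escape regex metacharacters, then expand each `*` into `.*`.
	const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
	return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$").test(target);
}
```

For instance, `matchesRoutePattern("*.example.com/images/*", "https://www.example.com/images/cat.png")` is `true`, while the same pattern does not match the apex `https://example.com/images/cat.png`.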
For example, consider this pair of route patterns, one with a Workers script and one without: ```txt *example.com/images/cat.png -> <no script> *example.com/images/* -> worker-script ``` In this example, all requests destined for example.com and whose paths are prefixed by `/images/` would be routed to `worker-script`, _except_ for `/images/cat.png`, which would bypass Workers completely. Requests with a path of `/images/cat.png?foo=bar` would be routed to `worker-script`, due to the presence of the query string. ## Validity The following set of rules govern route pattern validity. #### Route patterns must include your zone If your zone is `example.com`, then the simplest possible route pattern you can have is `example.com`, which would match `http://example.com/` and `https://example.com/`, and nothing else. As with a URL, there is an implied path of `/` if you do not specify one. #### Route patterns may not contain any query parameters For example, `https://example.com/?anything` is not a valid route pattern. #### Route patterns may optionally begin with `http://` or `https://` If you omit a scheme in your route pattern, it will match both `http://` and `https://` URLs. If you include `http://` or `https://`, it will only match HTTP or HTTPS requests, respectively. - `https://*.example.com/` matches `https://www.example.com/` but not `http://www.example.com/`. - `*.example.com/` matches both `https://www.example.com/` and `http://www.example.com/`. #### Hostnames may optionally begin with `*` If a route pattern hostname begins with `*`, then it matches the host and all subhosts. If a route pattern hostname begins with `*.`, then it only matches all subhosts. - `*example.com/` matches `https://example.com/` and `https://www.example.com/`. - `*.example.com/` matches `https://www.example.com/` but not `https://example.com/`. #### Paths may optionally end with `*` If a route pattern path ends with `*`, then it matches all suffixes of that path. 
- `https://example.com/path*` matches `https://example.com/path`, `https://example.com/path2`, and `https://example.com/path/readme.txt` :::caution There is a well-known bug associated with path matching concerning wildcards (`*`) and forward slashes (`/`) that is documented in [Known issues](/workers/platform/known-issues/). ::: #### Domains and subdomains must have a DNS record All domains and subdomains must have a [DNS record](/dns/manage-dns-records/how-to/create-dns-records/) to be proxied on Cloudflare and used to invoke a Worker. For example, if you want to put a Worker on `myname.example.com`, and you have added `example.com` to Cloudflare but have not added any DNS records for `myname.example.com`, any request to `myname.example.com` will result in the error `ERR_NAME_NOT_RESOLVED`. :::caution If you have previously used the Cloudflare dashboard to add an `AAAA` record for `myname` to `example.com`, pointing to `100::` (the [reserved IPv6 discard prefix](https://tools.ietf.org/html/rfc6666)), Cloudflare recommends creating a [Custom Domain](/workers/configuration/routing/custom-domains/) pointing to your Worker instead. ::: --- # workers.dev URL: https://developers.cloudflare.com/workers/configuration/routing/workers-dev/ import { WranglerConfig } from "~/components"; Cloudflare Workers accounts come with a `workers.dev` subdomain that is configurable in the Cloudflare dashboard. Your `workers.dev` subdomain allows you to get started quickly by deploying Workers without first onboarding your custom domain to Cloudflare. It's recommended to run production Workers on a [Workers route or custom domain](/workers/configuration/routing/), rather than on your `workers.dev` subdomain. Your `workers.dev` subdomain is treated as a [Free website](https://www.cloudflare.com/plans/) and is intended for personal or hobby projects that aren't business-critical.
## Configure `workers.dev` `workers.dev` subdomains take the format: `<YOUR_ACCOUNT_SUBDOMAIN>.workers.dev`. To change your `workers.dev` subdomain: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Workers & Pages**. 3. Select **Change** next to **Your subdomain**. All Workers are assigned a `workers.dev` route when they are created or renamed, following the syntax `<YOUR_WORKER_NAME>.<YOUR_SUBDOMAIN>.workers.dev`. The [`name`](/workers/wrangler/configuration/#inheritable-keys) field in your Worker configuration is used as the subdomain for the deployed Worker. ## Disabling `workers.dev` ### Disabling `workers.dev` in the dashboard To disable the `workers.dev` route for a Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages** and in **Overview**, select your Worker. 3. Go to **Settings** > **Domains & Routes**. 4. Next to `workers.dev`, select **Disable**. 5. Confirm that you want to disable the route. ### Disabling `workers.dev` in the Wrangler configuration file To disable the `workers.dev` route for a Worker, include the following in your Worker's [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml workers_dev = false ``` </WranglerConfig> When you redeploy your Worker with this change, the `workers.dev` route will be disabled. Disabling your `workers.dev` route does not disable Preview URLs. Learn how to [disable Preview URLs](/workers/configuration/previews/#disabling-preview-urls). If you do not specify `workers_dev = false` but add a [`routes` component](/workers/wrangler/configuration/#routes) to your [Wrangler configuration file](/workers/wrangler/configuration/), the value of `workers_dev` will be inferred as `false` on the next deploy.
:::caution If you disable your `workers.dev` route in the Cloudflare dashboard but do not update your Worker's Wrangler file with `workers_dev = false`, the `workers.dev` route will be re-enabled the next time you deploy your Worker with Wrangler. ::: ## Related resources - [Announcing `workers.dev`](https://blog.cloudflare.com/announcing-workers-dev) - [Wrangler routes configuration](/workers/wrangler/configuration/#types-of-routes) --- # Workers Sites configuration URL: https://developers.cloudflare.com/workers/configuration/sites/configuration/ import { Render, WranglerConfig } from "~/components"; <Render file="workers_sites" /> Workers Sites require the latest version of [Wrangler](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler). ## Wrangler configuration file There are a few specific configuration settings for Workers Sites in your Wrangler file: - `bucket` required - The directory containing your static assets, path relative to your [Wrangler configuration file](/workers/wrangler/configuration/). Example: `bucket = "./public"`. - `include` optional - A list of gitignore-style patterns for files or directories in `bucket` you exclusively want to upload. Example: `include = ["upload_dir"]`. - `exclude` optional - A list of gitignore-style patterns for files or directories in `bucket` you want to exclude from uploads. Example: `exclude = ["ignore_dir"]`. To learn more about the optional `include` and `exclude` fields, refer to [Ignoring subsets of static assets](#ignoring-subsets-of-static-assets). :::note If your project uses [environments](/workers/wrangler/environments/), make sure to place `site` above any environment-specific configuration blocks. 
::: Example of a [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml name = "docs-site-blah" [site] bucket = "./public" [env.production] name = "docs-site" route = "https://example.com/docs*" [env.staging] name = "docs-site-staging" route = "https://staging.example.com/docs*" ``` </WranglerConfig> ## Storage limits For exceptionally large pages, Workers Sites might not work for you: there is a 25 MiB limit per page or file. ## Ignoring subsets of static assets Workers Sites require [Wrangler](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler) - make sure to use the [latest version](/workers/wrangler/install-and-update/#update-wrangler). There are cases where users may not want to upload certain static assets to their Workers Sites. In this case, Workers Sites can also be configured to ignore certain files or directories using logic similar to [Cargo's optional include and exclude fields](https://doc.rust-lang.org/cargo/reference/manifest.html#the-exclude-and-include-fields-optional). This means that you should use gitignore semantics when declaring which directory entries to include or ignore in uploads. ### Exclusively including files/directories If you want to include only a certain set of files or directories in your `bucket`, you can add an `include` field to your `[site]` section of your Wrangler file: <WranglerConfig> ```toml [site] bucket = "./public" include = ["included_dir"] # must be an array. ``` </WranglerConfig> Wrangler will only upload files or directories matching the patterns in the `include` array. ### Excluding files/directories If you want to exclude files or directories in your `bucket`, you can add an `exclude` field to your `[site]` section of your Wrangler file: <WranglerConfig> ```toml [site] bucket = "./public" exclude = ["excluded_dir"] # must be an array.
``` </WranglerConfig> Wrangler will ignore files or directories matching the patterns in the `exclude` array when uploading assets to Workers KV. ### Include > exclude If you provide both `include` and `exclude` fields, the `include` field will be used and the `exclude` field will be ignored. ### Default ignored entries Wrangler will always ignore: - `node_modules` - Hidden files and directories - Symlinks #### More about include/exclude patterns Learn more about the standard patterns used for include and exclude in the [gitignore documentation](https://git-scm.com/docs/gitignore). --- # Workers Sites URL: https://developers.cloudflare.com/workers/configuration/sites/ import { Render } from "~/components"; import { LinkButton } from "~/components"; <Render file="workers_sites" /> Workers Sites enables developers to deploy static applications directly to Workers. It can be used for deploying applications built with static site generators like [Hugo](https://gohugo.io) and [Gatsby](https://www.gatsbyjs.org), or front-end frameworks like [Vue](https://vuejs.org) and [React](https://reactjs.org). To deploy with Workers Sites, select from one of these three approaches depending on the state of your target project: --- ## 1. Start from scratch If you are ready to start a brand new project, this quick start guide will help you set up the infrastructure to deploy a HTML website to Workers. <LinkButton href="/workers/configuration/sites/start-from-scratch/"> Start from scratch </LinkButton> --- ## 2. Deploy an existing static site If you have an existing project or static assets that you want to deploy with Workers, this quick start guide will help you install Wrangler and configure Workers Sites for your project. <LinkButton href="/workers/configuration/sites/start-from-existing/"> Start from an existing static site </LinkButton> --- ## 3. 
Add static assets to an existing Workers project If you already have a Worker deployed to Cloudflare, this quick start guide will show you how to configure the existing codebase to use Workers Sites. <LinkButton href="/workers/configuration/sites/start-from-worker/"> Start from an existing Worker </LinkButton> :::note Workers Sites is built on Workers KV, and usage rates may apply. Refer to [Pricing](/workers/platform/pricing/) to learn more. ::: --- # Start from existing URL: https://developers.cloudflare.com/workers/configuration/sites/start-from-existing/ import { Render, TabItem, Tabs, WranglerConfig } from "~/components"; <Render file="workers_sites" /> Workers Sites require [Wrangler](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler) — make sure to use the [latest version](/workers/wrangler/install-and-update/#update-wrangler). To deploy a pre-existing static site project, start with a pre-generated site. Workers Sites works with all static site generators, for example: - [Hugo](https://gohugo.io/getting-started/quick-start/) - [Gatsby](https://www.gatsbyjs.org/docs/quick-start/), requires Node - [Jekyll](https://jekyllrb.com/docs/), requires Ruby - [Eleventy](https://www.11ty.io/#quick-start), requires Node - [WordPress](https://wordpress.org) (refer to the tutorial on [deploying static WordPress sites with Pages](/pages/how-to/deploy-a-wordpress-site/)) ## Getting started 1. Run the `wrangler init` command in the root of your project's directory to generate a basic Worker: ```sh wrangler init -y ``` This command adds or updates the following files: - `wrangler.jsonc`: The file containing project configuration. - `package.json`: Wrangler `devDependencies` are added. - `tsconfig.json`: Added if not already there to support writing the Worker in TypeScript. - `src/index.ts`: A basic Cloudflare Worker, written in TypeScript. 2.
Add your site's build/output directory to the Wrangler file: <WranglerConfig> ```toml [site] bucket = "./public" # <-- Add your build directory name here. ``` </WranglerConfig> The default directories for the most popular static site generators are listed below: - Hugo: `public` - Gatsby: `public` - Jekyll: `_site` - Eleventy: `_site` 3. Install the `@cloudflare/kv-asset-handler` package in your project: ```sh npm i -D @cloudflare/kv-asset-handler ``` 4. Replace the contents of `src/index.ts` with the following code snippet: <Tabs> <TabItem label="Module Worker" icon="seti:javascript"> ```js import { getAssetFromKV } from "@cloudflare/kv-asset-handler"; import manifestJSON from "__STATIC_CONTENT_MANIFEST"; const assetManifest = JSON.parse(manifestJSON); export default { async fetch(request, env, ctx) { try { // Add logic to decide whether to serve an asset or run your original Worker code return await getAssetFromKV( { request, waitUntil: ctx.waitUntil.bind(ctx), }, { ASSET_NAMESPACE: env.__STATIC_CONTENT, ASSET_MANIFEST: assetManifest, }, ); } catch (e) { let pathname = new URL(request.url).pathname; return new Response(`"${pathname}" not found`, { status: 404, statusText: "not found", }); } }, }; ``` </TabItem> <TabItem label="Service Worker" icon="seti:javascript"> ```js import { getAssetFromKV } from "@cloudflare/kv-asset-handler"; addEventListener("fetch", (event) => { event.respondWith(handleEvent(event)); }); async function handleEvent(event) { try { // Add logic to decide whether to serve an asset or run your original Worker code return await getAssetFromKV(event); } catch (e) { let pathname = new URL(event.request.url).pathname; return new Response(`"${pathname}" not found`, { status: 404, statusText: "not found", }); } } ``` </TabItem> </Tabs> 5. Run `wrangler dev` or `npx wrangler deploy` to preview or deploy your site on Cloudflare. Wrangler will automatically upload the assets found in the configured directory. ```sh npx wrangler deploy ``` 6. 
Deploy your site to a [custom domain](/workers/configuration/routing/custom-domains/) that you own and have already attached as a Cloudflare zone. Add a `route` property to the Wrangler file. <WranglerConfig> ```toml route = "https://example.com/*" ``` </WranglerConfig> :::note Refer to the documentation on [Routes](/workers/configuration/routing/routes/) to configure a `route` properly. ::: Learn more about [configuring your project](/workers/wrangler/configuration/). --- # Start from scratch URL: https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch/ import { Render, WranglerConfig } from "~/components"; <Render file="workers_sites" /> This guide shows how to quickly start a new Workers Sites project from scratch. ## Getting started 1. Ensure you have the latest version of [git](https://git-scm.com/downloads) and [Node.js](https://nodejs.org/en/download/) installed. 2. In your terminal, clone the `worker-sites-template` starter repository. The following example creates a project called `my-site`: ```sh git clone --depth=1 --branch=wrangler2 https://github.com/cloudflare/worker-sites-template my-site ``` 3. Run `npm install` to install all dependencies. 4. You can preview your site by running the [`wrangler dev`](/workers/wrangler/commands/#dev) command: ```sh wrangler dev ``` 5. Deploy your site to Cloudflare: ```sh npx wrangler deploy ``` ## Project layout The template project contains the following files and directories: - `public`: The static assets for your project. By default it contains an `index.html` and a `favicon.ico`. - `src`: The Worker configured for serving your assets. You do not need to edit this but if you want to see how it works or add more functionality to your Worker, you can edit `src/index.ts`. - `wrangler.jsonc`: The file containing project configuration. The `bucket` property tells Wrangler where to find the static assets (e.g. `site = { bucket = "./public" }`). 
- `package.json`/`package-lock.json`: define the required Node.js dependencies. ## Customize the `wrangler.jsonc` file - Change the `name` property to the name of your project: <WranglerConfig> ```toml name = "my-site" ``` </WranglerConfig> - Consider updating `compatibility_date` to today's date to get access to the most recent Workers features: <WranglerConfig> ```toml compatibility_date = "yyyy-mm-dd" ``` </WranglerConfig> - Deploy your site to a [custom domain](/workers/configuration/routing/custom-domains/) that you own and have already attached as a Cloudflare zone: <WranglerConfig> ```toml route = "https://example.com/*" ``` </WranglerConfig> :::note Refer to the documentation on [Routes](/workers/configuration/routing/routes/) to configure a `route` properly. ::: Learn more about [configuring your project](/workers/wrangler/configuration/). --- # Start from Worker URL: https://developers.cloudflare.com/workers/configuration/sites/start-from-worker/ import { Render, TabItem, Tabs, WranglerConfig } from "~/components"; <Render file="workers_sites" /> Workers Sites require [Wrangler](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler) — make sure to use the [latest version](/workers/wrangler/install-and-update/#update-wrangler). If you have a pre-existing Worker project, you can use Workers Sites to serve static assets to the Worker. ## Getting started 1. Create a directory that will contain the assets in the root of your project (for example, `./public`). 2. Add configuration to your Wrangler file to point to it. <WranglerConfig> ```toml [site] bucket = "./public" # Add the directory with your static assets! ``` </WranglerConfig> 3. Install the `@cloudflare/kv-asset-handler` package in your project: ```sh npm i -D @cloudflare/kv-asset-handler ``` 4. Import the `getAssetFromKV()` function into your Worker entry point and use it to respond with static assets.
<Tabs> <TabItem label="Module Worker" icon="seti:javascript"> ```js import { getAssetFromKV } from "@cloudflare/kv-asset-handler"; import manifestJSON from "__STATIC_CONTENT_MANIFEST"; const assetManifest = JSON.parse(manifestJSON); export default { async fetch(request, env, ctx) { try { // Add logic to decide whether to serve an asset or run your original Worker code return await getAssetFromKV( { request, waitUntil: ctx.waitUntil.bind(ctx), }, { ASSET_NAMESPACE: env.__STATIC_CONTENT, ASSET_MANIFEST: assetManifest, }, ); } catch (e) { let pathname = new URL(request.url).pathname; return new Response(`"${pathname}" not found`, { status: 404, statusText: "not found", }); } }, }; ``` </TabItem> <TabItem label="Service Worker" icon="seti:javascript"> ```js import { getAssetFromKV } from "@cloudflare/kv-asset-handler"; addEventListener("fetch", (event) => { event.respondWith(handleEvent(event)); }); async function handleEvent(event) { try { // Add logic to decide whether to serve an asset or run your original Worker code return await getAssetFromKV(event); } catch (e) { let pathname = new URL(event.request.url).pathname; return new Response(`"${pathname}" not found`, { status: 404, statusText: "not found", }); } } ``` </TabItem> </Tabs> For more information on the configurable options of `getAssetFromKV()` refer to [kv-asset-handler docs](https://github.com/cloudflare/workers-sdk/tree/main/packages/kv-asset-handler). 5. Run `wrangler deploy` or `npx wrangler deploy` as you would normally with your Worker project. Wrangler will automatically upload the assets found in the configured directory. 
```sh npx wrangler deploy ``` --- # Gradual deployments URL: https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/ import { Example } from "~/components"; Gradual Deployments give you the ability to incrementally deploy new [versions](/workers/configuration/versions-and-deployments/#versions) of Workers by splitting traffic across versions.  Using gradual deployments, you can: - Gradually shift traffic to a newer version of your Worker. - Monitor error rates and exceptions across versions using [analytics and logs](/workers/configuration/versions-and-deployments/gradual-deployments/#observability) tooling. - [Roll back](/workers/configuration/versions-and-deployments/rollbacks/) to a previously stable version if you notice issues when deploying a new version. ## Use gradual deployments The following section guides you through an example usage of gradual deployments. You will choose to use either [Wrangler](/workers/configuration/versions-and-deployments/gradual-deployments/#via-wrangler) or the [Cloudflare dashboard](/workers/configuration/versions-and-deployments/gradual-deployments/#via-the-cloudflare-dashboard) to: - Create a new Worker. - Publish a new version of that Worker without deploying it. - Create a gradual deployment between the two versions. - Progress the deployment of the new version to 100% of traffic. ### Via Wrangler :::note Minimum required Wrangler version: 3.40.0. Versions before 3.73.0 require you to specify a `--x-versions` flag. ::: #### 1. Create and deploy a new Worker Create a new `"Hello World"` Worker using the [`create-cloudflare` CLI (C3)](/pages/get-started/c3/) and deploy it. ```sh npm create cloudflare@latest <NAME> -- --type=hello-world ``` Answer `yes` or `no` to using TypeScript. Answer `yes` to deploying your application. This is the first version of your Worker. #### 2. 
Create a new version of the Worker To create a new version of the Worker, edit the Worker code by changing the `Response` content to your desired text and upload the Worker by using the [`wrangler versions upload`](/workers/wrangler/commands/#upload) command. ```sh npx wrangler versions upload ``` This will create a new version of the Worker that is not automatically deployed. #### 3. Create a new deployment Use the [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2) command to create a new deployment that splits traffic between two versions of the Worker. Follow the interactive prompts to create a deployment with the versions uploaded in [step #1](/workers/configuration/versions-and-deployments/gradual-deployments/#1-create-and-deploy-a-new-worker) and [step #2](/workers/configuration/versions-and-deployments/gradual-deployments/#2-create-a-new-version-of-the-worker). Select your desired percentages for each version. ```sh npx wrangler versions deploy ``` #### 4. Test the split deployment Run a cURL command on your Worker to test the split deployment. ```bash for j in {1..10} do curl -s https://$WORKER_NAME.$SUBDOMAIN.workers.dev done ``` You should see 10 responses. Responses will reflect the content returned by the versions in your deployment. Responses will vary depending on the percentages configured in [step #3](/workers/configuration/versions-and-deployments/gradual-deployments/#3-create-a-new-deployment). You can also target a specific version using [version overrides](#version-overrides). #### 5. Set your new version to 100% deployment Run `wrangler versions deploy` again and follow the interactive prompts. Select the version uploaded in [step 2](/workers/configuration/versions-and-deployments/gradual-deployments/#2-create-a-new-version-of-the-worker) and set it to 100% deployment. ```sh npx wrangler versions deploy ``` ### Via the Cloudflare dashboard 1.
Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your account. 2. Go to **Workers & Pages**. 3. Select **Create application** > **Hello World** template > deploy your Worker. 4. Once the Worker is deployed, go to the online code editor through **Edit code**. Edit the Worker code (change the `Response` content) and upload the Worker. 5. To save changes, select the **down arrow** next to **Deploy** > **Save**. This will create a new version of your Worker. 6. Create a new deployment that splits traffic between the two versions created in steps 3 and 5 by going to **Deployments** and selecting **Deploy Version**. 7. cURL your Worker to test the split deployment. ```bash for j in {1..10} do curl -s https://$WORKER_NAME.$SUBDOMAIN.workers.dev done ``` You should see 10 responses. Responses will reflect the content returned by the versions in your deployment. Responses will vary depending on the percentages configured in step #6. ## Version affinity By default, the percentages configured when using gradual deployments operate on a per-request basis — a request has an X% probability of invoking one of two versions of the Worker in the [deployment](/workers/configuration/versions-and-deployments/#deployments). You may want requests associated with a particular identifier (such as user, session, or any unique ID) to be handled by a consistent version of your Worker to prevent version skew. Version skew occurs when there are multiple versions of an application deployed that are not forwards/backwards compatible. You can configure version affinity to prevent the Worker's version from changing back and forth on a per-request basis. You can do this by setting the `Cloudflare-Workers-Version-Key` header on the incoming request to your Worker.
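To build intuition for how affinity interacts with the configured percentages, you can picture each version key being hashed into a stable bucket between 0 and 99, with the deployment's percentages carving that range up between versions. The sketch below is purely illustrative; it is not Cloudflare's actual routing algorithm:

```javascript
// Illustrative only: deterministic version selection from an affinity key.
// `versions` is an array like [{ id: "version-a", percentage: 90 }, ...]
// whose percentages sum to 100. This is NOT Cloudflare's real implementation.
function pickVersion(versionKey, versions) {
  // Hash the key into a stable bucket in [0, 100).
  let hash = 0;
  for (const ch of versionKey) {
    hash = (hash * 31 + ch.codePointAt(0)) % 1000003;
  }
  const bucket = hash % 100;

  // Walk the cumulative percentages to find the matching version.
  let cumulative = 0;
  for (const version of versions) {
    cumulative += version.percentage;
    if (bucket < cumulative) return version.id;
  }
  return versions[versions.length - 1].id;
}
```

Because the bucket depends only on the key and the configured split, every request carrying the same key lands on the same version for a given deployment. In practice, you simply set the header and Cloudflare routes consistently.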
For example: ```sh curl -s https://example.com -H 'Cloudflare-Workers-Version-Key: foo' ``` For a given [deployment](/workers/configuration/versions-and-deployments/#deployments), all requests with a version key set to `foo` will be handled by the same version of your Worker. The specific version of your Worker that the version key `foo` corresponds to is determined by the percentages you have configured for each Worker version in your deployment. You can set the `Cloudflare-Workers-Version-Key` header both when making an external request from the Internet to your Worker and when making a subrequest from one Worker to another Worker using a [service binding](/workers/runtime-apis/bindings/service-bindings/). ### Setting `Cloudflare-Workers-Version-Key` using Ruleset Engine You may want to extract a version key from certain properties of your request such as the URL, headers or cookies. You can configure a [Ruleset Engine](/ruleset-engine/) rule on your zone to do this. This allows you to specify version affinity based on these properties without having to modify the external client that makes the request. For example, if your Worker serves video assets under the URI path `/asset/` and you wanted requests to each unique asset to be handled by a consistent version, you could define the following [request header modification](/rules/transform/request-header-modification/) rule: <Example> Text in **Expression Editor**: ```txt starts_with(http.request.uri.path, "/asset/") ``` Selected operation under **Modify request header**: _Set dynamic_ **Header name**: `Cloudflare-Workers-Version-Key` **Value**: `regex_replace(http.request.uri.path, "/asset/(.*)", "${1}")` </Example> ## Version overrides You can use version overrides to send a request to a specific version of your Worker in your gradual deployment. To specify a version override in your request, you can set the `Cloudflare-Workers-Version-Overrides` header on the request to your Worker.
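Because the header value is a dictionary, a single request can pin several Workers at once. If you construct the header programmatically, a small helper can serialize worker-to-version pairs into the dictionary syntax. This is a hypothetical sketch; the worker names and version IDs below are placeholders taken from the examples on this page:

```javascript
// Hypothetical helper: serialize worker → version ID pairs into a
// Dictionary Structured Header value for Cloudflare-Workers-Version-Overrides.
// Assumes simple worker names; full Structured Field quoting rules are richer.
function buildVersionOverrides(overrides) {
  return Object.entries(overrides)
    .map(([workerName, versionId]) => `${workerName}="${versionId}"`)
    .join(", ");
}

// Example: pin two Workers to specific versions in one request header value.
const headerValue = buildVersionOverrides({
  "my-worker-name": "dc8dcd28-271b-4367-9840-6c244f84cb40",
  "my-other-worker": "db7cd8d3-4425-4fe7-8c81-01bf963b6067",
});
```

You would then send `headerValue` as the `Cloudflare-Workers-Version-Overrides` header on the request.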
For example: ```sh curl -s https://example.com -H 'Cloudflare-Workers-Version-Overrides: my-worker-name="dc8dcd28-271b-4367-9840-6c244f84cb40"' ``` `Cloudflare-Workers-Version-Overrides` is a [Dictionary Structured Header](https://www.rfc-editor.org/rfc/rfc8941#name-dictionaries). The dictionary can contain multiple key-value pairs. Each key indicates the name of the Worker the override should be applied to. The value indicates the version ID that should be used and must be a [String](https://www.rfc-editor.org/rfc/rfc8941#name-strings). A version override will only be applied if the specified version is in the current deployment. The versions in the current deployment can be found using the [`wrangler deployments list`](/workers/wrangler/commands/#list-5) command or on the [Workers Dashboard](https://dash.cloudflare.com/?to=/:account/workers) under Worker > Deployments > Active Deployment. :::note[Verifying that the version override was applied] There are a number of reasons why a request's version override may not be applied. For example: - The deployment containing the specified version may not have propagated yet. - The header value may not be a valid [Dictionary](https://www.rfc-editor.org/rfc/rfc8941#name-dictionaries). In the case that a request's version override is not applied, the request will be routed according to the percentages set in the gradual deployment configuration. To make sure that the request's version override was applied correctly, you can [observe](#observability) the version of your Worker that was invoked. You could even automate this check by using the [runtime binding](#runtime-binding) to return the version in the Worker's response. ::: ### Example You may want to test a new version in production before gradually deploying it to an increasing proportion of external traffic. 
In this example, your deployment is initially configured to route all traffic to a single version: | Version ID | Percentage | | :----------------------------------: | :--------: | | db7cd8d3-4425-4fe7-8c81-01bf963b6067 | 100% | Create a new deployment using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2) and specify 0% for the new version whilst keeping the previous version at 100%. | Version ID | Percentage | | :----------------------------------: | :--------: | | dc8dcd28-271b-4367-9840-6c244f84cb40 | 0% | | db7cd8d3-4425-4fe7-8c81-01bf963b6067 | 100% | Now test the new version with a version override before gradually progressing the new version to 100%: ```sh curl -s https://example.com -H 'Cloudflare-Workers-Version-Overrides: my-worker-name="dc8dcd28-271b-4367-9840-6c244f84cb40"' ``` ## Gradual deployments for Durable Objects Due to [global uniqueness](/durable-objects/platform/known-issues/#global-uniqueness), only one version of each [Durable Object](/durable-objects/) can run at a time. This means that gradual deployments work slightly differently for Durable Objects. When you create a new gradual deployment for a Durable Object Worker, each Durable Object is assigned a Worker version based on the percentages you configured in your [deployment](/workers/configuration/versions-and-deployments/#deployments). This version will not change until you create a new deployment.  ### Example This example assumes that you have previously created 3 Durable Objects and [derived their IDs from the names](/durable-objects/api/namespace/#idfromname) "foo", "bar" and "baz". Your Worker is currently on a version that we will call version "A" and you want to gradually deploy a new version "B" of your Worker. 
Here is how the versions of your Durable Objects might change as you progress your gradual deployment: | Deployment config | "foo" | "bar" | "baz" | | :---------------------------------: | :---: | :---: | :---: | | Version A: 100% <br/> | A | A | A | | Version B: 20% <br/> Version A: 80% | B | A | A | | Version B: 50% <br/> Version A: 50% | B | B | A | | Version B: 100% <br/> | B | B | B | This is only an example, so the versions assigned to your Durable Objects may be different. However, the following is guaranteed: - For a given deployment, requests to each Durable Object will always use the same Worker version. - When you specify each version in the same order as the previous deployment and increase the percentage of a version, Durable Objects which were previously assigned that version will not be assigned a different version. In this example, Durable Object "foo" would never revert from version "B" to version "A". - The Durable Object will only be [reset](/durable-objects/observability/troubleshooting/#durable-object-reset-because-its-code-was-updated) when it is assigned a different version, so each Durable Object will only be reset once in this example. :::note Typically, your Durable Object Worker will define both your Durable Object class and the Worker that interacts with it. In this case, you cannot deploy changes to your Durable Object and its Worker independently. You should ensure that API changes between your Durable Object and its Worker are [forwards and backwards compatible](/durable-objects/platform/known-issues/#code-updates) whether you are using gradual deployments or not. However, using gradual deployments will make it even more likely that different versions of your Durable Object and its Worker will interact with each other. ::: ## Observability When using gradual deployments, you may want to attribute Workers invocations to a specific version in order to get visibility into the impact of deploying new versions.
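One practical pattern is to have the Worker tag its own logs with the version that served each request, using the [Version metadata binding](/workers/runtime-apis/bindings/version-metadata/). This is a sketch that assumes a binding named `CF_VERSION_METADATA` has been configured in your Wrangler file:

```javascript
// Sketch: tag each log line with the running version so invocations can be
// attributed to a version during a gradual deployment. Assumes a version
// metadata binding named CF_VERSION_METADATA in your Wrangler configuration.
const worker = {
  async fetch(request, env) {
    const { id: versionId, tag: versionTag } = env.CF_VERSION_METADATA;
    console.log(JSON.stringify({ versionId, versionTag, url: request.url }));
    return new Response(`served by version ${versionTag || versionId}`);
  },
};
// In a real Worker module you would `export default worker;`
```

Returning the version in the response body, as here, also lets you verify from the client which version handled a given request.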
### Logpush A new `ScriptVersion` object is available in [Workers Logpush](/workers/observability/logs/logpush/). `ScriptVersion` can only be added through the Logpush API right now. Sample API call: ```bash curl -X POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logpush/jobs' \ -H 'Authorization: Bearer <TOKEN>' \ -H 'Content-Type: application/json' \ -d '{ "name": "workers-logpush", "output_options": { "field_names": ["Event", "EventTimestampMs", "Outcome", "Logs", "ScriptName", "ScriptVersion"] }, "destination_conf": "<DESTINATION_URL>", "dataset": "workers_trace_events", "enabled": true }' | jq . ``` `ScriptVersion` is an object with the following structure: ```json scriptVersion: { id: "<UUID>", message: "<MESSAGE>", tag: "<TAG>" } ``` ### Runtime binding Use the [Version metadata binding](/workers/runtime-apis/bindings/version-metadata/) to access the version ID or version tag in your Worker. ## Limits ### Deployments limit You can only create a new deployment with the last 10 uploaded versions of your Worker. --- # Versions & Deployments URL: https://developers.cloudflare.com/workers/configuration/versions-and-deployments/ Versions track changes to your Worker. Deployments configure how those changes are deployed to your traffic. You can upload changes (versions) to your Worker independent of changing the version that is actively serving traffic (deployment).  Using versions and deployments is useful if: - You are running critical applications on Workers and want to reduce risk when deploying new versions of your Worker using a rolling deployment strategy. - You want to monitor for performance differences when deploying new versions of your Worker. - You have a CI/CD pipeline configured for Workers but want to cut manual releases. ## Versions A version is defined by the state of code as well as the state of configuration in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/).
Versions track historical changes to [bundled code](/workers/wrangler/bundling/), [static assets](/workers/static-assets/) and changes to configuration like [bindings](/workers/runtime-apis/bindings/) and [compatibility date and compatibility flags](/workers/configuration/compatibility-dates/) over time. Versions also track metadata associated with a version, including: the version ID, the user that created the version, deploy source, and timestamp. Optionally, a version message and version tag can be configured on version upload. :::note State changes for associated Workers [storage resources](/workers/platform/storage-options/) such as [KV](/kv/), [R2](/r2/), [Durable Objects](/durable-objects/) and [D1](/d1/) are not tracked with versions. ::: ## Deployments Deployments track the version(s) of your Worker that are actively serving traffic. A deployment can consist of one or two versions of a Worker. By default, Workers supports an all-at-once deployment model where traffic is immediately shifted from one version to the newly deployed version automatically. Alternatively, you can use [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/) to create a rolling deployment strategy. You can also track metadata associated with a deployment, including: the user that created the deployment, deploy source, timestamp and the version(s) in the deployment. Optionally, you can configure a deployment message when you create a deployment. ## Use versions and deployments ### Create a new version Review the different ways you can create versions of your Worker and deploy them. 
#### Upload a new version and deploy it immediately A new version is automatically deployed to 100% of traffic when: - Changes are deployed with [`wrangler deploy`](/workers/wrangler/commands/#deploy) from the command line - Changes are deployed with the command [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) via [Workers Builds](/workers/ci-cd/builds) - Changes are uploaded with the [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/) #### Upload a new version to be gradually deployed or deployed at a later time :::note Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag. ::: To create a new version of your Worker that is not deployed immediately, use the [`wrangler versions upload`](/workers/wrangler/commands/#upload) command or create a new version via the Cloudflare dashboard using the **Save** button. You can find the **Save** option under the down arrow beside the **Deploy** button. Versions created in this way can then be deployed all at once or gradually deployed using the [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2) command or via the Cloudflare dashboard under the **Deployments** tab. :::note When using [Wrangler](/workers/wrangler/), changes made to a Worker's triggers ([routes, domains](/workers/configuration/routing/) or [cron triggers](/workers/configuration/cron-triggers/)) need to be applied with the command [`wrangler triggers deploy`](/workers/wrangler/commands/#triggers). ::: :::note New versions are not created when you make changes to [resources connected to your Worker](/workers/runtime-apis/bindings/). For example, if two Workers (Worker A and Worker B) are connected via a [service binding](/workers/runtime-apis/bindings/service-bindings/), changing the code of Worker B will not create a new version of Worker A. Changing the code of Worker B will only create a new version of Worker B.
Changes to the service binding (such as, deleting the binding or updating the [environment](/workers/wrangler/environments/) it points to) on Worker A will also not create a new version of Worker B. ::: ### View versions and deployments #### Via Wrangler Wrangler allows you to view the 10 most recent versions and deployments. Refer to the [`versions list`](/workers/wrangler/commands/#list-4) and [`deployments`](/workers/wrangler/commands/#list-5) documentation to view the commands. #### Via the Cloudflare dashboard To view your deployments in the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your account. 2. Go to **Workers & Pages**. 3. Select your Worker > **Deployments**. ## Limits ### First upload You must use [C3](/workers/get-started/guide/#1-create-a-new-worker-project) or [`wrangler deploy`](/workers/wrangler/commands/#deploy) the first time you create a new Workers project. Using [`wrangler versions upload`](/workers/wrangler/commands/#upload) the first time you upload a Worker will fail. ### Service worker syntax Service worker syntax is not supported for versions that are uploaded through [`wrangler versions upload`](/workers/wrangler/commands/#upload). You must use ES modules format. Refer to [Migrate from Service Workers to ES modules](/workers/reference/migrate-to-module-workers/#advantages-of-migrating) to learn how to migrate your Workers from the service worker format to the ES modules format. ### Durable Object migrations Uploading a version with [Durable Object migrations](/durable-objects/reference/durable-objects-migrations/) is not supported. Use [`wrangler deploy`](/workers/wrangler/commands/#deploy) if you are applying a [Durable Object migration](/durable-objects/reference/durable-objects-migrations/). This will be supported in the near future. 
--- # Fauna URL: https://developers.cloudflare.com/workers/databases/native-integrations/fauna/ import { Render } from "~/components"; [Fauna](https://fauna.com/) is a true serverless database that combines document flexibility with native relational capabilities, offering auto-scaling, multi-active replication, and HTTPS connectivity. <Render file="database_integrations_definition" /> ## Set up an integration with Fauna To set up an integration with Fauna: 1. You need to have an existing Fauna database to connect to. [Create a Fauna database with demo data](https://docs.fauna.com/fauna/current/get-started/quick-start/?lang=javascript#create-a-database). 2. Once your database is created with demo data, you can query it directly using the Shell tab in the Fauna dashboard: ```sh Customer.all() ``` 3. Add the Fauna database integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Fauna**. 5. Follow the setup flow, selecting the database created in step 1. 4. In your Worker, install the `fauna` driver to connect to your database and start manipulating data: ```sh npm install fauna ``` 5. The following example shows how to make a query to your Fauna database in a Worker. The credentials needed to connect to Fauna have been automatically added as secrets to your Worker through the integration. ```javascript import { Client, fql } from "fauna"; export default { async fetch(request, env) { const fauna = new Client({ secret: env.FAUNA_SECRET, }); const query = fql`Customer.all()`; const result = await fauna.query(query); return Response.json(result.data); }, }; ``` 6. You can manage the Cloudflare Fauna integration from the [Fauna Dashboard](https://dashboard.fauna.com/): - To view Fauna keys for an integrated Cloudflare Worker, select your database and click the **Keys** tab. 
Keys for a Cloudflare Worker integration are prepended with `_cloudflare_key_`. You can delete the key to disable the integration. - When you connect a Cloudflare Worker to your database, Fauna creates an OAuth client app in your Fauna account. To view your account's OAuth apps, go to **Account Settings > OAuth Apps** in the Fauna Dashboard.  You can delete the app to disable the integration. To learn more about Fauna, refer to [Fauna's official documentation](https://docs.fauna.com/). --- # Rollbacks URL: https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/ You can roll back to a previously deployed [version](/workers/configuration/versions-and-deployments/#versions) of your Worker using [Wrangler](/workers/wrangler/commands/#rollback) or the Cloudflare dashboard. Rolling back to a previous version of your Worker will immediately create a new [deployment](/workers/configuration/versions-and-deployments/#deployments) with the version specified and become the active deployment across all your deployed routes and domains. ## Via Wrangler To roll back to a specified version of your Worker via Wrangler, use the [`wrangler rollback`](/workers/wrangler/commands/#rollback) command. ## Via the Cloudflare Dashboard To roll back to a specified version of your Worker via the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your account. 2. Go to **Workers & Pages** > select your Worker > **Deployments**. 3. Select the three-dot icon on the right of the version you would like to roll back to and select **Rollback**. :::caution **[Resources connected to your Worker](/workers/runtime-apis/bindings/) will not be changed during a rollback.** Errors could occur if a prior version's code expects a data structure that has changed between the version in the active deployment and the version you roll back to.
::: ## Limits ### Rollbacks limit You can only roll back to the 10 most recently published versions. ### Bindings You cannot roll back to a previous version of your Worker if the [Cloudflare Developer Platform resources](/workers/runtime-apis/bindings/) (such as [KV](/kv/) and [D1](/d1/)) have been deleted or modified between the version selected to roll back to and the version in the active deployment. Specifically, rollbacks will not be allowed if: * A [Durable Object migration](/durable-objects/reference/durable-objects-migrations/) has occurred between the version in the active deployment and the version selected to roll back to. * If the target deployment has a [binding](/workers/runtime-apis/bindings/) to an R2 bucket, KV namespace, or queue that no longer exists. --- # Neon URL: https://developers.cloudflare.com/workers/databases/native-integrations/neon/ import { Render } from "~/components"; [Neon](https://neon.tech/) is a fully managed serverless PostgreSQL. It separates storage and compute to offer modern developer features, such as serverless, branching, and bottomless storage. <Render file="database_integrations_definition" /> ## Set up an integration with Neon To set up an integration with Neon: 1. You need to have an existing Neon database to connect to. [Create a Neon database](https://neon.tech/docs/postgres/tutorial-createdb#create-a-table) or [load data from an existing database to Neon](https://neon.tech/docs/import/import-from-postgres). 2. Create an `elements` table using the Neon SQL editor. The SQL Editor allows you to query your databases directly from the Neon Console. ```sql CREATE TABLE elements ( id INTEGER NOT NULL, elementName TEXT NOT NULL, atomicNumber INTEGER NOT NULL, symbol TEXT NOT NULL ); ``` 3. Insert some data into your newly created table. 
```sql INSERT INTO elements (id, elementName, atomicNumber, symbol) VALUES (1, 'Hydrogen', 1, 'H'), (2, 'Helium', 2, 'He'), (3, 'Lithium', 3, 'Li'), (4, 'Beryllium', 4, 'Be'), (5, 'Boron', 5, 'B'), (6, 'Carbon', 6, 'C'), (7, 'Nitrogen', 7, 'N'), (8, 'Oxygen', 8, 'O'), (9, 'Fluorine', 9, 'F'), (10, 'Neon', 10, 'Ne'); ``` 4. Add the Neon database integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Neon**. 5. Follow the setup flow, selecting the database created in step 1. 5. In your Worker, install the `@neondatabase/serverless` driver to connect to your database and start manipulating data: ```sh npm install @neondatabase/serverless ``` 6. The following example shows how to make a query to your Neon database in a Worker. The credentials needed to connect to Neon have been automatically added as secrets to your Worker through the integration. ```javascript import { Client } from '@neondatabase/serverless'; export default { async fetch(request, env, ctx) { const client = new Client(env.DATABASE_URL); await client.connect(); const { rows } = await client.query('SELECT * FROM elements'); ctx.waitUntil(client.end()); // this doesn’t hold up the response return new Response(JSON.stringify(rows)); } } ``` To learn more about Neon, refer to [Neon's official documentation](https://neon.tech/docs/introduction). --- # Database Integrations URL: https://developers.cloudflare.com/workers/databases/native-integrations/ import { DirectoryListing } from "~/components" ## Background Connect to databases using the new Database Integrations (beta) experience. Enable Database Integrations in the [Cloudflare dashboard](https://dash.cloudflare.com). With Database Integrations, Cloudflare automatically handles the process of creating a connection string and adding it as secrets to your Worker.
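Regardless of provider, the integration's credentials surface on your Worker's `env` object like any other secret. A minimal sketch (the secret name `DATABASE_URL` is illustrative; the exact names depend on the provider and are listed in your Worker's settings):

```javascript
// Sketch: integration credentials are exposed to your Worker as secrets on `env`.
// The name DATABASE_URL is an assumption for illustration — check your Worker's
// settings for the exact secret names your integration created.
const worker = {
  async fetch(request, env) {
    if (!env.DATABASE_URL) {
      return new Response("Database integration is not configured", { status: 500 });
    }
    // Hand env.DATABASE_URL (or the provider-specific secrets) to your driver here.
    return new Response("integration configured");
  },
};
// In a real Worker module you would `export default worker;`
```

Because the credentials are ordinary secrets, the same pattern works whether they were created by an integration or added manually.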
:::note[Making multiple round trip calls to a centralized database from a Worker?] If your Worker is making multiple round trip calls to a centralized database, your Worker may be a good fit for Smart Placement. Smart Placement speeds up applications by automatically running your Worker closer to your back-end infrastructure rather than the end user. Learn more about [how Smart Placement works](/workers/configuration/smart-placement/). ::: ## Database credentials If you rotate or delete database credentials, you must delete the integration and go through the setup flow again. ## Database limits At this time, Database Integrations only support access to one database per provider. To add multiple, you must manually configure [secrets](/workers/configuration/environment-variables/). ## Supported platforms <DirectoryListing /> --- # Supabase URL: https://developers.cloudflare.com/workers/databases/native-integrations/supabase/ import { Render } from "~/components"; [Supabase](https://supabase.com/) is an open source Firebase alternative and a PostgreSQL database service that offers real-time functionality, database backups, and extensions. With Supabase, developers can quickly set up a PostgreSQL database and build applications. <Render file="database_integrations_definition" /> ## Set up an integration with Supabase To set up an integration with Supabase: 1. You need to have an existing Supabase database to connect to. [Create a Supabase database](https://supabase.com/docs/guides/database/tables#creating-tables) or [have an existing database to connect to Supabase and load data from](https://supabase.com/docs/guides/database/tables#loading-data). 2. Create a `countries` table with the following query. You can create a table in your Supabase dashboard in two ways: - Use the table editor, which allows you to set up Postgres similar to a spreadsheet. 
- Alternatively, use the [SQL editor](https://supabase.com/docs/guides/database/overview#the-sql-editor): ```sql CREATE TABLE countries ( id SERIAL PRIMARY KEY, name VARCHAR(255) NOT NULL ); ``` 3. Insert some data in your newly created table. Run the following commands to add countries to your table: ```sql INSERT INTO countries (name) VALUES ('United States'); INSERT INTO countries (name) VALUES ('Canada'); INSERT INTO countries (name) VALUES ('The Netherlands'); ``` 4. Add the Supabase database integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Supabase**. 5. Follow the setup flow, selecting the database created in step 1. 5. In your Worker, install the `@supabase/supabase-js` driver to connect to your database and start manipulating data: ```sh npm install @supabase/supabase-js ``` 6. The following example shows how to make a query to your Supabase database in a Worker. The credentials needed to connect to Supabase have been automatically added as secrets to your Worker through the integration. ```js import { createClient } from '@supabase/supabase-js'; export default { async fetch(request, env) { const supabase = createClient(env.SUPABASE_URL, env.SUPABASE_KEY); const { data, error } = await supabase.from("countries").select('*'); if (error) throw error; return new Response(JSON.stringify(data), { headers: { "Content-Type": "application/json", }, }); }, }; ``` To learn more about Supabase, refer to [Supabase's official documentation](https://supabase.com/docs). --- # PlanetScale URL: https://developers.cloudflare.com/workers/databases/native-integrations/planetscale/ import { Render } from "~/components"; [PlanetScale](https://planetscale.com/) is a MySQL-compatible platform that makes databases infinitely scalable, easier and safer to manage.
<Render file="database_integrations_definition" /> ## Set up an integration with PlanetScale To set up an integration with PlanetScale: 1. You need to have an existing PlanetScale database to connect to. [Create a PlanetScale database](https://planetscale.com/docs/tutorials/planetscale-quick-start-guide#create-a-database) or [import an existing database to PlanetScale](https://planetscale.com/docs/imports/database-imports#overview). 2. From the [PlanetScale web console](https://planetscale.com/docs/concepts/web-console#get-started), create a `products` table with the following query: ```sql CREATE TABLE products ( id int NOT NULL AUTO_INCREMENT PRIMARY KEY, name varchar(255) NOT NULL, image_url varchar(255), category_id INT, KEY category_id_idx (category_id) ); ``` 3. Insert some data in your newly created table. Run the following command to add a product and category to your table: ```sql INSERT INTO products (name, image_url, category_id) VALUES ('Ballpoint pen', 'https://example.com/500x500', '1'); ``` 4. Add the PlanetScale integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **PlanetScale**. 5. Follow the setup flow, selecting the database created in step 1. 5. In your Worker, install the `@planetscale/database` driver to connect to your PlanetScale database and start manipulating data: ```sh npm install @planetscale/database ``` 6. The following example shows how to make a query to your PlanetScale database in a Worker. The credentials needed to connect to PlanetScale have been automatically added as secrets to your Worker through the integration. 
```js import { connect } from "@planetscale/database"; export default { async fetch(request, env) { const config = { host: env.DATABASE_HOST, username: env.DATABASE_USERNAME, password: env.DATABASE_PASSWORD, // see https://github.com/cloudflare/workerd/issues/698 fetch: (url, init) => { delete init["cache"]; return fetch(url, init); }, }; const conn = connect(config); const data = await conn.execute("SELECT * FROM products;"); return new Response(JSON.stringify(data.rows), { status: 200, headers: { "Content-Type": "application/json", }, }); }, }; ``` To learn more about PlanetScale, refer to [PlanetScale's official documentation](https://docs.planetscale.com/). --- # Turso URL: https://developers.cloudflare.com/workers/databases/native-integrations/turso/ import { Render } from "~/components"; [Turso](https://turso.tech/) is an edge-hosted, distributed database based on [libSQL](https://libsql.org/), an open-source fork of SQLite. Turso was designed to minimize query latency for applications where queries come from anywhere in the world. <Render file="database_integrations_definition" /> ## Set up an integration with Turso To set up an integration with Turso: 1. You need to install the Turso CLI to create and populate a database. Use one of the following two commands in your terminal to install the Turso CLI: ```sh # On macOS and Linux with Homebrew brew install tursodatabase/tap/turso # Manual scripted installation curl -sSfL https://get.tur.so/install.sh | bash ``` Next, run the following command to make sure the Turso CLI is installed: ```sh turso --version ``` 2. Before you create your first Turso database, you have to authenticate with your GitHub account by running: ```sh turso auth login ``` ```sh output Waiting for authentication... ✔ Success! Logged in as <YOUR_GITHUB_USERNAME> ``` After you have authenticated, you can create a database using the command `turso db create <DATABASE_NAME>`.
Turso will create a database and automatically choose a location closest to you. ```sh turso db create my-db ``` ```sh output # Example: Creating database my-db in Amsterdam, Netherlands (ams) # Once succeeded: Created database my-db in Amsterdam, Netherlands (ams) in 13 seconds. ``` With the first database created, you can now connect to it directly and execute SQL queries against it. ```sh turso db shell my-db ``` 3. Copy the following SQL query into the shell you just opened: ```sql CREATE TABLE elements ( id INTEGER NOT NULL, elementName TEXT NOT NULL, atomicNumber INTEGER NOT NULL, symbol TEXT NOT NULL ); INSERT INTO elements (id, elementName, atomicNumber, symbol) VALUES (1, 'Hydrogen', 1, 'H'), (2, 'Helium', 2, 'He'), (3, 'Lithium', 3, 'Li'), (4, 'Beryllium', 4, 'Be'), (5, 'Boron', 5, 'B'), (6, 'Carbon', 6, 'C'), (7, 'Nitrogen', 7, 'N'), (8, 'Oxygen', 8, 'O'), (9, 'Fluorine', 9, 'F'), (10, 'Neon', 10, 'Ne'); ``` 4. Add the Turso database integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Turso**. 5. Follow the setup flow, selecting the database created in step 1. 5. In your Worker, install the Turso client library: ```sh npm install @libsql/client ``` 6. The following example shows how to make a query to your Turso database in a Worker. The credentials needed to connect to Turso have been automatically added as [secrets](/workers/configuration/secrets/) to your Worker through the integration. 
```ts import { Client as LibsqlClient, createClient } from "@libsql/client/web"; export interface Env { TURSO_URL?: string; TURSO_AUTH_TOKEN?: string; } export default { async fetch(request, env, ctx): Promise<Response> { const client = buildLibsqlClient(env); try { const res = await client.execute("SELECT * FROM elements"); return new Response(JSON.stringify(res), { status: 200, headers: { "Content-Type": "application/json" }, }); } catch (error) { console.error("Error executing SQL query:", error); return new Response(JSON.stringify({ error: "Internal Server Error" }), { status: 500, }); } }, } satisfies ExportedHandler<Env>; function buildLibsqlClient(env: Env): LibsqlClient { const url = env.TURSO_URL?.trim(); if (url === undefined) { throw new Error("TURSO_URL env var is not defined"); } const authToken = env.TURSO_AUTH_TOKEN?.trim(); if (authToken === undefined) { throw new Error("TURSO_AUTH_TOKEN env var is not defined"); } return createClient({ url, authToken }); } ``` - The libSQL client library import `@libsql/client/web` must be imported exactly as shown when working with Cloudflare Workers. The non-web import will not work in the Workers environment. - The `Env` interface contains the [environment variable](/workers/configuration/environment-variables/) and [secret](/workers/configuration/secrets/) defined when you added the Turso integration in step 4. - The `buildLibsqlClient` helper validates both values and creates a libSQL client for the request. - The Worker uses this client to query the `elements` table and returns the response as a JSON object. With your environment configured and your code ready, you can now test your Worker locally before you deploy. To learn more about Turso, refer to [Turso's official documentation](https://docs.turso.tech).
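The guard clauses in `buildLibsqlClient` above follow a pattern that can be exercised on its own. The following is a minimal sketch; `requireVar` is an illustrative helper name, not part of the libSQL SDK:

```javascript
// Return the trimmed value of a required variable from an env object,
// throwing early with a descriptive message when it is missing or blank.
function requireVar(env, name) {
	const value = env[name]?.trim();
	if (!value) {
		throw new Error(`${name} env var is not defined`);
	}
	return value;
}

// Example: both variables are present, so lookup succeeds
// (names mirror the Turso integration's secrets).
const env = { TURSO_URL: " libsql://my-db.example.com ", TURSO_AUTH_TOKEN: "token" };
const url = requireVar(env, "TURSO_URL"); // "libsql://my-db.example.com"
```

Failing fast like this surfaces a misconfigured integration as a clear error message rather than an opaque connection failure.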
--- # Upstash URL: https://developers.cloudflare.com/workers/databases/native-integrations/upstash/ import { Render } from "~/components"; [Upstash](https://upstash.com/) is a serverless database with Redis\* and Kafka APIs. Upstash also offers QStash, a task queue/scheduler designed for serverless applications. <Render file="database_integrations_definition" /> ## Upstash for Redis To set up an integration with Upstash: 1. You need an existing Upstash database to connect to. [Create an Upstash database](https://docs.upstash.com/redis#create-a-database) or [load data from an existing database to Upstash](https://docs.upstash.com/redis/howto/connectclient). 2. Insert some data into your Upstash database. You can add data to your Upstash database in two ways: - Use the CLI directly from your Upstash console. - Alternatively, install [redis-cli](https://redis.io/docs/getting-started/installation/) locally and run the following commands. ```sh set GB "Ey up?" ``` ```sh output OK ``` ```sh set US "Yo, what’s up?" ``` ```sh output OK ``` ```sh set NL "Hoi, hoe gaat het?" ``` ```sh output OK ``` 3. Add the Upstash Redis integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Settings** > **Integrations** > **Upstash Redis**. 5. Follow the setup flow, selecting the database created in step 1. 4. In your Worker, install `@upstash/redis`, an HTTP client to connect to your database and start manipulating data: ```sh npm install @upstash/redis ``` 5. The following example shows how to make a query to your Upstash database in a Worker. The credentials needed to connect to Upstash have been automatically added as secrets to your Worker through the integration.
```js import { Redis } from "@upstash/redis/cloudflare"; export default { async fetch(request, env) { const redis = Redis.fromEnv(env); const country = request.headers.get("cf-ipcountry"); if (country) { const greeting = await redis.get(country); if (greeting) { return new Response(greeting); } } return new Response("Hello What's up!"); }, }; ``` :::note `Redis.fromEnv(env)` automatically picks up the default `url` and `token` names created in the integration. If you have renamed the secrets, you must declare them explicitly as in the [Upstash basic example](https://docs.upstash.com/redis/sdks/redis-ts/getstarted#basic-usage). ::: To learn more about Upstash, refer to the [Upstash documentation](https://docs.upstash.com/redis). ## Upstash Kafka To set up an integration with Upstash Kafka: 1. Create a [Kafka cluster and topic](https://docs.upstash.com/kafka). 2. Add the Upstash Kafka integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Settings** > **Integrations** > **Upstash Kafka**. 5. Follow the setup flow, selecting the cluster and topic. 3. In your Worker, install `@upstash/kafka`, an HTTP/REST-based Kafka client: ```sh npm install @upstash/kafka ``` 4. Use the [upstash-kafka](https://github.com/upstash/upstash-kafka/blob/main/README.md) JavaScript SDK to send data to Kafka. Refer to [Upstash documentation on Kafka setup with Workers](https://docs.upstash.com/kafka/real-time-analytics/realtime_analytics_serverless_kafka_setup#option-1-cloudflare-workers) for more information. Replace `url`, `username` and `password` with the variables set by the integration. ## Upstash QStash To set up an integration with Upstash QStash: 1. Configure the [publicly available HTTP endpoint](https://docs.upstash.com/qstash#1-public-api) that you want to send your messages to. 2.
Add the Upstash QStash integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Upstash QStash**. 5. Follow the setup flow. 3. In your Worker, install `@upstash/qstash`, an HTTP client to connect to your QStash endpoint: ```sh npm install @upstash/qstash ``` 4. Refer to the [Upstash documentation on how to receive webhooks from QStash in your Cloudflare Worker](https://docs.upstash.com/qstash/quickstarts/cloudflare-workers#3-use-qstash-in-your-handler). \* Redis is a trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Upstash is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Upstash. --- # Xata URL: https://developers.cloudflare.com/workers/databases/native-integrations/xata/ import { Render } from "~/components"; [Xata](https://xata.io) is a serverless data platform powered by PostgreSQL. Xata uniquely combines multiple types of stores (relational databases, search engines, analytics engines) into a single service, accessible through a consistent REST API. <Render file="database_integrations_definition" /> ## Set up an integration with Xata To set up an integration with Xata: 1. You need an existing Xata database to connect to, or you can [create a new database](https://app.xata.io/workspaces) from your Xata workspace. 2. In your database, you have several options for creating a table: you can start from scratch, use a template filled with sample data, or import data from a CSV file. For this guide, choose **Start with sample data**. This option automatically populates your database with two sample tables: `Posts` and `Users`. 3. Add the Xata integration to your Worker: 1.
Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Xata**. 5. Follow the setup flow, selecting the database created in step 1. 4. Install the [Xata CLI](https://xata.io/docs/getting-started/installation) and authenticate the CLI by running the following commands: ```sh npm install -g @xata.io/cli xata auth login ``` 5. Once you have the CLI set up, run the following command in the root directory of your Worker project: ```sh xata init ``` Accept the default settings during the configuration process. After completion, a `.env` and `.xatarc` file will be generated in your project folder. 6. To give your Worker access to the secret values generated when running in development mode, create a `.dev.vars` file in your project's root directory and add the following content, replacing the placeholders with the specific values: ```txt XATA_API_KEY=<YOUR_API_KEY_HERE> XATA_BRANCH=<YOUR_BRANCH_HERE> XATA_DATABASE_URL=<YOUR_DATABASE_URL_HERE> ``` 7. The following example shows how to make a query to your Xata database in a Worker. The credentials needed to connect to Xata have been automatically added as secrets to your Worker through the integration. ```ts // The XataClient class is generated by `xata init`; adjust the import path to match your project. import { XataClient } from "./xata"; export default { async fetch(request, env, ctx): Promise<Response> { const xata = new XataClient({ apiKey: env.XATA_API_KEY, branch: env.XATA_BRANCH, databaseURL: env.XATA_DATABASE_URL, }); const records = await xata.db.Posts.select([ "id", "title", "author.name", "author.email", "author.bio", ]).getAll(); return Response.json(records); }, } satisfies ExportedHandler<Env>; ``` To learn more about Xata, refer to [Xata's official documentation](https://xata.io/docs).
--- # Angular URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/angular/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; In this guide, you will create a new [Angular](https://angular.dev/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)). ## 1. Set up a new project Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Angular's official setup tool, and provide the option to deploy instantly. To use `create-cloudflare` to create a new Angular project with <InlineBadge preset="beta" /> Workers Assets, run the following command: <PackageManagers type="create" pkg="cloudflare@latest my-angular-app" args={"--framework=angular --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Angular", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-angular-app ``` ## 2. Develop locally After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development. <PackageManagers type="run" args={"dev"} /> ## 3. Deploy your Project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. 
<PackageManagers type="run" args={"deploy"} /> --- ## Static assets You can serve static assets in your Angular application by placing them in [the `./public/` directory](https://angular.dev/reference/configs/file-structure#workspace-configuration-files). This can be useful for resource files such as images, stylesheets, fonts, and manifests. <Render file="workers-assets-routing-summary" /> --- # Astro URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/astro/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; In this guide, you will create a new [Astro](https://astro.build/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)). ## 1. Set up a new project Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Astro's official setup tool, and provide the option to deploy instantly. To use `create-cloudflare` to create a new Astro project with <InlineBadge preset="beta" /> Workers Assets, run the following command: <PackageManagers type="create" pkg="cloudflare@latest my-astro-app" args={"--framework=astro --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Astro", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-astro-app ``` ## 2. Develop locally After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development. <PackageManagers type="run" args={"dev"} /> ## 3.
Deploy your Project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. <PackageManagers type="run" args={"deploy"} /> --- ## Bindings Your Astro application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Astro documentation](https://docs.astro.build/en/guides/integrations-guide/cloudflare/#cloudflare-runtime) provides information about configuring bindings and how you can access them in your `locals`. ## Static assets You can serve static assets in your Astro application by placing them in [the `./public/` directory](https://docs.astro.build/en/basics/project-structure/#public). This can be useful for resource files such as images, stylesheets, fonts, and manifests. <Render file="workers-assets-routing-summary" /> --- # Docusaurus URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/docusaurus/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; In this guide, you will create a new [Docusaurus](https://docusaurus.io/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)). ## 1. Set up a new project Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Docusaurus' official setup tool, and provide the option to deploy instantly.
To use `create-cloudflare` to create a new Docusaurus project with <InlineBadge preset="beta" /> Workers Assets, run the following command: <PackageManagers type="create" pkg="cloudflare@latest my-docusaurus-app" args={"--framework=docusaurus --platform=workers"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Docusaurus", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-docusaurus-app ``` ## 2. Develop locally After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development. <PackageManagers type="run" args={"dev"} /> ## 3. Deploy your Project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. <PackageManagers type="run" args={"deploy"} /> --- ## Static assets You can serve static assets in your Docusaurus application by placing them in [the `./static/` directory](https://docusaurus.io/docs/static-assets). This can be useful for resource files such as images, stylesheets, fonts, and manifests. --- # Gatsby URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/gatsby/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; In this guide, you will create a new [Gatsby](https://www.gatsbyjs.com/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)). ## 1.
Set up a new project Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Gatsby's official setup tool, and provide the option to deploy instantly. To use `create-cloudflare` to create a new Gatsby project with <InlineBadge preset="beta" /> Workers Assets, run the following command: <PackageManagers type="create" pkg="cloudflare@latest my-gatsby-app" args={"--framework=gatsby --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Gatsby", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-gatsby-app ``` ## 2. Develop locally After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development. <PackageManagers type="run" args={"dev"} /> ## 3. Deploy your Project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. <PackageManagers type="run" args={"deploy"} /> --- ## Static assets You can serve static assets in your Gatsby application by placing them in [the `./static/` directory](https://www.gatsbyjs.com/docs/how-to/images-and-media/static-folder/). This can be useful for resource files such as images, stylesheets, fonts, and manifests.
--- # Next.js URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/nextjs/ import { Badge, Description, InlineBadge, Render, PackageManagers, Stream, WranglerConfig } from "~/components"; ## New apps To create a new Next.js app, pre-configured to run on Cloudflare using [`@opennextjs/cloudflare`](https://opennext.js.org/cloudflare), run: <PackageManagers type="create" pkg="cloudflare@latest my-next-app" args={"--framework=next --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Next.js", }} /> ## Existing Next.js apps :::note[Minimum required Wrangler version: 3.78.10.] Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](/workers/wrangler/install-and-update/). ::: ### 1. Install @opennextjs/cloudflare First, install [@opennextjs/cloudflare](https://www.npmjs.com/package/@opennextjs/cloudflare): ```sh npm install --save-dev @opennextjs/cloudflare ``` ### 2. Add a Wrangler configuration file Then, add a [Wrangler configuration file](/workers/wrangler/configuration/) to the root directory of your Next.js app: <WranglerConfig> ```toml main = ".open-next/worker.js" name = "my-app" compatibility_date = "2024-09-23" compatibility_flags = ["nodejs_compat"] assets = { directory = ".open-next/assets", binding = "ASSETS" } ``` </WranglerConfig> :::note As shown above, you must enable the [`nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/) _and_ set your [compatibility date](/workers/configuration/compatibility-dates/) to `2024-09-23` or later for your Next.js app to work with @opennextjs/cloudflare. ::: You configure your Worker and define what resources it can access via [bindings](/workers/runtime-apis/bindings/) in the [Wrangler configuration file](/workers/wrangler/configuration/). ### 3. 
Update `package.json` Add the following to the `scripts` field of your `package.json` file: ```json "build:worker": "opennextjs-cloudflare", "dev:worker": "wrangler dev --port 8771", "preview:worker": "npm run build:worker && npm run dev:worker", "deploy:worker": "npm run build:worker && wrangler deploy" ``` - `npm run build:worker`: Runs the [@opennextjs/cloudflare](https://www.npmjs.com/package/@opennextjs/cloudflare) adapter. This first builds your app by running `next build` behind the scenes, and then transforms the build output to a format that you can run locally using [Wrangler](/workers/wrangler/) and deploy to Cloudflare. - `npm run dev:worker`: Takes the output generated by `build:worker` and runs it locally in [workerd](https://github.com/cloudflare/workerd), the open-source Workers Runtime, allowing you to run the app locally in the same environment that it will run in production. If you instead run `next dev`, your app will run in Node.js, which is a different JavaScript runtime from the Workers runtime, with differences in behavior and APIs. - `npm run preview:worker`: Runs `build:worker` and then `dev:worker`, allowing you to quickly preview your app running locally in the Workers runtime, via a single command. - `npm run deploy:worker`: Builds your app, and then deploys it to Cloudflare. ### 4. Add caching with Workers KV `opennextjs/cloudflare` uses [Workers KV](/kv/) as the cache for your Next.js app. Workers KV is [fast](https://blog.cloudflare.com/faster-workers-kv) and uses Cloudflare's [Tiered Cache](/cache/how-to/tiered-cache/) to increase cache hit rates. When you write cached data to Workers KV, you write to storage that can be read by any Cloudflare location. This means your app can fetch data, cache it in KV, and serve subsequent requests from that cache anywhere in the world.
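The read-through caching flow described in the previous paragraph can be sketched in isolation. This is a conceptual sketch rather than the adapter's actual implementation: `cachedFetch` is an illustrative name, and a plain object stands in for the KV binding.

```javascript
// Minimal read-through cache: check the cache first; on a miss,
// run the fetcher and store the result for subsequent requests.
// `kv` mimics the subset of the Workers KV API used here (get/put).
async function cachedFetch(kv, key, fetcher) {
	const cached = await kv.get(key);
	if (cached !== null && cached !== undefined) {
		return cached;
	}
	const fresh = await fetcher();
	await kv.put(key, fresh);
	return fresh;
}

// In-memory stand-in for a KV namespace binding, for illustration only.
const kv = {
	store: new Map(),
	async get(key) { return this.store.has(key) ? this.store.get(key) : null; },
	async put(key, value) { this.store.set(key, value); },
};
```

In production, the adapter handles these reads and writes for you against the `NEXT_CACHE_WORKERS_KV` binding configured below.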
To enable caching, you must: #### Create a KV namespace ```sh npx wrangler@latest kv namespace create NEXT_CACHE_WORKERS_KV ``` #### Add the KV namespace to your Worker <WranglerConfig> ```toml [[kv_namespaces]] binding = "NEXT_CACHE_WORKERS_KV" id = "<YOUR_NAMESPACE_ID>" ``` </WranglerConfig> #### Set the name of the binding to `NEXT_CACHE_WORKERS_KV` As shown above, the name of the binding that you configure for the KV namespace must be set to `NEXT_CACHE_WORKERS_KV`. ### 5. Develop locally You can continue to run `next dev` when developing locally. ### 6. Preview your application locally and create an OpenNext config file In step 3, we also added the `npm run preview:worker` script, which allows you to quickly preview your app running locally in the Workers runtime, rather than in Node.js. This allows you to test changes in the same runtime that your app runs in when deployed to Cloudflare. To preview your application in this way, run: ```sh npm run preview:worker ``` This command will build your OpenNext application, also creating, if not already present, an `open-next.config.ts` file for you. This is necessary if you want to deploy your application with a GitHub/GitLab integration as presented in the next step. ### 7. Deploy to Cloudflare Workers Either deploy via the command line: ```sh npm run deploy:worker ``` Or [connect a GitHub or GitLab repository](/workers/ci-cd/), and Cloudflare will automatically build and deploy each pull request you merge to your production branch. --- ## Static assets You can serve static assets in your Next.js application by placing them in the `./public/` directory. This can be useful for resource files such as images, stylesheets, fonts, and manifests.
<Render file="workers-assets-routing-summary" /> --- # Nuxt URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/nuxt/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; In this guide, you will create a new [Nuxt](https://nuxt.com/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)). ## 1. Set up a new project Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Nuxt's official setup tool, and provide the option to deploy instantly. To use `create-cloudflare` to create a new Nuxt project with <InlineBadge preset="beta" /> Workers Assets, run the following command: <PackageManagers type="create" pkg="cloudflare@latest my-nuxt-app" args={"--framework=nuxt --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Nuxt", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-nuxt-app ``` ## 2. Develop locally After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development. <PackageManagers type="run" args={"dev"} /> ## 3. Deploy your Project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. 
<PackageManagers type="run" args={"deploy"} />

---

## Bindings

Your Nuxt application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Nuxt documentation](https://nitro.unjs.io/deploy/providers/cloudflare#direct-access-to-cloudflare-bindings) provides information about configuring bindings and how you can access them in your Nuxt event handlers.

## Static assets

You can serve static assets for your Nuxt application by placing them in [the `./public/` directory](https://nuxt.com/docs/guide/directory-structure/public). This can be useful for resource files such as images, stylesheets, fonts, and manifests.

<Render file="workers-assets-routing-summary" />

---

# Qwik

URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/qwik/

import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components";

In this guide, you will create a new [Qwik](https://qwik.dev/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)).

## 1. Set up a new project

Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Qwik's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Qwik project with <InlineBadge preset="beta" /> Workers Assets, run the following command:

<PackageManagers type="create" pkg="cloudflare@latest my-qwik-app" args={"--framework=qwik --experimental"} />

<Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Qwik", }} />

After setting up your project, change your directory by running the following command:

```sh
cd my-qwik-app
```

## 2. Develop locally

After you have created your project, run the following command in the project directory to start a local server.
This will allow you to preview your project locally during development.

<PackageManagers type="run" args={"dev"} />

## 3. Deploy your Project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

<PackageManagers type="run" args={"deploy"} />

---

## Bindings

Your Qwik application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Qwik documentation](https://qwik.dev/docs/deployments/cloudflare-pages/#context) provides information about configuring bindings and how you can access them in your Qwik endpoint methods.

## Static assets

You can serve static assets for your Qwik application by placing them in [the `./public/` directory](https://qwik.dev/docs/advanced/static-assets/). This can be useful for resource files such as images, stylesheets, fonts, and manifests.

<Render file="workers-assets-routing-summary" />

---

# Remix

URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/remix/

import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components";

In this guide, you will create a new [Remix](https://remix.run/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)).

## 1. Set up a new project

Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Remix's official setup tool, and provide the option to deploy instantly.
To use `create-cloudflare` to create a new Remix project with <InlineBadge preset="beta" /> Workers Assets, run the following command:

<PackageManagers type="create" pkg="cloudflare@latest my-remix-app" args={"--framework=remix --experimental"} />

<Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Remix", }} />

After setting up your project, change your directory by running the following command:

```sh
cd my-remix-app
```

## 2. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

<PackageManagers type="run" args={"dev"} />

## 3. Deploy your Project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

<PackageManagers type="run" args={"deploy"} />

---

## Bindings

Your Remix application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Remix documentation](https://remix.run/docs/en/main/guides/vite#bindings) provides information about configuring bindings and how you can access them in your Remix page loaders.

## Static assets

You can serve static assets for your Remix application by placing them in the `./public/` directory. This can be useful for resource files such as images, stylesheets, fonts, and manifests.
<Render file="workers-assets-routing-summary" /> --- # Solid URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/solid/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; In this guide, you will create a new [Solid](https://www.solidjs.com/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)). ## 1. Set up a new project Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Solid's official setup tool, and provide the option to deploy instantly. To use `create-cloudflare` to create a new Solid project with <InlineBadge preset="beta" /> Workers Assets, run the following command: <PackageManagers type="create" pkg="cloudflare@latest my-solid-app" args={"--framework=solid --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Solid", }} /> After setting up your project, change your directory by running the following command: ```sh cd my-solid-app ``` ## 2. Develop locally After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development. <PackageManagers type="run" args={"dev"} /> ## 3. Deploy your Project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. 
<PackageManagers type="run" args={"deploy"} />

---

## Bindings

Your Solid application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Solid documentation](https://docs.solidjs.com/reference/server-utilities/get-request-event) provides information about how to access platform primitives, including bindings. Specifically, for Cloudflare, you can use [`getRequestEvent().nativeEvent.context.cloudflare.env`](https://docs.solidjs.com/solid-start/advanced/request-events#nativeevent) to access bindings.

## Static assets

You can serve static assets for your Solid application by placing them in [the `./public/` directory](https://docs.solidjs.com/solid-start/building-your-application/static-assets). This can be useful for resource files such as images, stylesheets, fonts, and manifests.

<Render file="workers-assets-routing-summary" />

---

# Svelte

URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/svelte/

import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components";

In this guide, you will create a new [Svelte](https://svelte.dev/) application and deploy to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)).

## 1. Set up a new project

Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Svelte's official setup tool, and provide the option to deploy instantly.
To use `create-cloudflare` to create a new Svelte project with <InlineBadge preset="beta" /> Workers Assets, run the following command:

<PackageManagers type="create" pkg="cloudflare@latest my-svelte-app" args={"--framework=svelte --experimental"} />

<Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Svelte", }} />

After setting up your project, change your directory by running the following command:

```sh
cd my-svelte-app
```

## 2. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

<PackageManagers type="run" args={"dev"} />

## 3. Deploy your Project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

<PackageManagers type="run" args={"deploy"} />

---

## Bindings

Your Svelte application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Svelte documentation](https://kit.svelte.dev/docs/adapter-cloudflare#runtime-apis) provides information about configuring bindings and how you can access them in your Svelte hooks and endpoints.

## Static assets

You can serve static assets for your Svelte application by placing them in [the `./static/` directory](https://kit.svelte.dev/docs/project-structure#project-files-static). This can be useful for resource files such as images, stylesheets, fonts, and manifests.
<Render file="workers-assets-routing-summary" /> --- # JavaScript URL: https://developers.cloudflare.com/workers/languages/javascript/ ## JavaScript The Workers platform is designed to be [JavaScript standards compliant](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable, and supports JavaScript standards, as defined by [TC39](https://tc39.es/) (ECMAScript). Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG](https://wintercg.org/) JavaScript runtimes. Refer to [Runtime APIs](/workers/runtime-apis/) for more information on specific JavaScript APIs available in Workers. ### Resources * [Getting Started](/workers/get-started/guide/) * [Quickstarts](/workers/get-started/quickstarts/) – More example repos to use as a basis for your projects * [TypeScript type definitions](https://github.com/cloudflare/workers-types) * [JavaScript and web standard APIs](/workers/runtime-apis/web-standards/) * [Tutorials](/workers/tutorials/) * [Examples](/workers/examples/?languages=JavaScript) --- # Rust URL: https://developers.cloudflare.com/workers/languages/rust/ import { WranglerConfig } from "~/components"; Cloudflare Workers provides support for Rust via the [`workers-rs` crate](https://github.com/cloudflare/workers-rs), which makes [Runtime APIs](/workers/runtime-apis) and [bindings](/workers/runtime-apis/bindings/) to developer platform products, such as [Workers KV](/kv/concepts/how-kv-works/), [R2](/r2/), and [Queues](/queues/), available directly from your Rust code. By following this guide, you will learn how to build a Worker entirely in the Rust programming language. 
## Prerequisites

Before starting this guide, make sure you have:

- A recent version of [`Rust`](https://rustup.rs/)
- [`npm`](https://docs.npmjs.com/getting-started)
- The Rust `wasm32-unknown-unknown` toolchain:

  ```sh
  rustup target add wasm32-unknown-unknown
  ```

- The `cargo-generate` subcommand, which you can install by running:

  ```sh
  cargo install cargo-generate
  ```

## 1. Create a new project with Wrangler

Open a terminal window, and run the following command to generate a Worker project template in Rust:

```sh
cargo generate cloudflare/workers-rs
```

Your project will be created in a new directory that you named, in which you will find the following files and folders:

- `Cargo.toml` - The standard project configuration file for Rust's [`Cargo`](https://doc.rust-lang.org/cargo/) package manager. The template pre-populates some best-practice settings for building for Wasm on Workers.
- `wrangler.toml` - Wrangler configuration, pre-populated with a custom build command to invoke `worker-build` (refer to [Wrangler Bundling](/workers/languages/rust/#bundling-worker-build)).
- `src` - Rust source directory, pre-populated with a Hello World Worker.

## 2. Develop locally

After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command to start a local server for developing your Worker. This will allow you to test your Worker in development.

```sh
npx wrangler dev
```

If you have not used Wrangler before, it will try to open your web browser to log in with your Cloudflare account.

:::note
If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation for more information.
:::

Go to [http://localhost:8787](http://localhost:8787) to see your Worker running. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker.

## 3.
Write your Worker code With your new project generated, write your Worker code. Find the entrypoint to your Worker in `src/lib.rs`: ```rust use worker::*; #[event(fetch)] async fn main(req: Request, env: Env, ctx: Context) -> Result<Response> { Response::ok("Hello, World!") } ``` :::note There is some counterintuitive behavior going on here: 1. `workers-rs` provides an `event` macro which expects a handler function signature identical to those seen in JavaScript Workers. 2. `async` is not generally supported by Wasm, but you are able to use `async` in a `workers-rs` project (refer to [`async`](/workers/languages/rust/#async-wasm-bindgen-futures)). ::: ### Related runtime APIs `workers-rs` provides a runtime API which closely matches Worker's JavaScript API, and enables integration with Worker's platform features. For detailed documentation of the API, refer to [`docs.rs/worker`](https://docs.rs/worker/latest/worker/). #### `event` macro This macro allows you to define entrypoints to your Worker. The `event` macro supports the following events: - `fetch` - Invoked by an incoming HTTP request. - `scheduled` - Invoked by [`Cron Triggers`](/workers/configuration/cron-triggers/). - `queue` - Invoked by incoming message batches from [Queues](/queues/) (Requires `queue` feature in `Cargo.toml`, refer to the [`workers-rs` GitHub repository and `queues` feature flag](https://github.com/cloudflare/workers-rs#queues)). - `start` - Invoked when the Worker is first launched (such as, to install panic hooks). #### `fetch` parameters The `fetch` handler provides three arguments which match the JavaScript API: 1. **[`Request`](https://docs.rs/worker/latest/worker/struct.Request.html)** An object representing the incoming request. This includes methods for accessing headers, method, path, Cloudflare properties, and body (with support for asynchronous streaming and JSON deserialization with [Serde](https://serde.rs/)). 2. 
**[`Env`](https://docs.rs/worker/latest/worker/struct.Env.html)** Provides access to Worker [bindings](/workers/runtime-apis/bindings/). - [`Secret`](https://github.com/cloudflare/workers-rs/blob/e15f88110d814c2d7759b2368df688433f807694/worker/src/env.rs#L92) - Secret value configured in Cloudflare dashboard or using `wrangler secret put`. - [`Var`](https://github.com/cloudflare/workers-rs/blob/e15f88110d814c2d7759b2368df688433f807694/worker/src/env.rs#L92) - Environment variable defined in `wrangler.toml`. - [`KvStore`](https://docs.rs/worker-kv/latest/worker_kv/struct.KvStore.html) - Workers [KV](/kv/api/) namespace binding. - [`ObjectNamespace`](https://docs.rs/worker/latest/worker/durable/struct.ObjectNamespace.html) - [Durable Object](/durable-objects/) binding. - [`Fetcher`](https://docs.rs/worker/latest/worker/struct.Fetcher.html) - [Service binding](/workers/runtime-apis/bindings/service-bindings/) to another Worker. - [`Bucket`](https://docs.rs/worker/latest/worker/struct.Bucket.html) - [R2](/r2/) Bucket binding. 3. **[`Context`](https://docs.rs/worker/latest/worker/struct.Context.html)** Provides access to [`waitUntil`](/workers/runtime-apis/context/#waituntil) (deferred asynchronous tasks) and [`passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception) (fail open) functionality. #### [`Response`](https://docs.rs/worker/latest/worker/struct.Response.html) The `fetch` handler expects a [`Response`](https://docs.rs/worker/latest/worker/struct.Response.html) return type, which includes support for streaming responses to the client asynchronously. This is also the return type of any subrequests made from your Worker. There are methods for accessing status code and headers, as well as streaming the body asynchronously or deserializing from JSON using [Serde](https://serde.rs/). #### `Router` Implements convenient [routing API](https://docs.rs/worker/latest/worker/struct.Router.html) to serve multiple paths from one Worker. 
Refer to the [`Router` example in the `workers-rs` GitHub repository](https://github.com/cloudflare/workers-rs#or-use-the-router).

## 4. Deploy your Worker project

With your project configured, you can now deploy your Worker to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), if you have one configured. If you have not configured any subdomain or domain, Wrangler will prompt you during the deployment process to set one up.

```sh
npx wrangler deploy
```

Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

:::note
When pushing to your `*.workers.dev` subdomain for the first time, you may see [`523` errors](/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-523-origin-is-unreachable) while DNS is propagating. These errors should resolve themselves after a minute or so.
:::

After completing these steps, you will have a basic Rust-based Worker deployed. From here, you can add crate dependencies and write code in Rust to implement your Worker application. If you would like to know more about the inner workings of how Rust compiled to Wasm is supported by Workers, the next section outlines the libraries and tools involved.

## How this deployment works

Wasm Workers are invoked from a JavaScript entrypoint script which is created automatically for you when using `workers-rs`.

### JavaScript Plumbing (`wasm-bindgen`)

To access platform features such as bindings, Wasm Workers must be able to access methods from the JavaScript runtime API. This interoperability is achieved using [`wasm-bindgen`](https://rustwasm.github.io/wasm-bindgen/), which provides the glue code needed to import runtime APIs to, and export event handlers from, the Wasm module. `wasm-bindgen` also provides [`js-sys`](https://docs.rs/js-sys/latest/js_sys/), which implements types for interacting with JavaScript objects.
In practice, this is an implementation detail, as `workers-rs`'s API handles conversion to and from JavaScript objects, and interaction with imported JavaScript runtime APIs, for you.

:::note
If you are using `wasm-bindgen` without `workers-rs` / `worker-build`, then you will need to patch the JavaScript that it emits. This is because when you import a `wasm` file in Workers, you get a `WebAssembly.Module` instead of a `WebAssembly.Instance` for performance and security reasons.

To patch the JavaScript that `wasm-bindgen` emits:

1. Run `wasm-pack build --target bundler` as you normally would.
2. Patch the JavaScript file that it produces (the following code block assumes the file is called `mywasmlib.js`):

```js
import * as imports from "./mywasmlib_bg.js";

// Import the Wasm module using both syntaxes: the default import (a
// `WebAssembly.Module`) is used in workerd, and the namespace import in Node.js
import wkmod from "./mywasmlib_bg.wasm";
import * as nodemod from "./mywasmlib_bg.wasm";

if (typeof process !== "undefined" && process.release.name === "node") {
	imports.__wbg_set_wasm(nodemod);
} else {
	const instance = new WebAssembly.Instance(wkmod, {
		"./mywasmlib_bg.js": imports,
	});
	imports.__wbg_set_wasm(instance.exports);
}

export * from "./mywasmlib_bg.js";
```

3. In your Worker entrypoint, import the function and use it directly:

```js
import { myFunction } from "path/to/mylib.js";
```

:::

### Async (`wasm-bindgen-futures`)

[`wasm-bindgen-futures`](https://rustwasm.github.io/wasm-bindgen/api/wasm_bindgen_futures/) (part of the `wasm-bindgen` project) provides interoperability between Rust Futures and JavaScript Promises. `workers-rs` invokes the entire event handler function using `spawn_local`, meaning that you can program using async Rust, which is turned into a single JavaScript Promise and run on the JavaScript event loop. Calls to imported JavaScript runtime APIs are automatically converted to Rust Futures that can be invoked from async Rust functions.
### Bundling (`worker-build`)

To run the resulting Wasm binary on Workers, `workers-rs` includes a build tool called [`worker-build`](https://github.com/cloudflare/workers-rs/tree/main/worker-build) which:

1. Creates a JavaScript entrypoint script that properly invokes the module using `wasm-bindgen`'s JavaScript API.
2. Invokes `esbuild` to minify and bundle the JavaScript code.
3. Outputs a directory structure that Wrangler can use to bundle and deploy the final Worker.

`worker-build` is invoked by default in the template project using a custom build command specified in the `wrangler.toml` file.

### Binary Size (`wasm-opt`)

Unoptimized Rust Wasm binaries can be large and may exceed Worker bundle size limits or experience long startup times. The template project pre-configures several useful size optimizations in your `Cargo.toml` file:

```toml
[profile.release]
lto = true
strip = true
codegen-units = 1
```

Finally, `worker-build` automatically invokes [`wasm-opt`](https://github.com/brson/wasm-opt-rs) to further optimize binary size before upload.

## Related resources

- [Rust Wasm Book](https://rustwasm.github.io/docs/book/)

---

# Supported crates

URL: https://developers.cloudflare.com/workers/languages/rust/crates/

## Background

Learn about popular Rust crates which have been confirmed to work on Workers when using [`workers-rs`](https://github.com/cloudflare/workers-rs) (or in some cases just `wasm-bindgen`) to write Workers in WebAssembly. Each Rust crate example includes any custom configuration that is required.

This is not an exhaustive list; many Rust crates can be compiled to the [`wasm32-unknown-unknown`](https://doc.rust-lang.org/rustc/platform-support/wasm32-unknown-unknown.html) target that is supported by Workers. In some cases, this may require disabling default features or enabling a Wasm-specific feature.
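As a sketch of what this looks like in practice, a `Cargo.toml` dependency entry can disable default features and opt into a Wasm-friendly one. The example below uses the `time` crate and its `wasm-bindgen` feature; the exact crate, version, and feature names depend on the dependency in question.

```toml
[dependencies]
# Disable default features and enable only the Wasm-friendly ones.
time = { version = "0.3", default-features = false, features = ["wasm-bindgen"] }
```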
It is important to consider the addition of new dependencies, as this can significantly increase the [size](/workers/platform/limits/#worker-size) of your Worker.

## `time`

Many crates that have been made Wasm-friendly will use the `time` crate instead of `std::time`. For the `time` crate to work in Wasm, the `wasm-bindgen` feature must be enabled to obtain timing information from JavaScript.

## `tracing`

Tracing can be enabled by using the `tracing-web` crate and the `time` feature for `tracing-subscriber`. Due to [timing limitations](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading) on Workers, spans will have identical start and end times unless they encompass I/O. [Refer to the `tracing` example](https://github.com/cloudflare/workers-rs/tree/main/examples/tracing) for more information.

## `reqwest`

The [`reqwest` library](https://docs.rs/reqwest/latest/reqwest/) can be compiled to Wasm, and hooks into the JavaScript `fetch` API automatically using `wasm-bindgen`.

## `tokio-postgres`

`tokio-postgres` can be compiled to Wasm. It must be configured to use a `Socket` from `workers-rs`. [Refer to the `tokio-postgres` example](https://github.com/cloudflare/workers-rs/tree/main/examples/tokio-postgres) for more information.

## `hyper`

The `hyper` crate contains two HTTP clients: the lower-level `conn` module and the higher-level `Client`. The `conn` module can be used with the Workers `Socket`; however, `Client` requires timing dependencies which are not yet Wasm-friendly. [Refer to the `hyper` example](https://github.com/cloudflare/workers-rs/tree/main/examples/hyper) for more information.

---

# Examples

URL: https://developers.cloudflare.com/workers/languages/python/examples/

Cloudflare has a wide range of Python examples in the [Workers Example gallery](/workers/examples/?languages=Python). In addition to those examples, consider the following ones that illustrate Python-specific behavior.
## Parse an incoming request URL ```python from js import Response from urllib.parse import urlparse, parse_qs async def on_fetch(request, env): # Parse the incoming request URL url = urlparse(request.url) # Parse the query parameters into a Python dictionary params = parse_qs(url.query) if "name" in params: greeting = "Hello there, {name}".format(name=params["name"][0]) return Response.new(greeting) if url.path == "/favicon.ico": return Response.new("") return Response.new("Hello world!") ``` ## Parse JSON from the incoming request ```python from js import Response async def on_fetch(request): name = (await request.json()).name return Response.new("Hello, {name}".format(name=name)) ``` ## Emit logs from your Python Worker ```python # To use the JavaScript console APIs from js import console, Response # To use the native Python logging import logging async def on_fetch(request): # Use the console APIs from JavaScript # https://developer.mozilla.org/en-US/docs/Web/API/console console.log("console.log from Python!") # Alternatively, use the native Python logger logger = logging.getLogger(__name__) # The default level is warning. We can change that to info. 
logging.basicConfig(level=logging.INFO) logger.error("error from Python!") logger.info("info log from Python!") # Or just use print() print("print() from Python!") return Response.new("We're testing logging!") ``` ## Publish to a Queue ```python from js import Response, Object from pyodide.ffi import to_js as _to_js # to_js converts between Python dictionaries and JavaScript Objects def to_js(obj): return _to_js(obj, dict_converter=Object.fromEntries) async def on_fetch(request, env): # Bindings are available on the 'env' parameter # https://developers.cloudflare.com/queues/ # The default contentType is "json" # We can also pass plain text strings await env.QUEUE.send("hello", contentType="text") # Send a JSON payload await env.QUEUE.send(to_js({"hello": "world"})) # Return a response return Response.json(to_js({"write": "success"})) ``` ## Query a D1 Database ```python from js import Response async def on_fetch(request, env): results = await env.DB.prepare("PRAGMA table_list").all() # Return a JSON response return Response.json(results) ``` Refer to [Query D1 from Python Workers](/d1/examples/query-d1-from-python-workers/) for a more in-depth tutorial that covers how to create a new D1 database and configure bindings to D1. ## Next steps * If you're new to Workers and Python, refer to the [get started](/workers/languages/python/) guide * Learn more about [calling JavaScript methods and accessing JavaScript objects](/workers/languages/python/ffi/) from Python * Understand the [supported packages and versions](/workers/languages/python/packages/) currently available to Python Workers. --- # Foreign Function Interface (FFI) URL: https://developers.cloudflare.com/workers/languages/python/ffi/ import { WranglerConfig } from "~/components"; Via [Pyodide](https://pyodide.org/en/stable/), Python Workers provide a [Foreign Function Interface (FFI)](https://en.wikipedia.org/wiki/Foreign_function_interface) to JavaScript. 
This allows you to: * Use [bindings](/workers/runtime-apis/bindings/) to resources on Cloudflare, including [Workers AI](/workers-ai/), [Vectorize](/vectorize/), [R2](/r2/), [KV](/kv/), [D1](/d1/), [Queues](/queues/), [Durable Objects](/durable-objects/), [Service Bindings](/workers/runtime-apis/bindings/service-bindings/) and more. * Use JavaScript globals, like [`Request`](/workers/runtime-apis/request/), [`Response`](/workers/runtime-apis/response/), and [`fetch()`](/workers/runtime-apis/fetch/). * Use the full feature set of Cloudflare Workers — if an API is accessible in JavaScript, you can also access it in a Python Worker, writing exclusively Python code. The details of Pyodide's Foreign Function Interface are documented [here](https://pyodide.org/en/stable/usage/type-conversions.html), and Workers written in Python are able to take full advantage of this. ## Using Bindings from Python Workers Bindings allow your Worker to interact with resources on the Cloudflare Developer Platform. When you declare a binding on your Worker, you grant it a specific capability, such as being able to read and write files to an [R2](/r2/) bucket. For example, to access a [KV](/kv) namespace from a Python Worker, you would declare the following in your Worker's [Wrangler configuration file](/workers/wrangler/configuration/): <WranglerConfig> ```toml main = "./src/index.py" kv_namespaces = [ { binding = "FOO", id = "<YOUR_KV_NAMESPACE_ID>" } ] ``` </WranglerConfig> ...and then call `.get()` on the binding object that is exposed on `env`: ```python from js import Response async def on_fetch(request, env): await env.FOO.put("bar", "baz") bar = await env.FOO.get("bar") return Response.new(bar) # returns "baz" ``` Under the hood, `env` is actually a JavaScript object. 
When you call `.FOO`, you are accessing this property via a [`JsProxy`](https://pyodide.org/en/stable/usage/api/python-api/ffi.html#pyodide.ffi.JsProxy), a special proxy object that makes a JavaScript object behave like a Python object.

## Using JavaScript globals from Python Workers

When writing Workers in Python, you can access JavaScript globals by importing them from the `js` module. For example, note how `Response` is imported from `js` in the example below:

```python
from js import Response

def on_fetch(request):
    return Response.new("Hello World!")
```

Refer to the [Python examples](/workers/languages/python/examples/) to learn how to call into JavaScript functions from Python, including `console.log` and logging, providing options to `Response`, and parsing JSON.

---

# How Python Workers Work

URL: https://developers.cloudflare.com/workers/languages/python/how-python-workers-work/

import { Render, WranglerConfig } from "~/components"

Workers written in Python are executed by [Pyodide](https://pyodide.org/en/stable/index.html). Pyodide is a port of [CPython](https://github.com/python) (the reference implementation of Python — commonly referred to as just "Python") to WebAssembly. When you write a Python Worker, your code is interpreted directly by Pyodide, within a V8 isolate. Refer to [How Workers works](/workers/reference/how-workers-works/) to learn more.

## Local Development Lifecycle

A Python Worker can be as simple as three lines of code:

```python
from js import Response

async def on_fetch(request, env):
    return Response.new("Hello world!")
```

…with a [Wrangler configuration file](/workers/wrangler/configuration/) that points to a `.py` file:

<WranglerConfig>

```toml
name = "hello-world-python-worker"
main = "src/entry.py"
compatibility_date = "2024-04-01"
```

</WranglerConfig>

When you run `npx wrangler@latest dev` in local dev, the Workers runtime will:

1. Determine which version of Pyodide is required, based on your compatibility date
2.
Create a new V8 isolate for your Worker, and automatically inject Pyodide
3. Serve your Python code using Pyodide

There are no extra toolchain or precompilation steps needed. The Python execution environment is provided directly by the Workers runtime, mirroring how Workers written in JavaScript work.

Refer to the [Python examples](/workers/languages/python/examples/) to learn how to use Python within Workers.

## Deployment Lifecycle

To reduce cold start times, when you deploy a Python Worker, Cloudflare performs as much of the expensive work as possible upfront, at deploy time. When you run `npx wrangler@latest deploy`, the following happens:

1. Wrangler uploads your Python code and your `requirements.txt` file to the Workers API.
2. Cloudflare sends your Python code and your `requirements.txt` file to the Workers runtime to be validated.
3. Cloudflare creates a new V8 isolate for your Worker, and automatically injects Pyodide plus any packages you've specified in your `requirements.txt` file.
4. Cloudflare scans the Worker's code for import statements, executes them, and then takes a snapshot of the Worker's WebAssembly linear memory. Effectively, we perform the expensive work of importing packages at deploy time, rather than at runtime.
5. Cloudflare deploys this snapshot alongside your Worker's Python code to the Cloudflare network.

<Render file="python-workers-beta-packages" product="workers" />

When a request comes in to your Worker, we load this snapshot and use it to bootstrap your Worker in an isolate, avoiding expensive initialization time.

Refer to the [blog post introducing Python Workers](https://blog.cloudflare.com/python-workers) for more detail about performance optimizations and how the Workers runtime will reduce cold starts for Python Workers.

## Pyodide and Python versions

A new version of Python is released every year in August, and a new version of Pyodide is released six (6) months later.
When this new version of Pyodide is published, we will add it to Workers by gating it behind a Compatibility Flag, which is only enabled after a specified Compatibility Date. This lets us continually provide updates, without risk of breaking changes, extending the commitment we’ve made for JavaScript to Python. Each Python release has a [five (5) year support window](https://devguide.python.org/versions/). Once this support window has passed for a given version of Python, security patches are no longer applied, making this version unsafe to rely on. To mitigate this risk, while still trying to hold as true as possible to our commitment of stability and long-term support, after five years any Python Worker still on a Python release that is outside of the support window will be automatically moved forward to the next oldest Python release. Python is a mature and stable language, so we expect that in most cases, your Python Worker will continue running without issue. But we recommend updating the compatibility date of your Worker regularly, to stay within the support window. --- # Python URL: https://developers.cloudflare.com/workers/languages/python/ import { WranglerConfig } from "~/components"; Cloudflare Workers provides first-class support for Python, including support for: - The majority of Python's [Standard library](/workers/languages/python/stdlib/) - All [bindings](/workers/runtime-apis/bindings/), including [Workers AI](/workers-ai/), [Vectorize](/vectorize), [R2](/r2), [KV](/kv), [D1](/d1), [Queues](/queues/), [Durable Objects](/durable-objects/), [Service Bindings](/workers/runtime-apis/bindings/service-bindings/) and more. 
- [Environment Variables](/workers/configuration/environment-variables/), and [Secrets](/workers/configuration/secrets/) - A robust [foreign function interface (FFI)](/workers/languages/python/ffi) that lets you use JavaScript objects and functions directly from Python — including all [Runtime APIs](/workers/runtime-apis/) - [Built-in packages](/workers/languages/python/packages), including [FastAPI](https://fastapi.tiangolo.com/), [Langchain](https://pypi.org/project/langchain/), [httpx](https://www.python-httpx.org/) and more. :::caution[Python Workers are in beta. Packages do not run in production.] Currently, you can only deploy Python Workers that use the standard library. [Packages](/workers/languages/python/packages/#supported-packages) **cannot be deployed** and will only work in local development for the time being. You must add the `python_workers` compatibility flag to your Worker, while Python Workers are in open beta. We'd love your feedback. Join the #python-workers channel in the [Cloudflare Developers Discord](https://discord.cloudflare.com/) and let us know what you'd like to see next. ::: ## Get started ```bash git clone https://github.com/cloudflare/python-workers-examples cd python-workers-examples/01-hello npx wrangler@latest dev ``` A Python Worker can be as simple as three lines of code: ```python from js import Response def on_fetch(request): return Response.new("Hello World!") ``` Similar to Workers written in [JavaScript](/workers/languages/javascript), [TypeScript](/workers/languages/typescript), or [Rust](/workers/languages/rust/), the main entry point for a Python worker is the [`fetch` handler](/workers/runtime-apis/handlers/fetch). In a Python Worker, this handler is named `on_fetch`. 
To run a Python Worker locally, you use [Wrangler](/workers/wrangler/), the CLI for Cloudflare Workers: ```bash npx wrangler@latest dev ``` To deploy a Python Worker to Cloudflare, run [`wrangler deploy`](/workers/wrangler/commands/#deploy): ```bash npx wrangler@latest deploy ``` ## Modules Python workers can be split across multiple files. Let's create a new Python file, called `src/hello.py`: ```python def hello(name): return "Hello, " + name + "!" ``` Now, we can modify `src/entry.py` to make use of the new module. ```python from hello import hello from js import Response def on_fetch(request): return Response.new(hello("World")) ``` Once you edit `src/entry.py`, Wrangler will automatically detect the change and reload your Worker. ## The `Request` Interface The `request` parameter passed to your `fetch` handler is a JavaScript Request object, exposed via the foreign function interface, allowing you to access it directly from your Python code. Let's try editing the worker to accept a POST request. We know from the [documentation for `Request`](/workers/runtime-apis/request) that we can call `await request.json()` within an `async` function to parse the request body as JSON. In a Python Worker, you would write: ```python from js import Response from hello import hello async def on_fetch(request): name = (await request.json()).name return Response.new(hello(name)) ``` Once you edit the `src/entry.py`, Wrangler should automatically restart the local development server. Now, if you send a POST request with the appropriate body, your Worker should respond with a personalized message. ```bash curl --header "Content-Type: application/json" \ --request POST \ --data '{"name": "Python"}' http://localhost:8787 ``` ```bash output Hello, Python! 
```

## The `env` Parameter

In addition to the `request` parameter, the `env` parameter is also passed to the Python `fetch` handler and can be used to access [environment variables](/workers/configuration/environment-variables/), [secrets](/workers/configuration/secrets/), and [bindings](/workers/runtime-apis/bindings/).

For example, let us try setting and using an environment variable in a Python Worker. First, add the environment variable to your Worker's [Wrangler configuration file](/workers/wrangler/configuration/):

<WranglerConfig>

```toml
name = "hello-python-worker"
main = "src/entry.py"
compatibility_flags = ["python_workers"]
compatibility_date = "2024-03-20"

[vars]
API_HOST = "example.com"
```

</WranglerConfig>

Then, you can access the `API_HOST` environment variable via the `env` parameter:

```python
from js import Response

async def on_fetch(request, env):
    return Response.new(env.API_HOST)
```

## Further Reading

- Understand which parts of the [Python Standard Library](/workers/languages/python/stdlib) are supported in Python Workers.
- Learn about Python Workers' [foreign function interface (FFI)](/workers/languages/python/ffi), and how to use it to work with [bindings](/workers/runtime-apis/bindings) and [Runtime APIs](/workers/runtime-apis/).
- Explore the [Built-in Python packages](/workers/languages/python/packages) that the Workers runtime provides.

---

# Standard Library

URL: https://developers.cloudflare.com/workers/languages/python/stdlib/

Workers written in Python are executed by [Pyodide](https://pyodide.org/en/stable/index.html). Pyodide is a port of CPython to WebAssembly — for the most part it behaves identically to [CPython](https://github.com/python) (the reference implementation of Python — commonly referred to as just "Python"). The majority of the CPython test suite passes when run against Pyodide. For the most part, you shouldn't need to worry about differences in behavior.
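Because module availability differs between runtimes, code shared between Workers and ordinary CPython can probe for a module before importing it. A small sketch using the standard `importlib` machinery (the helper name is made up):

```python
import importlib.util

def module_available(name):
    """Return True if `name` can be imported in the current runtime.
    (Helper name is illustrative; this is plain standard-library Python.)"""
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        # find_spec can itself raise for badly-behaved parent packages
        return False

print(module_available("json"))    # -> True
print(module_available("curses"))  # False in Workers; varies elsewhere
```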
The full [Python Standard Library](https://docs.python.org/3/library/index.html) is available in Python Workers, with the following exceptions: ## Modules with limited functionality * `hashlib`: Hash algorithms that depend on OpenSSL are not available by default. * `decimal`: The decimal module has C (\_decimal) and Python (\_pydecimal) implementations with the same functionality. Only the C implementation is available (compiled to WebAssembly) * `pydoc`: Help messages for Python builtins are not available * `webbrowser`: The original webbrowser module is not available. ## Excluded modules The following modules are not available in Python Workers: * curses * dbm * ensurepip * fcntl * grp * idlelib * lib2to3 * msvcrt * pwd * resource * syslog * termios * tkinter * turtle.py * turtledemo * venv * winreg * winsound The following modules can be imported, but are not functional due to the limitations of the WebAssembly VM. * multiprocessing * threading * sockets The following are present but cannot be imported due to a dependency on the termios package which has been removed: * pty * tty --- # TypeScript URL: https://developers.cloudflare.com/workers/languages/typescript/ import { TabItem, Tabs } from "~/components"; ## TypeScript TypeScript is a first-class language on Cloudflare Workers. Cloudflare publishes type definitions to [GitHub](https://github.com/cloudflare/workers-types) and [npm](https://www.npmjs.com/package/@cloudflare/workers-types) (`npm install -D @cloudflare/workers-types`). All APIs provided in Workers are fully typed, and type definitions are generated directly from [workerd](https://github.com/cloudflare/workerd), the open-source Workers runtime. ### Generate types that match your Worker's configuration (experimental) Cloudflare continuously improves [workerd](https://github.com/cloudflare/workerd), the open-source Workers runtime. Changes in workerd can introduce JavaScript API changes, thus changing the respective TypeScript types. 
For example, the [`urlsearchparams_delete_has_value_arg`](/workers/configuration/compatibility-flags/#urlsearchparams-delete-and-has-value-argument) compatibility flag adds optional arguments to some methods, in order to support new additions to the WHATWG URL standard API. This means the correct TypeScript types for your Worker depend on: 1. Your Worker's [Compatibility Date](/workers/configuration/compatibility-dates/). 2. Your Worker's [Compatibility Flags](/workers/configuration/compatibility-flags/). For example, if you have `compatibility_flags = ["nodejs_als"]` in your [Wrangler configuration file](/workers/wrangler/configuration/), then the runtime will allow you to use the [`AsyncLocalStorage`](https://nodejs.org/api/async_context.html#class-asynclocalstorage) class in your worker code, but you will not see this reflected in the type definitions in `@cloudflare/workers-types`. In order to solve this issue, and to ensure that your type definitions are always up-to-date with your compatibility settings, you can dynamically generate the runtime types (as of `wrangler 3.66.0`): ```bash npx wrangler types --experimental-include-runtime ``` This will generate a `d.ts` file and (by default) save it to `.wrangler/types/runtime.d.ts`. You will be prompted in the command's output to add that file to your `tsconfig.json`'s `compilerOptions.types` array. If you would like to commit the file to git, you can provide a custom path. Here, for instance, the `runtime.d.ts` file will be saved to the root of your project: ```bash npx wrangler types --experimental-include-runtime="./runtime.d.ts" ``` **Note: To ensure that your types are always up-to-date, make sure to run `wrangler types --experimental-include-runtime` after any changes to your config file.** See [the full list of available flags](/workers/wrangler/commands/#types) for more details. 
#### Migrating from `@cloudflare/workers-types` to `wrangler types --experimental-include-runtime` The `@cloudflare/workers-types` package provides runtime types for each distinct [compatibility date](https://github.com/cloudflare/workerd/tree/main/npm/workers-types#compatibility-dates), which is specified by the user in their `tsconfig.json`. But this package is superseded by the `wrangler types --experimental-include-runtime` command. Here are the steps to switch from `@cloudflare/workers-types` to using `wrangler types` with the experimental runtime inclusion: ##### Uninstall `@cloudflare/workers-types` <Tabs> <TabItem label="npm"> ```sh npm uninstall @cloudflare/workers-types ``` </TabItem> <TabItem label="yarn"> ```sh yarn remove @cloudflare/workers-types ``` </TabItem> <TabItem label="pnpm"> ```sh pnpm uninstall @cloudflare/workers-types ``` </TabItem> </Tabs> ##### Generate runtime types using wrangler ```bash npx wrangler types --experimental-include-runtime ``` This will generate a `.d.ts` file, saved to `.wrangler/types/runtime.d.ts` by default. ##### Update your `tsconfig.json` to include the generated types ```json { "compilerOptions": { "types": ["./.wrangler/types/runtime"] } } ``` Note that if you have specified a custom path for the runtime types file, you should use that in your `compilerOptions.types` array instead of the default path. ##### Update your scripts and CI pipelines When switching to `wrangler types --experimental-include-runtime`, you'll want to ensure that your development process always uses the most up-to-date types. The main thing to remember here is that - regardless of your specific framework or build tools - you should run the `wrangler types --experimental-include-runtime` command before any development tasks that rely on TypeScript. This ensures your editor and build tools always have access to the latest types. Most projects will have existing build and development scripts, as well as some type-checking. 
In the example below, we're adding the `wrangler types --experimental-include-runtime` command before the type-checking script in the project:

```json
{
	"scripts": {
		"dev": "existing-dev-command",
		"build": "existing-build-command",
		"generate-types": "wrangler types --experimental-include-runtime",
		"type-check": "wrangler types --experimental-include-runtime && tsc"
	}
}
```

In CI, you may have separate build and test commands. It is best practice to run `wrangler types --experimental-include-runtime` before other CI commands. For example:

<Tabs> <TabItem label="npm">

```yaml
- run: npm run generate-types
- run: npm run build
- run: npm test
```

</TabItem> <TabItem label="yarn">

```yaml
- run: yarn generate-types
- run: yarn build
- run: yarn test
```

</TabItem> <TabItem label="pnpm">

```yaml
- run: pnpm run generate-types
- run: pnpm run build
- run: pnpm test
```

</TabItem> </Tabs>

By integrating the `wrangler types --experimental-include-runtime` command into your workflow this way, you ensure that your development environment, builds, and CI processes always use the most accurate and up-to-date types for your Cloudflare Worker, regardless of your specific framework or tooling choices.

### Known issues

#### Transitive loading of `@types/node` overrides `@cloudflare/workers-types`

Your project's dependencies may load the `@types/node` package on their own. As of `@types/node@20.8.4`, that package overrides the `Request`, `Response`, and `fetch` types (possibly others) specified by `@cloudflare/workers-types`, causing type errors. The current workaround is to pin the version of `@types/node` to `20.8.3` in your `package.json` like this:

```json
{
	"overrides": {
		"@types/node": "20.8.3"
	}
}
```

For more information, refer to [this GitHub issue](https://github.com/cloudflare/workerd/issues/1298).
### Resources - [TypeScript template](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare/templates/hello-world/ts) - [@cloudflare/workers-types](https://github.com/cloudflare/workers-types) - [Runtime APIs](/workers/runtime-apis/) - [TypeScript Examples](/workers/examples/?languages=TypeScript) --- # Baselime integration URL: https://developers.cloudflare.com/workers/observability/integrations/baselime-integration/ import { Render } from "~/components" [Baselime](https://baselime.io/) is an observability solution built for modern cloud-native environments. It combines logs, metrics, and distributed traces to give you full visibility across your microservices at scale. This integration allows you to connect to a Baselime environment from your Worker to automatically send errors and logs to Baselime with no code changes needed in the Workers application. :::note Baselime integration is available to all Enterprise customers and Free, Pro, and Business customers on the [Workers Paid plan](/workers/platform/pricing/). ::: ## How it works This integration adds a [Tail Worker](/workers/observability/logs/tail-workers) to your application Worker. The Tail Worker automatically sends errors and uncaught exceptions to the Baselime environment you have configured. This integration supports the following Baselime features: * **[Logging](https://baselime.io/docs/analysing-data/overview/)**: Request info, logs, and exceptions are all available to be searched for and analyzed. * **[Error tracking](https://baselime.io/docs/analysing-data/errors/)**: Actively find and be notified of new errors and track their resolution. :::note If there are more configuration options that you would like to see, leave us feedback on the [Cloudflare Developer Discord](https://discord.cloudflare.com) (channel name: integrations). ::: ## Set up an integration with Baselime To set up an integration with Baselime, you need to have a Baselime environment to connect to. 
If this is your first time using Baselime, you will be prompted to create an account and an environment during the integration setup. To add the Baselime integration to your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Integrations** > **Baselime**. 5. Follow the setup flow. Once installed, the integration will automatically start forwarding events to Baselime. To learn more about Baselime, refer to [Baselime's official documentation](https://baselime.io/docs/). <Render file="wrangler-tail-warning" /> :::caution Note that automatic distributed tracing is not yet supported via the Baselime integration. To add tracing, follow the [Baselime documentation](https://baselime.io/docs/sending-data/platforms/cloudflare/traces/). ::: --- # Integrations URL: https://developers.cloudflare.com/workers/observability/integrations/ import { DirectoryListing } from "~/components"; Send your telemetry data to third parties. <DirectoryListing /> --- # Sentry URL: https://developers.cloudflare.com/workers/observability/integrations/sentry/ import { Render } from "~/components" [Sentry](https://sentry.io/welcome/) is an error tracking and performance monitoring platform that allows developers to diagnose, fix, and optimize the performance of their code. This integration allows you to connect to a Sentry project from your Worker to automatically send errors and uncaught exceptions to Sentry with no code changes needed in the Workers application. :::note Sentry integration is available to all Enterprise customers and Free, Pro, and Business customers on the [Workers Paid plan](/workers/platform/pricing/). ::: :::caution This integration is not supported for Pages projects. Pages does not support [Tail Workers](/workers/observability/logs/tail-workers/), and the Sentry integration relies on adding a Tail Worker to your Worker. 
:::

## How it works

This integration adds a [Tail Worker](/workers/observability/logs/tail-workers) to your application Worker. The Tail Worker automatically sends errors and uncaught exceptions to the Sentry project you have configured.

This integration supports the following Sentry features:

* **[Data Handling](https://develop.sentry.dev/sdk/data-handling/)**: As a best practice, do not include PII or other sensitive data in the payload sent to Sentry. HTTP headers (for example, `Authorization` or `Cookie`) can be removed before events are forwarded to Sentry.
* **[Sampling](https://docs.sentry.io/platforms/javascript/configuration/sampling/#configuring-the-transaction-sample-rate)**: Sampling can be configured to manage the number and type of events sent to Sentry. Sampling rates can be configured based on the HTTP status code returned by the Worker and for uncaught exceptions. Setting the sampling rate to 100% sends all events to Sentry, while setting it to 30% sends approximately 30% of events to Sentry.
* **[Breadcrumbs](https://docs.sentry.io/product/issues/issue-details/breadcrumbs/)**: Breadcrumbs create a trail of events that happened prior to an issue. Breadcrumbs are automatically forwarded to Sentry in the case of an error or exception. These events consist of the `console.log()` output from the Worker before the error or exception occurred.

:::note
If there are more configuration options that you would like to see, leave us feedback on the [Cloudflare Developer Discord](https://discord.cloudflare.com) (channel name: integrations).
:::

## Set up an integration with Sentry

To set up an integration with Sentry, you need to have an existing Sentry project to connect to. [Create a Sentry project](https://docs.sentry.io/product/sentry-basics/integrate-frontend/create-new-project), or use an existing project for this integration. Then, add the Sentry integration to your Worker:

1.
Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker.
4. Select **Integrations** > **Sentry**.
5. Follow the setup flow.

Once installed, the integration will automatically start forwarding matching events to Sentry. To learn more about Sentry, refer to [Sentry's official documentation](https://docs.sentry.io/).

<Render file="wrangler-tail-warning" />

:::caution
Each Cloudflare account can only be linked to one Sentry organization. Use the [Sentry SDK](https://github.com/getsentry/sentry-javascript) to send events to projects in more than one Sentry organization.
:::

---

# Breakpoints

URL: https://developers.cloudflare.com/workers/observability/dev-tools/breakpoints/

## Debug via breakpoints

As of Wrangler 3.9.0, you can debug via breakpoints in your Worker. Breakpoints provide the ability to review what is happening at a given point in the execution of your Worker. Breakpoint functionality exists in both DevTools and VS Code. For more information on breakpoint debugging via Chrome's DevTools, refer to [Chrome's article on breakpoints](https://developer.chrome.com/docs/devtools/javascript/breakpoints/).

### Set up VS Code to use breakpoints

To set up VS Code for breakpoint debugging in your Worker project:

1. Create a `.vscode` folder in your project's root folder if one does not exist.
2. Within that folder, create a `launch.json` file with the following content:

```json
{
	"configurations": [
		{
			"name": "Wrangler",
			"type": "node",
			"request": "attach",
			"port": 9229,
			"cwd": "/",
			"resolveSourceMapLocations": null,
			"attachExistingChildren": false,
			"autoAttachChildProcesses": false,
			"sourceMaps": true // works with or without this line
		}
	]
}
```

3. Open your project in VS Code, open a new terminal window from VS Code, and run `npx wrangler dev` to start the local dev server.
4.
At the top of the **Run & Debug** panel, you should see an option to select a configuration. Choose **Wrangler**, and select the play icon. **Wrangler: Remote Process \[0]** should show up in the Call Stack panel on the left. 5. Go back to a `.js` or `.ts` file in your project and add at least one breakpoint. 6. Open your browser and go to the Worker's local URL (default `http://127.0.0.1:8787`). The breakpoint should be hit, and you should be able to review details about your code at the specified line. :::caution Breakpoint debugging in `wrangler dev` using `--remote` could extend Worker CPU time and incur additional costs since you are testing against actual resources that count against usage limits. It is recommended to use `wrangler dev` without the `--remote` option. This ensures you are developing locally. ::: :::note The `.vscode/launch.json` file only applies to a single workspace. If you prefer, you can add the above launch configuration to your User Settings (per the [official VS Code documentation](https://code.visualstudio.com/docs/editor/debugging#_global-launch-configuration)) to have it available for all your workspaces. ::: ## Related resources - [Local Development](/workers/local-development/) - Develop your Workers and connected resources locally via Wrangler and [`workerd`](https://github.com/cloudflare/workerd), for a fast, accurate feedback loop. --- # DevTools URL: https://developers.cloudflare.com/workers/observability/dev-tools/ ## Using DevTools When running your Worker locally using `wrangler dev`, you automatically have access to [Cloudflare's implementation](https://github.com/cloudflare/workers-sdk/tree/main/packages/chrome-devtools-patches) of [Chrome's DevTools](https://developer.chrome.com/docs/devtools/overview). DevTools help you debug and optimize your Workers. :::note You may have experience using Chrome's DevTools for frontend development. Notably, Worker DevTools are used for backend code and _not_ client-side JavaScript. 
:::

## Opening DevTools

To get started, run your Worker in development mode with `wrangler dev`, then open the DevTools in the browser by pressing the `D` key from your terminal. Now when you access this Worker locally, it can be debugged and profiled with this DevTools instance. Alternatively, both the [Cloudflare Dashboard](https://dash.cloudflare.com/) and the [Workers Playground](https://workers.cloudflare.com/playground) include DevTools in their UI.

## DevTools use cases

DevTools can be used in a variety of situations. For more information, see the documentation:

- [Debugging code via breakpoints and logging](/workers/observability/dev-tools/breakpoints/)
- [Profiling CPU usage](/workers/observability/dev-tools/cpu-usage/)
- [Addressing out of memory (OOM) errors](/workers/observability/dev-tools/memory-usage/)

## Related resources

- [Local development](/workers/local-development/) - Develop your Workers and connected resources locally via Wrangler and workerd, for a fast, accurate feedback loop.

---

# Profiling CPU usage

URL: https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/

If a Worker spends too much time performing CPU-intensive tasks, responses may be slow or the Worker might fail to start up due to [time limits](/workers/platform/limits/#worker-startup-time). Profiling in DevTools can help you identify and fix code that uses too much CPU.

Measuring execution time of specific functions in production can be difficult because Workers [only increment timers on I/O](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading) for security purposes. However, measuring CPU execution times is possible in local development with DevTools.

When using DevTools to monitor CPU usage, it may be difficult to replicate specific behavior you are seeing in production. To mimic production behavior, make sure the requests you send to the local Worker are similar to requests in production.
This might mean sending a large volume of requests, making requests to specific routes, or using production-like data with the [--remote flag](/workers/local-development/#develop-using-remote-resources-and-bindings).

## Taking a profile

To generate a CPU profile:

- Run `wrangler dev` to start your Worker
- Press the `D` key from your terminal to open DevTools
- Select the "Profiler" tab
- Select `Start` to begin recording CPU usage
- Send requests to your Worker from a new tab
- Select `Stop`

You now have a CPU profile.

## An Example Profile

Let's look at an example to learn how to read a CPU profile. Imagine you have the following Worker:

```js title="index.js"
const addNumbers = (body) => {
	for (let i = 0; i < 5000; ++i) {
		body = body + " " + i;
	}
	return body;
};

const moreAdditions = (body) => {
	for (let i = 5001; i < 15000; ++i) {
		body = body + " " + i;
	}
	return body;
};

export default {
	async fetch(request, env, ctx) {
		let body = "Hello Profiler! - ";
		body = addNumbers(body);
		body = moreAdditions(body);
		return new Response(body);
	},
};
```

You want to find which part of the code causes slow response times. How do you use DevTools profiling to identify the CPU-heavy code and fix the issue?

First, as mentioned above, you open DevTools by pressing the `D` key after running `wrangler dev`. Then, you navigate to the "Profiler" tab and take a profile by pressing `Start` and sending a request.

The top chart in this image shows a timeline of the profile, and you can use it to zoom in on a specific request. The chart below shows the CPU time used for operations run during the request. In this screenshot, you can see "fetch" time at the top and the subcomponents of fetch beneath, including the two functions `addNumbers` and `moreAdditions`. By hovering over each box, you get more information, and by clicking the box, you navigate to the function's source code. Using this graph, you can answer the question of "what is taking CPU time?".
The `addNumbers` function has a very small box, representing 0.3ms of CPU time. The `moreAdditions` box is larger, representing 2.2ms of CPU time. Therefore, if you want to make response times faster, you need to optimize `moreAdditions`. You can also change the visualization from ‘Chart’ to ‘Heavy (Bottom Up)’ for an alternative view.  This shows the relative times allocated to each function. At the top of the list, `moreAdditions` is clearly the slowest portion of your Worker. You can see that garbage collection also represents a large percentage of time, so memory optimization could be useful. ## Additional Resources To learn more about how to use the CPU profiler, see [Google's documentation on Profiling the CPU in DevTools](https://developer.chrome.com/docs/devtools/performance/nodejs#profile). To learn how to use DevTools to gain insight into memory, see the [Memory Usage Documentation](/workers/observability/dev-tools/memory-usage/). --- # Profiling Memory URL: https://developers.cloudflare.com/workers/observability/dev-tools/memory-usage/ Understanding Worker memory usage can help you optimize performance, avoid Out of Memory (OOM) errors when hitting [Worker memory limits](/workers/platform/limits/#memory), and fix memory leaks. You can profile memory usage with snapshots in DevTools. Memory snapshots let you view a summary of memory usage, see how much memory is allocated to different data types, and get details on specific objects in memory. When using DevTools to profile memory, it may be difficult to replicate specific behavior you are seeing in production. To mimic production behavior, make sure the requests you send to the local Worker are similar to requests in production. This might mean sending a large volume of requests, making requests to specific routes, or using production-like data with the [--remote flag](/workers/local-development/#develop-using-remote-resources-and-bindings). 
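Generating that repeat traffic can be as simple as a loop of HTTP requests against the local dev server. A hypothetical helper (the URL, count, and injectable `fetch` parameter are all illustrative — a shell loop around `curl` works just as well):

```python
from urllib.request import urlopen

def send_requests(url, count, fetch=None):
    # fetch is injectable so the helper can be exercised without a
    # running server; by default it performs a real HTTP GET.
    fetch = fetch or (lambda u: urlopen(u).read())
    return [fetch(url) for _ in range(count)]

# Against a local dev server started with `wrangler dev`:
# bodies = send_requests("http://127.0.0.1:8787", 100)
```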
## Taking a snapshot

To generate a memory snapshot:

- Run `wrangler dev` to start your Worker
- Press the `D` key from your terminal to open DevTools
- Select the "Memory" tab
- Send requests to your Worker to start allocating memory
- Optionally add a `debugger` statement or breakpoint so you can pause execution at the proper time
- Select `Take snapshot`

You can now inspect Worker memory.

## An example snapshot

Let's look at an example to learn how to read a memory snapshot. Imagine you have the following Worker:

```js title="index.js"
let responseText = "Hello world!";

export default {
	async fetch(request, env, ctx) {
		let now = new Date().toISOString();
		responseText = responseText + ` (Requested at: ${now})`;
		return new Response(responseText.slice(0, 53));
	},
};
```

While this code worked well initially, over time you notice slower responses and Out of Memory errors. Using DevTools, you can find out if this is a memory leak.

First, as mentioned above, you open DevTools by pressing the `D` key after running `wrangler dev`. Then, you navigate to the "Memory" tab.

Next, generate a large volume of traffic to the Worker by sending requests. You can do this with `curl` or by repeatedly reloading the browser. Note that other Workers may require more specific requests to reproduce a memory leak.

Then, click the `Take snapshot` button and view the results. First, navigate to "Statistics" in the dropdown to get a general sense of what takes up memory.

Looking at these statistics, you can see that a lot of memory is dedicated to strings at 67 kB. This is likely the source of the memory leak. If you make more requests and take another snapshot, you would see this number grow.

The memory summary lists data types by the amount of memory they take up. When you click into "(string)", you can see a string that is far larger than the rest.
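The growth pattern can be reproduced outside of a Worker. In this standalone sketch (illustrative, not from the docs), the module-level string grows on every simulated request, exactly as the snapshot suggests:

```javascript
// Minimal reproduction of the leak: module-level state accumulates across
// simulated requests, even though each individual response stays small.
let responseText = "Hello world!";

function handle(now) {
	responseText = responseText + ` (Requested at: ${now})`;
	return responseText.slice(0, 53); // the response is capped; the global is not
}

const before = responseText.length;
for (let i = 0; i < 1000; i++) {
	handle(new Date(0).toISOString());
}
console.log(responseText.length > before); // → true: the string only ever grows
```

After 1,000 simulated requests the global string has grown by tens of kilobytes, which is the same unbounded growth the repeated snapshots reveal.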
The text shows that you are appending "Requested at" and a date repeatedly, inadvertently overwriting the global variable with an increasingly large string:

```js
responseText = responseText + ` (Requested at: ${now})`;
```

Using Memory Snapshotting in DevTools, you've identified the object and line of code causing the memory leak. You can now fix it with a small code change.

## Additional resources

To learn more about how to use Memory Snapshotting, see [Google's documentation on Memory Heap Snapshots](https://developer.chrome.com/docs/devtools/memory-problems/heap-snapshots).

To learn how to use DevTools to gain insight into CPU usage, see the [CPU Profiling Documentation](/workers/observability/dev-tools/cpu-usage/).

---

# Workers Logpush

URL: https://developers.cloudflare.com/workers/observability/logs/logpush/

import { WranglerConfig } from "~/components";

[Cloudflare Logpush](/logs/about/) supports the ability to send [Workers Trace Event Logs](/logs/reference/log-fields/account/workers_trace_events/) to a [supported destination](/logs/get-started/enable-destinations/). Workers Trace Events Logpush includes metadata about requests and responses, unstructured `console.log()` messages, and any uncaught exceptions. This product is available on the Workers Paid plan. For pricing information, refer to [Pricing](/workers/platform/pricing/#workers-trace-events-logpush).

:::caution
Workers Trace Events Logpush is not available for zones on the [Cloudflare China Network](/china-network/).
:::

## Verify your Logpush access

:::note[Wrangler version]
Minimum required Wrangler version: 2.2.0. Check your version by running `wrangler version`. To update Wrangler, refer to [Install/Update Wrangler](/workers/wrangler/install-and-update/).
:::

To configure a Logpush job, verify that your Cloudflare account role can use Logpush. To check your role:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com).
2.
Select your account and scroll down to **Manage Account** > **Members**.
3. Check your account permissions. Roles with Logpush configuration access are different from Workers permissions. Super Administrators, Administrators, and the Log Share roles have full access to Logpush.

Alternatively, create a new [API token](/fundamentals/api/get-started/create-token/) scoped at the Account level with Logs Edit permissions.

## Create a Logpush job

### Via the Cloudflare dashboard

To create a Logpush job in the Cloudflare dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com), and select your account.
2. Select **Analytics & Logs** > **Logs**.
3. Select **Add Logpush job**.
4. Select **Workers trace events** as the data set > **Next**.
5. If needed, customize your data fields. Otherwise, select **Next**.
6. Follow the instructions on the dashboard to verify ownership of your data's destination and complete job creation.

### Via cURL

The following example sends Workers logs to R2. For more configuration options, refer to [Enable destinations](/logs/get-started/enable-destinations/) and [API configuration](/logs/get-started/api-configuration/) in the Logs documentation.

```bash
curl "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logpush/jobs" \
  --header 'X-Auth-Key: <API_KEY>' \
  --header 'X-Auth-Email: <EMAIL>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "workers-logpush",
    "output_options": {
      "field_names": ["Event", "EventTimestampMs", "Outcome", "Exceptions", "Logs", "ScriptName"]
    },
    "destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
    "dataset": "workers_trace_events",
    "enabled": true
  }' | jq .
```

In Logpush, you can configure [filters](/logs/reference/filters/) and a [sampling rate](/logs/get-started/api-configuration/#sampling-rate) to have more control of the volume of data that is sent to your configured destination.
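Filter expressions are passed to the API as an escaped JSON string. As a hedged sketch, you can build that string with `JSON.stringify` rather than escaping quotes by hand — the field names here mirror the curl example above, and the exact filter schema should be confirmed against the Logpush docs:

```javascript
// Sketch: construct the filter as a plain object, then serialize it once.
// The API expects the "filter" property itself to be a JSON string, so it
// is stringified separately from the surrounding job body.
const filter = {
	where: { key: "Outcome", operator: "!eq", value: "exception" },
};

const jobBody = {
	name: "workers-logpush", // matches the curl example; illustrative
	filter: JSON.stringify(filter),
};

console.log(jobBody.filter);
// → {"where":{"key":"Outcome","operator":"!eq","value":"exception"}}
```

Serializing the whole `jobBody` with `JSON.stringify(jobBody)` then produces the double-escaped form the API expects.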
For example, if you only want to receive logs for requests that did not result in an exception, add the following `filter` JSON property below `output_options`:

`"filter":"{\"where\": {\"key\":\"Outcome\",\"operator\":\"!eq\",\"value\":\"exception\"}}"`

## Enable logging on your Worker

Enable logging on your Worker by adding a new property, `logpush = true`, to your Wrangler file. This can be added either in the top-level configuration or under an [environment](/workers/wrangler/environments/). Any new Workers with this property will automatically get picked up by the Logpush job.

<WranglerConfig>

```toml
# Top-level configuration
name = "my-worker"
main = "src/index.js"
compatibility_date = "2022-07-12"
workers_dev = false
logpush = true

route = { pattern = "example.org/*", zone_name = "example.org" }
```

</WranglerConfig>

Configure via multipart script upload API:

```bash
curl --request PUT \
  "https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{script_name}" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --form 'metadata={"main_module": "my-worker.js", "logpush": true}' \
  --form '"my-worker.js"=@./my-worker.js;type=application/javascript+module'
```

## Limits

The `logs` and `exceptions` fields have a combined limit of 16,384 characters before fields will start being truncated. Characters are counted in the order of all `exception.name`s, `exception.message`s, and then `log.message`s. Once that character limit is reached, all fields will be truncated with `"<<<Logpush: *field* truncated>>>"` for one message before dropping logs or exceptions.

### Example

To illustrate this, suppose our Logpush event looks like the JSON below and the limit is 50 characters (rather than the actual limit of 16,384). The algorithm will:

1. Count the characters in `exception.name`s:
   1. `"SampleError"` and `"AuthError"` count as 20 characters.
2. Count the characters in `exception.message`s:
   1.
`"something went wrong"` is counted as 20 characters, leaving 10 characters remaining.
   2. The first 10 characters of `"unable to process request authentication from client"` will be taken and counted before being truncated to `"unable to <<<Logpush: exception messages truncated>>>"`.
3. Count the characters in `log.message`s:
   1. We've already begun truncation, so `"Hello "` will be replaced with `"<<<Logpush: messages truncated>>>"` and `"World!"` will be dropped.

#### Sample input

```json
{
	"Exceptions": [
		{ "Name": "SampleError", "Message": "something went wrong", "TimestampMs": 0 },
		{ "Name": "AuthError", "Message": "unable to process request authentication from client", "TimestampMs": 1 }
	],
	"Logs": [
		{ "Level": "log", "Message": ["Hello "], "TimestampMs": 0 },
		{ "Level": "log", "Message": ["World!"], "TimestampMs": 0 }
	]
}
```

#### Sample output

```json
{
	"Exceptions": [
		{ "Name": "SampleError", "Message": "something went wrong", "TimestampMs": 0 },
		{ "Name": "AuthError", "Message": "unable to <<<Logpush: exception messages truncated>>>", "TimestampMs": 1 }
	],
	"Logs": [
		{ "Level": "log", "Message": ["<<<Logpush: messages truncated>>>"], "TimestampMs": 0 }
	]
}
```

---

# Logs

URL: https://developers.cloudflare.com/workers/observability/logs/

import { Badge, Stream } from "~/components";

Logs are an important part of a developer's toolkit for troubleshooting and diagnosing application issues and maintaining system health. The Cloudflare Developer Platform offers many tools to help developers manage their application's logs.

## [Workers Logs](/workers/observability/logs/workers-logs) <Badge text="New" variant="tip" size="large" />

Automatically ingest, filter, and analyze logs emitted from Cloudflare Workers in the Cloudflare dashboard.

## [Real-time logs](/workers/observability/logs/real-time-logs)

Access log events in near real-time. Real-time logs provide immediate feedback and visibility into the health of your Cloudflare Worker.
## [Tail Workers](/workers/observability/logs/tail-workers) <Badge text="Beta" size="large"/>

Tail Workers allow developers to apply custom filtering, sampling, and transformation logic to telemetry data.

## [Workers Logpush](/workers/observability/logs/logpush)

Send Workers Trace Event Logs to a supported destination. Workers Logpush includes metadata about requests and responses, unstructured `console.log()` messages, and any uncaught exceptions.

## Video tutorial

<Stream id="8a29e0b3ee30140431df06c0ec935c60" title="Workers Observability" thumbnail="2.5s" />

---

# Real-time logs

URL: https://developers.cloudflare.com/workers/observability/logs/real-time-logs/

import { TabItem, Tabs, Steps } from "~/components";

With Real-time logs, you can access all of your log events in near real-time, as they happen globally. Real-time logs is helpful for immediate feedback, such as the status of a new deployment.

Real-time logs captures [execution logs](/workers/observability/logs/workers-logs/#execution-logs), [custom logs](/workers/observability/logs/workers-logs/#custom-logs), errors, and uncaught exceptions. For high-traffic applications, real-time logs may enter sampling mode, which means some messages will be dropped and a warning will appear in your logs.

:::caution
Real-time logs are not available for zones on the [Cloudflare China Network](/china-network/).
:::

## View logs from the dashboard

To view real-time logs associated with any deployed Worker using the Cloudflare dashboard:

<Steps>
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, go to **Workers & Pages**.
3. In **Overview**, select your **Worker**.
4. Select **Logs**.
5. In the right-hand navigation bar, select **Live**.
</Steps>

## View logs using `wrangler tail`

To view real-time logs associated with any deployed Worker using Wrangler:

1. Go to your Worker project directory.
2. Run [`npx wrangler tail`](/workers/wrangler/commands/#tail).
This will log any incoming requests to your application directly in your local terminal.

The output of each `wrangler tail` log is a structured JSON object:

```json
{
	"outcome": "ok",
	"scriptName": null,
	"exceptions": [],
	"logs": [],
	"eventTimestamp": 1590680082349,
	"event": {
		"request": {
			"url": "https://www.bytesized.xyz/",
			"method": "GET",
			"headers": {},
			"cf": {}
		}
	}
}
```

By piping the output to tools like [`jq`](https://stedolan.github.io/jq/), you can query and manipulate the requests to look for specific information:

```sh
npx wrangler tail | jq .event.request.url
```

```sh output
"https://www.bytesized.xyz/"
"https://www.bytesized.xyz/component---src-pages-index-js-a77e385e3bde5b78dbf6.js"
"https://www.bytesized.xyz/page-data/app-data.json"
```

You can customize how `wrangler tail` works to fit your needs. Refer to [the `wrangler tail` documentation](/workers/wrangler/commands/#tail) for available configuration options.

## Limits

:::note
You can filter real-time logs in the dashboard or using [`wrangler tail`](/workers/wrangler/commands/#tail). If your Worker has a high volume of messages, filtering real-time logs can help prevent messages from being dropped.
:::

Note that:

- Real-time logs does not store Workers Logs. To store logs, use [Workers Logs](/workers/observability/logs/workers-logs).
- If your Worker has a high volume of traffic, the real-time logs might enter sampling mode. This will cause some of your messages to be dropped and a warning to appear in your logs.
- Logs from any [Durable Objects](/durable-objects/) your Worker is using will show up in the dashboard.
- A maximum of 10 clients can view a Worker's logs at one time. This can be a combination of dashboard sessions and `wrangler tail` calls.

## Persist logs

Logs can be persisted, filtered, and analyzed with [Workers Logs](/workers/observability/logs/workers-logs).
To send logs to a third party, use [Workers Logpush](/workers/observability/logs/logpush/) or [Tail Workers](/workers/observability/logs/tail-workers/).

## Related resources

- [Errors and exceptions](/workers/observability/errors/) - Review common Workers errors.
- [Local development and testing](/workers/local-development/) - Develop and test your Workers locally.
- [Workers Logs](/workers/observability/logs/workers-logs) - Collect, store, filter and analyze logging data emitted from Cloudflare Workers.
- [Logpush](/workers/observability/logs/logpush/) - Learn how to push Workers Trace Event Logs to supported destinations.
- [Tail Workers](/workers/observability/logs/tail-workers/) - Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
- [Source maps and stack traces](/workers/observability/source-maps) - Learn how to enable source maps and generate stack traces for Workers.

---

# Tail Workers

URL: https://developers.cloudflare.com/workers/observability/logs/tail-workers/

import { WranglerConfig } from "~/components";

A Tail Worker receives information about the execution of other Workers (known as producer Workers), such as HTTP statuses, data passed to `console.log()`, or uncaught exceptions. Tail Workers can process logs for alerts, debugging, or analytics.

Tail Workers are available to all customers on the Workers Paid and Enterprise tiers. Tail Workers are billed by [CPU time](/workers/platform/pricing/#workers), not by the number of requests.

A Tail Worker is automatically invoked after the invocation of a producer Worker (the Worker the Tail Worker will track) that contains the application logic. It captures events after the producer has finished executing. Events throughout the request lifecycle, including potential sub-requests via [Service Bindings](/workers/runtime-apis/bindings/service-bindings/) and [Dynamic Dispatch](/cloudflare-for-platforms/workers-for-platforms/get-started/configuration/), will be included.
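For example, a `tail()` handler can drop successful invocations and forward only failures. This is a hedged sketch — the destination URL is a placeholder, and the `outcome` values follow the event shape documented on this page:

```javascript
// Hypothetical filtering Tail Worker: forward only events whose outcome
// was not "ok" (for example "exception"), so the downstream endpoint
// receives failures alone.
const onlyFailures = (events) => events.filter((e) => e.outcome !== "ok");

const tailWorker = {
	// In a real Worker, this object would be the `export default`.
	async tail(events, env, ctx) {
		const failures = onlyFailures(events);
		if (failures.length === 0) return; // nothing worth forwarding
		ctx.waitUntil(
			fetch("https://example.com/endpoint", {
				// Placeholder destination URL.
				method: "POST",
				body: JSON.stringify(failures),
			}),
		);
	},
};
```

Wrapping the `fetch` in `ctx.waitUntil()` keeps the forwarding request alive after the handler returns.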
You can filter, change the format of the data, and send events to any HTTP endpoint. For quick debugging, Tail Workers can be used to send logs to [KV](/kv/api/) or any database.

## Configure Tail Workers

To configure a Tail Worker:

1. [Create a Worker](/workers/get-started/guide) to serve as the Tail Worker.

2. Add a [`tail()`](/workers/runtime-apis/handlers/tail/) handler to your Worker. The `tail()` handler is invoked every time the producer Worker to which a Tail Worker is connected is invoked.

   The following Worker code is a Tail Worker that sends its data to an HTTP endpoint:

   ```js
   export default {
   	async tail(events) {
   		fetch("https://example.com/endpoint", {
   			method: "POST",
   			body: JSON.stringify(events),
   		});
   	},
   };
   ```

   The following is an example of what the `events` object may look like:

   ```json
   [
   	{
   		"scriptName": "Example script",
   		"outcome": "exception",
   		"eventTimestamp": 1587058642005,
   		"event": {
   			"request": {
   				"url": "https://example.com/some/requested/url",
   				"method": "GET",
   				"headers": {
   					"cf-ray": "57d55f210d7b95f3",
   					"x-custom-header-name": "my-header-value"
   				},
   				"cf": { "colo": "SJC" }
   			}
   		},
   		"logs": [
   			{
   				"message": ["string passed to console.log()"],
   				"level": "log",
   				"timestamp": 1587058642005
   			}
   		],
   		"exceptions": [
   			{
   				"name": "Error",
   				"message": "Threw a sample exception",
   				"timestamp": 1587058642005
   			}
   		],
   		"diagnosticsChannelEvents": [
   			{
   				"channel": "foo",
   				"message": "The diagnostic channel message",
   				"timestamp": 1587058642005
   			}
   		]
   	}
   ]
   ```

3. Add the following to the Wrangler file of the producer Worker:

   <WranglerConfig>

   ```toml
   tail_consumers = [{service = "<TAIL_WORKER_NAME>"}]
   ```

   </WranglerConfig>

:::note
The Tail Worker named in `tail_consumers` must have a `tail()` handler defined.
:::

## Related resources

- [`tail()` Handler API docs](/workers/runtime-apis/handlers/tail/) - Learn how to set up a `tail()` handler in your Worker.
- [Errors and exceptions](/workers/observability/errors/) - Review common Workers errors.
- [Local development and testing](/workers/local-development/) - Develop and test your Workers locally.
- [Source maps and stack traces](/workers/observability/source-maps) - Learn how to enable source maps and generate stack traces for Workers.

---

# Workers Logs

URL: https://developers.cloudflare.com/workers/observability/logs/workers-logs/

import { TabItem, Tabs, Steps, Render, WranglerConfig } from "~/components"

Workers Logs lets you automatically collect, store, filter, and analyze logging data emitted from Cloudflare Workers. Data is written to your Cloudflare account, and you can query it in the dashboard for each of your Workers. All newly created Workers will come with the observability setting enabled by default.

Logs include [invocation logs](/workers/observability/logs/workers-logs/#invocation-logs), [custom logs](/workers/observability/logs/workers-logs/#custom-logs), errors, and uncaught exceptions.

To send logs to a third party, use [Workers Logpush](/workers/observability/logs/logpush/) or [Tail Workers](/workers/observability/logs/tail-workers/).

## Enable Workers Logs

:::note[Wrangler version]
Minimum required Wrangler version: 3.78.6. Check your version by running `wrangler version`. To update Wrangler, refer to [Install/Update Wrangler](/workers/wrangler/install-and-update/).
:::

You must add the observability setting for your Worker to write logs to Workers Logs. Add the following setting to your Worker's Wrangler file and redeploy your Worker.

<WranglerConfig>

```toml
[observability]
enabled = true
head_sampling_rate = 1 # optional. default = 1.
```

</WranglerConfig>

[Head-based sampling](/workers/observability/logs/workers-logs/#head-based-sampling) allows you to set the percentage of Workers requests that are logged.

### Enabling with environments

[Environments](/workers/wrangler/environments/) allow you to deploy the same Worker application with different configurations.
For example, you may want to configure a different `head_sampling_rate` for staging and production. To configure observability for an environment named `staging`:

1. Add the following configuration below `[env.staging]`:

   <WranglerConfig>

   ```toml
   [env.staging.observability]
   enabled = true
   head_sampling_rate = 1 # optional
   ```

   </WranglerConfig>

2. Deploy your Worker with `npx wrangler deploy -e staging`.

3. Repeat steps 1 and 2 for each environment.

## View logs from the dashboard

Access logs for your Worker from the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/observability/logs/).

<Steps>
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/observability/logs/) and select your account.
2. In Account Home, go to **Workers & Pages**.
3. In **Overview**, select your **Worker**.
4. Select **Logs**.
</Steps>

## Best practices

### Logging structured JSON objects

To get the most out of Workers Logs, it is recommended that you log in JSON format. Workers Logs automatically extracts the fields and indexes them intelligently in the database. The benefit of this structured logging technique is in how it allows you to easily segment data across any dimension for fields with unlimited cardinality. Consider the following scenarios:

| Scenario | Logging Code                                               | Event Log (Partial)                           |
| -------- | ---------------------------------------------------------- | --------------------------------------------- |
| 1        | `console.log("user_id: " + 123)`                           | `{message: "user_id: 123"}`                   |
| 2        | `console.log({user_id: 123})`                              | `{user_id: 123}`                              |
| 3        | `console.log({user_id: 123, user_email: "a@example.com"})` | `{user_id: 123, user_email: "a@example.com"}` |

The difference between these examples is in how you index your logs to enable faster queries. In scenario 1, the `user_id` is embedded within a message.
To find all logs relating to a particular `user_id`, you would have to run a text match. In scenarios 2 and 3, your logs can be filtered against the keys `user_id` and `user_email`.

## Features

### Invocation logs

Each Worker invocation returns a single invocation log that contains details such as the request, response, and related metadata. These invocation logs can be identified by the field `$cloudflare.$metadata.type = "cf-worker-event"`. Each invocation log is enriched with information available to Cloudflare in the context of the invocation.

In the Workers Logs UI, logs are presented with a localized timestamp and a message. The message is dependent on the invocation handler. For example, Fetch requests will have a message describing the request method and the request URL, while cron events will be listed as cron. Below is a list of invocation handlers along with their invocation message.

Invocation logs can be disabled in Wrangler by adding the `invocation_logs = false` configuration.

<WranglerConfig>

```toml
[observability.logs]
invocation_logs = false
```

</WranglerConfig>

| Invocation Handler                                       | Invocation Message       |
| -------------------------------------------------------- | ------------------------ |
| [Alarm](/durable-objects/api/alarms/)                    | \<Scheduled Time\>       |
| [Email](/email-routing/email-workers/runtime-api/)       | \<Email Recipient\>      |
| [Fetch](/workers/runtime-apis/handlers/fetch/)           | \<Method\> \<URL\>       |
| [Queue](/queues/configuration/javascript-apis/#consumer) | \<Queue Name\>           |
| [Cron](/workers/runtime-apis/handlers/scheduled/)        | \<UNIX-cron schedule\>   |
| [Tail](/workers/runtime-apis/handlers/tail/)             | tail                     |
| [RPC](/workers/runtime-apis/rpc/)                        | \<RPC method\>           |
| [WebSocket](/workers/examples/websockets/)               | \<WebSocket Event Type\> |

### Custom logs

By default, a Worker will emit [invocation logs](/workers/observability/logs/workers-logs/#invocation-logs) containing details about the request, response, and related metadata.
You can also add custom logs throughout your code. Any `console.log` statements within your Worker will be visible in Workers Logs. The following example demonstrates a custom `console.log` within a Worker request handler.

<Tabs> <TabItem label="Module Worker" icon="seti:javascript">

```js
export default {
	async fetch(request) {
		const { cf } = request;
		const { city, country } = cf;

		console.log(`Request came from city: ${city} in country: ${country}`);

		return new Response("Hello worker!", {
			headers: { "content-type": "text/plain" },
		});
	},
};
```

</TabItem> <TabItem label="Service Worker" icon="seti:javascript">

```js
addEventListener("fetch", (event) => {
	event.respondWith(handleRequest(event.request));
});

/**
 * Respond with hello worker text
 * @param {Request} request
 */
async function handleRequest(request) {
	const { cf } = request;
	const { city, country } = cf;

	console.log(`Request came from city: ${city} in country: ${country}`);

	return new Response("Hello worker!", {
		headers: { "content-type": "text/plain" },
	});
}
```

</TabItem> </Tabs>

After you deploy the code above, view your Worker's logs in [the dashboard](/workers/observability/logs/workers-logs/#view-logs-from-the-dashboard) or with [real-time logs](/workers/observability/logs/real-time-logs/).

### Head-based sampling

Head-based sampling allows you to log a percentage of incoming requests to your Cloudflare Worker. Especially for high-traffic applications, this helps reduce log volume and manage costs, while still providing meaningful insights into your application's performance. When you configure a head-based sampling rate, you can control the percentage of requests that get logged. All logs within the context of the request are collected.

To enable head-based sampling, set `head_sampling_rate` within the observability configuration. The valid range is from 0 to 1, where 0 indicates zero out of one hundred requests are logged, and 1 indicates every request is logged.
If `head_sampling_rate` is unspecified, it is configured to a default value of 1 (100%). In the example below, `head_sampling_rate` is set to 0.01, which means one out of every one hundred requests is logged.

<WranglerConfig>

```toml
[observability]
enabled = true
head_sampling_rate = 0.01 # 1% sampling rate
```

</WranglerConfig>

## Limits

| Description                                  | Limit     |
| -------------------------------------------- | --------- |
| Maximum log retention period                 | 7 days    |
| Maximum logs per account per day<sup>1</sup> | 5 billion |
| Maximum log size<sup>2</sup>                 | 128 KB    |

<sup>1</sup> While Workers Logs is in open beta, there is a daily limit of 5 billion logs per account per day. After the limit is exceeded, a 1% head-based sample will be applied for the remainder of the day.

<sup>2</sup> A single log has a maximum size limit of [128 KB](/workers/platform/limits/#log-size). Logs exceeding that size will be truncated and the log's `$cloudflare.truncated` field will be set to `true`.

## Pricing

:::note[Billing start date]
Workers Logs billing will begin on November 1, 2024.
:::

<Render file="workers_logs_pricing" />

### Examples

#### Example 1

A Worker serves 15 million requests per month. Each request emits 1 invocation log and 1 `console.log`. `head_sampling_rate` is configured to 1.

|           | Monthly Costs | Formula                                                                                                                |
| --------- | ------------- | ---------------------------------------------------------------------------------------------------------------------- |
| **Logs**  | $6.00         | ((15,000,000 requests per month \* 2 logs per request \* 100% sample) - 20,000,000 included logs) / 1,000,000 \* $0.60  |
| **Total** | $6.00         |                                                                                                                         |

#### Example 2

A Worker serves 1 billion requests per month. Each request emits 1 invocation log and 1 `console.log`. `head_sampling_rate` is configured to 0.1.
|           | Monthly Costs | Formula                                                                                                                  |
| --------- | ------------- | ------------------------------------------------------------------------------------------------------------------------ |
| **Logs**  | $108.00       | ((1,000,000,000 requests per month \* 2 logs per request \* 10% sample) - 20,000,000 included logs) / 1,000,000 \* $0.60 |
| **Total** | $108.00       |                                                                                                                           |

---

# Workers (Historic)

URL: https://developers.cloudflare.com/workers/platform/changelog/historical-changelog/

This page tracks changes made to Cloudflare Workers before 2023. For a view of more recent updates, refer to the [current changelog](/workers/platform/changelog/).

## 2022-12-16

* Conditional `PUT` requests have been fixed in the R2 bindings API.

## 2022-12-02

* Queues no longer support calling `send()` with an undefined JavaScript value as the message.

## 2022-11-30

* The `DOMException` constructor has been updated to align better with the standard specification. Specifically, the message and name arguments can now be any JavaScript value that is coercible into a string (previously, passing non-string values would throw).
* Extended the R2 binding API to include support for multipart uploads.

## 2022-11-17

* V8 update: 10.6 → 10.8

## 2022-11-02

* Implemented `toJSON()` for `R2Checksums` so that it is usable with `JSON.stringify()`.

## 2022-10-21

* The alarm retry limit will no longer apply to errors that are our fault.
* Compatibility dates have been added for multiple flags, including the new streams implementation.
* `DurableObjectStorage` has a new method `sync()` that provides a way for a Worker to wait for its writes (including those performed with `allowUnconfirmed`) to be synchronized with storage.

## 2022-10-10

* Fixed a bug where if an ES-modules-syntax script exported an array-typed value from the top-level module, the upload API would refuse it with a [`500` error](https://community.cloudflare.com/t/community-tip-fixing-error-500-internal-server-error/44453).
* `console.log` now prints more information about certain objects, for example Promises.
* The Workers Runtime is now built from the open-source code in [GitHub - cloudflare/workerd: The JavaScript / Wasm runtime that powers Cloudflare Workers](https://github.com/cloudflare/workerd).

## 2022-09-16

* R2 `put` bindings options can now have an `onlyIf` field similar to `get` that does a conditional upload.
* Allow deleting multiple keys at once in R2 bindings.
* Added support for SHA-1, SHA-256, SHA-384, SHA-512 checksums in R2 `put` options.
* User-specified object checksums will now be available in the R2 `get`/`head` bindings response. MD5 is included by default for non-multipart uploaded objects.
* Updated V8 to 10.6.

## 2022-08-12

* A `Headers` object with the `range` header can now be used for `range` within `R2GetOptions` for the `get` R2 binding.
* When headers are used for `onlyIf` within `R2GetOptions` for the `get` R2 binding, they now correctly compare against the second granularity. This allows correctly round-tripping to the browser and back. Additionally, `secondsGranularity` is now an option that can be passed into options constructed by hand to specify this when constructing outside Headers for the same effect.
* Fixed the TypeScript type of `DurableObjectState.id` in [@cloudflare/workers-types](https://github.com/cloudflare/workers-types) to always be a `DurableObjectId`.
* Validation errors during Worker upload for module scripts now include correct line and column numbers.
* Bugfix: profiling tools and flame graphs via Chrome's debug tools now properly report information.

## 2022-07-08

* Workers Usage Report and Workers Weekly Summary have been disabled due to scaling issues with the service.

## 2022-06-24

* `wrangler dev` in global network preview mode now supports scheduling alarms.
* R2 GET requests made with the `range` option now contain the returned range in the `GetObject`'s `range` parameter.
* Some Web Cryptography API error messages include more information now.
* Updated V8 from 10.2 to 10.3.

## 2022-06-18

* Cron trigger events on Worker scripts using the old `addEventListener` syntax are now treated as failing if there is no event listener registered for `scheduled` events.
* The `durable_object_alarms` flag no longer needs to be explicitly provided to use DO alarms.

## 2022-06-09

* No externally-visible changes.

## 2022-06-03

* It is now possible to create standard `TransformStream` instances that can perform transformations on the data. Because this changes the behavior of the default `new TransformStream()` with no arguments, the `transformstream_enable_standard_constructor` compatibility flag is required to enable it.
* Preview in Quick Edit now uses the correct R2 bindings.
* Updated V8 from 10.1 to 10.2.

## 2022-05-26

* The static `Response.json()` method can be used to initialize a `Response` object with a JSON-serialized payload (refer to [whatwg/fetch #1392](https://github.com/whatwg/fetch/pull/1392)).
* R2 exceptions being thrown now have the `error` code appended to the message in parentheses. This is a stop-gap until we are able to explicitly add the `code` property on the thrown `Error` object.

## 2022-05-19

* R2 bindings: `contentEncoding`, `contentLanguage`, and `cacheControl` are now correctly rendered.
* ReadableStream `pipeTo` and `pipeThrough` now support cancellation using `AbortSignal`.
* Calling `setAlarm()` in a DO with no `alarm()` handler implemented will now throw instead of failing silently. Calling `getAlarm()` when no `alarm()` handler is currently implemented will return null, even if an alarm was previously set on an old version of the DO class, as no execution will take place.
* R2: Better runtime support for additional ranges.
* R2 bindings now support ranges that have an `offset` and an optional `length`, a `length` and an optional `offset`, or a `suffix` (returns the last `N` bytes of a file).
## 2022-05-12 * Fix R2 bindings saving cache-control under content-language and rendering cache-control under content-language. * Fix R2 bindings list without options to use the default list limit instead of never returning any results. * Fix R2 bindings which did not correctly handle error messages from R2, resulting in `internal error` being thrown. Also fix behavior for get throwing an exception on a non-existent key instead of returning null. `R2Error` is removed for the time being and will be reinstated at some future time TBD. * R2 bindings: if the onlyIf condition results in a precondition failure or a not modified result, the object is returned without a body instead of returning null. * R2 bindings: sha1 is removed as an option because it was not actually hooked up to anything. TBD on additional checksum options beyond md5. * Added `startAfter` option to the `list()` method in the Durable Object storage API. ## 2022-05-05 * `Response.redirect(url)` will no longer coalesce multiple consecutive slash characters appearing in the URL’s path. * Fix generated types for Date. * Fix R2 bindings list without options to use the default list limit instead of never returning any results. * Fix R2 bindings that did not correctly handle error messages from R2, resulting in internal error being thrown. Also fix behavior for get throwing an exception on a non-existent key instead of returning null. `R2Error` is removed for the time being and will be reinstated at some future time TBD. ## 2022-04-29 * Minor V8 update: 10.0 → 10.1. * R2 public beta bindings are the default regardless of compat date or flags. Internal beta bindings customers should transition to public beta bindings as soon as possible. A backwards compatibility flag is available if this is not immediately possible. After some lag, new scripts carrying the `r2_public_beta_bindings` compatibility flag will no longer be accepted for publishing until that flag is removed. ## 2022-04-22 * Major V8 update: 9.9 → 10.0.
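The `Response.redirect()` slash behavior from the 2022-05-05 entry above matches standard WHATWG URL serialization, which preserves empty path segments; a small sketch:

```js
// Sketch: consecutive slashes in the path are no longer coalesced,
// so the Location header preserves them verbatim.
const res = Response.redirect("https://example.com//a///b", 302);
console.log(res.status);                  // 302
console.log(res.headers.get("location")); // "https://example.com//a///b"
```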
## 2022-04-14 * Performance and stability improvements. ## 2022-04-08 * The AES-GCM implementation that is part of the Web Cryptography API now returns a friendlier error explaining that 0-length IVs are not allowed. * R2 error responses now include better details. ## 2022-03-24 * A new compatibility flag has been introduced, `minimal_subrequests`, which removes some features that were unintentionally being applied to same-zone `fetch()` calls. The flag will default to enabled on Tuesday, 2022-04-05, and is described in [Workers `minimal_subrequests` compatibility flag](/workers/configuration/compatibility-flags/#minimal-subrequests). * When creating a `Response` with JavaScript-backed ReadableStreams, the `Body` mixin functions (e.g. `await response.text()`) are now implemented. * The `IdentityTransformStream` creates a byte-oriented `TransformStream` implementation that simply passes bytes through unmodified. The readable half of the `TransformStream` supports BYOB-reads. It is important to note that `IdentityTransformStream` is identical to the current non-spec compliant `TransformStream` implementation, which will be updated soon to conform to the WHATWG Stream Standard. All current uses of `new TransformStream()` should be replaced with `new IdentityTransformStream()` to avoid potentially breaking changes later. ## 2022-03-17 * The standard [ByteLengthQueuingStrategy](https://developer.mozilla.org/en-US/docs/Web/API/ByteLengthQueuingStrategy) and [CountQueuingStrategy](https://developer.mozilla.org/en-US/docs/Web/API/CountQueuingStrategy) classes are now available. * When the `capture_async_api_throws` flag is set, built-in Cloudflare-specific and Web Platform Standard APIs that return Promises will no longer throw errors synchronously and will instead return rejected promises. An exception is made for fatal errors, such as out-of-memory errors. * Fix R2 publish date rendering. * Fix R2 bucket binding .get populating contentRange with garbage.
contentRange is now undefined as intended. * When using JavaScript-backed `ReadableStream`, it is now possible to use those streams with `new Response()`. ## 2022-03-11 * Fixed a bug where the key size was not counted when determining how many write units to charge for a Durable Object single-key `put()`. This may result in future writes costing one write unit more than past writes when the key is large enough to bump the total write size up above the next billing unit threshold of 4096 bytes. Multi-key `put()` operations have always properly counted the key size when determining billable write units. * Implementations of `CompressionStream` and `DecompressionStream` are now available. ## 2022-03-04 * Initial pipeTo/pipeThrough support on ReadableStreams constructed using the new `ReadableStream()` constructor is now available. * With the `global_navigator` compatibility flag set, the `navigator.userAgent` property can be used to detect when code is running within the Workers environment. * A bug in the new URL implementation was fixed when setting the value of a `URLSearchParam`. * The global `addEventListener` and dispatchEvent APIs are now available when using module syntax. * An implementation of `URLPattern` is now available. ## 2022-02-25 * The `TextDecoder` class now supports the full range of text encodings defined by the WHATWG Encoding Standard. * Both global `fetch()` and durable object `fetch()` now throw a TypeError when they receive a WebSocket in response to a request without the “Upgrade: websocket” header. * Durable Objects users may now store up to 50 GB of data across the objects in their account by default. As before, if you need more storage than that you can contact us for an increase. ## 2022-02-18 * `TextDecoder` now supports Windows-1252 labels (aka ASCII): [Encoding API Encodings - Web APIs | MDN](https://developer.mozilla.org/en-US/docs/Web/API/Encoding_API/Encodings).
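The Windows-1252 support above can be exercised with any of the labels the Encoding Standard defines; a minimal sketch (the byte values are the windows-1252 curly-quote code points):

```js
// Sketch: decoding windows-1252 bytes with TextDecoder.
// 0x93 and 0x94 map to the left/right curly double quotes in windows-1252.
const bytes = new Uint8Array([0x93, 0x48, 0x69, 0x94]);
const text = new TextDecoder("windows-1252").decode(bytes);
console.log(text); // “Hi”
```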
## 2022-02-11 * WebSocket message sends were erroneously not respecting Durable Object output gates as described in the [I/O gate blog post](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/). That bug has now been fixed, meaning that WebSockets will now never send a message under the assumption that a storage write has succeeded unless that write actually has succeeded. ## 2022-02-05 * Fixed bug causing WebSockets to Durable Objects to occasionally hang when the script implementing both a Worker and a Durable Object is re-deployed with new code. * `crypto.getRandomValues` now supports BigInt64Array and BigUint64Array. * A new implementation of the standard URL API is available. Use the `url_standard` feature flag to enable the spec-compliant implementation. ## 2022-01-28 * No user-visible changes. ## 2022-01-20 * Updated V8: 9.7 → 9.8. ## 2022-01-17 * `HTMLRewriter` now supports inspecting and modifying end tags, not just start tags. * Fixed bug where Durable Objects experiencing a transient CPU overload condition would cause in-progress requests to be unable to return a response (appearing as an indefinite hang from the client side), even after the overload condition clears. ## 2022-01-07 * The `workers_api_getters_setters_on_prototype` configuration flag corrects the way Workers attaches property getters and setters to API objects so that they can be properly subclassed. ## 2021-12-22 * Async iteration (using `for await...of`) on instances of `ReadableStream` is now available. ## 2021-12-10 * Raised the max value size in Durable Object storage from 32 KiB to 128 KiB. * `AbortSignal.timeout(delay)` returns an `AbortSignal` that will be triggered after the given number of milliseconds. * Preview implementations of the new `ReadableStream` and new `WritableStream` constructors are available behind the `streams_enable_constructors` feature flag.
* `crypto.DigestStream` is a non-standard extension to the crypto API that supports generating a hash digest from streaming data. The `DigestStream` itself is a `WritableStream` that does not retain the data written into it; instead, it generates a digest hash automatically when the flow of data has ended. The same hash algorithms supported by `crypto.subtle.digest()` are supported by the `crypto.DigestStream`. * Added early support for the `scheduler.wait()` API, which is [going through the WICG standardization process](https://github.com/WICG/scheduling-apis), to provide an `await`-able alternative to `setTimeout()`. * Fixed a bug in `deleteAll` in Durable Objects containing more than 10000 keys that could sometimes cause incomplete data deletion and/or hangs. ## 2021-12-02 * The Streams spec requires that methods returning promises must not throw synchronous errors. As part of the effort to make the Streams implementation more spec-compliant, we are converting a number of sync throws to async rejections. * Major V8 update: 9.6 → 9.7. See [V8 release v9.7 · V8](https://v8.dev/blog/v8-release-97) for more details. ## 2021-11-19 * Durable Object stubs that receive an overload exception will be permanently broken to match the behavior of other exception types. * Fixed an issue where the preview service claimed Let’s Encrypt certificates were expired. * [`structuredClone()`](https://developer.mozilla.org/en-US/docs/Web/API/structuredClone) is now supported. ## 2021-11-12 * The `AbortSignal` object has a new `reason` property indicating the reason for the cancellation. The reason can be specified when the `AbortSignal` is triggered or created. * Unhandled rejection warnings will be printed to the inspector console. ## 2021-11-05 * Upgrade to V8 9.6. This adds support for WebAssembly reference types. Refer to the [V8 release v9.6 · V8](https://v8.dev/blog/v8-release-96) for more details.
* Streams: When using the BYOB reader, the `ArrayBuffer` of the provided TypedArray should be detached, per the Streams spec. Because Workers was not previously enforcing that rule, and changing to comply with the spec could break existing code, a new compatibility flag, [streams\_byob\_reader\_detaches\_buffer](https://github.com/cloudflare/cloudflare-docs/pull/2644), has been introduced that will be enabled by default on 2021-11-10. User code should never try to reuse an `ArrayBuffer` that has been passed in to a BYOB reader’s `read()` method. The more recently added extension method `readAtLeast()` will always detach the `ArrayBuffer` and is unaffected by the compatibility flag setting. ## 2021-10-21 * Added support for the `signal` option in `EventTarget.addEventListener()`, to remove an event listener in response to an `AbortSignal`. * The `unhandledrejection` and `rejectionhandled` events are now supported. * The `ReadableStreamDefaultReader` and `ReadableStreamBYOBReader` constructors are now supported. * Added non-standard `ReadableStreamBYOBReader` method `.readAtLeast(size, buffer)` that can be used to return a buffer with at least `size` bytes. The `buffer` parameter must be an `ArrayBufferView`. Behavior is identical to `.read()` except that at least `size` bytes are read, only returning fewer if EOF is encountered. One final call to `.readAtLeast()` is still needed to get back a `done = true` value. * The compatibility flags `formdata_parser_supports_files`, `fetch_refuses_unknown_protocols`, and `durable_object_fetch_requires_full_url` have been scheduled to be turned on by default as of 2021-11-03, 2021-11-10, and 2021-11-10, respectively. For more details, refer to [Compatibility Dates](/workers/configuration/compatibility-dates/). ## 2021-10-14 * `request.signal` will always return an `AbortSignal`. * Cloudflare Workers’ integration with Chrome DevTools profiling now more accurately reports the line numbers and time elapsed.
Previously, the line numbers were shown as one line later than the actual code, and the time shown would be proportional but much longer than the actual time used. * Upgrade to V8 9.5. Refer to [V8 release v9.5 · V8](https://v8.dev/blog/v8-release-95) for more details. ## 2021-09-24 * The `AbortController` and `AbortSignal` objects are now available. * The Web Platform `queueMicrotask` API is now available. * It is now possible to use new `EventTarget()` and to create custom `EventTarget` subclasses. * The `once` option is now supported on `addEventListener` to register event handlers that will be invoked only once. * Per the HTML specification, a listener passed in to the `addEventListener` function is allowed to either be a function or an object with a `handleEvent` member function. Previously, Workers only supported the function option; now it supports both. * The `Event` object now supports most standard methods and properties. * V8 updated from 9.3 to 9.4. ## 2021-09-03 * The `crypto.randomUUID()` method can be used to generate a new random version 4 UUID. * Durable Objects are now scheduled more evenly around a colocation (colo). ## 2021-08-05 * No user-facing changes. Just bug fixes & internal maintenance. ## 2021-07-30 * Fixed a hang in Durable Objects when reading more than 16MB of data at once (for example, with a large `list()` operation). * Added a new compatibility flag `html_rewriter_treats_esi_include_as_void_tag` which causes `HTMLRewriter` to treat `<esi:include>` and `<esi:comment>` as void tags, such that they are considered to have neither an end tag nor nested content. To opt a worker into the new behavior, you must use Wrangler v1.19.0 or newer and specify the flag in `wrangler.toml`. Refer to the [Wrangler compatibility flag notes](https://github.com/cloudflare/wrangler-legacy/pull/2009) for details. ## 2021-07-23 * Performance and stability improvements.
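The 2021-09-24 `EventTarget` changes above (custom subclasses, `handleEvent` object listeners, and the `once` option) follow the DOM spec; a small self-contained sketch:

```js
// Sketch: a custom EventTarget subclass, an object listener with
// handleEvent, and the { once: true } option.
class Emitter extends EventTarget {}
const emitter = new Emitter();

let calls = 0;
const listener = { handleEvent() { calls++; } };

emitter.addEventListener("tick", listener, { once: true });
emitter.dispatchEvent(new Event("tick"));
emitter.dispatchEvent(new Event("tick")); // listener already removed
console.log(calls); // 1
```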
## 2021-07-16 * Workers can now make up to 1000 subrequests to Durable Objects from within a single request invocation, up from the prior limit of 50. * Major changes to the Durable Objects implementation, the details of which will be the subject of an upcoming blog post. In theory, the changes should not harm existing apps, except to make them faster. Let your account team know if you observe anything unusual or report your issue in the [Workers Discord](https://discord.cloudflare.com). * Durable Object constructors may now initiate I/O, such as `fetch()` calls. * Added Durable Objects `state.blockConcurrencyWhile()` API, useful for delaying delivery of requests and other events while performing some critical state-affecting task. For example, this can be used to perform start-up initialization in an object’s constructor. * In Durable Objects, the callback passed to `storage.transaction()` can now return a value, which will be propagated as the return value of the `transaction()` call. ## 2021-07-13 * The preview service now prints a warning in the devtools console when a script uses `Response/Request.clone()` but does not read one of the cloned bodies. Such a situation forces the runtime to buffer the entire message body in memory, which reduces performance. [Find an example here](https://cloudflareworkers.com/#823fbe463bfafd5a06bcfeabbdf5eeae:https://tutorial.cloudflareworkers.com). ## 2021-07-01 * Fixed bug where registering the same exact event listener method twice on the same event type threw an internal error. * Add support for the `.forEach()` method for `Headers`, `URLSearchParams`, and `FormData`. ## 2021-06-27 * WebCrypto: Implemented non-standard Ed25519 operation (algorithm NODE-ED25519, curve name NODE-ED25519). The Ed25519 implementation differs from NodeJS’s in that raw import/export of private keys is disallowed, for parity with ECDSA/ECDH. ## 2021-06-17 Changes this week: * Updated V8 from 9.1 to 9.2.
* `wrangler tail` now works on Durable Objects. Note that logs from long-lived WebSockets will not be visible until the WebSocket is closed. ## 2021-06-11 Changes this week: * Turn on V8 Sparkplug compiler. * Durable Objects that are finishing up existing requests after their code is updated will be disconnected from the persistent storage API, to maintain the invariant that only a single instance ever has access to persistent storage for a given Durable Object. ## 2021-06-04 Changes this week: * WebCrypto: We now support the “raw” import/export format for ECDSA/ECDH public keys. * `request.cf` is no longer missing when writing Workers using modules syntax. ## 2021-05-14 Changes this week: * Improve error messages coming from the WebCrypto API. * Updated V8: 9.0 → 9.1. Changes in an earlier release: * WebCrypto: Implement JWK export for RSA, ECDSA, & ECDH. * WebCrypto: Add support for RSA-OAEP. * WebCrypto: HKDF implemented. * Fix recently-introduced backwards clock jumps in Durable Objects. * `WebCrypto.generateKey()`, when asked to generate a key pair with algorithm RSA-PSS, would instead return a key pair using algorithm RSASSA-PKCS1-v1\_5. Although the key structure is the same, the signature algorithms differ, and therefore, signatures generated using the key would not be accepted by a correct implementation of RSA-PSS, and vice versa. Since this would be a pretty obvious problem and no one ever reported it to us, we assume that currently no one is using this functionality on Workers. ## 2021-04-29 Changes this week: * WebCrypto: Implemented `wrapKey()` / `unwrapKey()` for AES algorithms. * The arguments to `WebSocket.close()` are now optional, as the standard says they should be. ## 2021-04-23 Changes this week: * In the WebCrypto API, encrypt and decrypt operations are now supported for the “AES-CTR” encryption algorithm. * For Durable Objects, CPU time limits are now enforced on the object level rather than the request level.
Each time a new request arrives, the time limit is “topped up” to 500ms. After the (free) beta period ends and Durable Objects becomes generally available, we will increase this to 30 seconds. * When a Durable Object exceeds its CPU time limit, the entire object will be discarded and recreated. Previously, we allowed subsequent requests to continue using the same object, but this was dangerous because hitting the CPU time limit can leave the object in an inconsistent state. * Long running Durable Objects are given more subrequest quota as additional WebSocket messages are sent to them, to avoid the problem of a long-running Object being unable to make any more subrequests after it has been held open by a particular WebSocket for a while. * When a Durable Object’s code is updated, or when its isolate is reset due to exceeding the memory limit, all stubs pointing to the object will become invalidated and have to be recreated. This is consistent with what happens when the CPU time is exceeded, or when stubs become disconnected due to random network errors. This behavior is useful, as apps can now assume that two messages sent to the same stub will be delivered to exactly the same live instance (if they are delivered at all). Apps which do not care about this property should recreate their stubs for every request; there is no performance penalty from doing so. * When a Durable Object’s isolate exceeds its memory limit, an exception with an explanatory message will now be thrown to the caller, instead of “internal error”. * When a Durable Object exceeds its CPU time limit, an exception with an explanatory message will now be thrown to the caller, instead of “internal error”. * `wrangler tail` now reports CPU-time-exceeded exceptions with an explanatory message instead of “internal error”. ## 2021-04-19 Changes since the last post on 3/26: * Cron Triggers now have a 15 minute wall time limit, in addition to the existing CPU time limit.
(Previously, there was no limit, so a cron trigger that spent all its time waiting for I/O could hang forever.) * Our WebCrypto implementation now supports importing and exporting HMAC and AES keys in JWK format. * Our WebCrypto implementation now supports AES key generation for CTR, CBC, and KW modes. AES-CTR encrypt/decrypt and AES-KW key wrapping/unwrapping support will land in a later release. * Fixed bug where `crypto.subtle.encrypt()` on zero-length inputs would sometimes throw an exception. * Errors on script upload will now be properly reported for module-based scripts, instead of appearing as a ReferenceError. * WebCrypto: Key derivation for ECDH. * WebCrypto: Support ECDH key generation & import. * WebCrypto: Support ECDSA key generation. * Improved exception messages thrown by the WebCrypto API somewhat. * `waitUntil` is now supported for module Workers. An additional argument called ‘ctx’ is passed after ‘env’, and `waitUntil` is a method on ‘ctx’. * `passThroughOnException` is now available under the ctx argument to module handlers. * Reliability improvements for Durable Objects. * Reliability improvements for the Durable Objects persistent storage API. * `ScheduledEvent.cron` is now set to the original cron string that the event was scheduled for. ## 2021-03-26 Changes this week: * Existing WebSocket connections to Durable Objects will now be forcibly disconnected on code updates, in order to force clients to connect to the instance running the new code. ## 2021-03-11 New this week: * When the Workers Runtime itself reloads due to us deploying a new version or config change, we now preload high-traffic Workers in the new instance of the runtime before traffic cuts over.
This ensures that users do not observe cold starts for these Workers due to the upgrade, and also fixes a low rate of spurious 503 errors that we had previously been seeing due to overload during such reloads. (It looks like no release notes were posted the last few weeks, but there were no new user-visible changes to report.) ## 2021-02-11 Changes this week: * In the preview mode of the dashboard, a Worker that fails during startup will now return a 500 response, rather than getting the default passthrough behavior, which was making it harder to notice when a Worker was failing. * A Durable Object’s ID is now provided to it in its constructor. It can be accessed from the `state` provided as the constructor’s first argument, as in `state.id`. ## 2021-02-05 New this week: * V8 has been updated from 8.8 to 8.9. * During a `fetch()`, if the destination server commits certain HTTP protocol errors, such as returning invalid (unparsable) headers, we now throw an exception whose description explains the problem, rather than an “internal error”. New last week (forgot to post): * Added support for `waitUntil()` in Durable Objects. It is a method on the state object passed to the Durable Object class’s constructor. ## 2021-01-22 New in the past week: * Fixed a bug which caused scripts with WebAssembly modules to hang when using devtools in the preview service. ## 2021-01-14 Changes this week: * Implemented File and Blob APIs, which can be used when constructing FormData in outgoing requests. Unfortunately, FormData from incoming requests at this time will still use strings even when file metadata was present, in order to avoid breaking existing deployed Workers. We will find a way to fix that in the future. ## 2021-01-07 Changes this week: * No user-visible changes. Changes in the prior release: * Fixed delivery of WebSocket “error” events.
* Fixed a rare bug where a WritableStream could be garbage collected while it still had writes queued, causing those writes to be lost. ## 2020-12-10 Changes this week: * Major V8 update: 8.7.220.29 -> 8.8.278.8 ## 2019-09-19 Changes this week: * Unannounced new feature. (Stay tuned.) * Enforced new limit on concurrent subrequests (see below). * Stability improvements. **Concurrent Subrequest Limit** As of this release, we impose a limit on the number of outgoing HTTP requests that a Worker can make simultaneously. **For each incoming request**, a Worker can make up to 6 concurrent outgoing `fetch()` requests. If a Worker’s request handler attempts to call `fetch()` more than six times (on behalf of a single incoming request) without waiting for previous fetches to complete, then fetches after the sixth will be delayed until previous fetches have finished. A Worker is still allowed to make up to 50 total subrequests per incoming request, as before; the new limit is only on how many can execute simultaneously. **Automatic deadlock avoidance** Our implementation automatically detects if delaying a fetch would cause the Worker to deadlock, and prevents the deadlock by cancelling the least-recently-used request. For example, imagine a Worker that starts 10 requests and waits to receive all the responses without reading the response bodies. A fetch is not considered complete until the response body is fully-consumed (for example, by calling `response.text()` or `response.json()`, or by reading from `response.body`). Therefore, in this scenario, the first six requests will run and their response objects would be returned, but the remaining four requests would not start until the earlier responses are consumed. If the Worker fails to actually read the earlier response bodies and is still waiting for the last four requests, then the Workers Runtime will automatically cancel the first four requests so that the remaining ones can complete. 
If the Worker later goes back and tries to read the response bodies, exceptions will be thrown. **Most Workers are Not Affected** The vast majority of Workers make fewer than six outgoing requests per incoming request. Such Workers are totally unaffected by this change. Of Workers that do make more than six outgoing requests concurrently for a single incoming request, the vast majority either read the response bodies immediately upon each response returning, or never read the response bodies at all. In either case, these Workers will still work as intended – although they may be a little slower due to outgoing requests after the sixth being delayed. A very small number of deployed Workers (about 20 total) make more than 6 requests concurrently, wait for all responses to return, and then go back to read the response bodies later. For all known Workers that do this, we have temporarily grandfathered your zone into the old behavior, so that your Workers will continue to operate. However, we will be communicating with customers one-by-one to request that you update your code to proactively read response bodies, so that it works correctly under the new limit. **Why did we do this?** Cloudflare communicates with origin servers using HTTP/1.1, not HTTP/2. Under HTTP/1.1, each concurrent request requires a separate connection. So, Workers that make many requests concurrently could force the creation of an excessive number of connections to origin servers. In some cases, this caused resource exhaustion problems either at the origin server or within our own stack. On investigating the use cases for such Workers, every case we looked at turned out to be a mistake or otherwise unnecessary. Often, developers were making requests and receiving responses, but they cared only about the response status and headers, not the body. So, they threw away the response objects without reading the body, essentially leaking connections.
In some other cases, developers had simply accidentally written code that made excessive requests in a loop for no good reason at all. Both of these cases should now cause no problems under the new behavior. We chose the limit of 6 concurrent connections based on the fact that Chrome enforces the same limit on web sites in the browser. ## 2020-12-04 Changes this week: * Durable Objects storage API now supports listing keys by prefix. * Improved error message when a single request performs more than 1000 KV operations to make clear that a per-request limit was reached, not a global rate limit. * `wrangler dev` previews should now honor non-default resource limits, for example, longer CPU limits for those in the Workers Unbound beta. * Fixed off-by-one line numbers in Worker exceptions. * Exceptions thrown in a Durable Object’s `fetch()` method are now tunneled to its caller. * Fixed a bug where a large Durable Object response body could cause the Durable Object to become unresponsive. ## 2020-11-13 Changes over the past week: * `ReadableStream.cancel()` and `ReadableStream.getReader().cancel()` now take an optional, instead of a mandatory, argument, to conform with the Streams spec. * Fixed an error that occurred when a WASM module declared that it wanted to grow larger than 128MB. Instead, the actual memory usage of the module is monitored and an error is thrown if it exceeds 128MB used. ## 2020-11-05 Changes this week: * Major V8 update: 8.6 -> 8.7 * Limit the maximum number of Durable Objects keys that can be changed in a single transaction to 128. ## 2020-10-05 We had our usual weekly release last week, but: * No user-visible changes. ## 2020-09-24 Changes this week: * Internal changes to support upcoming features. 
Also, a change from the 2020-09-08 release that it seems we forgot to post: * V8 major update: 8.5 -> 8.6 ## 2020-08-03 Changes last week: * Fixed a regression which could cause `HTMLRewriter.transform()` to throw spurious “The parser has stopped.” errors. * Upgraded V8 from 8.4 to 8.5. ## 2020-07-09 Changes this week: * Fixed a regression in HTMLRewriter: [https://github.com/cloudflare/lol-html/issues/50](https://github.com/cloudflare/lol-html/issues/50) * Common HTTP method names passed to `fetch()` or `new Request()` are now case-insensitive as required by the Fetch API spec. Changes last week (… forgot to post): * `setTimeout`/`setInterval` can now take additional arguments which will be passed on to the callback, as required by the spec. (Few people use this feature today because it’s usually much easier to use lambda captures.) Changes the week before last (… also… forgot to post… we really need to code up a bot for this): * The HTMLRewriter now supports the `:nth-child`, `:first-child`, `:nth-of-type`, and `:first-of-type` selectors. ## 2020-05-15 Changes this week: * Implemented API for yet-to-be-announced new feature. ## 2020-04-20 Looks like we forgot to post release notes for a couple weeks. Releases still are happening weekly as always, but the “post to the community” step is insufficiently automated… 4/2 release: * Fixed a source of long garbage collection pauses in memory limit enforcement. 4/9 release: * No publicly-visible changes. 4/16 release: * In preview, we now log a warning when attempting to construct a `Request` or `Response` whose body is of type `FormData` but with the `Content-Type` header overridden. Such bodies would not be parseable by the receiver. ## 2020-03-26 New this week: * Certain “internal errors” that could be thrown when using the Cache API are now reported with human-friendly error messages. For example, `caches.default.match("not a URL")` now throws a TypeError.
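The `setTimeout`/`setInterval` argument forwarding from the 2020-07-09 entry above follows the HTML timer spec; a tiny sketch:

```js
// Sketch: arguments after the delay are forwarded to the callback.
setTimeout((a, b) => {
  console.log(a + b); // 5
}, 10, 2, 3);
```

As the entry notes, a closure capturing `a` and `b` is usually simpler; this form exists mainly for spec compliance.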
## 2020-02-28 New from the past two weeks: * Fixed a bug in the preview service where the CPU time limiter was overly lenient for the first several requests handled by a newly-started worker. The same bug actually exists in production as well, but we are much more cautious about fixing it there, since doing so might break live sites. If you find your worker now exceeds CPU time limits in preview, then it is likely exceeding time limits in production as well, but only appearing to work because the limits are too lenient for the first few requests. Such Workers will eventually fail in production, too (and always have), so it is best to fix the problem in preview before deploying. * Major V8 update: 8.0 -> 8.1 * Minor bug fixes. ## 2020-02-13 Changes over the last couple weeks: * Fixed a bug where if two differently-named scripts within the same account had identical content and were deployed to the same zone, they would be treated as the “same Worker”, meaning they would share the same isolate and global variables. This only applied between Workers on the same zone, so was not a security threat, but it caused confusion. Now, two differently-named Worker scripts will never be considered the same Worker even if they have identical content. * Performance and stability improvements. ## 2020-01-24 It has been a while since we posted release notes, partly due to the holidays. Here is what is new over the past month: * Performance and stability improvements. * A rare source of `daemonDown` errors when processing bursty traffic over HTTP/2 has been eliminated. * Updated V8 7.9 -> 8.0. ## 2019-12-12 New this week: * We now pass correct line and column numbers more often when reporting exceptions to the V8 inspector. There remain some cases where the reported line and column numbers will be wrong. * Fixed a significant source of daemonDown (1105) errors. ## 2019-12-06 Runtime release notes covering the past few weeks: * Increased total per-request `Cache.put()` limit to 5GiB.
* Increased individual `Cache.put()` limits to the lesser of 5GiB or the zone's normal [cache limits](/cache/concepts/default-cache-behavior/). * Added a helpful error message explaining AES decryption failures. * Some overload errors were erroneously being reported as daemonDown (1105) errors. They have been changed to exceededCpu (1102) errors, which better describes their cause. * More "internal errors" were converted to useful user-facing errors. * Stability improvements and bug fixes. --- # Changelog URL: https://developers.cloudflare.com/workers/platform/changelog/ import { ProductReleaseNotes } from "~/components"; This changelog details meaningful changes made to Workers across the Cloudflare dashboard, Wrangler, the API, and the workerd runtime. These changes are not configurable. This is *different* from [compatibility dates](/workers/configuration/compatibility-dates/) and [compatibility flags](/workers/configuration/compatibility-flags/), which let you explicitly opt-in to or opt-out of specific changes to the Workers Runtime. {/* <!-- Actual content lives in /src/content/release-notes/workers.yaml. Update the file there for new entries to appear here. For more details, refer to https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/changelog/#yaml-file --> */} <ProductReleaseNotes /> --- # Changelog URL: https://developers.cloudflare.com/workers/platform/changelog/platform/ {/* <!-- All changelog entries live in associated /src/content/changelogs/{productName}.yaml. This page needs to exist in order to house the associated RSS feed. --> */} --- # Wrangler URL: https://developers.cloudflare.com/workers/platform/changelog/wrangler/ import { ProductReleaseNotes } from "~/components"; {/* <!-- All changelog entries are pulled directly from https://api.github.com/repos/cloudflare/workers-sdk/releases and manipulated in the src/util/release-notes.ts file (getWranglerChangelog() function).
This is unique compared to other changelog entries. --> */} <ProductReleaseNotes /> --- # Fetch Handler URL: https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/ ## Background Incoming HTTP requests to a Worker are passed to the `fetch()` handler as a [`Request`](/workers/runtime-apis/request/) object. To respond to the request with a response, return a [`Response`](/workers/runtime-apis/response/) object: ```js export default { async fetch(request, env, ctx) { return new Response('Hello World!'); }, }; ``` :::note The Workers runtime does not support `XMLHttpRequest` (XHR). Learn the difference between `XMLHttpRequest` and `fetch()` in the [MDN](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) documentation. ::: ### Parameters * `request` Request * The incoming HTTP request. * `env` object * The [bindings](/workers/configuration/environment-variables/) available to the Worker. As long as the [environment](/workers/wrangler/environments/) has not changed, the same object (equal by identity) may be passed to multiple requests. * <code>ctx.waitUntil(promise)</code> : void * Refer to [`waitUntil`](/workers/runtime-apis/context/#waituntil). * <code>ctx.passThroughOnException()</code> : void * Refer to [`passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception). --- # Handlers URL: https://developers.cloudflare.com/workers/runtime-apis/handlers/ import { DirectoryListing } from "~/components" Handlers are methods on Workers that can receive and process external inputs, and can be invoked from outside your Worker. For example, the `fetch()` handler receives an HTTP request, and can return a response: ```js export default { async fetch(request, env, ctx) { return new Response('Hello World!'); }, }; ``` The following handlers are available within Workers: <DirectoryListing /> ## Handlers in Python Workers When you [write Workers in Python](/workers/languages/python/), handlers are prefixed with `on_`.
For example, `on_fetch` or `on_scheduled`. --- # Scheduled Handler URL: https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/ ## Background When a Worker is invoked via a [Cron Trigger](/workers/configuration/cron-triggers/), the `scheduled()` handler handles the invocation. :::note[Testing scheduled() handlers in local development] You can test the behavior of your `scheduled()` handler in local development using Wrangler. Cron Triggers can be tested using Wrangler by passing in the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This will expose a `/__scheduled` route which can be used to test using an HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=*+*+*+*+*" ``` ::: --- ## Syntax ```js export default { async scheduled(event, env, ctx) { ctx.waitUntil(doSomeTaskOnASchedule()); }, }; ``` ### Properties - `event.cron` string - The value of the [Cron Trigger](/workers/configuration/cron-triggers/) that started the `ScheduledEvent`. - `event.type` string - The type of event. This will always return `"scheduled"`. - `event.scheduledTime` number - The time the `ScheduledEvent` was scheduled to be executed in milliseconds since January 1, 1970, UTC. It can be parsed as <code>new Date(event.scheduledTime)</code>. - `env` object - An object containing the bindings associated with your Worker using ES modules format, such as KV namespaces and Durable Objects. - `ctx` object - An object containing the context associated with your Worker using ES modules format. Currently, this object just contains the `waitUntil` function. ### Methods When a Workers script is invoked by a [Cron Trigger](/workers/configuration/cron-triggers/), the Workers runtime starts a `ScheduledEvent` which will be handled by the `scheduled` function in your Workers Module class.
The `ctx` argument represents the context your function runs in, and contains the following methods to control what happens next: - <code>ctx.waitUntil(promise)</code> : void - Use this method to notify the runtime to wait for asynchronous tasks (for example, logging, analytics to third-party services, streaming and caching). The first `ctx.waitUntil` to fail will be observed and recorded as the status in the [Cron Trigger](/workers/configuration/cron-triggers/) Past Events table. Otherwise, it will be reported as a success. --- # Tail Handler URL: https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/ ## Background The `tail()` handler is the handler you implement when writing a [Tail Worker](/workers/observability/logs/tail-workers/). Tail Workers can be used to process logs in real-time and send them to a logging or analytics service. The `tail()` handler is called once each time the connected producer Worker is invoked. To configure a Tail Worker, refer to [Tail Workers documentation](/workers/observability/logs/tail-workers/). ## Syntax ```js export default { async tail(events, env, ctx) { fetch("<YOUR_ENDPOINT>", { method: "POST", body: JSON.stringify(events), }) } } ``` ### Parameters * `events` array * An array of [`TailItems`](/workers/runtime-apis/handlers/tail/#tailitems). One `TailItem` is collected for each event that triggers a Worker. For Workers for Platforms customers with a Tail Worker installed on the dynamic dispatch Worker, `events` will contain two elements: one for the dynamic dispatch Worker and one for the User Worker. * `env` object * An object containing the bindings associated with your Worker using [ES modules format](/workers/reference/migrate-to-module-workers/), such as KV namespaces and Durable Objects. * `ctx` object * An object containing the context associated with your Worker using [ES modules format](/workers/reference/migrate-to-module-workers/).
Currently, this object just contains the `waitUntil` function. ### Properties * `event.type` string * The type of event. This will always return `"tail"`. * `event.traces` array * An array of [`TailItems`](/workers/runtime-apis/handlers/tail/#tailitems). One `TailItem` is collected for each event that triggers a Worker. For Workers for Platforms customers with a Tail Worker installed on the dynamic dispatch Worker, `events` will contain two elements: one for the dynamic dispatch Worker and one for the user Worker. * <code>event.waitUntil(promise)</code> : void * Refer to [`waitUntil`](/workers/runtime-apis/context/#waituntil). Note that unlike fetch event handlers, tail handlers do not return a value, so this is the only way for trace Workers to do asynchronous work. ### `TailItems` #### Properties * `scriptName` string * The name of the producer script. * `event` object * Contains information about the Worker's triggering event. * For fetch events: a [`FetchEventInfo` object](/workers/runtime-apis/handlers/tail/#fetcheventinfo) * For other event types: `null`, currently. * `eventTimestamp` number * Measured in epoch time. * `logs` array * An array of [TailLogs](/workers/runtime-apis/handlers/tail/#taillog). * `exceptions` array * An array of [`TailExceptions`](/workers/runtime-apis/handlers/tail/#tailexception). A single Worker invocation might result in multiple unhandled exceptions, since a Worker can register multiple asynchronous tasks. * `outcome` string * The outcome of the Worker invocation, one of: * `unknown`: outcome status was not set. * `ok`: The Worker invocation succeeded. * `exception`: An unhandled exception was thrown. This can happen for many reasons, including: * An uncaught JavaScript exception. * A fetch handler that does not result in a Response. * An internal error. * `exceededCpu`: The Worker invocation exceeded its CPU limits. * `exceededMemory`: The Worker invocation exceeded memory limits.
* `scriptNotFound`: An internal error from difficulty retrieving the Worker script. * `canceled`: The Worker invocation was canceled before it completed, commonly because the client disconnected before a response could be sent. * `responseStreamDisconnected`: The response stream was disconnected during deferred proxying. Happens when either the client or server hangs up early. :::note[Outcome is not the same as HTTP status.] Outcome is equivalent to the exit status of a script and an indicator of whether it has fully run to completion. A Worker outcome may differ from a response code if, for example: * a script successfully processes a request but is logically designed to return a `4xx`/`5xx` response. * a script sends a successful `200` response but an asynchronous task registered via `waitUntil()` later exceeds CPU or memory limits. ::: ### `FetchEventInfo` #### Properties * `request` object * A [`TailRequest` object](/workers/runtime-apis/handlers/tail/#tailrequest). * `response` object * A [`TailResponse` object](/workers/runtime-apis/handlers/tail/#tailresponse). ### `TailRequest` #### Properties * `cf` object * Contains the data from [`IncomingRequestCfProperties`](/workers/runtime-apis/request/#incomingrequestcfproperties). * `headers` object * Header name/value entries (redacted by default). Header names are lowercased, and the values associated with duplicate header names are concatenated, with the string `", "` (comma space) interleaved, similar to [the Fetch standard](https://fetch.spec.whatwg.org/#concept-header-list-get). * `method` string * The HTTP request method. * `url` string * The HTTP request URL (redacted by default). #### Methods * `getUnredacted()` object * Returns a TailRequest object with unredacted properties. Some of the properties of `TailRequest` are redacted by default to make it harder to accidentally record sensitive information, like user credentials or API tokens.
The redactions use heuristic rules, so they are subject to false positives and negatives. Clients can call `getUnredacted()` to bypass redaction, but they should always be careful about what information is retained, whether using the redaction or not. * Header redaction: The header value will be the string `"REDACTED"` when the (case-insensitive) header name is `cookie`/`set-cookie` or contains a substring `"auth"`, `"key"`, `"secret"`, `"token"`, or `"jwt"`. * URL redaction: For each greedily matched substring of ID characters (a-z, A-Z, 0-9, '+', '-', '\_') in the URL, if it meets the following criteria for a hex or base-64 ID, the substring will be replaced with the string `"REDACTED"`. * Hex ID: Contains 32 or more hex digits, and contains only hex digits and separators ('+', '-', '\_') * Base-64 ID: Contains 21 or more characters, and contains at least two uppercase, two lowercase, and two digits. ### `TailResponse` #### Properties * `status` number * The HTTP status code. ### `TailLog` Records information sent to console functions. #### Properties * `timestamp` number * Measured in epoch time. * `level` string * A string indicating the console function that was called. One of: `debug`, `info`, `log`, `warn`, `error`. * `message` object * The array of parameters passed to the console function. ### `TailException` Records an unhandled exception that occurred during the Worker invocation. #### Properties * `timestamp` number * Measured in epoch time. * `name` string * The error type (for example, `Error`, `TypeError`, etc.). * `message` object * The error description (for example, `"x" is not a function`). ## Related resources * [Tail Workers](/workers/observability/logs/tail-workers/) - Configure a Tail Worker to receive information about the execution of other Workers.
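The documented URL-redaction heuristics above can be sketched roughly as follows. This is an illustrative approximation only, not Cloudflare's actual implementation; the function names are hypothetical.

```javascript
// Sketch of the documented URL-redaction heuristics (illustrative, not
// Cloudflare's real code). Greedily match runs of ID characters and redact
// those that look like hex or base-64 IDs per the rules above.
const ID_RUN = /[A-Za-z0-9+\-_]+/g;

function looksLikeHexId(run) {
  // 32+ hex digits, and nothing but hex digits and separators ('+', '-', '_')
  const hexDigits = (run.match(/[0-9a-fA-F]/g) || []).length;
  return hexDigits >= 32 && /^[0-9a-fA-F+\-_]+$/.test(run);
}

function looksLikeBase64Id(run) {
  // 21+ characters with at least two uppercase, two lowercase, and two digits
  return (
    run.length >= 21 &&
    (run.match(/[A-Z]/g) || []).length >= 2 &&
    (run.match(/[a-z]/g) || []).length >= 2 &&
    (run.match(/[0-9]/g) || []).length >= 2
  );
}

function redactUrl(url) {
  return url.replace(ID_RUN, (run) =>
    looksLikeHexId(run) || looksLikeBase64Id(run) ? "REDACTED" : run
  );
}

console.log(redactUrl("https://example.com/user/5ca1ab1e5ca1ab1e5ca1ab1e5ca1ab1e/profile"));
// → https://example.com/user/REDACTED/profile
```

Short path segments like `profile` survive, while a 32-hex-digit token or a long mixed-case ID is replaced, mirroring the false-positive/false-negative trade-off the rules accept.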
--- # Bindings (env) URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/ import { DirectoryListing, WranglerConfig } from "~/components" Bindings allow your Worker to interact with resources on the Cloudflare Developer Platform. The following bindings are available today: <DirectoryListing /> ## What is a binding? When you declare a binding on your Worker, you grant it a specific capability, such as being able to read and write files to an [R2](/r2/) bucket. For example: <WranglerConfig> ```toml main = "./src/index.js" r2_buckets = [ { binding = "MY_BUCKET", bucket_name = "<MY_BUCKET_NAME>" } ] ``` </WranglerConfig> ```js export default { async fetch(request, env) { const url = new URL(request.url); const key = url.pathname.slice(1); await env.MY_BUCKET.put(key, request.body); return new Response(`Put ${key} successfully!`); } } ``` You can think of a binding as a permission and an API in one piece. With bindings, you never have to add secret keys or tokens to your Worker in order to access resources on your Cloudflare account — the permission is embedded within the API itself. The underlying secret is never exposed to your Worker's code, and therefore can't be accidentally leaked. ## Making changes to bindings When you deploy a change to your Worker, and only change its bindings (i.e. you don't change the Worker's code), Cloudflare may reuse existing isolates that are already running your Worker. This improves performance — you can change an environment variable or other binding without unnecessarily reloading your code. As a result, you must be careful when "polluting" global scope with derivatives of your bindings. Anything you create there might continue to exist despite making changes to any underlying bindings. Consider an external client instance which uses a secret API key accessed from `env`: if you put this client instance in global scope and then make changes to the secret, a client instance using the original value might continue to exist.
The correct approach would be to create a new client instance for each request. The following is a good approach: ```ts export default { fetch(request, env) { let client = new Client(env.MY_SECRET); // `client` is guaranteed to be up-to-date with the latest value of `env.MY_SECRET` since a new instance is constructed with every incoming request // ... do things with `client` } } ``` Compared to this alternative, which might have surprising and unwanted behavior: ```ts let client = undefined; export default { fetch(request, env) { client ??= new Client(env.MY_SECRET); // `client` here might not be updated when `env.MY_SECRET` changes, since it may already exist in global scope // ... do things with `client` } } ``` If you have more advanced needs, explore the [AsyncLocalStorage API](/workers/runtime-apis/nodejs/asynclocalstorage/), which provides a mechanism for exposing values down to child execution handlers. --- # mTLS URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/ import { TabItem, Tabs, WranglerConfig } from "~/components"; When using [HTTPS](https://www.cloudflare.com/learning/ssl/what-is-https/), a server presents a certificate, which the client authenticates, in order to prove the server's identity. For even tighter security, some services require that the client also present a certificate. This process - known as [mTLS](https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/) - moves authentication into the TLS protocol, rather than managing it in application code. Connections from unauthorized clients are rejected during the TLS handshake instead. To present a client certificate when communicating with a service, create an mTLS certificate [binding](/workers/runtime-apis/bindings/) in your Worker project's Wrangler file. This will allow your Worker to present a client certificate to a service on your behalf.
:::caution Currently, mTLS for Workers cannot be used for requests made to a service that is a [proxied zone](/dns/proxy-status/) on Cloudflare. If your Worker presents a client certificate to a service proxied by Cloudflare, Cloudflare will return a `520` error. ::: First, upload a certificate and its private key to your account using the [`wrangler mtls-certificate`](/workers/wrangler/commands/#mtls-certificate) command: :::caution The `wrangler mtls-certificate upload` command requires the [SSL and Certificates Edit API token scope](/fundamentals/api/reference/permissions/). If you are using the OAuth flow triggered by `wrangler login`, the correct scope is set automatically. If you are using API tokens, refer to [Create an API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) to set the right scope for your API token. ::: ```sh npx wrangler mtls-certificate upload --cert cert.pem --key key.pem --name my-client-cert ``` Then, update your Worker project's Wrangler file to create an mTLS certificate binding: <WranglerConfig> ```toml title="wrangler.toml" mtls_certificates = [ { binding = "MY_CERT", certificate_id = "<CERTIFICATE_ID>" } ] ``` </WranglerConfig> :::note Certificate IDs are displayed after uploading, and can also be viewed with the command `wrangler mtls-certificate list`. ::: Adding an mTLS certificate binding includes a variable in the Worker's environment on which the `fetch()` method is available. This `fetch()` method uses the standard [Fetch](/workers/runtime-apis/fetch/) API and has the exact same signature as the global `fetch`, but always presents the client certificate when establishing the TLS connection. :::note mTLS certificate bindings present an API similar to [service bindings](/workers/runtime-apis/bindings/service-bindings). 
::: ### Interface <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, environment) { return await environment.MY_CERT.fetch("https://a-secured-origin.com"); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env { MY_CERT: Fetcher; } export default { async fetch(request, environment): Promise<Response> { return await environment.MY_CERT.fetch("https://a-secured-origin.com") } } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> --- # Rate Limiting URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/ import { TabItem, Tabs, WranglerConfig } from "~/components" The Rate Limiting API lets you define rate limits and write code around them in your Worker. You can use it to enforce: * Rate limits that are applied after your Worker starts, only once a specific part of your code is reached * Different rate limits for different types of customers or users (ex: free vs. paid) * Resource-specific or path-specific limits (ex: limit per API route) * Any combination of the above The Rate Limiting API is backed by the same infrastructure that serves the [Rate limiting rules](/waf/rate-limiting-rules/) that are built into the [Cloudflare Web Application Firewall (WAF)](/waf/). :::caution[The Rate Limiting API is in open beta] * You must use version 3.45.0 or later of the [Wrangler CLI](/workers/wrangler). We want your feedback. Tell us what you'd like to see in the [#workers-discussions](https://discord.com/channels/595317990191398933/779390076219686943) or [#workers-help](https://discord.com/channels/595317990191398933/1052656806058528849) channels of the [Cloudflare Developers Discord](https://discord.cloudflare.com/).
You can find an archive of the previous discussion in [#rate-limiting-beta](https://discord.com/channels/595317990191398933/1225429769219211436). ::: ## Get started First, add a [binding](/workers/runtime-apis/bindings) to your Worker that gives it access to the Rate Limiting API: <WranglerConfig> ```toml main = "src/index.js" # The rate limiting API is in open beta. [[unsafe.bindings]] name = "MY_RATE_LIMITER" type = "ratelimit" # An identifier you define, that is unique to your Cloudflare account. # Must be an integer. namespace_id = "1001" # Limit: the number of tokens allowed within a given period in a single # Cloudflare location # Period: the duration of the period, in seconds. Must be either 10 or 60 simple = { limit = 100, period = 60 } ``` </WranglerConfig> This binding makes the `MY_RATE_LIMITER` binding available, which provides a `limit()` method: <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```javascript export default { async fetch(request, env) { const { pathname } = new URL(request.url) const { success } = await env.MY_RATE_LIMITER.limit({ key: pathname }) // key can be any string of your choosing if (!success) { return new Response(`429 Failure – rate limit exceeded for ${pathname}`, { status: 429 }) } return new Response(`Success!`) } } ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env { MY_RATE_LIMITER: any; } export default { async fetch(request, env): Promise<Response> { const { pathname } = new URL(request.url) const { success } = await env.MY_RATE_LIMITER.limit({ key: pathname }) // key can be any string of your choosing if (!success) { return new Response(`429 Failure – rate limit exceeded for ${pathname}`, { status: 429 }) } return new Response(`Success!`) } } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> The `limit()` API accepts a single argument — a configuration object with the `key` field. * The key you provide can be any `string` value.
* A common pattern is to define your key by combining a string that uniquely identifies the actor initiating the request (ex: a user ID or customer ID) and a string that identifies a specific resource (ex: a particular API route). You can define and configure multiple rate limiting configurations per Worker, which allows you to define different limits against incoming request and/or user parameters as needed to protect your application or upstream APIs. For example, here is how you can define two rate limiting configurations for free and paid tier users: <WranglerConfig> ```toml main = "src/index.js" # Free user rate limiting [[unsafe.bindings]] name = "FREE_USER_RATE_LIMITER" type = "ratelimit" namespace_id = "1001" simple = { limit = 100, period = 60 } # Paid user rate limiting [[unsafe.bindings]] name = "PAID_USER_RATE_LIMITER" type = "ratelimit" namespace_id = "1002" simple = { limit = 1000, period = 60 } ``` </WranglerConfig> ## Configuration A rate limiting binding has three settings: 1. `namespace_id` (number) - a positive integer that uniquely defines this rate limiting configuration - e.g. `namespace_id = "999"`. 2. `limit` (number) - the limit (number of requests, number of API calls) to be applied. This is incremented when you call the `limit()` function in your Worker. 3. `period` (seconds) - must be `10` or `60`. The period to measure increments to the `limit` over, in seconds. For example, to apply a rate limit of 1500 requests per minute, you would define a rate limiting configuration as follows: <WranglerConfig> ```toml [[unsafe.bindings]] name = "MY_RATE_LIMITER" type = "ratelimit" namespace_id = "1001" # 1500 requests - calls to limit() increment this simple = { limit = 1500, period = 60 } ``` </WranglerConfig> ## Best practices The `key` passed to the `limit` function, that determines what to rate limit on, should represent a unique characteristic of a user or class of user that you wish to rate limit. 
* Good choices include API keys in `Authorization` HTTP headers, URL paths or routes, specific query parameters used by your application, and/or user IDs and tenant IDs. These are all stable identifiers and are unlikely to change from request-to-request. * It is not recommended to use IP addresses or locations (regions or countries), since these can be shared by many users in many valid cases. You may find yourself unintentionally rate limiting a wider group of users than you intended by rate limiting on these keys. ```ts // Recommended: use a key that represents a specific user or class of user const url = new URL(req.url) const userId = url.searchParams.get("userId") || "" const { success } = await env.MY_RATE_LIMITER.limit({ key: userId }) // Not recommended: many users may share a single IP, especially on mobile networks // or when using privacy-enabling proxies const ipAddress = req.headers.get("cf-connecting-ip") || "" const { success: ipLimitOk } = await env.MY_RATE_LIMITER.limit({ key: ipAddress }) ``` ## Locality Rate limits that you define and enforce in your Worker are local to the [Cloudflare location](https://www.cloudflare.com/network/) that your Worker runs in. For example, if a request comes in from Sydney, Australia, to the Worker shown above, after 100 requests in a 60 second window, any further requests for a particular path would be rejected, and a 429 HTTP status code returned. But this would only apply to requests served in Sydney. For each unique key you pass to your rate limiting binding, there is a unique limit per Cloudflare location. ## Performance The Rate Limiting API in Workers is designed to be fast. The underlying counters are cached on the same machine that your Worker runs in, and updated asynchronously in the background by communicating with a backing store that is within the same Cloudflare location.
This means that while in your code you `await` a call to the `limit()` method: ```javascript const { success } = await env.MY_RATE_LIMITER.limit({ key: customerId }) ``` You are not waiting on a network request. You can use the Rate Limiting API without introducing any meaningful latency to your Worker. ## Accuracy The above also means that the Rate Limiting API is permissive and eventually consistent; it is intentionally not designed to be an accurate accounting system. For example, if many requests come in to your Worker in a single Cloudflare location, all rate limited on the same key, the [isolate](/workers/reference/how-workers-works) that serves each request will check against its locally cached value of the rate limit. Very quickly, but not immediately, these requests will count towards the rate limit within that Cloudflare location. ## Examples * [`@elithrar/workers-hono-rate-limit`](https://github.com/elithrar/workers-hono-rate-limit) — Middleware that lets you easily add rate limits to routes in your [Hono](https://hono.dev/) application. * [`@hono-rate-limiter/cloudflare`](https://github.com/rhinobase/hono-rate-limiter) — Middleware that lets you easily add rate limits to routes in your [Hono](https://hono.dev/) application, with multiple data stores to choose from. --- # Version metadata URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/ import { TabItem, Tabs, WranglerConfig } from "~/components" The version metadata binding can be used to access metadata associated with a [version](/workers/configuration/versions-and-deployments/#versions) from inside the Workers runtime. Worker version ID, version tag and timestamp of when the version was created are available through the version metadata binding. They can be used in events sent to [Workers Analytics Engine](/analytics/analytics-engine/) or to any third-party analytics/metrics service in order to aggregate by Worker version.
To use the version metadata binding, update your Worker's Wrangler file: <WranglerConfig> ```toml title="wrangler.toml" [version_metadata] binding = "CF_VERSION_METADATA" ``` </WranglerConfig> ### Interface An example of how to access the version ID and version tag from within a Worker to send events to [Workers Analytics Engine](/analytics/analytics-engine/): <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { const { id: versionId, tag: versionTag, timestamp: versionTimestamp } = env.CF_VERSION_METADATA; env.WAE.writeDataPoint({ indexes: [versionId], blobs: [versionTag, versionTimestamp], //... }); //... }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts interface Env { CF_VERSION_METADATA: WorkerVersionMetadata; WAE: AnalyticsEngineDataset; } export default { async fetch(request, env, ctx) { const { id: versionId, tag: versionTag } = env.CF_VERSION_METADATA; env.WAE.writeDataPoint({ indexes: [versionId], blobs: [versionTag], //... }); //... }, } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> --- # EventEmitter URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/eventemitter/ import { Render } from "~/components" <Render file="nodejs-compat-howto" /> An `EventEmitter` is an object that emits named events that cause listeners to be called. ```js import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.on('hello', (...args) => { console.log(...args); }); emitter.emit('hello', 1, 2, 3); ``` The implementation in the Workers runtime fully supports the entire Node.js `EventEmitter` API. This includes the `captureRejections` option that allows improved handling of async functions as event handlers: ```js const emitter = new EventEmitter({ captureRejections: true }); emitter.on('hello', async (...args) => { throw new Error('boom'); }); emitter.on('error', (err) => { // the async promise rejection is emitted here!
}); ``` Refer to the [Node.js documentation for `EventEmitter`](https://nodejs.org/api/events.html#class-eventemitter) for more information. --- # assert URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/assert/ import { Render } from "~/components"; <Render file="nodejs-compat-howto" /> The `assert` module in Node.js provides a number of assertions that are useful when building tests. ```js import { strictEqual, deepStrictEqual, ok, doesNotReject } from "node:assert"; strictEqual(1, 1); // ok! strictEqual(1, "1"); // fails! throws AssertionError deepStrictEqual({ a: { b: 1 } }, { a: { b: 1 } }); // ok! deepStrictEqual({ a: { b: 1 } }, { a: { b: 2 } }); // fails! throws AssertionError ok(true); // ok! ok(false); // fails! throws AssertionError await doesNotReject(async () => {}); // ok! await doesNotReject(async () => { throw new Error("boom"); }); // fails! throws AssertionError ``` :::note In the Workers implementation of `assert`, all assertions run in what Node.js calls strict assertion mode. In strict assertion mode, non-strict methods behave like their corresponding strict methods. For example, `deepEqual()` will behave like `deepStrictEqual()`. ::: Refer to the [Node.js documentation for `assert`](https://nodejs.org/dist/latest-v19.x/docs/api/assert.html) for more information. --- # AsyncLocalStorage URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/asynclocalstorage/ import { Render } from "~/components" ## Background <Render file="nodejs-compat-howto" /> Cloudflare Workers provides an implementation of a subset of the Node.js [`AsyncLocalStorage`](https://nodejs.org/dist/latest-v18.x/docs/api/async_context.html#class-asynclocalstorage) API for creating in-memory stores that remain coherent through asynchronous operations.
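As a minimal sketch of that coherence (using the standard Node.js API; the `handleRequest` helper and its argument are hypothetical), a store set with `run()` is still visible after an `await` inside the callback:

```javascript
import { AsyncLocalStorage } from 'node:async_hooks';

const als = new AsyncLocalStorage();

async function handleRequest(requestId) {
  // Everything inside this callback, including code that runs after the
  // await, sees the same store value for this logical request.
  return als.run(requestId, async () => {
    await new Promise((resolve) => setTimeout(resolve, 10));
    return als.getStore(); // still requestId after the await
  });
}

handleRequest('req-42').then((store) => console.log(store)); // logs 'req-42'
```

Two concurrent calls to `handleRequest` with different IDs would each observe their own store, which is the property that makes `AsyncLocalStorage` useful for per-request context.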
## Constructor

```js
import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();
```

* <code>new AsyncLocalStorage()</code> : AsyncLocalStorage
  * Returns a new `AsyncLocalStorage` instance.

## Methods

* `getStore()` : any
  * Returns the current store. If called outside of an asynchronous context initialized by calling `asyncLocalStorage.run()`, it returns `undefined`.

* <code>run(store, callback, ...args)</code> : any
  * Runs a function synchronously within a context and returns its return value. The store is not accessible outside of the callback function. The store is accessible to any asynchronous operations created within the callback. The optional `args` are passed to the callback function. If the callback function throws an error, the error is thrown by `run()` also.

* <code>exit(callback, ...args)</code> : any
  * Runs a function synchronously outside of a context and returns its return value. This method is equivalent to calling `run()` with the `store` value set to `undefined`.

## Static Methods

* `AsyncLocalStorage.bind(fn)` : function
  * Captures the asynchronous context that is current when `bind()` is called and returns a function that enters that context before calling the passed in function.

* `AsyncLocalStorage.snapshot()` : function
  * Captures the asynchronous context that is current when `snapshot()` is called and returns a function that enters that context before calling a given function.

## Examples

### Fetch Listener

```js
import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();

let idSeq = 0;

export default {
  async fetch(req) {
    return asyncLocalStorage.run(idSeq++, async () => {
      // Simulate some async activity...
      await scheduler.wait(1000);
      return new Response(asyncLocalStorage.getStore());
    });
  }
};
```

### Multiple stores

The API allows multiple `AsyncLocalStorage` instances to be used concurrently.
```js
import { AsyncLocalStorage } from 'node:async_hooks';

const als1 = new AsyncLocalStorage();
const als2 = new AsyncLocalStorage();

export default {
  async fetch(req) {
    return als1.run(123, () => {
      return als2.run(321, async () => {
        // Simulate some async activity...
        await scheduler.wait(1000);
        return new Response(`${als1.getStore()}-${als2.getStore()}`);
      });
    });
  }
};
```

### Unhandled Rejections

When a `Promise` rejects and the rejection is unhandled, the async context propagates to the `'unhandledrejection'` event handler:

```js
import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();

let idSeq = 0;

addEventListener('unhandledrejection', (event) => {
  console.log(asyncLocalStorage.getStore(), 'unhandled rejection!');
});

export default {
  async fetch(req) {
    return asyncLocalStorage.run(idSeq++, () => {
      // Cause an unhandled rejection!
      throw new Error('boom');
    });
  }
};
```

### `AsyncLocalStorage.bind()` and `AsyncLocalStorage.snapshot()`

```js
import { AsyncLocalStorage } from 'node:async_hooks';

const als = new AsyncLocalStorage();

function foo() { console.log(als.getStore()); }
function bar() { console.log(als.getStore()); }

const oneFoo = als.run(123, () => AsyncLocalStorage.bind(foo));
oneFoo(); // prints 123

const snapshot = als.run('abc', () => AsyncLocalStorage.snapshot());
snapshot(foo); // prints 'abc'
snapshot(bar); // prints 'abc'
```

```js
import { AsyncLocalStorage } from 'node:async_hooks';

const als = new AsyncLocalStorage();

class MyResource {
  #runInAsyncScope = AsyncLocalStorage.snapshot();

  doSomething() {
    return this.#runInAsyncScope(() => {
      return als.getStore();
    });
  }
};

const myResource = als.run(123, () => new MyResource());
console.log(myResource.doSomething()); // prints 123
```

## `AsyncResource`

The [`AsyncResource`](https://nodejs.org/dist/latest-v18.x/docs/api/async_context.html#class-asyncresource) class is a component of Node.js' async context tracking API that allows users to create their own async
contexts. Objects that extend from `AsyncResource` are capable of propagating the async context in much the same way as promises. Note that `AsyncLocalStorage.snapshot()` and `AsyncLocalStorage.bind()` provide a better approach. `AsyncResource` is provided solely for backwards compatibility with Node.js.

### Constructor

```js
import { AsyncResource, AsyncLocalStorage } from 'node:async_hooks';

const als = new AsyncLocalStorage();

class MyResource extends AsyncResource {
  constructor() {
    // The type string is required by Node.js but unused in Workers.
    super('MyResource');
  }

  doSomething() {
    return this.runInAsyncScope(() => {
      return als.getStore();
    });
  }
};

const myResource = als.run(123, () => new MyResource());
console.log(myResource.doSomething()); // prints 123
```

* <code>new AsyncResource(type, options)</code> : AsyncResource
  * Returns a new `AsyncResource`. Importantly, while the constructor arguments are required in Node.js' implementation of `AsyncResource`, they are not used in Workers.

* <code>AsyncResource.bind(fn, type, thisArg)</code>
  * Binds the given function to the current async context.

### Methods

* <code>asyncResource.bind(fn, thisArg)</code>
  * Binds the given function to the async context associated with this `AsyncResource`.

* <code>asyncResource.runInAsyncScope(fn, thisArg, ...args)</code>
  * Calls the provided function with the given arguments in the async context associated with this `AsyncResource`.

## Caveats

* The `AsyncLocalStorage` implementation provided by Workers intentionally omits support for the [`asyncLocalStorage.enterWith()`](https://nodejs.org/dist/latest-v18.x/docs/api/async_context.html#asynclocalstorageenterwithstore) and [`asyncLocalStorage.disable()`](https://nodejs.org/dist/latest-v18.x/docs/api/async_context.html#asynclocalstoragedisable) methods.
* Workers does not implement the full [`async_hooks`](https://nodejs.org/dist/latest-v18.x/docs/api/async_hooks.html) API upon which Node.js' implementation of `AsyncLocalStorage` is built. * Workers does not implement the ability to create an `AsyncResource` with an explicitly identified trigger context as allowed by Node.js. This means that a new `AsyncResource` will always be bound to the async context in which it was created. * Thenables (non-Promise objects that expose a `then()` method) are not fully supported when using `AsyncLocalStorage`. When working with thenables, instead use [`AsyncLocalStorage.snapshot()`](https://nodejs.org/api/async_context.html#static-method-asynclocalstoragesnapshot) to capture a snapshot of the current context. --- # Buffer URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/buffer/ import { Render } from "~/components" <Render file="nodejs-compat-howto" /> The `Buffer` API in Node.js is one of the most commonly used Node.js APIs for manipulating binary data. Every `Buffer` instance extends from the standard [`Uint8Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array) class, but adds a range of unique capabilities such as built-in base64 and hex encoding/decoding, byte-order manipulation, and encoding-aware substring searching. ```js import { Buffer } from 'node:buffer'; const buf = Buffer.from('hello world', 'utf8'); console.log(buf.toString('hex')); // Prints: 68656c6c6f20776f726c64 console.log(buf.toString('base64')); // Prints: aGVsbG8gd29ybGQ= ``` A Buffer extends from `Uint8Array`. 
Therefore, it can be used in any Workers API that currently accepts `Uint8Array`, such as creating a new Response: ```js const response = new Response(Buffer.from("hello world")); ``` You can also use the `Buffer` API when interacting with streams: ```js const writable = getWritableStreamSomehow(); const writer = writable.getWriter(); writer.write(Buffer.from("hello world")); ``` Refer to the [Node.js documentation for `Buffer`](https://nodejs.org/dist/latest-v19.x/docs/api/buffer.html) for more information. --- # Crypto URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/ import { Render } from "~/components"; <Render file="nodejs-compat-howto" /> The `node:crypto` module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. A subset of the `node:crypto` module is available in Workers. All APIs in the tables below with a ✅ are supported, and unless otherwise noted, work the same way as the implementations in Node.js. The [WebCrypto API](/workers/runtime-apis/web-crypto/) is also available within Cloudflare Workers. ## Classes | API | Supported? 
| Notes | | --------------------------------------------------------------------------------- | ---------- | ----- | | [Certificate](https://nodejs.org/api/crypto.html#class-certificate) | ✅ | | | [Cipher](https://nodejs.org/api/crypto.html#class-cipher) | | | | [Decipher](https://nodejs.org/api/crypto.html#class-decipher) | | | | [DiffieHellman](https://nodejs.org/api/crypto.html#class-diffiehellman) | ✅ | | | [DiffieHellmanGroup](https://nodejs.org/api/crypto.html#class-diffiehellmangroup) | ✅ | | | [ECDH](https://nodejs.org/api/crypto.html#class-ecdh) | | | | [Hash](https://nodejs.org/api/crypto.html#class-hash) | ✅ | | | [Hmac](https://nodejs.org/api/crypto.html#class-hmac) | ✅ | | | [KeyObject](https://nodejs.org/api/crypto.html#class-keyobject) | ✅ | | | [Sign](https://nodejs.org/api/crypto.html#class-sign) | | | | [Verify](https://nodejs.org/api/crypto.html#class-verify) | | | | [X509Certificate](https://nodejs.org/api/crypto.html#class-x509certificate) | ✅ | | | [constants](https://nodejs.org/api/crypto.html#cryptoconstants) | | | ## Primes | API | Supported? | Notes | | -------------------------------------------------------------------------------------------- | ---------- | ----- | | [checkPrime](https://nodejs.org/api/crypto.html#cryptocheckprimecandidate-options-callback) | ✅ | | | [checkPrimeSync](https://nodejs.org/api/crypto.html#cryptocheckprimesynccandidate-options) | ✅ | | | [generatePrime](https://nodejs.org/api/crypto.html#cryptogenerateprimesize-options-callback) | ✅ | | | [generatePrimeSync](https://nodejs.org/api/crypto.html#cryptogenerateprimesyncsize-options) | ✅ | | ## Ciphers | API | Supported? 
| Notes | | ----------------------------------------------------------------------------------------------------- | ---------- | ------------------------------------------ | | [createCipher](https://nodejs.org/api/crypto.html#cryptocreatecipheralgorithm-password-options) | | Deprecated, use `createCipheriv` instead | | [createCipheriv](https://nodejs.org/api/crypto.html#cryptocreatecipherivalgorithm-key-iv-options) | | | | [createDecipher](https://nodejs.org/api/crypto.html#cryptocreatedecipheralgorithm-password-options) | | Deprecated, use `createDecipheriv` instead | | [createDecipheriv](https://nodejs.org/api/crypto.html#cryptocreatedecipherivalgorithm-key-iv-options) | | | | [privateDecrypt](https://nodejs.org/api/crypto.html#cryptoprivatedecryptprivatekey-buffer) | | | | [privateEncrypt](https://nodejs.org/api/crypto.html#cryptoprivateencryptprivatekey-buffer) | | | | [publicDecrypt](https://nodejs.org/api/crypto.html#cryptopublicdecryptkey-buffer) | | | | [publicEncrypt](https://nodejs.org/api/crypto.html#cryptopublicencryptkey-buffer) | | | ## DiffieHellman | API | Supported? | Notes | | ----------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ----- | | [createDiffieHellman(prime)](https://nodejs.org/api/crypto.html#cryptocreatediffiehellmanprime-primeencoding-generator-generatorencoding) | ✅ | | | [createDiffieHellman(primeLength)](https://nodejs.org/api/crypto.html#cryptocreatediffiehellmanprimelength-generator) | ✅ | | | [createDiffieHellmanGroup](https://nodejs.org/api/crypto.html#cryptocreatediffiehellmangroupname) | ✅ | | | [createECDH](https://nodejs.org/api/crypto.html#cryptocreateecdhcurvename) | | | | [diffieHellman](https://nodejs.org/api/crypto.html#cryptodiffiehellmanoptions) | | | | [getDiffieHellman](https://nodejs.org/api/crypto.html#cryptogetdiffiehellmangroupname) | ✅ | | ## Hash | API | Supported? 
| Notes | | -------------------------------------------------------------------------------------- | ---------- | ----- | | [createHash](https://nodejs.org/api/crypto.html#cryptocreatehashalgorithm-options) | ✅ | | | [createHmac](https://nodejs.org/api/crypto.html#cryptocreatehmacalgorithm-key-options) | ✅ | | | [getHashes](https://nodejs.org/api/crypto.html#cryptogethashes) | ✅ | | ## Keys | API | Supported? | Notes | | ------------------------------------------------------------------------------------------------ | ---------- | ----- | | [createPrivateKey](https://nodejs.org/api/crypto.html#cryptocreateprivatekeykey) | ✅ | | | [createPublicKey](https://nodejs.org/api/crypto.html#cryptocreatepublickeykey) | ✅ | | | [createSecretKey](https://nodejs.org/api/crypto.html#cryptocreatesecretkeykey-encoding) | ✅ | | | [generateKey](https://nodejs.org/api/crypto.html#cryptogeneratekeytype-options-callback) | ✅ | | | [generateKeyPair](https://nodejs.org/api/crypto.html#cryptogeneratekeypairtype-options-callback) | ✅ | Does not support DSA or DH key pairs | | [generateKeyPairSync](https://nodejs.org/api/crypto.html#cryptogeneratekeypairsynctype-options) | ✅ | Does not support DSA or DH key pairs | | [generateKeySync](https://nodejs.org/api/crypto.html#cryptogeneratekeysynctype-options) | ✅ | | ## Sign/Verify | API | Supported? | Notes | | ---------------------------------------------------------------------------------------------- | ---------- | ----- | | [createSign](https://nodejs.org/api/crypto.html#cryptocreatesignalgorithm-options) | | | | [createVerify](https://nodejs.org/api/crypto.html#cryptocreateverifyalgorithm-options) | | | | [sign](https://nodejs.org/api/crypto.html#cryptosignalgorithm-data-key-callback) | | | | [verify](https://nodejs.org/api/crypto.html#cryptoverifyalgorithm-data-key-signature-callback) | | | ## Misc | API | Supported? 
| Notes | | ---------------------------------------------------------------------------------------- | ---------- | ----- | | [getCipherInfo](https://nodejs.org/api/crypto.html#cryptogetcipherinfonameornid-options) | | | | [getCiphers](https://nodejs.org/api/crypto.html#cryptogetciphers) | ✅ | | | [getCurves](https://nodejs.org/api/crypto.html#cryptogetcurves) | ✅ | | | [secureHeapUsed](https://nodejs.org/api/crypto.html#cryptosecureheapused) | ✅ | | | [setEngine](https://nodejs.org/api/crypto.html#cryptosetengineengine-flags) | ✅ | | | [timingSafeEqual](https://nodejs.org/api/crypto.html#cryptotimingsafeequala-b) | ✅ | | ## Fips | API | Supported? | Notes | | --------------------------------------------------------------- | ---------- | ----------------------------------- | | [getFips](https://nodejs.org/api/crypto.html#cryptogetfips) | ✅ | | | [fips](https://nodejs.org/api/crypto.html#cryptofips) | ✅ | Deprecated, use `getFips()` instead | | [setFips](https://nodejs.org/api/crypto.html#cryptosetfipsbool) | ✅ | | ## Random | API | Supported? | Notes | | -------------------------------------------------------------------------------------------- | ---------- | ----- | | [getRandomValues](https://nodejs.org/api/crypto.html#cryptogetrandomvaluestypedarray) | ✅ | | | [randomBytes](https://nodejs.org/api/crypto.html#cryptorandombytessize-callback) | ✅ | | | [randomFillSync](https://nodejs.org/api/crypto.html#cryptorandomfillsyncbuffer-offset-size) | ✅ | | | [randomFill](https://nodejs.org/api/crypto.html#cryptorandomfillbuffer-offset-size-callback) | ✅ | | | [randomInt](https://nodejs.org/api/crypto.html#cryptorandomintmin-max-callback) | ✅ | | | [randomUUID](https://nodejs.org/api/crypto.html#cryptorandomuuidoptions) | ✅ | | ## Key Derivation | API | Supported? 
| Notes |
| ------------------------------------------------------------------------------------------------------------ | ---------- | ------------------------------ |
| [hkdf](https://nodejs.org/api/crypto.html#cryptohkdfdigest-ikm-salt-info-keylen-callback) | ✅ | Does not yet support KeyObject |
| [hkdfSync](https://nodejs.org/api/crypto.html#cryptohkdfsyncdigest-ikm-salt-info-keylen) | ✅ | Does not yet support KeyObject |
| [pbkdf2](https://nodejs.org/api/crypto.html#cryptopbkdf2password-salt-iterations-keylen-digest-callback) | ✅ | |
| [pbkdf2Sync](https://nodejs.org/api/crypto.html#cryptopbkdf2password-salt-iterations-keylen-digest-callback) | ✅ | |
| [scrypt](https://nodejs.org/api/crypto.html#cryptoscryptpassword-salt-keylen-options-callback) | ✅ | |
| [scryptSync](https://nodejs.org/api/crypto.html#cryptoscryptsyncpassword-salt-keylen-options) | ✅ | |

## WebCrypto

| API | Supported? | Notes |
| --------------------------------------------------------- | ---------- | ----- |
| [subtle](https://nodejs.org/api/crypto.html#cryptosubtle) | ✅ | |
| [webcrypto](https://nodejs.org/api/crypto.html#cryptowebcrypto) | ✅ | |

---

# dns

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/dns/

import { Render, TypeScriptExample } from "~/components";

<Render file="nodejs-compat-howto" />

You can use [`node:dns`](https://nodejs.org/api/dns.html) for name resolution via [DNS over HTTPS](/1.1.1.1/encryption/dns-over-https/) using [Cloudflare DNS](https://www.cloudflare.com/application-services/products/dns/) at 1.1.1.1.

<TypeScriptExample filename="index.ts">

```ts
import dns from 'node:dns';

const response = await dns.promises.resolve4('cloudflare.com');
```

</TypeScriptExample>

All `node:dns` functions are available, except `lookup`, `lookupService`, and `resolve`, which throw "Not implemented" errors when called.

:::note
Each DNS request executes a subrequest, which counts toward your [Worker's subrequest limit](/workers/platform/limits/#subrequests).
:::

The full `node:dns` API is documented in the [Node.js documentation for `node:dns`](https://nodejs.org/api/dns.html).

---

# Diagnostics Channel

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/diagnostics-channel/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

The [`diagnostics_channel`](https://nodejs.org/dist/latest-v20.x/docs/api/diagnostics_channel.html) module provides an API to create named channels to report arbitrary message data for diagnostics purposes. The API is essentially a simple event pub/sub model that is specifically designed to support low-overhead diagnostics reporting.

```js
import {
  channel,
  hasSubscribers,
  subscribe,
  unsubscribe,
  tracingChannel,
} from 'node:diagnostics_channel';

// For publishing messages to a channel, acquire a channel object:
const myChannel = channel('my-channel');

// Any JS value can be published to a channel.
myChannel.publish({ foo: 'bar' });

// For receiving messages on a channel, use subscribe:
subscribe('my-channel', (message) => {
  console.log(message);
});
```

All `Channel` instances are singletons per Isolate/context (for example, the same entry point). Subscribers are always invoked synchronously and in the order they were registered, much like an `EventTarget` or Node.js `EventEmitter` class.

## Integration with Tail Workers

When using [Tail Workers](/workers/observability/logs/tail-workers/), all messages published to any channel will also be forwarded to the [Tail Worker](/workers/observability/logs/tail-workers/).
Within the Tail Worker, the diagnostic channel messages can be accessed via the `diagnosticsChannelEvents` property:

```js
export default {
  async tail(events) {
    for (const event of events) {
      for (const messageData of event.diagnosticsChannelEvents) {
        console.log(messageData.timestamp, messageData.channel, messageData.message);
      }
    }
  }
}
```

Note that messages published to the Tail Worker are passed through the [structured clone algorithm](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm) (the same mechanism as the [`structuredClone()`](https://developer.mozilla.org/en-US/docs/Web/API/structuredClone) API), so only values that can be successfully cloned are supported.

## `TracingChannel`

Per the Node.js documentation, "[`TracingChannel`](https://nodejs.org/api/diagnostics_channel.html#class-tracingchannel) is a collection of \[Channels] which together express a single traceable action. `TracingChannel` is used to formalize and simplify the process of producing events for tracing application flow."

```js
import { tracingChannel } from 'node:diagnostics_channel';
import { AsyncLocalStorage } from 'node:async_hooks'

const channels = tracingChannel('my-channel');
const requestId = new AsyncLocalStorage();

channels.start.bindStore(requestId);

channels.subscribe({
  start(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle start message
  },
  end(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle end message
  },
  asyncStart(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle asyncStart message
  },
  asyncEnd(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle asyncEnd message
  },
  error(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle error message
  },
});

// The subscriber handlers will be invoked while tracing the execution of the async
// function passed into `channels.tracePromise`...
channels.tracePromise(async () => {
  // Perform some asynchronous work...
}, { requestId: '123' });
```

Refer to the [Node.js documentation for `diagnostics_channel`](https://nodejs.org/dist/latest-v20.x/docs/api/diagnostics_channel.html) for more information.

---

# Node.js compatibility

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/

import { DirectoryListing, WranglerConfig } from "~/components";

When you write a Worker, you may need to import packages from [npm](https://www.npmjs.com/). Many npm packages rely on APIs from the [Node.js runtime](https://nodejs.org/en/about), and will not work unless these Node.js APIs are available.

Cloudflare Workers provides a subset of Node.js APIs in two forms:

1. As built-in APIs provided by the Workers Runtime
2. As polyfills that [Wrangler](/workers/wrangler/) adds to your Worker's code

You can view which APIs are supported using the [Node.js compatibility matrix](https://workers-nodejs-compat-matrix.pages.dev).

## Get Started

To enable built-in Node.js APIs and add polyfills, you need to add the `nodejs_compat` compatibility flag to your wrangler configuration. This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

<WranglerConfig>

```toml title="wrangler.toml"
compatibility_flags = [ "nodejs_compat" ]
compatibility_date = "2024-09-23"
```

</WranglerConfig>

## Built-in Node.js Runtime APIs

The following APIs from Node.js are provided directly by the Workers Runtime when either `nodejs_compat` or `nodejs_compat_v2` are enabled:

<DirectoryListing />

Unless otherwise specified, implementations of Node.js APIs in Workers are intended to match the implementation in the [Current release of Node.js](https://github.com/nodejs/release#release-schedule).
## Node.js API Polyfills

Polyfills are added by [Wrangler](/workers/wrangler/) when the `nodejs_compat` compatibility flag described above is enabled. Adding polyfills maximizes compatibility with existing npm packages, while recognizing that not all APIs from Node.js make sense in the context of serverless functions.

Where it is possible to provide a fully functional polyfill of the relevant Node.js API, unenv will do so. In cases where this is not possible, such as the [`fs`](https://nodejs.org/api/fs.html) module, unenv adds a module with mocked methods. Calling these mocked methods will either noop or will throw an error with a message like:

```
[unenv] <method name> is not implemented yet!
```

This allows you to import packages that use these Node.js modules, even if certain methods are not supported.

For a full list of APIs supported, including information on which are mocked, see the [Node.js compatibility matrix](https://workers-nodejs-compat-matrix.pages.dev).

If an API you wish to use is missing and you want to suggest that Workers support it, please add a post or comment in the [Node.js APIs discussions category](https://github.com/cloudflare/workerd/discussions/categories/node-js-apis) on GitHub.

## Enable only AsyncLocalStorage

To enable only the Node.js `AsyncLocalStorage` API, use the `nodejs_als` compatibility flag.
<WranglerConfig>

```toml
compatibility_flags = [ "nodejs_als" ]
```

</WranglerConfig>

---

# net

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/net/

import { Render, TypeScriptExample } from "~/components";

<Render file="nodejs-compat-howto" />

You can use [`node:net`](https://nodejs.org/api/net.html) to create a direct connection to servers via TCP sockets with [`net.Socket`](https://nodejs.org/api/net.html#class-netsocket). These functions use [`connect`](/workers/runtime-apis/tcp-sockets/#connect) functionality from the built-in `cloudflare:sockets` module.

<TypeScriptExample filename="index.ts">

```ts
import net from "node:net";

const exampleIP = "127.0.0.1";

export default {
  async fetch(req): Promise<Response> {
    const socket = new net.Socket();
    socket.connect(4000, exampleIP, function () {
      console.log("Connected");
    });
    socket.write("Hello, Server!");
    socket.end();
    return new Response("Wrote to server", { status: 200 });
  },
} satisfies ExportedHandler;
```

</TypeScriptExample>

Additionally, other APIs such as [`net.BlockList`](https://nodejs.org/api/net.html#class-netblocklist) and [`net.SocketAddress`](https://nodejs.org/api/net.html#class-netsocketaddress) are available. Note that the [`net.Server`](https://nodejs.org/api/net.html#class-netserver) class is not supported by Workers.

The full `node:net` API is documented in the [Node.js documentation for `node:net`](https://nodejs.org/api/net.html).

---

# path

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/path/

import { Render } from "~/components";

<Render file="nodejs-compat-howto" />

The [`node:path`](https://nodejs.org/api/path.html) module provides utilities for working with file and directory paths.
The `node:path` module can be accessed using: ```js import path from "node:path"; path.join("/foo", "bar", "baz/asdf", "quux", ".."); // Returns: '/foo/bar/baz/asdf' ``` Refer to the [Node.js documentation for `path`](https://nodejs.org/api/path.html) for more information. --- # process URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/process/ import { Render } from "~/components" <Render file="nodejs-compat-howto" /> The [`process`](https://nodejs.org/dist/latest-v19.x/docs/api/process.html) module in Node.js provides a number of useful APIs related to the current process. Within a serverless environment like Workers, most of these APIs are not relevant or meaningful, but some are useful for cross-runtime compatibility. Within Workers, the following APIs are available: ```js import { env, nextTick, } from 'node:process'; env['FOO'] = 'bar'; console.log(env['FOO']); // Prints: bar nextTick(() => { console.log('next tick'); }); ``` ## `process.env` In the Node.js implementation of `process.env`, the `env` object is a copy of the environment variables at the time the process was started. In the Workers implementation, there is no process-level environment, so `env` is an empty object. You can still set and get values from `env`, and those will be globally persistent for all Workers running in the same isolate and context (for example, the same Workers entry point). ### Relationship to per-request `env` argument in `fetch()` handlers Workers do have a concept of [environment variables](/workers/configuration/environment-variables/) that are applied on a per-Worker and per-request basis. These are not accessible automatically via the `process.env` API. It is possible to manually copy these values into `process.env` if you need to. Be aware, however, that setting any value on `process.env` will coerce that value into a string. 
```js
import * as process from 'node:process';

export default {
  fetch(req, env) {
    // Set process.env.FOO to the value of env.FOO if process.env.FOO is not already set
    // and env.FOO is a string.
    process.env.FOO ??= (() => {
      if (typeof env.FOO === 'string') {
        return env.FOO;
      }
    })();
    return new Response(process.env.FOO);
  }
};
```

It is strongly recommended that you *do not* replace the entire `process.env` object with the request `env` object. Doing so will cause you to lose any environment variables that were set previously and will cause unexpected behavior for other Workers running in the same isolate. Specifically, it would cause inconsistency with the `process.env` object when accessed via named imports.

```js
import * as process from 'node:process';
import { env } from 'node:process';

process.env === env; // true! they are the same object

process.env = {}; // replace the object! Do not do this!

process.env === env; // false! they are no longer the same object

// From this point forward, any changes to process.env will not be reflected in env,
// and vice versa!
```

## `process.nextTick()`

The Workers implementation of `process.nextTick()` is a wrapper for the standard Web Platform API [`queueMicrotask()`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/queueMicrotask).

Refer to the [Node.js documentation for `process`](https://nodejs.org/dist/latest-v19.x/docs/api/process.html) for more information.

---

# Streams

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

The [Node.js streams API](https://nodejs.org/api/stream.html) is the original API for working with streaming data in JavaScript, predating the [WHATWG ReadableStream standard](https://streams.spec.whatwg.org/). A stream is an abstract interface for working with streaming data in Node.js. Streams can be readable, writable, or both.
All streams are instances of [EventEmitter](/workers/runtime-apis/nodejs/eventemitter/).

Where possible, you should use the [WHATWG standard "Web Streams" API](https://streams.spec.whatwg.org/), which is [supported in Workers](/workers/runtime-apis/streams/).

```js
import {
  Readable,
  Transform,
} from 'node:stream';

import {
  text,
} from 'node:stream/consumers';

import {
  pipeline,
} from 'node:stream/promises';

// A Node.js-style Transform that converts data to uppercase
// and appends a newline to the end of the output.
class MyTransform extends Transform {
  constructor() {
    super({ encoding: 'utf8' });
  }
  _transform(chunk, _, cb) {
    this.push(chunk.toString().toUpperCase());
    cb();
  }
  _flush(cb) {
    this.push('\n');
    cb();
  }
}

export default {
  async fetch() {
    const chunks = [
      "hello ",
      "from ",
      "the ",
      "wonderful ",
      "world ",
      "of ",
      "node.js ",
      "streams!"
    ];

    function nextChunk(readable) {
      readable.push(chunks.shift());
      if (chunks.length === 0) readable.push(null);
      else queueMicrotask(() => nextChunk(readable));
    }

    // A Node.js-style Readable that emits chunks from the
    // array...
    const readable = new Readable({
      encoding: 'utf8',
      read() { nextChunk(readable); }
    });

    const transform = new MyTransform();
    await pipeline(readable, transform);
    return new Response(await text(transform));
  }
};
```

Refer to the [Node.js documentation for `stream`](https://nodejs.org/api/stream.html) for more information.

---

# StringDecoder

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/string-decoder/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

The [`node:string_decoder`](https://nodejs.org/api/string_decoder.html) module is a legacy utility that predates the WHATWG standard [TextEncoder](/workers/runtime-apis/encoding/#textencoder) and [TextDecoder](/workers/runtime-apis/encoding/#textdecoder) API. In most cases, you should use `TextEncoder` and `TextDecoder` instead.
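For comparison, the standard `TextDecoder` covers the same multi-byte buffering use case when called with `{ stream: true }` — a minimal sketch:

```js
// TextDecoder with { stream: true } buffers incomplete multi-byte
// sequences across calls, much like StringDecoder's write() does.
const decoder = new TextDecoder('utf-8');
const euro = new Uint8Array([0xE2, 0x82, 0xAC]); // '€' in UTF-8

// Feed the bytes one at a time; no partial characters are emitted.
let out = '';
for (const byte of euro) {
  out += decoder.decode(new Uint8Array([byte]), { stream: true });
}
console.log(out); // '€'
```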
`StringDecoder` is available in the Workers runtime primarily for compatibility with existing npm packages that rely on it. `StringDecoder` can be accessed using:

```js
const { StringDecoder } = require('node:string_decoder');
const decoder = new StringDecoder('utf8');

const cent = Buffer.from([0xC2, 0xA2]);
console.log(decoder.write(cent));

const euro = Buffer.from([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro));
```

Refer to the [Node.js documentation for `string_decoder`](https://nodejs.org/dist/latest-v20.x/docs/api/string_decoder.html) for more information.

---

# test

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/test/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

## `MockTracker`

The `MockTracker` API in Node.js provides a means of tracking and managing mock objects in a test environment.

```js
import { mock } from 'node:test';

const fn = mock.fn();
fn(1, 2, 3); // does nothing... but
console.log(fn.mock.callCount()); // Records how many times it was called
console.log(fn.mock.calls[0].arguments); // Records the arguments that were passed on each call
```

The full `MockTracker` API is documented in the [Node.js documentation for `MockTracker`](https://nodejs.org/docs/latest/api/test.html#class-mocktracker). The Workers implementation of `MockTracker` currently does not include an implementation of the [Node.js mock timers API](https://nodejs.org/docs/latest/api/test.html#class-mocktimers).

---

# timers

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/timers/

import { Render, TypeScriptExample } from "~/components";

<Render file="nodejs-compat-howto" />

Use [`node:timers`](https://nodejs.org/api/timers.html) APIs to schedule functions to be executed later.
This includes [`setTimeout`](https://nodejs.org/api/timers.html#settimeoutcallback-delay-args) for calling a function after a delay, [`setInterval`](https://nodejs.org/api/timers.html#setintervalcallback-delay-args) for calling a function repeatedly, and [`setImmediate`](https://nodejs.org/api/timers.html#setimmediatecallback-args) for calling a function in the next iteration of the event loop.

<TypeScriptExample filename="index.ts">

```ts
import timers from "node:timers";

console.log("first");
timers.setTimeout(() => {
  console.log("last");
}, 10);

timers.setTimeout(() => {
  console.log("next");
});
```

</TypeScriptExample>

:::note
Due to [security-based restrictions on timers](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading) in Workers, timers are limited to returning the time of the last I/O. This means that while `setTimeout`, `setInterval`, and `setImmediate` will defer your function execution until after other events have run, they will not delay them for the full time specified.
:::

:::note
When called at the global level (on [`globalThis`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/globalThis)), functions such as `clearTimeout` and `setTimeout` will respect web standards rather than Node.js-specific functionality. For complete Node.js compatibility, you must call functions from the `node:timers` module.
:::

The full `node:timers` API is documented in the [Node.js documentation for `node:timers`](https://nodejs.org/api/timers.html).

---

# url

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/url/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

## domainToASCII

Returns the Punycode ASCII serialization of the domain. If domain is an invalid domain, the empty string is returned.
```js
import { domainToASCII } from 'node:url';

console.log(domainToASCII('español.com'));
// Prints xn--espaol-zwa.com
console.log(domainToASCII('中文.com'));
// Prints xn--fiq228c.com
console.log(domainToASCII('xn--iñvalid.com'));
// Prints an empty string
```

## domainToUnicode

Returns the Unicode serialization of the domain. If domain is an invalid domain, the empty string is returned. It performs the inverse operation to `domainToASCII()`.

```js
import { domainToUnicode } from 'node:url';

console.log(domainToUnicode('xn--espaol-zwa.com'));
// Prints español.com
console.log(domainToUnicode('xn--fiq228c.com'));
// Prints 中文.com
console.log(domainToUnicode('xn--iñvalid.com'));
// Prints an empty string
```

---

# zlib

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/zlib/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

The `node:zlib` module provides compression functionality implemented using Gzip, Deflate/Inflate, and Brotli. To access it:

```js
import zlib from 'node:zlib';
```

The full `node:zlib` API is documented in the [Node.js documentation for `node:zlib`](https://nodejs.org/api/zlib.html).

---

# util

URL: https://developers.cloudflare.com/workers/runtime-apis/nodejs/util/

import { Render } from "~/components"

<Render file="nodejs-compat-howto" />

## promisify/callbackify

The `promisify` and `callbackify` APIs in Node.js provide a means of bridging between a Promise-based programming model and a callback-based model.

The `promisify` method allows taking a Node.js-style callback function and converting it into a Promise-returning async function:

```js
import { promisify } from 'node:util';

function foo(args, callback) {
  try {
    callback(null, 1);
  } catch (err) {
    // Errors are emitted to the callback via the first argument.
    callback(err);
  }
}

const promisifiedFoo = promisify(foo);
await promisifiedFoo(args);
```

Similarly to `promisify`, `callbackify` converts a Promise-returning async function into a Node.js-style callback function:

```js
import { callbackify } from 'node:util';

async function foo(args) {
  throw new Error('boom');
}

const callbackifiedFoo = callbackify(foo);
callbackifiedFoo(args, (err, value) => {
  if (err) throw err;
});
```

`callbackify` and `promisify` make it easy to handle all of the challenges that come with bridging between callbacks and promises.

Refer to the [Node.js documentation for `callbackify`](https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilcallbackifyoriginal) and [Node.js documentation for `promisify`](https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilpromisifyoriginal) for more information.

## util.types

The `util.types` API provides a reliable and efficient way of checking that values are instances of various built-in types.

```js
import { types } from 'node:util';

types.isAnyArrayBuffer(new ArrayBuffer()); // Returns true
types.isAnyArrayBuffer(new SharedArrayBuffer()); // Returns true
types.isArrayBufferView(new Int8Array()); // true
types.isArrayBufferView(Buffer.from('hello world')); // true
types.isArrayBufferView(new DataView(new ArrayBuffer(16))); // true
types.isArrayBufferView(new ArrayBuffer()); // false
function foo() {
  types.isArgumentsObject(arguments); // Returns true
}
types.isAsyncFunction(function foo() {}); // Returns false
types.isAsyncFunction(async function foo() {}); // Returns true
// .. and so on
```

:::caution
The Workers implementation currently does not provide implementations of the `util.types.isExternal()`, `util.types.isProxy()`, `util.types.isKeyObject()`, or `util.types.isWebAssemblyCompiledModule()` APIs.
:::

For more about `util.types`, refer to the [Node.js documentation for `util.types`](https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utiltypes).
## util.MIMEType

`util.MIMEType` provides convenience methods that allow you to more easily work with and manipulate [MIME types](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types). For example:

```js
import { MIMEType } from 'node:util';

const myMIME = new MIMEType('text/javascript;key=value');

console.log(myMIME.type);
// Prints: text
console.log(myMIME.essence);
// Prints: text/javascript
console.log(myMIME.subtype);
// Prints: javascript
console.log(String(myMIME));
// Prints: text/javascript;key=value
```

For more about `util.MIMEType`, refer to the [Node.js documentation for `util.MIMEType`](https://nodejs.org/api/util.html#class-utilmimetype).

---

# Error handling

URL: https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/

## Exceptions

An exception thrown by an RPC method implementation will propagate to the caller. If it is one of the standard JavaScript Error types, the `message` and prototype's `name` will be retained, though the stack trace is not.

### Unsupported error types

* If an [`AggregateError`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/AggregateError) is thrown by an RPC method, it is not propagated back to the caller.
* The [`SuppressedError`](https://github.com/tc39/proposal-explicit-resource-management?tab=readme-ov-file#the-suppressederror-error) type from the Explicit Resource Management proposal is not currently implemented or supported in Workers.
* Own properties of error objects, such as the [`cause`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/cause) property, are not propagated back to the caller.

## Additional properties

For some remote exceptions, the runtime may set properties on the propagated exception to provide more information about the error; see [Durable Object error handling](/durable-objects/best-practices/error-handling) for more details.
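As a sketch of what this means for the caller (the `MY_SERVICE` binding and its `doWork()` method are hypothetical), you can catch a propagated error and rely on its `name` and `message`, but not on its stack trace or own properties like `cause`:

```js
const handler = {
  async fetch(request, env) {
    try {
      await env.MY_SERVICE.doWork();
    } catch (err) {
      // For standard Error types, `name` and `message` survive the
      // RPC boundary; the stack trace and own properties such as
      // `cause` do not.
      return new Response(`${err.name}: ${err.message}`, { status: 500 });
    }
    return new Response("ok");
  },
};

export default handler;
```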
---

# Remote-procedure call (RPC)

URL: https://developers.cloudflare.com/workers/runtime-apis/rpc/

import { DirectoryListing, Render, Stream, WranglerConfig } from "~/components"

:::note
To use RPC, [define a compatibility date](/workers/configuration/compatibility-dates) of `2024-04-03` or higher, or include `rpc` in your [compatibility flags](/workers/configuration/compatibility-flags/).
:::

Workers provide a built-in, JavaScript-native [RPC (Remote Procedure Call)](https://en.wikipedia.org/wiki/Remote_procedure_call) system, allowing you to:

* Define public methods on your Worker that can be called by other Workers on the same Cloudflare account, via [Service Bindings](/workers/runtime-apis/bindings/service-bindings/rpc)
* Define public methods on [Durable Objects](/durable-objects) that can be called by other Workers on the same Cloudflare account that declare a binding to it.

The RPC system is designed to feel as similar as possible to calling a JavaScript function in the same Worker. In most cases, you should be able to write code in the same way you would if everything was in a single Worker.

## Example

<Render file="service-binding-rpc-example" product="workers" />

The client, in this case Worker A, calls Worker B and tells it to execute a specific procedure using specific arguments that the client provides. This is accomplished with standard JavaScript classes.

## All calls are asynchronous

Whether or not the method you are calling was declared asynchronous on the server side, it will behave as such on the client side. You must `await` the result.

Note that RPC calls do not actually return `Promise`s, but they return a type that behaves like a `Promise`. The type is a "custom thenable", in that it implements the method `then()`. JavaScript supports awaiting any "thenable" type, so, for the most part, you can treat the return value like a Promise. (We'll see why the type is not actually a Promise a bit later.)
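The thenable mechanism itself can be illustrated in plain JavaScript. This is a hedged sketch of the general language behavior, not the actual RPC implementation: `await` accepts any object that exposes a `then()` method, whether or not it is a real `Promise`:

```js
// A minimal custom thenable: not a Promise, but still awaitable,
// because `await` works with any object that has a then() method.
const thenable = {
  then(resolve) {
    resolve(42);
  },
};

console.log(thenable instanceof Promise); // false
const value = await thenable; // 42
console.log(value);
```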
## Structured clonable types, and more

Nearly all types that are [Structured Cloneable](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm#supported_types) can be used as a parameter or return value of an RPC method. This includes most basic "value" types in JavaScript, including objects, arrays, strings and numbers.

As an exception to Structured Clone, application-defined classes (or objects with custom prototypes) cannot be passed over RPC, except as described below.

The RPC system also supports a number of types that are not Structured Cloneable, including:

* Functions, which are replaced by stubs that call back to the sender.
* Application-defined classes that extend `RpcTarget`, which are similarly replaced by stubs.
* [ReadableStream](/workers/runtime-apis/streams/readablestream/) and [WritableStream](/workers/runtime-apis/streams/writablestream/), with automatic streaming flow control.
* [Request](/workers/runtime-apis/request/) and [Response](/workers/runtime-apis/response/), for conveniently representing HTTP messages.
* RPC stubs themselves, even if the stub was received from a third Worker.

## Functions

You can send a function over RPC. When you do so, the function is replaced by a "stub". The recipient can call the stub like a function, but doing so makes a new RPC back to the place where the function originated.

### Return functions from RPC methods

<Render file="service-binding-rpc-functions-example" product="workers" />

### Send functions as parameters of RPC methods

You can also send a function in the parameters of an RPC. This enables the "server" to call back to the "client", reversing the direction of the relationship.

Because of this, the words "client" and "server" can be ambiguous when talking about RPC. The "server" is a Durable Object or WorkerEntrypoint, and the "client" is the Worker that invoked the server via a binding. But, RPCs can flow both ways between the two.
When talking about an individual RPC, we recommend instead using the words "caller" and "callee". ## Class Instances To use an instance of a class that you define as a parameter or return value of an RPC method, you must extend the built-in `RpcTarget` class. Consider the following example: <WranglerConfig> ```toml name = "counter" main = "./src/counter.js" ``` </WranglerConfig> ```js import { WorkerEntrypoint, RpcTarget } from "cloudflare:workers"; class Counter extends RpcTarget { #value = 0; increment(amount) { this.#value += amount; return this.#value; } get value() { return this.#value; } } export class CounterService extends WorkerEntrypoint { async newCounter() { return new Counter(); } } export default { fetch() { return new Response("ok") } } ``` The method `increment` can be called directly by the client, as can the public property `value`: <WranglerConfig> ```toml name = "client-worker" main = "./src/clientWorker.js" services = [ { binding = "COUNTER_SERVICE", service = "counter", entrypoint = "CounterService" } ] ``` </WranglerConfig> ```js export default { async fetch(request, env) { using counter = await env.COUNTER_SERVICE.newCounter(); await counter.increment(2); // returns 2 await counter.increment(1); // returns 3 await counter.increment(-5); // returns -2 const count = await counter.value; // returns -2 return new Response(count); } } ``` :::note Refer to [Explicit Resource Management](/workers/runtime-apis/rpc/lifecycle) to learn more about the `using` declaration shown in the example above. ::: Classes that extend `RpcTarget` work a lot like functions: the object itself is not serialized, but is instead replaced by a stub. In this case, the stub itself is not callable, but its methods are. Calling any method on the stub actually makes an RPC back to the original object, where it was created. As shown above, you can also access properties of classes. 
Properties behave like RPC methods that don't take any arguments — you await the property to asynchronously fetch its current value. Note that the act of awaiting the property (which, behind the scenes, calls `.then()` on it) is what causes the property to be fetched. If you do not use `await` when accessing the property, it will not be fetched. :::note While it's possible to define a similar interface to the caller using an object that contains many functions, this is less efficient. If you return an object that contains five functions, then you are creating five stubs. If you return a class instance, where the class declares five methods, you are only returning a single stub. Returning a single stub is often more efficient and easier to reason about. Moreover, when returning a plain object (not a class), non-function properties of the object will be transmitted at the time the object itself is transmitted; they cannot be fetched asynchronously on-demand. ::: :::note Classes which do not inherit `RpcTarget` cannot be sent over RPC at all. This differs from Structured Clone, which defines application-defined classes as clonable. Why the difference? By default, the Structured Clone algorithm simply ignores an object's class entirely. So, the recipient receives a plain object, containing the original object's instance properties but entirely missing its original type. This behavior is rarely useful in practice, and could be confusing if the developer had intended the class to be treated as an `RpcTarget`. So, Workers RPC has chosen to disallow classes that are not `RpcTarget`s, to avoid any confusion. ::: ### Promise pipelining When you call an RPC method and get back an object, it's common to immediately call a method on the object: ```js // Two round trips. 
using counter = await env.COUNTER_SERVICE.getCounter(); await counter.increment(); ``` But consider the case where the Worker service that you are calling may be far away across the network, as in the case of [Smart Placement](/workers/runtime-apis/bindings/service-bindings/#smart-placement) or [Durable Objects](/durable-objects). The code above makes two round trips, once when calling `getCounter()`, and again when calling `.increment()`. We'd like to avoid this. With most RPC systems, the only way to avoid the problem would be to combine the two calls into a single "batch" call, perhaps called `getCounterAndIncrement()`. However, this makes the interface worse. You wouldn't design a local interface this way. Workers RPC allows a different approach: You can simply omit the first `await`: ```js // Only one round trip! Note the missing `await`. using promiseForCounter = env.COUNTER_SERVICE.getCounter(); await promiseForCounter.increment(); ``` In this code, `getCounter()` returns a promise for a counter. Normally, the only thing you would do with a promise is `await` it. However, Workers RPC promises are special: they also allow you to initiate speculative calls on the future result of the promise. These calls are sent to the server immediately, without waiting for the initial call to complete. Thus, multiple chained calls can be completed in a single round trip. How does this work? The promise returned by an RPC is not a real JavaScript `Promise`. Instead, it is a custom ["Thenable"](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise#thenables). It has a `.then()` method like `Promise`, which allows it to be used in all the places where you'd use a normal `Promise`. For instance, you can `await` it. But, in addition to that, an RPC promise also acts like a stub. Calling any method name on the promise forms a speculative call on the promise's eventual result. This is known as "promise pipelining". 
This works when calling properties of objects returned by RPC methods as well. For example:

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export class MyService extends WorkerEntrypoint {
  async foo() {
    return {
      bar: {
        baz: () => "qux"
      }
    }
  }
}
```

```js
export default {
  async fetch(request, env) {
    using foo = env.MY_SERVICE.foo();
    let baz = await foo.bar.baz();
    return new Response(baz);
  }
}
```

If the initial RPC ends up throwing an exception, then any pipelined calls will also fail with the same exception.

## ReadableStream, WritableStream, Request and Response

You can send and receive [`ReadableStream`](/workers/runtime-apis/streams/readablestream/), [`WritableStream`](/workers/runtime-apis/streams/writablestream/), [`Request`](/workers/runtime-apis/request/), and [`Response`](/workers/runtime-apis/response/) using RPC methods. When doing so, bytes in the body are automatically streamed with appropriate flow control. Only [byte-oriented streams](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Using_readable_byte_streams) (streams with an underlying byte source of `type: "bytes"`) are supported.

In all cases, ownership of the stream is transferred to the recipient. The sender can no longer read/write the stream after sending it. If the sender wishes to keep its own copy, it can use the [`tee()` method of `ReadableStream`](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/tee) or the [`clone()` method of `Request` or `Response`](https://developer.mozilla.org/en-US/docs/Web/API/Response/clone). Keep in mind that doing this may force the system to buffer bytes and lose the benefits of flow control.

## Forwarding RPC stubs

A stub received over RPC from one Worker can be forwarded over RPC to another Worker.

```js
using counter = env.COUNTER_SERVICE.getCounter();
await env.ANOTHER_SERVICE.useCounter(counter);
```

Here, three different Workers are involved:

1. The calling Worker (we'll call this the "introducer")
2. `COUNTER_SERVICE`
3. `ANOTHER_SERVICE`

When `ANOTHER_SERVICE` calls a method on the `counter` that is passed to it, this call will automatically be proxied through the introducer and on to the [`RpcTarget`](/workers/runtime-apis/rpc/) class implemented by `COUNTER_SERVICE`.

In this way, the introducer Worker can connect two Workers that did not otherwise have any ability to form direct connections to each other.

Currently, this proxying only lasts until the end of the Workers' execution contexts. A proxy connection cannot be persisted for later use.

## Video Tutorial

In this video, we explore how Cloudflare Workers support Remote Procedure Calls (RPC) to simplify communication between Workers. Learn how to implement RPC in your JavaScript applications and build serverless solutions with ease. Whether you're managing microservices or optimizing web architecture, this tutorial will show you how to quickly set up and use Cloudflare Workers for RPC calls. By the end of this video, you'll understand how to call functions between Workers, pass functions as arguments, and implement user authentication with Cloudflare Workers.

<Stream id="d506935b6767fd07626adbec46d41e6d" title="Introduction to Workers RPC" thumbnail="2.5s" />

## More Details

<DirectoryListing />

## Limitations

* [Smart Placement](/workers/configuration/smart-placement/) is currently ignored when making RPC calls. If Smart Placement is enabled for Worker A, and Worker B declares a [Service Binding](/workers/runtime-apis/bindings) to it, when Worker B calls Worker A via RPC, Worker A will run locally, on the same machine.
* The maximum serialized RPC limit is 1 MB. Consider using [`ReadableStream`](/workers/runtime-apis/streams/readablestream/) when returning more data.
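As a minimal sketch of working around the serialization limit, a large body can be returned as a byte-oriented `ReadableStream` so its bytes are streamed rather than serialized as a single RPC value. Here, the `largeReport()` function stands in for a method on a hypothetical `WorkerEntrypoint`:

```js
// Build a byte-oriented stream (type: "bytes"); only byte streams
// can cross the RPC boundary. In a real Worker, this would be the
// return value of a WorkerEntrypoint method.
function largeReport() {
  const encoder = new TextEncoder();
  return new ReadableStream({
    type: "bytes",
    start(controller) {
      controller.enqueue(encoder.encode("chunk 1 of a large report\n"));
      controller.enqueue(encoder.encode("chunk 2 of a large report\n"));
      controller.close();
    },
  });
}

// On the caller's side, consume the stream with standard Web Streams
// APIs — for example, by wrapping it in a Response.
const text = await new Response(largeReport()).text();
console.log(text);
```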
--- # Lifecycle URL: https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/ ## Lifetimes, Memory and Resource Management When you call another Worker over RPC using a Service binding, you are using memory in the Worker you are calling. Consider the following example: ```js let user = await env.USER_SERVICE.findUser(id); ``` Assume that `findUser()` on the server side returns an object extending `RpcTarget`, thus `user` on the client side ends up being a stub pointing to that remote object. As long as the stub still exists on the client, the corresponding object on the server cannot be garbage collected. But, each isolate has its own garbage collector which cannot see into other isolates. So, in order for the server's isolate to know that the object can be collected, the calling isolate must send it an explicit signal saying so, called "disposing" the stub. In many cases (described below), the system will automatically realize when a stub is no longer needed, and will dispose it automatically. However, for best performance, your code should dispose stubs explicitly when it is done with them. ## Explicit Resource Management To ensure resources are properly disposed of, you should use [Explicit Resource Management](https://github.com/tc39/proposal-explicit-resource-management), a new JavaScript language feature that allows you to explicitly signal when resources can be disposed of. Explicit Resource Management is a Stage 3 TC39 proposal — it is [coming to V8 soon](https://bugs.chromium.org/p/v8/issues/detail?id=13559). 
Explicit Resource Management adds the following language features:

- The [`using` declaration](https://github.com/tc39/proposal-explicit-resource-management?tab=readme-ov-file#using-declarations)
- [`Symbol.dispose` and `Symbol.asyncDispose`](https://github.com/tc39/proposal-explicit-resource-management?tab=readme-ov-file#additions-to-symbol)

If a variable is declared with `using`, when the variable is no longer in scope, the variable's disposer will be invoked. For example:

```js
async function sendEmail(id, message) {
  using user = await env.USER_SERVICE.findUser(id);
  await user.sendEmail(message);

  // user[Symbol.dispose]() is implicitly called at the end of the scope.
}
```

`using` declarations are useful to make sure you can't forget to dispose stubs — even if your code is interrupted by an exception.

### How to use the `using` declaration in your Worker

Because it has not yet landed in V8, the `using` keyword is not yet available directly in the Workers runtime. To use it in your code, you must use a prerelease version of the [Wrangler CLI](/workers/wrangler/) to run and deploy your Worker:

```sh
npx wrangler@using-keyword-experimental dev
```

This version of Wrangler will transpile `using` into direct calls to `Symbol.dispose()`, before running your code or deploying it to Cloudflare.

The following code:

```js
{
  using counter = await env.COUNTER_SERVICE.newCounter();
  await counter.increment(2);
  await counter.increment(4);
}
```

...is equivalent to:

```js
{
  const counter = await env.COUNTER_SERVICE.newCounter();
  try {
    await counter.increment(2);
    await counter.increment(4);
  } finally {
    counter[Symbol.dispose]();
  }
}
```

## Automatic disposal and execution contexts

The RPC system automatically disposes of stubs in the following cases:

### End of event handler / execution context

When an event handler is "done", any stubs created as part of the event are automatically disposed.
For example, consider a [`fetch()` handler](/workers/runtime-apis/handlers/fetch) which handles incoming HTTP events. The handler may make outgoing RPCs as part of handling the event, and those may return stubs. When the final HTTP response is sent, the handler is "done", and all stubs are immediately disposed.

More precisely, the event has an "execution context", which begins when the handler is first invoked, and ends when the HTTP response is sent. The execution context may also end early if the client disconnects before receiving a response, or it can be extended past its normal end point by calling [`ctx.waitUntil()`](/workers/runtime-apis/context).

For example, the Worker below does not make use of the `using` declaration, but stubs will be disposed of once the `fetch()` handler returns a response:

```js
export default {
  async fetch(request, env, ctx) {
    let authResult = await env.AUTH_SERVICE.checkCookie(
      request.headers.get("Cookie"),
    );
    if (!authResult.authorized) {
      return new Response("Not authorized", { status: 403 });
    }
    let profile = await authResult.user.getProfile();
    return new Response(`Hello, ${profile.name}!`);
  },
};
```

A Worker invoked via RPC also has an execution context. The context begins when an RPC method on a `WorkerEntrypoint` is invoked. If no stubs are passed in the parameters or results of this RPC, the context ends (the event is "done") when the RPC returns.

However, if any stubs are passed, then the execution context is implicitly extended until all such stubs are disposed (and all calls made through them have returned). As with HTTP, if the client disconnects, the server's execution context is canceled immediately, regardless of whether stubs still exist. A client that is itself another Worker is considered to have disconnected when its own execution context ends. Again, the context can be extended with [`ctx.waitUntil()`](/workers/runtime-apis/context).
### Stubs received as parameters in an RPC call When stubs are received in the parameters of an RPC, those stubs are automatically disposed when the call returns. If you wish to keep the stubs longer than that, you must call the `dup()` method on them. ### Disposing RPC objects disposes stubs that are part of that object When an RPC returns any kind of object, that object will have a disposer added by the system. Disposing it will dispose all stubs returned by the call. For instance, if an RPC returns an array of four stubs, the array itself will have a disposer that disposes all four stubs. The only time the value returned by an RPC does not have a disposer is when it is a primitive value, such as a number or string. These types cannot have disposers added to them, but because these types cannot themselves contain stubs, there is no need for a disposer in this case. This means you should almost always store the result of an RPC into a `using` declaration: ```js using result = stub.foo(); ``` This way, if the result contains any stubs, they will be disposed of. Even if you don't expect the RPC to return stubs, if it returns any kind of an object, it is a good idea to store it into a `using` declaration. This way, if the RPC is extended in the future to return stubs, your code is ready. If you decide you want to keep a returned stub beyond the scope of the `using` declaration, you can call `dup()` on the stub before the end of the scope. (Remember to explicitly dispose the duplicate later.) ## Disposers and `RpcTarget` classes A class that extends [`RpcTarget`](/workers/runtime-apis/rpc/) can optionally implement a disposer: ```js class Foo extends RpcTarget { [Symbol.dispose]() { // ... } } ``` The RpcTarget's disposer runs after the last stub is disposed. Note that the client-side call to the stub's disposer does not wait for the server-side disposer to be called; the server's disposer is called later on. 
Because of this, any exceptions thrown by the disposer do not propagate to the client; instead, they are reported as uncaught exceptions. Note that an `RpcTarget`'s disposer must be declared as `Symbol.dispose`. `Symbol.asyncDispose` is not supported. ## The `dup()` method Sometimes, you need to pass a stub to a function which will dispose the stub when it is done, but you also want to keep the stub for later use. To solve this problem, you can "dup" the stub: ```js let stub = await env.SOME_SERVICE.getThing(); // Create a duplicate. let stub2 = stub.dup(); // Call some function that will dispose the stub. await func(stub); // stub2 is still valid ``` You can think of `dup()` like the [Unix system call of the same name](https://man7.org/linux/man-pages/man2/dup.2.html): it creates a new handle pointing at the same target, which must be independently closed (disposed). If the instance of the [`RpcTarget` class](/workers/runtime-apis/rpc/) that the stubs point to has a disposer, the disposer will only be invoked when all duplicates have been disposed. However, this only applies to duplicates that originate from the same stub. If the same instance of `RpcTarget` is passed over RPC multiple times, a new stub is created each time, and these are not considered duplicates of each other. Thus, the disposer will be invoked once for each time the `RpcTarget` was sent. In order to avoid this situation, you can manually create a stub locally, and then pass the stub across RPC multiple times. When passing a stub over RPC, ownership of the stub transfers to the recipient, so you must make a `dup()` for each time you send it: ```js import { RpcTarget, RpcStub } from "cloudflare:workers"; class Foo extends RpcTarget { // ... 
} let obj = new Foo(); let stub = new RpcStub(obj); await rpc1(stub.dup()); // sends a dup of `stub` await rpc2(stub.dup()); // sends another dup of `stub` stub[Symbol.dispose](); // disposes the original stub // obj's disposer will be called when the other two stubs // are disposed remotely. ``` --- # Reserved Methods URL: https://developers.cloudflare.com/workers/runtime-apis/rpc/reserved-methods/ Some method names are reserved or have special semantics. ## Special Methods For backwards compatibility, when extending `WorkerEntrypoint` or `DurableObject`, the following method names have special semantics. Note that this does *not* apply to `RpcTarget`. On `RpcTarget`, these methods work like any other RPC method. ### `fetch()` The `fetch()` method is treated specially — it can only be used to handle an HTTP request — equivalent to the [fetch handler](/workers/runtime-apis/handlers/fetch/). You may implement a `fetch()` method in your class that extends `WorkerEntrypoint` — but it must accept only one parameter of type [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request), and must return an instance of [`Response`](https://developer.mozilla.org/en-US/docs/Web/API/Response), or a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) of one. On the client side, `fetch()` called on a service binding or Durable Object stub works like the standard global `fetch()`. That is, the caller may pass one or two parameters to `fetch()`. If the caller does not simply pass a single `Request` object, then a new `Request` is implicitly constructed, passing the parameters to its constructor, and that request is what is actually sent to the server. Some properties of `Request` control the behavior of `fetch()` on the client side and are not actually sent to the server. 
For example, the property `redirect: "auto"` (which is the default) instructs `fetch()` that if the server returns a redirect response, it should automatically be followed, resulting in an HTTP request to the public internet. Again, this behavior is according to the Fetch API standard. In short, `fetch()` doesn't have RPC semantics; it has Fetch API semantics.

### `connect()`

The `connect()` method of the `WorkerEntrypoint` class is reserved for opening a socket-like connection to your Worker. This is currently not implemented or supported — though you can [open a TCP socket from a Worker](/workers/runtime-apis/tcp-sockets/) or connect directly to databases over a TCP socket with [Hyperdrive](/hyperdrive/get-started/).

## Disallowed Method Names

The following method (or property) names may not be used as RPC methods on any RPC type (including `WorkerEntrypoint`, `DurableObject`, and `RpcTarget`):

* `dup`: This is reserved for duplicating a stub. Refer to the [RPC Lifecycle](/workers/runtime-apis/rpc/lifecycle) docs to learn more about `dup()`.
* `constructor`: This name has special meaning for JavaScript classes. It is not intended to be called as a method, so it is not allowed over RPC.

The following methods are disallowed only on `WorkerEntrypoint` and `DurableObject`, but allowed on `RpcTarget`. These methods have historically had special meaning to Durable Objects, where they are used to handle certain system-generated events.
* `alarm` * `webSocketMessage` * `webSocketClose` * `webSocketError` --- # TypeScript URL: https://developers.cloudflare.com/workers/runtime-apis/rpc/typescript/ The [`@cloudflare/workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types) package provides the `Service` and `DurableObjectNamespace` types, each of which accepts a single type parameter for the server-side [`WorkerEntrypoint`](/workers/runtime-apis/bindings/service-bindings/rpc) or [`DurableObject`](/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/#call-rpc-methods) types. Using higher-order types, we automatically generate client-side stub types (e.g., forcing all methods to be async). For example: ```ts interface Env { SUM_SERVICE: Service<SumService>; COUNTER_OBJECT: DurableObjectNamespace<Counter> } export default { async fetch(req, env, ctx): Promise<Response> { const result = await env.SUM_SERVICE.sum(1, 2); return new Response(result.toString()); } } satisfies ExportedHandler<Env>; ``` --- # Visibility and Security Model URL: https://developers.cloudflare.com/workers/runtime-apis/rpc/visibility/ ## Security Model The Workers RPC system is intended to allow safe communications between Workers that do not trust each other. The system does not allow either side of an RPC session to access arbitrary objects on the other side, much less invoke arbitrary code. Instead, each side can only invoke the objects and functions for which they have explicitly received stubs via previous calls. This security model is commonly known as Object Capabilities, or Capability-Based Security. Workers RPC is built on [Cap'n Proto RPC](https://capnproto.org/rpc.html), which in turn is based on CapTP, the object transport protocol used by the [distributed programming language E](https://www.crockford.com/ec/etut.html). 
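To make this concrete, here is a minimal sketch, using hypothetical names (`FileHandle`, `open()`), of how a capability is handed out over RPC: the caller can invoke `open()` because it holds a stub for the entrypoint, and it can call `read()` only on the `FileHandle` stub that `open()` returned. No other object on the server side is reachable.

```js
import { WorkerEntrypoint, RpcTarget } from "cloudflare:workers";

class FileHandle extends RpcTarget {
	#contents;

	constructor(contents) {
		super();
		this.#contents = contents;
	}

	read() {
		return this.#contents;
	}
}

export default class extends WorkerEntrypoint {
	// Returning a FileHandle gives the caller a stub for this one
	// object (a capability), without exposing anything else.
	open(name) {
		return new FileHandle(`(hypothetical) contents of ${name}`);
	}
}
```

A caller holding a service binding to this Worker (say, `env.FILES`) could write `using handle = await env.FILES.open("readme"); await handle.read();`, but it has no way to obtain a stub it was never explicitly given.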
## Visibility of Methods and Properties ### Private properties [Private properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Private_properties) of classes are not directly exposed over RPC. ### Class instance properties When you send an instance of an application-defined class, the recipient can only access methods and properties declared on the class, not properties of the instance. For example: ```js class Foo extends RpcTarget { constructor() { super(); // i CANNOT be accessed over RPC this.i = 0; // funcProp CANNOT be called over RPC this.funcProp = () => {} } // value CAN be accessed over RPC get value() { return this.i; } // method CAN be called over RPC method() {} } ``` This behavior is intentional — it is intended to protect you from accidentally exposing private class internals. Generally, instance properties should be declared private, [by prefixing them with `#`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Private_properties). However, private properties are a relatively new feature of JavaScript, and are not yet widely used in the ecosystem. Since the RPC interface between two of your Workers may be a security boundary, we need to be extra-careful, so instance properties are always private when communicating between Workers using RPC, whether or not they have the `#` prefix. You can always declare an explicit getter at the class level if you wish to expose the property, as shown above. These visibility rules apply only to objects that extend `RpcTarget`, `WorkerEntrypoint`, or `DurableObject`, and do not apply to plain objects. Plain objects are passed "by value", sending all of their "own" properties. ### "Own" properties of functions When you pass a function over RPC, the caller can access the "own" properties of the function object itself. 
```js
someRpcMethod() {
	let func = () => {};
	func.prop = 123; // `prop` is visible over RPC
	return func;
}
```

Such properties on a function are accessed asynchronously, like class properties of an `RpcTarget`. But, unlike the `RpcTarget` example above, the function's instance properties are accessible to the caller. In practice, properties are rarely added to functions.

---

# Streams

URL: https://developers.cloudflare.com/workers/runtime-apis/streams/

import { DirectoryListing, TabItem, Tabs } from "~/components";

The [Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API) is a web standard API that allows JavaScript to programmatically access and process streams of data.

<DirectoryListing />

Workers do not need to prepare an entire response body before returning a `Response`. You can use a [`ReadableStream`](/workers/runtime-apis/streams/readablestream/) to stream a response body after sending the front matter (that is, HTTP status line and headers). This allows you to minimize:

- The visitor's time-to-first-byte.
- The buffering done in the Worker.

Minimizing buffering is especially important for processing or transforming response bodies larger than the Worker's memory limit. For these cases, streaming is the only implementation strategy.

:::note

By default, Cloudflare Workers is capable of streaming responses using the [Streams APIs](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API). To maintain the streaming behavior, you should only modify the response body using the methods in the Streams APIs. If your Worker only forwards subrequest responses to the client verbatim without reading their body text, then its body handling is already optimal and you do not have to use these APIs.

:::

The Worker can create a `Response` object using a `ReadableStream` as the body. Any data provided through the `ReadableStream` will be streamed to the client as it becomes available.
<Tabs> <TabItem label="Module Worker" icon="seti:javascript">

```js
export default {
	async fetch(request, env, ctx) {
		// Fetch from origin server.
		const response = await fetch(request);
		// ... and deliver our Response while that’s running.
		return new Response(response.body, response);
	},
};
```

</TabItem> <TabItem label="Service Worker" icon="seti:javascript">

```js
addEventListener("fetch", (event) => {
	event.respondWith(fetchAndStream(event.request));
});

async function fetchAndStream(request) {
	// Fetch from origin server.
	const response = await fetch(request);
	// ... and deliver our Response while that’s running.
	return new Response(response.body, response);
}
```

</TabItem> </Tabs>

A [`TransformStream`](/workers/runtime-apis/streams/transformstream/) and the [`ReadableStream.pipeTo()`](/workers/runtime-apis/streams/readablestream/#methods) method can be used to modify the response body as it is being streamed:

<Tabs> <TabItem label="Module Worker" icon="seti:javascript">

```js
export default {
	async fetch(request, env, ctx) {
		// Fetch from origin server.
		const response = await fetch(request);

		const { readable, writable } = new TransformStream({
			transform(chunk, controller) {
				controller.enqueue(modifyChunkSomehow(chunk));
			},
		});

		// Start pumping the body. NOTE: No await!
		response.body.pipeTo(writable);

		// ... and deliver our Response while that’s running.
		return new Response(readable, response);
	},
};
```

</TabItem> <TabItem label="Service Worker" icon="seti:javascript">

```js
addEventListener("fetch", (event) => {
	event.respondWith(fetchAndStream(event.request));
});

async function fetchAndStream(request) {
	// Fetch from origin server.
	const response = await fetch(request);

	const { readable, writable } = new TransformStream({
		transform(chunk, controller) {
			controller.enqueue(modifyChunkSomehow(chunk));
		},
	});

	// Start pumping the body. NOTE: No await!
	response.body.pipeTo(writable);

	// ... and deliver our Response while that’s running.
	return new Response(readable, response);
}
```

</TabItem> </Tabs>

This example calls `response.body.pipeTo(writable)` but does not `await` it. This is so it does not block the forward progress of the remainder of the `fetchAndStream()` function. It continues to run asynchronously until the response is complete or the client disconnects.

The runtime can continue running a function (`response.body.pipeTo(writable)`) after a response is returned to the client. This example pumps the subrequest response body to the final response body. However, you can use more complicated logic, such as adding a prefix or a suffix to the body, or processing it in some other way.

---

## Common issues

:::caution[Warning]

The Streams API is only available inside of the [Request context](/workers/runtime-apis/request/), inside the `fetch` event listener callback.

:::

---

## Related resources

- [MDN's Streams API documentation](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API)
- [Streams API spec](https://streams.spec.whatwg.org/)
- Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# ReadableStream

URL: https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/

## Background

A `ReadableStream` is returned by the `readable` property inside [`TransformStream`](/workers/runtime-apis/streams/transformstream/).

## Properties

* `locked` boolean
  * A Boolean value that indicates if the readable stream is locked to a reader.

## Methods

* <code>pipeTo(destinationWritableStream, optionsPipeToOptions)</code> : Promise\<void>
  * Pipes the readable stream to a given writable stream `destination` and returns a promise that is fulfilled when the `write` operation succeeds, or rejected if the operation fails.

* <code>getReader(optionsObject)</code> : ReadableStreamDefaultReader
  * Gets an instance of `ReadableStreamDefaultReader` and locks the `ReadableStream` to that reader instance.
This method accepts an object argument indicating options. The only supported option is `mode`, which can be set to `byob` to create a [`ReadableStreamBYOBReader`](/workers/runtime-apis/streams/readablestreambyobreader/), as shown here: ```js let reader = readable.getReader({ mode: 'byob' }); ``` ### `PipeToOptions` * `preventClose` bool * When `true`, closure of the source `ReadableStream` will not cause the destination `WritableStream` to be closed. * `preventAbort` bool * When `true`, errors in the source `ReadableStream` will no longer abort the destination `WritableStream`. `pipeTo` will return a rejected promise with the error from the source or any error that occurred while aborting the destination. *** ## Related resources * [Streams](/workers/runtime-apis/streams/) * [Readable streams in the WHATWG Streams API specification](https://streams.spec.whatwg.org/#rs-model) * [MDN’s `ReadableStream` documentation](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream) --- # ReadableStream BYOBReader URL: https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/ {/* <!-- The space in the title was introduced to create a pleasing line-break in the title in the sidebar. --> */} {/* <!-- TODO: See EW-2105. Should we document this if it isn’t effectively using buffer space? --> */} ## Background `BYOB` is an abbreviation of bring your own buffer. A `ReadableStreamBYOBReader` allows reading into a developer-supplied buffer, thus minimizing copies. An instance of `ReadableStreamBYOBReader` is functionally identical to [`ReadableStreamDefaultReader`](/workers/runtime-apis/streams/readablestreamdefaultreader/) with the exception of the `read` method. A `ReadableStreamBYOBReader` is not instantiated via its constructor. 
Rather, it is retrieved from a [`ReadableStream`](/workers/runtime-apis/streams/readablestream/): ```js const { readable, writable } = new TransformStream(); const reader = readable.getReader({ mode: 'byob' }); ``` *** ## Methods * <code>read(bufferArrayBufferView)</code> : Promise\<ReadableStreamBYOBReadResult> * Returns a promise with the next available chunk of data read into a passed-in buffer. * <code>readAtLeast(minBytes, bufferArrayBufferView)</code> : Promise\<ReadableStreamBYOBReadResult> * Returns a promise with the next available chunk of data read into a passed-in buffer. The promise will not resolve until at least `minBytes` have been read. *** ## Common issues :::caution[Warning] `read` provides no control over the minimum number of bytes that should be read into the buffer. Even if you allocate a 1 MiB buffer, the kernel is perfectly within its rights to fulfill this read with a single byte, whether or not an EOF immediately follows. In practice, the Workers team has found that `read` typically fills only 1% of the provided buffer. `readAtLeast` is a non-standard extension to the Streams API which allows users to specify that at least `minBytes` bytes must be read into the buffer before resolving the read. ::: *** ## Related resources * [Streams](/workers/runtime-apis/streams/) * [Background about BYOB readers in the Streams API WHATWG specification](https://streams.spec.whatwg.org/#byob-readers) --- # TransformStream URL: https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/ ## Background A transform stream consists of a pair of streams: a writable stream, known as its writable side, and a readable stream, known as its readable side. Writes to the writable side result in new data being made available for reading from the readable side. Workers currently only implements an identity transform stream, a type of transform stream which forwards all chunks written to its writable side to its readable side, without any changes. 
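For example, chunks written to the writable side of an identity transform stream can be read back, unchanged, from the readable side. This small sketch uses only the standard Streams API, so it behaves the same on Workers and in other runtimes with WHATWG streams (such as Node.js 18+):

```js
const { readable, writable } = new TransformStream();

const writer = writable.getWriter();
const reader = readable.getReader();

// Start a write, but do not await it yet: the identity transform only
// pulls the chunk through once the readable side is actually read.
const wrote = writer.write("hello");

const { value, done } = await reader.read();
// value is "hello"; done is false

await wrote;
await writer.close();
```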
***

## Constructor

```js
let { readable, writable } = new TransformStream();
```

* `TransformStream()` TransformStream
  * Returns a new identity transform stream.

## Properties

* `readable` ReadableStream
  * An instance of a `ReadableStream`.
* `writable` WritableStream
  * An instance of a `WritableStream`.

***

## `IdentityTransformStream`

The current implementation of `TransformStream` in the Workers platform is not currently compliant with the [Streams Standard](https://streams.spec.whatwg.org/#transform-stream) and we will soon be making changes to the implementation to make it conform with the specification.

In preparation for doing so, we have introduced the `IdentityTransformStream` class that implements behavior identical to the current `TransformStream` class. This type of stream forwards all chunks of byte data (in the form of `TypedArray`s) written to its writable side to its readable side, without any changes.

The `IdentityTransformStream` readable side supports [bring your own buffer (BYOB) reads](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamBYOBReader).

### Constructor

```js
let { readable, writable } = new IdentityTransformStream();
```

* `IdentityTransformStream()` IdentityTransformStream
  * Returns a new identity transform stream.

### Properties

* `readable` ReadableStream
  * An instance of a `ReadableStream`.
* `writable` WritableStream
  * An instance of a `WritableStream`.

***

## `FixedLengthStream`

The `FixedLengthStream` is a specialization of `IdentityTransformStream` that limits the total number of bytes that the stream will pass through. It is useful primarily because, when using `FixedLengthStream` to produce either a `Response` or `Request`, the fixed length of the stream will be used as the `Content-Length` header value, as opposed to the chunked encoding that is used with any other type of stream. An error will occur if too many or too few bytes are written through the stream.
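As an illustration, a Worker can use `FixedLengthStream` to stream a body of known size while still sending a `Content-Length` header. This is a sketch for the Workers runtime only, since `FixedLengthStream` is not a web standard:

```js
export default {
	async fetch(request) {
		const body = new TextEncoder().encode("Hello World!");

		// Exactly body.byteLength bytes must be written, or the stream errors.
		const { readable, writable } = new FixedLengthStream(body.byteLength);

		const writer = writable.getWriter();
		// Write without awaiting, so the Response can be returned immediately.
		writer.write(body).then(() => writer.close());

		// This response is sent with `Content-Length: 12` instead of
		// `Transfer-Encoding: chunked`.
		return new Response(readable);
	},
};
```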
### Constructor

```js
let { readable, writable } = new FixedLengthStream(1000);
```

* `FixedLengthStream(length)` FixedLengthStream
  * Returns a new identity transform stream with a fixed length.
  * `length` may be a `number` or `bigint` with a maximum value of `2^53 - 1`.

### Properties

* `readable` ReadableStream
  * An instance of a `ReadableStream`.
* `writable` WritableStream
  * An instance of a `WritableStream`.

***

## Related resources

* [Streams](/workers/runtime-apis/streams/)
* [Transform Streams in the WHATWG Streams API specification](https://streams.spec.whatwg.org/#transform-stream)

---

# ReadableStream DefaultReader

URL: https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreamdefaultreader/

{/* <!-- The space in the title was introduced to create a pleasing line-break in the title in the sidebar. --> */}

## Background

A reader is used when you want to read from a [`ReadableStream`](/workers/runtime-apis/streams/readablestream/), rather than piping its output to a [`WritableStream`](/workers/runtime-apis/streams/writablestream/).

A `ReadableStreamDefaultReader` is not instantiated via its constructor. Rather, it is retrieved from a [`ReadableStream`](/workers/runtime-apis/streams/readablestream/):

```js
const { readable, writable } = new TransformStream();
const reader = readable.getReader();
```

***

## Properties

* `reader.closed` : Promise
  * A promise indicating if the reader is closed. The promise is fulfilled when the reader stream closes and is rejected if there is an error in the stream.

## Methods

* `read()` : Promise
  * A promise that resolves with the next available chunk of data being passed through the reader queue.

* <code>cancel(reasonstringoptional)</code> : void
  * Cancels the stream. `reason` is an optional human-readable string indicating the reason for cancellation.
`reason` will be passed to the underlying source’s cancel algorithm -- if this readable stream is one side of a [`TransformStream`](/workers/runtime-apis/streams/transformstream/), then its cancel algorithm causes the transform’s writable side to become errored with `reason`.

:::caution[Warning]

Any data not yet read is lost.

:::

* `releaseLock()` : void
  * Releases the lock on the readable stream. A lock cannot be released while the reader has pending read operations; attempting to release it throws a `TypeError` and the reader remains locked.

***

## Related resources

* [Streams](/workers/runtime-apis/streams/)
* [Readable streams in the WHATWG Streams API specification](https://streams.spec.whatwg.org/#rs-model)

---

# WritableStream

URL: https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/

## Background

A `WritableStream` is the `writable` property of a [`TransformStream`](/workers/runtime-apis/streams/transformstream/). On the Workers platform, `WritableStream` cannot be directly created using the `WritableStream` constructor.

A typical way to write to a `WritableStream` is to pipe a [`ReadableStream`](/workers/runtime-apis/streams/readablestream/) to it.

```js
readableStream
	.pipeTo(writableStream)
	.then(() => console.log('All data successfully written!'))
	.catch(e => console.error('Something went wrong!', e));
```

To write to a `WritableStream` directly, you must use its writer.

```js
const writer = writableStream.getWriter();
writer.write(data);
```

Refer to the [WritableStreamDefaultWriter](/workers/runtime-apis/streams/writablestreamdefaultwriter/) documentation for further detail.

## Properties

* `locked` boolean
  * A Boolean value to indicate if the writable stream is locked to a writer.

## Methods

* <code>abort(reasonstringoptional)</code> : Promise\<void>
  * Aborts the stream. This method returns a promise that fulfills with `undefined`. `reason` is an optional human-readable string indicating the reason for cancellation.
`reason` will be passed to the underlying sink’s abort algorithm. If this writable stream is one side of a [TransformStream](/workers/runtime-apis/streams/transformstream/), then its abort algorithm causes the transform’s readable side to become errored with `reason`. :::caution[Warning] Any data not yet written is lost upon abort. ::: * `getWriter()` : WritableStreamDefaultWriter * Gets an instance of `WritableStreamDefaultWriter` and locks the `WritableStream` to that writer instance. *** ## Related resources * [Streams](/workers/runtime-apis/streams/) * [Writable streams in the WHATWG Streams API specification](https://streams.spec.whatwg.org/#ws-model) --- # WritableStream DefaultWriter URL: https://developers.cloudflare.com/workers/runtime-apis/streams/writablestreamdefaultwriter/ {/* <!-- The space in the title was introduced to create a pleasing line-break in the title in the sidebar. --> */} ## Background A writer is used when you want to write directly to a [`WritableStream`](/workers/runtime-apis/streams/writablestream/), rather than piping data to it from a [`ReadableStream`](/workers/runtime-apis/streams/readablestream/). For example: ```js function writeArrayToStream(array, writableStream) { const writer = writableStream.getWriter(); array.forEach(chunk => writer.write(chunk).catch(() => {})); return writer.close(); } writeArrayToStream([1, 2, 3, 4, 5], writableStream) .then(() => console.log('All done!')) .catch(e => console.error('Error with the stream: ' + e)); ``` ## Properties * `writer.desiredSize` int * The size needed to fill the stream’s internal queue, as an integer. Always returns 1, 0 (if the stream is closed), or `null` (if the stream has errors). * `writer.closed` Promise\<void> * A promise that indicates if the writer is closed. The promise is fulfilled when the writer stream is closed and rejected if there is an error in the stream. ## Methods * <code>abort(reasonstringoptional)</code> : Promise\<void> * Aborts the stream. 
This method returns a promise that fulfills with `undefined`. `reason` is an optional human-readable string indicating the reason for cancellation. `reason` will be passed to the underlying sink’s abort algorithm. If this writable stream is one side of a [TransformStream](/workers/runtime-apis/streams/transformstream/), then its abort algorithm causes the transform’s readable side to become errored with `reason`.

:::caution[Warning]

Any data not yet written is lost upon abort.

:::

* `close()` : Promise\<void>
  * Attempts to close the writer. Remaining writes finish processing before the writer is closed. This method returns a promise fulfilled with `undefined` if the writer successfully closes and processes the remaining writes, or rejected on any error.

* `releaseLock()` : void
  * Releases the writer’s lock on the stream. Once released, the writer is no longer active. You can call this method before all pending `write(chunk)` calls are resolved. This allows you to queue a `write` operation, release the lock, and begin piping into the writable stream from another source, as shown in the example below.

```js
let writer = writable.getWriter();

// Write a preamble.
writer.write(new TextEncoder().encode('foo bar'));

// While that’s still writing, pipe the rest of the body from somewhere else.
writer.releaseLock();
await someResponse.body.pipeTo(writable);
```

* <code>write(chunkany)</code> : Promise\<void>
  * Writes a chunk of data to the writer and returns a promise that resolves if the operation succeeds.
  * The underlying stream may accept fewer types than `any`; it will throw an exception when it encounters an unexpected type.
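Putting `getWriter()`, `write()`, and `close()` together, the following sketch writes two chunks through a writer while a separate task drains the readable side of an identity `TransformStream`. It uses only standard stream APIs, so it runs on Workers as well as in other runtimes with WHATWG streams (such as Node.js 18+):

```js
const { readable, writable } = new TransformStream();
const writer = writable.getWriter();

// Drain the readable side concurrently. Each write() below only
// settles once its chunk has been pulled through the transform.
const received = [];
const drained = (async () => {
	const reader = readable.getReader();
	for (;;) {
		const { done, value } = await reader.read();
		if (done) break;
		received.push(value);
	}
})();

await writer.write("hello, ");
await writer.write("world");
await writer.close();
await drained;

const result = received.join("");
// result is "hello, world"
```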
*** ## Related resources * [Streams](/workers/runtime-apis/streams/) * [Writable streams in the WHATWG Streams API specification](https://streams.spec.whatwg.org/#ws-model) --- # WebAssembly (Wasm) URL: https://developers.cloudflare.com/workers/runtime-apis/webassembly/ import { DirectoryListing } from "~/components" [WebAssembly](https://webassembly.org/) (abbreviated Wasm) allows you to compile languages like [Rust](/workers/languages/rust/), Go, or C to a binary format that can run in a wide variety of environments, including [web browsers](https://developer.mozilla.org/en-US/docs/WebAssembly#browser_compatibility), Cloudflare Workers, and other WebAssembly runtimes. On Workers and in [Cloudflare Pages Functions](https://blog.cloudflare.com/pages-functions-with-webassembly/), you can use WebAssembly to: * Execute code written in a language other than JavaScript, via `WebAssembly.instantiate()`. * Write an entire Cloudflare Worker in Rust, using bindings that make Workers' JavaScript APIs available directly from your Rust code. Most programming languages can be compiled to Wasm, although support varies across languages and compilers. Guides are available for the following languages: <DirectoryListing /> ## Supported proposals WebAssembly is a rapidly evolving set of standards, with [many proposed APIs](https://webassembly.org/roadmap/) which are in various stages of development. In general, Workers supports the same set of features that are available in Google Chrome. ### SIMD SIMD is supported on Workers. For more information on using SIMD in WebAssembly, refer to [Fast, parallel applications with WebAssembly SIMD](https://v8.dev/features/simd). ### Threading Threading is not possible in Workers. Each Worker runs in a single thread, and the [Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API) API is not supported. ## Binary size Compiling to WebAssembly often requires including additional runtime dependencies. 
As a result, Workers that use WebAssembly are typically larger than an equivalent Worker written in JavaScript. The larger your Worker is, the longer it may take your Worker to start. Refer to [Worker startup time](https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time) for more information. We recommend using tools like [`wasm-opt`](https://github.com/brson/wasm-opt-rs) to optimize the size of your Wasm binary. ## WebAssembly System Interface (WASI) The [WebAssembly System Interface](https://wasi.dev/) (abbreviated WASI) is a modular system interface for WebAssembly that standardizes a set of underlying system calls for networking, file system access, and more. Applications can depend on the WebAssembly System Interface to behave identically across host environments and operating systems. WASI is an earlier and more rapidly evolving set of standards than Wasm. WASI support is experimental on Cloudflare Workers, with only some syscalls implemented. Refer to our [open source implementation of WASI](https://github.com/cloudflare/workers-wasi), and [blog post about WASI on Workers](https://blog.cloudflare.com/announcing-wasi-on-workers/) demonstrating its use. ### Resources on WebAssembly * [Serverless Rust with Cloudflare Workers](https://blog.cloudflare.com/cloudflare-workers-as-a-serverless-rust-platform/) * [WebAssembly on Cloudflare Workers](https://blog.cloudflare.com/webassembly-on-cloudflare-workers/) --- # Wasm in JavaScript URL: https://developers.cloudflare.com/workers/runtime-apis/webassembly/javascript/ Wasm can be used from within a Worker written in JavaScript or TypeScript by importing a Wasm module, and instantiating an instance of this module using [`WebAssembly.instantiate()`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/instantiate). This can be used to accelerate computationally intensive operations which do not involve significant I/O. 
This guide demonstrates the basics of Wasm and JavaScript interoperability. ## Simple Wasm Module In this guide, you will use the WebAssembly Text Format to create a simple Wasm module to understand how imports and exports work. In practice, you would not write code in this format. You would instead use the programming language of your choice and compile directly to WebAssembly Binary Format (`.wasm`). Review the following example module (`;;` denotes a comment): ```txt ;; src/simple.wat (module ;; Import a function from JavaScript named `imported_func` ;; which takes a single i32 argument and assign to ;; variable $i (func $i (import "imports" "imported_func") (param i32)) ;; Export a function named `exported_func` which takes a ;; single i32 argument and returns an i32 (func (export "exported_func") (param $input i32) (result i32) ;; Invoke `imported_func` with $input as argument local.get $input call $i ;; Return $input local.get $input return ) ) ``` Using [`wat2wasm`](https://github.com/WebAssembly/wabt), convert the WAT format to WebAssembly Binary Format: ```sh wat2wasm src/simple.wat -o src/simple.wasm ``` ## Bundling Wrangler will bundle any Wasm module that ends in `.wasm` or `.wasm?module`, so that it is available at runtime within your Worker. This is done using a default bundling rule which can be customized in the [Wrangler configuration file](/workers/wrangler/configuration/). Refer to [Wrangler Bundling](/workers/wrangler/bundling/) for more information. ## Use from JavaScript After you have converted the WAT format to WebAssembly Binary Format, import and use the Wasm module in your existing JavaScript or TypeScript Worker: ```typescript import mod from "./simple.wasm"; // Define imports available to Wasm instance. const importObject = { imports: { imported_func: (arg: number) => { console.log(`Hello from JavaScript: ${arg}`); }, }, }; // Create instance of WebAssembly Module `mod`, supplying // the expected imports in `importObject`. 
This should be // done at the top level of the script to avoid instantiation on every request. const instance = await WebAssembly.instantiate(mod, importObject); export default { async fetch() { // Invoke the `exported_func` from our Wasm Instance with // an argument. const retval = instance.exports.exported_func(42); // Return the return value! return new Response(`Success: ${retval}`); }, }; ``` When invoked, this Worker should log `Hello from JavaScript: 42` and return `Success: 42`, demonstrating the ability to invoke Wasm methods with arguments from JavaScript and vice versa. ## Next steps In practice, you will likely compile a language of your choice (such as Rust) to WebAssembly binaries. Many languages provide a `bindgen` to simplify the interaction between JavaScript and Wasm. These tools may integrate with your JavaScript bundler, and provide an API other than the WebAssembly API for initializing and invoking your Wasm module. As an example, refer to the [Rust `wasm-bindgen` documentation](https://rustwasm.github.io/wasm-bindgen/examples/without-a-bundler.html). Alternatively, to write your entire Worker in Rust, Workers provides many of the same [Runtime APIs](/workers/runtime-apis) and [bindings](/workers/runtime-apis/bindings/) when using the `workers-rs` crate. For more information, refer to the [Workers Rust guide](/workers/languages/rust/). --- # Get Started URL: https://developers.cloudflare.com/workers/testing/miniflare/get-started/ The Miniflare API allows you to dispatch events to workers without making actual HTTP requests, simulate connections between Workers, and interact with local emulations of storage products like [KV](/workers/testing/miniflare/storage/kv), [R2](/workers/testing/miniflare/storage/r2), and [Durable Objects](/workers/testing/miniflare/storage/durable-objects). This makes it great for writing tests, or other advanced use cases where you need finer-grained control. 
## Installation

Miniflare is installed using `npm` as a dev dependency:

```sh
npm install -D miniflare
```

## Usage

In all future examples, we'll assume Node.js is running in ES module mode. You can do this by setting the `type` field in your `package.json`:

```json title=package.json
{
	...
	"type": "module"
	...
}
```

To initialise Miniflare, import the `Miniflare` class from `miniflare`:

```js
import { Miniflare } from "miniflare";

const mf = new Miniflare({
	modules: true,
	script: `
	export default {
		async fetch(request, env, ctx) {
			return new Response("Hello Miniflare!");
		}
	}
	`,
});

const res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.text()); // Hello Miniflare!
await mf.dispose();
```

The [rest of these docs](/workers/testing/miniflare/core/fetch) go into more detail on configuring specific features.

### String and File Scripts

Note that in the above example we specify `script` as a string. We could equally have put the script in a file such as `worker.js`, then used the `scriptPath` property instead:

```js
const mf = new Miniflare({
	scriptPath: "worker.js",
});
```

### Watching, Reloading and Disposing

Miniflare's API is primarily intended for testing use cases, where file watching isn't usually required. If you need to watch files, consider using a separate file watcher like [`fs.watch()`](https://nodejs.org/api/fs.html#fswatchfilename-options-listener) or [chokidar](https://github.com/paulmillr/chokidar), and calling `setOptions()` with your original configuration on change.

To clean up and stop listening for requests, you should `dispose()` your instances:

```js
await mf.dispose();
```

You can also manually reload scripts (main and Durable Objects') and options by calling `setOptions()` with the original configuration object.

### Updating Options and the Global Scope

You can use the `setOptions` method to update the options of an existing `Miniflare` instance.
This accepts the same options object as the `new Miniflare` constructor, applies those options, then reloads the worker.

```js
const mf = new Miniflare({
	script: "...",
	kvNamespaces: ["TEST_NAMESPACE"],
	bindings: { KEY: "value1" },
});

await mf.setOptions({
	script: "...",
	kvNamespaces: ["TEST_NAMESPACE"],
	bindings: { KEY: "value2" },
});
```

### Dispatching Events

Use `dispatchFetch()` to dispatch `fetch` events to your Worker, and the handle returned by `getWorker()` to dispatch `scheduled` and `queue` events:

```js
import { Miniflare } from "miniflare";

const mf = new Miniflare({
	modules: true,
	script: `
	let lastScheduledController;
	let lastQueueBatch;
	export default {
		async fetch(request, env, ctx) {
			const { pathname } = new URL(request.url);
			if (pathname === "/scheduled") {
				return Response.json({
					scheduledTime: lastScheduledController?.scheduledTime,
					cron: lastScheduledController?.cron,
				});
			} else if (pathname === "/queue") {
				return Response.json({
					queue: lastQueueBatch.queue,
					messages: lastQueueBatch.messages.map((message) => ({
						id: message.id,
						timestamp: message.timestamp.getTime(),
						body: message.body,
						bodyType: message.body.constructor.name,
					})),
				});
			} else if (pathname === "/get-url") {
				return new Response(request.url);
			} else {
				return new Response(null, { status: 404 });
			}
		},
		async scheduled(controller, env, ctx) {
			lastScheduledController = controller;
			if (controller.cron === "* * * * *") controller.noRetry();
		},
		async queue(batch, env, ctx) {
			lastQueueBatch = batch;
			if (batch.queue === "needy") batch.retryAll();
			for (const message of batch.messages) {
				if (message.id === "perfect") message.ack();
			}
		}
	}
	`,
});

const res = await mf.dispatchFetch("http://localhost:8787/get-url");
console.log(await res.text()); // http://localhost:8787/get-url
const worker = await mf.getWorker();

const scheduledResult = await worker.scheduled({
	cron: "* * * * *",
});
console.log(scheduledResult); // { outcome: "ok", noRetry: true }

const queueResult = await worker.queue("needy", [
	{ id: "a", timestamp: new Date(1000), body: "a" },
	{ id: "b", timestamp: new Date(2000), body: { b: 1 } },
]);
console.log(queueResult); // { outcome: "ok", retryAll: true, ackAll: false, explicitRetries: [], explicitAcks: [] }
```

See [📨 Fetch Events](/workers/testing/miniflare/core/fetch) and [⏰ Scheduled Events](/workers/testing/miniflare/core/scheduled) for more details.

### HTTP Server

Miniflare starts an HTTP server automatically. To wait for it to be ready, `await` the `ready` property:

```js {14}
import { Miniflare } from "miniflare";

const mf = new Miniflare({
	modules: true,
	script: `
	export default {
		async fetch(request, env, ctx) {
			return new Response("Hello Miniflare!");
		}
	}
	`,
	port: 5000,
});
await mf.ready;
console.log("Listening on :5000");
```

#### `Request#cf` Object

By default, Miniflare will fetch the `Request#cf` object from a trusted Cloudflare endpoint. You can disable this behaviour using the `cf` option:

```js
const mf = new Miniflare({
	cf: false,
});
```

You can also provide a custom `cf` object via a filepath:

```js
const mf = new Miniflare({
	cf: "cf.json",
});
```

### HTTPS Server

To start an HTTPS server instead, set the `https` option.
To use the [default shared self-signed certificate](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare/src/http/cert.ts), set `https` to `true`:

```js
const mf = new Miniflare({
	https: true,
});
```

To load an existing certificate from the file system:

```js
const mf = new Miniflare({
	// These are all optional, you don't need to include them all
	httpsKeyPath: "./key.pem",
	httpsCertPath: "./cert.pem",
});
```

To load an existing certificate from strings instead:

```js
const mf = new Miniflare({
	// These are all optional, you don't need to include them all
	httpsKey: "-----BEGIN RSA PRIVATE KEY-----...",
	httpsCert: "-----BEGIN CERTIFICATE-----...",
});
```

If both a string and path are specified for an option (e.g. `httpsKey` and `httpsKeyPath`), the string will be preferred.

### Logging

By default, `[mf:*]` logs are disabled when using the API. To enable these, set the `log` property to an instance of the `Log` class. Its only parameter is a log level indicating which messages should be logged:

```js {5}
import { Miniflare, Log, LogLevel } from "miniflare";

const mf = new Miniflare({
	scriptPath: "worker.js",
	log: new Log(LogLevel.DEBUG), // Enable debug messages
});
```

## Reference

```js
import { Miniflare, Log, LogLevel } from "miniflare";

const mf = new Miniflare({
	// All options are optional, but one of script or scriptPath is required

	log: new Log(LogLevel.INFO), // Logger Miniflare uses for debugging

	script: `
	export default {
		async fetch(request, env, ctx) {
			return new Response("Hello Miniflare!");
		}
	}
	`,
	scriptPath: "./index.js",

	modules: true, // Enable modules
	modulesRules: [
		// Modules import rule
		{ type: "ESModule", include: ["**/*.js"], fallthrough: true },
		{ type: "Text", include: ["**/*.text"] },
	],
	compatibilityDate: "2021-11-23", // Opt into backwards-incompatible changes from this date
	compatibilityFlags: ["formdata_parser_supports_files"], // Control specific backwards-incompatible changes
	upstream: "https://miniflare.dev", // URL of
upstream origin

	workers: [{
		// Reference additional named workers
		name: "worker2",
		kvNamespaces: { COUNTS: "counts" },
		serviceBindings: {
			INCREMENTER: "incrementer",
			// Service bindings can also be defined as custom functions, with access
			// to anything defined outside Miniflare.
			async CUSTOM(request) {
				// `request` is the incoming `Request` object.
				// `message` refers to a variable defined outside the `Miniflare` constructor.
				return new Response(message);
			},
		},
		modules: true,
		script: `export default {
			async fetch(request, env, ctx) {
				// Get the message defined outside
				const response = await env.CUSTOM.fetch("http://host/");
				const message = await response.text();

				// Increment the count 3 times
				await env.INCREMENTER.fetch("http://host/");
				await env.INCREMENTER.fetch("http://host/");
				await env.INCREMENTER.fetch("http://host/");
				const count = await env.COUNTS.get("count");

				return new Response(message + count);
			}
		}`,
	}],

	name: "worker", // Name of service
	routes: ["*site.mf/worker"],

	host: "127.0.0.1", // Host for HTTP(S) server to listen on
	port: 8787, // Port for HTTP(S) server to listen on
	https: true, // Enable self-signed HTTPS (with optional cert path)
	httpsKey: "-----BEGIN RSA PRIVATE KEY-----...",
	httpsKeyPath: "./key.pem", // Path to PEM SSL key
	httpsCert: "-----BEGIN CERTIFICATE-----...",
	httpsCertPath: "./cert.pem", // Path to PEM SSL cert chain

	cf: "./node_modules/.mf/cf.json", // Path for cached Request cf object from Cloudflare
	liveReload: true, // Reload HTML pages whenever worker is reloaded

	kvNamespaces: ["TEST_NAMESPACE"], // KV namespace to bind
	kvPersist: "./kv-data", // Persist KV data (to optional path)

	r2Buckets: ["BUCKET"], // R2 bucket to bind
	r2Persist: "./r2-data", // Persist R2 data (to optional path)

	durableObjects: {
		// Durable Object to bind
		TEST_OBJECT: "TestObject", // className
		API_OBJECT: { className: "ApiObject", scriptName: "api" },
	},
	durableObjectsPersist: "./durable-objects-data", // Persist Durable Object data (to optional path)

	cache: false, // Enable/disable default and named caches (enabled by default)
	cachePersist: "./cache-data", // Persist cached data (to optional path)
	cacheWarnUsage: true, // Warn on cache usage, for workers.dev subdomains

	sitePath: "./site", // Path to serve Workers Site files from
	siteInclude: ["**/*.html", "**/*.css", "**/*.js"], // Glob pattern of site files to serve
	siteExclude: ["node_modules"], // Glob pattern of site files not to serve

	bindings: { SECRET: "sssh" }, // Binds variable/secret to environment
	wasmBindings: { ADD_MODULE: "./add.wasm" }, // WASM module to bind
	textBlobBindings: { TEXT: "./text.txt" }, // Text blob to bind
	dataBlobBindings: { DATA: "./data.bin" }, // Data blob to bind
});

await mf.setOptions({ kvNamespaces: ["TEST_NAMESPACE2"] }); // Apply options and reload

const bindings = await mf.getBindings(); // Get bindings (KV/Durable Object namespaces, variables, etc)

// Dispatch "fetch" event to worker
const res = await mf.dispatchFetch("http://localhost:8787/", {
	headers: { Authorization: "Bearer ..." },
});
const text = await res.text();

// Dispatch "scheduled" event to worker
const worker = await mf.getWorker();
const scheduledResult = await worker.scheduled({ cron: "30 * * * *" });

const TEST_NAMESPACE = await mf.getKVNamespace("TEST_NAMESPACE");
const BUCKET = await mf.getR2Bucket("BUCKET");

const caches = await mf.getCaches(); // Get global `CacheStorage` instance
const defaultCache = caches.default;
const namedCache = await caches.open("name");

// Get Durable Object namespace and storage for ID
const TEST_OBJECT = await mf.getDurableObjectNamespace("TEST_OBJECT");
const id = TEST_OBJECT.newUniqueId();
const storage = await mf.getDurableObjectStorage(id);

// Get Queue Producer
const producer = await mf.getQueueProducer("QUEUE_BINDING");

// Get D1 Database
const db = await mf.getD1Database("D1_BINDING");

await mf.dispose(); // Clean up storage, database connections, and watchers
```

---

# Miniflare

URL: https://developers.cloudflare.com/workers/testing/miniflare/

import { DirectoryListing, LinkButton } from "~/components";

:::caution
This documentation
describes the Miniflare API, which is only relevant for advanced use cases. Most users should instead use [Wrangler](/workers/wrangler) to build, run, and deploy their Workers locally.
:::

**Miniflare** is a simulator for developing and testing [**Cloudflare Workers**](https://workers.cloudflare.com/). It's written in TypeScript, and runs your code in a sandbox implementing Workers' runtime APIs.

- 🎉 **Fun:** develop Workers easily with detailed logging, file watching and pretty error pages supporting source maps.
- 🔋 **Full-featured:** supports most Workers features, including KV, Durable Objects, WebSockets, modules and more.
- ⚡ **Fully-local:** test and develop Workers without an Internet connection. Reload code on change quickly.

<LinkButton href="/workers/testing/miniflare/get-started">
	Get Started
</LinkButton>
<LinkButton
	variant="secondary"
	href="https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare"
>
	GitHub
</LinkButton>
<LinkButton variant="secondary" href="https://npmjs.com/package/miniflare">
	NPM
</LinkButton>

---

These docs primarily cover Miniflare-specific things. For more information on runtime APIs, refer to the [Cloudflare Workers docs](/workers).

If you find something that doesn't behave as it does in the production Workers environment (and this difference isn't documented), or something's wrong in these docs, please [open a GitHub issue](https://github.com/cloudflare/workers-sdk/issues/new/choose).

<DirectoryListing descriptions />

---

# Writing tests

URL: https://developers.cloudflare.com/workers/testing/miniflare/writing-tests/

:::note
For most users, Cloudflare recommends using the Workers Vitest integration for testing Workers and [Pages Functions](/pages/functions/) projects. [Vitest](https://vitest.dev/) is a popular JavaScript testing framework featuring a very fast watch mode, Jest compatibility, and out-of-the-box support for TypeScript.
::: import { TabItem, Tabs, Details } from "~/components"; import { FileTree } from "@astrojs/starlight/components"; This guide will show you how to set up [Miniflare](/workers/testing/miniflare) to test your Workers. Miniflare is a low-level API that allows you to fully control how your Workers are run and tested. To use Miniflare, make sure you've installed the latest version of Miniflare v3: <Tabs> <TabItem label="npm"> ```sh npm install -D miniflare ``` </TabItem> <TabItem label="yarn"> ```sh yarn add -D miniflare ``` </TabItem> <TabItem label="pnpm"> ```sh pnpm add -D miniflare ``` </TabItem> </Tabs> The rest of this guide demonstrates concepts with the [`node:test`](https://nodejs.org/api/test.html) testing framework, but any testing framework can be used. Miniflare is a low-level API that exposes a large variety of configuration options for running your Worker. In most cases, your tests will only need a subset of the available options, but you can refer to the [full API reference](/workers/testing/miniflare/get-started/#reference) to explore what is possible with Miniflare. Before writing a test, you will need to create a Worker. Since Miniflare is a low-level API that emulates the Cloudflare platform primitives, your Worker will need to be written in JavaScript or you'll need to [integrate your own build pipeline](#custom-builds) into your testing setup. 
Here's an example JavaScript-only Worker:

```js title="src/index.js"
export default {
	async fetch(request) {
		return new Response(`Hello World`);
	},
};
```

Next, you will need to create an initial test file:

```js {12,13,14,15,16,17,18,19} title="src/index.test.js"
import assert from "node:assert";
import test, { after, before, describe } from "node:test";
import { Miniflare } from "miniflare";

describe("worker", () => {
	/**
	 * @type {Miniflare}
	 */
	let worker;

	before(async () => {
		worker = new Miniflare({
			modules: [
				{
					type: "ESModule",
					path: "src/index.js",
				},
			],
		});
		await worker.ready;
	});

	test("hello world", async () => {
		assert.strictEqual(
			await (await worker.dispatchFetch("http://example.com")).text(),
			"Hello World",
		);
	});

	after(async () => {
		await worker.dispose();
	});
});
```

You should be able to run the above test via `node --test`.

The highlighted lines of the test file above demonstrate how to set up Miniflare to run a JavaScript Worker. Once Miniflare has been set up, your individual tests can send requests to the running Worker and assert against the responses. This is the main limitation of using Miniflare for testing your Worker as compared to the [Vitest integration](/workers/testing/vitest-integration/) — all access to your Worker must be through the `dispatchFetch()` Miniflare API, and you cannot unit test individual functions from your Worker.

<Details header="What runtime are tests running in?">

When using the [Vitest integration](/workers/testing/vitest-integration/), your entire test suite runs in [`workerd`](https://github.com/cloudflare/workerd), which is why it is possible to unit test individual functions. By contrast, when using a different testing framework to run tests via Miniflare, only your Worker itself is running in [`workerd`](https://github.com/cloudflare/workerd) — your test files run in Node.js.
This means that importing functions from your Worker into your test files might exhibit different behaviour than you'd see at runtime if the functions rely on `workerd`-specific behaviour. </Details> ## Interacting with Bindings :::caution Miniflare does not read [Wrangler's config file](/workers/wrangler/configuration). All bindings that your Worker uses need to be specified in the Miniflare API options. ::: The `dispatchFetch()` API from Miniflare allows you to send requests to your Worker and assert that the correct response is returned, but sometimes you need to interact directly with bindings in tests. For use cases like that, Miniflare provides the [`getBindings()`](/workers/testing/miniflare/get-started/#reference) API. For instance, to access an environment variable in your tests, adapt the test file `src/index.test.js` as follows: ```diff lang="js" title="src/index.test.js" ... describe("worker", () => { ... before(async () => { worker = new Miniflare({ ... + bindings: { + FOO: "Hello Bindings", + }, }); ... }); test("text binding", async () => { const bindings = await worker.getBindings(); assert.strictEqual(bindings.FOO, "Hello Bindings"); }); ... }); ``` You can also interact with local resources such as KV and R2 using the same API as you would from a Worker. For example, here's how you would interact with a KV namespace: ```diff lang="js" title="src/index.test.js" ... describe("worker", () => { ... before(async () => { worker = new Miniflare({ ... + kvNamespaces: ["KV"], }); ... }); test("kv binding", async () => { const bindings = await worker.getBindings(); await bindings.KV.put("key", "value"); assert.strictEqual(await bindings.KV.get("key"), "value"); }); ... }); ``` ## More complex Workers The example given above shows how to test a simple Worker consisting of a single JavaScript file. However, most real-world Workers are more complex than that. 
Miniflare supports providing all constituent files of your Worker directly using the API:

```js
new Miniflare({
	modules: [
		{
			type: "ESModule",
			path: "src/index.js",
		},
		{
			type: "ESModule",
			path: "src/imported.js",
		},
	],
});
```

This can be a bit cumbersome as your Worker grows. To help with this, Miniflare can also crawl your module graph to automatically figure out which modules to include:

```js
new Miniflare({
	scriptPath: "src/index-with-imports.js",
	modules: true,
	modulesRules: [{ type: "ESModule", include: ["**/*.js"] }],
});
```

## Custom builds

In many real-world cases, Workers are not written in plain JavaScript but instead consist of multiple TypeScript files that import from npm packages and other dependencies, which are then bundled by a build tool. When testing your Worker via Miniflare directly, you need to run this build tool before your tests. Exactly how this build is run will depend on the specific test framework you use, but for `node:test` it would likely be in a `before()` hook. For example, if you use [Wrangler](/workers/wrangler/) to build and deploy your Worker, you could spawn a `wrangler build` command like this:

```js
import { spawnSync } from "node:child_process";

before(() => {
	spawnSync("npx wrangler build -c wrangler-build.json", {
		shell: true,
		stdio: "pipe",
	});
});
```

---

# Configuration

URL: https://developers.cloudflare.com/workers/testing/vitest-integration/configuration/

import { Details } from "~/components"

The Workers Vitest integration provides additional configuration on top of Vitest's usual options. You can use the [`defineWorkersConfig()`](/workers/testing/vitest-integration/configuration/#defineworkersconfigoptions) API for better type checking and completions.
An example configuration would be: ```ts import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { poolOptions: { workers: { wrangler: { configPath: "./wrangler.toml" }, }, }, }, }); ``` :::note Use [`defineWorkersProject`](#defineworkersprojectoptions) with [Vitest Workspaces](https://vitest.dev/guide/workspace) to specify a different configuration for certain tests. ::: :::caution Custom Vitest `environment`s or `runner`s are not supported when using the Workers Vitest integration. ::: ## APIs The following APIs are exported from the `@cloudflare/vitest-pool-workers/config` module. ### `defineWorkersConfig(options)` Ensures Vitest is configured to use the Workers integration with the correct module resolution settings, and provides type checking for [WorkersPoolOptions](#workerspooloptions). This should be used in place of the [`defineConfig()`](https://vitest.dev/config/file.html) function from Vitest. It also accepts a `Promise` of `options`, or an optionally-`async` function returning `options`. ```ts import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { poolOptions: { workers: { // ... }, }, }, }); ``` ### `defineWorkersProject(options)` Similar to [`defineWorkersConfig()`](#defineworkersconfigoptions), this ensures Vitest is configured to use the Workers integration with the correct module resolution settings, and provides type checking for [WorkersPoolOptions](#workerspooloptions), except it should be used in place of the [`defineProject()`](https://vitest.dev/guide/workspace) function from Vitest. It also accepts a `Promise` of `options`, or an optionally-`async` function returning `options`. 
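For instance, the optionally-`async` function form lets you compute options before Vitest starts. The following is a minimal sketch; the body of the returned options object is a placeholder:

```typescript
import { defineWorkersProject } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersProject(async () => {
	// Options can be computed asynchronously here, for example by
	// reading files or environment state before Vitest starts.
	return {
		test: {
			poolOptions: {
				workers: {
					// ...
				},
			},
		},
	};
});
```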
```ts
import { defineWorkspace, defineProject } from "vitest/config";
import { defineWorkersProject } from "@cloudflare/vitest-pool-workers/config";

const workspace = defineWorkspace([
	defineWorkersProject({
		test: {
			name: "Workers",
			include: ["**/*.worker.test.ts"],
			poolOptions: {
				workers: {
					// ...
				},
			},
		},
	}),
	// ...
]);

export default workspace;
```

### `buildPagesASSETSBinding(assetsPath)`

Creates a Pages ASSETS binding that serves files inside the `assetsPath`. This is required if you use `createPagesEventContext()` or `SELF` to test your **Pages Functions**. Refer to the [Pages recipe](/workers/testing/vitest-integration/recipes) for a full example.

```ts
import path from "node:path";
import {
	buildPagesASSETSBinding,
	defineWorkersProject,
} from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersProject(async () => {
	const assetsPath = path.join(__dirname, "public");
	return {
		test: {
			poolOptions: {
				workers: {
					miniflare: {
						serviceBindings: {
							ASSETS: await buildPagesASSETSBinding(assetsPath),
						},
					},
				},
			},
		},
	};
});
```

### `readD1Migrations(migrationsPath)`

Reads all [D1 migrations](/d1/reference/migrations/) stored at `migrationsPath` and returns them ordered by migration number. Each migration will have its contents split into an array of individual SQL queries. Call the [`applyD1Migrations()`](/workers/testing/vitest-integration/test-apis/#d1) function inside a test or [setup file](https://vitest.dev/config/#setupfiles) to apply migrations. Refer to the [D1 recipe](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/d1) for an example project using migrations.
```ts
import path from "node:path";
import {
	defineWorkersProject,
	readD1Migrations,
} from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersProject(async () => {
	// Read all migrations in the `migrations` directory
	const migrationsPath = path.join(__dirname, "migrations");
	const migrations = await readD1Migrations(migrationsPath);

	return {
		test: {
			setupFiles: ["./test/apply-migrations.ts"],
			poolOptions: {
				workers: {
					miniflare: {
						// Add a test-only binding for migrations, so we can apply them in a setup file
						bindings: { TEST_MIGRATIONS: migrations },
					},
				},
			},
		},
	};
});
```

## `WorkersPoolOptions`

* `main`: string, optional

  * Entry point to Worker run in the same isolate/context as tests. This option is required to use `import { SELF } from "cloudflare:test"` for integration tests, or Durable Objects without an explicit `scriptName` if classes are defined in the same Worker. This file goes through Vite transforms and can be TypeScript. Note that `import module from "<path-to-main>"` inside tests gives exactly the same `module` instance as is used internally for the `SELF` and Durable Object bindings. If `wrangler.configPath` is defined and this option is not, it will be read from the `main` field in that configuration file.

* `isolatedStorage`: boolean, optional

  * Enables per-test isolated storage. If enabled, any writes to storage performed in a test will be undone at the end of the test. The test's storage environment is copied from the containing suite, meaning `beforeAll()` hooks can be used to seed data. If this option is disabled, all tests will share the same storage. `.concurrent` tests are not supported when isolated storage is enabled. Refer to [Isolation and concurrency](/workers/testing/vitest-integration/isolation-and-concurrency/) for more information on the isolation model.
  * Defaults to `true`.
<Details header="Illustrative example">

```ts
import { env } from "cloudflare:test";
import { beforeAll, beforeEach, describe, test, expect } from "vitest";

// Get the current list stored in a KV namespace
async function get(): Promise<string[]> {
	return (await env.NAMESPACE.get("list", "json")) ?? [];
}
// Add an item to the end of the list
async function append(item: string) {
	const value = await get();
	value.push(item);
	await env.NAMESPACE.put("list", JSON.stringify(value));
}

beforeAll(() => append("all"));
beforeEach(() => append("each"));

test("one", async () => {
	// Each test gets its own storage environment copied from the parent
	await append("one");
	expect(await get()).toStrictEqual(["all", "each", "one"]);
});
// `append("each")` and `append("one")` undone

test("two", async () => {
	await append("two");
	expect(await get()).toStrictEqual(["all", "each", "two"]);
});
// `append("each")` and `append("two")` undone

describe("describe", async () => {
	beforeAll(() => append("describe all"));
	beforeEach(() => append("describe each"));

	test("three", async () => {
		await append("three");
		expect(await get()).toStrictEqual([
			// All `beforeAll()`s run before `beforeEach()`s
			"all",
			"describe all",
			"each",
			"describe each",
			"three",
		]);
	});
	// `append("each")`, `append("describe each")` and `append("three")` undone

	test("four", async () => {
		await append("four");
		expect(await get()).toStrictEqual([
			"all",
			"describe all",
			"each",
			"describe each",
			"four",
		]);
	});
	// `append("each")`, `append("describe each")` and `append("four")` undone
});
```

</Details>

* `singleWorker`: boolean, optional

  * Runs all tests in this project serially in the same Worker, using the same module cache. This can significantly speed up execution if you have lots of small test files. Refer to the [Isolation and concurrency](/workers/testing/vitest-integration/isolation-and-concurrency/) page for more information on the isolation model.
  * Defaults to `false`.
* `miniflare`: `SourcelessWorkerOptions & { workers?: WorkerOptions[]; }` optional

  * Use this to provide configuration information that is typically stored within the [Wrangler configuration file](/workers/wrangler/configuration/), such as [bindings](/workers/runtime-apis/bindings/), [compatibility dates](/workers/configuration/compatibility-dates/), and [compatibility flags](/workers/configuration/compatibility-flags/). The `WorkerOptions` interface is defined [here](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions). Use the `main` option above to configure the entry point, instead of the Miniflare `script`, `scriptPath`, or `modules` options.
  * If your project makes use of multiple Workers, you can configure auxiliary Workers that run in the same `workerd` process as your tests and can be bound to. Auxiliary Workers are configured using the `workers` array, containing regular Miniflare [`WorkerOptions`](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions) objects. Note that unlike the `main` Worker, auxiliary Workers:
    * Cannot have TypeScript entrypoints. You must compile auxiliary Workers to JavaScript first. You can use the [`wrangler deploy --dry-run --outdir dist`](/workers/wrangler/commands/#deploy) command for this.
    * Use regular Workers module resolution semantics. Refer to the [Isolation and concurrency](/workers/testing/vitest-integration/isolation-and-concurrency/#modules) page for more information.
    * Cannot access the [`cloudflare:test`](/workers/testing/vitest-integration/test-apis/) module.
    * Do not require specific compatibility dates or flags.
    * Can be written with the [Service Worker syntax](/workers/reference/migrate-to-module-workers/#service-worker-syntax).
    * Are not affected by global mocks defined in your tests.
* `wrangler`: `{ configPath?: string; environment?: string; }` optional

  * Path to [Wrangler configuration file](/workers/wrangler/configuration/) to load `main`, [compatibility settings](/workers/configuration/compatibility-dates/) and [bindings](/workers/runtime-apis/bindings/) from. These options will be merged with the `miniflare` option above, with `miniflare` values taking precedence. For example, if your Wrangler configuration defined a [service binding](/workers/runtime-apis/bindings/service-bindings/) named `SERVICE` to a Worker named `service`, but you included `serviceBindings: { SERVICE(request) { return new Response("body"); } }` in the `miniflare` option, all requests to `SERVICE` in tests would return `body`. Note `configPath` accepts both `.toml` and `.json` files.
  * The `environment` option can be used to specify the [Wrangler environment](/workers/wrangler/environments/) to pick up bindings and variables from.

:::caution
You must define a compatibility date of `2022-10-31` or higher, and include [`nodejs_compat` in your compatibility flags](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) to use the Workers Vitest integration.
:::

## `WorkersPoolOptionsContext`

* `inject`: `typeof import("vitest").inject`

  * The same `inject()` function usually imported from the `vitest` module inside tests. This allows you to define `miniflare` configuration based on injected values from [`globalSetup`](https://vitest.dev/config/#globalsetup) scripts. Use this if you have a value in your configuration that is dynamically generated and only known at runtime of your tests. For example, a global setup script might start an upstream server on a random port. This port could be `provide()`d and then `inject()`ed in the configuration for an external service binding or [Hyperdrive](/hyperdrive/).
Refer to the [Hyperdrive recipe](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/hyperdrive) for an example project using this provide/inject approach. <Details header="Illustrative example"> ```ts // env.d.ts declare module "vitest" { interface ProvidedContext { port: number; } } // global-setup.ts import type { GlobalSetupContext } from "vitest/node"; export default function ({ provide }: GlobalSetupContext) { // Runs inside Node.js, could start server here... provide("port", 1337); return () => { /* ...then teardown here */ }; } // vitest.config.ts import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { globalSetup: ["./global-setup.ts"], pool: "@cloudflare/vitest-pool-workers", poolOptions: { workers: ({ inject }) => ({ miniflare: { hyperdrives: { DATABASE: `postgres://user:pass@example.com:${inject("port")}/db`, }, }, }), }, }, }); ``` </Details> ## `SourcelessWorkerOptions` Sourceless `WorkerOptions` type without `script`, `scriptPath`, or `modules` properties. Refer to the Miniflare [`WorkerOptions`](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions) type for more details. ```ts type SourcelessWorkerOptions = Omit< WorkerOptions, "script" | "scriptPath" | "modules" | "modulesRoot" >; ``` --- # Vitest integration URL: https://developers.cloudflare.com/workers/testing/vitest-integration/ import { DirectoryListing, Render } from "~/components" Information about the Workers Vitest integration - the recommended package for writing unit and integration tests for Workers. <DirectoryListing /> <Render file="testing-pages-functions" product="workers" /> --- # Debugging URL: https://developers.cloudflare.com/workers/testing/vitest-integration/debugging/ This guide shows you how to debug your Workers tests with Vitest. This is available with `@cloudflare/vitest-pool-workers` v0.7.5 or later. 
## Open inspector with Vitest

To start debugging, run Vitest with the following command and attach a debugger to port `9229`:

```sh
vitest --inspect --no-file-parallelism
```

## Customize the inspector port

By default, the inspector is opened on port `9229`. If you need to use a different port (for example, `3456`), you can run the following command:

```sh
vitest --inspect=3456 --no-file-parallelism
```

Alternatively, you can define it in your Vitest configuration file:

```ts
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
	test: {
		inspector: {
			port: 3456,
		},
		poolOptions: {
			workers: {
				// ...
			},
		},
	},
});
```

## Set up VS Code to use breakpoints

To set up VS Code for breakpoint debugging in your Worker tests, create a `.vscode/launch.json` file that contains the following configuration:

```json
{
	"configurations": [
		{
			"type": "node",
			"request": "launch",
			"name": "Open inspector with Vitest",
			"program": "${workspaceRoot}/node_modules/vitest/vitest.mjs",
			"console": "integratedTerminal",
			"args": ["--inspect=9229", "--no-file-parallelism"]
		},
		{
			"name": "Attach to Workers Runtime",
			"type": "node",
			"request": "attach",
			"port": 9229,
			"cwd": "/",
			"resolveSourceMapLocations": null,
			"attachExistingChildren": false,
			"autoAttachChildProcesses": false
		}
	],
	"compounds": [
		{
			"name": "Debug Workers tests",
			"configurations": ["Open inspector with Vitest", "Attach to Workers Runtime"],
			"stopAll": true
		}
	]
}
```

Select **Debug Workers tests** at the top of the **Run & Debug** panel to open an inspector with Vitest and attach a debugger to the Workers runtime. Then you can add breakpoints to your test files and start debugging.

---

# Isolation and concurrency

URL: https://developers.cloudflare.com/workers/testing/vitest-integration/isolation-and-concurrency/

Review how the Workers Vitest integration runs your tests, how it isolates tests from each other, and how it imports modules.
## Run tests When you run your tests with the Workers Vitest integration, Vitest will: 1. Read and evaluate your configuration file using Node.js. 2. Run any [`globalSetup`](https://vitest.dev/config/#globalsetup) files using Node.js. 3. Collect and sequence test files. 4. For each Vitest project, depending on its configured isolation and concurrency, start one or more [`workerd`](https://github.com/cloudflare/workerd) processes, each running one or more Workers. 5. Run [`setupFiles`](https://vitest.dev/config/#setupfiles) and test files in `workerd` using the appropriate Workers. 6. Watch for changes and re-run test files using the same Workers if the configuration has not changed. ## Isolation and concurrency models The [`isolatedStorage` and `singleWorker`](/workers/testing/vitest-integration/configuration/#workerspooloptions) configuration options both control isolation and concurrency. The Workers Vitest integration tries to minimise the number of `workerd` processes it starts, reusing Workers and their module caches between test runs where possible. The current implementation of isolated storage requires each `workerd` process to run one test file at a time, and does not support `.concurrent` tests. A copy of all auxiliary `workers` exists in each `workerd` process. By default, the `isolatedStorage` option is enabled. We recommend you enable the `singleWorker: true` option if you have lots of small test files. ### `isolatedStorage: true, singleWorker: false` (Default) In this model, a `workerd` process is started for each test file. Test files are executed concurrently but `.concurrent` tests are not supported. Each test will read/write from an isolated storage environment, and bind to its own set of auxiliary `workers`.  ### `isolatedStorage: true, singleWorker: true` In this model, a single `workerd` process is started with a single Worker for all test files. Test files are executed in serial and `.concurrent` tests are not supported. 
Each test will read/write from an isolated storage environment, and bind to the same auxiliary `workers`.



### `isolatedStorage: false, singleWorker: false`

In this model, a single `workerd` process is started with a Worker for each test file. Test files are executed concurrently and `.concurrent` tests are supported.

Every test will read/write from the same shared storage, and bind to the same auxiliary `workers`.



### `isolatedStorage: false, singleWorker: true`

In this model, a single `workerd` process is started with a single Worker for all test files. Test files are executed in serial but `.concurrent` tests are supported.

Every test will read/write from the same shared storage, and bind to the same auxiliary `workers`.



## Modules

Each Worker has its own module cache. As Workers are reused between test runs, their module caches are also reused. Vitest invalidates parts of the module cache at the start of each test run based on changed files.

The Workers Vitest pool works by running code inside a Cloudflare Worker that Vitest would usually run inside a [Node.js Worker thread](https://nodejs.org/api/worker_threads.html). To make this possible, the pool **automatically injects** the [`nodejs_compat`](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag), `no_nodejs_compat_v2`, and [`export_commonjs_default`](/workers/configuration/compatibility-flags/#commonjs-modules-do-not-export-a-module-namespace) compatibility flags. This is the minimal compatibility setup that still allows Vitest to run correctly, but without pulling in polyfills and globals that are not required. If you already have a Node.js compatibility flag defined in your configuration, Vitest Pool Workers will not try to add those flags.

:::caution
Using Vitest Pool Workers may cause your Worker to behave differently when deployed than during testing as the `nodejs_compat` flag is enabled by default.
This means that Node.js-specific APIs and modules are available when running your tests. However, Cloudflare Workers do not support these Node.js APIs in the production environment unless you specify this flag in your Worker configuration. If you do not have a `nodejs_compat` or `nodejs_compat_v2` flag in your configuration and you import a Node.js module in your Worker code, your tests may pass, but you will find that you will not be able to deploy this Worker, as the upload call (either via the REST API or via Wrangler) will throw an error. However, if you use Node.js globals that are not supported by the runtime, your Worker upload will be successful, but you may see errors in production code. Let's create a contrived example to illustrate the issue. The `wrangler.toml` does not specify either `nodejs_compat` or `nodejs_compat_v2`: ```toml name = "test" main = "src/index.ts" compatibility_date = "2024-12-16" # no nodejs_compat flags here ``` In our `src/index.ts` file, we use the `process` object, which is a Node.js global, unavailable in the Workerd runtime: ```typescript export default { async fetch(request, env, ctx): Promise<Response> { process.env.TEST = 'test'; return new Response(process.env.TEST); }, } satisfies ExportedHandler<Env>; ``` The test is a simple assertion that the Worker managed to use `process`. ```typescript it('responds with "test"', async () => { const response = await SELF.fetch('https://example.com/'); expect(await response.text()).toMatchInlineSnapshot(`"test"`); }); ``` Now, if we run `npm run test`, we see that the tests will _pass_: ``` ✓ test/index.spec.ts (1) ✓ responds with "test" Test Files 1 passed (1) Tests 1 passed (1) ``` And we can run `wrangler dev` and `wrangler deploy` without issues. It _looks like_ our code is fine. However, this code will fail in production as `process` is not available in the Workerd runtime. 
To fix the issue, we either need to avoid using Node.js APIs, or add the `nodejs_compat` flag to our Wrangler configuration. ::: --- # Known issues URL: https://developers.cloudflare.com/workers/testing/vitest-integration/known-issues/ The Workers Vitest pool is currently in open beta. The following are issues Cloudflare is aware of and fixing: ### Coverage Native code coverage via [V8](https://v8.dev/blog/javascript-code-coverage) is not supported. You must use instrumented code coverage via [Istanbul](https://istanbul.js.org/) instead. Refer to the [Vitest Coverage documentation](https://vitest.dev/guide/coverage) for setup instructions. ### Fake timers Vitest's [fake timers](https://vitest.dev/guide/mocking.html#timers) do not apply to KV, R2 and cache simulators. For example, you cannot expire a KV key by advancing fake time. ### Dynamic `import()` statements with `SELF` and Durable Objects Dynamic `import()` statements do not work inside `export default { ... }` handlers when writing integration tests with `SELF`, or inside Durable Object event handlers. You must import and call your handlers directly, or use static `import` statements in the global scope. ### Durable Object alarms Durable Object alarms are not reset between test runs and do not respect isolated storage. Ensure you delete or run all alarms with [`runDurableObjectAlarm()`](/workers/testing/vitest-integration/test-apis/#durable-objects) scheduled in each test before finishing the test. ### WebSockets Using WebSockets with Durable Objects with the [`isolatedStorage`](/workers/testing/vitest-integration/isolation-and-concurrency) flag turned on is not supported. You must set `isolatedStorage: false` in your `vitest.config.ts` file. 
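As a sketch of that workaround (assuming the rest of your `workers` options stay as they are in your project), the change is a single setting in `vitest.config.ts`:

```typescript
// vitest.config.ts — sketch: disable isolated storage so WebSocket +
// Durable Object tests can run. Note that writes now persist across
// tests, so clean up any shared state yourself.
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
	test: {
		poolOptions: {
			workers: {
				isolatedStorage: false,
				// ...your existing options, for example:
				// wrangler: { configPath: "./wrangler.toml" },
			},
		},
	},
});
```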
### Isolated storage When the `isolatedStorage` flag is enabled (the default), the test runner will undo any writes to the storage at the end of the test as detailed in the [isolation and concurrency documentation](/workers/testing/vitest-integration/isolation-and-concurrency/). However, Cloudflare recommends that you consider the following actions to avoid any common issues: #### Await all storage operations Always `await` all `Promise`s that read or write to storage services. ```ts // Example: Seed data beforeAll(async () => { await env.KV.put('message', 'test message'); await env.R2.put('file', 'hello-world'); }); ``` #### Explicitly signal resource disposal When calling RPC methods of a Service Worker or Durable Object that return non-primitive values (such as objects or classes extending `RpcTarget`), use the `using` keyword to explicitly signal when resources can be disposed of. See [this example test](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/rpc/test/unit.test.ts#L155) and refer to [explicit-resource-management](/workers/runtime-apis/rpc/lifecycle#explicit-resource-management) for more details. ```ts using result = await stub.getCounter(); ``` #### Consume response bodies When making requests via `fetch` or `R2.get()`, consume the entire response body, even if you are not asserting its content. 
For example:

```ts
test('check if file exists', async () => {
	await env.R2.put('file', 'hello-world');
	const response = await env.R2.get('file');
	expect(response).not.toBe(null);
	// Consume the response body even if you are not asserting it
	await response.text();
});
```

### Module resolution

If you encounter module resolution issues such as: `Error: Cannot use require() to import an ES Module` or `Error: No such module`, you can bundle these dependencies using the [deps.optimizer](https://vitest.dev/config/#deps-optimizer) option:

```ts
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
	test: {
		deps: {
			optimizer: {
				ssr: {
					enabled: true,
					include: ["your-package-name"],
				},
			},
		},
		poolOptions: {
			workers: {
				// ...
			},
		},
	},
});
```

You can find an example in the [Recipes](/workers/testing/vitest-integration/recipes) page.

### Importing modules from global setup file

Although Vitest is set up to resolve packages for the `workerd` runtime, it runs your global setup file in the Node.js environment. This can cause issues when importing packages like [Postgres.js](https://github.com/cloudflare/workers-sdk/issues/6465), which exports a non-Node version for `workerd`.

To work around this, you can create a wrapper that uses Vite's SSR module loader to import the global setup file under the correct conditions. Then, adjust your Vitest configuration to point to this wrapper.
For example: ```ts // File: global-setup-wrapper.ts import { createServer } from "vite" // Import the actual global setup file with the correct setup const mod = await viteImport("./global-setup.ts") export default mod.default; // Helper to import the file with default node setup async function viteImport(file: string) { const server = await createServer({ root: import.meta.dirname, configFile: false, server: { middlewareMode: true, hmr: false, watch: null, ws: false }, optimizeDeps: { noDiscovery: true }, clearScreen: false, }); const mod = await server.ssrLoadModule(file); await server.close(); return mod; } ``` ```ts // File: vitest.config.ts import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { // Replace the globalSetup with the wrapper file globalSetup: ["./global-setup-wrapper.ts"], poolOptions: { workers: { // ... }, }, }, }); ``` --- # Recipes URL: https://developers.cloudflare.com/workers/testing/vitest-integration/recipes/ Recipes are examples that help demonstrate how to write unit tests and integration tests for Workers projects using the [`@cloudflare/vitest-pool-workers`](https://www.npmjs.com/package/@cloudflare/vitest-pool-workers) package. 
- [Basic unit and integration tests for Workers using `SELF`](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/basics-unit-integration-self) - [Basic unit and integration tests for Pages Functions using `SELF`](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/pages-functions-unit-integration-self) - [Basic integration tests using an auxiliary Worker](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/basics-integration-auxiliary) - [Basic integration test for Workers with static assets](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/workers-assets) - [Isolated tests using KV, R2 and the Cache API](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/kv-r2-caches) - [Isolated tests using D1 with migrations](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/d1) - [Isolated tests using Durable Objects with direct access](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/durable-objects) - [Tests using Queue producers and consumers](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/queues) - [Tests using Hyperdrive with a Vitest managed TCP server](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/hyperdrive) - [Tests using declarative/imperative outbound request mocks](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/request-mocking) - [Tests using multiple auxiliary Workers and request mocks](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/multiple-workers) - [Tests importing WebAssembly modules](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/web-assembly) - [Tests using JSRPC with 
entrypoints and Durable Objects](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/rpc) - [Integration test with static assets and Puppeteer](https://github.com/GregBrimble/puppeteer-vitest-workers-assets) - [Resolving modules with Vite Dependency Pre-Bundling](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/module-resolution) --- # Test APIs URL: https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/ The Workers Vitest integration provides runtime helpers for writing tests in the `cloudflare:test` module. The `cloudflare:test` module is provided by the `@cloudflare/vitest-pool-workers` package, but can only be imported from test files that execute in the Workers runtime. ## `cloudflare:test` module definition * <code>env</code>: import("cloudflare:test").ProvidedEnv * Exposes the [`env` object](/workers/runtime-apis/handlers/fetch/#parameters) for use as the second argument passed to ES modules format exported handlers. This provides access to [bindings](/workers/runtime-apis/bindings/) that you have defined in your [Vitest configuration file](/workers/testing/vitest-integration/configuration/). <br/> ```js import { env } from "cloudflare:test"; it("uses binding", async () => { await env.KV_NAMESPACE.put("key", "value"); expect(await env.KV_NAMESPACE.get("key")).toBe("value"); }); ``` To configure the type of this value, use an ambient module type: ```ts declare module "cloudflare:test" { interface ProvidedEnv { KV_NAMESPACE: KVNamespace; } // ...or if you have an existing `Env` type... interface ProvidedEnv extends Env {} } ``` * <code>SELF</code>: Fetcher * [Service binding](/workers/runtime-apis/bindings/service-bindings/) to the default export defined in the `main` Worker. Use this to write integration tests against your Worker. The `main` Worker runs in the same isolate/context as tests so any global mocks will apply to it too. 
<br/> ```js import { SELF } from "cloudflare:test"; it("dispatches fetch event", async () => { const response = await SELF.fetch("https://example.com"); expect(await response.text()).toMatchInlineSnapshot(...); }); ``` * <code>fetchMock</code>: import("undici").MockAgent * Declarative interface for mocking outbound `fetch()` requests. Deactivated by default and reset before running each test file. Refer to [`undici`'s `MockAgent` documentation](https://undici.nodejs.org/#/docs/api/MockAgent) for more information. Note this only mocks `fetch()` requests for the current test runner Worker. Auxiliary Workers should mock `fetch()`es using the Miniflare `fetchMock`/`outboundService` options. Refer to [Configuration](/workers/testing/vitest-integration/configuration/#workerspooloptions) for more information. <br/> ```js import { fetchMock } from "cloudflare:test"; import { beforeAll, afterEach, it, expect } from "vitest"; beforeAll(() => { // Enable outbound request mocking... fetchMock.activate(); // ...and throw errors if an outbound request isn't mocked fetchMock.disableNetConnect(); }); // Ensure we matched every mock we defined afterEach(() => fetchMock.assertNoPendingInterceptors()); it("mocks requests", async () => { // Mock the first request to `https://example.com` fetchMock .get("https://example.com") .intercept({ path: "/" }) .reply(200, "body"); const response = await fetch("https://example.com/"); expect(await response.text()).toBe("body"); }); ``` ### Events * <code>createExecutionContext()</code>: ExecutionContext * Creates an instance of the [`context` object](/workers/runtime-apis/handlers/fetch/#parameters) for use as the third argument to ES modules format exported handlers. * <code>waitOnExecutionContext(ctx:ExecutionContext)</code>: Promise\<void> * Use this to wait for all Promises passed to `ctx.waitUntil()` to settle, before running test assertions on any side effects. 
Only accepts instances of `ExecutionContext` returned by `createExecutionContext()`. <br/> ```ts import { env, createExecutionContext, waitOnExecutionContext } from "cloudflare:test"; import { it, expect } from "vitest"; import worker from "./index.mjs"; it("calls fetch handler", async () => { const request = new Request("https://example.com"); const ctx = createExecutionContext(); const response = await worker.fetch(request, env, ctx); await waitOnExecutionContext(ctx); expect(await response.text()).toMatchInlineSnapshot(...); }); ``` * <code>createScheduledController(options?:FetcherScheduledOptions)</code>: ScheduledController * Creates an instance of `ScheduledController` for use as the first argument to modules-format [`scheduled()`](/workers/runtime-apis/handlers/scheduled/) exported handlers. <br/> ```ts import { env, createScheduledController, createExecutionContext, waitOnExecutionContext } from "cloudflare:test"; import { it, expect } from "vitest"; import worker from "./index.mjs"; it("calls scheduled handler", async () => { const ctrl = createScheduledController({ scheduledTime: new Date(1000), cron: "30 * * * *" }); const ctx = createExecutionContext(); await worker.scheduled(ctrl, env, ctx); await waitOnExecutionContext(ctx); }); ``` * <code>createMessageBatch(queueName:string, messages:ServiceBindingQueueMessage\[])</code>: MessageBatch * Creates an instance of `MessageBatch` for use as the first argument to modules-format [`queue()`](/queues/configuration/javascript-apis/#consumer) exported handlers. * <code>getQueueResult(batch:MessageBatch, ctx:ExecutionContext)</code>: Promise\<FetcherQueueResult> * Gets the acknowledged/retry state of messages in the `MessageBatch`, and waits for all `ExecutionContext#waitUntil()`ed `Promise`s to settle. Only accepts instances of `MessageBatch` returned by `createMessageBatch()`, and instances of `ExecutionContext` returned by `createExecutionContext()`. 
<br/> ```ts import { env, createMessageBatch, createExecutionContext, getQueueResult } from "cloudflare:test"; import { it, expect } from "vitest"; import worker from "./index.mjs"; it("calls queue handler", async () => { const batch = createMessageBatch("my-queue", [ { id: "message-1", timestamp: new Date(1000), body: "body-1" } ]); const ctx = createExecutionContext(); await worker.queue(batch, env, ctx); const result = await getQueueResult(batch, ctx); expect(result.ackAll).toBe(false); expect(result.retryBatch).toMatchObject({ retry: false }); expect(result.explicitAcks).toStrictEqual(["message-1"]); expect(result.retryMessages).toStrictEqual([]); }); ``` ### Durable Objects * <code>runInDurableObject\<O extends DurableObject, R>(stub:DurableObjectStub, callback:(instance: O, state: DurableObjectState) => R | Promise\<R>)</code>: Promise\<R> * Runs the provided `callback` inside the Durable Object that corresponds to the provided `stub`. <br/> This temporarily replaces your Durable Object's `fetch()` handler with `callback`, then sends a request to it, returning the result. This can be used to call/spy-on Durable Object methods or seed/get persisted data. Note this can only be used with `stub`s pointing to Durable Objects defined in the `main` Worker. <br/> ```ts export class Counter { constructor(readonly state: DurableObjectState) {} async fetch(request: Request): Promise<Response> { let count = (await this.state.storage.get<number>("count")) ?? 
0; void this.state.storage.put("count", ++count); return new Response(count.toString()); } } ``` ```ts import { env, runInDurableObject } from "cloudflare:test"; import { it, expect } from "vitest"; import { Counter } from "./index.ts"; it("increments count", async () => { const id = env.COUNTER.newUniqueId(); const stub = env.COUNTER.get(id); let response = await stub.fetch("https://example.com"); expect(await response.text()).toBe("1"); response = await runInDurableObject(stub, async (instance: Counter, state) => { expect(instance).toBeInstanceOf(Counter); expect(await state.storage.get<number>("count")).toBe(1); const request = new Request("https://example.com"); return instance.fetch(request); }); expect(await response.text()).toBe("2"); }); ``` * <code>runDurableObjectAlarm(stub:DurableObjectStub)</code>: Promise\<boolean> * Immediately runs and removes the Durable Object pointed to by `stub`'s alarm if one is scheduled. Returns `true` if an alarm ran, and `false` otherwise. Note this can only be used with `stub`s pointing to Durable Objects defined in the `main` Worker. * <code>listDurableObjectIds(namespace:DurableObjectNamespace)</code>: Promise\<DurableObjectId\[]> * Gets the IDs of all objects that have been created in the `namespace`. Respects `isolatedStorage` if enabled, meaning objects created in a different test will not be returned. 
<br/>

```ts
import { env, listDurableObjectIds } from "cloudflare:test";
import { it, expect } from "vitest";

it("increments count", async () => {
	const id = env.COUNTER.newUniqueId();
	const stub = env.COUNTER.get(id);
	const response = await stub.fetch("https://example.com");
	expect(await response.text()).toBe("1");

	const ids = await listDurableObjectIds(env.COUNTER);
	expect(ids.length).toBe(1);
	expect(ids[0].equals(id)).toBe(true);
});
```

### D1

* <code>applyD1Migrations(db:D1Database, migrations:D1Migration\[], migrationsTableName?:string)</code>: Promise\<void>

  * Applies all unapplied [D1 migrations](/d1/reference/migrations/) stored in the `migrations` array to database `db`, recording migrations state in the `migrationsTableName` table. `migrationsTableName` defaults to `d1_migrations`. Call the [`readD1Migrations()`](/workers/testing/vitest-integration/configuration/#readd1migrationsmigrationspath) function from the `@cloudflare/vitest-pool-workers/config` package inside Node.js to get the `migrations` array. Refer to the [D1 recipe](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/d1) for an example project using migrations.

---

# Automate analytics reporting with Cloudflare Workers and email routing

URL: https://developers.cloudflare.com/workers/tutorials/automated-analytics-reporting/

import { Render, PackageManagers, TabItem, Tabs, WranglerConfig } from "~/components";

In this tutorial, you will create a [Cloudflare Worker](https://workers.cloudflare.com/) that fetches analytics data about your account from Cloudflare's [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/). You will be able to view the account analytics data in your browser and receive a scheduled email report.

You will learn:

1. How to create a Worker using the `c3` CLI.
2. How to fetch analytics data from Cloudflare's GraphQL Analytics API.
3. How to send an email with a Worker.
4.
How to schedule the Worker to run at a specific time.
5. How to store secrets and environment variables in your Worker.
6. How to test the Worker locally.
7. How to deploy the Worker to Cloudflare's edge network.

## Prerequisites

Before you start, make sure you:

<Render file="prereqs" product="workers" />

3. [Add a domain](/fundamentals/setup/manage-domains/add-site/) to your Cloudflare account.
4. [Enable Email Routing](/email-routing/get-started/enable-email-routing/) for your domain.
5. Create a Cloudflare [Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/).

## 1. Create a Worker

While you can create a Worker using the Cloudflare dashboard, creating a Worker using the `c3` CLI is recommended as it provides a more streamlined development experience and allows you to test your Worker locally.

First, use the `c3` CLI to create a new Cloudflare Workers project.

<PackageManagers type="create" pkg="cloudflare@latest" args={"account-analytics"} />

In this tutorial, name your Worker `account-analytics`.

<Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} />

Now, the Worker is set up. Move into your project directory:

```sh
cd account-analytics
```

To continue with this tutorial, install the [`mimetext`](https://www.npmjs.com/package/mimetext) package:

<Tabs> <TabItem label="pnpm">

```sh
pnpm install mimetext
```

</TabItem> <TabItem label="npm">

```sh
npm install mimetext
```

</TabItem> <TabItem label="yarn">

```sh
yarn add mimetext
```

</TabItem> </Tabs>

## 2. Update Wrangler configuration file

[`wrangler.jsonc`](/workers/wrangler/configuration/) contains the configuration for your Worker. It was created when you ran the `c3` CLI.
Open `wrangler.jsonc` in your code editor and update it with the following configuration:

<WranglerConfig>

```toml
name = "account-analytics"
main = "src/index.js"
compatibility_date = "2024-11-01"
compatibility_flags = ["nodejs_compat"]

# Set destination_address to the email address where you want to receive the report
send_email = [
  {name = "ANALYTICS_EMAIL", destination_address = "<>"}
]

# Schedule the Worker to run every day at 10:00 AM
[triggers]
crons = ["0 10 * * *"]

# Enable observability to view Worker logs
[observability]
enabled = true

[vars]
# This value shows the name of the sender in the email
SENDER_NAME = "Cloudflare Analytics Worker"
# This email address will be used as the sender of the email
SENDER_EMAIL = "<>"
# This email address will be used as the recipient of the email
RECIPIENT_EMAIL = "<>"
# This value will be used as the subject of the email
EMAIL_SUBJECT = "Cloudflare Analytics Report"
```

</WranglerConfig>

Before you continue, update the following:

1. `destination_address`: Enter the email address where you want to receive the analytics report.
2. `[vars]`: Enter the environment variable values you want to use in your Worker.

:::note[IMPORTANT]
`destination_address` and `RECIPIENT_EMAIL` **must** contain an [Email Routing verified email](/email-routing/get-started/enable-email-routing/) address.

`SENDER_EMAIL` **must** be an email address on a domain that is added to your Cloudflare account and has Email Routing enabled.
:::

## 3. Update the Worker code

You will now add the code step by step to the `src/index.js` file. This tutorial will explain each part of the code.
### Add the required modules and Handlers While you are in your project directory, open `src/index.js` in your code editor and update it with the following code: ```js // Import required modules for email handling import { EmailMessage } from "cloudflare:email"; import { createMimeMessage } from "mimetext"; export default { // HTTP request handler - This Handler is invoked when the Worker is accessed via HTTP async fetch(request, env, ctx) { try { const analyticsData = await fetchAnalytics(env); const formattedContent = formatContent( analyticsData.data, analyticsData.formattedDate, ); return new Response(formattedContent, { headers: { "Content-Type": "text/plain" }, }); } catch (error) { console.error("Error:", error); return new Response(`Error: ${error.message}`, { status: 500, headers: { "Content-Type": "text/plain" }, }); } }, // Scheduled task handler - This Handler is invoked via a Cron Trigger async scheduled(event, env, ctx) { try { const analyticsData = await fetchAnalytics(env); const formattedContent = formatContent( analyticsData.data, analyticsData.formattedDate, ); await sendEmail(env, formattedContent); console.log("Analytics email sent successfully"); } catch (error) { console.error("Failed to send analytics email:", error); } }, }; ``` The code above defines two [Worker Handlers](/workers/runtime-apis/handlers/): - `fetch`: This Handler executes when the Worker is accessed via HTTP. It fetches the analytics data, formats it and returns it as a response. - `scheduled`: This Handler executes at the scheduled time. It fetches the analytics data, formats it and sends an email with the analytics data. 
### Add the analytics fetching function Add the following function to the `src/index.js` file, below the Handlers: ```js async function fetchAnalytics(env) { // Calculate yesterday's date for the report and format it for display const yesterday = new Date(); yesterday.setDate(yesterday.getDate() - 1); const dateString = yesterday.toISOString().split("T")[0]; const formattedDate = yesterday.toLocaleDateString("en-US", { weekday: "long", year: "numeric", month: "long", day: "numeric", }); // Fetch analytics data from Cloudflare's GraphQL Analytics API const response = await fetch(`https://api.cloudflare.com/client/v4/graphql`, { method: "POST", headers: { Authorization: `Bearer ${env.CF_API_TOKEN}`, "Content-Type": "application/json", }, body: JSON.stringify({ query: ` query GetAnalytics($accountTag: String!, $date: String!) { viewer { accounts(filter: { accountTag: $accountTag }) { httpRequests1dGroups(limit: 1, filter: { date: $date }) { sum { requests pageViews bytes encryptedRequests encryptedBytes cachedRequests cachedBytes threats browserMap { pageViews uaBrowserFamily } responseStatusMap { requests edgeResponseStatus } clientHTTPVersionMap { requests clientHTTPProtocol } } } } } } `, variables: { accountTag: env.CF_ACCOUNT_ID, date: dateString, }, }), }); const data = await response.json(); if (data.errors) { throw new Error(`GraphQL Error: ${JSON.stringify(data.errors)}`); } return { data, formattedDate }; } ``` In the code above, the `fetchAnalytics` function fetches analytics data from Cloudflare's GraphQL Analytics API. The `fetchAnalytics` function calculates yesterday's date, formats the date for display, and sends a GraphQL query to the Analytics API to fetch the analytics data. :::note `env.CF_API_TOKEN` and `env.CF_ACCOUNT_ID` are [Worker Secrets](/workers/configuration/secrets/). These variables are used to authenticate the request and fetch the analytics data for the specified account. 
You will add these secrets to your Worker after the code is complete. ::: This function returns the **raw** data for the previous day, including: - Traffic overview data (Total requests, Page views and Blocked threats) - Bandwidth data (Total bandwidth, Encrypted bandwidth and Cached bandwidth) - Caching and Encryption data (Encrypted requests, Cached requests, Encryption rate and Cache rate) - Browser data (Page views by browser) - HTTP status code data (Requests by status code) - HTTP version data (Requests by HTTP version) This data will be used to generate the analytics report. In the following step, you will add the function that formats this data. ### Add the data formatting function Add the following function to the `src/index.js` file, below the `fetchAnalytics` function: ```js function formatContent(analyticsData, formattedDate) { const stats = analyticsData.data.viewer.accounts[0].httpRequests1dGroups[0].sum; // Helper function to format bytes into human-readable format const formatBytes = (bytes) => { if (bytes === 0) return "0 Bytes"; const k = 1024; const sizes = ["Bytes", "KB", "MB", "GB", "TB"]; const i = Math.floor(Math.log(bytes) / Math.log(k)); return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + " " + sizes[i]; }; // Format browser statistics const browserData = stats.browserMap .sort((a, b) => b.pageViews - a.pageViews) .map((b) => ` ${b.uaBrowserFamily}: ${b.pageViews} views`) .join("\n"); // Format HTTP status code statistics const statusData = stats.responseStatusMap .sort((a, b) => b.requests - a.requests) .map((s) => ` ${s.edgeResponseStatus}: ${s.requests} requests`) .join("\n"); // Format HTTP version statistics const httpVersionData = stats.clientHTTPVersionMap .sort((a, b) => b.requests - a.requests) .map((h) => ` ${h.clientHTTPProtocol}: ${h.requests} requests`) .join("\n"); // Return formatted report return ` CLOUDFLARE ANALYTICS REPORT ========================== Generated for: ${formattedDate} TRAFFIC OVERVIEW --------------- 
Total Requests: ${stats.requests.toLocaleString()} Page Views: ${stats.pageViews.toLocaleString()} Security Threats Blocked: ${stats.threats.toLocaleString()} BANDWIDTH --------- Total Bandwidth: ${formatBytes(stats.bytes)} Encrypted Bandwidth: ${formatBytes(stats.encryptedBytes)} Cached Bandwidth: ${formatBytes(stats.cachedBytes)} CACHING & ENCRYPTION ------------------- Total Requests: ${stats.requests.toLocaleString()} Encrypted Requests: ${stats.encryptedRequests.toLocaleString()} Cached Requests: ${stats.cachedRequests.toLocaleString()} Encryption Rate: ${((stats.encryptedRequests / stats.requests) * 100).toFixed(1)}% Cache Rate: ${((stats.cachedRequests / stats.requests) * 100).toFixed(1)}% BROWSERS -------- ${browserData} HTTP STATUS CODES --------------- ${statusData} HTTP VERSIONS ------------ ${httpVersionData} `; } ``` At this point, you have defined the `fetchAnalytics` function that fetches raw analytics data from Cloudflare's GraphQL Analytics API and the `formatContent` function that formats the analytics data into a human-readable report. ### Add the email sending function Add the following function to the `src/index.js` file, below the `formatContent` function: ```js async function sendEmail(env, content) { // Create and configure email message const msg = createMimeMessage(); msg.setSender({ name: env.SENDER_NAME, addr: env.SENDER_EMAIL, }); msg.setRecipient(env.RECIPIENT_EMAIL); msg.setSubject(env.EMAIL_SUBJECT); msg.addMessage({ contentType: "text/plain", data: content, }); // Send email using Cloudflare Email Routing service const message = new EmailMessage( env.SENDER_EMAIL, env.RECIPIENT_EMAIL, msg.asRaw(), ); try { await env.ANALYTICS_EMAIL.send(message); } catch (error) { throw new Error(`Failed to send email: ${error.message}`); } } ``` This function sends an email with the formatted analytics data to the specified recipient email address using Cloudflare's Email Routing service. 
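Because the `formatBytes` helper inside `formatContent` is a pure function, you can sanity-check its rounding behavior outside the Worker. A quick sketch (the helper is copied verbatim from the code above):

```javascript
// Same helper as in formatContent: converts a raw byte count
// into a human-readable string with at most two decimal places.
const formatBytes = (bytes) => {
  if (bytes === 0) return "0 Bytes";
  const k = 1024;
  const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + " " + sizes[i];
};

console.log(formatBytes(0));       // "0 Bytes"
console.log(formatBytes(1024));    // "1 KB"
console.log(formatBytes(1536));    // "1.5 KB"
console.log(formatBytes(5242880)); // "5 MB"
```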
:::note The `sendEmail` function uses multiple environment variables that are set in the Wrangler file. ::: ## 4. Test the Worker Now that you have updated the Worker code, you can test it locally using the `wrangler dev` command. This command starts a local server that runs your Worker code. Before you run the Worker, you need to add two Worker secrets: - `CF_API_TOKEN`: The Cloudflare GraphQL Analytics API token you created earlier. - `CF_ACCOUNT_ID`: Your Cloudflare account ID. You can find your account ID in the Cloudflare dashboard under the **Workers & Pages** Overview tab. Create a `.dev.vars` file in the root of your project directory and add the following: ```sh CF_API_TOKEN=YOUR_CLOUDFLARE_API_TOKEN CF_ACCOUNT_ID=YOUR_CLOUDFLARE_ACCOUNT_ID ``` Now, run the Worker locally: ```sh npx wrangler dev --remote ``` Open `http://localhost:8787` in your browser. The browser will display the analytics data. ## 5. Deploy the Worker and Worker secrets Once you have tested the Worker locally, you can deploy your Worker to Cloudflare's edge network: ```sh npx wrangler deploy ``` The command will output the URL where your Worker is deployed. Before you can use this URL in your browser to view the analytics data, you need to add the two Worker secrets you already have locally to your deployed Worker: ```sh npx wrangler secret put <secret> ``` Replace `<secret>` with the name of the secret you want to add. Repeat this command for the `CF_API_TOKEN` and `CF_ACCOUNT_ID` secrets. Once you have added the secrets, preview your analytics data at `account-analytics.<YOUR_SUBDOMAIN>.workers.dev`. You will also receive an email report at the specified recipient email address every day at 10:00 AM. If you want to disable the public URL for your Worker, you can do so by following these steps: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com). 2. In Account Home, select **Workers & Pages**, then select the `account-analytics` Worker. 3. Go to **Settings** > **Domains & Routes**. 4.
Select **Disable** to disable the public `account-analytics.<YOUR_SUBDOMAIN>.workers.dev` URL. You have successfully created, tested and deployed a Worker that fetches analytics data from Cloudflare's GraphQL Analytics API and sends an email report via Email Routing. ## Related resources To build more with Workers, refer to [Tutorials](/workers/tutorials/). If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with other developers and the Cloudflare team. --- # Build a todo list Jamstack application URL: https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will build a todo list application using HTML, CSS, and JavaScript. The application data will be stored in [Workers KV](/kv/api/).  Before starting this project, you should have some experience with HTML, CSS, and JavaScript. You will learn: 1. How building with Workers allows you to focus on writing code and ship finished products. 2. How the addition of Workers KV makes this tutorial a great introduction to building full, data-driven applications. If you would like to see the finished code for this project, find the [project on GitHub](https://github.com/lauragift21/cloudflare-workers-todos) and refer to the [live demo](https://todos.examples.workers.dev/) to review what you will be building. <Render file="tutorials-before-you-start" /> ## 1. Create a new Workers project First, use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI tool to create a new Cloudflare Workers project named `todos`. In this tutorial, you will use the default `Hello World` template to create a Workers project.
<PackageManagers type="create" pkg="cloudflare@latest" args={"todos"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} /> Move into your newly created directory: ```sh cd todos ``` Inside of your new `todos` Worker project directory, `index.js` represents the entry point to your Cloudflare Workers application. All incoming HTTP requests to a Worker are passed to the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) as a [request](/workers/runtime-apis/request/) object. After a request is received by the Worker, the response your application constructs will be returned to the user. This tutorial will guide you through understanding how the request/response pattern works and how you can use it to build fully featured applications. ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` In your default `index.js` file, you can see that request/response pattern in action. The `fetch` handler constructs a new `Response` with the body text `'Hello World!'`. When a Worker receives a `request`, the Worker returns the newly constructed response to the client. Your Worker will serve new responses directly from [Cloudflare's global network](https://www.cloudflare.com/network) instead of continuing to your origin server. A standard server would accept requests and return responses. Cloudflare Workers allows you to respond by constructing responses directly on the Cloudflare global network. ## 2. Review project details Any project you deploy to Cloudflare Workers can make use of modern JavaScript tooling like [ES modules](/workers/reference/migrate-to-module-workers/), `npm` packages, and [`async`/`await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) functions to build your application.
In addition to writing Workers, you can use Workers to [build full applications](/workers/tutorials/build-a-slackbot/) using the same tooling and process as in this tutorial. In this tutorial, you will build a todo list application running on Workers that allows reading data from a [KV](/kv/) store and using the data to populate an HTML response to send to the client. The work needed to create this application is split into three tasks: 1. Write data to KV. 2. Render data from KV. 3. Add todos from the application UI. For the remainder of this tutorial you will complete each task, iterating on your application, and then publish it to your own domain. ## 3. Write data to KV To begin, you need to understand how to populate your todo list with actual data. To do this, use [Cloudflare Workers KV](/kv/) — a key-value store that you can access inside of your Worker to read and write data. To get started with KV, set up a namespace. All of your cached data will be stored inside that namespace and, with configuration, you can access that namespace inside the Worker with a predefined variable. Use Wrangler to create a new namespace called `TODOS` with the [`kv namespace create` command](/workers/wrangler/commands/#kv-namespace-create) and get the associated namespace ID by running the following command in your terminal: ```sh title="Create a new KV namespace" npx wrangler kv namespace create "TODOS" --preview ``` The `--preview` flag creates a preview namespace, which is used during development instead of a production namespace. Namespaces can be added to your application by defining them inside your Wrangler configuration.
Copy your newly created namespace ID, and in your [Wrangler configuration file](/workers/wrangler/configuration/), define a `kv_namespaces` key to set up your namespace: <WranglerConfig> ```toml kv_namespaces = [ {binding = "TODOS", id = "<YOUR_ID>", preview_id = "<YOUR_PREVIEW_ID>"} ] ``` </WranglerConfig> The defined namespace, `TODOS`, will now be available inside of your codebase. With that, it is time to understand the [KV API](/kv/api/). A KV namespace has three primary methods you can use to interface with your cache: `get`, `put`, and `delete`. Start storing data by defining an initial set of data, which you will put inside of the cache using the `put` method. The following example defines a `defaultData` object instead of an array of todo items. You may want to store metadata and other information inside of this cache object later on. Given that data object, use `JSON.stringify` to add a string into the cache: ```js export default { async fetch(request, env, ctx) { const defaultData = { todos: [ { id: 1, name: "Finish the Cloudflare Workers blog post", completed: false, }, ], }; await env.TODOS.put("data", JSON.stringify(defaultData)); return new Response("Hello World!"); }, }; ``` Workers KV is an eventually consistent, global datastore. Any writes within a region are immediately reflected within that same region but will not be immediately available in other regions. However, those writes will eventually be available everywhere and, at that point, Workers KV guarantees that data within each region will be consistent. Given the presence of data in the cache and the assumption that your cache is eventually consistent, this code needs a slight adjustment: the application should check the cache and use its value, if the key exists. If it does not, you will use `defaultData` as the data source for now (it should be set in the future) and write it to the cache for future use. 
After breaking out the code into a few functions for simplicity, the result looks like this: ```js export default { async fetch(request, env, ctx) { const defaultData = { todos: [ { id: 1, name: "Finish the Cloudflare Workers blog post", completed: false, }, ], }; const setCache = (data) => env.TODOS.put("data", data); const getCache = () => env.TODOS.get("data"); let data; const cache = await getCache(); if (!cache) { await setCache(JSON.stringify(defaultData)); data = defaultData; } else { data = JSON.parse(cache); } return new Response(JSON.stringify(data)); }, }; ``` ## Render data from KV Given the presence of data in your code, which is the cached data object for your application, you should take this data and render it in a user interface. To do this, make a new `html` variable in your Workers script and use it to build up a static HTML template that you can serve to the client. In `fetch`, construct a new `Response` with a `Content-Type: text/html` header and serve it to the client: ```js const html = `<!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width,initial-scale=1" /> <title>Todos</title> </head> <body> <h1>Todos</h1> </body> </html> `; async fetch (request, env, ctx) { // previous code return new Response(html, { headers: { 'Content-Type': 'text/html' } }); } ``` You have a static HTML site being rendered and you can begin populating it with data. In the body, add a `div` tag with an `id` of `todos`: ```js null {10} const html = `<!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width,initial-scale=1" /> <title>Todos</title> </head> <body> <h1>Todos</h1> <div id="todos"></div> </body> </html> `; ``` Add a `<script>` element at the end of the body content that takes a `todos` array. 
For each `todo` in the array, create a `div` element and append it to the `todos` HTML element: ```js null {12,13,14,15,16,17,18,19,20} const html = `<!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width,initial-scale=1" /> <title>Todos</title> </head> <body> <h1>Todos</h1> <div id="todos"></div> </body> <script> window.todos = [] var todoContainer = document.querySelector("#todos") window.todos.forEach(todo => { var el = document.createElement("div") el.textContent = todo.name todoContainer.appendChild(el) }) </script> </html> `; ``` Your static page can take in `window.todos` and render HTML based on it, but you have not actually passed in any data from KV. To do this, you will need to make a few changes. First, your `html` variable will change to a function. The function will take in a `todos` argument, which will populate the `window.todos` variable in the above code sample: ```js null {1,6} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> window.todos = ${todos} var todoContainer = document.querySelector("#todos") // ... </script> </html> `; ``` In `fetch`, use the retrieved KV data to call the `html` function and generate a `Response` based on it: ```js null {2} async fetch (request, env, ctx) { const body = html(JSON.stringify(data.todos).replace(/</g, '\\u003c')); return new Response(body, { headers: { 'Content-Type': 'text/html' }, }); } ``` ## 4. Add todos from the user interface (UI) At this point, you have built a Cloudflare Worker that takes data from Cloudflare KV and renders a static page based on that Worker. That static page reads data and generates a todo list based on that data. The remaining task is creating todos from inside the application UI. You can add todos using the KV API — update the cache by running `env.TODOS.put("data", newData)`. To update a todo item, you will add a second handler in your Workers script, designed to watch for `PUT` requests to `/`.
When a request body is received at that URL, the Worker will send the new todo data to your KV store. Add this new functionality in `fetch`: if the request method is a PUT, it will take the request body and update the cache. ```js null {5,6,7,8,9,10,11,12,13,14} export default { async fetch(request, env, ctx) { const setCache = (data) => env.TODOS.put("data", data); if (request.method === "PUT") { const body = await request.text(); try { JSON.parse(body); await setCache(body); return new Response(body, { status: 200 }); } catch (err) { return new Response(err, { status: 500 }); } } // previous code }, }; ``` Check that the request is a `PUT` and wrap the remainder of the code in a `try/catch` block. First, parse the body of the request coming in, ensuring that it is JSON, before you update the cache with the new data and return it to the user. If anything goes wrong, return a `500` status code. If the route is hit with an HTTP method other than `PUT` — for example, `POST` or `DELETE` — return a `404` error. With this script, you can now add some dynamic functionality to your HTML page to actually hit this route. First, create an input for your todo name and a button for submitting the todo. ```js null {5,6,7,8} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <div> <input type="text" name="name" placeholder="A new todo"></input> <button id="create">Create</button> </div> <!-- existing script --> </html> `; ``` Given that input and button, add a corresponding JavaScript function to watch for clicks on the button — once the button is clicked, the browser will `PUT` to `/` and submit the todo. 
```js null {8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> // Existing JavaScript code var createTodo = function() { var input = document.querySelector("input[name=name]") if (input.value.length) { todos = [].concat(todos, { id: todos.length + 1, name: input.value, completed: false, }) fetch("/", { method: "PUT", body: JSON.stringify({ todos: todos }), }) } } document.querySelector("#create").addEventListener("click", createTodo) </script> </html> `; ``` This code updates the cache. Remember that the KV cache is eventually consistent — even if you were to update your Worker to read from the cache and return it, you have no guarantees it will actually be up to date. Instead, update the list of todos locally, by taking your original code for rendering the todo list, making it a reusable function called `populateTodos`, and calling it when the page loads and when the cache request has finished: ```js null {6,7,8,9,10,11,12,13,14,15,16} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> var populateTodos = function() { var todoContainer = document.querySelector("#todos") todoContainer.innerHTML = null window.todos.forEach(todo => { var el = document.createElement("div") el.textContent = todo.name todoContainer.appendChild(el) }) } populateTodos() var createTodo = function() { var input = document.querySelector("input[name=name]") if (input.value.length) { todos = [].concat(todos, { id: todos.length + 1, name: input.value, completed: false, }) fetch("/", { method: "PUT", body: JSON.stringify({ todos: todos }), }) populateTodos() input.value = "" } } document.querySelector("#create").addEventListener("click", createTodo) </script> `; ``` With the client-side code in place, deploying the new version of the function should put all these pieces together. The result is an actual dynamic todo list. ## 5. 
Update todos from the application UI For the final piece of your todo list, you need to be able to update todos — specifically, marking them as completed. Luckily, a great deal of the infrastructure for this work is already in place. You can update the todo list data in the cache, as evidenced by your `createTodo` function. Performing updates on a todo is more of a client-side task than a Worker-side one. To start, the `populateTodos` function can be updated to generate a `div` for each todo. In addition, move the name of the todo into a child element of that `div`: ```js null {11,12,13} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> var populateTodos = function() { var todoContainer = document.querySelector("#todos") todoContainer.innerHTML = null window.todos.forEach(todo => { var el = document.createElement("div") var name = document.createElement("span") name.textContent = todo.name el.appendChild(name) todoContainer.appendChild(el) }) } </script> `; ``` You have designed the client-side part of this code to handle an array of todos and render a list of HTML elements. There are a number of things that you have been including that you have not quite had a use for yet – specifically, the IDs and the todo's completed state. These things work well together to actually support updating todos in the application UI. To start, it would be useful to attach the ID of each todo to its element in the HTML. By doing this, you can then refer to the element later to match it to the corresponding todo in the JavaScript part of your code. Data attributes and the corresponding `dataset` property in JavaScript are a perfect way to implement this.
When you generate your `div` element for each todo, you can attach a data attribute called todo to each `div`: ```js null {11} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> var populateTodos = function() { var todoContainer = document.querySelector("#todos") todoContainer.innerHTML = null window.todos.forEach(todo => { var el = document.createElement("div") el.dataset.todo = todo.id var name = document.createElement("span") name.textContent = todo.name el.appendChild(name) todoContainer.appendChild(el) }) } </script> `; ``` Inside your HTML, each `div` for a todo now has an attached data attribute, which looks like: ```html <div data-todo="1"></div> <div data-todo="2"></div> ``` You can now generate a checkbox for each todo element. This checkbox will default to unchecked for new todos but you can mark it as checked as the element is rendered in the window: ```js null {13,14,15,17} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> window.todos.forEach(todo => { var el = document.createElement("div") el.dataset.todo = todo.id var name = document.createElement("span") name.textContent = todo.name var checkbox = document.createElement("input") checkbox.type = "checkbox" checkbox.checked = todo.completed ? 1 : 0 el.appendChild(checkbox) el.appendChild(name) todoContainer.appendChild(el) }) </script> `; ``` The checkbox is set up to correctly reflect the value of completed on each todo but it does not yet update when you actually check the box. To do this, attach the `completeTodo` function as an event listener on the `click` event. Inside the function, inspect the checkbox element, find its parent (the todo `div`), and use its `todo` data attribute to find the corresponding todo in the data array. 
You can toggle the completed status, update its properties, and rerender the UI: ```js null {9,13,14,15,16,17,18,19,20,21,22} const html = (todos) => ` <!doctype html> <html> <!-- existing content --> <script> var populateTodos = function() { window.todos.forEach(todo => { // Existing todo element set up code checkbox.addEventListener("click", completeTodo) }) } var completeTodo = function(evt) { var checkbox = evt.target var todoElement = checkbox.parentNode var newTodoSet = [].concat(window.todos) var todo = newTodoSet.find(t => t.id == todoElement.dataset.todo) todo.completed = !todo.completed todos = newTodoSet updateTodos() } </script> `; ``` The final result of your code is a system that checks the `todos` variable, updates your Cloudflare KV cache with that value, and then does a re-render of the UI based on the data it has locally. ## 6. Conclusion and next steps By completing this tutorial, you have built a static HTML, CSS, and JavaScript application that is transparently powered by Workers and Workers KV, which take full advantage of Cloudflare's global network. If you would like to keep improving on your project, you can implement a better design (you can refer to a live version available at [todos.signalnerve.workers.dev](https://todos.signalnerve.workers.dev/)), or make additional improvements to security and speed. You may also want to add user-specific caching. Right now, the cache key is always `data` – this means that any visitor to the site will share the same todo list with other visitors. Within your Worker, you could use values from the client request to create and maintain user-specific lists. 
For example, you may generate a cache key based on the requesting IP: ```js null {15,16,22,33} export default { async fetch(request, env, ctx) { const defaultData = { todos: [ { id: 1, name: "Finish the Cloudflare Workers blog post", completed: false, }, ], }; const setCache = (key, data) => env.TODOS.put(key, data); const getCache = (key) => env.TODOS.get(key); const ip = request.headers.get("CF-Connecting-IP"); const myKey = `data-${ip}`; if (request.method === "PUT") { const body = await request.text(); try { JSON.parse(body); await setCache(myKey, body); return new Response(body, { status: 200 }); } catch (err) { return new Response(err, { status: 500 }); } } let data; const cache = await getCache(myKey); if (!cache) { await setCache(myKey, JSON.stringify(defaultData)); data = defaultData; } else { data = JSON.parse(cache); } const body = html(JSON.stringify(data.todos).replace(/</g, "\\u003c")); return new Response(body, { headers: { "Content-Type": "text/html", }, }); }, }; ``` After making these changes and deploying the Worker one more time, your todo list application now includes per-user functionality while still taking full advantage of Cloudflare's global network.
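One detail worth calling out in the code above: `JSON.stringify(data.todos).replace(/</g, "\\u003c")` escapes every `<` in the JSON before it is interpolated into the inline `<script>` block, so a todo name containing `</script>` cannot terminate the script tag and inject markup. A small sketch of the effect, runnable in plain Node:

```javascript
// A todo name that would otherwise close the inline <script> tag early
const todos = [
  { id: 1, name: "</script><script>alert(1)</script>", completed: false },
];

// Same escaping as in the Worker: every "<" in the JSON becomes the
// escape sequence \u003c, which the HTML parser does not treat as a tag
// but which still decodes back to "<" when the JSON is parsed.
const safe = JSON.stringify(todos).replace(/</g, "\\u003c");

console.log(safe.includes("<"));                          // false
console.log(JSON.parse(safe)[0].name === todos[0].name);  // true
```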
The final version of your Worker script should look like this: ```js const html = (todos) => ` <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width,initial-scale=1"> <title>Todos</title> <link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet"></link> </head> <body class="bg-blue-100"> <div class="w-full h-full flex content-center justify-center mt-8"> <div class="bg-white shadow-md rounded px-8 pt-6 py-8 mb-4"> <h1 class="block text-grey-800 text-md font-bold mb-2">Todos</h1> <div class="flex"> <input class="shadow appearance-none border rounded w-full py-2 px-3 text-grey-800 leading-tight focus:outline-none focus:shadow-outline" type="text" name="name" placeholder="A new todo"></input> <button class="bg-blue-500 hover:bg-blue-800 text-white font-bold ml-2 py-2 px-4 rounded focus:outline-none focus:shadow-outline" id="create" type="submit">Create</button> </div> <div class="mt-4" id="todos"></div> </div> </div> </body> <script> window.todos = ${todos} var updateTodos = function() { fetch("/", { method: "PUT", body: JSON.stringify({ todos: window.todos }) }) populateTodos() } var completeTodo = function(evt) { var checkbox = evt.target var todoElement = checkbox.parentNode var newTodoSet = [].concat(window.todos) var todo = newTodoSet.find(t => t.id == todoElement.dataset.todo) todo.completed = !todo.completed window.todos = newTodoSet updateTodos() } var populateTodos = function() { var todoContainer = document.querySelector("#todos") todoContainer.innerHTML = null window.todos.forEach(todo => { var el = document.createElement("div") el.className = "border-t py-4" el.dataset.todo = todo.id var name = document.createElement("span") name.className = todo.completed ? "line-through" : "" name.textContent = todo.name var checkbox = document.createElement("input") checkbox.className = "mx-4" checkbox.type = "checkbox" checkbox.checked = todo.completed ? 
1 : 0 checkbox.addEventListener("click", completeTodo) el.appendChild(checkbox) el.appendChild(name) todoContainer.appendChild(el) }) } populateTodos() var createTodo = function() { var input = document.querySelector("input[name=name]") if (input.value.length) { window.todos = [].concat(todos, { id: window.todos.length + 1, name: input.value, completed: false }) input.value = "" updateTodos() } } document.querySelector("#create").addEventListener("click", createTodo) </script> </html> `; export default { async fetch(request, env, ctx) { const defaultData = { todos: [ { id: 1, name: "Finish the Cloudflare Workers blog post", completed: false, }, ], }; const setCache = (key, data) => env.TODOS.put(key, data); const getCache = (key) => env.TODOS.get(key); const ip = request.headers.get("CF-Connecting-IP"); const myKey = `data-${ip}`; if (request.method === "PUT") { const body = await request.text(); try { JSON.parse(body); await setCache(myKey, body); return new Response(body, { status: 200 }); } catch (err) { return new Response(err, { status: 500 }); } } let data; const cache = await getCache(myKey); if (!cache) { await setCache(myKey, JSON.stringify(defaultData)); data = defaultData; } else { data = JSON.parse(cache); } const body = html(JSON.stringify(data.todos).replace(/</g, "\\u003c")); return new Response(body, { headers: { "Content-Type": "text/html", }, }); }, }; ``` You can find the source code for this project, as well as a README with deployment instructions, [on GitHub](https://github.com/lauragift21/cloudflare-workers-todos). --- # Build a QR code generator URL: https://developers.cloudflare.com/workers/tutorials/build-a-qr-code-generator/ import { Render, PackageManagers } from "~/components"; In this tutorial, you will build and publish a Worker application that generates QR codes. If you would like to review the code for this tutorial, the final version of the codebase is [available on GitHub](https://github.com/kristianfreeman/workers-qr-code-generator).
You can take the code provided in the example repository, customize it, and deploy it for use in your own projects. <Render file="tutorials-before-you-start" /> ## 1. Create a new Workers project First, use the [`create-cloudflare` CLI](/pages/get-started/c3) to create a new Cloudflare Workers project. To do this, open a terminal window and run the following command: <PackageManagers type="create" pkg="cloudflare@latest" args={"qr-code-generator"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} /> Then, move into your newly created directory: ```sh cd qr-code-generator ``` Inside of your new `qr-code-generator` Worker project directory, `index.js` represents the entry point to your Cloudflare Workers application. All Cloudflare Workers applications start by listening for `fetch` events, which are triggered when a client makes a request to a Workers route. After a request is received by the Worker, the response your application constructs will be returned to the user. This tutorial will guide you through understanding how the request/response pattern works and how you can use it to build fully featured applications. ```js export default { async fetch(request, env, ctx) { return new Response("Hello Worker!"); }, }; ``` In your default `index.js` file, you can see that request/response pattern in action. The `fetch` handler constructs a new `Response` with the body text `'Hello Worker!'`. When a Worker receives a `fetch` event, the Worker returns the newly constructed response to the client. Your Worker will serve new responses directly from [Cloudflare's global network](https://www.cloudflare.com/network) instead of continuing to your origin server. A standard server would accept requests and return responses. Cloudflare Workers allows you to respond quickly by constructing responses directly on the Cloudflare global network. ## 2.
Handle incoming requests Any project you publish to Cloudflare Workers can make use of modern JavaScript tooling like ES modules, `npm` packages, and [`async`/`await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) functions to build your application. In addition to writing Workers, you can use Workers to [build full applications](/workers/tutorials/build-a-slackbot/) using the same tooling and process as in this tutorial. The QR code generator you will build in this tutorial will be a Worker that runs on a single route and receives requests. Each request will contain a text message (a URL, for example), which the function will encode into a QR code. The function will then respond with the QR code in SVG image format. At this point in the tutorial, your Worker function can receive requests and return a simple response with the text `"Hello Worker!"`. To handle data coming into your Worker, check if the incoming request is a `POST` request: ```js null {2,3,4} export default { async fetch(request, env, ctx) { if (request.method === "POST") { return new Response("Hello Worker!"); } }, }; ``` Currently, if an incoming request is not a `POST`, the function will return `undefined`. However, a Worker always needs to return a `Response`. Since the function should only accept incoming `POST` requests, return a new `Response` with a [`405` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/405) if the incoming request is not a `POST`: ```js null {7,8} export default { async fetch(request, env, ctx) { if (request.method === "POST") { return new Response("Hello Worker!"); } return new Response("Expected POST request", { status: 405, }); }, }; ``` You have established the basic flow of the request. You will now set up a response to incoming valid requests. If a `POST` request comes in, the function should generate a QR code.
To start, move the `"Hello Worker!"` response into a new function, `generateQRCode`, which will ultimately contain the bulk of your function’s logic: ```js null {7,8,9,10} export default { async fetch(request, env, ctx) { if (request.method === "POST") { } }, }; async function generateQRCode(request) { // TODO: Include QR code generation return new Response("Hello worker!"); } ``` With the `generateQRCode` function filled out, call it within the `fetch` function and return its result directly to the client: ```js null {4} export default { async fetch(request, env, ctx) { if (request.method === "POST") { return generateQRCode(request); } }, }; ``` ## 3. Build a QR code generator All projects deployed to Cloudflare Workers support npm packages. This support makes it easy to rapidly build out functionality in your Workers. The [`qrcode-svg`](https://github.com/papnkukn/qrcode-svg) package is a great way to take text and encode it into a QR code. In the command line, install and save `qrcode-svg` to your project’s `package.json`: ```sh title="Installing the qrcode-svg package" npm install --save qrcode-svg ``` In `index.js`, import the `qrcode-svg` package as the variable `QRCode`. In the `generateQRCode` function, parse the incoming request as JSON using `request.json()`, and generate a new QR code using the `qrcode-svg` package. The QR code is generated as an SVG. Construct a new instance of `Response`, passing in the SVG data as the body, and a `Content-Type` header of `image/svg+xml`. This will allow browsers to properly parse the data coming back from your Worker as an image: ```js null {1,2,3,4,5,6} import QRCode from "qrcode-svg"; async function generateQRCode(request) { const { text } = await request.json(); const qr = new QRCode({ content: text || "https://workers.dev" }); return new Response(qr.svg(), { headers: { "Content-Type": "image/svg+xml" } }); } ``` ## 4.
Test in an application UI The Worker will execute when a user sends a `POST` request to a route, but it is best practice to also provide a proper interface for testing the function. At this point in the tutorial, if any request is received by your function that is not a `POST`, a `405` response is returned. The new version of `fetch` should return a new `Response` with a static HTML document instead of the `405` error: ```js null {23-54} export default { async fetch(request, env, ctx) { if (request.method === "POST") { return generateQRCode(request); } return new Response(landing, { headers: { "Content-Type": "text/html", }, }); }, }; async function generateQRCode(request) { const { text } = await request.json(); const qr = new QRCode({ content: text || "https://workers.dev" }); return new Response(qr.svg(), { headers: { "Content-Type": "image/svg+xml" } }); } const landing = ` <h1>QR Generator</h1> <p>Click the below button to generate a new QR code. This will make a request to your Worker.</p> <input type="text" id="text" value="https://workers.dev"></input> <button onclick="generate()">Generate QR Code</button> <p>Generated QR Code Image</p> <img id="qr" src="#" /> <script> function generate() { fetch(window.location.pathname, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ text: document.querySelector("#text").value }) }) .then(response => response.blob()) .then(blob => { const reader = new FileReader(); reader.onloadend = function () { document.querySelector("#qr").src = reader.result; // Update the image source with the newly generated QR code } reader.readAsDataURL(blob); }) } </script> `; ``` The `landing` variable, which is a static HTML string, sets up an `input` tag and a corresponding `button`, which calls the `generateQRCode` function. This function will make an HTTP `POST` request back to your Worker, allowing you to see the corresponding QR code image returned on the page. 
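One caveat: `request.json()` in `generateQRCode` will reject if a client sends a body that is not valid JSON, surfacing as an unhandled exception. A defensive sketch — `textFromBody` is a hypothetical helper, not part of the tutorial code — parses the raw body and falls back to the default content instead:

```javascript
// Hypothetical helper: safely extract the "text" field from a raw request
// body, falling back to a default when the body is missing or malformed.
function textFromBody(rawBody, fallback = "https://workers.dev") {
  try {
    const data = JSON.parse(rawBody);
    // Only accept a non-empty string; anything else uses the fallback.
    return typeof data.text === "string" && data.text.length > 0
      ? data.text
      : fallback;
  } catch {
    return fallback;
  }
}
```

Inside `generateQRCode`, you could then call `textFromBody(await request.text())` in place of destructuring the result of `request.json()` directly.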
With the above steps complete, your Worker is ready. The full version of the code looks like this: ```js import QRCode from "qrcode-svg"; export default { async fetch(request, env, ctx) { if (request.method === "POST") { return generateQRCode(request); } return new Response(landing, { headers: { "Content-Type": "text/html", }, }); }, }; async function generateQRCode(request) { const { text } = await request.json(); const qr = new QRCode({ content: text || "https://workers.dev" }); return new Response(qr.svg(), { headers: { "Content-Type": "image/svg+xml" } }); } const landing = ` <h1>QR Generator</h1> <p>Click the below button to generate a new QR code. This will make a request to your Worker.</p> <input type="text" id="text" value="https://workers.dev"></input> <button onclick="generate()">Generate QR Code</button> <p>Generated QR Code Image</p> <img id="qr" src="#" /> <script> function generate() { fetch(window.location.pathname, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ text: document.querySelector("#text").value }) }) .then(response => response.blob()) .then(blob => { const reader = new FileReader(); reader.onloadend = function () { document.querySelector("#qr").src = reader.result; // Update the image source with the newly generated QR code } reader.readAsDataURL(blob); }) } </script> `; ``` ## 5. Deploy your Worker With all the above steps complete, you have written the code for a QR code generator on Cloudflare Workers. Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, run `npx wrangler deploy`, which will build and deploy your code. ```sh title="Deploy your Worker project" npx wrangler deploy ``` ## Related resources In this tutorial, you built and deployed a Worker application for generating QR codes.
If you would like to see the full source code for this application, you can find it [on GitHub](https://github.com/kristianfreeman/workers-qr-code-generator). If you want to get started building your own projects, review the existing list of [Quickstart templates](/workers/get-started/quickstarts/). --- # Build a Slackbot URL: https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/ import { Render, TabItem, Tabs, PackageManagers } from "~/components"; In this tutorial, you will build a [Slack](https://slack.com) bot using [Cloudflare Workers](/workers/). Your bot will make use of GitHub webhooks to send messages to a Slack channel when issues are updated or created, and allow users to write a command to look up GitHub issues from inside Slack.  This tutorial is recommended for people who are familiar with writing web applications. You will use TypeScript as the programming language and [Hono](https://hono.dev/) as the web framework. If you have built an application with tools like [Node](https://nodejs.org) and [Express](https://expressjs.com), this project will feel very familiar to you. If you are new to writing web applications or have wanted to build something like a Slack bot in the past, but were intimidated by deployment or configuration, Workers will be a way for you to focus on writing code and shipping projects. If you would like to review the code or how the bot works in an actual Slack channel before proceeding with this tutorial, you can access the final version of the codebase [on GitHub](https://github.com/yusukebe/workers-slack-bot). From GitHub, you can add your own Slack API keys and deploy it to your own Slack channels for testing. --- <Render file="tutorials-before-you-start" /> ## Set up Slack This tutorial assumes that you already have a Slack account, and the ability to create and manage Slack applications. 
### Configure a Slack application To post messages from your Cloudflare Worker into a Slack channel, you will need to create an application in Slack’s UI. To do this, go to Slack’s API section, at [api.slack.com/apps](https://api.slack.com/apps), and select **Create New App**.  Slack applications have many features. You will make use of two of them, Incoming Webhooks and Slash Commands, to build your Worker-powered Slack bot. #### Incoming Webhook Incoming Webhooks are URLs that you can use to send messages to your Slack channels. Your incoming webhook will be paired with GitHub’s webhook support to send messages to a Slack channel whenever there are updates to issues in a given repository. You will see the code in more detail as you build your application. First, create a Slack webhook: 1. On the sidebar of Slack's UI, select **Incoming Webhooks**. 2. In **Webhook URLs for your Workspace**, select **Add New Webhook to Workspace**. 3. On the following screen, select the channel that you want your webhook to send messages to (you can select a room, like #general or #code, or be messaged directly by your Slack bot when the webhook is called.) 4. Authorize the new webhook URL. After authorizing your webhook URL, you will be returned to the **Incoming Webhooks** page and can view your new webhook URL. You will add this into your Workers code later. Next, you will add the second component to your Slack bot: a Slash Command.  #### Slash Command A Slash Command in Slack is a custom-configured command that can be attached to a URL request. For example, if you configured `/weather <zip>`, Slack would make an HTTP POST request to a configured URL, passing the text `<zip>` to get the weather for a specified zip code. In your application, you will use the `/issue` command to look up GitHub issues using the [GitHub API](https://developer.github.com). 
Typing `/issue cloudflare/wrangler#1` will send the text `cloudflare/wrangler#1` in an HTTP POST request to your application, which the application will use to find the [relevant GitHub issue](https://github.com/cloudflare/wrangler-legacy/issues/1). 1. On the Slack sidebar, select **Slash Commands**. 2. Create your first slash command. For this tutorial, you will use the command `/issue`. The request URL should be the `/lookup` path on your application URL: for example, if your application will be hosted at `https://myworkerurl.com`, the Request URL should be `https://myworkerurl.com/lookup`. ### Configure your GitHub Webhooks Your Cloudflare Workers application will be able to handle incoming requests from Slack. It should also be able to receive events directly from GitHub. If a GitHub issue is created or updated, you can make use of GitHub webhooks to send that event to your Workers application and post a corresponding message in Slack. To configure a webhook: 1. Go to your GitHub repository's **Settings** > **Webhooks** > **Add webhook**. If you have a repository like `https://github.com/user/repo`, you can access the **Webhooks** page directly at `https://github.com/user/repo/settings/hooks`. 2. Set the Payload URL to the `/webhook` path on your Worker URL. For example, if your Worker will be hosted at `https://myworkerurl.com`, the Payload URL should be `https://myworkerurl.com/webhook`. 3. In the **Content type** dropdown, select **application/json**. The **Content type** for your payload can either be a URL-encoded payload (`application/x-www-form-urlencoded`) or JSON (`application/json`). For the purpose of this tutorial, and to make the payload sent to your application easier to parse, select JSON. 4. In **Which events would you like to trigger this webhook?**, select **Let me select individual events**. GitHub webhooks allow you to specify which events you would like to have sent to your webhook.
By default, the webhook will send `push` events from your repository. For the purpose of this tutorial, you will choose **Let me select individual events**. 5. Select the **Issues** event type. There are many different event types that can be enabled for your webhook. Selecting **Issues** will send every issue-related event to your webhook, including when issues are opened, edited, deleted, and more. If you would like to expand your Slack bot application in the future, you can select more of these events after the tutorial. 6. Select **Add webhook**. When your webhook is created, it will attempt to send a test payload to your application. Since your application is not actually deployed yet, leave the configuration as it is. You will later return to your repository to create, edit, and close some issues to ensure that the webhook is working once your application is deployed. ## Init To initialize the project, use the command line interface [C3 (create-cloudflare-cli)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare). <PackageManagers type="create" pkg="cloudflare@latest" args={"slack-bot"} /> Follow these steps to create a Hono project. - For _What would you like to start with?_, select `Framework Starter`. - For _Which development framework do you want to use?_, select `Hono`. - For _Do you want to deploy your application?_, select `No`. Go to the `slack-bot` directory: ```sh cd slack-bot ``` Open `src/index.ts` in an editor to find the following code. ```ts import { Hono } from "hono"; type Bindings = { [key in keyof CloudflareBindings]: CloudflareBindings[key]; }; const app = new Hono<{ Bindings: Bindings }>(); app.get("/", (c) => { return c.text("Hello Hono!"); }); export default app; ``` This is a minimal application using Hono. If a GET request comes in on the path `/`, it will return a response with the text `Hello Hono!`. It also returns a message `404 Not Found` with status code 404 if any other path or method is accessed.
To run the application on your local machine, execute the following command. <Tabs> <TabItem label="npm"> ```sh title="Run your application locally" npm run dev ``` </TabItem> <TabItem label="yarn"> ```sh title="Run your application locally" yarn dev ``` </TabItem> </Tabs> Visit `http://localhost:8787` in your browser after the server has started, and you will see the message. Hono helps you to create your Workers application easily and quickly. ## Build Now, let's create a Slack bot on Cloudflare Workers. ### Separating files You can create your application in several files instead of writing all endpoints and functions in one file. With Hono, you can mount child applications onto the parent application using the `app.route()` function. For example, imagine the following Web API application. ```ts import { Hono } from "hono"; const app = new Hono(); app.get("/posts", (c) => c.text("Posts!")); app.post("/posts", (c) => c.text("Created!", 201)); export default app; ``` You can add the routes under `/api/v1`. ```ts null {2,6} import { Hono } from "hono"; import api from "./api"; const app = new Hono(); app.route("/api/v1", api); export default app; ``` It will return `Posts!` when accessing `GET /api/v1/posts`. The Slack bot will have two child applications, each called a "route". 1. `lookup` route will take requests from Slack (sent when a user uses the `/issue` command), and look up the corresponding issue using the GitHub API. This application will be added to `/lookup` in the main application. 2. `webhook` route will be called when an issue changes on GitHub, via a configured webhook. This application will be added to `/webhook` in the main application. Create the route files in a directory named `routes`. ```sh title="Create new folders and files" mkdir -p src/routes touch src/routes/lookup.ts touch src/routes/webhook.ts ``` Then update the main application.
```ts null {2,3,7,8} import { Hono } from "hono"; import lookup from "./routes/lookup"; import webhook from "./routes/webhook"; const app = new Hono(); app.route("/lookup", lookup); app.route("/webhook", webhook); export default app; ``` ### Defining TypeScript types Before implementing the actual functions, you need to define the TypeScript types you will use in this project. Create a new file in the application at `src/types.ts` and add the following code. `Bindings` is a type that describes the Cloudflare Workers environment variables. `Issue` is a type for a GitHub issue and `User` is a type for a GitHub user. You will need these later. ```ts export type Bindings = { SLACK_WEBHOOK_URL: string; }; export type Issue = { html_url: string; title: string; body: string; state: string; created_at: string; number: number; user: User; }; type User = { html_url: string; login: string; avatar_url: string; }; ``` ### Creating the lookup route Start creating the lookup route in `src/routes/lookup.ts`. ```ts import { Hono } from "hono"; const app = new Hono(); export default app; ``` To understand how you should design this function, you need to understand how Slack slash commands send data to URLs. According to the [documentation for Slack slash commands](https://api.slack.com/interactivity/slash-commands), Slack sends an HTTP POST request to your specified URL, with an `application/x-www-form-urlencoded` content type.
For example, if someone were to type `/issue cloudflare/wrangler#1`, you could expect a data payload in the format: ```txt token=gIkuvaNzQIHg97ATvDxqgjtO &team_id=T0001 &team_domain=example &enterprise_id=E0001 &enterprise_name=Globular%20Construct%20Inc &channel_id=C2147483705 &channel_name=test &user_id=U2147483697 &user_name=Steve &command=/issue &text=cloudflare/wrangler#1 &response_url=https://hooks.slack.com/commands/1234/5678 &trigger_id=13345224609.738474920.8088930838d88f008e0 ``` Given this payload body, you need to parse it, and get the value of the `text` key. With that `text`, for example, `cloudflare/wrangler#1`, you can parse that string into known pieces of data (`owner`, `repo`, and `issue_number`), and use it to make a request to GitHub’s API, to retrieve the issue data. With Slack slash commands, you can respond to a slash command by returning structured data as the response to the incoming slash command. In this case, you should use the response from GitHub’s API to present a formatted version of the GitHub issue, including pieces of data like the title of the issue, who created it, and the date it was created. Slack’s [Block Kit](https://api.slack.com/block-kit) framework will allow you to return a detailed message response, by constructing text and image blocks with the data from GitHub’s API. #### Parsing slash commands To begin, the `lookup` route should parse the messages coming from Slack. As previously mentioned, the Slack API sends an HTTP POST in URL Encoded format. You can get the variable `text` by parsing it with `c.req.parseBody()`.
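What Hono's body parsing does for you here is standard `application/x-www-form-urlencoded` decoding. A framework-independent sketch using the built-in `URLSearchParams` (with a hypothetical `getSlashCommandText` helper) shows what is being extracted:

```javascript
// Sketch: parse a Slack slash-command payload without a framework.
// URLSearchParams decodes application/x-www-form-urlencoded bodies.
function getSlashCommandText(rawBody) {
  const params = new URLSearchParams(rawBody);
  // Slack URL-encodes the value, so "#" arrives as "%23" and "/" as "%2F",
  // both decoded here. Returns null when no "text" key is present.
  return params.get("text");
}
```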
```ts null {5,6,7,8,9,10} import { Hono } from "hono"; const app = new Hono(); app.post("/", async (c) => { const { text } = await c.req.parseBody(); if (typeof text !== "string") { return c.notFound(); } }); export default app; ``` Given a `text` variable that contains text like `cloudflare/wrangler#1`, you should parse that text, and get the individual parts from it for use with GitHub’s API: `owner`, `repo`, and `issue_number`. To do this, create a new file in your application, at `src/utils/github.ts`. This file will contain a number of “utility” functions for working with GitHub’s API. The first of these will be a string parser, called `parseGhIssueString`: ```ts const ghIssueRegex = /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/; export const parseGhIssueString = (text: string) => { const match = text.match(ghIssueRegex); return match ? (match.groups ?? {}) : {}; }; ``` `parseGhIssueString` takes in a `text` input, matches it against `ghIssueRegex`, and if a match is found, returns the `groups` object from that match, making use of the `owner`, `repo`, and `issue_number` capture groups defined in the regex. By exporting this function from `src/utils/github.ts`, you can make use of it back in `src/routes/lookup.ts`: ```ts null {2,12} import { Hono } from "hono"; import { parseGhIssueString } from "../utils/github"; const app = new Hono(); app.post("/", async (c) => { const { text } = await c.req.parseBody(); if (typeof text !== "string") { return c.notFound(); } const { owner, repo, issue_number } = parseGhIssueString(text); }); export default app; ``` #### Making requests to GitHub’s API With this data, you can make your first API lookup to GitHub.
Again, make a new function in `src/utils/github.ts`, to make a `fetch` request to the GitHub API for the issue data: ```ts null {8,9,10,11,12} const ghIssueRegex = /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/; export const parseGhIssueString = (text: string) => { const match = text.match(ghIssueRegex); return match ? (match.groups ?? {}) : {}; }; export const fetchGithubIssue = ( owner: string, repo: string, issue_number: string, ) => { const url = `https://api.github.com/repos/${owner}/${repo}/issues/${issue_number}`; const headers = { "User-Agent": "simple-worker-slack-bot" }; return fetch(url, { headers }); }; ``` Back in `src/routes/lookup.ts`, use `fetchGithubIssue` to make a request to GitHub’s API, and parse the response: ```ts null {2,3,14,15} import { Hono } from "hono"; import { fetchGithubIssue, parseGhIssueString } from "../utils/github"; import { Issue } from "../types"; const app = new Hono(); app.post("/", async (c) => { const { text } = await c.req.parseBody(); if (typeof text !== "string") { return c.notFound(); } const { owner, repo, issue_number } = parseGhIssueString(text); const response = await fetchGithubIssue(owner, repo, issue_number); const issue = await response.json<Issue>(); }); export default app; ``` #### Constructing a Slack message After you have received a response back from GitHub’s API, the final step is to construct a Slack message with the issue data, and return it to the user. The final result will look something like this:  You can see four different pieces in the above screenshot: 1. The first line (bolded) links to the issue, and shows the issue title 2. The following lines (including code snippets) are the issue body 3. The last line of text shows the issue status, the issue creator (with a link to the user’s GitHub profile), and the creation date for the issue 4.
The profile picture of the issue creator, on the right-hand side The previously mentioned [Block Kit](https://api.slack.com/block-kit) framework will help take the issue data (in the structure laid out in [GitHub’s REST API documentation](https://developer.github.com/v3/issues/)) and format it into something like the above screenshot. Create another file, `src/utils/slack.ts`, to contain the function `constructGhIssueSlackMessage`, a function for taking issue data, and turning it into a collection of blocks. Blocks are JavaScript objects that Slack will use to format the message: ```ts import { Issue } from "../types"; export const constructGhIssueSlackMessage = ( issue: Issue, issue_string: string, prefix_text?: string, ) => { const issue_link = `<${issue.html_url}|${issue_string}>`; const user_link = `<${issue.user.html_url}|${issue.user.login}>`; const date = new Date(Date.parse(issue.created_at)).toLocaleDateString(); const text_lines = [ prefix_text, `*${issue.title} - ${issue_link}*`, issue.body, `*${issue.state}* - Created by ${user_link} on ${date}`, ]; }; ``` Slack messages accept a variant of Markdown, which supports bold text via asterisks (`*bolded text*`), and links in the format `<https://yoururl.com|Display Text>`. Given that format, construct `issue_link`, which takes the `html_url` property from the GitHub API `issue` data (in format `https://github.com/cloudflare/wrangler-legacy/issues/1`), and the `issue_string` sent from the Slack slash command, and combines them into a clickable link in the Slack message. `user_link` is similar, using `issue.user.html_url` (in the format `https://github.com/signalnerve`, a GitHub user) and the user’s GitHub username (`issue.user.login`), to construct a clickable link to the GitHub user. Finally, parse `issue.created_at`, an ISO 8601 string, convert it into an instance of a JavaScript `Date`, and turn it into a short formatted date string (the exact format depends on the runtime's locale).
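The mrkdwn link convention used for `issue_link` and `user_link` can be checked in isolation — a minimal sketch (the `mrkdwnLink` name is illustrative, not part of the tutorial code):

```javascript
// Sketch: Slack mrkdwn hyperlink format — <url|display text>.
function mrkdwnLink(url, text) {
  return `<${url}|${text}>`;
}
```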
With those variables in place, `text_lines` is an array of each line of text for the Slack message. The first line is the **issue title** and the **issue link**, the second is the **issue body**, and the final line is the **issue state** (for example, open or closed), the **user link**, and the **creation date**. With the text constructed, you can finally construct your Slack message, returning an array of blocks for Slack’s [Block Kit](https://api.slack.com/block-kit). In this case, there is only one block: a [section](https://api.slack.com/reference/messaging/blocks#section) block with Markdown text, and an accessory image of the user who created the issue. Return that single block inside of an array, to complete the `constructGhIssueSlackMessage` function: ```ts null {15,16,17,18,19,20,21,22,23,24,25,26,27,28} import { Issue } from "../types"; export const constructGhIssueSlackMessage = ( issue: Issue, issue_string: string, prefix_text?: string, ) => { const issue_link = `<${issue.html_url}|${issue_string}>`; const user_link = `<${issue.user.html_url}|${issue.user.login}>`; const date = new Date(Date.parse(issue.created_at)).toLocaleDateString(); const text_lines = [ prefix_text, `*${issue.title} - ${issue_link}*`, issue.body, `*${issue.state}* - Created by ${user_link} on ${date}`, ]; return [ { type: "section", text: { type: "mrkdwn", text: text_lines.join("\n"), }, accessory: { type: "image", image_url: issue.user.avatar_url, alt_text: issue.user.login, }, }, ]; }; ``` #### Finishing the lookup route In `src/routes/lookup.ts`, use `constructGhIssueSlackMessage` to construct `blocks`, and return them as a new response with `c.json()` when the slash command is called: ```ts null {3,17,18,19,20,21,22} import { Hono } from "hono"; import { fetchGithubIssue, parseGhIssueString } from "../utils/github"; import { constructGhIssueSlackMessage } from "../utils/slack"; import { Issue } from "../types"; const app = new Hono(); app.post("/", async (c) => { const {
text } = await c.req.parseBody(); if (typeof text !== "string") { return c.notFound(); } const { owner, repo, issue_number } = parseGhIssueString(text); const response = await fetchGithubIssue(owner, repo, issue_number); const issue = await response.json<Issue>(); const blocks = constructGhIssueSlackMessage(issue, text); return c.json({ blocks, response_type: "in_channel", }); }); export default app; ``` One additional parameter passed into the response is `response_type`. By default, responses to slash commands are ephemeral, meaning that they are only seen by the user who writes the slash command. Passing a `response_type` of `in_channel`, as seen above, will cause the response to appear for all users in the channel. If you would like the messages to remain private, remove the `response_type` line. This will cause `response_type` to default to `ephemeral`. #### Handling errors The `lookup` route is almost complete, but there are a number of errors that can occur in the route, such as failures when parsing the body from Slack, fetching the issue from GitHub, or constructing the Slack message itself. Hono handles uncaught errors automatically, but you can customize the returned response with `app.onError()`: ```ts null {25,26,27,28,29,30} import { Hono } from "hono"; import { fetchGithubIssue, parseGhIssueString } from "../utils/github"; import { constructGhIssueSlackMessage } from "../utils/slack"; import { Issue } from "../types"; const app = new Hono(); app.post("/", async (c) => { const { text } = await c.req.parseBody(); if (typeof text !== "string") { return c.notFound(); } const { owner, repo, issue_number } = parseGhIssueString(text); const response = await fetchGithubIssue(owner, repo, issue_number); const issue = await response.json<Issue>(); const blocks = constructGhIssueSlackMessage(issue, text); return c.json({ blocks, response_type: "in_channel", }); }); app.onError((_e, c) => { return c.text( "Uh-oh!
We couldn't find the issue you provided. " + "We can only find public issues in the following format: `owner/repo#issue_number`.", ); }); export default app; ``` ### Creating the webhook route You are now halfway through implementing the routes for your Workers application. In implementing the next route, `src/routes/webhook.ts`, you will re-use a lot of the code that you have already written for the lookup route. At the beginning of this tutorial, you configured a GitHub webhook to track any events related to issues in your repository. When an issue is opened, for example, the function corresponding to the path `/webhook` on your Workers application should take the data sent to it from GitHub, and post a new message in the configured Slack channel. In `src/routes/webhook.ts`, define a blank Hono application. The difference from the `lookup` route is that the `Bindings` type is passed as a generic to `new Hono()`. This is necessary to give the appropriate TypeScript type to `SLACK_WEBHOOK_URL`, which will be used later. ```ts import { Hono } from "hono"; import { Bindings } from "../types"; const app = new Hono<{ Bindings: Bindings }>(); export default app; ``` Much like with the `lookup` route, you will need to parse the incoming payload inside of `request`, get the relevant issue data from it (refer to [the GitHub API documentation on `IssueEvent`](https://developer.github.com/v3/activity/events/types/#issuesevent) for the full payload schema), and send a formatted message to Slack to indicate what has changed. The final version will look something like this:  Compare this message format to the format returned when a user uses the `/issue` slash command. You will see that there is only one actual difference between the two: the addition of an action text on the first line, in the format `An issue was $action:`.
This action, which is sent as part of the `IssueEvent` from GitHub, will be used as you construct a very familiar-looking collection of blocks using Slack’s Block Kit. #### Parsing event data To start filling out the route, parse the JSON-formatted request body into an object and construct some helper variables: ```ts null {2,6,7,8,9,10} import { Hono } from "hono"; import { constructGhIssueSlackMessage } from "../utils/slack"; const app = new Hono(); app.post("/", async (c) => { const { action, issue, repository } = await c.req.json(); const prefix_text = `An issue was ${action}:`; const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`; }); export default app; ``` An `IssueEvent`, the payload sent from GitHub as part of your webhook configuration, includes an `action` (what happened to the issue: for example, it was opened, closed, locked, etc.), the `issue` itself, and the `repository`, among other things. Use `c.req.json()` to convert the payload body of the request from JSON into a plain JS object. Use ES6 destructuring to set `action`, `issue` and `repository` as variables you can use in your code. `prefix_text` is a string indicating what happened to the issue, and `issue_string` is the familiar string `owner/repo#issue_number` that you have seen before: while the `lookup` route directly used the text sent from Slack to fill in `issue_string`, you will construct it directly based on the data passed in the JSON payload. #### Constructing and sending a Slack message The messages your Slack bot sends back to your Slack channel from the `lookup` and `webhook` routes are incredibly similar. Because of this, you can re-use the existing `constructGhIssueSlackMessage` to continue populating `src/routes/webhook.ts`.
Import the function from `src/utils/slack.ts`, and pass the issue data into it: ```ts null {10} import { Hono } from "hono"; import { constructGhIssueSlackMessage } from "../utils/slack"; const app = new Hono(); app.post("/", async (c) => { const { action, issue, repository } = await c.req.json(); const prefix_text = `An issue was ${action}:`; const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`; const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text); }); export default app; ``` Importantly, the usage of `constructGhIssueSlackMessage` in this handler adds one additional argument to the function, `prefix_text`. Update the corresponding function inside of `src/utils/slack.ts`, adding `prefix_text` to the collection of `text_lines` in the message block, if it has been passed in to the function. Add a utility function, `compact`, which takes an array, and filters out any `null` or `undefined` values from it. This function will be used to remove `prefix_text` from `text_lines` if it has not actually been passed in to the function, such as when called from `src/handlers/lookup.ts`. 
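Because `compact` filters with a plain truthiness check, an `undefined` `prefix_text` simply disappears from the joined message text. (Note that this also drops any other falsy value, such as an empty string for the issue body.) A quick standalone sketch of that behavior:

```typescript
// Same helper as in src/utils/slack.ts.
const compact = (array: unknown[]) => array.filter((el) => el);

// With a prefix line (as in the webhook route) and without one (as in lookup).
const withPrefix = compact(["An issue was opened:", "*Title*", "Issue body"]);
const withoutPrefix = compact([undefined, "*Title*", "Issue body"]);

console.log(withPrefix.join("\n"));
// An issue was opened:
// *Title*
// Issue body

console.log(withoutPrefix.join("\n"));
// *Title*
// Issue body
```

This is why the same `constructGhIssueSlackMessage` function can serve both routes without emitting a blank first line for `lookup`.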
The full (and final) version of the `src/utils/slack.ts` looks like this: ```ts null {3,26} import { Issue } from "../types"; const compact = (array: unknown[]) => array.filter((el) => el); export const constructGhIssueSlackMessage = ( issue: Issue, issue_string: string, prefix_text?: string, ) => { const issue_link = `<${issue.html_url}|${issue_string}>`; const user_link = `<${issue.user.html_url}|${issue.user.login}>`; const date = new Date(Date.parse(issue.created_at)).toLocaleDateString(); const text_lines = [ prefix_text, `*${issue.title} - ${issue_link}*`, issue.body, `*${issue.state}* - Created by ${user_link} on ${date}`, ]; return [ { type: "section", text: { type: "mrkdwn", text: compact(text_lines).join("\n"), }, accessory: { type: "image", image_url: issue.user.avatar_url, alt_text: issue.user.login, }, }, ]; }; ``` Back in `src/handlers/webhook.ts`, the `blocks` that are returned from `constructGhIssueSlackMessage` become the body in a new `fetch` request, an HTTP POST request to a Slack webhook URL. 
Once that request completes, return a response with status code `200`, and the body text `"OK"`: ```ts null {13,14,15,16,17,18,19} import { Hono } from "hono"; import { constructGhIssueSlackMessage } from "../utils/slack"; import { Bindings } from "../types"; const app = new Hono<{ Bindings: Bindings }>(); app.post("/", async (c) => { const { action, issue, repository } = await c.req.json(); const prefix_text = `An issue was ${action}:`; const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`; const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text); const fetchResponse = await fetch(c.env.SLACK_WEBHOOK_URL, { body: JSON.stringify({ blocks }), method: "POST", headers: { "Content-Type": "application/json" }, }); return c.text("OK"); }); export default app; ``` The constant `SLACK_WEBHOOK_URL` represents the Slack Webhook URL that you created all the way back in the [Incoming Webhook](/workers/tutorials/build-a-slackbot/#incoming-webhook) section of this tutorial. :::caution Since this webhook allows developers to post directly to your Slack channel, keep it secret. ::: To use this constant inside your codebase, use the [`wrangler secret`](/workers/wrangler/commands/#secret) command: ```sh title="Set the SLACK_WEBHOOK_URL secret" npx wrangler secret put SLACK_WEBHOOK_URL ``` ```sh output Enter a secret value: https://hooks.slack.com/services/abc123 ``` #### Handling errors Similarly to the `lookup` route, the `webhook` route should include some basic error handling. Unlike `lookup`, which sends responses directly back into Slack, if something goes wrong with your webhook, it may be useful to generate an error response and return it to GitHub. To do this, write a custom error handler with `app.onError()` and return a new response with a status code of `500`.
The final version of `src/routes/webhook.ts` looks like this: ```ts null {24,25,26,27,28,29,30,31} import { Hono } from "hono"; import { constructGhIssueSlackMessage } from "../utils/slack"; import { Bindings } from "../types"; const app = new Hono<{ Bindings: Bindings }>(); app.post("/", async (c) => { const { action, issue, repository } = await c.req.json(); const prefix_text = `An issue was ${action}:`; const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`; const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text); const fetchResponse = await fetch(c.env.SLACK_WEBHOOK_URL, { body: JSON.stringify({ blocks }), method: "POST", headers: { "Content-Type": "application/json" }, }); if (!fetchResponse.ok) throw new Error(); return c.text("OK"); }); app.onError((_e, c) => { return c.json( { message: "Unable to handle webhook", }, 500, ); }); export default app; ``` ## Deploy By completing the preceding steps, you have finished writing the code for your Slack bot. You can now deploy your application. Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, run the following command which will build and deploy your code. <Tabs> <TabItem label="npm"> ```sh title="Deploy your application" npm run deploy ``` </TabItem> <TabItem label="yarn"> ```sh title="Deploy your application" yarn deploy ``` </TabItem> </Tabs> Deploying your Workers application should now cause issue updates to start appearing in your Slack channel, as the GitHub webhook can now successfully reach your Workers webhook route:  ## Related resources In this tutorial, you built and deployed a Cloudflare Workers application that can respond to GitHub webhook events, and allow GitHub API lookups within Slack. If you would like to review the full source code for this application, you can find the repository [on GitHub](https://github.com/yusukebe/workers-slack-bot). 
If you want to get started building your own projects, review the existing list of [Quickstart templates](/workers/get-started/quickstarts/). --- # Create a fine-tuned OpenAI model with R2 URL: https://developers.cloudflare.com/workers/tutorials/create-finetuned-chatgpt-ai-models-with-r2/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will use the [OpenAI](https://openai.com) API and [Cloudflare R2](/r2) to create a [fine-tuned model](https://platform.openai.com/docs/guides/fine-tuning). This feature in OpenAI's API allows you to derive a custom model from OpenAI's various large language models based on a set of custom instructions and example answers. These instructions and example answers are written in a document, known as a fine-tune document. This document will be stored in R2 and dynamically provided to OpenAI's APIs when creating a new fine-tune model. In order to use this feature, you will do the following tasks: 1. Upload a fine-tune document to R2. 2. Read the R2 file and upload it to OpenAI. 3. Create a new fine-tuned model based on the document.  To review the completed code for this application, refer to the [GitHub repository for this tutorial](https://github.com/kristianfreeman/openai-finetune-r2-example). ## Prerequisites Before you start, make sure you have: - A Cloudflare account with access to R2. If you do not have a Cloudflare account, [sign up](https://dash.cloudflare.com/sign-up/workers-and-pages) before continuing. Then purchase R2 from your Cloudflare dashboard. - An OpenAI API key. - A fine-tune document, structured as [JSON Lines](https://jsonlines.org/). Use the [example document](https://github.com/kristianfreeman/openai-finetune-r2-example/blob/16ca53ca9c8589834abe317487eeedb8a24c7643/example_data.jsonl) in the source code. ## 1. Create a Worker application First, use the `c3` CLI to create a new Cloudflare Workers project. 
<PackageManagers type="create" pkg="cloudflare@latest" args={"finetune-chatgpt-model"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "TypeScript", }} /> The above options will create the "Hello World" TypeScript project. Move into your newly created directory: ```sh cd finetune-chatgpt-model ``` ## 2. Upload a fine-tune document to R2 Next, upload the fine-tune document to R2. R2 is a key-value store that allows you to store and retrieve files from within your Workers application. You will use [Wrangler](/workers/wrangler) to create a new R2 bucket. To create a new R2 bucket, use the [`wrangler r2 bucket create`](/workers/wrangler/commands/#r2-bucket-create) command. Ensure that you are logged in to your Cloudflare account; if you are not logged in via Wrangler, use the [`wrangler login`](/workers/wrangler/commands/#login) command. ```sh npx wrangler r2 bucket create <BUCKET_NAME> ``` Replace `<BUCKET_NAME>` with your desired bucket name. Note that bucket names must be lowercase and can contain only letters, numbers, and dashes. Next, upload a file using the [`wrangler r2 object put`](/workers/wrangler/commands/#r2-object-put) command. ```sh npx wrangler r2 object put <PATH> -f <FILE_NAME> ``` `<PATH>` is the combined bucket and file path of the file you want to upload, for example, `fine-tune-ai/finetune.jsonl`, where `fine-tune-ai` is the bucket name. Replace `<FILE_NAME>` with the local filename of your fine-tune document. ## 3. Bind your bucket to the Worker A binding is how your Worker interacts with external resources such as the R2 bucket. To bind the R2 bucket to your Worker, add the following to your Wrangler file. Update the `binding` property to a valid JavaScript variable identifier.
Replace `<YOUR_BUCKET_NAME>` with the name of the bucket you created in [step 2](#2-upload-a-fine-tune-document-to-r2): <WranglerConfig> ```toml [[r2_buckets]] binding = 'MY_BUCKET' # <~ valid JavaScript variable name bucket_name = '<YOUR_BUCKET_NAME>' ``` </WranglerConfig> ## 4. Initialize your Worker application You will use [Hono](https://hono.dev/), a lightweight framework for building Cloudflare Workers applications. Hono provides an interface for defining routes and middleware functions. Inside your project directory, run the following command to install Hono: ```sh npm install hono ``` You also need to install the [OpenAI Node API library](https://www.npmjs.com/package/openai). This library provides convenient access to the OpenAI REST API in a Node.js project. To install the library, execute the following command: ```sh npm install openai ``` Next, open the `src/index.ts` file and replace the default code with the below code. Replace `<MY_BUCKET>` with the binding name you set in the Wrangler file. ```typescript import { Context, Hono } from "hono"; import OpenAI from "openai"; type Bindings = { <MY_BUCKET>: R2Bucket OPENAI_API_KEY: string } type Variables = { openai: OpenAI } const app = new Hono<{ Bindings: Bindings, Variables: Variables }>() app.use('*', async (c, next) => { const openai = new OpenAI({ apiKey: c.env.OPENAI_API_KEY, }) c.set("openai", openai) await next() }) app.onError((err, c) => { return c.text(err.message, 500) }) export default app; ``` In the above code, you first import the required packages and define the types. Then, you initialize `app` as a new Hono instance. Using the `use` middleware function, you add the OpenAI API client to the context of all routes. This middleware function allows you to access the client from within any route handler. `onError()` defines an error handler that returns any error message as a plain-text response with a `500` status code. ## 5.
Read R2 files and upload them to OpenAI In this section, you will define the route and function responsible for handling file uploads. In `createFile`, your Worker reads the file from R2 and converts it to a `File` object. Your Worker then uses the OpenAI API to upload the file and return the response. The `GET /files` route listens for `GET` requests with a query parameter `file`, representing a filename of an uploaded fine-tune document in R2. The route uses the `createFile` function to manage the file upload process. Replace `<MY_BUCKET>` with the binding name you set in the Wrangler file. ```typescript // New import added at beginning of file import { toFile } from 'openai/uploads' const createFile = async (c: Context, r2Object: R2ObjectBody) => { const openai: OpenAI = c.get("openai") const blob = await r2Object.blob() const file = await toFile(blob, r2Object.key) const uploadedFile = await openai.files.create({ file, purpose: "fine-tune", }) return uploadedFile } app.get('/files', async c => { const fileQueryParam = c.req.query("file") if (!fileQueryParam) return c.text("Missing file query param", 400) const file = await c.env.<MY_BUCKET>.get(fileQueryParam) if (!file) return c.text("Couldn't find file", 400) const uploadedFile = await createFile(c, file) return c.json(uploadedFile) }) ``` ## 6. Create fine-tuned models This section includes the `GET /models` route and the `createModel` function. The function `createModel` takes care of specifying the details and initiating the fine-tuning process with OpenAI. The route handles incoming requests for creating a new fine-tuned model.
```typescript const createModel = async (c: Context, fileId: string) => { const openai: OpenAI = c.get("openai"); const body = { training_file: fileId, model: "gpt-4o-mini", }; return openai.fineTuning.jobs.create(body); }; app.get("/models", async (c) => { const fileId = c.req.query("file_id"); if (!fileId) return c.text("Missing file ID query param", 400); const model = await createModel(c, fileId); return c.json(model); }); ``` ## 7. List all fine-tune jobs This section describes the `GET /jobs` route and the corresponding `getJobs` function. The function interacts with OpenAI's API to fetch a list of all fine-tuning jobs. The route provides an interface for retrieving this information. ```typescript const getJobs = async (c: Context) => { const openai: OpenAI = c.get("openai"); const resp = await openai.fineTuning.jobs.list(); return resp.data; }; app.get("/jobs", async (c) => { const jobs = await getJobs(c); return c.json(jobs); }); ``` ## 8. Deploy your application After you have created your Worker application and added the required functions, deploy the application. Before you deploy, you must set the `OPENAI_API_KEY` [secret](/workers/configuration/secrets/) for your application. Do this by running the [`wrangler secret put`](/workers/wrangler/commands/#put) command: ```sh npx wrangler secret put OPENAI_API_KEY ``` To deploy your Worker application to the Cloudflare global network: 1. Make sure you are in your Worker project's directory, then run the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command: ```sh npx wrangler deploy ``` 2. Wrangler will package and upload your code. 3. After your application is deployed, Wrangler will provide you with your Worker's URL. ## 9. 
View the fine-tune job status and use the model To use your application, create a new fine-tune job by making a request to the `/files` route with a `file` query parameter matching the filename you uploaded earlier: ```sh curl https://your-worker-url.com/files?file=finetune.jsonl ``` When the file is uploaded, issue another request to `/models`, passing the `file_id` query parameter. This should match the `id` returned as JSON from the `/files` route: ```sh curl https://your-worker-url.com/models?file_id=file-abc123 ``` Finally, visit `/jobs` to see the status of your fine-tune jobs in OpenAI. Once the fine-tune job has completed, you will see the `fine_tuned_model` value, indicating that a fine-tuned model has been created.  Visit the [OpenAI Playground](https://platform.openai.com/playground) to use your fine-tuned model. Select your fine-tuned model from the top-left dropdown of the interface.  Use it in any API requests you make to OpenAI's chat completions endpoints. For instance, in the below code example: ```javascript openai.chat.completions.create({ messages: [{ role: "system", content: "You are a helpful assistant." }], model: "ft:gpt-4o-mini:my-org:custom_suffix:id", }); ``` ## Next steps To build more with Workers, refer to [Tutorials](/workers/tutorials). If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with other developers and the Cloudflare team. --- # Connect to and query your Turso database using Workers URL: https://developers.cloudflare.com/workers/tutorials/connect-to-turso-using-workers/ import { Render, PackageManagers, WranglerConfig } from "~/components"; This tutorial will guide you through building globally distributed applications with Cloudflare Workers and [Turso](https://chiselstrike.com/), an edge-hosted distributed database based on libSQL.
By using Workers and Turso, you can create applications that are close to your end users without having to maintain or operate infrastructure in tens or hundreds of regions. :::note For a more seamless experience, refer to the [Turso Database Integration guide](/workers/databases/native-integrations/turso/). The Turso Database Integration will connect your Worker to a Turso database by getting the right configuration from Turso and adding it as [secrets](/workers/configuration/secrets/) to your Worker. ::: ## Prerequisites Before continuing with this tutorial, you should have: - Successfully [created your first Cloudflare Worker](/workers/get-started/guide/) and/or have deployed a Cloudflare Worker before. - Installed [Wrangler](/workers/wrangler/install-and-update/), a command-line tool for building Cloudflare Workers. - A [GitHub account](https://github.com/), required for authenticating to Turso. - A basic familiarity with installing and using command-line interface (CLI) applications. ## Install the Turso CLI You will need the Turso CLI to create and populate a database. Run either of the following two commands in your terminal to install the Turso CLI: ```sh # On macOS or Linux with Homebrew brew install chiselstrike/tap/turso # Manual scripted installation curl -sSfL https://get.tur.so/install.sh | bash ``` After you have installed the Turso CLI, verify that the CLI is in your shell path: ```sh turso --version ``` ```sh output # This should output your current Turso CLI version (your installed version may be higher): turso version v0.51.0 ``` ## Create and populate a database Before you create your first Turso database, you need to log in to the CLI using your GitHub account by running: ```sh turso auth login ``` ```sh output Waiting for authentication... ✔ Success! Logged in as <your GitHub username> ``` `turso auth login` will open a browser window and ask you to sign into your GitHub account, if you are not already logged in.
The first time you do this, you will need to give the Turso application permission to use your account. Select **Approve** to grant Turso the permissions needed. After you have authenticated, you can create a database by running `turso db create <DATABASE_NAME>`. Turso will automatically choose the location closest to you. ```sh turso db create my-db ``` ```sh output # Example: [===> ] Creating database my-db in Los Angeles, California (US) (lax) # Once succeeded: Created database my-db in Los Angeles, California (US) (lax) in 34 seconds. ``` With your first database created, you can now connect to it directly and execute SQL against it: ```sh turso db shell my-db ``` To get started with your database, create and define a schema for your first table. In this example, you will create an `example_users` table with one column, `email` (of type `text`), and then populate it with one email address. In the shell you just opened, paste in the following SQL: ```sql create table example_users (email text); insert into example_users values ("foo@bar.com"); ``` If the SQL statements succeeded, there will be no output. Note that the trailing semicolons (`;`) are necessary to terminate each SQL statement. Type `.quit` to exit the shell. ## Use Wrangler to create a Workers project The Workers command-line interface, [Wrangler](/workers/wrangler/install-and-update/), allows you to create, locally develop, and deploy your Workers projects.
To create a new Workers project (named `worker-turso-ts`), run the following: <PackageManagers type="create" pkg="cloudflare@latest" args={"worker-turso-ts"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "TypeScript", }} /> To start developing your Worker, `cd` into your new project directory: ```sh cd worker-turso-ts ``` In your project directory, you now have the following files: - `wrangler.json` / `wrangler.toml`: [Wrangler configuration file](/workers/wrangler/configuration/) - `src/index.ts`: A minimal Hello World Worker written in TypeScript - `package.json`: A minimal Node dependencies configuration file. - `tsconfig.json`: TypeScript configuration that includes Workers types. Only generated if indicated. For this tutorial, only the [Wrangler configuration file](/workers/wrangler/configuration/) and `src/index.ts` file are relevant. You will not need to edit the other files, and they should be left as is. ## Configure your Worker for your Turso database The Turso client library requires two pieces of information to make a connection: 1. `LIBSQL_DB_URL` - The connection string for your Turso database. 2. `LIBSQL_DB_AUTH_TOKEN` - The authentication token for your Turso database. This should be kept a secret, and not committed to source code. To get the URL for your database, run the following Turso CLI command, and copy the result: ```sh turso db show my-db --url ``` ```sh output libsql://my-db-<your-github-username>.turso.io ``` Open the [Wrangler configuration file](/workers/wrangler/configuration/) in your editor and at the bottom of the file, create a new `[vars]` section representing the [environment variables](/workers/configuration/environment-variables/) for your project: <WranglerConfig> ```toml [vars] LIBSQL_DB_URL = "paste-your-url-here" ``` </WranglerConfig> Save the changes to the [Wrangler configuration file](/workers/wrangler/configuration/). 
Next, create a long-lived authentication token for your Worker to use when connecting to your database. Run the following Turso CLI command, and copy the output to your clipboard: ```sh turso db tokens create my-db -e none # Will output a long text string (an encoded JSON Web Token) ``` To keep this token secret: 1. You will create a `.dev.vars` file for local development. Do not commit this file to source control. You should add `.dev.vars` to your `.gitignore` file if you are using Git. 2. You will also create a [secret](/workers/configuration/secrets/) to keep your authentication token confidential. First, create a new file called `.dev.vars` with the following structure. Paste your authentication token in the quotation marks: ``` LIBSQL_DB_AUTH_TOKEN="<YOUR_AUTH_TOKEN>" ``` Save your changes to `.dev.vars`. Next, store the authentication token as a secret for your production Worker to reference. Run the following `wrangler secret` command to create a Secret with your token: ```sh # Ensure you specify the secret name exactly: your Worker will need to reference it later. npx wrangler secret put LIBSQL_DB_AUTH_TOKEN ``` ```sh output ? Enter a secret value: › <paste your token here> ``` Press `Enter` to save the token as a secret. Both `LIBSQL_DB_URL` and `LIBSQL_DB_AUTH_TOKEN` will be available in your Worker's environment at runtime. ## Install extra libraries Install the Turso client library and a router: ```sh npm install @libsql/client itty-router ``` The `@libsql/client` library allows you to query a Turso database. The `itty-router` library is a lightweight router you will use to help handle incoming requests to the Worker. ## Write your Worker You will now write a Worker that will: 1. Handle an HTTP request. 2. Route it to a specific handler to either list all users in your database or add a new user. 3. Return the results and/or success. Open `src/index.ts` and delete the existing template.
Copy the below code exactly as is and paste it into the file: ```ts import { Client as LibsqlClient, createClient } from "@libsql/client/web"; import { Router, RouterType } from "itty-router"; export interface Env { // The environment variable containing the URL for your Turso database. LIBSQL_DB_URL?: string; // The Secret that contains the authentication token for your Turso database. LIBSQL_DB_AUTH_TOKEN?: string; // These objects are created before first use, then stashed here // for future use router?: RouterType; } export default { async fetch(request, env): Promise<Response> { if (env.router === undefined) { env.router = buildRouter(env); } return env.router.fetch(request); }, } satisfies ExportedHandler<Env>; function buildLibsqlClient(env: Env): LibsqlClient { const url = env.LIBSQL_DB_URL?.trim(); if (url === undefined) { throw new Error("LIBSQL_DB_URL env var is not defined"); } const authToken = env.LIBSQL_DB_AUTH_TOKEN?.trim(); if (authToken === undefined) { throw new Error("LIBSQL_DB_AUTH_TOKEN env var is not defined"); } return createClient({ url, authToken }); } function buildRouter(env: Env): RouterType { const router = Router(); router.get("/users", async () => { const client = buildLibsqlClient(env); const rs = await client.execute("select * from example_users"); return Response.json(rs); }); router.get("/add-user", async (request) => { const client = buildLibsqlClient(env); const email = request.query.email; if (email === undefined) { return new Response("Missing email", { status: 400 }); } if (typeof email !== "string") { return new Response("email must be a single string", { status: 400 }); } if (email.length === 0) { return new Response("email length must be > 0", { status: 400 }); } try { await client.execute({ sql: "insert into example_users values (?)", args: [email], }); } catch (e) { console.error(e); return new Response("database insert failed"); } return new Response("Added"); }); router.all("*", () => new Response("Not Found.", {
status: 404 })); return router; } ``` Save your `src/index.ts` file with your changes. Note: - The libSQL client must be imported from `@libsql/client/web` exactly as shown when working with Cloudflare Workers. The non-web import will not work in the Workers environment. - The `Env` interface contains the environment variable and secret you defined earlier. - The `Env` interface also caches the libSQL client object and router, which are created on the first request to the Worker. - The `/users` route fetches all rows from the `example_users` table you created in the Turso shell. It simply serializes the `ResultSet` object as JSON directly to the caller. - The `/add-user` route inserts a new row using a value provided in the query string. With your environment configured and your code ready, you will now test your Worker locally before you deploy. ## Run the Worker locally with Wrangler To run a local instance of your Worker (entirely on your machine), run the following command: ```sh npx wrangler dev ``` You should see output similar to the following: ```txt Your worker has access to the following bindings: - Vars: - LIBSQL_DB_URL: "your-url" ⎔ Starting a local server... 
╭─────────────────────────────────────────────────────────────────────────────────────────────────╮ │ [b] open a browser, [d] open Devtools, [l] turn off local mode, [c] clear console, [x] to exit │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────╯ Debugger listening on ws://127.0.0.1:61918/1064babd-bc9d-4bed-b171-b35dab3b7680 For help, see: https://nodejs.org/en/docs/inspector Debugger attached. [mf:inf] Worker reloaded! (40.25KiB) [mf:inf] Listening on 0.0.0.0:8787 [mf:inf] - http://127.0.0.1:8787 [mf:inf] - http://192.168.1.136:8787 [mf:inf] Updated `Request.cf` object cache! ``` The localhost address (the one with `127.0.0.1` in it) is a web server running locally on your machine. Connect to it and validate your Worker returns the email address you inserted when you created your `example_users` table by visiting the `/users` route in your browser: [http://127.0.0.1:8787/users](http://127.0.0.1:8787/users). You should see JSON similar to the following containing the data from the `example_users` table: ```json { "columns": ["email"], "rows": [{ "email": "foo@bar.com" }], "rowsAffected": 0 } ``` :::caution If you see an error instead of a list of users, double check that: - You have entered the correct value for your `LIBSQL_DB_URL` in the [Wrangler configuration file](/workers/wrangler/configuration/). - You have set a secret called `LIBSQL_DB_AUTH_TOKEN` with your database authentication token. 
Both of these need to be present and match the variable names in your Worker's code. ::: Test the `/add-user` route and pass it an email address to insert: [http://127.0.0.1:8787/add-user?email=test@test.com](http://127.0.0.1:8787/add-user?email=test@test.com). You should see the text `Added`. If you load the first URL with the `/users` route again ([http://127.0.0.1:8787/users](http://127.0.0.1:8787/users)), it will show the newly added row. You can repeat this as many times as you like. Note that due to its design, your application will not stop you from adding duplicate email addresses. Quit Wrangler by typing `q` into the shell where it was started. ## Deploy to Cloudflare After you have validated that your Worker can connect to your Turso database, deploy your Worker. Run the following Wrangler command to deploy your Worker to the Cloudflare global network: ```sh npx wrangler deploy ``` The first time you run this command, it will launch a browser, ask you to sign in with your Cloudflare account, and grant permissions to Wrangler. The `deploy` command will output the following: ```txt Your worker has access to the following bindings: - Vars: - LIBSQL_DB_URL: "your-url" ... Published worker-turso-ts (0.19 sec) https://worker-turso-ts.<your-Workers-subdomain>.workers.dev Current Deployment ID: f9e6b48f-5aac-40bd-8f44-8a40be2212ff ``` You have now deployed a Worker that can connect to your Turso database, query it, and insert new data. ## Optional: Clean up To clean up the resources you created as part of this tutorial: - If you do not want to keep this Worker, run `npx wrangler delete worker-turso-ts` to delete the deployed Worker. - You can also delete your Turso database via `turso db destroy my-db`. ## Related resources - Find the [complete project source code on GitHub](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-turso-ts/). - Understand how to [debug your Cloudflare Worker](/workers/observability/). 
- Join the [Cloudflare Developer Discord](https://discord.cloudflare.com). - Join the [ChiselStrike (Turso) Discord](https://discord.com/invite/4B5D7hYwub). --- # Deploy a real-time chat application URL: https://developers.cloudflare.com/workers/tutorials/deploy-a-realtime-chat-app/ import { Render, WranglerConfig } from "~/components"; In this tutorial, you will deploy a serverless, real-time chat application that runs using [Durable Objects](/durable-objects/). This chat application uses a Durable Object to control each chat room. Users connect to the Object using WebSockets. Messages from one user are broadcast to all the other users. The chat history is also stored in durable storage. Real-time messages are relayed directly from one user to others without going through the storage layer. To continue with this tutorial, you must purchase the [Workers Paid plan](/workers/platform/pricing/#workers) and enable Durable Objects by logging into the [Cloudflare dashboard](https://dash.cloudflare.com) > **Workers & Pages** > select your Worker > **Durable Objects**. <Render file="tutorials-before-you-start" /> ## Clone the chat application repository Open your terminal and clone the [workers-chat-demo](https://github.com/cloudflare/workers-chat-demo) repository: ```sh git clone https://github.com/cloudflare/workers-chat-demo.git ``` ## Authenticate Wrangler After you have cloned the repository, authenticate Wrangler by running: ```sh npx wrangler login ``` ## Deploy your project When you are ready to deploy your application, run: ```sh npx wrangler deploy ``` Your application will be deployed to your `*.workers.dev` subdomain. To deploy your application to a custom domain within the Cloudflare dashboard, go to your Worker > **Triggers** > **Add Custom Domain**. To deploy your application to a custom domain using Wrangler, open your project's [Wrangler configuration file](/workers/wrangler/configuration/). 
To configure a route in your Wrangler configuration file, add the following to your environment: <WranglerConfig> ```toml routes = [ { pattern = "example.com/about", zone_id = "<YOUR_ZONE_ID>" } ] ``` </WranglerConfig> If you have specified your zone ID in the environment of your Wrangler configuration file, you will not need to write it again in object form. To configure a subdomain in your Wrangler configuration file, add the following to your environment: <WranglerConfig> ```toml routes = [ { pattern = "subdomain.example.com", custom_domain = true } ] ``` </WranglerConfig> To test your live application: 1. Open your `edge-chat-demo.<SUBDOMAIN>.workers.dev` subdomain. Your subdomain can be found in the [Cloudflare dashboard](https://dash.cloudflare.com) > **Workers & Pages** > your Worker > **Triggers** > **Routes** > select the `edge-chat-demo.<SUBDOMAIN>.workers.dev` route. 2. Enter a name in the **your name** field. 3. Choose whether to enter a public room or create a private room. 4. Send the link to other participants. You will be able to view room participants on the right side of the screen. ## Uninstall your application To uninstall your chat application, modify your Wrangler file to remove the `durable_objects` bindings and add a `deleted_classes` migration: <WranglerConfig> ```toml [durable_objects] bindings = [ ] # Indicate that you want the ChatRoom and RateLimiter classes to be callable as Durable Objects. [[migrations]] tag = "v1" # Should be unique for each entry new_classes = ["ChatRoom", "RateLimiter"] [[migrations]] tag = "v2" deleted_classes = ["ChatRoom", "RateLimiter"] ``` </WranglerConfig> Then run `npx wrangler deploy`. To delete your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Manage Service** > **Delete**. 
For complete instructions on setup and deletion, refer to the `README.md` in your cloned repository. By completing this tutorial, you have deployed a real-time chat application with Durable Objects and Cloudflare Workers. ## Related resources Continue building with other Cloudflare Workers tutorials below. - [Build a Slackbot](/workers/tutorials/build-a-slackbot/) - [Create SMS notifications for your GitHub repository using Twilio](/workers/tutorials/github-sms-notifications-using-twilio/) - [Build a QR code generator](/workers/tutorials/build-a-qr-code-generator/) --- # Create a deploy button with GitHub Actions URL: https://developers.cloudflare.com/workers/tutorials/deploy-button/ Deploy buttons let you deploy applications to Cloudflare's global network in under five minutes. The deploy buttons use Wrangler to deploy a Worker using the [Wrangler GitHub Action](https://github.com/marketplace/actions/deploy-to-cloudflare-workers-with-wrangler). You can deploy an application from a set of ready-made Cloudflare templates, or make deploy buttons for your own applications to make sharing your work easier. Try the deploy button below to deploy a GraphQL server: [![Deploy to Cloudflare Workers](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/workers-graphql-server) Refer to [deploy.workers.cloudflare.com](https://deploy.workers.cloudflare.com/) for additional projects to deploy. ## Create a deploy button for your project 1. Add a GitHub Actions workflow to your project. Add a new file to `.github/workflows`, such as `.github/workflows/deploy.yml`, and create a GitHub workflow for deploying your project. It should include a set of `on` events, including at least `repository_dispatch`, but probably `push` and maybe `schedule` as well.
Add a step for publishing your project using [wrangler-action](https://github.com/cloudflare/wrangler-action): ```yaml name: Deploy Worker on: push: pull_request: repository_dispatch: jobs: deploy: runs-on: ubuntu-latest timeout-minutes: 60 steps: - uses: actions/checkout@v2 - name: Build & Deploy Worker uses: cloudflare/wrangler-action@v3 with: apiToken: ${{ secrets.CF_API_TOKEN }} accountId: ${{ secrets.CF_ACCOUNT_ID }} ``` 2. Add the Markdown code for your button to your project's README, replacing the example `url` parameter with your repository URL. ```md [![Deploy to Cloudflare Workers](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/YOURUSERNAME/YOURREPO) ``` 3. With your button configured, anyone can use the **Deploy with Workers** button in your repository README, and deploy their own copy of your application to Cloudflare's global network. --- # GitHub SMS notifications using Twilio URL: https://developers.cloudflare.com/workers/tutorials/github-sms-notifications-using-twilio/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will learn to build an SMS notification system on Workers to receive updates on a GitHub repository. Your Worker will send you a text update using Twilio when there is new activity on your repository. You will learn how to: - Build webhooks using Workers. - Integrate Workers with GitHub and Twilio. - Use Worker secrets with Wrangler. --- <Render file="tutorials-before-you-start" /> ## Create a Worker project Start by using `npm create cloudflare@latest` to create a Worker project in the command line: <PackageManagers type="create" pkg="cloudflare@latest" args={"github-twilio-notifications"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} /> Make note of the URL that your application was deployed to. You will be using it when you configure your GitHub webhook.
```sh cd github-twilio-notifications ``` Inside of your new `github-twilio-notifications` directory, `src/index.js` represents the entry point to your Cloudflare Workers application. You will configure this file for most of the tutorial. You will also need a GitHub account and a repository for this tutorial. If you do not have either set up, [create a new GitHub account](https://github.com/join) and [create a new repository](https://docs.github.com/en/get-started/quickstart/create-a-repo) to continue with this tutorial. First, create a webhook for your repository to post updates to your Worker. Inside of your Worker, you will then parse the updates. Finally, you will send a `POST` request to Twilio to send yourself a text message. You can reference the finished code at this [GitHub repository](https://github.com/rickyrobinett/workers-sdk/tree/main/templates/examples/github-sms-notifications-using-twilio). --- ## Configure GitHub To start, configure a GitHub webhook to post to your Worker when there is an update to the repository: 1. Go to your GitHub repository's **Settings** > **Webhooks** > **Add webhook**. 2. Set the Payload URL to the `/webhook` path on the Worker URL that you made note of when your application was first deployed. 3. In the **Content type** dropdown, select _application/json_. 4. In the **Secret** field, input a secret key of your choice. 5. In **Which events would you like to trigger this webhook?**, select **Let me select individual events**. Select the events you want to get notifications for (such as **Pull requests**, **Pushes**, and **Branch or tag creation**). 6. Select **Add webhook** to finish configuration. --- ## Parsing the response With your local environment set up, parse the repository update with your Worker.
Initially, your generated `index.js` should look like this: ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` Use the `request.method` property of [`Request`](/workers/runtime-apis/request/) to check if the request coming to your application is a `POST` request, and send an error response if the request is not a `POST` request. ```js export default { async fetch(request, env, ctx) { if (request.method !== "POST") { return new Response("Please send a POST request!"); } }, }; ``` Next, validate that the request is sent with the right secret key. GitHub attaches a hash signature for [each payload using the secret key](https://docs.github.com/en/developers/webhooks-and-events/webhooks/securing-your-webhooks). Use a helper function called `checkSignature` on the request to ensure the hash is correct. Then, you can access data from the webhook by parsing the request as JSON. ```js async fetch(request, env, ctx) { if(request.method !== 'POST') { return new Response('Please send a POST request!'); } try { const rawBody = await request.text(); if (!checkSignature(rawBody, request.headers, env.GITHUB_SECRET_TOKEN)) { return new Response("Wrong password, try again", {status: 403}); } } catch (e) { return new Response(`Error: ${e}`); } }, ``` The `checkSignature` function will use the Node.js crypto library to hash the received payload with your known secret key to ensure it matches the request hash. GitHub uses an HMAC hexdigest to compute the hash in the SHA-256 format. You will place this function at the top of your `index.js` file, before your export. 
```js import { createHmac, timingSafeEqual } from "node:crypto"; import { Buffer } from "node:buffer"; function checkSignature(text, headers, githubSecretToken) { const hmac = createHmac("sha256", githubSecretToken); hmac.update(text); const expectedSignature = hmac.digest("hex"); const actualSignature = headers.get("x-hub-signature-256"); const trusted = Buffer.from(`sha256=${expectedSignature}`, "ascii"); const untrusted = Buffer.from(actualSignature, "ascii"); return ( trusted.byteLength == untrusted.byteLength && timingSafeEqual(trusted, untrusted) ); } ``` To make this work, you need to use [`wrangler secret put`](/workers/wrangler/commands/#put) to set your `GITHUB_SECRET_TOKEN`. This token is the secret you picked earlier when configuring your GitHub webhook: ```sh npx wrangler secret put GITHUB_SECRET_TOKEN ``` Add the `nodejs_compat` flag to your Wrangler file: <WranglerConfig> ```toml compatibility_flags = ["nodejs_compat"] ``` </WranglerConfig> --- ## Sending a text with Twilio You will send yourself a text message about your repository activity using Twilio. You need a Twilio account and a phone number that can receive text messages. [Refer to the Twilio guide to get set up](https://www.twilio.com/messaging/sms). (If you are new to Twilio, they have [an interactive game](https://www.twilio.com/quest) where you can learn how to use their platform and get some free credits for beginners to the service.) You can then create a helper function to send text messages by sending a `POST` request to the Twilio API endpoint. [Refer to the Twilio reference](https://www.twilio.com/docs/sms/api/message-resource#create-a-message-resource) to learn more about this endpoint.
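Before writing the full helper, it can help to see the two encoding steps the Twilio endpoint requires in isolation: an HTTP Basic auth header built from `accountSid:authToken`, and an `application/x-www-form-urlencoded` body. The sketch below is standalone and all values in it are placeholders, not working credentials:

```javascript
// Placeholder credentials and phone numbers - substitute your own Twilio values.
const accountSid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
const authToken = "your-auth-token";

// Basic auth: base64-encode "<accountSid>:<authToken>".
const token = btoa(`${accountSid}:${authToken}`);

// Twilio expects a URL-encoded form body; note "+" in a phone number
// becomes "%2B" and spaces become "+".
const body = new URLSearchParams({
  To: "+15550100001",
  From: "+15550100002",
  Body: "Hello from Workers",
});

console.log(body.toString()); // → "To=%2B15550100001&From=%2B15550100002&Body=Hello+from+Workers"
```

Both `btoa` and `URLSearchParams` are available globally in Workers, so no imports are needed in your Worker code.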
Create a new function called `sendText()` that will handle making the request to Twilio: ```js async function sendText(accountSid, authToken, message) { const endpoint = `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Messages.json`; const encoded = new URLSearchParams({ To: "%YOUR_PHONE_NUMBER%", From: "%YOUR_TWILIO_NUMBER%", Body: message, }); const token = btoa(`${accountSid}:${authToken}`); const request = { body: encoded, method: "POST", headers: { Authorization: `Basic ${token}`, "Content-Type": "application/x-www-form-urlencoded", }, }; const response = await fetch(endpoint, request); const result = await response.json(); return Response.json(result); } ``` To make this work, you need to set some secrets to hide your `ACCOUNT_SID` and `AUTH_TOKEN` from the source code. You can set secrets with [`wrangler secret put`](/workers/wrangler/commands/#put) in your command line. ```sh npx wrangler secret put TWILIO_ACCOUNT_SID npx wrangler secret put TWILIO_AUTH_TOKEN ``` Modify your `githubWebhookHandler` to send a text message using the `sendText` function you just made. 
```js async fetch(request, env, ctx) { if(request.method !== 'POST') { return new Response('Please send a POST request!'); } try { const rawBody = await request.text(); if (!checkSignature(rawBody, request.headers, env.GITHUB_SECRET_TOKEN)) { return new Response('Wrong password, try again', {status: 403}); } const action = request.headers.get('X-GitHub-Event'); const json = JSON.parse(rawBody); const repoName = json.repository.full_name; const senderName = json.sender.login; return await sendText( env.TWILIO_ACCOUNT_SID, env.TWILIO_AUTH_TOKEN, `${senderName} completed ${action} onto your repo ${repoName}` ); } catch (e) { return new Response(`Error: ${e}`); } }; ``` Run the `npx wrangler deploy` command to redeploy your Worker project: ```sh npx wrangler deploy ``` Now when you make an update (that you configured in the GitHub **Webhook** settings) to your repository, you will get a text soon after. If you have never used Git before, refer to the [Git Push and Pull Tutorial](https://www.datacamp.com/tutorial/git-push-pull) for pushing to your repository. Reference the finished code [on GitHub](https://github.com/rickyrobinett/workers-sdk/tree/main/templates/examples/github-sms-notifications-using-twilio). By completing this tutorial, you have learned how to build webhooks using Workers, integrate Workers with GitHub and Twilio, and use Worker secrets with Wrangler.
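As a final sanity check, the signature scheme can be exercised locally, outside of a Worker. This standalone sketch signs a sample payload the way GitHub does, then verifies it with the same constant-time comparison used in `checkSignature`; the secret and payload are made up for illustration:

```javascript
import { createHmac, timingSafeEqual } from "node:crypto";
import { Buffer } from "node:buffer";

// Made-up secret and body, standing in for your webhook secret and a real
// GitHub delivery payload.
const secret = "my-webhook-secret";
const rawBody = '{"action":"opened"}';

// What GitHub would send in the X-Hub-Signature-256 header.
const signatureHeader = `sha256=${createHmac("sha256", secret).update(rawBody).digest("hex")}`;

// What the Worker recomputes and compares, in constant time.
const expected = Buffer.from(
  `sha256=${createHmac("sha256", secret).update(rawBody).digest("hex")}`,
  "ascii",
);
const received = Buffer.from(signatureHeader, "ascii");
const valid =
  expected.byteLength === received.byteLength &&
  timingSafeEqual(expected, received);

console.log(valid); // → true
```

If the secret or a single byte of the body changes on either side, `valid` becomes `false`, which is exactly how the Worker rejects forged deliveries.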
## Related resources {/* <!-- - [Authorize users with Auth0](/workers/tutorials/authorize-users-with-auth0/) --> */} - [Build a JAMStack app](/workers/tutorials/build-a-jamstack-app/) - [Build a QR code generator](/workers/tutorials/build-a-qr-code-generator/) --- # Generate YouTube thumbnails with Workers and Cloudflare Image Resizing URL: https://developers.cloudflare.com/workers/tutorials/generate-youtube-thumbnails-with-workers-and-images/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will learn how to programmatically generate a custom YouTube thumbnail using Cloudflare Workers and Cloudflare Image Resizing. You may want to generate a custom YouTube thumbnail to customize the thumbnail's design, calls to action, and images used to encourage more viewers to watch your video. This tutorial will help you understand how to work with [Images](/images/), [Image Resizing](/images/transform-images/) and [Cloudflare Workers](/workers/). <Render file="tutorials-before-you-start" /> To follow this tutorial, make sure you have Node, Cargo, and [Wrangler](/workers/wrangler/install-and-update/) installed on your machine. ## Learning goals In this tutorial, you will learn how to: - Upload Images to Cloudflare with the Cloudflare dashboard or API. - Set up a Worker project with Wrangler. - Manipulate images with image transformations in your Worker. ## Upload your image To generate a custom thumbnail image, you first need to upload a background image to Cloudflare Images. This will serve as the image you use for transformations to generate the thumbnails. Cloudflare Images allows you to store, resize, optimize and deliver images in a fast and secure manner. To get started, upload your images to the Cloudflare dashboard or use the Upload API. ### Upload with the dashboard To upload an image using the Cloudflare dashboard: 1. Log in to the [Cloudflare Dashboard](https://dash.cloudflare.com) and select your account. 2.
Select **Images**. 3. Use **Quick Upload** to either drag and drop an image or click to browse and choose a file from your local files. 4. After the image is uploaded, view it using the generated URL. ### Upload with the API To upload your image with the [Upload via URL](/images/upload-images/upload-url/) API, refer to the example below: ```sh curl --request POST \ --url https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/images/v1 \ --header 'Authorization: Bearer <API_TOKEN>' \ --form 'url=<PATH_TO_IMAGE>' \ --form 'metadata={"key":"value"}' \ --form 'requireSignedURLs=false' ``` - `ACCOUNT_ID`: The current user's account ID, which can be found in your account settings. - `API_TOKEN`: An API token generated with a scope that includes the Images permission. - `PATH_TO_IMAGE`: Indicates the URL for the image you want to upload. You will then receive a response similar to this: ```json { "result": { "id": "2cdc28f0-017a-49c4-9ed7-87056c83901", "filename": "image.jpeg", "metadata": { "key": "value" }, "uploaded": "2022-01-31T16:39:28.458Z", "requireSignedURLs": false, "variants": [ "https://imagedelivery.net/Vi7wi5KSItxGFsWRG2Us6Q/2cdc28f0-017a-49c4-9ed7-87056c83901/public", "https://imagedelivery.net/Vi7wi5KSItxGFsWRG2Us6Q/2cdc28f0-017a-49c4-9ed7-87056c83901/thumbnail" ] }, "success": true, "errors": [], "messages": [] } ``` Now that you have uploaded your image, you will use it as the background image for your video's thumbnail. ## Create a Worker to transform text to image After uploading your image, create a Worker that will enable you to transform text to image. This image can be used as an overlay on the background image you uploaded. Use the [rustwasm-worker-template](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-rust). You will need the following before you begin: - A recent version of [Rust](https://rustup.rs/).
- Access to the `cargo-generate` subcommand: ```sh cargo install cargo-generate ``` Create a new Worker project using the `worker-rust` template: ```sh cargo generate https://github.com/cloudflare/rustwasm-worker-template ``` You will now make a few changes to the files in your project directory. 1. In the `lib.rs` file, add the following code block: ```rs use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } ``` 2. Update the `Cargo.toml` file in your `worker-to-text` project directory to use [text-to-png](https://github.com/RookAndPawn/text-to-png), a Rust package for rendering text to PNG. Add the package as a dependency by running: ```sh cargo add text-to-png@0.2.0 ``` 3. Import the `text_to_png` library into your `worker-to-text` project's `lib.rs` file. ```rs null {1} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } ``` 4. Update `lib.rs` to create a `handle_slash` function that will activate the image transformation based on the text passed to the URL as a query parameter. ```rs null {17} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic.
utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> {} ``` 5. In the `handle_slash` function, call the `TextRenderer` by assigning it to a renderer value, specifying that you want to use a custom font. Then, use the `render_text_to_png_data` method to transform the text into image format. In this example, the custom font (`Inter-Bold.ttf`) is located in an `/assets` folder at the root of the project, which will be used for generating the thumbnail. You must update this portion of the code to point to your custom font file. ```rs null {17,18,19,20,21,22,23,24} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); } ``` 6. Rewrite the `Router` function to call `handle_slash` when a query is passed in the URL, otherwise return `"Hello Worker!"` as the response. ```rs null {11,12,13,14,15,16,17,18,19,20} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic.
utils::set_panic_hook(); let router = Router::new(); router .get_async("/", |req, _| async move { if let Some(text) = req.url()?.query() { handle_slash(text.into()).await } else { handle_slash("Hello Worker!".into()).await } }) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); } ``` 7. In your `lib.rs` file, set the headers to `content-type: image/png` so that the response is correctly rendered as a PNG image. ```rs null {29,30,31,32} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get_async("/", |req, _| async move { if let Some(text) = req.url()?.query() { handle_slash(text.into()).await } else { handle_slash("Hello Worker!".into()).await } }) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); let mut headers = Headers::new(); headers.set("content-type", "image/png")?; Ok(Response::from_bytes(text_png.data)?.with_headers(headers)) } ``` The final `lib.rs` file should look as follows. Find the full code as an example repository on [GitHub](https://github.com/cloudflare/workers-sdk/tree/main/templates/examples/worker-to-text). 
```rs use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get_async("/", |req, _| async move { if let Some(text) = req.url()?.query() { handle_slash(text.into()).await } else { handle_slash("Hello Worker!".into()).await } }) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text = if text.len() > 128 { "Nope".into() } else { text }; let text = urlencoding::decode(&text).map_err(|_| worker::Error::BadEncoding)?; let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); let mut headers = Headers::new(); headers.set("content-type", "image/png")?; Ok(Response::from_bytes(text_png.data)?.with_headers(headers)) } ``` Note that this final version also uses the `urlencoding` crate to decode the query string; add it as a dependency with `cargo add urlencoding`. After you have finished updating your project, start a local server for developing your Worker by running: ```sh npx wrangler dev ``` This should spin up a `localhost` instance with the image displayed. If you add a query parameter with custom text, you should receive an image rendering that text. To deploy your Worker, open your Wrangler file and update the `name` key with your project's name. Below is an example with this tutorial's project name: <WranglerConfig> ```toml name = "worker-to-text" ``` </WranglerConfig> Then run the `npx wrangler deploy` command to deploy your Worker. ```sh npx wrangler deploy ``` A `.workers.dev` domain will be generated for your Worker after running `wrangler deploy`. You will use this domain in the main thumbnail image.
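The handler treats the raw query string as the text to render: it URL-decodes the string and then converts `+` signs to spaces, so both encodings of a space work. The same transformation, mirrored in JavaScript purely for illustration:

```javascript
// Mirror of the Rust handler's text handling: URL-decode the query string,
// then turn "+" into spaces, so "?Custom+Text" and "?Custom%20Text" render
// the same image.
const query = "Custom+YouTube%20Thumbnail";
const text = decodeURIComponent(query).replace(/\+/g, " ");
console.log(text); // → "Custom YouTube Thumbnail"
```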
## Create a Worker to display the original image Create a Worker to serve the image you uploaded to Images by running: <PackageManagers type="create" pkg="cloudflare@latest" args={"thumbnail-image"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} /> This will create a new Worker project named `thumbnail-image`. To start developing your Worker, `cd` into your new project directory: ```sh cd thumbnail-image ``` In the `src/index.js` file, add the following code block: ```js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/original-image") { const image = await fetch( `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${env.IMAGE_ID}/public`, ); return image; } return new Response("Image Resizing with a Worker"); }, }; ``` Update `env.CLOUDFLARE_ACCOUNT_HASH` with your [Cloudflare account ID](/fundamentals/setup/find-account-and-zone-ids/). Update `env.IMAGE_ID` with your [image ID](/images/get-started/). Run your Worker and go to the `/original-image` route to review your image. ## Add custom text on your image You will now use [Cloudflare image transformations](/images/transform-images/), with the `fetch` method, to add your dynamic text image as an overlay on top of your background image. Start by displaying the resulting image on a different route. Call the new route `/thumbnail`. ```js null {11} export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/original-image") { const image = await fetch( `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${env.IMAGE_ID}/public`, ); return image; } if (url.pathname === "/thumbnail") { } return new Response("Image Resizing with a Worker"); }, }; ``` Next, use the `fetch` method to apply the image transformation changes on top of the background image. The overlay options are nested in `options.cf.image`.
```js null {12,13,14,15,16,17,18} export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/original-image") { const image = await fetch( `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${env.IMAGE_ID}/public`, ); return image; } if (url.pathname === "/thumbnail") { fetch(imageURL, { cf: { image: {}, }, }); } return new Response("Image Resizing with a Worker"); }, }; ``` The `imageURL` is the URL of the image you want to use as a background image. In the `cf.image` object, specify the options you want to apply to the background image. :::note At time of publication, Cloudflare image transformations do not allow a Worker to resize an image that is stored in Cloudflare Images. Instead of using the image you served on the `/original-image` route, you will use the same image from a different source. ::: Add your background image to an assets directory on GitHub and push your changes to GitHub. Copy the URL of the uploaded image by left-clicking the image and selecting the **Copy Remote File Url** option. Replace the `imageURL` value with the copied remote URL. ```js null {2,3} if (url.pathname === "/thumbnail") { const imageURL = "https://github.com/lauragift21/social-image-demo/blob/1ed9044463b891561b7438ecdecbdd9da48cdb03/assets/cover.png?raw=true"; fetch(imageURL, { cf: { image: {}, }, }); } ``` Next, add overlay options in the image object. Resize the image to the preferred width and height for YouTube thumbnails and use the [draw](/images/transform-images/draw-overlays/) option to add overlay text using the deployed URL of your `text-to-image` Worker. ```js null {3,4,5,6,7,8,9,10,11,12} fetch(imageURL, { cf: { image: { width: 1280, height: 720, draw: [ { url: "https://text-to-image.examples.workers.dev", left: 40, }, ], }, }, }); ``` Image transformations can only be tested when you deploy your Worker. To deploy your Worker, open your Wrangler file and update the `name` key with your project's name.
Below is an example with this tutorial's project name: <WranglerConfig> ```toml name = "thumbnail-image" ``` </WranglerConfig> Deploy your Worker by running: ```sh npx wrangler deploy ``` The command deploys your Worker to a custom `workers.dev` subdomain. Go to your `.workers.dev` subdomain and go to the `/thumbnail` route. You should see the resized image with the text `Hello Worker!`. You will now make the applied text dynamic. Making your text dynamic will allow you to change the text and have it update on the image automatically. To add dynamic text, append any text attached to the `/thumbnail` URL using query parameters and pass it down to the `text-to-image` Worker URL as a parameter. ```js for (const title of url.searchParams.values()) { try { const editedImage = await fetch(imageURL, { cf: { image: { width: 1280, height: 720, draw: [ { url: `https://text-to-image.examples.workers.dev/?${title}`, left: 50, }, ], }, }, }); return editedImage; } catch (error) { console.log(error); } } ``` This will always return the text you pass as a query string in the generated image. This example URL, [https://socialcard.cdnuptime.com/thumbnail?Getting%20Started%20With%20Cloudflare%20Images](https://socialcard.cdnuptime.com/thumbnail?Getting%20Started%20With%20Cloudflare%20Images), will generate the following image:  By completing this tutorial, you have successfully made a custom YouTube thumbnail generator. ## Related resources In this tutorial, you learned how to use Cloudflare Workers and Cloudflare image transformations to generate custom YouTube thumbnails. To learn more about Cloudflare Workers and image transformations, refer to [Resize an image with a Worker](/images/transform-images/transform-via-workers/).
--- # Handle form submissions with Airtable URL: https://developers.cloudflare.com/workers/tutorials/handle-form-submissions-with-airtable/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will use [Cloudflare Workers](/workers/) and [Airtable](https://airtable.com) to persist form submissions from a front-end user interface. Airtable is a free-to-use spreadsheet solution that has an approachable API for developers. Workers will handle incoming form submissions and use Airtable's [REST API](https://airtable.com/api) to asynchronously persist the data in an Airtable base (Airtable's term for a spreadsheet) for later reference.  <Render file="tutorials-before-you-start" /> ## 1. Create a form For this tutorial, you will be building a Workers function that handles input from a contact form. The form this tutorial references will collect a first name, last name, email address, phone number, message subject, and a message. :::note[Build a form] If this is your first time building a form and you would like to follow a tutorial to create a form with Cloudflare Pages, refer to the [HTML forms](/pages/tutorials/forms) tutorial. ::: Review a simplified example of the form used in this tutorial. Note that the `action` parameter of the `<form>` tag should point to the deployed Workers application that you will build in this tutorial.
```html title="Your front-end code" {1} <form action="https://workers-airtable-form.signalnerve.workers.dev/submit" method="POST"> <div> <label for="first_name">First name</label> <input type="text" name="first_name" id="first_name" autocomplete="given-name" placeholder="Ellen" required /> </div> <div> <label for="last_name">Last name</label> <input type="text" name="last_name" id="last_name" autocomplete="family-name" placeholder="Ripley" required /> </div> <div> <label for="email">Email</label> <input id="email" name="email" type="email" autocomplete="email" placeholder="eripley@nostromo.com" required /> </div> <div> <label for="phone"> Phone <span>Optional</span> </label> <input type="text" name="phone" id="phone" autocomplete="tel" placeholder="+1 (123) 456-7890" /> </div> <div> <label for="subject">Subject</label> <input type="text" name="subject" id="subject" placeholder="Your example subject" required /> </div> <div> <label for="message"> Message <span>Max 500 characters</span> </label> <textarea id="message" name="message" rows="4" placeholder="Tenetur quaerat expedita vero et illo. Tenetur explicabo dolor voluptatem eveniet. Commodi est beatae id voluptatum porro laudantium. Quam placeat accusamus vel officiis vel. Et perferendis dicta ut perspiciatis quos iste. Tempore autem molestias voluptates in sapiente enim doloremque." required></textarea> </div> <div> <button type="submit"> Submit </button> </div> </form> ``` ## 2. Create a Worker project To handle the form submission, create and deploy a Worker that parses the incoming form data and prepares it for submission to Airtable. Create a new `airtable-form-handler` Worker project: <PackageManagers type="create" pkg="cloudflare@latest" args={"airtable-form-handler"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} /> Then, move into the newly created directory: ```sh cd airtable-form-handler ``` ## 3.
Configure an Airtable base

When your Worker is complete, it will send data up to an Airtable base via Airtable's REST API. If you do not have an Airtable account, create one (the free plan is sufficient to complete this tutorial).

In Airtable's dashboard, create a new base by selecting **Start from scratch**. After you have created a new base, set it up for use with the front-end form. Delete the existing columns, and create six columns, with the following field types:

| Field name   | Airtable field type |
| ------------ | ------------------- |
| First Name   | "Single line text"  |
| Last Name    | "Single line text"  |
| Email        | "Email"             |
| Phone Number | "Phone number"      |
| Subject      | "Single line text"  |
| Message      | "Long text"         |

Note that the field names are case-sensitive. If you change the field names, you will need to exactly match your new field names in the API request you make to Airtable later in the tutorial. Finally, you can optionally rename your table. By default, it will have a name like Table 1. In the code below, we assume the table has been renamed with a more descriptive name, like `Form Submissions`.

Next, navigate to [Airtable's API page](https://airtable.com/api) and select your new base. Note that you must be logged into Airtable to see your base information. In the API documentation page, find your **Airtable base ID**.

You will also need to create a **Personal access token** that you'll use to access your Airtable base. You can do so by visiting the [Personal access tokens](https://airtable.com/create/tokens) page on Airtable's website and creating a new token. Make sure that you configure the token in the following way:

- Scope: the `data.records:write` scope must be set on the token
- Access: access should be granted to the base you have been working with in this tutorial

The resulting access token is now ready to use in your application.
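Before wiring up the Worker, it can help to see how the base ID and table name combine into a request URL. The sketch below is for illustration only (the base ID is a made-up placeholder, not a real one); it shows the shape of the REST endpoint the Worker will call later, with the table name URL-encoded:

```javascript
// Build the Airtable REST endpoint for creating records in a table.
// "appExampleBaseId" is a placeholder, not a real Airtable base ID.
function airtableEndpoint(baseId, tableName) {
	return `https://api.airtable.com/v0/${baseId}/${encodeURIComponent(tableName)}`;
}

console.log(airtableEndpoint("appExampleBaseId", "Form Submissions"));
// logs: https://api.airtable.com/v0/appExampleBaseId/Form%20Submissions
```

Note that `encodeURIComponent` matters here: a table name like `Form Submissions` contains a space, which must be percent-encoded in the URL path.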
To make the token available in your codebase, use the [`wrangler secret`](/workers/wrangler/commands/#secret) command. The `secret` command encrypts and stores environment variables for use in your function, without revealing them to users. Run `wrangler secret put`, passing `AIRTABLE_ACCESS_TOKEN` as the name of your secret:

```sh
npx wrangler secret put AIRTABLE_ACCESS_TOKEN
```

```sh output
Enter the secret text you would like assigned to the variable AIRTABLE_ACCESS_TOKEN on the script named airtable-form-handler: ******
🌀 Creating the secret for script name airtable-form-handler
✨ Success! Uploaded secret AIRTABLE_ACCESS_TOKEN.
```

Before you continue, review the keys that you should have from Airtable:

1. **Airtable Table Name**: The name for your table, like Form Submissions.
2. **Airtable Base ID**: The alphanumeric base ID found at the top of your base's API page.
3. **Airtable Access Token**: A Personal Access Token created by the user to access information about your new Airtable base.

## 4. Submit data to Airtable

With your Airtable base set up, and the keys and IDs you need to communicate with the API ready, you will now set up your Worker to persist data from your form into Airtable.

In your Worker project's `index.js` file, replace the default code with a Workers fetch handler that can respond to requests. When the URL requested has a pathname of `/submit`, you will handle a new form submission; otherwise, you will return a `404 Not Found` response.

```js
export default {
	async fetch(request, env) {
		const url = new URL(request.url);
		if (url.pathname === "/submit") {
			return submitHandler(request, env);
		}
		return new Response("Not found", { status: 404 });
	},
};
```

The `submitHandler` function has two jobs. First, it parses the form data coming from your HTML5 form.
Once the data is parsed, use the Airtable API to persist a new row (a new form submission) to your table, and return a response to the client:

```js
async function submitHandler(request, env) {
	if (request.method !== "POST") {
		return new Response("Method Not Allowed", {
			status: 405,
		});
	}
	const body = await request.formData();

	const { first_name, last_name, email, phone, subject, message } =
		Object.fromEntries(body);

	// The keys in "fields" are case-sensitive, and
	// should exactly match the field names you set up
	// in your Airtable table, such as "First Name".
	const reqBody = {
		fields: {
			"First Name": first_name,
			"Last Name": last_name,
			Email: email,
			"Phone Number": phone,
			Subject: subject,
			Message: message,
		},
	};

	await createAirtableRecord(env, reqBody);
	// Let the client know the submission was persisted
	return new Response("OK");
}

// Existing code
// export default ...
```

While the majority of this function is concerned with parsing the request body (the data being sent as part of the request), there are two important things to note. First, if the HTTP method sent to this function is not `POST`, you will return a new response with the status code of [`405 Method Not Allowed`](https://httpstatuses.com/405). Second, the variable `reqBody` represents a collection of fields, which are key-value pairs for each column in your Airtable table. By formatting `reqBody` as an object with a collection of fields, you are creating a new record in your table with a value for each field. Then you call `createAirtableRecord` (the function you will define next).
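Seen in isolation, the parse-and-map step is a pure transformation from the form's snake_case input names to Airtable's case-sensitive column names. The `toAirtableFields` helper below is hypothetical, used only to illustrate that mapping (the Worker code above does the same thing inline):

```javascript
// Hypothetical helper, for illustration only: maps parsed form entries
// (snake_case input names) to the case-sensitive Airtable column names.
function toAirtableFields(entries) {
	return {
		fields: {
			"First Name": entries.first_name,
			"Last Name": entries.last_name,
			Email: entries.email,
			"Phone Number": entries.phone,
			Subject: entries.subject,
			Message: entries.message,
		},
	};
}

// Simulate what Object.fromEntries(await request.formData())
// would produce for a submission of the example form.
const parsed = {
	first_name: "Ellen",
	last_name: "Ripley",
	email: "eripley@nostromo.com",
	phone: "+1 (123) 456-7890",
	subject: "Hello",
	message: "Test message",
};

console.log(toAirtableFields(parsed).fields["First Name"]);
// logs: Ellen
```

If any of the Airtable column names differ in your base, the keys under `fields` must change to match exactly, or the API request will fail.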
The `createAirtableRecord` function accepts a `body` parameter, which conforms to the Airtable API's required format — namely, a JavaScript object containing key-value pairs under `fields`, representing a single record to be created on your table:

```js
async function createAirtableRecord(env, body) {
	try {
		const result = await fetch(
			`https://api.airtable.com/v0/${env.AIRTABLE_BASE_ID}/${encodeURIComponent(env.AIRTABLE_TABLE_NAME)}`,
			{
				method: "POST",
				body: JSON.stringify(body),
				headers: {
					Authorization: `Bearer ${env.AIRTABLE_ACCESS_TOKEN}`,
					"Content-Type": "application/json",
				},
			},
		);
		return result;
	} catch (error) {
		console.error(error);
	}
}

// Existing code
// async function submitHandler
// export default ...
```

To make an authenticated request to Airtable, you need to provide three constants that represent data about your Airtable account, base, and table name. You have already set `AIRTABLE_ACCESS_TOKEN` using `wrangler secret`, since it is a value that should be encrypted. The **Airtable base ID** and **table name** are values that can be publicly shared in places like GitHub. Use Wrangler's [`vars`](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#vars) feature to pass public environment variables from your Wrangler file.

Add a `vars` table at the end of your Wrangler file:

<WranglerConfig>

```toml null {7}
name = "workers-airtable-form"
main = "src/index.js"
compatibility_date = "2023-06-13"

[vars]
AIRTABLE_BASE_ID = "exampleBaseId"
AIRTABLE_TABLE_NAME = "Form Submissions"
```

</WranglerConfig>

With all these fields set, it is time to deploy your Workers serverless function and get your form communicating with it. First, publish your Worker:

```sh title="Deploy your Worker"
npx wrangler deploy
```

Your Worker project will deploy to a unique URL — for example, `https://workers-airtable-form.cloudflare.workers.dev`.
This represents the first part of your front-end form's `action` attribute — the second part is the path for your form handler, which is `/submit`. In your front-end UI, configure your `form` tag as seen below:

```html
<form
	action="https://workers-airtable-form.cloudflare.workers.dev/submit"
	method="POST"
	class="..."
>
	<!-- The rest of your HTML form -->
</form>
```

After you have deployed your new form (refer to the [HTML forms](/pages/tutorials/forms) tutorial if you need help creating a form), you should be able to submit the form and see the values show up immediately in Airtable:

## Conclusion

With this tutorial completed, you have created a Worker that can accept form submissions and persist them to Airtable. You have learned how to parse form data, set up environment variables, and use the `fetch` API to make requests to external services outside of your Worker.

## Related resources

- [Build a Slackbot](/workers/tutorials/build-a-slackbot)
- [Build a To-Do List Jamstack App](/workers/tutorials/build-a-jamstack-app)
- [Build a blog using Nuxt.js and Sanity.io on Cloudflare Pages](/pages/tutorials/build-a-blog-using-nuxt-and-sanity)
- [James Quick's video on building a Cloudflare Workers + Airtable integration](https://www.youtube.com/watch?v=tFQ2kbiu1K4)

---

# Connect to a PostgreSQL database with Cloudflare Workers

URL: https://developers.cloudflare.com/workers/tutorials/postgres/

import { Render, PackageManagers, WranglerConfig } from "~/components";

In this tutorial, you will learn how to create a Cloudflare Workers application and connect it to a PostgreSQL database using [TCP Sockets](/workers/runtime-apis/tcp-sockets/) and [Hyperdrive](/hyperdrive/). The Workers application you create in this tutorial will interact with a product database inside of PostgreSQL.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2.
Install [`npm`](https://docs.npmjs.com/getting-started). 3. Install [`Node.js`](https://nodejs.org/en/). Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later. 4. Make sure you have access to a PostgreSQL database. ## 1. Create a Worker application First, use the [`create-cloudflare` CLI](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker application. To do this, open a terminal window and run the following command: <PackageManagers type="create" pkg="cloudflare@latest" args={"postgres-tutorial"} /> This will prompt you to install the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) package and lead you through a setup wizard. <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "TypeScript", }} /> If you choose to deploy, you will be asked to authenticate (if not logged in already), and your project will be deployed. If you deploy, you can still modify your Worker code and deploy again at the end of this tutorial. Now, move into the newly created directory: ```sh cd postgres-tutorial ``` ### Enable Node.js compatibility [Node.js compatibility](/workers/runtime-apis/nodejs/) is required for database drivers, including Postgres.js, and needs to be configured for your Workers project. <Render file="nodejs_compat" product="workers" /> ## 2. Add the PostgreSQL connection library To connect to a PostgreSQL database, you will need the `postgres` library. In your Worker application directory, run the following command to install the library: ```sh npm install postgres ``` Make sure you are using `postgres` (`Postgres.js`) version `3.4.4` or higher. `Postgres.js` is compatible with both Pages and Workers. ## 3. 
Configure the connection to the PostgreSQL database

Choose one of the two methods to connect to your PostgreSQL database:

1. [Use a connection string](#use-a-connection-string).
2. [Set explicit parameters](#set-explicit-parameters).

### Use a connection string

A connection string contains all the information needed to connect to a database. It is a URL that contains the following information:

```
postgresql://username:password@host:port/database
```

Replace `username`, `password`, `host`, `port`, and `database` with the appropriate values for your PostgreSQL database. Set your connection string as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text. Use [`wrangler secret put`](/workers/wrangler/commands/#secret) with the example variable name `DB_URL`:

```sh
npx wrangler secret put DB_URL
```

```sh output
➜ wrangler secret put DB_URL
-------------------------------------------------------
? Enter a secret value: › ********************
✨ Success! Uploaded secret DB_URL
```

Set your `DB_URL` secret locally in a `.dev.vars` file as documented in [Local Development with Secrets](/workers/configuration/secrets/):

```txt title=".dev.vars"
DB_URL="<ENTER YOUR POSTGRESQL CONNECTION STRING>"
```

### Set explicit parameters

Configure each database parameter as an [environment variable](/workers/configuration/environment-variables/) via the [Cloudflare dashboard](/workers/configuration/environment-variables/#add-environment-variables-via-the-dashboard) or in your Wrangler file.
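Either method supplies the same five values. As a quick illustration (not part of the tutorial's Worker code), the WHATWG `URL` parser can split a connection string into the explicit parameters that `postgres()` accepts:

```javascript
// Illustration only: decompose a PostgreSQL connection string into the
// explicit parameters used in the alternative configuration method.
// The credentials and hostname below are made-up placeholders.
function parseDbUrl(dbUrl) {
	const u = new URL(dbUrl);
	return {
		username: u.username,
		password: u.password,
		host: u.hostname,
		// Fall back to PostgreSQL's conventional default port when omitted
		port: Number(u.port || 5432),
		database: u.pathname.replace(/^\//, ""),
	};
}

// Logs an object with username, password, host, port, and database
console.log(parseDbUrl("postgresql://postgres:secret@db.example.com:5432/productsdb"));
```

This is only a mental model for how the two methods relate; in the Worker itself you pass either the whole string or the individual values, never both.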
Refer to an example of a Wrangler file configuration:

<WranglerConfig>

```toml
[vars]
DB_USERNAME = "postgres"
# Set your password by creating a secret so it is not stored as plain text
DB_HOST = "ep-aged-sound-175961.us-east-2.aws.neon.tech"
DB_PORT = "5432"
DB_NAME = "productsdb"
```

</WranglerConfig>

To set your password as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text, use [`wrangler secret put`](/workers/wrangler/commands/#secret). `DB_PASSWORD` is an example variable name for this secret to be accessed in your Worker:

```sh
npx wrangler secret put DB_PASSWORD
```

```sh output
-------------------------------------------------------
? Enter a secret value: › ********************
✨ Success! Uploaded secret DB_PASSWORD
```

## 4. Connect to the PostgreSQL database in the Worker

Open your Worker's main file (for example, `worker.ts`) and import the `postgres` driver from the `postgres` library:

```typescript
import postgres from "postgres";
```

In the `fetch` event handler, connect to the PostgreSQL database using your chosen method, either the connection string or the explicit parameters.

### Use a connection string

```typescript
const sql = postgres(env.DB_URL);
```

### Set explicit parameters

```typescript
const sql = postgres({
	username: env.DB_USERNAME,
	password: env.DB_PASSWORD,
	host: env.DB_HOST,
	port: env.DB_PORT,
	database: env.DB_NAME,
});
```

## 5. Interact with the products database

To demonstrate how to interact with the products database, you will fetch data from the `products` table by querying the table when a request is received.

:::note
If you are following along in your own PostgreSQL instance, set up the `products` table using the following SQL `CREATE TABLE` statement.
This statement defines the columns and their respective data types for the `products` table: ```sql CREATE TABLE products ( id SERIAL PRIMARY KEY, name VARCHAR(255) NOT NULL, description TEXT, price DECIMAL(10, 2) NOT NULL ); ``` ::: Replace the existing code in your `worker.ts` file with the following code: ```typescript import postgres from "postgres"; export default { async fetch(request, env, ctx): Promise<Response> { const sql = postgres( env.DB_URL, { // Workers limit the number of concurrent external connections, so be sure to limit // the size of the local connection pool that postgres.js may establish. max: 5, // If you are using array types in your Postgres schema, it is necessary to fetch // type information to correctly de/serialize them. However, if you are not using // those, disabling this will save you an extra round-trip every time you connect. fetch_types: false, }, ); // Query the products table const result = await sql`SELECT * FROM products;`; // Return the result as JSON const resp = new Response(JSON.stringify(result), { headers: { "Content-Type": "application/json" }, }); return resp; }, } satisfies ExportedHandler<Env>; ``` This code establishes a connection to the PostgreSQL database within your Worker application and queries the `products` table, returning the results as a JSON response. ## 6. Deploy your Worker Run the following command to deploy your Worker: ```sh npx wrangler deploy ``` Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. After deploying, you can interact with your PostgreSQL products database using your Cloudflare Worker. Whenever a request is made to your Worker's URL, it will fetch data from the `products` table and return it as a JSON response. You can modify the query as needed to retrieve the desired data from your products database. ## 7. 
Insert a new row into the products database

To insert a new row into the `products` table, create a new API endpoint in your Worker that handles a `POST` request. When a `POST` request is received with a JSON payload, the Worker will insert a new row into the `products` table with the provided data.

Assume the `products` table has the following columns: `id`, `name`, `description`, and `price`. Add the following code snippet inside the `fetch` event handler in your `worker.ts` file, before the existing query code:

```typescript {9-32}
import postgres from "postgres";

export default {
	async fetch(request, env, ctx): Promise<Response> {
		const sql = postgres(env.DB_URL);

		const url = new URL(request.url);
		if (request.method === "POST" && url.pathname === "/products") {
			// Parse the request's JSON payload
			const productData = await request.json();

			// Insert the new product into the database
			const values = {
				name: productData.name,
				description: productData.description,
				price: productData.price,
			};
			const insertResult = await sql`
				INSERT INTO products ${sql(values)}
				RETURNING *
			`;

			// Return the inserted row as JSON
			const insertResp = new Response(JSON.stringify(insertResult), {
				headers: { "Content-Type": "application/json" },
			});
			return insertResp;
		}

		// Query the products table
		const result = await sql`SELECT * FROM products;`;

		// Return the result as JSON
		const resp = new Response(JSON.stringify(result), {
			headers: { "Content-Type": "application/json" },
		});
		return resp;
	},
} satisfies ExportedHandler<Env>;
```

This code snippet does the following:

1. Checks if the request is a `POST` request and the URL path is `/products`.
2. Parses the JSON payload from the request.
3. Constructs an `INSERT` SQL query using the provided product data.
4. Executes the query, inserting the new row into the `products` table.
5. Returns the inserted row as a JSON response.
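The handler above trusts the JSON payload as-is. A hypothetical validation step (not part of the tutorial's code) could reject malformed payloads with a `400` response before the `INSERT` ever reaches the database:

```javascript
// Hypothetical payload check, for illustration only: returns a list of
// problems with a product payload; an empty list means the payload is valid.
function validateProduct(data) {
	const errors = [];
	if (typeof data?.name !== "string" || data.name.length === 0) {
		errors.push("name must be a non-empty string");
	}
	if (data?.description !== undefined && typeof data.description !== "string") {
		errors.push("description must be a string when present");
	}
	const price = Number(data?.price);
	if (!Number.isFinite(price) || price < 0) {
		errors.push("price must be a non-negative number");
	}
	return errors;
}

console.log(validateProduct({ name: "Sample Product", price: 19.99 }));
// logs: []
```

In the Worker, you could run this on `productData` and return `new Response(JSON.stringify(errors), { status: 400 })` when the list is non-empty, so a bad request fails fast instead of surfacing as a database error.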
Now, when you send a `POST` request to your Worker's URL with the `/products` path and a JSON payload, the Worker will insert a new row into the `products` table with the provided data. When a request to `/` is made, the Worker will return all products in the database. After making these changes, deploy the Worker again by running:

```sh
npx wrangler deploy
```

You can now use your Cloudflare Worker to insert new rows into the `products` table. To test this functionality, send a `POST` request to your Worker's URL with the `/products` path, along with a JSON payload containing the new product data:

```json
{
	"name": "Sample Product",
	"description": "This is a sample product",
	"price": 19.99
}
```

You have successfully created a Cloudflare Worker that connects to a PostgreSQL database and handles fetching data and inserting new rows into a products table.

## 8. Use Hyperdrive to accelerate queries

Create a Hyperdrive configuration using the connection string for your PostgreSQL database.

```bash
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```

You can also use explicit parameters by following the [Wrangler documentation for Hyperdrive](/workers/wrangler/commands/#hyperdrive-create). This command outputs the Hyperdrive configuration `id` that will be used for your Hyperdrive [binding](/workers/runtime-apis/bindings/). Set up your binding by specifying the `id` in the Wrangler file.

<WranglerConfig>

```toml {7-9}
name = "hyperdrive-example"
main = "src/index.ts"
compatibility_date = "2024-08-21"
compatibility_flags = ["nodejs_compat"]

# Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.
[[hyperdrive]] binding = "HYPERDRIVE" id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>" ``` </WranglerConfig> Create the types for your Hyperdrive binding using the following command: ```bash npx wrangler types ``` Replace your existing connection string in your Worker code with the Hyperdrive connection string. ```js {3-3} export default { async fetch(request, env, ctx): Promise<Response> { const sql = postgres(env.HYPERDRIVE.connectionString) const url = new URL(request.url); //rest of the routes and database queries }, } satisfies ExportedHandler<Env>; ``` ## 9. Redeploy your Worker Run the following command to deploy your Worker: ```sh npx wrangler deploy ``` Your Worker application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`, using Hyperdrive. Hyperdrive accelerates database queries by pooling your connections and caching your requests across the globe. ## Next steps To build more with databases and Workers, refer to [Tutorials](/workers/tutorials) and explore the [Databases documentation](/workers/databases). If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team. --- # OpenAI GPT function calling with JavaScript and Cloudflare Workers URL: https://developers.cloudflare.com/workers/tutorials/openai-function-calls-workers/ import { Render, PackageManagers } from "~/components"; In this tutorial, you will build a project that leverages [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling) feature, available in OpenAI's latest Chat Completions API models. The function calling feature allows the AI model to intelligently decide when to call a function based on the input, and respond in JSON format to match the function's signature. 
You will use the function calling feature to request that the model determine a website URL which contains information relevant to a message from the user, retrieve the text content of the site, and, finally, return a final response from the model informed by real-time web data.

## What you will learn

- How to use OpenAI's function calling feature.
- Integrating OpenAI's API in a Cloudflare Worker.
- Fetching and processing website content using Cheerio.
- Handling API responses and function calls in JavaScript.
- Storing API keys as secrets with Wrangler.

---

<Render file="tutorials-before-you-start" />

## 1. Create a new Worker project

Create a Worker project in the command line:

<PackageManagers type="create" pkg="cloudflare@latest" args={"openai-function-calling-workers"} />

<Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "JavaScript", }} />

Go to your new `openai-function-calling-workers` Worker project:

```sh
cd openai-function-calling-workers
```

Inside of your new `openai-function-calling-workers` directory, find the `src/index.js` file. You will configure this file for most of the tutorial.

You will also need an OpenAI account and API key for this tutorial. If you do not have one, [create a new OpenAI account](https://platform.openai.com/signup) and [create an API key](https://platform.openai.com/account/api-keys) to continue with this tutorial. Make sure to store your API key somewhere safe so you can use it later.

## 2. Make a request to OpenAI

With your Worker project created, make your first request to OpenAI. You will use the OpenAI node library to interact with the OpenAI API.
In this project, you will also use the Cheerio library to handle processing the HTML content of websites:

```sh
npm install openai cheerio
```

Now, define the structure of your Worker in `index.js`:

```js
export default {
	async fetch(request, env, ctx) {
		// Initialize OpenAI API
		// Handle incoming requests
		return new Response("Hello World!");
	},
};
```

Above `export default`, add the imports for `openai` and `cheerio`:

```js
import OpenAI from "openai";
import * as cheerio from "cheerio";
```

Within your `fetch` function, instantiate your `OpenAI` client:

```js
async fetch(request, env, ctx) {
	const openai = new OpenAI({
		apiKey: env.OPENAI_API_KEY,
	});
	// Handle incoming requests
	return new Response('Hello World!');
},
```

Use [`wrangler secret put`](/workers/wrangler/commands/#secret) to set `OPENAI_API_KEY`. This [secret's](/workers/configuration/secrets/) value is the API key you created earlier in the OpenAI dashboard:

```sh
npx wrangler secret put OPENAI_API_KEY
```

For local development, create a new file `.dev.vars` in your Worker project and add this line. Make sure to replace `<YOUR_OPENAI_API_KEY>` with your own OpenAI API key:

```txt
OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
```

Now, make a request to the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/gpt/chat-completions-api):

```js
export default {
	async fetch(request, env, ctx) {
		const openai = new OpenAI({
			apiKey: env.OPENAI_API_KEY,
		});

		const url = new URL(request.url);
		const message = url.searchParams.get("message");

		const messages = [
			{
				role: "user",
				content: message ?
message : "What's in the news today?",
			},
		];

		const tools = [
			{
				type: "function",
				function: {
					name: "read_website_content",
					description: "Read the content on a given website",
					parameters: {
						type: "object",
						properties: {
							url: {
								type: "string",
								description: "The URL to the website to read",
							},
						},
						required: ["url"],
					},
				},
			},
		];

		const chatCompletion = await openai.chat.completions.create({
			model: "gpt-4o-mini",
			messages: messages,
			tools: tools,
			tool_choice: "auto",
		});

		const assistantMessage = chatCompletion.choices[0].message;
		console.log(assistantMessage);

		// Later you will continue handling the assistant's response here
		return new Response(assistantMessage.content);
	},
};
```

Review the arguments you are passing to OpenAI:

- **model**: This is the model you want OpenAI to use for your request. In this case, you are using `gpt-4o-mini`.
- **messages**: This is an array containing all messages that are part of the conversation. Initially you provide a message from the user, and you later add the response from the model. The content of the user message is either the `message` query parameter from the request URL or the default "What's in the news today?".
- **tools**: An array containing the actions available to the AI model. In this example you only have one tool, `read_website_content`, which reads the content on a given website.
  - **name**: The name of your function. In this case, it is `read_website_content`.
  - **description**: A short description that lets the model know the purpose of the function. This is optional but helps the model know when to select the tool.
  - **parameters**: A JSON Schema object which describes the function. In this case, we request a response containing an object with the required property `url`.
- **tool_choice**: This argument is technically optional as `auto` is the default. It indicates that either a function call or a normal message response can be returned by OpenAI.

## 3.
Building your `read_website_content()` function You will now need to define the `read_website_content` function, which is referenced in the `tools` array. The `read_website_content` function fetches the content of a given URL and extracts the text from `<p>` tags using the `cheerio` library: Add this code above the `export default` block in your `index.js` file: ```js async function read_website_content(url) { console.log("reading website content"); const response = await fetch(url); const body = await response.text(); let cheerioBody = cheerio.load(body); const resp = { website_body: cheerioBody("p").text(), url: url, }; return JSON.stringify(resp); } ``` In this function, you take the URL that you received from OpenAI and use JavaScript's [`Fetch API`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch) to pull the content of the website and extract the paragraph text. Now we need to determine when to call this function. ## 4. Process the Assistant's Messages Next, we need to process the response from the OpenAI API to check if it includes any function calls. If a function call is present, you should execute the corresponding function in your Worker. Note that the assistant may request multiple function calls. Modify the fetch method within the `export default` block as follows: ```js // ... your previous code ... 
if (assistantMessage.tool_calls) {
	for (const toolCall of assistantMessage.tool_calls) {
		if (toolCall.function.name === "read_website_content") {
			const url = JSON.parse(toolCall.function.arguments).url;
			const websiteContent = await read_website_content(url);
			messages.push({
				role: "tool",
				tool_call_id: toolCall.id,
				name: toolCall.function.name,
				content: websiteContent,
			});
		}
	}

	const secondChatCompletion = await openai.chat.completions.create({
		model: "gpt-4o-mini",
		messages: messages,
	});

	return new Response(secondChatCompletion.choices[0].message.content);
} else {
	// this is your existing return statement
	return new Response(assistantMessage.content);
}
```

Check if the assistant message contains any function calls by checking for the `tool_calls` property. Because the AI model can call multiple functions by default, you need to loop through any potential function calls and add them to the `messages` array. Each `read_website_content` call will invoke the `read_website_content` function you defined earlier and pass the URL generated by OpenAI as an argument. The `secondChatCompletion` is needed to provide a response informed by the data you retrieved from each function call.

Test your code by running `npx wrangler dev` and opening the provided URL in your browser. This will now show you OpenAI's response using real-time information from the retrieved web data. Now, the last step is to deploy your Worker.

## 5. Deploy your Worker application

To deploy your application, run the `npx wrangler deploy` command to deploy your Worker application:

```sh
npx wrangler deploy
```

You can now preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. Going to this URL will display the response from OpenAI. Optionally, add the `message` URL parameter to write a custom message: for example, `https://<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev/?message=What is the weather in NYC today?`.

## 6.
Next steps Reference the [finished code for this tutorial on GitHub](https://github.com/LoganGrasby/Cloudflare-OpenAI-Functions-Demo/blob/main/src/worker.js). To continue working with Workers and AI, refer to [the guide on using LangChain and Cloudflare Workers together](https://blog.cloudflare.com/langchain-and-cloudflare/) or [how to build a ChatGPT plugin with Cloudflare Workers](https://blog.cloudflare.com/magic-in-minutes-how-to-build-a-chatgpt-plugin-with-cloudflare-workers/). If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team. --- # Build Live Cursors with Next.js, RPC and Durable Objects URL: https://developers.cloudflare.com/workers/tutorials/live-cursors-with-nextjs-rpc-do/ import { Render, PackageManagers, Steps, TabItem, Tabs, Details } from "~/components"; In this tutorial, you will learn how to build a real-time [Next.js](https://nextjs.org/) app that displays the live cursor location of each connected user using [Durable Objects](/durable-objects/), the Workers' built-in [RPC (Remote Procedure Call)](/workers/runtime-apis/rpc/) system, and the [OpenNext](https://opennext.js.org/cloudflare) Cloudflare adapter. The application works like this: - An ID is generated for each user that navigates to the application, which is used for identifying the WebSocket connection in the Durable Object. - Once the WebSocket connection is established, the application sends a message to the WebSocket Durable Object to determine the current number of connected users. - A user can close all active WebSocket connections via a Next.js server action that uses an RPC method. - It handles WebSocket and mouse movement events to update the location of other users' cursors in the UI and to send updates about the user's own cursor, as well as join and leave WebSocket events.  --- ## 1. 
Create a Next.js Workers Project <Steps> 1. Run the following command to create your Next.js Worker named `next-rpc`: <PackageManagers type="create" pkg="cloudflare@latest" args={"next-rpc --framework=next --experimental"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "web-framework", framework: "Next.js", }} /> 2. Change into your new directory: ```sh cd next-rpc ``` 3. Install [nanoid](https://www.npmjs.com/package/nanoid) so that string IDs can be generated for clients: <PackageManagers type="add" pkg="nanoid" frame="none" /> 4. Install [perfect-cursors](https://www.npmjs.com/package/perfect-cursors) to interpolate cursor positions: <PackageManagers type="add" pkg="perfect-cursors" frame="none" /> 5. Define workspaces for each Worker: <Tabs> <TabItem label="npm"> Update your `package.json` file. ```json title="package.json" ins={14-17} { "name": "next-rpc", "version": "0.1.0", "private": true, "scripts": { "dev": "next dev", "build": "next build", "start": "next start", "lint": "next lint", "deploy": "cloudflare && wrangler deploy", "preview": "cloudflare && wrangler dev", "cf-typegen": "wrangler types --env-interface CloudflareEnv env.d.ts" }, "workspaces": [ ".", "worker" ], // ... } ``` </TabItem> <TabItem label="pnpm"> Create a new file `pnpm-workspace.yaml`. ```yaml title="pnpm-workspace.yaml" packages: - "worker" - "." ``` </TabItem> </Tabs> </Steps> ## 2. Create a Durable Object Worker This Worker will manage the Durable Object and also have internal APIs that will be made available to the Next.js Worker using a [`WorkerEntrypoint`](/workers/runtime-apis/bindings/service-bindings/rpc/) class. <Steps> 1. Create another Worker named `worker` inside the Next.js directory: <PackageManagers type="create" pkg="cloudflare@latest" args={"worker"} /> <Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker using Durable Objects", lang: "TypeScript", }} /> </Steps> ## 3. 
Build Durable Object Worker functionality <Steps> 1. In your `worker/wrangler.toml` file, update the Durable Object binding: ```toml {4,5,9} title="worker/wrangler.toml" # ... Other wrangler configuration settings [[durable_objects.bindings]] name = "CURSOR_SESSIONS" class_name = "CursorSessions" [[migrations]] tag = "v1" new_classes = ["CursorSessions"] ``` 2. Initialize the main methods for the Durable Object and define types for WebSocket messages and cursor sessions in your `worker/src/index.ts` to support type-safe interaction: - `WsMessage`. Specifies the structure of WebSocket messages handled by the Durable Object. - `Session`. Represents the connected user's ID and current cursor coordinates. ```ts title="worker/src/index.ts" import { DurableObject } from 'cloudflare:workers'; export type WsMessage = | { type: "message"; data: string } | { type: "quit"; id: string } | { type: "join"; id: string } | { type: "move"; id: string; x: number; y: number } | { type: "get-cursors" } | { type: "get-cursors-response"; sessions: Session[] }; export type Session = { id: string; x: number; y: number }; export class CursorSessions extends DurableObject<Env> { constructor(ctx: DurableObjectState, env: Env) {} broadcast(message: WsMessage, self?: string) {} async webSocketMessage(ws: WebSocket, message: string) {} async webSocketClose(ws: WebSocket, code: number) {} closeSessions() {} async fetch(request: Request) { return new Response("Hello"); } } export default { async fetch(request, env, ctx) { return new Response("Ok"); }, } satisfies ExportedHandler<Env>; ``` Now update `worker-configuration.d.ts` by running: ```sh cd worker && npm run cf-typegen ``` 3. 
Update the Durable Object to manage WebSockets:

```ts title="worker/src/index.ts" {29-34,36-43,56,79,89-100}
// Rest of the code
export class CursorSessions extends DurableObject<Env> {
	sessions: Map<WebSocket, Session>;
	constructor(ctx: DurableObjectState, env: Env) {
		super(ctx, env);
		this.sessions = new Map();
		this.ctx.getWebSockets().forEach((ws) => {
			const meta = ws.deserializeAttachment();
			this.sessions.set(ws, { ...meta });
		});
	}
	broadcast(message: WsMessage, self?: string) {
		this.ctx.getWebSockets().forEach((ws) => {
			const { id } = ws.deserializeAttachment();
			if (id !== self) ws.send(JSON.stringify(message));
		});
	}
	async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
		if (typeof message !== "string") return;
		const parsedMsg: WsMessage = JSON.parse(message);
		const session = this.sessions.get(ws);
		if (!session) return;
		switch (parsedMsg.type) {
			case "move":
				session.x = parsedMsg.x;
				session.y = parsedMsg.y;
				ws.serializeAttachment(session);
				this.broadcast(parsedMsg, session.id);
				break;
			case "get-cursors":
				const sessions: Session[] = [];
				this.sessions.forEach((session) => {
					sessions.push(session);
				});
				const wsMessage: WsMessage = { type: "get-cursors-response", sessions };
				ws.send(JSON.stringify(wsMessage));
				break;
			case "message":
				this.broadcast(parsedMsg);
				break;
			default:
				break;
		}
	}
	async webSocketClose(ws: WebSocket, code: number) {
		const id = this.sessions.get(ws)?.id;
		id && this.broadcast({ type: 'quit', id });
		this.sessions.delete(ws);
		ws.close();
	}
	closeSessions() {
		this.ctx.getWebSockets().forEach((ws) => ws.close());
	}
	async fetch(request: Request) {
		const url = new URL(request.url);
		const webSocketPair = new WebSocketPair();
		const [client, server] = Object.values(webSocketPair);
		this.ctx.acceptWebSocket(server);
		const id = url.searchParams.get("id");
		if (!id) {
			return new Response("Missing id", { status: 400 });
		}
		// Set Id and Default Position
		const sessionInitialData: Session = { id, x: -1, y: -1 };
		server.serializeAttachment(sessionInitialData);
		this.sessions.set(server, sessionInitialData);
		this.broadcast({ type: "join", id }, id);
		return new Response(null, {
			status: 101,
			webSocket: client,
		});
	}
}

export default {
	async fetch(request, env, ctx) {
		if (request.url.match("/ws")) {
			const upgradeHeader = request.headers.get("Upgrade");
			if (!upgradeHeader || upgradeHeader !== "websocket") {
				return new Response("Durable Object expected Upgrade: websocket", {
					status: 426,
				});
			}
			const id = env.CURSOR_SESSIONS.idFromName("globalRoom");
			const stub = env.CURSOR_SESSIONS.get(id);
			return stub.fetch(request);
		}
		return new Response(null, {
			status: 400,
			statusText: "Bad Request",
			headers: {
				"Content-Type": "text/plain",
			},
		});
	},
} satisfies ExportedHandler<Env>;
```

- The main `fetch` handler routes requests with a `/ws` URL to the `CursorSessions` Durable Object, where a WebSocket connection is established.
- The `CursorSessions` class manages WebSocket connections and session state, and broadcasts messages to other connected clients.
- When a new WebSocket connection is established, the Durable Object broadcasts a `join` message to all connected clients; similarly, a `quit` message is broadcast when a client disconnects.
- It tracks each WebSocket client's last cursor position via the `move` message, which is broadcast to the other active clients.
- When a `get-cursors` message is received, it sends the list of currently active sessions to the specific client that requested it.

4. Extend the `WorkerEntrypoint` class for RPC:

:::note[Note]
A service binding to `SessionsRPC` is used here because Durable Object RPC is not yet supported across multiple `wrangler dev` sessions. In this case, two `wrangler dev` sessions are used: one for the Next.js Worker and one for the Durable Object Worker. In production, however, Durable Object RPC is not an issue. For convenience in local development, a service binding is used instead of directly invoking the Durable Object RPC method.
::: ```ts title="worker/src/index.ts" ins={2,5-12} del={1} import { DurableObject } from 'cloudflare:workers'; import { DurableObject, WorkerEntrypoint } from 'cloudflare:workers'; // ... rest of the code export class SessionsRPC extends WorkerEntrypoint<Env> { async closeSessions() { const id = this.env.CURSOR_SESSIONS.idFromName("globalRoom"); const stub = this.env.CURSOR_SESSIONS.get(id); // Invoking Durable Object RPC method. Same `wrangler dev` session. await stub.closeSessions(); } } export default { async fetch(request, env, ctx) { if (request.url.match("/ws")) { // ... ``` 5. Leave the Durable Object Worker running. It's used for RPC and serves as a local WebSocket server: <PackageManagers type="run" pkg="dev" frame="none" /> 6. Use the resulting address from the previous step to set the Worker host as a public environment variable in your Next.js project: ```text title="next-rpc/.env.local" ins={1} NEXT_PUBLIC_WS_HOST=localhost:8787 ``` </Steps> ## 4. Build Next.js Worker functionality <Steps> 1. In your Next.js Wrangler file, declare the external Durable Object binding and the Service binding to `SessionsRPC`: ```toml title="next-rpc/wrangler.toml" ins={10-18} # ... rest of the configuration compatibility_flags = ["nodejs_compat"] # Minification helps to keep the Worker bundle size down and improve start up time. minify = true # Use the new Workers + Assets to host the static frontend files assets = { directory = ".worker-next/assets", binding = "ASSETS" } [[durable_objects.bindings]] name = "CURSOR_SESSIONS" class_name = "CursorSessions" script_name = "worker" [[services]] binding = "RPC_SERVICE" service = "worker" entrypoint = "SessionsRPC" ``` 2. Update your `env.d.ts` file for type-safety: ```ts title="next-rpc/env.d.ts" {2-5} interface CloudflareEnv { CURSOR_SESSIONS: DurableObjectNamespace< import("./worker/src/index").CursorSessions >; RPC_SERVICE: Service<import("./worker/src/index").SessionsRPC>; ASSETS: Fetcher; } ``` 3. 
Include Next.js server side logic: - Add a server action to close all active WebSocket connections. - Use the RPC method `closeSessions` from the `RPC_SERVICE` Service binding instead of invoking the Durable Object RPC method because of the limitation mentioned in the note above. - The server component generates unique IDs using `nanoid` to identify the WebSocket connection within the Durable Object. - Set the [`dynamic`](https://nextjs.org/docs/app/api-reference/file-conventions/route-segment-config) value to `force-dynamic` to ensure unique ID generation and avoid static rendering ```tsx title="src/app/page.tsx" {5,9-10,19,25-27,33} import { getCloudflareContext } from "@opennextjs/cloudflare"; import { Cursors } from "./cursor"; import { nanoid } from "nanoid"; export const dynamic = "force-dynamic"; async function closeSessions() { "use server"; const cf = await getCloudflareContext(); await cf.env.RPC_SERVICE.closeSessions(); // Note: Not supported in `wrangler dev` // const id = cf.env.CURSOR_SESSIONS.idFromName("globalRoom"); // const stub = cf.env.CURSOR_SESSIONS.get(id); // await stub.closeSessions(); } export default function Home() { const id = `ws_${nanoid(50)}`; return ( <main className="flex min-h-screen flex-col items-center p-24 justify-center"> <div className="border border-dashed w-full"> <p className="pt-2 px-2">Server Actions</p> <div className="p-2"> <form action={closeSessions}> <button className="border px-2 py-1">Close WebSockets</button> </form> </div> </div> <div className="border border-dashed w-full mt-2.5"> <p className="py-2 px-2">Live Cursors</p> <div className="px-2 space-y-2"> <Cursors id={id}></Cursors> </div> </div> </main> ); } ``` 4. 
Create a client component to manage WebSocket and mouse movement events: <Details header="src/app/cursor.tsx"> ```tsx title="src/app/cursor.tsx" "use client"; import { useCallback, useEffect, useLayoutEffect, useReducer, useRef, useState } from "react"; import type { Session, WsMessage } from "../../worker/src/index"; import { PerfectCursor } from "perfect-cursors"; const INTERVAL = 55; export function Cursors(props: { id: string }) { const wsRef = useRef<WebSocket | null>(null); const [cursors, setCursors] = useState<Map<string, Session>>(new Map()); const lastSentTimestamp = useRef(0); const [messageState, dispatchMessage] = useReducer(messageReducer, { in: "", out: "", }); const [highlightedIn, highlightIn] = useHighlight(); const [highlightedOut, highlightOut] = useHighlight(); function startWebSocket() { const wsProtocol = window.location.protocol === "https:" ? "wss" : "ws"; const ws = new WebSocket( `${wsProtocol}://${process.env.NEXT_PUBLIC_WS_HOST}/ws?id=${props.id}`, ); ws.onopen = () => { highlightOut(); dispatchMessage({ type: "out", message: "get-cursors" }); const message: WsMessage = { type: "get-cursors" }; ws.send(JSON.stringify(message)); }; ws.onmessage = (message) => { const messageData: WsMessage = JSON.parse(message.data); highlightIn(); dispatchMessage({ type: "in", message: messageData.type }); switch (messageData.type) { case "quit": setCursors((prev) => { const updated = new Map(prev); updated.delete(messageData.id); return updated; }); break; case "join": setCursors((prev) => { const updated = new Map(prev); if (!updated.has(messageData.id)) { updated.set(messageData.id, { id: messageData.id, x: -1, y: -1 }); } return updated; }); break; case "move": setCursors((prev) => { const updated = new Map(prev); const session = updated.get(messageData.id); if (session) { session.x = messageData.x; session.y = messageData.y; } else { updated.set(messageData.id, messageData); } return updated; }); break; case "get-cursors-response": setCursors( new 
Map( messageData.sessions.map((session) => [session.id, session]), ), ); break; default: break; } }; ws.onclose = () => setCursors(new Map()); return ws; } useEffect(() => { const abortController = new AbortController(); document.addEventListener( "mousemove", (ev) => { const x = ev.pageX / window.innerWidth, y = ev.pageY / window.innerHeight; const now = Date.now(); if ( now - lastSentTimestamp.current > INTERVAL && wsRef.current?.readyState === WebSocket.OPEN ) { const message: WsMessage = { type: "move", id: props.id, x, y }; wsRef.current.send(JSON.stringify(message)); lastSentTimestamp.current = now; highlightOut(); dispatchMessage({ type: "out", message: "move" }); } }, { signal: abortController.signal, }, ); return () => abortController.abort(); // eslint-disable-next-line react-hooks/exhaustive-deps }, []); useEffect(() => { wsRef.current = startWebSocket(); return () => wsRef.current?.close(); // eslint-disable-next-line react-hooks/exhaustive-deps }, [props.id]); function sendMessage() { highlightOut(); dispatchMessage({ type: "out", message: "message" }); wsRef.current?.send( JSON.stringify({ type: "message", data: "Ping" } satisfies WsMessage), ); } const otherCursors = Array.from(cursors.values()).filter( ({ id, x, y }) => id !== props.id && x !== -1 && y !== -1, ); return ( <> <div className="flex border"> <div className="px-2 py-1 border-r">WebSocket Connections</div> <div className="px-2 py-1"> {cursors.size} </div> </div> <div className="flex border"> <div className="px-2 py-1 border-r">Messages</div> <div className="flex flex-1"> <div className="px-2 py-1 border-r">↓</div> <div className="w-full px-2 py-1 [word-break:break-word] transition-colors duration-500" style={{ backgroundColor: highlightedIn ? 
"#ef4444" : "transparent", }} > {messageState.in} </div> </div> <div className="flex flex-1"> <div className="px-2 py-1 border-x">↑</div> <div className="w-full px-2 py-1 [word-break:break-word] transition-colors duration-500" style={{ backgroundColor: highlightedOut ? "#60a5fa" : "transparent", }} > {messageState.out} </div> </div> </div> <div className="flex gap-2"> <button onClick={sendMessage} className="border px-2 py-1"> ws message </button> <button className="border px-2 py-1 disabled:opacity-80" onClick={() => { wsRef.current?.close(); wsRef.current = startWebSocket(); }} > ws reconnect </button> <button className="border px-2 py-1" onClick={() => wsRef.current?.close()} > ws close </button> </div> <div> {otherCursors.map((session) => ( <SvgCursor key={session.id} point={[ session.x * window.innerWidth, session.y * window.innerHeight, ]} /> ))} </div> </> ); } function SvgCursor({ point }: { point: number[] }) { const refSvg = useRef<SVGSVGElement>(null); const animateCursor = useCallback((point: number[]) => { refSvg.current?.style.setProperty( "transform", `translate(${point[0]}px, ${point[1]}px)`, ); }, []); const onPointMove = usePerfectCursor(animateCursor); useLayoutEffect(() => onPointMove(point), [onPointMove, point]); const [randomColor] = useState( `#${Math.floor(Math.random() * 16777215) .toString(16) .padStart(6, "0")}`, ); return ( <svg ref={refSvg} height="32" width="32" viewBox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" className={"absolute -top-[12px] -left-[12px] pointer-events-none"} > <defs> <filter id="shadow" x="-40%" y="-40%" width="180%" height="180%"> <feDropShadow dx="1" dy="1" stdDeviation="1.2" floodOpacity="0.5" /> </filter> </defs> <g fill="none" transform="rotate(0 16 16)" filter="url(#shadow)"> <path d="M12 24.4219V8.4069L23.591 20.0259H16.81l-.411.124z" fill="white" /> <path d="M21.0845 25.0962L17.4795 26.6312L12.7975 15.5422L16.4835 13.9892z" fill="white" /> <path d="M19.751 24.4155L17.907 25.1895L14.807 17.8155L16.648 
17.04z" fill={randomColor} /> <path d="M13 10.814V22.002L15.969 19.136l.428-.139h4.768z" fill={randomColor} /> </g> </svg> ); } function usePerfectCursor(cb: (point: number[]) => void, point?: number[]) { const [pc] = useState(() => new PerfectCursor(cb)); useLayoutEffect(() => { if (point) pc.addPoint(point); return () => pc.dispose(); // eslint-disable-next-line react-hooks/exhaustive-deps }, [pc]); useLayoutEffect(() => { PerfectCursor.MAX_INTERVAL = 58; }, []); const onPointChange = useCallback( (point: number[]) => pc.addPoint(point), [pc], ); return onPointChange; } type MessageState = { in: string; out: string }; type MessageAction = { type: "in" | "out"; message: string }; function messageReducer(state: MessageState, action: MessageAction) { switch (action.type) { case "in": return { ...state, in: action.message }; case "out": return { ...state, out: action.message }; default: return state; } } function useHighlight(duration = 250) { const timestampRef = useRef(0); const [highlighted, setHighlighted] = useState(false); function highlight() { timestampRef.current = Date.now(); setHighlighted(true); setTimeout(() => { const now = Date.now(); if (now - timestampRef.current >= duration) { setHighlighted(false); } }, duration); } return [highlighted, highlight] as const; } ``` </Details> The generated ID is used here and passed as a parameter to the WebSocket server: ```ts const ws = new WebSocket( `${wsProtocol}://${process.env.NEXT_PUBLIC_WS_HOST}/ws?id=${props.id}`, ); ``` The component starts the WebSocket connection and handles 4 types of WebSocket messages, which trigger updates to React's state: - `join`. Received when a new WebSocket connection is established. - `quit`. Received when a WebSocket connection is closed. - `move`. Received when a user's cursor moves. - `get-cursors-response`. Received when a client sends a `get-cursors` message, which is triggered once the WebSocket connection is open. 
It sends the user's cursor coordinates to the WebSocket server during the [`mousemove`](https://developer.mozilla.org/en-US/docs/Web/API/Element/mousemove_event) event, which then broadcasts them to all active WebSocket clients. Although there are multiple strategies you can combine for real-time cursor synchronization (for example, batching and interpolation), this tutorial uses throttling, spline interpolation, and position normalization:

```ts {4-5,8-9}
document.addEventListener(
	"mousemove",
	(ev) => {
		const x = ev.pageX / window.innerWidth,
			y = ev.pageY / window.innerHeight;
		const now = Date.now();
		if (
			now - lastSentTimestamp.current > INTERVAL &&
			wsRef.current?.readyState === WebSocket.OPEN
		) {
			const message: WsMessage = { type: "move", id: props.id, x, y };
			wsRef.current.send(JSON.stringify(message));
			lastSentTimestamp.current = now;
			// ...
		}
	}
);
```

Each animated cursor is controlled by a `PerfectCursor` instance, which animates its position along a spline curve defined by the cursor's latest positions:

```ts {9-11}
// SvgCursor react component
const refSvg = useRef<SVGSVGElement>(null);
const animateCursor = useCallback((point: number[]) => {
	refSvg.current?.style.setProperty(
		"transform",
		`translate(${point[0]}px, ${point[1]}px)`,
	);
}, []);
const onPointMove = usePerfectCursor(animateCursor);
// A `point` is added to the path whenever its value updates
useLayoutEffect(() => onPointMove(point), [onPointMove, point]);
// ...
```

5. Run the Next.js development server:

:::note[Note]
The Durable Object Worker must also be running.
:::

<PackageManagers type="run" pkg="dev" frame="none" />

6. Open the app in your browser.

</Steps>

## 5. Deploy the project

<Steps>

1. Change into your Durable Object Worker directory:

```sh
cd worker
```

Deploy the Worker:

<PackageManagers type="run" pkg="deploy" frame="none" />

Copy only the host from the generated Worker URL, excluding the protocol, and set `NEXT_PUBLIC_WS_HOST` in `.env.local` to this value (for example, `worker-unique-identifier.workers.dev`).

```txt title="next-rpc/.env.local" ins={2} del={1}
NEXT_PUBLIC_WS_HOST=localhost:8787
NEXT_PUBLIC_WS_HOST=worker-unique-identifier.workers.dev
```

2. Change into your root directory and deploy your Next.js app:

:::note[Optional Step]
Invoking Durable Object RPC methods between separate Workers is fully supported in Cloudflare deployments. You can opt to use them instead of Service Binding RPC:

```ts title="src/app/page.tsx" ins={5-7} del={4}
async function closeSessions() {
	"use server";
	const cf = await getCloudflareContext();
	await cf.env.RPC_SERVICE.closeSessions(); // Note: Not supported in `wrangler dev`
	const id = cf.env.CURSOR_SESSIONS.idFromName("globalRoom");
	const stub = cf.env.CURSOR_SESSIONS.get(id);
	await stub.closeSessions();
}
```
:::

<PackageManagers type="run" pkg="deploy" frame="none" />

</Steps>

## Summary

In this tutorial, you learned how to integrate Next.js with Durable Objects to build a real-time application that visualizes cursors. You also learned how to use Workers' built-in RPC system alongside Next.js server actions. The complete code for this tutorial is available on [GitHub](https://github.com/exectx/next-live-cursors-do-rpc).

## Related resources

You can check out other Cloudflare tutorials and related resources:

- [Workers RPC](/workers/runtime-apis/bindings/service-bindings/rpc/).
- [Next.js and Workers Static Assets](/workers/frameworks/framework-guides/nextjs/).
- [Build a seat booking app with SQLite in Durable Objects](/durable-objects/tutorials/build-a-seat-booking-app/).
---

# Send Emails With Postmark

URL: https://developers.cloudflare.com/workers/tutorials/send-emails-with-postmark/

In this tutorial, you will learn how to send transactional emails from Workers using [Postmark](https://postmarkapp.com/). At the end of this tutorial, you’ll be able to:

- Create a Worker to send emails.
- Sign up and add a Cloudflare domain to Postmark.
- Send emails from your Worker using Postmark.
- Store API keys securely with secrets.

## Prerequisites

To continue with this tutorial, you’ll need:

- A [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages), if you don’t already have one.
- A [registered](/registrar/get-started/register-domain/) domain.
- [npm](https://docs.npmjs.com/getting-started) installed.
- A [Postmark account](https://account.postmarkapp.com/sign_up).

## Create a Worker project

Start by using [C3](/pages/get-started/c3/) to create a Worker project in the command line, then answer the prompts:

```sh
npm create cloudflare@latest
```

Alternatively, you can use CLI arguments to speed things up:

```sh
npm create cloudflare@latest email-with-postmark -- --type=hello-world --ts=false --git=true --deploy=false
```

This creates a simple hello-world Worker with the following content:

```js
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

## Add your domain to Postmark

If you don’t already have a Postmark account, you can sign up for a [free account here](https://account.postmarkapp.com/sign_up). After signing up, check your inbox for a link to confirm your sender signature. This verifies your registered email address and enables you to send emails from it.

To enable email sending from other addresses on your domain, navigate to `Sender Signatures` on the Postmark dashboard, select `Add Domain or Signature` > `Add Domain`, then type in your domain and click `Verify Domain`.

Next, you’re presented with a list of DNS records to add to your Cloudflare domain.
On your Cloudflare dashboard, select the domain you entered earlier and navigate to `DNS` > `Records`. Copy/paste the DNS records (DKIM and Return-Path) from Postmark to your Cloudflare domain.

:::note
If you need more help adding DNS records in Cloudflare, refer to [Manage DNS records](/dns/manage-dns-records/how-to/create-dns-records/).
:::

When that’s done, head back to Postmark and click the `Verify` buttons. If all records are properly configured, your domain status should be updated to `Verified`.

To grab your API token, navigate to the `Servers` tab, then `My First Server` > `API Tokens`, and copy your API key to a safe place.

## Send emails from your Worker

The final step is putting it all together in a Worker. In your Worker, make a POST request with `fetch` to Postmark’s email API and include your token and message body:

:::note
[Postmark’s JavaScript library](https://www.npmjs.com/package/postmark) is currently not supported on Workers. Use the [email API](https://postmarkapp.com/developer/user-guide/send-email-with-api) instead.
:::

```jsx
export default {
	async fetch(request, env, ctx) {
		return await fetch("https://api.postmarkapp.com/email", {
			method: "POST",
			headers: {
				"Content-Type": "application/json",
				"X-Postmark-Server-Token": "your_postmark_api_token_here",
			},
			body: JSON.stringify({
				From: "hello@example.com",
				To: "someone@example.com",
				Subject: "Hello World",
				HtmlBody: "<p>Hello from Workers</p>",
			}),
		});
	},
};
```

To test your code locally, run the following command and navigate to [http://localhost:8787/](http://localhost:8787/) in a browser:

```sh
npm start
```

Deploy your Worker with `npm run deploy`.

## Move API token to Secrets

Sensitive information such as API keys and tokens should always be stored in secrets, which are encrypted to add an extra layer of protection. Move your API token to a secret and access it from the environment of your Worker.
To add secrets for local development, create a `.dev.vars` file, which works exactly like a `.env` file:

```txt
POSTMARK_API_TOKEN=your_postmark_api_token_here
```

Also ensure the secret is added to your deployed Worker by running:

```sh title="Add secret to deployed Worker"
npx wrangler secret put POSTMARK_API_TOKEN
```

The added secret can be accessed via the `env` parameter passed to your Worker’s fetch event handler:

```jsx
export default {
	async fetch(request, env, ctx) {
		return await fetch("https://api.postmarkapp.com/email", {
			method: "POST",
			headers: {
				"Content-Type": "application/json",
				"X-Postmark-Server-Token": env.POSTMARK_API_TOKEN,
			},
			body: JSON.stringify({
				From: "hello@example.com",
				To: "someone@example.com",
				Subject: "Hello World",
				HtmlBody: "<p>Hello from Workers</p>",
			}),
		});
	},
};
```

Finally, deploy this update with `npm run deploy`.

## Related resources

- [Storing API keys and tokens with Secrets](/workers/configuration/secrets/).
- [Transferring your domain to Cloudflare](/registrar/get-started/transfer-domain-to-cloudflare/).
- [Send emails from Workers](/email-routing/email-workers/send-email-workers/)

---

# Send Emails With Resend

URL: https://developers.cloudflare.com/workers/tutorials/send-emails-with-resend/

In this tutorial, you will learn how to send transactional emails from Workers using [Resend](https://resend.com/). At the end of this tutorial, you’ll be able to:

- Create a Worker to send emails.
- Sign up and add a Cloudflare domain to Resend.
- Send emails from your Worker using Resend.
- Store API keys securely with secrets.

## Prerequisites

To continue with this tutorial, you’ll need:

- A [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages), if you don’t already have one.
- A [registered](/registrar/get-started/register-domain/) domain.
- [npm](https://docs.npmjs.com/getting-started) installed.
- A [Resend account](https://resend.com/signup).
## Create a Worker project

Start by using [C3](/pages/get-started/c3/) to create a Worker project in the command line, then answer the prompts:

```sh
npm create cloudflare@latest
```

Alternatively, you can use CLI arguments to speed things up:

```sh
npm create cloudflare@latest email-with-resend -- --type=hello-world --ts=false --git=true --deploy=false
```

This creates a simple hello-world Worker with the following content:

```js
export default {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};
```

## Add your domain to Resend

If you don’t already have a Resend account, you can sign up for a [free account here](https://resend.com/signup). After signing up, go to `Domains` using the side menu, and click the button to add a new domain. On the modal, enter the domain you want to add and then select a region.

Next, you’re presented with a list of DNS records to add to your Cloudflare domain. On your Cloudflare dashboard, select the domain you entered earlier and navigate to `DNS` > `Records`. Copy/paste the DNS records (DKIM, SPF, and DMARC records) from Resend to your Cloudflare domain.

:::note
If you need more help adding DNS records in Cloudflare, refer to [Manage DNS records](/dns/manage-dns-records/how-to/create-dns-records/).
:::

When that’s done, head back to Resend and click the `Verify DNS Records` button. If all records are properly configured, your domain status should be updated to `Verified`.

Lastly, navigate to `API Keys` with the side menu to create an API key. Give your key a descriptive name and the appropriate permissions. Click the button to add your key, then copy your API key to a safe location.

## Send emails from your Worker

The final step is putting it all together in a Worker. Open up a terminal in the directory of the Worker you created earlier.
Then, install the Resend SDK:

```sh
npm i resend
```

In your Worker, import and use the Resend library like so:

```jsx
import { Resend } from "resend";

export default {
	async fetch(request, env, ctx) {
		const resend = new Resend("your_resend_api_key");

		const { data, error } = await resend.emails.send({
			from: "hello@example.com",
			to: "someone@example.com",
			subject: "Hello World",
			html: "<p>Hello from Workers</p>",
		});

		return Response.json({ data, error });
	},
};
```

To test your code locally, run the following command and navigate to [http://localhost:8787/](http://localhost:8787/) in a browser:

```sh
npm start
```

Deploy your Worker with `npm run deploy`.

## Move API keys to Secrets

Sensitive information such as API keys and tokens should always be stored in secrets, which are encrypted to add an extra layer of protection. Move your API key to a secret and access it from the environment of your Worker.

To add secrets for local development, create a `.dev.vars` file, which works exactly like a `.env` file:

```txt
RESEND_API_KEY=your_resend_api_key
```

Also ensure the secret is added to your deployed Worker by running:

```sh title="Add secret to deployed Worker"
npx wrangler secret put RESEND_API_KEY
```

The added secret can be accessed via the `env` parameter passed to your Worker’s fetch event handler:

```jsx
import { Resend } from "resend";

export default {
	async fetch(request, env, ctx) {
		const resend = new Resend(env.RESEND_API_KEY);

		const { data, error } = await resend.emails.send({
			from: "hello@example.com",
			to: "someone@example.com",
			subject: "Hello World",
			html: "<p>Hello from Workers</p>",
		});

		return Response.json({ data, error });
	},
};
```

Finally, deploy this update with `npm run deploy`.

## Related resources

- [Storing API keys and tokens with Secrets](/workers/configuration/secrets/).
- [Transferring your domain to Cloudflare](/registrar/get-started/transfer-domain-to-cloudflare/).
- [Send emails from Workers](/email-routing/email-workers/send-email-workers/)

---

# Create a serverless, globally distributed REST API with Fauna

URL: https://developers.cloudflare.com/workers/tutorials/store-data-with-fauna/

import { Render, TabItem, Tabs, PackageManagers, WranglerConfig } from "~/components";

In this tutorial, you learn how to store and retrieve data in your Cloudflare Workers applications by building a REST API that manages an inventory catalog using [Fauna](https://fauna.com/) as its data layer.

## Learning goals

- How to store and retrieve data from Fauna in Workers.
- How to use Wrangler to store secrets securely.
- How to use [Hono](https://hono.dev) as a web framework for your Workers.

Building with Fauna, Workers, and Hono enables you to create a globally distributed, strongly consistent, fully serverless REST API in a single repository.

Fauna is a document-based database with a flexible schema. This allows you to define the structure of your data – whatever it may be – and store documents that adhere to that structure. In this tutorial, you will build a product inventory, where each `product` document must contain the following properties:

- **title** - A human-friendly string that represents the title or name of a product.
- **serialNumber** - A machine-friendly string that uniquely identifies the product.
- **weightLbs** - A floating point number that represents the weight in pounds of the product.
- **quantity** - A non-negative integer that represents how many items of a particular product there are in the inventory.

Documents are stored in a [collection](https://docs.fauna.com/fauna/current/reference/schema_entities/collection/). Collections in document databases are groups of related documents.

For this tutorial, all API endpoints are public. However, Fauna also offers multiple avenues for securing endpoints and collections.
Refer to [Choosing an authentication strategy with Fauna](https://fauna.com/blog/choosing-an-authentication-strategy-with-fauna) for more information on authenticating users to your applications with Fauna.

<Render file="tutorials-before-you-start" />

## Set up Fauna

### Create your database

To create a database, log in to the [Fauna Dashboard](https://dashboard.fauna.com/) and click **Create Database**. When prompted, select your preferred [Fauna region group](https://docs.fauna.com/fauna/current/manage/region-groups/) and other database settings.

:::note[Fauna Account]
If you do not have a Fauna account, [sign up](https://dashboard.fauna.com/register) and deploy this template using the free tier.
:::

### Create a collection

Create a `Products` collection for the database with the following query. To run the query in the Fauna Dashboard, select your database and click the **Shell** tab:

```js title="Create a new collection"
Collection.create({ name: "Products" });
```

The query outputs a result similar to the following:

```js title="Output"
{
	name: "Products",
	coll: Collection,
	ts: Time("2099-08-28T15:03:53.773Z"),
	history_days: 0,
	indexes: {},
	constraints: []
}
```

### Create a secret key

In production, the Worker will use the Cloudflare Fauna integration to automatically connect to Fauna. The integration creates any credentials needed for authentication with Fauna.

For local development, you must manually create a [Fauna authentication key](https://docs.fauna.com/fauna/current/learn/security/keys/) and pass the key's secret to your Worker as a [development secret](#add-your-fauna-database-key-for-local-development).

To create a Fauna authentication key:

1. In the upper left pane of Fauna Dashboard’s Explorer page, select your database, and click the **Keys** tab.
2. Click **Create Key**.
3. Choose a **Role** of **Server**.
4. Click **Save**.
5. Copy the **Key Secret**. The secret is scoped to the database.
:::caution[Protect your keys]
Server keys can read and write all documents in all collections and can call all [user-defined functions](https://docs.fauna.com/fauna/current/cookbook/data_model/user_defined_functions) (UDFs). Protect server keys and do not commit them to source control repositories.
:::

## Manage your inventory with Workers

### Create a new Worker project

Create a new project by using [C3](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare).

<PackageManagers type="create" pkg="cloudflare@latest" args={"fauna-workers"} />

To continue with this guide:

- For _What would you like to start with?_, select `Framework Starter`.
- For _Which development framework do you want to use?_, select `Hono`.
- For _Do you want to deploy your application?_, select `No`.

Then, move into your newly created directory:

```sh
cd fauna-workers
```

Update the Wrangler file to set the name for the Worker.

<WranglerConfig>

```toml title="wrangler.toml"
name = "fauna-workers"
```

</WranglerConfig>

### Add your Fauna database key for local development

For local development, add a `.dev.vars` file on the project root and add your Fauna key's secret as a [development secret](/workers/configuration/secrets/#local-development-with-secrets):

```plain title=".dev.vars"
DATABASE_KEY=<FAUNA_SECRET>
```

### Add the Fauna integration

Deploy your Worker to Cloudflare to ensure that everything is set up correctly:

```sh
npm run deploy
```

1. Log in to your [Cloudflare dashboard](https://dash.cloudflare.com/).
2. Select the **Integrations** tab and click on the **Fauna** integration.
3. Log in to your Fauna account.
4. Select the Fauna database you created earlier.
5. Select the `server` role as your database role.
6. Enter `DATABASE_KEY` as the **Secret Name**.
7. Select **Finish**.
8. Navigate to the **Settings** tab and select **Variables**. Notice that a new variable `DATABASE_KEY` is added to your Worker.
The integration creates a new Fauna authentication key and stores the key's secret in the Worker's `DATABASE_KEY` secret. The deployed Worker uses this key.

:::note
You can manage the generated Fauna key in the Fauna Dashboard. See the [Cloudflare Fauna integration docs](/workers/databases/native-integrations/fauna).
:::

### Install dependencies

Install [the Fauna JavaScript driver](https://github.com/fauna/fauna-js) in your newly created Worker project.

<Tabs> <TabItem label="npm">

```sh title="Install the Fauna driver"
npm install fauna
```

</TabItem> <TabItem label="yarn">

```sh title="Install the Fauna driver"
yarn add fauna
```

</TabItem> </Tabs>

### Base inventory logic

Replace the contents of your `src/index.ts` file with the skeleton of your API:

```ts title="src/index.ts"
import { Hono } from "hono";
import { Client, fql, ServiceError } from "fauna";

type Bindings = {
	DATABASE_KEY: string;
};

type Variables = {
	faunaClient: Client;
};

type Product = {
	id: string;
	serialNumber: number;
	title: string;
	weightLbs: number;
	quantity: number;
};

const app = new Hono<{ Bindings: Bindings; Variables: Variables }>();

app.use("*", async (c, next) => {
	const faunaClient = new Client({
		secret: c.env.DATABASE_KEY,
	});
	c.set("faunaClient", faunaClient);
	await next();
});

app.get("/", (c) => {
	return c.text("Hello World");
});

export default app;
```

This is custom middleware that initializes the Fauna client and sets the instance with `c.set()` for later use in other handlers:

```js title="Custom middleware for the Fauna Client"
app.use("*", async (c, next) => {
	const faunaClient = new Client({
		secret: c.env.DATABASE_KEY,
	});
	c.set("faunaClient", faunaClient);
	await next();
});
```

You can access the `DATABASE_KEY` environment variable from `c.env.DATABASE_KEY`. Workers run on a [custom JavaScript runtime](/workers/runtime-apis/) instead of Node.js, so you cannot use `process.env` to access your environment variables.
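To make that distinction concrete, here is a minimal sketch (not the tutorial's actual Worker) of configuration arriving through the handler's `env` argument rather than a process-wide global. The `requireSecret` helper is a hypothetical name introduced only for this illustration, and because the handler is a plain function, it can be exercised with a mock `env` outside the Workers runtime:

```javascript
// Minimal sketch: Workers pass configuration via the `env` argument,
// not `process.env`. `requireSecret` is a hypothetical helper, not
// part of any Workers API.
function requireSecret(env, name) {
	const value = env[name];
	if (typeof value !== "string" || value.length === 0) {
		throw new Error(`Missing binding: ${name}`);
	}
	return value;
}

const worker = {
	async fetch(request, env) {
		// Read the secret from `env`; `process.env` is not populated on Workers.
		const secret = requireSecret(env, "DATABASE_KEY");
		return new Response(`secret is ${secret.length} characters long`);
	},
};

// Exercise the handler locally (Node 18+) with a mock environment.
worker
	.fetch(new Request("http://localhost/"), { DATABASE_KEY: "fnAE-mock-secret" })
	.then((res) => res.text())
	.then((body) => console.log(body)); // logs: secret is 16 characters long
```

Failing fast when a binding is absent, as sketched above, surfaces misconfiguration at request time instead of as an opaque driver error deeper in the handler.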
### Create product documents

Add your first Hono handler to the `src/index.ts` file. This route accepts `POST` requests to the `/products` endpoint:

```ts title="Create product documents"
app.post("/products", async (c) => {
	const { serialNumber, title, weightLbs } = await c.req.json<Omit<Product, "id">>();
	const query = fql`Products.create({ serialNumber: ${serialNumber}, title: ${title}, weightLbs: ${weightLbs}, quantity: 0 })`;
	const result = await c.var.faunaClient.query<Product>(query);
	return c.json(result.data);
});
```

:::caution[Handler order]
In Hono, you should place your handler below the custom middleware. This is because middleware and handlers are executed in sequence from top to bottom. If you place the handler first, you cannot retrieve the instance of the Fauna client using `c.var.faunaClient`.
:::

This route runs an FQL query, built with the `fql` function, that creates a new document in the **Products** collection:

```js title="Create query in FQL inside JavaScript"
fql`Products.create({ serialNumber: ${serialNumber}, title: ${title}, weightLbs: ${weightLbs}, quantity: 0 })`;
```

To review what a document looks like, run the following query. In the Fauna dashboard, go to **Explorer** > your region group > your database (for example, `cloudflare_rest_api`) > the **SHELL** window:

```js title="Create query in pure FQL"
Products.create({
	serialNumber: "A48432348",
	title: "Gaming Console",
	weightLbs: 5,
	quantity: 0,
});
```

Fauna returns the created document:

```js title="Newly created document"
{
	id: "<document_id>",
	coll: Products,
	ts: "<timestamp>",
	serialNumber: "A48432348",
	title: "Gaming Console",
	weightLbs: 5,
	quantity: 0
}
```

Examining the route you created, when the query is successful, the newly created document's data is returned in the response body:

```js title="Return the new document data"
return c.json({
	productId: result.data,
});
```

### Error handling

If Fauna returns any error, an exception is raised by the client.
You can catch this exception in `app.onError()`, then retrieve and respond with the result from the instance of `ServiceError`.

```ts title="Handle errors"
app.onError((e, c) => {
	if (e instanceof ServiceError) {
		return c.json(
			{
				status: e.httpStatus,
				code: e.code,
				message: e.message,
			},
			e.httpStatus,
		);
	}
	console.trace(e);
	return c.text("Internal Server Error", 500);
});
```

### Retrieve product documents

Next, create a route that reads a single document from the **Products** collection.

Add the following handler to your `src/index.ts` file. This route accepts `GET` requests at the `/products/:productId` endpoint:

```ts title="Retrieve product documents"
app.get("/products/:productId", async (c) => {
	const productId = c.req.param("productId");
	const query = fql`Products.byId(${productId})`;
	const result = await c.var.faunaClient.query<Product>(query);
	return c.json(result.data);
});
```

The FQL query uses the [`byId()`](https://docs.fauna.com/fauna/current/reference/schema_entities/collection/instance-byid) method to retrieve a full document from the **Products** collection:

```js title="Retrieve a document by ID in FQL inside JavaScript"
fql`Products.byId(productId)`;
```

If the document exists, return it in the response body:

```ts title="Return the document in the response body"
return c.json(result.data);
```

If not, an error is returned.

### Delete product documents

The logic to delete product documents is similar to the logic for retrieving products.
Add the following route to your `src/index.ts` file:

```ts title="Delete product documents"
app.delete("/products/:productId", async (c) => {
	const productId = c.req.param("productId");
	const query = fql`Products.byId(${productId})!.delete()`;
	const result = await c.var.faunaClient.query<Product>(query);
	return c.json(result.data);
});
```

The only difference from the previous route is that you use the [`delete()`](https://docs.fauna.com/fauna/current/reference/auth/key/delete) method, combined with the `byId()` method, to delete a document.

When the delete operation is successful, Fauna returns the deleted document and the route forwards the deleted document in the response's body. If not, an error is returned.

## Test and deploy your Worker

Before deploying your Worker, test it locally by using Wrangler's [`dev`](/workers/wrangler/commands/#dev) command:

<Tabs> <TabItem label="npm">

```sh title="Develop your Worker"
npm run dev
```

</TabItem> <TabItem label="yarn">

```sh title="Develop your Worker"
yarn dev
```

</TabItem> </Tabs>

Once the development server is up and running, start making HTTP requests to your Worker.

First, create a new product:

```sh title="Create a new product"
curl \
--data '{"serialNumber": "H56N33834", "title": "Bluetooth Headphones", "weightLbs": 0.5}' \
--header 'Content-Type: application/json' \
--request POST \
http://127.0.0.1:8787/products
```

You should receive a `200` response similar to the following:

```json title="Create product response"
{
	"productId": "<document_id>"
}
```

:::note
Copy the `productId` value for use in the remaining test queries.
:::

Next, read the document you created:

```sh title="Read a document"
curl \
--header 'Content-Type: application/json' \
--request GET \
http://127.0.0.1:8787/products/<document_id>
```

The response should be the new document serialized to JSON:

```json title="Read product response"
{
	"coll": {
		"name": "Products"
	},
	"id": "<document_id>",
	"ts": {
		"isoString": "<timestamp>"
	},
	"serialNumber": "H56N33834",
	"title": "Bluetooth Headphones",
	"weightLbs": 0.5,
	"quantity": 0
}
```

Finally, deploy your Worker using the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command:

<Tabs> <TabItem label="npm">

```sh title="Deploy your Worker"
npm run deploy
```

</TabItem> <TabItem label="yarn">

```sh title="Deploy your Worker"
yarn deploy
```

</TabItem> </Tabs>

This publishes the Worker to your `*.workers.dev` subdomain.

## Update inventory quantity

As the last step, implement a route to update the quantity of a product in your inventory, which is `0` by default.

This presents a problem: to calculate the new total quantity of a product, you first need to know how many items are currently in your inventory. If you solved this with two separate queries, first reading the quantity and then updating it, the data could change between the two queries.

Add the following route to your `src/index.ts` file.
This route responds to HTTP `PATCH` requests on the `/products/:productId/add-quantity` URL endpoint:

```ts title="Update inventory quantity"
app.patch("/products/:productId/add-quantity", async (c) => {
	const productId = c.req.param("productId");
	const { quantity } = await c.req.json<Pick<Product, "quantity">>();
	const query = fql`Products.byId(${productId}){ quantity : .quantity + ${quantity}}`;
	const result = await c.var.faunaClient.query<Pick<Product, "quantity">>(query);
	return c.json(result.data);
});
```

Examine the FQL query in more detail:

```js title="Update query in FQL inside JavaScript"
fql`Products.byId(${productId}){ quantity : .quantity + ${quantity}}`;
```

:::note[Consistency guarantees in Fauna]
Even if multiple Workers update this quantity from different parts of the world, Fauna guarantees the consistency of the data across all Fauna regions. This article on [consistency](https://fauna.com/blog/consistency-without-clocks-faunadb-transaction-protocol?utm_source=Cloudflare&utm_medium=referral&utm_campaign=Q4_CF_2021) explains how Fauna's distributed protocol works without the need for atomic clocks.
:::

Test your update route:

```sh title="Update product inventory"
curl \
--data '{"quantity": 5}' \
--header 'Content-Type: application/json' \
--request PATCH \
http://127.0.0.1:8787/products/<document_id>/add-quantity
```

Because the query projects only the `quantity` field, the response contains just the updated quantity, increased by five:

```json title="Update product response"
{
	"quantity": 5
}
```

Update your Worker by deploying it to Cloudflare.
<Tabs> <TabItem label="npm">

```sh title="Update your Worker in Cloudflare"
npm run deploy
```

</TabItem> <TabItem label="yarn">

```sh title="Update your Worker in Cloudflare"
yarn deploy
```

</TabItem> </Tabs>

---

# Set up and use a Prisma Postgres database

URL: https://developers.cloudflare.com/workers/tutorials/using-prisma-postgres-with-workers/

[Prisma Postgres](https://www.prisma.io/postgres) is a managed, serverless PostgreSQL database. It supports features like connection pooling, caching, real-time subscriptions, and query optimization recommendations.

In this tutorial, you will learn how to:

- Set up a Cloudflare Workers project with [Prisma ORM](https://www.prisma.io/docs).
- Create a Prisma Postgres instance from the Prisma CLI.
- Model data and run migrations with Prisma Postgres.
- Query the database from Workers.
- Deploy the Worker to Cloudflare.

## Prerequisites

To follow this guide, ensure you have the following:

- Node.js `v18.18` or higher installed.
- An active [Cloudflare account](https://dash.cloudflare.com/).
- A basic familiarity with installing and using command-line interface (CLI) applications.

## 1. Create a new Worker project

Begin by using [C3](/pages/get-started/c3/) to create a Worker project in the command line:

```sh
npm create cloudflare@latest prisma-postgres-worker -- --type=hello-world --ts=true --git=true --deploy=false
```

Then navigate into your project:

```sh
cd ./prisma-postgres-worker
```

Your initial `src/index.ts` file currently contains a simple request handler:

```ts title="src/index.ts"
export default {
	async fetch(request, env, ctx): Promise<Response> {
		return new Response("Hello World!");
	},
} satisfies ExportedHandler<Env>;
```

## 2. Set up Prisma in your project

In this step, you will set up Prisma ORM with a Prisma Postgres database using the CLI. Then you will create and execute helper scripts to create tables in the database and generate a Prisma client to query it.

### 2.1. Install required dependencies

Install the Prisma CLI as a dev dependency:

```sh
npm install prisma --save-dev
```

Install the [Prisma Accelerate client extension](https://www.npmjs.com/package/@prisma/extension-accelerate) as it is required for Prisma Postgres:

```sh
npm install @prisma/extension-accelerate
```

Install the [`dotenv-cli` package](https://www.npmjs.com/package/dotenv-cli) to load environment variables from `.dev.vars`:

```sh
npm install dotenv-cli --save-dev
```

### 2.2. Create a Prisma Postgres database and initialize Prisma

Initialize Prisma in your application:

```sh
npx prisma@latest init --db
```

If you do not have a [Prisma Data Platform](https://console.prisma.io/) account yet, or if you are not logged in, the command will prompt you to log in using one of the available authentication providers. A browser window will open so you can log in or create an account. Return to the CLI after you have completed this step.

Once logged in (or if you were already logged in), the CLI will prompt you to select a project name and a database region.

Once the command has terminated, it will have created:

- A project in your [Platform Console](https://console.prisma.io/) containing a Prisma Postgres database instance.
- A `prisma` folder containing `schema.prisma`, where you will define your database schema.
- An `.env` file in the project root, which will contain the Prisma Postgres database url `DATABASE_URL=<your-prisma-postgres-database-url>`.

Note that Cloudflare Workers do not support `.env` files. You will use a file called `.dev.vars` instead of the `.env` file that was just created.

### 2.3. Prepare environment variables

Rename the `.env` file in the root of your application to `.dev.vars`:

```sh
mv .env .dev.vars
```

### 2.4. Apply database schema changes

Open the `schema.prisma` file in the `prisma` folder and add the following `User` model to your schema:

```prisma title="prisma/schema.prisma"
generator client {
	provider = "prisma-client-js"
}

datasource db {
	provider = "postgresql"
	url      = env("DATABASE_URL")
}

model User {
	id    Int    @id @default(autoincrement())
	email String
	name  String
}
```

Next, add the following helper scripts to the `scripts` section of your `package.json`:

```json title="package.json"
"scripts": {
	"migrate": "dotenv -e .dev.vars -- npx prisma migrate dev",
	"generate": "dotenv -e .dev.vars -- npx prisma generate --no-engine",
	"studio": "dotenv -e .dev.vars -- npx prisma studio",
	// Additional worker scripts...
}
```

Run the migration script to apply changes to the database:

```sh
npm run migrate
```

When prompted, provide a name for the migration (for example, `init`).

After these steps are complete, Prisma ORM is fully set up and connected to your Prisma Postgres database.

## 3. Develop the application

Modify the `src/index.ts` file and replace its contents with the following code:

```ts title="src/index.ts"
import { PrismaClient } from "@prisma/client/edge";
import { withAccelerate } from "@prisma/extension-accelerate";

export interface Env {
	DATABASE_URL: string;
}

export default {
	async fetch(request, env, ctx): Promise<Response> {
		const path = new URL(request.url).pathname;
		if (path === "/favicon.ico")
			return new Response("Resource not found", {
				status: 404,
				headers: {
					"Content-Type": "text/plain",
				},
			});

		const prisma = new PrismaClient({
			datasourceUrl: env.DATABASE_URL,
		}).$extends(withAccelerate());

		const user = await prisma.user.create({
			data: {
				email: `Jon${Math.ceil(Math.random() * 1000)}@gmail.com`,
				name: "Jon Doe",
			},
		});

		const userCount = await prisma.user.count();

		return new Response(`\
Created new user: ${user.name} (${user.email}).
Number of users in the database: ${userCount}.
`);
	},
} satisfies ExportedHandler<Env>;
```

Run the development server:

```sh
npm run dev
```

Visit [`http://localhost:8787`](http://localhost:8787) to see your app display output similar to the following:

```sh
Created new user: Jon Doe (Jon459@gmail.com).
Number of users in the database: 1.
```

Every time you refresh the page, a new user is created. The number displayed will increment by `1` with each refresh as it returns the total number of users in your database.

## 4. Deploy the application to Cloudflare

When the application is deployed to Cloudflare, it needs access to the `DATABASE_URL` environment variable that is defined locally in `.dev.vars`. You can use the [`npx wrangler secret put`](/workers/configuration/secrets/#adding-secrets-to-your-project) command to upload the `DATABASE_URL` to the deployment environment:

```sh
npx wrangler secret put DATABASE_URL
```

When prompted, paste the `DATABASE_URL` value (from `.dev.vars`).

If you are logged in via the Wrangler CLI, you will see a prompt asking if you'd like to create a new Worker. Confirm by choosing "yes":

```sh
✔ There doesn't seem to be a Worker called "prisma-postgres-worker". Do you want to create a new Worker with that name and add secrets to it? … yes
```

Then execute the following command to deploy your project to Cloudflare Workers:

```sh
npm run deploy
```

The `wrangler` CLI will bundle and upload your application.

If you are not already logged in, the `wrangler` CLI will open a browser window prompting you to log in to the [Cloudflare dashboard](https://dash.cloudflare.com/).

:::note
If you belong to multiple accounts, select the account where you want to deploy the project.
:::

Once the deployment completes, verify the deployment by visiting the live URL provided in the deployment output, such as `https://{PROJECT_NAME}.workers.dev`. If you encounter any issues, ensure the secrets were added correctly and check the deployment logs for errors.
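Both locally and in production, the Worker reads the same `DATABASE_URL` from its environment; only the delivery mechanism differs (`.dev.vars` locally, `wrangler secret put` when deployed). The `.dev.vars` format itself is plain `KEY=VALUE` lines. The toy parser below is only a sketch of that format introduced for illustration; it is not Wrangler's actual loader, which also handles details such as quoting:

```javascript
// Toy sketch of the KEY=VALUE format used by `.dev.vars` files.
// NOT Wrangler's real loader: quoting, escapes, and multiline
// values are ignored here.
function parseDevVars(text) {
	const vars = {};
	for (const line of text.split("\n")) {
		const trimmed = line.trim();
		// Skip blank lines and comments.
		if (trimmed === "" || trimmed.startsWith("#")) continue;
		const eq = trimmed.indexOf("=");
		if (eq === -1) continue; // Not a KEY=VALUE line.
		vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
	}
	return vars;
}

// Example matching this tutorial's single required variable.
const example = [
	"# local development only",
	"DATABASE_URL=prisma+postgres://accelerate.prisma-data.net/?api_key=example",
].join("\n");

console.log(parseDevVars(example).DATABASE_URL);
// logs: prisma+postgres://accelerate.prisma-data.net/?api_key=example
```

Note that only the first `=` on a line separates the key from the value, so values that themselves contain `=` (like the `api_key` query parameter above) survive intact.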
## Next steps

Congratulations on building and deploying a simple application with Prisma Postgres and Cloudflare Workers!

To enhance your application further:

- Add [caching](https://www.prisma.io/docs/postgres/caching) to your queries.
- Explore the [Prisma Postgres documentation](https://www.prisma.io/docs/postgres/getting-started).

To see how to build a real-time application with Cloudflare Workers and Prisma Postgres, read [this guide](https://www.prisma.io/docs/guides/prisma-postgres-realtime-on-cloudflare).

---

# Securely access and upload assets with Cloudflare R2

URL: https://developers.cloudflare.com/workers/tutorials/upload-assets-with-r2/

import { Render, PackageManagers, WranglerConfig } from "~/components";

This tutorial explains how to create a TypeScript-based Cloudflare Workers project that can securely access files from and upload files to a [Cloudflare R2](/r2/) bucket. Cloudflare R2 allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [`npm`](https://docs.npmjs.com/getting-started).
3. Install [`Node.js`](https://nodejs.org/en/). Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.

## Create a Worker application

First, use the [`create-cloudflare` CLI](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker.
To do this, open a terminal window and run the following command:

<PackageManagers type="create" pkg="cloudflare@latest" args={"upload-r2-assets"} />

<Render file="c3-post-run-steps" product="workers" params={{ category: "hello-world", type: "Hello World Worker", lang: "TypeScript", }} />

Move into your newly created directory:

```sh
cd upload-r2-assets
```

## Create an R2 bucket

Before you integrate R2 bucket access into your Worker application, an R2 bucket must be created:

```sh
npx wrangler r2 bucket create <YOUR_BUCKET_NAME>
```

Replace `<YOUR_BUCKET_NAME>` with the name you want to assign to your bucket.

List your account's R2 buckets to verify that a new bucket has been added:

```sh
npx wrangler r2 bucket list
```

## Configure access to an R2 bucket

After your new R2 bucket is ready, use it inside your Worker project by modifying the [Wrangler configuration file](/workers/wrangler/configuration/) to include an R2 bucket [binding](/workers/runtime-apis/bindings/). Add the following R2 bucket binding to your Wrangler file:

<WranglerConfig>

```toml
[[r2_buckets]]
binding = 'MY_BUCKET'
bucket_name = '<YOUR_BUCKET_NAME>'
```

</WranglerConfig>

Give your R2 bucket binding a name; this example uses `MY_BUCKET`. Replace `<YOUR_BUCKET_NAME>` with the name of the R2 bucket you created earlier.

Your Worker application can now access your R2 bucket using the `MY_BUCKET` variable. You can now perform CRUD (Create, Read, Update, Delete) operations on the contents of the bucket.

## Fetch from an R2 bucket

After setting up an R2 bucket binding, you will implement the functionalities for the Worker to interact with the R2 bucket, such as fetching files from the bucket and uploading files to the bucket.

To fetch files from the R2 bucket, use the `BINDING.get` function. In the below example, the R2 bucket binding is called `MY_BUCKET`. Using `.get(key)`, you can retrieve an asset based on the URL pathname as the key.
In this example, the URL pathname is `/image.png`, and the asset key is `image.png`.

```ts
interface Env {
	MY_BUCKET: R2Bucket;
}

export default {
	async fetch(request, env): Promise<Response> {
		// For example, the request URL my-worker.account.workers.dev/image.png
		const url = new URL(request.url);
		const key = url.pathname.slice(1); // Retrieve the key "image.png"
		const object = await env.MY_BUCKET.get(key);
		if (object === null) {
			return new Response("Object Not Found", { status: 404 });
		}
		const headers = new Headers();
		object.writeHttpMetadata(headers);
		headers.set("etag", object.httpEtag);
		return new Response(object.body, {
			headers,
		});
	},
} satisfies ExportedHandler<Env>;
```

The code written above fetches and returns data from the R2 bucket when a `GET` request is made to the Worker application using a specific URL path.

## Upload securely to an R2 bucket

Next, you will add the ability to upload to your R2 bucket using authentication. To securely authenticate your upload requests, use [Wrangler's secret capability](/workers/wrangler/commands/#secret). Wrangler was installed when you ran the `create cloudflare@latest` command.

Create a secret value of your choice, such as a random string or password. Using the Wrangler CLI, add the secret to your project as `AUTH_SECRET`:

```sh
npx wrangler secret put AUTH_SECRET
```

Now, add a new code path that handles a `PUT` HTTP request. This new code will check that the previously uploaded secret is correctly used for authentication, and then upload to R2 using `MY_BUCKET.put(key, data)`:

```ts
interface Env {
	MY_BUCKET: R2Bucket;
	AUTH_SECRET: string;
}

export default {
	async fetch(request, env): Promise<Response> {
		if (request.method === "PUT") {
			// Note that you could require authentication for all requests
			// by moving this code to the top of the fetch function.
			const auth = request.headers.get("Authorization");
			const expectedAuth = `Bearer ${env.AUTH_SECRET}`;

			if (!auth || auth !== expectedAuth) {
				return new Response("Unauthorized", { status: 401 });
			}

			const url = new URL(request.url);
			const key = url.pathname.slice(1);

			await env.MY_BUCKET.put(key, request.body);
			return new Response(`Object ${key} uploaded successfully!`);
		}

		// include the previous code here...
	},
} satisfies ExportedHandler<Env>;
```

This approach ensures that only clients who provide a valid bearer token, via an `Authorization` header matching the `AUTH_SECRET` value, will be permitted to upload to the R2 bucket. If you used a different secret name than `AUTH_SECRET`, replace it in the code above.

## Deploy your Worker application

After completing your Cloudflare Worker project, deploy it to Cloudflare. Make sure you are in your Worker application directory that you created for this tutorial, then run:

```sh
npx wrangler deploy
```

Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

You have successfully created a Cloudflare Worker that allows you to interact with an R2 bucket to accomplish tasks such as uploading and downloading files. You can now use this as a starting point for your own projects.

## Next steps

To build more with R2 and Workers, refer to [Tutorials](/workers/tutorials/) and the [R2 documentation](/r2/).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.

---

# Use Workers KV directly from Rust

URL: https://developers.cloudflare.com/workers/tutorials/workers-kv-from-rust/

import { Render, WranglerConfig } from "~/components";

This tutorial will teach you how to read and write to KV directly from Rust using [workers-rs](https://github.com/cloudflare/workers-rs).
<Render file="tutorials-before-you-start" />

## Prerequisites

To complete this tutorial, you will need:

- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
- [Wrangler](/workers/wrangler/) CLI.
- The [Rust](https://www.rust-lang.org/tools/install) toolchain.
- The `cargo-generate` subcommand, which you can install by running:

```sh
cargo install cargo-generate
```

## 1. Create your Worker project in Rust

Open a terminal window, and run the following command to generate a Worker project template in Rust:

```sh
cargo generate cloudflare/workers-rs
```

Then select the `template/hello-world-http` template, give your project a descriptive name, and press enter. A new project should be created in your directory. Open the project in your editor and run `npx wrangler dev` to compile and run your project.

In this tutorial, you will use Workers KV from Rust to build an app to store and retrieve cities by a given country name.

## 2. Create a KV namespace

In the terminal, use Wrangler to create a KV namespace for `cities`. This generates a configuration to be added to the project:

```sh
npx wrangler kv namespace create cities
```

To add this configuration to your project, open the Wrangler file and create an entry for `kv_namespaces` above the build command:

<WranglerConfig>

```toml
kv_namespaces = [
	{ binding = "cities", id = "e29b263ab50e42ce9b637fa8370175e8" }
]

# build command...
```

</WranglerConfig>

With this configured, you can access the KV namespace with the binding `"cities"` from Rust.

## 3. Write data to KV

For this app, you will create two routes: A `POST` route to receive and store the city in KV, and a `GET` route to retrieve the city of a given country. For example, a `POST` request to `/France` with a body of `{"city": "Paris"}` should create an entry of Paris as a city in France. A `GET` request to `/France` should retrieve from KV and respond with Paris.

Install [Serde](https://serde.rs/) as a project dependency to handle JSON: `cargo add serde`.
Then create an app router and a struct for `Country` in `src/lib.rs`: ```rust null {1,6,8,9,10,11,15,17} use serde::{Deserialize, Serialize}; use worker::*; #[event(fetch)] async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> { let router = Router::new(); #[derive(Serialize, Deserialize, Debug)] struct Country { city: String, } router // TODO: .post_async("/:country", |_, _| async move { Response::empty() }) // TODO: .get_async("/:country", |_, _| async move { Response::empty() }) .run(req, env) .await } ``` For the post handler, you will retrieve the country name from the path and the city name from the request body. Then, you will save this in KV with the country as key and the city as value. Finally, the app will respond with the city name: ```rust .post_async("/:country", |mut req, ctx| async move { let country = ctx.param("country").unwrap(); let city = match req.json::<Country>().await { Ok(c) => c.city, Err(_) => String::from(""), }; if city.is_empty() { return Response::error("Bad Request", 400); }; return match ctx.kv("cities")?.put(country, &city)?.execute().await { Ok(_) => Response::ok(city), Err(_) => Response::error("Bad Request", 400), }; }) ``` Save the file and make a `POST` request to test this endpoint: ```sh curl --json '{"city": "Paris"}' http://localhost:8787/France ``` ## 4. Read data from KV To retrieve cities stored in KV, write a `GET` route that pulls the country name from the path and searches KV. You also need some error handling if the country is not found: ```rust .get_async("/:country", |_req, ctx| async move { if let Some(country) = ctx.param("country") { return match ctx.kv("cities")?.get(country).text().await? { Some(city) => Response::ok(city), None => Response::error("Country not found", 404), }; } Response::error("Bad Request", 400) }) ``` Save and make a curl request to test the endpoint: ```sh curl http://localhost:8787/France ``` ## 5. 
Deploy your project The source code for the completed app should include the following: ```rust use serde::{Deserialize, Serialize}; use worker::*; #[event(fetch)] async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> { let router = Router::new(); #[derive(Serialize, Deserialize, Debug)] struct Country { city: String, } router .post_async("/:country", |mut req, ctx| async move { let country = ctx.param("country").unwrap(); let city = match req.json::<Country>().await { Ok(c) => c.city, Err(_) => String::from(""), }; if city.is_empty() { return Response::error("Bad Request", 400); }; return match ctx.kv("cities")?.put(country, &city)?.execute().await { Ok(_) => Response::ok(city), Err(_) => Response::error("Bad Request", 400), }; }) .get_async("/:country", |_req, ctx| async move { if let Some(country) = ctx.param("country") { return match ctx.kv("cities")?.get(country).text().await? { Some(city) => Response::ok(city), None => Response::error("Country not found", 404), }; } Response::error("Bad Request", 400) }) .run(req, env) .await } ``` To deploy your Worker, run the following command: ```sh npx wrangler deploy ``` ## Related resources - [Rust support in Workers](/workers/languages/rust/). - [Using KV in Workers](/kv/get-started/). --- # Migrations URL: https://developers.cloudflare.com/workers/wrangler/migration/ import { DirectoryListing } from "~/components"; <DirectoryListing /> --- # Migrate from Wrangler v2 to v3 URL: https://developers.cloudflare.com/workers/wrangler/migration/update-v2-to-v3/ There are no special instructions for migrating from Wrangler v2 to v3. You should be able to update Wrangler by following the instructions in [Install/Update Wrangler](/workers/wrangler/install-and-update/#update-wrangler). You should experience no disruption to your workflow. :::caution If you tried to update to Wrangler v3 prior to v3.3, you may have experienced some compatibility issues with older operating systems. 
Please try again with the latest v3 where those have been resolved. ::: ## Deprecations Refer to [Deprecations](/workers/wrangler/deprecations/#wrangler-v3) for more details on what is no longer supported in v3. ## Additional assistance If you do have an issue or need further assistance, [file an issue](https://github.com/cloudflare/workers-sdk/issues/new/choose) in the `workers-sdk` repo on GitHub. --- # Use fetch() handler URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/examples/fetch/ A very common use case is to provide the LLM with the ability to perform API calls via function calling. In this example, the LLM will retrieve the weather forecast for the next 5 days. To do so, a `getWeather` function is defined and passed to the LLM as a tool. The `getWeather` function extracts the user's location from the request, calls the external weather API via the Workers [`Fetch API`](/workers/runtime-apis/fetch/), and returns the result. ```ts title="Embedded function calling example with fetch()" import { runWithTools } from '@cloudflare/ai-utils'; type Env = { AI: Ai; }; export default { async fetch(request, env, ctx) { // Define function const getWeather = async (args: { numDays: number }) => { const { numDays } = args; // Location is extracted from request based on // https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties const lat = request.cf?.latitude const long = request.cf?.longitude // Interpolate values for external API call const response = await fetch( `https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${long}&daily=temperature_2m_max,precipitation_sum&timezone=GMT&forecast_days=${numDays}` ); return response.text(); }; // Run AI inference with function calling const response = await runWithTools( env.AI, // Model with function calling support '@hf/nousresearch/hermes-2-pro-mistral-7b', { // Messages messages: [ { role: 'user', content: 'What is the weather like for the next 5 days?
Respond as text', }, ], // Definition of available tools the AI model can leverage tools: [ { name: 'getWeather', description: 'Get the weather for the next [numDays] days', parameters: { type: 'object', properties: { numDays: { type: 'number', description: 'number of days for the weather forecast' }, }, required: ['numDays'], }, // reference to previously defined function function: getWeather, }, ], } ); return new Response(JSON.stringify(response)); }, } satisfies ExportedHandler<Env>; ``` --- # Examples URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/examples/ import { DirectoryListing } from "~/components"; <DirectoryListing /> --- # Use KV API URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/examples/kv/ Interacting with persistent storage to retrieve or store information enables powerful use cases. In this example, we show how embedded function calling can interact with other resources on the Cloudflare Developer Platform with a few lines of code. ## Prerequisites For this example to work, you need to provision a [KV](/kv/) namespace first. To do so, follow the [KV - Get started](/kv/get-started/) guide. Importantly, your Wrangler file must be updated to include the `KV` binding definition to your respective namespace.
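For reference, the `kv_namespaces` entry in the Wrangler file might look like the following sketch. The `id` value is a placeholder; use the ID printed by `npx wrangler kv namespace create`, and note that the `binding` name must match the `KV` property on the `Env` type used by the Worker:

```toml
kv_namespaces = [
  { binding = "KV", id = "<YOUR_NAMESPACE_ID>" }
]
```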
## Worker code ```ts title="Embedded function calling example with KV API" import { runWithTools } from "@cloudflare/ai-utils"; type Env = { AI: Ai; KV: KVNamespace; }; export default { async fetch(request, env, ctx) { // Define function const updateKvValue = async ({ key, value, }: { key: string; value: string; }) => { await env.KV.put(key, value); return `Successfully updated key-value pair in database: ${key}=${value}`; }; // Run AI inference with function calling const response = await runWithTools( env.AI, "@hf/nousresearch/hermes-2-pro-mistral-7b", { messages: [ { role: "system", content: "Put user given values in KV" }, { role: "user", content: "Set the value of banana to yellow." }, ], tools: [ { name: "KV update", description: "Update a key-value pair in the database", parameters: { type: "object", properties: { key: { type: "string", description: "The key to update", }, value: { type: "string", description: "The value to update", }, }, required: ["key", "value"], }, function: updateKvValue, }, ], }, ); return new Response(JSON.stringify(response)); }, } satisfies ExportedHandler<Env>; ``` ## Verify results To verify the results, run the following command: ```sh npx wrangler kv key get banana --binding KV --local ``` --- # Tools based on OpenAPI Spec URL: https://developers.cloudflare.com/workers-ai/function-calling/embedded/examples/openapi/ Oftentimes APIs are defined and documented via [OpenAPI specification](https://swagger.io/specification/). The Cloudflare `ai-utils` package's `createToolsFromOpenAPISpec` function creates tools from the OpenAPI spec, which the LLM can then leverage to fulfill the prompt. In this example, the LLM will describe a GitHub user, based on GitHub's API and its OpenAPI spec.
```ts title="Embedded function calling example from OpenAPI Spec" import { createToolsFromOpenAPISpec, runWithTools } from '@cloudflare/ai-utils'; type Env = { AI: Ai; }; const APP_NAME = 'cf-fn-calling-example-app'; export default { async fetch(request, env, ctx) { const toolsFromOpenAPISpec = [ // You can pass the OpenAPI spec link or contents directly ...(await createToolsFromOpenAPISpec( 'https://gist.githubusercontent.com/mchenco/fd8f20c8f06d50af40b94b0671273dc1/raw/f9d4b5cd5944cc32d6b34cad0406d96fd3acaca6/partial_api.github.com.json', { overrides: [ { matcher: ({ url }) => { return url.hostname === 'api.github.com'; }, // for all requests on *.github.com, we'll need to add a User-Agent. values: { headers: { 'User-Agent': APP_NAME, }, }, }, ], } )), ]; const response = await runWithTools( env.AI, '@hf/nousresearch/hermes-2-pro-mistral-7b', { messages: [ { role: 'user', content: 'Who is cloudflare on Github and how many repos does the organization have?', }, ], tools: toolsFromOpenAPISpec, } ); return new Response(JSON.stringify(response)); }, } satisfies ExportedHandler<Env>; ``` --- # GitHub integration URL: https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/ Cloudflare supports connecting your GitHub repository to your Cloudflare Worker, and will automatically deploy your code every time you push a change to a selected branch. ## Features Beyond automatic builds and deployments, the Cloudflare GitHub integration lets you monitor builds directly in GitHub, keeping you informed without leaving your workflow. :::note[Upcoming features] In Beta, Workers Builds supports automatic builds and deployments only from a single selected branch (e.g. `main`). Support for building and deploying preview versions from multiple branches will be added soon, along with the ability to generate [Preview URLs](/workers/configuration/previews/) for pull requests (PRs). 
::: ### Check run If you have one or multiple Workers connected to a repository (i.e. a [monorepo](/workers/ci-cd/builds/advanced-setups/#monorepos)), you can check on the status of each build within GitHub via [GitHub check runs](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks#checks). You can see the checks by selecting the status icon next to a commit within your GitHub repository. In the example below, you can select the green check mark to see the results of the check run. Check runs will appear like the following in your repository. You can select **Details** to view the build (Build ID) and project (Script) associated with each check. Note that when using [build watch paths](/workers/ci-cd/builds/build-watch-paths/), only projects that trigger a build will generate a check run. ## Manage access You can deploy projects to Cloudflare Workers from your company or side project on GitHub using the [Cloudflare Workers & Pages GitHub App](https://github.com/apps/cloudflare-workers-and-pages). ### Organizational access When authorizing Cloudflare Workers to access a GitHub account, you can specify access to your individual account or an organization that you belong to on GitHub. To add the Cloudflare Workers installation to an organization, your user account must be an owner or have the appropriate role within the organization (i.e. the GitHub Apps Manager role). More information on these roles can be found in [GitHub's documentation](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#github-app-managers). :::caution[GitHub security consideration] A GitHub account should only point to one Cloudflare account. If you are setting up Cloudflare with GitHub for your organization, Cloudflare recommends that you limit the scope of the application to only the repositories you intend to build with Pages.
To modify these permissions, go to the [Applications page](https://github.com/settings/installations) on GitHub and select **Switch settings context** to access your GitHub organization settings. Then, select **Cloudflare Workers & Pages** > For **Repository access**, select **Only select repositories** > select your repositories. ::: ### Remove access You can remove Cloudflare Workers' access to your GitHub repository or account by going to the [Applications page](https://github.com/settings/installations) on GitHub (if you are in an organization, select Switch settings context to access your GitHub organization settings). The GitHub App is named Cloudflare Workers and Pages, and it is shared between Workers and Pages projects. #### Remove Cloudflare access to a GitHub repository To remove access to an individual GitHub repository, you can navigate to **Repository access**. Select the **Only select repositories** option, and configure which repositories you would like Cloudflare to have access to. #### Remove Cloudflare access to the entire GitHub account To remove Cloudflare Workers and Pages access to your entire Git account, you can navigate to **Uninstall "Cloudflare Workers and Pages"**, then select **Uninstall**. Removing access to the Cloudflare Workers and Pages app will revoke Cloudflare's access to _all repositories_ from that GitHub account. If you want to only disable automatic builds and deployments, follow the [Disable Build](/workers/ci-cd/builds/#disconnecting-builds) instructions. Note that removing access to GitHub will disable new builds for Workers and Pages projects that were connected to those repositories, though your previous deployments will continue to be hosted by Cloudflare Workers. ### Reinstall the Cloudflare GitHub App When encountering Git integration related issues, one potential troubleshooting step is attempting to uninstall and reinstall the GitHub or GitLab application associated with the Cloudflare Pages installation.
The process for each Git provider is provided below. 1. Go to the installation settings page on GitHub: - Navigate to **Settings > Builds** for the Workers or Pages project and select **Manage** under Git Repository. - Alternatively, visit these links to find the Cloudflare Workers and Pages installation and select **Configure**: | | | | ---------------- | ---------------------------------------------------------------------------------- | | **Individual** | `https://github.com/settings/installations` | | **Organization** | `https://github.com/organizations/<YOUR_ORGANIZATION_NAME>/settings/installations` | 2. In the Cloudflare Workers and Pages GitHub App settings page, navigate to **Uninstall "Cloudflare Workers and Pages"** and select **Uninstall**. 3. Go back to the [**Workers & Pages** overview](https://dash.cloudflare.com) page. Select **Create application** > **Pages** > **Connect to Git**. 4. Select the **+ Add account** button, select the GitHub account you want to add, and then select **Install & Authorize**. 5. You should be redirected to the create project page with your GitHub account or organization in the account list. 6. Attempt to make a new deployment with your project which was previously broken. --- # GitLab integration URL: https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/ Cloudflare supports connecting your GitLab repository to your Cloudflare Worker, and will automatically deploy your code every time you push a change to a selected branch. ## Features Beyond automatic builds and deployments, the Cloudflare GitLab integration lets you monitor builds directly in GitLab, keeping you informed without leaving your workflow. :::note[Upcoming features] In Beta, Workers Builds supports automatic builds and deployments only from a single selected branch (e.g. `main`). 
Support for building and deploying preview versions from multiple branches will be added soon, along with the ability to generate [Preview URLs](/workers/configuration/previews/) for pull requests (PRs). ::: ### Commit Status If you have one or multiple Workers connected to a repository (i.e. a [monorepo](/workers/ci-cd/builds/advanced-setups/#monorepos)), you can check on the status of each build within GitLab via [GitLab commit status](https://docs.gitlab.com/ee/user/project/merge_requests/status_checks.html). You can see the statuses by selecting the status icon next to a commit or by going to **Build** > **Pipelines** within your GitLab repository. In the example below, you can select the green check mark to see the results of the check run. Check runs will appear like the following in your repository. You can select one of the statuses to view the build on the Cloudflare Dashboard. Note that when using [build watch paths](/workers/ci-cd/builds/build-watch-paths/), only projects that trigger a build will generate a commit status. ## Manage access You can deploy projects to Cloudflare Workers from your company or side project on GitLab using the Cloudflare Pages app. ### Organizational access When you authorize Cloudflare Workers to access your GitLab account, you automatically give Cloudflare Workers access to organizations, groups, and namespaces accessed by your GitLab account. Managing access to these organizations and groups is handled by GitLab. ### Remove access You can remove Cloudflare Workers' access to your GitLab account by navigating to the [Authorized Applications page](https://gitlab.com/-/profile/applications) on GitLab. Find the application called Cloudflare Pages and select the **Revoke** button to revoke access.
Note that the GitLab application Cloudflare Workers is shared between Workers and Pages projects, and removing access to GitLab will disable new builds for Workers and Pages, though your previous deployments will continue to be hosted by Cloudflare Workers. ### Reinstall the Cloudflare GitLab App 1. Go to your application settings page on GitLab: [https://gitlab.com/-/profile/applications](https://gitlab.com/-/profile/applications) 2. Select the **Revoke** button on your Cloudflare Workers installation if it exists. 3. Go back to the [**Workers & Pages** overview](https://dash.cloudflare.com) page. Select **Create application** > **Pages** > **Connect to Git**. 4. Select the **+ Add account** button, select the GitLab account you want to add, and then select **Install & Authorize**. 5. You should be redirected to the create project page with your GitLab account or organization in the account list. 6. Attempt to make a new deployment with your project that was previously broken. --- # Git integration URL: https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/ Cloudflare supports connecting your [GitHub](/workers/ci-cd/builds/git-integration/github-integration/) and [GitLab](/workers/ci-cd/builds/git-integration/gitlab-integration/) repository to your Cloudflare Worker, and will automatically deploy your code every time you push a change to a selected branch. Adding a Git integration also lets you monitor build statuses directly in your Git provider using [check runs](/workers/ci-cd/builds/git-integration/github-integration/#check-run) or [commit statuses](/workers/ci-cd/builds/git-integration/gitlab-integration/#commit-status), so you can manage deployments without leaving your workflow. ## Supported Git Providers Cloudflare supports connecting Cloudflare Workers to your GitHub and GitLab repositories. Workers Builds does not currently support connecting self-hosted instances of GitHub or GitLab. If you are using a different Git provider (e.g.
Bitbucket), you can use an [external CI/CD provider (e.g. GitHub Actions)](/workers/ci-cd/external-cicd/) and deploy using [Wrangler CLI](/workers/wrangler/commands/#deploy). ## Add a Git Integration Workers Builds provides direct integration with GitHub and GitLab accounts, including both individual and organization accounts, that are _not_ self-hosted. If you do not have a Git account linked to your Cloudflare account, you will be prompted to set up an installation to GitHub or GitLab when [connecting a repository](/workers/ci-cd/builds/#get-started) for the first time, or when adding a new Git account. Follow the prompts and authorize the Cloudflare Git integration.  You can check the following pages to see if your Git integration has been installed: - [GitHub Applications page](https://github.com/settings/installations) (if you are in an organization, select **Switch settings context** to access your GitHub organization settings) - [GitLab Authorized Applications page](https://gitlab.com/-/profile/applications) For details on providing access to organization accounts, see [GitHub organizational access](/workers/ci-cd/builds/git-integration/github-integration/#organizational-access) and [GitLab organizational access](/workers/ci-cd/builds/git-integration/gitlab-integration/#organizational-access). ## Manage a Git Integration You can manage the Git installation associated with your repository connection by navigating to the Worker, then going to **Settings** > **Builds** and selecting **Manage** under **Git Repository**. This can be useful for managing repository access or troubleshooting installation issues by reinstalling. For more details, see the [GitHub](/workers/ci-cd/builds/git-integration/github-integration) and [GitLab](/workers/ci-cd/builds/git-integration/gitlab-integration) guides for how to manage your installation. 
--- # FastAPI URL: https://developers.cloudflare.com/workers/languages/python/packages/fastapi/ import { Render } from "~/components" The FastAPI package is supported in Python Workers. FastAPI applications use a protocol called the [Asynchronous Server Gateway Interface (ASGI)](https://asgi.readthedocs.io/en/latest/). This means that FastAPI never reads from or writes to a socket itself. An ASGI application expects to be hooked up to an ASGI server, typically [uvicorn](https://www.uvicorn.org/). The ASGI server handles all of the raw sockets on the application’s behalf. The Workers runtime provides [an ASGI server](https://github.com/cloudflare/workerd/blob/main/src/pyodide/internal/asgi.py) directly to your Python Worker, which lets you use FastAPI in Python Workers. ## Get Started <Render file="python-workers-beta-packages" product="workers" /> Clone the `cloudflare/python-workers-examples` repository and run the FastAPI example: ```bash git clone https://github.com/cloudflare/python-workers-examples cd python-workers-examples/03-fastapi npx wrangler@latest dev ``` ### Example code ```python from fastapi import FastAPI, Request from pydantic import BaseModel async def on_fetch(request, env): import asgi return await asgi.fetch(app, request, env) app = FastAPI() @app.get("/") async def root(): return {"message": "Hello, World!"} @app.get("/env") async def get_env(req: Request): env = req.scope["env"] return {"message": "Here is an example of getting an environment variable: " + env.MESSAGE} class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None @app.post("/items/") async def create_item(item: Item): return item @app.put("/items/{item_id}") async def update_item(item_id: int, item: Item, q: str | None = None): result = {"item_id": item_id, **item.dict()} if q: result.update({"q": q}) return result @app.get("/items/{item_id}") async def read_item(item_id: int): return {"item_id": item_id} ``` --- # Packages URL:
https://developers.cloudflare.com/workers/languages/python/packages/ import { Render } from "~/components"; <Render file="python-workers-beta-packages" product="workers" /> To import a Python package, add the package name to the `requirements.txt` file within the same directory as your [Wrangler configuration file](/workers/wrangler/configuration/). For example, if your Worker depends on [FastAPI](https://fastapi.tiangolo.com/), you would add the following: ``` fastapi ``` ## Package versioning In the example above, you likely noticed that there is no explicit version of the Python package declared in `requirements.txt`. In Workers, Python package versions are set via [Compatibility Dates](/workers/configuration/compatibility-dates/) and [Compatibility Flags](/workers/configuration/compatibility-flags/). Given a particular compatibility date, a specific version of the [Pyodide Python runtime](https://pyodide.org/en/stable/project/changelog.html) is provided to your Worker, providing a specific set of Python packages pinned to specific versions. As new versions of Pyodide and additional Python packages become available in Workers, we will publish compatibility flags and their associated compatibility dates here on this page. 
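As an illustrative sketch, both the compatibility date and the `python_workers` compatibility flag (currently required for Python Workers) are set in your Wrangler configuration; the date below is an example value, not a recommendation:

```toml
compatibility_date = "2024-03-20"
compatibility_flags = ["python_workers"]
```

Moving the compatibility date forward is what upgrades the pinned Pyodide runtime, and with it the versions of the packages available to your Worker.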
## Supported Packages A subset of the [Python packages that Pyodide supports](https://pyodide.org/en/latest/usage/packages-in-pyodide.html) are provided directly by the Workers runtime: - aiohttp: 3.9.3 - aiohttp-tests: 3.9.3 - aiosignal: 1.3.1 - annotated-types: 0.6.0 - annotated-types-tests: 0.6.0 - anyio: 4.2.0 - async-timeout: 4.0.3 - attrs: 23.2.0 - certifi: 2024.2.2 - charset-normalizer: 3.3.2 - distro: 1.9.0 - [fastapi](/workers/languages/python/packages/fastapi): 0.110.0 - frozenlist: 1.4.1 - h11: 0.14.0 - h11-tests: 0.14.0 - hashlib: 1.0.0 - httpcore: 1.0.4 - httpx: 0.27.0 - idna: 3.6 - jsonpatch: 1.33 - jsonpointer: 2.4 - langchain: 0.1.8 - langchain-core: 0.1.25 - langchain-openai: 0.0.6 - langsmith: 0.1.5 - lzma: 1.0.0 - micropip: 0.6.0 - multidict: 6.0.5 - numpy: 1.26.4 - numpy-tests: 1.26.4 - openai: 1.12.0 - openssl: 1.1.1n - packaging: 23.2 - pydantic: 2.6.1 - pydantic-core: 2.16.2 - pydecimal: 1.0.0 - pydoc-data: 1.0.0 - pyyaml: 6.0.1 - regex: 2023.12.25 - regex-tests: 2023.12.25 - requests: 2.31.0 - six: 1.16.0 - sniffio: 1.3.0 - sniffio-tests: 1.3.0 - sqlite3: 1.0.0 - ssl: 1.0.0 - starlette: 0.36.3 Looking for a package not listed here? Tell us what you'd like us to support by [opening a discussion on Github](https://github.com/cloudflare/workerd/discussions/new?category=python-packages). ## HTTP Client Libraries Only HTTP libraries that are able to make requests asynchronously are supported. Currently, these include [`aiohttp`](https://docs.aiohttp.org/en/stable/index.html) and [`httpx`](https://www.python-httpx.org/). You can also use the [`fetch()` API](/workers/runtime-apis/fetch/) from JavaScript, using Python Workers' [foreign function interface](/workers/languages/python/ffi) to make HTTP requests. 
--- # Langchain URL: https://developers.cloudflare.com/workers/languages/python/packages/langchain/ import { Render } from "~/components" [LangChain](https://www.langchain.com/) is the most popular framework for building AI applications powered by large language models (LLMs). LangChain publishes multiple Python packages. The following are provided by the Workers runtime: * [`langchain`](https://pypi.org/project/langchain/) (version `0.1.8`) * [`langchain-core`](https://pypi.org/project/langchain-core/) (version `0.1.25`) * [`langchain-openai`](https://pypi.org/project/langchain-openai/) (version `0.0.6`) ## Get Started <Render file="python-workers-beta-packages" product="workers" /> Clone the `cloudflare/python-workers-examples` repository and run the LangChain example: ```bash git clone https://github.com/cloudflare/python-workers-examples cd python-workers-examples/04-langchain npx wrangler@latest dev ``` ### Example code ```python from js import Response from langchain_core.prompts import PromptTemplate from langchain_openai import OpenAI async def on_fetch(request, env): prompt = PromptTemplate.from_template("Complete the following sentence: I am a {profession} and ") llm = OpenAI(api_key=env.API_KEY) chain = prompt | llm res = await chain.ainvoke({"profession": "electrician"}) return Response.new(res.split(".")[0].strip()) ``` --- # HTTP URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/http/ import { WranglerConfig } from "~/components"; Worker A that declares a Service binding to Worker B can forward a [`Request`](/workers/runtime-apis/request/) object to Worker B, by calling the `fetch()` method that is exposed on the binding object.
For example, consider the following Worker that implements a [`fetch()` handler](/workers/runtime-apis/handlers/fetch/): <WranglerConfig> ```toml name = "worker_b" main = "./src/workerB.js" ``` </WranglerConfig> ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); } } ``` The following Worker declares a binding to the Worker above: <WranglerConfig> ```toml name = "worker_a" main = "./src/workerA.js" services = [ { binding = "WORKER_B", service = "worker_b" } ] ``` </WranglerConfig> And then can forward a request to it: ```js export default { async fetch(request, env) { return await env.WORKER_B.fetch(request); }, }; ``` --- # Service bindings URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/ import { WranglerConfig } from "~/components"; ## About Service bindings Service bindings allow one Worker to call into another, without going through a publicly-accessible URL. A Service binding allows Worker A to call a method on Worker B, or to forward a request from Worker A to Worker B. Service bindings provide the separation of concerns that microservice or service-oriented architectures provide, without configuration pain, performance overhead or need to learn RPC protocols. - **Service bindings are fast.** When you use Service Bindings, there is zero overhead or added latency. By default, both Workers run on the same thread of the same Cloudflare server. And when you enable [Smart Placement](/workers/configuration/smart-placement/), each Worker runs in the optimal location for overall performance. - **Service bindings are not just HTTP.** Worker A can expose methods that can be directly called by Worker B. Communicating between services only requires writing JavaScript methods and classes. - **Service bindings don't increase costs.** You can split apart functionality into multiple Workers, without incurring additional costs. 
Learn more about [pricing for Service Bindings](/workers/platform/pricing/#service-bindings).  Service bindings are commonly used to: * **Provide a shared internal service to multiple Workers.** For example, you can deploy an authentication service as its own Worker, and then have any number of separate Workers communicate with it via Service bindings. * **Isolate services from the public Internet.** You can deploy a Worker that is not reachable via the public Internet, and can only be reached via an explicit Service binding that another Worker declares. * **Allow teams to deploy code independently.** Team A can deploy their Worker on their own release schedule, and Team B can deploy their Worker separately. ## Configuration You add a Service binding by modifying the [Wrangler configuration file](/workers/wrangler/configuration/) of the caller — the Worker that you want to be able to initiate requests. For example, if you want Worker A to be able to call Worker B — you'd add the following to the [Wrangler configuration file](/workers/wrangler/configuration/) for Worker A: <WranglerConfig> ```toml services = [ { binding = "<BINDING_NAME>", service = "<WORKER_NAME>" } ] ``` </WranglerConfig> * `binding`: The name of the key you want to expose on the `env` object. * `service`: The name of the target Worker you would like to communicate with. This Worker must be on your Cloudflare account. ## Interfaces Worker A that declares a Service binding to Worker B can call Worker B in two different ways: 1. [RPC](/workers/runtime-apis/bindings/service-bindings/rpc) lets you communicate between Workers using function calls that you define. For example, `await env.BINDING_NAME.myMethod(arg1)`. This is recommended for most use cases, and allows you to create your own internal APIs that your Worker makes available to other Workers. 2. 
[HTTP](/workers/runtime-apis/bindings/service-bindings/http) lets you communicate between Workers by calling the [`fetch()` handler](/workers/runtime-apis/handlers/fetch) from other Workers, sending `Request` objects and receiving `Response` objects back. For example, `env.BINDING_NAME.fetch(request)`. ## Example — build your first Service binding using RPC First, create the Worker that you want to communicate with. Let's call this "Worker B". Worker B exposes the public method, `add(a, b)`: <WranglerConfig> ```toml name = "worker_b" main = "./src/workerB.js" ``` </WranglerConfig> ```js import { WorkerEntrypoint } from "cloudflare:workers"; export default class WorkerB extends WorkerEntrypoint { // Currently, entrypoints without a named handler are not supported async fetch() { return new Response(null, {status: 404}); } async add(a, b) { return a + b; } } ``` Next, create the Worker that will call Worker B. Let's call this "Worker A". Worker A declares a binding to Worker B. This is what gives it permission to call public methods on Worker B. <WranglerConfig> ```toml name = "worker_a" main = "./src/workerA.js" services = [ { binding = "WORKER_B", service = "worker_b" } ] ``` </WranglerConfig> ```js export default { async fetch(request, env) { const result = await env.WORKER_B.add(1, 2); return new Response(result); } } ``` To run both Worker A and Worker B in local development, you must run two instances of [Wrangler](/workers/wrangler) in your terminal. For each Worker, open a new terminal and run [`npx wrangler@latest dev`](/workers/wrangler/commands#dev). Each Worker is deployed separately. ## Lifecycle The Service bindings API is asynchronous — you must `await` any method you call. If Worker A invokes Worker B via a Service binding, and Worker A does not await the completion of Worker B, Worker B will be terminated early. 
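The awaiting requirement can be seen in a plain-JavaScript sketch (using a hypothetical stub in place of the injected binding — in a deployed Worker, `env.WORKER_B` is provided by the runtime):

```js
// Hypothetical stub standing in for the Service binding the runtime injects;
// in a deployed Worker, `env.WORKER_B` is provided for you.
const env = {
  WORKER_B: {
    async add(a, b) {
      return a + b;
    },
  },
};

async function handleRequest() {
  // Awaiting the call ensures Worker B runs to completion before we return.
  const sum = await env.WORKER_B.add(1, 2);
  return String(sum);
}
```

In a real Worker, the same `await` appears inside the `fetch()` handler, as in the example above.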
For more about the lifecycle of calling a Worker over a Service Binding via RPC, refer to the [RPC Lifecycle](/workers/runtime-apis/rpc/lifecycle) docs. ## Local development Local development is supported for Service bindings. For each Worker, open a new terminal and use [`wrangler dev`](/workers/wrangler/commands/#dev) in the relevant directory. When running `wrangler dev`, service bindings will show as `connected`/`not connected` depending on whether Wrangler can find a running `wrangler dev` session for that Worker. For example: ```sh $ wrangler dev ... Your worker has access to the following bindings: - Services: - SOME_OTHER_WORKER: some-other-worker [connected] - ANOTHER_WORKER: another-worker [not connected] ``` Wrangler also supports running multiple Workers at once with one command. To try it out, pass multiple `-c` flags to Wrangler, like this: `wrangler dev -c wrangler.json -c ../other-worker/wrangler.json`. The first config will be treated as the _primary_ worker, which will be exposed over HTTP as usual at `http://localhost:8787`. The remaining config files will be treated as _secondary_ and will only be accessible via a service binding from the primary worker. :::caution Support for running multiple Workers at once with one Wrangler command is experimental, and subject to change as we work on the experience. If you run into bugs or have any feedback, [open an issue on the workers-sdk repository](https://github.com/cloudflare/workers-sdk/issues/new) ::: ## Deployment Workers using Service bindings are deployed separately. When getting started and deploying for the first time, this means that the target Worker (Worker B in the examples above) must be deployed first, before Worker A. Otherwise, when you attempt to deploy Worker A, deployment will fail, because Worker A declares a binding to Worker B, which does not yet exist. 
When making changes to existing Workers, in most cases you should:

* Deploy changes to Worker B first, in a way that is compatible with the existing Worker A. For example, add a new method to Worker B.
* Next, deploy changes to Worker A. For example, call the new method on Worker B, from Worker A.
* Finally, remove any unused code. For example, delete the previously used method on Worker B.

## Smart Placement

[Smart Placement](/workers/configuration/smart-placement) automatically places your Worker in an optimal location that minimizes latency.

You can use Smart Placement together with Service bindings to split your Worker into two services. Refer to the [docs on Smart Placement](/workers/configuration/smart-placement/#best-practices) for more.

## Limits

Service bindings have the following limits:

* Each request to a Worker via a Service binding counts toward your [subrequest limit](/workers/platform/limits/#subrequests).
* A single request has a maximum of 32 Worker invocations, and each call to a Service binding counts towards this limit. Subsequent calls will throw an exception.
* Calling a service binding does not count towards [simultaneous open connection limits](/workers/platform/limits/#simultaneous-open-connections).

---

# 📅 Compatibility Dates

URL: https://developers.cloudflare.com/workers/testing/miniflare/core/compatibility/

- [Compatibility Dates Reference](/workers/configuration/compatibility-dates)

## Compatibility Dates

Miniflare uses compatibility dates to opt into backwards-incompatible changes from a specific date. If one isn't set, it will default to some time far in the past.
```js
const mf = new Miniflare({
  compatibilityDate: "2021-11-12",
});
```

## Compatibility Flags

Miniflare also lets you opt-in/out of specific changes using compatibility flags:

```js
const mf = new Miniflare({
  compatibilityFlags: [
    "formdata_parser_supports_files",
    "durable_object_fetch_allows_relative_url",
  ],
});
```

---

# RPC (WorkerEntrypoint)

URL: https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/

import { DirectoryListing, Render, WranglerConfig } from "~/components"

[Service bindings](/workers/runtime-apis/bindings/service-bindings) allow one Worker to call into another, without going through a publicly-accessible URL.

You can use Service bindings to create your own internal APIs that your Worker makes available to other Workers. This can be done by extending the built-in `WorkerEntrypoint` class, and adding your own public methods. These public methods can then be directly called by other Workers on your Cloudflare account that declare a [binding](/workers/runtime-apis/bindings) to this Worker.

The [RPC system in Workers](/workers/runtime-apis/rpc) is designed to feel as similar as possible to calling a JavaScript function in the same Worker. In most cases, you should be able to write code in the same way you would if everything was in a single Worker.

:::note
You can also use RPC to communicate between Workers and [Durable Objects](/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/#invoke-rpc-methods).
:::

## Example

For example, the following Worker implements the public method `add(a, b)`:

<Render file="service-binding-rpc-example" product="workers" />

You do not need to learn, implement, or think about special protocols to use the RPC system. The client, in this case Worker A, calls Worker B and tells it to execute a specific procedure using specific arguments that the client provides. This is accomplished with standard JavaScript classes.
## The `WorkerEntrypoint` Class To provide RPC methods from your Worker, you must extend the `WorkerEntrypoint` class, as shown in the example below: ```js import { WorkerEntrypoint } from "cloudflare:workers"; export default class extends WorkerEntrypoint { async add(a, b) { return a + b; } } ``` A new instance of the class is created every time the Worker is called. Note that even though the Worker is implemented as a class, it is still stateless — the class instance only lasts for the duration of the invocation. If you need to persist or coordinate state in Workers, you should use [Durable Objects](/durable-objects). ### Bindings (`env`) The [`env`](/workers/runtime-apis/bindings) object is exposed as a class property of the `WorkerEntrypoint` class. For example, a Worker that declares a binding to the [environment variable](/workers/configuration/environment-variables/) `GREETING`: <WranglerConfig> ```toml name = "my-worker" [vars] GREETING = "Hello" ``` </WranglerConfig> Can access it by calling `this.env.GREETING`: ```js import { WorkerEntrypoint } from "cloudflare:workers"; export default class extends WorkerEntrypoint { fetch() { return new Response("Hello from my-worker"); } async greet(name) { return this.env.GREETING + name; } } ``` You can use any type of [binding](/workers/runtime-apis/bindings) this way. ### Lifecycle methods (`ctx`) The [`ctx`](/workers/runtime-apis/context) object is exposed as a class property of the `WorkerEntrypoint` class. 
For example, you can extend the lifetime of the invocation context by calling the `waitUntil()` method: ```js import { WorkerEntrypoint } from "cloudflare:workers"; export default class extends WorkerEntrypoint { fetch() { return new Response("Hello from my-worker"); } async signup(email, name) { // sendEvent() will continue running, even after this method returns a value to the caller this.ctx.waitUntil(this.#sendEvent("signup", email)) // Perform any other work return "Success"; } async #sendEvent(eventName, email) { //... } } ``` ## Named entrypoints You can also export any number of named `WorkerEntrypoint` classes from within a single Worker, in addition to the default export. You can then declare a Service binding to a specific named entrypoint. You can use this to group multiple pieces of compute together. For example, you might create a distinct `WorkerEntrypoint` for each permission role in your application, and use these to provide role-specific RPC methods: <WranglerConfig> ```toml name = "todo-app" [[d1_databases]] binding = "D1" database_name = "todo-app-db" database_id = "<unique-ID-for-your-database>" ``` </WranglerConfig> ```js import { WorkerEntrypoint } from "cloudflare:workers"; export class AdminEntrypoint extends WorkerEntrypoint { async createUser(username) { await this.env.D1.prepare("INSERT INTO users (username) VALUES (?)") .bind(username) .run(); } async deleteUser(username) { await this.env.D1.prepare("DELETE FROM users WHERE username = ?") .bind(username) .run(); } } export class UserEntrypoint extends WorkerEntrypoint { async getTasks(userId) { return await this.env.D1.prepare( "SELECT title FROM tasks WHERE user_id = ?" 
    )
      .bind(userId)
      .all();
  }

  async createTask(userId, title) {
    await this.env.D1.prepare(
      "INSERT INTO tasks (user_id, title) VALUES (?, ?)"
    )
      .bind(userId, title)
      .run();
  }
}

export default class extends WorkerEntrypoint {
  async fetch(request) {
    return new Response("Hello from my to do app");
  }
}
```

You can then declare a Service binding directly to `AdminEntrypoint` in another Worker:

<WranglerConfig>

```toml
name = "admin-app"

[[services]]
binding = "ADMIN"
service = "todo-app"
entrypoint = "AdminEntrypoint"
```

</WranglerConfig>

```js
export default {
  async fetch(request, env) {
    await env.ADMIN.createUser("aNewUser");
    return new Response("Hello from admin app");
  },
};
```

You can learn more about how to configure D1 in the [D1 documentation](/d1/get-started/#3-bind-your-worker-to-your-d1-database).

You can try out a complete example of this to-do app, as well as a Discord bot built with named entrypoints, by cloning the [cloudflare/js-rpc-and-entrypoints-demo repository](https://github.com/cloudflare/js-rpc-and-entrypoints-demo) from GitHub.

## Further reading

<DirectoryListing folder="workers/runtime-apis/rpc" />

---

# 📨 Fetch Events

URL: https://developers.cloudflare.com/workers/testing/miniflare/core/fetch/

- [`FetchEvent` Reference](/workers/runtime-apis/handlers/fetch/)

## HTTP Requests

Whenever an HTTP request is made, a `Request` object is dispatched to your worker, then the generated `Response` is returned. The `Request` object will include a [`cf` object](/workers/runtime-apis/request#incomingrequestcfproperties). Miniflare will log the method, path, status, and the time it took to respond.

If the Worker throws an error whilst generating a response, an error page containing the stack trace is returned instead.

## Dispatching Events

When using the API, the `dispatchFetch` function can be used to dispatch `fetch` events to your Worker. This can be used for testing responses.
`dispatchFetch` has the same API as the regular `fetch` method: it either takes a `Request` object, or a URL and optional `RequestInit` object:

```js
import { Miniflare, Request } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `
  export default {
    async fetch(request, env, ctx) {
      const body = JSON.stringify({
        url: request.url,
        header: request.headers.get("X-Message"),
      });
      return new Response(body, {
        headers: { "Content-Type": "application/json" },
      });
    },
  }
  `,
});

let res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.json()); // { url: "http://localhost:8787/", header: null }

res = await mf.dispatchFetch("http://localhost:8787/1", {
  headers: { "X-Message": "1" },
});
console.log(await res.json()); // { url: "http://localhost:8787/1", header: "1" }

res = await mf.dispatchFetch(
  new Request("http://localhost:8787/2", {
    headers: { "X-Message": "2" },
  }),
);
console.log(await res.json()); // { url: "http://localhost:8787/2", header: "2" }
```

When dispatching events, you are responsible for adding [`CF-*` headers](https://support.cloudflare.com/hc/en-us/articles/200170986-How-does-Cloudflare-handle-HTTP-Request-headers-) and the [`cf` object](/workers/runtime-apis/request#incomingrequestcfproperties). This lets you control their values for testing:

```js
const res = await mf.dispatchFetch("http://localhost:8787", {
  headers: {
    "CF-IPCountry": "GB",
  },
  cf: {
    country: "GB",
  },
});
```

## Upstream

Miniflare will call each `fetch` listener until a response is returned.
If no response is returned, or an exception is thrown and `passThroughOnException()` has been called, the response will be fetched from the specified upstream instead: ```js import { Miniflare } from "miniflare"; const mf = new Miniflare({ script: ` addEventListener("fetch", (event) => { event.passThroughOnException(); throw new Error(); }); `, upstream: "https://miniflare.dev", }); // If you don't use the same upstream URL when dispatching, Miniflare will // rewrite it to match the upstream const res = await mf.dispatchFetch("https://miniflare.dev/core/fetch"); console.log(await res.text()); // Source code of this page ``` --- # Core URL: https://developers.cloudflare.com/workers/testing/miniflare/core/ import { DirectoryListing } from "~/components"; <DirectoryListing path="/core" /> --- # 📚 Modules URL: https://developers.cloudflare.com/workers/testing/miniflare/core/modules/ - [Modules Reference](/workers/reference/migrate-to-module-workers/) ## Enabling Modules Miniflare supports both the traditional `service-worker` and the newer `modules` formats for writing workers. To use the `modules` format, enable it with: ```js const mf = new Miniflare({ modules: true, }); ``` You can then use `modules` worker scripts like the following: ```js export default { async fetch(request, env, ctx) { // - `request` is the incoming `Request` instance // - `env` contains bindings, KV namespaces, Durable Objects, etc // - `ctx` contains `waitUntil` and `passThroughOnException` methods return new Response("Hello Miniflare!"); }, async scheduled(controller, env, ctx) { // - `controller` contains `scheduledTime` and `cron` properties // - `env` contains bindings, KV namespaces, Durable Objects, etc // - `ctx` contains the `waitUntil` method console.log("Doing something scheduled..."); }, }; ``` <Aside type="warning" header="Warning"> String scripts via the `script` option are supported using the `modules` format, but you cannot import other modules using them. 
You must use a script file via the `scriptPath` option for this.
</Aside>

## Module Rules

Miniflare supports all module types: `ESModule`, `CommonJS`, `Text`, `Data` and `CompiledWasm`. You can specify additional module resolution rules as follows:

```js
const mf = new Miniflare({
  modulesRules: [
    { type: "ESModule", include: ["**/*.js"], fallthrough: true },
    { type: "Text", include: ["**/*.txt"] },
  ],
});
```

### Default Rules

The following rules are automatically added to the end of your modules rules list. You can override them by specifying rules matching the same `globs`:

```js
[
  { type: "ESModule", include: ["**/*.mjs"] },
  { type: "CommonJS", include: ["**/*.js", "**/*.cjs"] },
];
```

---

# Developing

URL: https://developers.cloudflare.com/workers/testing/miniflare/developing/

import { DirectoryListing } from "~/components";

<DirectoryListing path="/developing" />

---

# ⬆️ Migrating from Version 2

URL: https://developers.cloudflare.com/workers/testing/miniflare/migrations/from-v2/

Miniflare v3 now uses [`workerd`](https://github.com/cloudflare/workerd), the open-source Cloudflare Workers runtime. This is the same runtime that's deployed on Cloudflare's network, giving bug-for-bug compatibility and practically eliminating behavior mismatches. Refer to the [Miniflare v3](https://blog.cloudflare.com/miniflare-and-workerd/) and [Wrangler v3 announcements](https://blog.cloudflare.com/wrangler3/) for more information.

## CLI Changes

Miniflare v3 no longer includes a standalone CLI. To get the same functionality, you will need to switch over to [Wrangler](/workers/wrangler/). Wrangler v3 uses Miniflare v3 by default. To start a local development server, run:

```sh
$ npx wrangler@3 dev
```

If there are features from the Miniflare CLI you would like to see in Wrangler, please open an issue on [GitHub](https://github.com/cloudflare/workers-sdk/issues/new/choose).
## API Changes

We have tried to keep Miniflare v3's API close to Miniflare v2 where possible, but many options and methods have been removed or changed with the switch to the open-source `workerd` runtime. See the [Getting Started guide](/workers/testing/miniflare/get-started) for the new API docs.

### Updated Options

- `kvNamespaces/r2Buckets/d1Databases` - In addition to `string[]`s, these options now accept `Record<string, string>`s, mapping binding names to namespace IDs/bucket names/database IDs. This means multiple Workers can bind to the same namespace/bucket/database under different names.
- `queueBindings` - Renamed to `queueProducers`. This either accepts a `Record<string, string>` mapping binding names to queue names, or a `string[]` of binding names to queues of the same name.
- `queueConsumers` - Either accepts a `Record<string, QueueConsumerOptions>` mapping queue names to consumer options, or a `string[]` of queue names to consume with default options. `QueueConsumerOptions` has the following type:

```ts
interface QueueConsumerOptions {
  // /queues/platform/configuration/#consumer
  maxBatchSize?: number; // default: 5
  maxBatchTimeout?: number /* seconds */; // default: 1
  maxRetries?: number; // default: 2
  deadLetterQueue?: string; // default: none
}
```

- `cfFetch` - Renamed to `cf`. Either accepts a `boolean`, `string` (as before), or an object to use as the `cf` object for incoming requests.

### Removed Options

- `wranglerConfigPath/wranglerConfigEnv` - Miniflare no longer handles Wrangler's configuration. To programmatically start up a Worker based on Wrangler configuration, use the [`unstable_dev()`](/workers/wrangler/api/#unstable_dev) API.
- `packagePath` - Miniflare no longer loads script paths from `package.json` files. Use the `scriptPath` option to specify your script instead.
- `watch` - Miniflare's API is primarily intended for testing use cases, where file watching isn't usually required.
This option was here to enable Miniflare's CLI which has now been removed. If you need to watch files, consider using a separate file watcher like [`fs.watch()`](https://nodejs.org/api/fs.html#fswatchfilename-options-listener) or [`chokidar`](https://github.com/paulmillr/chokidar), and calling `setOptions()` with your original configuration on change. - `logUnhandledRejections` - Unhandled rejections can be handled in Workers with [`addEventListener("unhandledrejection")`](https://community.cloudflare.com/t/2021-10-21-workers-runtime-release-notes/318571). - `globals` - Injecting arbitrary globals is not supported by [`workerd`](https://github.com/cloudflare/workerd). If you're using a service worker, `bindings` will be injected as globals, but these must be JSON-serialisable. - `https/httpsKey(Path)/httpsCert(Path)/httpsPfx(Path)/httpsPassphrase` - Miniflare does not support starting HTTPS servers yet. These options may be added back in a future release. - `crons` - [`workerd`](https://github.com/cloudflare/workerd) does not support triggering scheduled events yet. This option may be added back in a future release. - `mounts` - Miniflare no longer has the concept of parent and child Workers. Instead, all Workers can be defined at the same level, using the new `workers` option. Here's an example that uses a service binding to increment a value in a shared KV namespace: ```ts import { Miniflare, Response } from "miniflare"; const message = "The count is "; const mf = new Miniflare({ // Options shared between Workers such as HTTP and persistence configuration // should always be defined at the top level. host: "0.0.0.0", port: 8787, kvPersist: true, workers: [ { name: "worker", kvNamespaces: { COUNTS: "counts" }, serviceBindings: { INCREMENTER: "incrementer", // Service bindings can also be defined as custom functions, with access // to anything defined outside Miniflare. async CUSTOM(request) { // `request` is the incoming `Request` object. 
return new Response(message); }, }, modules: true, script: `export default { async fetch(request, env, ctx) { // Get the message defined outside const response = await env.CUSTOM.fetch("http://host/"); const message = await response.text(); // Increment the count 3 times await env.INCREMENTER.fetch("http://host/"); await env.INCREMENTER.fetch("http://host/"); await env.INCREMENTER.fetch("http://host/"); const count = await env.COUNTS.get("count"); return new Response(message + count); } }`, }, { name: "incrementer", // Note we're using the same `COUNTS` namespace as before, but binding it // to `NUMBERS` instead. kvNamespaces: { NUMBERS: "counts" }, // Worker formats can be mixed-and-matched script: `addEventListener("fetch", (event) => { event.respondWith(handleRequest()); }) async function handleRequest() { const count = parseInt((await NUMBERS.get("count")) ?? "0") + 1; await NUMBERS.put("count", count.toString()); return new Response(count.toString()); }`, }, ], }); const res = await mf.dispatchFetch("http://localhost"); console.log(await res.text()); // "The count is 3" await mf.dispose(); ``` - `metaProvider` - The `cf` object and `X-Forwarded-Proto`/`X-Real-IP` headers can be specified when calling `dispatchFetch()` instead. A default `cf` object can be specified using the new `cf` option too. - `durableObjectAlarms` - Miniflare now always enables Durable Object alarms. - `globalAsyncIO/globalTimers/globalRandom` - [`workerd`](https://github.com/cloudflare/workerd) cannot support these options without fundamental changes. - `actualTime` - Miniflare now always returns the current time. - `inaccurateCpu` - Set the `inspectorPort: 9229` option to enable the V8 inspector. Visit `chrome://inspect` in Google Chrome to open DevTools and perform CPU profiling. ### Updated Methods - `setOptions()` - Miniflare v3 now requires a full configuration object to be passed, instead of a partial patch. 
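In practice, that means keeping your full options object around and spreading changes into a fresh copy before each call — a minimal sketch, with the `Miniflare` calls commented out since they require a running instance:

```js
// Keep the complete configuration object in one place.
const baseOptions = {
  modules: true,
  script: `export default { fetch: () => new Response("ok") }`,
};

// const mf = new Miniflare(baseOptions);

// Miniflare v2 accepted a partial patch such as { port: 8788 };
// v3 expects the full configuration again, so spread in the change:
const nextOptions = { ...baseOptions, port: 8788 };
// await mf.setOptions(nextOptions);
```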
### Removed Methods - `reload()` - Call `setOptions()` with the original configuration object to reload Miniflare. - `createServer()/startServer()` - Miniflare now always starts a [`workerd`](https://github.com/cloudflare/workerd) server listening on the configured `host` and `port`, so these methods are redundant. - `dispatchScheduled()/startScheduled()` - The functionality of `dispatchScheduled` can now be done via `getWorker()`. For more information read the [scheduled events documentation](/workers/testing/miniflare/core/scheduled#dispatching-events). - `dispatchQueue()` - Use the `queue()` method on [service bindings](/workers/runtime-apis/bindings/service-bindings) or [queue producer bindings](/queues/configuration/configure-queues/#producer) instead. - `getGlobalScope()/getBindings()/getModuleExports()` - These methods returned objects from inside the Workers sandbox. Since Miniflare now uses [`workerd`](https://github.com/cloudflare/workerd), which runs in a different process, these methods can no longer be supported. - `addEventListener()`/`removeEventListener()` - Miniflare no longer emits `reload` events. As Miniflare no longer watches files, reloads are only triggered by initialisation or `setOptions()` calls. In these cases, it's possible to wait for the reload with either `await mf.ready` or `await mf.setOptions()` respectively. - `Response#waitUntil()` - [`workerd`](https://github.com/cloudflare/workerd) does not support waiting for all `waitUntil()`ed promises yet. ### Removed Packages - `@miniflare/*` - Miniflare is now contained within a single `miniflare` package. 
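In `package.json` terms, this consolidation means a single dependency replaces the scoped packages (a sketch; the version range shown is illustrative):

```json
{
  "devDependencies": {
    "miniflare": "^3.0.0"
  }
}
```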
---

# Migrations

URL: https://developers.cloudflare.com/workers/testing/miniflare/migrations/

import { DirectoryListing } from "~/components";

<DirectoryListing />

---

# 🛠 Attaching a Debugger

URL: https://developers.cloudflare.com/workers/testing/miniflare/developing/debugger/

:::caution
This documentation describes breakpoint debugging when using Miniflare directly, which is only relevant for advanced use cases. Instead, most users should refer to the [Workers Observability documentation for how to set this up when using Wrangler](/workers/observability/dev-tools/breakpoints/).
:::

You can use regular Node.js tools to debug your Workers. Setting breakpoints, watching values and inspecting the call stack are all examples of things you can do with a debugger.

## Visual Studio Code

### Create configuration

The easiest way to debug a Worker in VSCode is to create a new configuration. Open the **Run and Debug** menu in the VSCode activity bar and create a `.vscode/launch.json` file that contains the following:

```json
---
filename: .vscode/launch.json
---
{
  "configurations": [
    {
      "name": "Miniflare",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "cwd": "/",
      "resolveSourceMapLocations": null,
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false
    }
  ]
}
```

From the **Run and Debug** menu in the activity bar, select the `Miniflare` configuration, and click the green play button to start debugging.

## WebStorm

Create a new configuration, by clicking **Add Configuration** in the top right.

Click the **plus** button in the top left of the popup and create a new **Node.js/Chrome** configuration. Set the **Host** field to `localhost` and the **Port** field to `9229`. Then click **OK**.

With the new configuration selected, click the green debug button to start debugging.

## DevTools

Breakpoints can also be added via the Workers DevTools. For more information, [read the guide](/workers/observability/dev-tools) in the Cloudflare Workers docs.
--- # 🕸 Web Standards URL: https://developers.cloudflare.com/workers/testing/miniflare/core/standards/ - [Web Standards Reference](/workers/runtime-apis/web-standards) - [Encoding Reference](/workers/runtime-apis/encoding) - [Fetch Reference](/workers/runtime-apis/fetch) - [Request Reference](/workers/runtime-apis/request) - [Response Reference](/workers/runtime-apis/response) - [Streams Reference](/workers/runtime-apis/streams) - [Web Crypto Reference](/workers/runtime-apis/web-crypto) ## Mocking Outbound `fetch` Requests When using the API, Miniflare allows you to substitute custom `Response`s for `fetch()` calls using `undici`'s [`MockAgent` API](https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentgetorigin). This is useful for testing Workers that make HTTP requests to other services. To enable `fetch` mocking, create a [`MockAgent`](https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentgetorigin) using the `createFetchMock()` function, then set this using the `fetchMock` option. 
```js import { Miniflare, createFetchMock } from "miniflare"; // Create `MockAgent` and connect it to the `Miniflare` instance const fetchMock = createFetchMock(); const mf = new Miniflare({ modules: true, script: ` export default { async fetch(request, env, ctx) { const res = await fetch("https://example.com/thing"); const text = await res.text(); return new Response(\`response:\${text}\`); } } `, fetchMock, }); // Throw when no matching mocked request is found // (see https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentdisablenetconnect) fetchMock.disableNetConnect(); // Mock request to https://example.com/thing // (see https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentgetorigin) const origin = fetchMock.get("https://example.com"); // (see https://undici.nodejs.org/#/docs/api/MockPool?id=mockpoolinterceptoptions) origin .intercept({ method: "GET", path: "/thing" }) .reply(200, "Mocked response!"); const res = await mf.dispatchFetch("http://localhost:8787/"); console.log(await res.text()); // "response:Mocked response!" ``` ## Subrequests Miniflare does not support limiting the amount of [subrequests](/workers/platform/limits#account-plan-limits). Please keep this in mind if you make a large amount of subrequests from your Worker. --- # 🔌 Multiple Workers URL: https://developers.cloudflare.com/workers/testing/miniflare/core/multiple-workers/ Miniflare allows you to run multiple workers in the same instance. All Workers can be defined at the same level, using the `workers` option. Here's an example that uses a service binding to increment a value in a shared KV namespace: ```js import { Miniflare, Response } from "miniflare"; const message = "The count is "; const mf = new Miniflare({ // Options shared between workers such as HTTP and persistence configuration // should always be defined at the top level. 
host: "0.0.0.0", port: 8787, kvPersist: true, workers: [ { name: "worker", kvNamespaces: { COUNTS: "counts" }, serviceBindings: { INCREMENTER: "incrementer", // Service bindings can also be defined as custom functions, with access // to anything defined outside Miniflare. async CUSTOM(request) { // `request` is the incoming `Request` object. return new Response(message); }, }, modules: true, script: `export default { async fetch(request, env, ctx) { // Get the message defined outside const response = await env.CUSTOM.fetch("http://host/"); const message = await response.text(); // Increment the count 3 times await env.INCREMENTER.fetch("http://host/"); await env.INCREMENTER.fetch("http://host/"); await env.INCREMENTER.fetch("http://host/"); const count = await env.COUNTS.get("count"); return new Response(message + count); } }`, }, { name: "incrementer", // Note we're using the same `COUNTS` namespace as before, but binding it // to `NUMBERS` instead. kvNamespaces: { NUMBERS: "counts" }, // Worker formats can be mixed-and-matched script: `addEventListener("fetch", (event) => { event.respondWith(handleRequest()); }) async function handleRequest() { const count = parseInt((await NUMBERS.get("count")) ?? "0") + 1; await NUMBERS.put("count", count.toString()); return new Response(count.toString()); }`, }, ], }); const res = await mf.dispatchFetch("http://localhost"); console.log(await res.text()); // "The count is 3" await mf.dispose(); ``` ## Routing You can enable routing by specifying `routes` via the API, using the [standard route syntax](/workers/configuration/routing/routes/#matching-behavior). Note port numbers are ignored: ```js const mf = new Miniflare({ workers: [ { scriptPath: "./api/worker.js", routes: ["http://127.0.0.1/api*", "api.mf/*"], }, ], }); ``` When using hostnames that aren't `localhost` or `127.0.0.1`, you may need to edit your computer's `hosts` file, so those hostnames resolve to `localhost`. On Linux and macOS, this is usually at `/etc/hosts`. 
On Windows, it's at `C:\Windows\System32\drivers\etc\hosts`. For the routes above, we would need to append the following entry to the file: ``` 127.0.0.1 api.mf ``` Alternatively, you can customise the `Host` header when sending the request: ```sh # Dispatches to the "api" worker $ curl "http://localhost:8787/todos/update/1" -H "Host: api.mf" ``` When using the API, Miniflare will use the request's URL to determine which Worker to dispatch to. ```js // Dispatches to the "api" worker const res = await mf.dispatchFetch("http://api.mf/todos/update/1", { ... }); ``` ## Durable Objects Miniflare supports the `script_name` option for accessing Durable Objects exported by other scripts. See [📌 Durable Objects](/workers/testing/miniflare/storage/durable-objects#using-a-class-exported-by-another-script) for more details. --- # 🔑 Variables and Secrets URL: https://developers.cloudflare.com/workers/testing/miniflare/core/variables-secrets/ ## Bindings Variables and secrets are bound as follows: ```js const mf = new Miniflare({ bindings: { KEY1: "value1", KEY2: "value2", }, }); ``` ## Text and Data Blobs Text and data blobs can be loaded from files. File contents will be read and bound as `string`s and `ArrayBuffer`s respectively. ```js const mf = new Miniflare({ textBlobBindings: { TEXT: "text.txt" }, dataBlobBindings: { DATA: "data.bin" }, }); ``` ## Globals Injecting arbitrary globals is not supported by [workerd](https://github.com/cloudflare/workerd). If you're using the service worker format, bindings will be injected as globals, but these must be JSON-serialisable. --- # ⚡️ Live Reload URL: https://developers.cloudflare.com/workers/testing/miniflare/developing/live-reload/ Miniflare automatically refreshes your browser whenever your Worker script changes, as long as `liveReload` is set to `true`.
```js const mf = new Miniflare({ liveReload: true, }); ``` Miniflare will only inject the `<script>` tag required for live-reload at the end of responses with the `Content-Type` header set to `text/html`: ```js export default { fetch() { const body = ` <!DOCTYPE html> <html> <body> <p>Try updating me!</p> </body> </html> `; return new Response(body, { headers: { "Content-Type": "text/html; charset=utf-8" }, }); }, }; ``` --- # ✉️ WebSockets URL: https://developers.cloudflare.com/workers/testing/miniflare/core/web-sockets/ - [WebSockets Reference](/workers/runtime-apis/websockets) - [Using WebSockets](/workers/examples/websockets/) ## Server Miniflare will always upgrade WebSocket connections. The Worker must respond with a status `101 Switching Protocols` response including a `webSocket` property. For example, the Worker below implements an echo WebSocket server: ```js export default { fetch(request) { const [client, server] = Object.values(new WebSocketPair()); server.accept(); server.addEventListener("message", (event) => { server.send(event.data); }); return new Response(null, { status: 101, webSocket: client, }); }, }; ``` When using `dispatchFetch`, you are responsible for handling WebSockets by using the `webSocket` property on `Response`. As an example, if the above Worker script were stored in `echo.mjs`: ```js {13-17} import { Miniflare } from "miniflare"; const mf = new Miniflare({ modules: true, scriptPath: "echo.mjs", }); const res = await mf.dispatchFetch("https://example.com", { headers: { Upgrade: "websocket", }, }); const webSocket = res.webSocket; webSocket.accept(); webSocket.addEventListener("message", (event) => { console.log(event.data); }); webSocket.send("Hello!"); // Above listener logs "Hello!"
``` --- # 🚥 Queues URL: https://developers.cloudflare.com/workers/testing/miniflare/core/queues/ - [Queues Reference](/queues/) ## Producers Specify Queue producers to add to your environment as follows: ```js const mf = new Miniflare({ queueProducers: { MY_QUEUE: "my-queue" }, queueProducers: ["MY_QUEUE"], // If binding and queue names are the same }); ``` ## Consumers Specify Workers to consume messages from your Queues as follows: ```js const mf = new Miniflare({ queueConsumers: { "my-queue": { maxBatchSize: 5, // default: 5 maxBatchTimeout: 1 /* second(s) */, // default: 1 maxRetries: 2, // default: 2 deadLetterQueue: "my-dead-letter-queue", // default: none }, }, queueConsumers: ["my-queue"], // If using default consumer options }); ``` ## Manipulating Outside Workers For testing, it can be valuable to interact with Queues outside a Worker. You can do this by using the `workers` option to run multiple Workers in the same instance: ```js const mf = new Miniflare({ workers: [ { name: "a", modules: true, script: ` export default { async fetch(request, env, ctx) { await env.QUEUE.send(await request.text()); return new Response(null, { status: 204 }); } } `, queueProducers: { QUEUE: "my-queue" }, }, { name: "b", modules: true, script: ` export default { async queue(batch, env, ctx) { console.log(batch); } } `, queueConsumers: { "my-queue": { maxBatchTimeout: 1 } }, }, ], }); const queue = await mf.getQueueProducer("QUEUE", "a"); // Get from worker "a" await queue.send("message"); // Logs "message" 1 second later ``` --- # ⏰ Scheduled Events URL: https://developers.cloudflare.com/workers/testing/miniflare/core/scheduled/ - [`ScheduledEvent` Reference](/workers/runtime-apis/handlers/scheduled/) ## Cron Triggers `scheduled` events are automatically dispatched according to the specified cron triggers: ```js const mf = new Miniflare({ crons: ["15 * * * *", "45 * * * *"], }); ``` ## HTTP Triggers Because waiting for cron triggers is annoying, you can also make HTTP requests to `/cdn-cgi/mf/scheduled` to trigger
`scheduled` events: ```sh $ curl "http://localhost:8787/cdn-cgi/mf/scheduled" ``` To simulate different values of `scheduledTime` and `cron` in the dispatched event, use the `time` and `cron` query parameters: ```sh $ curl "http://localhost:8787/cdn-cgi/mf/scheduled?time=1000" $ curl "http://localhost:8787/cdn-cgi/mf/scheduled?cron=*+*+*+*+*" ``` ## Dispatching Events When using the API, the `getWorker` function can be used to dispatch `scheduled` events to your Worker. This can be useful for testing `scheduled` handlers. It takes optional `scheduledTime` and `cron` parameters, which default to the current time and the empty string respectively. It returns a `Promise` which resolves to an object containing the `outcome` of the event and whether `noRetry()` was called: ```js import { Miniflare } from "miniflare"; const mf = new Miniflare({ modules: true, script: ` export default { async scheduled(controller, env, ctx) { if (controller.cron === "* * * * *") controller.noRetry(); } } `, }); const worker = await mf.getWorker(); let scheduledResult = await worker.scheduled({ cron: "* * * * *", }); console.log(scheduledResult); // { outcome: 'ok', noRetry: true } scheduledResult = await worker.scheduled({ scheduledTime: new Date(1000), cron: "30 * * * *", }); console.log(scheduledResult); // { outcome: 'ok', noRetry: false } ``` --- # ✨ Cache URL: https://developers.cloudflare.com/workers/testing/miniflare/storage/cache/ - [Cache Reference](/workers/runtime-apis/cache) - [How the Cache works](/workers/reference/how-the-cache-works/#cache-api) (note that cache using `fetch` is unsupported) ## Default Cache Access to the default cache is enabled by default: ```js addEventListener("fetch", (e) => { e.respondWith(caches.default.match("http://miniflare.dev")); }); ``` ## Named Caches You can access a namespaced cache using `open`.
Note that you cannot name your cache `default`; trying to do so will throw an error: ```js await caches.open("cache_name"); ``` ## Persistence By default, cached data is stored in memory. It will persist between reloads, but not between different `Miniflare` instances. To enable persistence to the file system, specify the cache persistence option: ```js const mf = new Miniflare({ cachePersist: true, // Defaults to ./.mf/cache cachePersist: "./data", // Custom path }); ``` ## Manipulating Outside Workers For testing, it can be useful to put/match data from cache outside a Worker. You can do this with the `getCaches` method: ```js {23,24,25,26,27,28,29,30,31,32} import { Miniflare, Response } from "miniflare"; const mf = new Miniflare({ modules: true, script: ` export default { async fetch(request) { const url = new URL(request.url); const cache = caches.default; if(url.pathname === "/put") { await cache.put("https://miniflare.dev/", new Response("1", { headers: { "Cache-Control": "max-age=3600" }, })); } return cache.match("https://miniflare.dev/"); } } `, }); let res = await mf.dispatchFetch("http://localhost:8787/put"); console.log(await res.text()); // 1 const caches = await mf.getCaches(); // Gets the global caches object const cachedRes = await caches.default.match("https://miniflare.dev/"); console.log(await cachedRes.text()); // 1 await caches.default.put( "https://miniflare.dev", new Response("2", { headers: { "Cache-Control": "max-age=3600" }, }), ); res = await mf.dispatchFetch("http://localhost:8787"); console.log(await res.text()); // 2 ``` ## Disabling Both the default and named caches can be disabled by setting the `cache` option to `false`. When disabled, the caches will still be available in the sandbox; they just won't cache anything.
This may be useful during development: ```js const mf = new Miniflare({ cache: false, }); ``` --- # 💾 D1 URL: https://developers.cloudflare.com/workers/testing/miniflare/storage/d1/ - [D1 Reference](/d1/) ## Databases Specify D1 Databases to add to your environment as follows: ```js const mf = new Miniflare({ d1Databases: { DB: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" }, }); ``` ## Working with D1 Databases For testing, it can be useful to put/get data from D1 storage bound to a Worker. You can do this with the `getD1Database` method: ```js const db = await mf.getD1Database("DB"); const stmt = db.prepare("<Query>"); const returnValue = await stmt.run(); console.log(returnValue.results); ``` --- # 📌 Durable Objects URL: https://developers.cloudflare.com/workers/testing/miniflare/storage/durable-objects/ - [Durable Objects Reference](/durable-objects/api/) - [Using Durable Objects](/durable-objects/) ## Objects Specify Durable Objects to add to your environment as follows: ```js const mf = new Miniflare({ modules: true, script: ` export class Object1 { async fetch(request) { ... } } export default { fetch(request) { ... } } `, durableObjects: { // Note Object1 is exported from main (string) script OBJECT1: "Object1", }, }); ``` ## Persistence By default, Durable Object data is stored in memory. It will persist between reloads, but not between different `Miniflare` instances. To enable persistence to the file system, specify the Durable Object persistence option: ```js const mf = new Miniflare({ durableObjectsPersist: true, // Defaults to ./.mf/do durableObjectsPersist: "./data", // Custom path }); ``` ## Manipulating Outside Workers For testing, it can be useful to make requests to your Durable Objects from outside a Worker. You can do this with the `getDurableObjectNamespace` method.
```js {28,29,30,31,32} import { Miniflare } from "miniflare"; const mf = new Miniflare({ modules: true, durableObjects: { TEST_OBJECT: "TestObject" }, script: ` export class TestObject { constructor(state) { this.storage = state.storage; } async fetch(request) { const url = new URL(request.url); if (url.pathname === "/put") await this.storage.put("key", 1); return new Response((await this.storage.get("key")).toString()); } } export default { async fetch(request, env) { const stub = env.TEST_OBJECT.get(env.TEST_OBJECT.idFromName("test")); return stub.fetch(request); } } `, }); const ns = await mf.getDurableObjectNamespace("TEST_OBJECT"); const id = ns.idFromName("test"); const stub = ns.get(id); const doRes = await stub.fetch("http://localhost:8787/put"); console.log(await doRes.text()); // "1" const res = await mf.dispatchFetch("http://localhost:8787/"); console.log(await res.text()); // "1" ``` ## Using a Class Exported by Another Script Miniflare supports the `script_name` option for accessing Durable Objects exported by other scripts. This requires mounting the other worker as described in [🔌 Multiple Workers](/workers/testing/miniflare/core/multiple-workers). --- # Storage URL: https://developers.cloudflare.com/workers/testing/miniflare/storage/ import { DirectoryListing } from "~/components"; <DirectoryListing path="/storage" /> --- # 🪣 R2 URL: https://developers.cloudflare.com/workers/testing/miniflare/storage/r2/ - [R2 Reference](/r2/api/workers/workers-api-reference/) ## Buckets Specify R2 Buckets to add to your environment as follows: ```js const mf = new Miniflare({ r2Buckets: ["BUCKET1", "BUCKET2"], }); ``` ## Manipulating Outside Workers For testing, it can be useful to put/get data from R2 storage outside a worker. 
You can do this with the `getR2Bucket` method: ```js {18,19,23} import { Miniflare } from "miniflare"; const mf = new Miniflare({ modules: true, script: ` export default { async fetch(request, env, ctx) { const object = await env.BUCKET.get("count"); const value = parseInt(await object.text()) + 1; await env.BUCKET.put("count", value.toString()); return new Response(value.toString()); } } `, r2Buckets: ["BUCKET"], }); const bucket = await mf.getR2Bucket("BUCKET"); await bucket.put("count", "1"); const res = await mf.dispatchFetch("http://localhost:8787/"); console.log(await res.text()); // 2 console.log(await (await bucket.get("count")).text()); // 2 ``` --- # 📦 KV URL: https://developers.cloudflare.com/workers/testing/miniflare/storage/kv/ - [KV Reference](/kv/api/) ## Namespaces Specify KV namespaces to add to your environment as follows: ```js const mf = new Miniflare({ kvNamespaces: ["TEST_NAMESPACE1", "TEST_NAMESPACE2"], }); ``` You can now access KV namespaces in your workers: ```js export default { async fetch(request, env) { return new Response(await env.TEST_NAMESPACE1.get("key")); }, }; ``` Miniflare supports all KV operations and data types. ## Manipulating Outside Workers For testing, it can be useful to put/get data from KV outside a worker. 
You can do this with the `getKVNamespace` method: ```js {17,18,22} import { Miniflare } from "miniflare"; const mf = new Miniflare({ modules: true, script: ` export default { async fetch(request, env, ctx) { const value = parseInt(await env.TEST_NAMESPACE.get("count")) + 1; await env.TEST_NAMESPACE.put("count", value.toString()); return new Response(value.toString()); }, } `, kvNamespaces: ["TEST_NAMESPACE"], }); const ns = await mf.getKVNamespace("TEST_NAMESPACE"); await ns.put("count", "1"); const res = await mf.dispatchFetch("http://localhost:8787/"); console.log(await res.text()); // 2 console.log(await ns.get("count")); // 2 ``` --- # Get started URL: https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/ import { DirectoryListing } from "~/components" For most users, Cloudflare recommends using the Workers Vitest integration for testing Workers and [Pages Functions](/pages/functions/) projects. [Vitest](https://vitest.dev/) is a popular JavaScript testing framework featuring a very fast watch mode, Jest compatibility, and out-of-the-box support for TypeScript. In this integration, Cloudflare provides a custom pool that allows your Vitest tests to run *inside* the Workers runtime. The Workers Vitest integration: * Supports both **unit tests** and **integration tests**. * Provides direct access to Workers runtime APIs and bindings. * Implements isolated per-test storage. * Runs tests fully locally using [Miniflare](https://miniflare.dev/). * Leverages Vitest's hot-module reloading for near-instant reruns. * Provides a declarative interface for mocking outbound requests. * Supports projects with multiple Workers. Get started with one of the available guides: <DirectoryListing /> :::caution The Workers Vitest integration does not support testing Workers using the service worker format. [Migrate to the ES modules format](/workers/reference/migrate-to-module-workers/) to use the Workers Vitest integration.
::: --- # Migrate from Miniflare 2's test environments URL: https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/migrate-from-miniflare-2/ [Miniflare 2](https://github.com/cloudflare/miniflare?tab=readme-ov-file) provided custom environments for Jest and Vitest in the `jest-environment-miniflare` and `vitest-environment-miniflare` packages respectively. The `@cloudflare/vitest-pool-workers` package provides similar functionality using modern Miniflare versions and the [`workerd` runtime](https://github.com/cloudflare/workerd). `workerd` is the same JavaScript/WebAssembly runtime that powers Cloudflare Workers. Using `workerd` practically eliminates behavior mismatches between your tests and deployed code. Refer to the [Miniflare 3 announcement](https://blog.cloudflare.com/miniflare-and-workerd) for more information. :::caution Cloudflare no longer provides a Jest testing environment for Workers. If you previously used Jest, you will need to [migrate to Vitest](https://vitest.dev/guide/migration.html#migrating-from-jest) first, then follow the rest of this guide. Vitest provides built-in support for TypeScript, ES modules, and hot-module reloading for tests out-of-the-box. ::: :::caution The Workers Vitest integration does not support testing Workers using the service worker format. [Migrate to ES modules format](/workers/reference/migrate-to-module-workers/) first. ::: ## Install the Workers Vitest integration First, you will need to uninstall the old environment and install the new pool. Vitest environments can only customize the global scope, whereas pools can run tests using a completely different runtime. In this case, the pool runs your tests inside [`workerd`](https://github.com/cloudflare/workerd) instead of Node.js. 
```sh npm uninstall vitest-environment-miniflare npm install --save-dev --save-exact vitest@~3.0.0 npm install --save-dev @cloudflare/vitest-pool-workers ``` ## Update your Vitest configuration file After installing the Workers Vitest integration, update your Vitest configuration file to use the pool instead. Most Miniflare configuration previously specified under `environmentOptions` can be moved to `poolOptions.workers.miniflare` instead. Refer to [Miniflare's `WorkerOptions` interface](https://github.com/cloudflare/workers-sdk/blob/main/packages/miniflare/README.md#interface-workeroptions) for supported options and the [Miniflare version 2 to 3 migration guide](/workers/testing/miniflare/migrations/from-v2/) for more information. If you relied on configuration stored in a Wrangler file, set `wrangler.configPath` too. ```diff + import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { - environment: "miniflare", - environmentOptions: { ... }, + poolOptions: { + workers: { + miniflare: { ... }, + wrangler: { configPath: "./wrangler.toml" }, + }, + }, }, }); ``` ## Update your TypeScript configuration file If you are using TypeScript, update your `tsconfig.json` to include the correct ambient `types`: ```diff { "compilerOptions": { ..., "types": [ "@cloudflare/workers-types/experimental", - "vitest-environment-miniflare/globals" + "@cloudflare/vitest-pool-workers" ] }, } ``` ## Access bindings To access [bindings](/workers/runtime-apis/bindings/) in your tests, use the `env` helper from the `cloudflare:test` module. ```diff import { it } from "vitest"; + import { env } from "cloudflare:test"; it("does something", () => { - const env = getMiniflareBindings(); // ...
}); ``` If you are using TypeScript, add an ambient `.d.ts` declaration file defining a `ProvidedEnv` `interface` in the `cloudflare:test` module to control the type of `env`: ```ts declare module "cloudflare:test" { interface ProvidedEnv { NAMESPACE: KVNamespace; } // ...or if you have an existing `Env` type... interface ProvidedEnv extends Env {} } ``` ## Use isolated storage Isolated storage is now enabled by default. You no longer need to include `setupMiniflareIsolatedStorage()` in your tests. ```diff - const describe = setupMiniflareIsolatedStorage(); + import { describe } from "vitest"; ``` ## Work with `waitUntil()` The `new ExecutionContext()` constructor and `getMiniflareWaitUntil()` function are now `createExecutionContext()` and `waitOnExecutionContext()` respectively. Note `waitOnExecutionContext()` now returns an empty `Promise<void>` instead of a `Promise` resolving to the results of all `waitUntil()`ed `Promise`s. ```diff + import { createExecutionContext, waitOnExecutionContext } from "cloudflare:test"; it("does something", async () => { // ... - const ctx = new ExecutionContext(); + const ctx = createExecutionContext(); const response = await worker.fetch(request, env, ctx); - await getMiniflareWaitUntil(ctx); + await waitOnExecutionContext(ctx); }); ``` ## Mock outbound requests The `getMiniflareFetchMock()` function has been replaced with the new `fetchMock` helper from the `cloudflare:test` module. `fetchMock` has the same type as the return type of `getMiniflareFetchMock()`. There are a couple of differences between `fetchMock` and the previous return value of `getMiniflareFetchMock()`: - `fetchMock` is deactivated by default, whereas previously it would start activated. This deactivation prevents unnecessary buffering of request bodies if you are not using `fetchMock`. You will need to call `fetchMock.activate()` before calling `fetch()` to enable it.
- `fetchMock` is reset at the start of each test run, whereas previously, interceptors added in previous runs would apply to the current one. This ensures test runs are not affected by previous runs. ```diff import { beforeAll, afterAll } from "vitest"; + import { fetchMock } from "cloudflare:test"; - const fetchMock = getMiniflareFetchMock(); beforeAll(() => { + fetchMock.activate(); fetchMock.disableNetConnect(); fetchMock .get("https://example.com") .intercept({ path: "/" }) .reply(200, "data"); }); afterAll(() => fetchMock.assertNoPendingInterceptors()); ``` ## Use Durable Object helpers The `getMiniflareDurableObjectStorage()`, `getMiniflareDurableObjectState()`, `getMiniflareDurableObjectInstance()`, and `runWithMiniflareDurableObjectGates()` functions have all been replaced with a single `runInDurableObject()` function from the `cloudflare:test` module. The `runInDurableObject()` function accepts a `DurableObjectStub` with a callback accepting the Durable Object and corresponding `DurableObjectState` as arguments. Consolidating these functions into a single function simplifies the API surface, and ensures instances are accessed with the correct request context and [gating behavior](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/). Refer to the [Test APIs page](/workers/testing/vitest-integration/test-apis/) for more details. 
```diff + import { env, runInDurableObject } from "cloudflare:test"; it("does something", async () => { - const env = getMiniflareBindings(); const id = env.OBJECT.newUniqueId(); + const stub = env.OBJECT.get(id); - const storage = await getMiniflareDurableObjectStorage(id); - doSomethingWith(storage); + await runInDurableObject(stub, async (instance, state) => { + doSomethingWith(state.storage); + }); - const state = await getMiniflareDurableObjectState(id); - doSomethingWith(state); + await runInDurableObject(stub, async (instance, state) => { + doSomethingWith(state); + }); - const instance = await getMiniflareDurableObjectInstance(id); - await runWithMiniflareDurableObjectGates(state, async () => { - doSomethingWith(instance); - }); + await runInDurableObject(stub, async (instance) => { + doSomethingWith(instance); + }); }); ``` The `flushMiniflareDurableObjectAlarms()` function has been replaced with the `runDurableObjectAlarm()` function from the `cloudflare:test` module. The `runDurableObjectAlarm()` function accepts a single `DurableObjectStub` and returns a `Promise` that resolves to `true` if an alarm was scheduled and the `alarm()` handler was executed, or `false` otherwise. To "flush" multiple instances' alarms, call `runDurableObjectAlarm()` in a loop. ```diff + import { env, runDurableObjectAlarm } from "cloudflare:test"; it("does something", async () => { - const env = getMiniflareBindings(); const id = env.OBJECT.newUniqueId(); - await flushMiniflareDurableObjectAlarms([id]); + const stub = env.OBJECT.get(id); + const ran = await runDurableObjectAlarm(stub); }); ``` Finally, the `getMiniflareDurableObjectIds()` function has been replaced with the `listDurableObjectIds()` function from the `cloudflare:test` module. The `listDurableObjectIds()` function now accepts a `DurableObjectNamespace` instance instead of a namespace `string` to provide stricter typing. Note the `listDurableObjectIds()` function now respects isolated storage. 
If enabled, IDs of objects created in other tests will not be returned. ```diff + import { env, listDurableObjectIds } from "cloudflare:test"; it("does something", async () => { - const ids = await getMiniflareDurableObjectIds("OBJECT"); + const ids = await listDurableObjectIds(env.OBJECT); }); ``` --- # Migrate from unstable_dev URL: https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/migrate-from-unstable-dev/ The [`unstable_dev`](/workers/wrangler/api/#unstable_dev) API was previously a recommended approach for running integration tests. The `@cloudflare/vitest-pool-workers` package integrates directly with Vitest for fast re-runs and supports both unit and integration tests, all whilst providing isolated per-test storage. This guide demonstrates key differences between tests written with the `unstable_dev` API and the Workers Vitest integration. For more information on writing tests with the Workers Vitest integration, refer to [Write your first test](/workers/testing/vitest-integration/get-started/write-your-first-test/). ## Reference a Worker for integration testing With `unstable_dev`, to trigger a `fetch` event, you would do this: ```js import { unstable_dev } from "wrangler"; it("dispatches fetch event", async () => { const worker = await unstable_dev("src/index.ts"); const resp = await worker.fetch("http://example.com"); ... }); ``` With the Workers Vitest integration, you can accomplish the same goal using `SELF` from `cloudflare:test`. `SELF` is a [service binding](/workers/runtime-apis/bindings/service-bindings/) to the default export defined by the `main` option in your [Wrangler configuration file](/workers/wrangler/configuration/). This `main` Worker runs in the same isolate as tests, so any global mocks will apply to it too.
```js import { SELF } from "cloudflare:test"; import "../src/"; // Currently required to automatically rerun tests when `main` changes it("dispatches fetch event", async () => { const response = await SELF.fetch("http://example.com"); ... }); ``` ## Stop a Worker With the Workers Vitest integration, there is no need to stop a Worker via `worker.stop()`. This functionality is handled automatically after tests run. ## Import Wrangler configuration Via the `unstable_dev` API, you can reference a [Wrangler configuration file](/workers/wrangler/configuration/) by adding it as an option: ```js await unstable_dev("src/index.ts", { config: "wrangler.toml", }); ``` With the Workers Vitest integration, you can now set this reference to a [Wrangler configuration file](/workers/wrangler/configuration/) in `vitest.config.js` for all of your tests: ```js null {6-8} import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { poolOptions: { workers: { wrangler: { configPath: "wrangler.toml", }, }, }, }, }); ``` ## Test service Workers Unlike the `unstable_dev` API, the Workers Vitest integration does not support testing Workers using the service worker format. You will need to first [migrate to the ES modules format](/workers/reference/migrate-to-module-workers/) in order to use the Workers Vitest integration. ## Define types You can remove `UnstableDevWorker` imports from your code. Instead, follow the [Write your first test guide](/workers/testing/vitest-integration/get-started/write-your-first-test/#define-types) to define types for all of your tests. ```diff - import { unstable_dev } from "wrangler"; - import type { UnstableDevWorker } from "wrangler"; + import worker from "src/index.ts"; describe("Worker", () => { - let worker: UnstableDevWorker; ... }); ``` ## Related resources * [Write your first test](/workers/testing/vitest-integration/get-started/write-your-first-test/#define-types) - Write unit tests against Workers.
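As a companion to the configuration above: the `main` option that `SELF` dispatches to lives in the same Wrangler configuration file referenced by `configPath`. A minimal sketch, with all values illustrative:

```toml
# wrangler.toml (name, path, and date are illustrative placeholders)
name = "my-worker"
main = "src/index.ts"              # the default export that SELF dispatches to
compatibility_date = "2022-10-31"
```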
--- # Write your first test URL: https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/write-your-first-test/ import { TabItem, Tabs } from "~/components"; This guide will walk you through installing and setting up the `@cloudflare/vitest-pool-workers` package. This will help you get started writing tests against your Workers using Vitest. The `@cloudflare/vitest-pool-workers` package works by running code inside a Cloudflare Worker that Vitest would usually run inside a [Node.js worker thread](https://nodejs.org/api/worker_threads.html). For examples of tests using `@cloudflare/vitest-pool-workers`, refer to [Recipes](/workers/testing/vitest-integration/recipes/). ## Prerequisites - Open the root directory of your Worker or [create a new Worker](/workers/get-started/guide/#1-create-a-new-worker-project). - Make sure that your Worker is developed using the ES modules format. To migrate from the service worker format to the ES modules format, refer to the [Migrate to the ES modules format](/workers/reference/migrate-to-module-workers/) guide. - In your project's [Wrangler configuration file](/workers/wrangler/configuration/), define a [compatibility date](/workers/configuration/compatibility-dates/) of `2022-10-31` or higher, and include `nodejs_compat` in your [compatibility flags](/workers/configuration/compatibility-flags). ## Install Vitest and `@cloudflare/vitest-pool-workers` Open a terminal window and make sure you are in your project's root directory. Once you have confirmed that, run: ```sh npm install vitest@~3.0.0 --save-dev --save-exact npm install @cloudflare/vitest-pool-workers --save-dev ``` The above commands will add the packages to your `package.json` file and install them as dev dependencies. :::note Currently, the `@cloudflare/vitest-pool-workers` package _only_ works with Vitest 2.0.x - 3.0.x.
::: ## Define Vitest configuration If you do not already have a `vitest.config.js` or `vitest.config.ts` file, you will need to create one and define the following configuration. You can reference a Wrangler file to leverage its `main` entry point, [compatibility settings](/workers/configuration/compatibility-dates/), and [bindings](/workers/runtime-apis/bindings/). ```js import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { poolOptions: { workers: { wrangler: { configPath: "./wrangler.toml" }, }, }, }, }); ``` :::note For a full list of available configuration options, refer to [Configuration](/workers/testing/vitest-integration/configuration/). ::: ### Add configuration options via Miniflare Under the hood, the Workers Vitest integration uses [Miniflare](https://miniflare.dev), the same simulator that powers [`wrangler dev`'s](/workers/wrangler/commands/#dev) local mode. Options can be passed directly to Miniflare for advanced configuration. For example, to add bindings that will be used in tests, you can add `miniflare` to `defineWorkersConfig`: ```js null {6-8} export default defineWorkersConfig({ test: { poolOptions: { workers: { main: "./src/index.ts", miniflare: { kvNamespaces: ["TEST_NAMESPACE"], }, }, }, }, }); ``` This configuration would add a KV namespace `TEST_NAMESPACE` that is only accessible in tests. Using this method, you can add or override existing bindings like Durable Objects or service bindings. :::note For a full list of available Miniflare options, refer to the [Miniflare `WorkerOptions` API documentation](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions). ::: ## Define types If you are using TypeScript, you will need to define types for Cloudflare Workers and `cloudflare:test` to make sure they are detected appropriately.
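If you use bindings in your tests (such as the example `TEST_NAMESPACE` KV namespace added via the `miniflare` options above), an ambient declaration file gives the `env` helper a concrete type. The sketch below assumes the file lives at `test/env.d.ts`; the path and binding name are illustrative:

```ts
// test/env.d.ts (illustrative path)
// Types the `env` helper imported from `cloudflare:test`.
declare module "cloudflare:test" {
	interface ProvidedEnv {
		// Matches the example KV binding added via the `miniflare` options
		TEST_NAMESPACE: KVNamespace;
	}
}
```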
Add a `tsconfig.json` in the same folder as your tests (that is, `test`) and add the following: ```json { "extends": "../tsconfig.json", "compilerOptions": { "moduleResolution": "bundler", "types": [ "@cloudflare/workers-types/experimental", "@cloudflare/vitest-pool-workers" ] }, "include": ["./**/*.ts", "../src/env.d.ts"] } ``` Save this file, and you are ready to write your first test. ## Write tests If you created a basic Worker via the guide listed above, you should have the following fetch handler in the `src` folder: <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request, env, ctx): Promise<Response> { return new Response("Hello World!"); }, } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> This Worker receives a request, and returns a response of `"Hello World!"`. In order to test this, create a `test` folder with the following test file: <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js import { env, createExecutionContext, waitOnExecutionContext, } from "cloudflare:test"; import { describe, it, expect } from "vitest"; // Could import any other source file/function here import worker from "../src"; describe("Hello World worker", () => { it("responds with Hello World!", async () => { const request = new Request("http://example.com"); // Create an empty context to pass to `worker.fetch()` const ctx = createExecutionContext(); const response = await worker.fetch(request, env, ctx); // Wait for all `Promise`s passed to `ctx.waitUntil()` to settle before running test assertions await waitOnExecutionContext(ctx); expect(await response.text()).toBe("Hello World!"); }); }); ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts import { env, createExecutionContext, waitOnExecutionContext, } from "cloudflare:test"; 
import { describe, it, expect } from "vitest"; // Could import any other source file/function here import worker from "../src"; // For now, you'll need to do something like this to get a correctly-typed // `Request` to pass to `worker.fetch()`. const IncomingRequest = Request<unknown, IncomingRequestCfProperties>; describe("Hello World worker", () => { it("responds with Hello World!", async () => { const request = new IncomingRequest("http://example.com"); // Create an empty context to pass to `worker.fetch()` const ctx = createExecutionContext(); const response = await worker.fetch(request, env, ctx); // Wait for all `Promise`s passed to `ctx.waitUntil()` to settle before running test assertions await waitOnExecutionContext(ctx); expect(await response.text()).toBe("Hello World!"); }); }); ``` </TabItem> </Tabs> Add functionality to handle a `404` path on the Worker. This functionality will return the text `Not found` as well as the status code `404`. <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js export default { async fetch(request, env, ctx) { const { pathname } = new URL(request.url); if (pathname === "/404") { return new Response("Not found", { status: 404 }); } return new Response("Hello World!"); }, }; ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts export default { async fetch(request, env, ctx): Promise<Response> { const { pathname } = new URL(request.url); if (pathname === "/404") { return new Response("Not found", { status: 404 }); } return new Response("Hello World!"); }, } satisfies ExportedHandler<Env>; ``` </TabItem> </Tabs> To test this, add the following to your test file: <Tabs> <TabItem label="JavaScript" icon="seti:javascript"> ```js it("responds with not found and proper status for /404", async () => { const request = new Request("http://example.com/404"); // Create an empty context to pass to `worker.fetch()` const ctx = createExecutionContext(); const response = await worker.fetch(request, env, ctx); 
// Wait for all `Promise`s passed to `ctx.waitUntil()` to settle before running test assertions await waitOnExecutionContext(ctx); expect(response.status).toBe(404); expect(await response.text()).toBe("Not found"); }); ``` </TabItem> <TabItem label="TypeScript" icon="seti:typescript"> ```ts it("responds with not found and proper status for /404", async () => { const request = new IncomingRequest("http://example.com/404"); // Create an empty context to pass to `worker.fetch()` const ctx = createExecutionContext(); const response = await worker.fetch(request, env, ctx); // Wait for all `Promise`s passed to `ctx.waitUntil()` to settle before running test assertions await waitOnExecutionContext(ctx); expect(response.status).toBe(404); expect(await response.text()).toBe("Not found"); }); ``` </TabItem> </Tabs> ## Related resources - [`@cloudflare/vitest-pool-workers` GitHub repository](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples) - Examples of tests using the `@cloudflare/vitest-pool-workers` package. --- # 1. Migrate webpack projects URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/ import { WranglerConfig } from "~/components"; This guide describes the steps to migrate a webpack project from Wrangler v1 to Wrangler v2. After completing this guide, [update your Wrangler version](/workers/wrangler/migration/v1-to-v2/update-v1-to-v2/). Previous versions of Wrangler offered rudimentary support for [webpack](https://webpack.js.org/) with the `type` and `webpack_config` keys in the [Wrangler configuration file](/workers/wrangler/configuration/). Starting with Wrangler v2, Wrangler no longer supports the `type` and `webpack_config` keys, but you can still use webpack with your Workers. As a developer using webpack with Workers, you may be in one of four categories: 1. 
[I use `[build]` to run webpack (or another bundler) external to `wrangler`.](#i-use-build-to-run-webpack-or-another-bundler-external-to-wrangler). 2. [I use `type = webpack`, but do not provide my own configuration and let Wrangler take care of it.](#i-use-type--webpack-but-do-not-provide-my-own-configuration-and-let-wrangler-take-care-of-it). 3. [I use `type = webpack` and `webpack_config = <path/to/webpack.config.js>` to handle JSX, TypeScript, WebAssembly, HTML files, and other non-standard filetypes.](#i-use-type--webpack-and-webpack_config--pathtowebpackconfigjs-to-handle-jsx-typescript-webassembly-html-files-and-other-non-standard-filetypes). 4. [I use `type = webpack` and `webpack_config = <path/to/webpack.config.js>` to perform code-transforms and/or other code-modifying functionality.](#i-use-type--webpack-and-webpack_config--pathtowebpackconfigjs-to-perform-code-transforms-andor-other-code-modifying-functionality). If you do not see yourself represented, [file an issue](https://github.com/cloudflare/workers-sdk/issues/new/choose) and we can assist you with your specific situation and improve this guide for future readers. ### I use `[build]` to run webpack (or another bundler) external to Wrangler. Wrangler v2 supports the `[build]` key, so your Workers will continue to build using your own setup. ### I use `type = webpack`, but do not provide my own configuration and let Wrangler take care of it. Wrangler will continue to take care of it. Remove `type = webpack` from your Wrangler file. ### I use `type = webpack` and `webpack_config = <path/to/webpack.config.js>` to handle JSX, TypeScript, WebAssembly, HTML files, and other non-standard filetypes. As of Wrangler v2, Wrangler has built-in support for this use case. Refer to [Bundling](/workers/wrangler/bundling/) for more details. Wrangler's built-in bundler handles JSX and TypeScript. You can `import` any modules you need into your code and Wrangler includes them in the built Worker automatically. 
You should remove the `type` and `webpack_config` keys from your Wrangler file. ### I use `type = webpack` and `webpack_config = <path/to/webpack.config.js>` to perform code-transforms and/or other code-modifying functionality. Wrangler v2 drops support for project types, including `type = webpack` and configuration via the `webpack_config` key. If your webpack configuration performs operations beyond adding loaders (for example, for TypeScript) you will need to maintain your custom webpack configuration. In the long term, you should [migrate to an external `[build]` process](/workers/wrangler/custom-builds/). In the short term, it is still possible to reproduce Wrangler v1's build steps in newer versions of Wrangler by following the instructions below. 1. Add [wranglerjs-compat-webpack-plugin](https://www.npmjs.com/package/wranglerjs-compat-webpack-plugin) as a `devDependency`. [wrangler-js](https://www.npmjs.com/package/wrangler-js), shipped as a separate library from [Wrangler v1](https://www.npmjs.com/package/@cloudflare/wrangler/v/1.19.11), is a Node script that configures and executes [webpack 4](https://unpkg.com/browse/wrangler-js@0.1.11/package.json) for you. When you set `type = webpack`, Wrangler v1 would execute this script for you. We have ported the functionality over to a new package, [wranglerjs-compat-webpack-plugin](https://www.npmjs.com/package/wranglerjs-compat-webpack-plugin), which you can use as a [webpack plugin](https://v4.webpack.js.org/configuration/plugins/). To do that, you will need to add it as a dependency: ``` npm install --save-dev webpack@^4.46.0 webpack-cli wranglerjs-compat-webpack-plugin # or yarn add --dev webpack@4.46.0 webpack-cli wranglerjs-compat-webpack-plugin ``` You should see this reflected in your `package.json` file: ```json { "name": "my-worker", "version": "x.y.z", // ... "devDependencies": { // ... "wranglerjs-compat-webpack-plugin": "^x.y.z", "webpack": "^4.46.0", "webpack-cli": "^x.y.z" } } ``` 2. 
Add `wranglerjs-compat-webpack-plugin` to `webpack.config.js`. Modify your `webpack.config.js` file to include the plugin you just installed. ```js const { WranglerJsCompatWebpackPlugin, } = require("wranglerjs-compat-webpack-plugin"); module.exports = { // ... plugins: [new WranglerJsCompatWebpackPlugin()], }; ``` 3. Add a build script to your `package.json`. ```json { "name": "my-worker", "version": "2.0.0", // ... "scripts": { "build": "webpack" // <-- Add this line! // ... } } ``` 4. Remove unsupported entries from your [Wrangler configuration file](/workers/wrangler/configuration/). Remove the `type` and `webpack_config` keys from your Wrangler file, as they are not supported anymore. <WranglerConfig> ```toml # Remove these! type = "webpack" webpack_config = "webpack.config.js" ``` </WranglerConfig> 5. Tell Wrangler how to bundle your Worker. Wrangler no longer has any knowledge of how to build your Worker. You will need to tell it how to call webpack and where to look for webpack's output. This translates into two fields: <WranglerConfig> ```toml main = "./worker/script.js" # by default, or whatever file webpack outputs [build] command = "npm run build" # or "yarn build" # ... ``` </WranglerConfig> 6. Test your project. Try running `npx wrangler deploy` to test that your configuration works as expected. --- # Migrate from Wrangler v1 to v2 URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/ import { DirectoryListing } from "~/components"; This guide details how to migrate from Wrangler v1 to v2. <DirectoryListing /> --- # 2. Update to Wrangler v2 URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/update-v1-to-v2/ This document describes the steps to migrate a project from Wrangler v1 to Wrangler v2. Before updating your Wrangler version, review and complete [Migrate webpack projects from Wrangler version 1](/workers/wrangler/migration/v1-to-v2/eject-webpack/) if it applies to your project. 
Wrangler v2 ships with new features and improvements that may require some changes to your configuration. The CLI itself will guide you through the upgrade process. <div style="position: relative; padding-top: 56.25%;"> <iframe src="https://iframe.videodelivery.net/2a60561afea1159f7dd270fd9dce999f?poster=https%3A%2F%2Fcloudflarestream.com%2F2a60561afea1159f7dd270fd9dce999f%2Fthumbnails%2Fthumbnail.jpg%3Ftime%3D%26height%3D600" style="border: none; position: absolute; top: 0; left: 0; height: 100%; width: 100%;" allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;" allowfullscreen="true" ></iframe> </div> :::note To learn more about the improvements to Wrangler, refer to [Wrangler v1 and v2 comparison](/workers/wrangler/deprecations/#wrangler-v1-and-v2-comparison-tables). ::: ## Update Wrangler version ### 1. Uninstall Wrangler v1 If you had previously installed Wrangler v1 globally using npm, you can uninstall it with: ```sh npm uninstall -g @cloudflare/wrangler ``` If you used Cargo to install Wrangler v1, you can uninstall it with: ```sh cargo uninstall wrangler ``` ### 2. Install Wrangler Now, install the latest version of Wrangler. ```sh npm install -g wrangler ``` ### 3. Verify your install To check that you have installed the correct Wrangler version, run: ```sh npx wrangler --version ``` ## Test Wrangler v2 on your previous projects Now you will test that Wrangler v2 can build your Wrangler v1 project. In most cases, it will build just fine. If there are errors, the command line should instruct you with exactly what to change to get it to build. If you would like to read more on the deprecated [Wrangler configuration file](/workers/wrangler/configuration/) fields that cause Wrangler v2 to error, refer to [Deprecations](/workers/wrangler/deprecations/). Run the `wrangler dev` command. This will show any warnings or errors that should be addressed. 
Note that in most cases, the messages will include actionable instructions on how to resolve the issue. ```sh npx wrangler dev ``` - Errors need to be fixed before Wrangler can build your Worker. - In most cases, you will only see warnings. These do not stop Wrangler from building your Worker, but consider updating the configuration to remove them. Here is an example of some warnings and errors: ```bash ⛅️ wrangler 2.x ------------------------------------------------------- ▲ [WARNING] Processing wrangler.toml configuration: - 😶 Ignored: "type": Most common features now work out of the box with wrangler, including modules, jsx, typescript, etc. If you need anything more, use a custom build. - Deprecation: "zone_id": This is unnecessary since we can deduce this from routes directly. - Deprecation: "build.upload.format": The format is inferred automatically from the code. ✘ [ERROR] Processing wrangler.toml configuration: - Expected "route" to be either a string, or an object with shape { pattern, zone_id | zone_name }, but got "". ``` ## Deprecations Refer to [Deprecations](/workers/wrangler/deprecations/) for more details on what is no longer supported. --- # Authentication URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/authentication/ import { Render } from "~/components"; <Render file="wrangler-v1-deprecation" /> ## Background In Cloudflare’s system, a user can have multiple accounts and zones. As a result, your user is configured globally on your machine via a single Cloudflare Token. Your account(s) and zone(s) will be configured per project, but will use your Cloudflare Token to authenticate all API calls. A configuration file is created in a `.wrangler` directory in your computer’s home directory. --- ### Using commands To set up Wrangler to work with your Cloudflare user, use the following commands: - `login`: a command that opens a Cloudflare account login page to authorize Wrangler. 
- `config`: an alternative to `login` that prompts you to enter your `email` and `api` key. - `whoami`: run this command to confirm that your configuration is appropriately set up. When successful, this command will print out your account email and your `account_id` needed for your project's Wrangler file. ### Using environment variables You can also configure your global user with environment variables. This is the preferred method for using Wrangler in CI (continuous integration) environments. To customize the authentication tokens that Wrangler uses, you may provide the `CF_ACCOUNT_ID` and `CF_API_TOKEN` environment variables when running any Wrangler command. The account ID may be obtained from the Cloudflare dashboard in **Overview** and you may [create or reuse an existing API token](#generate-tokens). ```sh CF_ACCOUNT_ID=accountID CF_API_TOKEN=veryLongAPIToken wrangler publish ``` Alternatively, you may use the `CF_EMAIL` and `CF_API_KEY` environment variable combination instead: ```sh CF_EMAIL=cloudflareEmail CF_API_KEY=veryLongAPI wrangler publish ``` You can also specify or override the target Zone ID by defining the `CF_ZONE_ID` environment variable. Defining environment variables inline will override the default credentials stored in `wrangler config` or in your Wrangler file. --- ## Generate Tokens ### API token 1. In **Overview**, select [**Get your API token**](/fundamentals/api/get-started/create-token/). 2. After being taken to the **Profile** page, select **Create token**. 3. Under the **API token templates** section, find the **Edit Cloudflare Workers** template and select **Use template**. 4. Fill out the rest of the fields and then select **Continue to summary**, where you can select **Create Token** and issue your token for use. ### Global API Key 1. In **Overview**, select **Get your API token**. 2. After being taken to the **Profile** page, scroll to **API Keys**. 3. 
Select **View** to copy your **Global API Key**.\* :::caution[Warning] \* Treat your Global API Key like a password. It should not be stored in version control or in your code – use environment variables if possible. ::: --- ## Use Tokens After getting your token or key, you can set up your default credentials on your local machine by running `wrangler config`: ```sh wrangler config ``` ```sh output Enter API token: superlongapitoken ``` Use the `--api-key` flag to instead configure with email and global API key: ```sh wrangler config --api-key ``` ```sh output Enter email: testuser@example.com Enter global API key: superlongapikey ``` --- # Configuration URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/ import { Render, WranglerConfig } from "~/components" <Render file="wrangler-v1-deprecation" /> ## Background Your project will need some configuration before you can publish your Worker. Configuration is done through changes to keys and values stored in a Wrangler file located in the root of your project directory. You must manually edit this file to edit your keys and values before you can publish. *** ## Environments The top-level configuration is the collection of values you specify at the top of your Wrangler file. These values will be inherited by all environments, unless otherwise defined in the environment. The layout of a top-level configuration in a Wrangler file is displayed below: <WranglerConfig> ```toml name = "your-worker" type = "javascript" account_id = "your-account-id" # This field specifies that the Worker # will be deployed to a *.workers.dev domain workers_dev = true # -- OR -- # These fields specify that the Worker # will deploy to a custom domain zone_id = "your-zone-id" routes = ["example.com/*"] ``` </WranglerConfig> Environment configuration (optional): the configuration values you specify under an `[env.name]` in your Wrangler file. 
Environments allow you to deploy the same project to multiple places under multiple names. These environments are utilized with the `--env` or `-e` flag on the [commands](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/) that are deploying live Workers: * `build` * `dev` * `preview` * `publish` * `secret` Some environment properties can be [*inherited*](#keys) from the top-level configuration, but if new values are configured in an environment, they will always override those at the top level. An example of an `[env.name]` configuration looks like this: <WranglerConfig> ```toml type = "javascript" name = "your-worker" account_id = "your-account-id" [vars] FOO = "default FOO value" BAR = "default BAR value" [[kv_namespaces]] binding = "FOO" id = "1a..." preview_id = "1b..." [env.helloworld] # Now adding configuration keys for the "helloworld" environment. # These new values will override the top-level configuration. name = "your-worker-helloworld" account_id = "your-other-account-id" [env.helloworld.vars] FOO = "env-helloworld FOO value" BAR = "env-helloworld BAR value" [[env.helloworld.kv_namespaces]] # Redeclare kv namespace bindings for each environment # NOTE: In this case, passing new IDs because new `account_id` value. binding = "FOO" id = "888..." preview_id = "999..." ``` </WranglerConfig> To deploy this example Worker to the `helloworld` environment, you would run `wrangler publish --env helloworld`. *** ## Keys There are three types of keys in a Wrangler file: * Top level only keys are required to be configured at the top level of your Wrangler file only; multiple environments on the same project must share this key's value. * Inherited keys can be configured at the top level and/or environment. If the key is defined only at the top level, the environment will use the key's value from the top level. If the key is defined in the environment, the environment value will override the top-level value. 
* Non-inherited keys must be defined for every environment individually. * `name` inherited required * The name of your Worker script. If inherited, your environment name will be appended to the top level. * `type` top level required * Specifies how `wrangler build` will build your project. There are three options: `javascript`, `webpack`, and `rust`. `javascript` checks for a build command specified in the `[build]` section, `webpack` builds your project using webpack v4, and `rust` compiles the Rust in your project to WebAssembly. :::note Cloudflare will continue to support `rust` and `webpack` project types, but recommends using the `javascript` project type and specifying a custom [`build`](#build) section. ::: * `account_id` inherited required * This is the ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the `zone_id` you provide, if you provide one. It can also be specified through the `CF_ACCOUNT_ID` environment variable. * `zone_id` inherited optional * This is the ID of the zone or domain you want to run your Worker on. It can also be specified through the `CF_ZONE_ID` environment variable. This key is optional if you are using only a `*.workers.dev` subdomain. * `workers_dev` inherited optional * This is a boolean flag that specifies if your Worker will be deployed to your [`*.workers.dev`](https://workers.dev) subdomain. If omitted, it defaults to false. * `route` not inherited optional * A route, specified by URL pattern, on your zone that you would like to run your Worker on. <br />`route = "http://example.com/*"`. A `route` OR `routes` key is only required if you are not using a [`*.workers.dev`](https://workers.dev) subdomain. * `routes` not inherited optional * A list of routes you would like to use your Worker on. 
These follow exactly the same rules as `route`, but you can specify a list of them.<br />`routes = ["http://example.com/hello", "http://example.com/goodbye"]`. A `route` OR `routes` key is only required if you are not using a `*.workers.dev` subdomain. * `webpack_config` inherited optional * This is the path to a custom webpack configuration file for your Worker. You must specify this field to use a custom webpack configuration, otherwise Wrangler will use a default configuration for you. Refer to the [Wrangler webpack page](/workers/wrangler/migration/v1-to-v2/eject-webpack/) for more information. * `vars` not inherited optional * An object containing text variables that can be directly accessed in a Worker script. * `kv_namespaces` not inherited optional * These specify any [Workers KV](#kv_namespaces) Namespaces you want to access from inside your Worker. * `site` inherited optional * Determines the local folder to upload and serve from a Worker. * `dev` not inherited optional * Arguments for `wrangler dev` that configure the local server. * `triggers` inherited optional * Configures cron triggers for running a Worker on a schedule. * `usage_model` inherited optional * Specifies the [Usage Model](/workers/platform/pricing/#workers) for your Worker. There are two options - [`bundled`](/workers/platform/limits/#worker-limits) and [`unbound`](/workers/platform/limits/#worker-limits). For newly created Workers, if the Usage Model is omitted it will be set to the [default Usage Model set on the account](https://dash.cloudflare.com/?account=workers/default-usage-model). For existing Workers, if the Usage Model is omitted, it will be set to the Usage Model configured in the dashboard for that Worker. * `build` top level optional * Configures a custom build step to be run by Wrangler when building your Worker. Refer to the [custom builds documentation](#build) for more details. 
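Taken together, a top-level configuration that uses several of the keys above might look like the following sketch. The IDs, route, cron schedule, and build command are placeholders, not values you can deploy with:

<WranglerConfig>

```toml
name = "my-worker"
type = "javascript"              # use a custom [build] step
account_id = "your-account-id"   # placeholder
zone_id = "your-zone-id"         # placeholder
routes = ["example.com/api/*"]   # placeholder route on the zone above
usage_model = "bundled"

[triggers]
crons = ["0 0 * * *"]            # run once a day at midnight UTC

[build]
command = "npm run build"        # placeholder build command
```

</WranglerConfig>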
### vars The `vars` key defines a table of [environment variables](/workers/configuration/environment-variables/) provided to your Worker script. All values are plaintext values. Usage: <WranglerConfig> ```toml [vars] FOO = "some value" BAR = "some other string" ``` </WranglerConfig> The table keys are available to your Worker as global variables, which will contain their associated values. ```js // Worker code: console.log(FOO); //=> "some value" console.log(BAR); //=> "some other string" ``` Alternatively, you can define `vars` using an inline table format. This style should not include any new lines to be considered a valid TOML configuration: <WranglerConfig> ```toml vars = { FOO = "some value", BAR = "some other string" } ``` </WranglerConfig> :::note Secrets should be handled using the [`wrangler secret`](/workers/wrangler/commands/#secret) command. ::: ### kv\_namespaces `kv_namespaces` defines a list of KV namespace bindings for your Worker. Usage: <WranglerConfig> ```toml kv_namespaces = [ { binding = "FOO", id = "0f2ac74b498b48028cb68387c421e279", preview_id = "6a1ddb03f3ec250963f0a1e46820076f" }, { binding = "BAR", id = "068c101e168d03c65bddf4ba75150fb0", preview_id = "fb69528dbc7336525313f2e8c3b17db0" } ] ``` </WranglerConfig> Alternatively, you can define `kv_namespaces` like so: <WranglerConfig> ```toml [[kv_namespaces]] binding = "FOO" preview_id = "abc456" id = "abc123" [[kv_namespaces]] binding = "BAR" preview_id = "xyz456" id = "xyz123" ``` </WranglerConfig> Much like environment variables and secrets, the `binding` names are available to your Worker as global variables. ```js // Worker script: let value = await FOO.get("keyname"); //=> gets the value for "keyname" from //=> the FOO variable, which points to //=> the "0f2ac...e279" KV namespace ``` * `binding` required * The name of the global variable your code will reference. It will be provided as a [KV runtime instance](/kv/api/). 
* `id` required * The ID of the KV namespace that your `binding` should represent. Required for `wrangler publish`. * `preview_id` required * The ID of the KV namespace that your `binding` should represent during `wrangler dev` or `wrangler preview`. Required for `wrangler dev` and `wrangler preview`. :::note Creating your KV namespaces can be handled using Wrangler’s [KV Commands](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/#kv). You can also define your `kv_namespaces` using an [alternative TOML syntax](https://github.com/toml-lang/toml/blob/master/toml.md#user-content-table). ::: ### site A [Workers Site](/workers/configuration/sites/start-from-scratch) generated with [`wrangler generate --site`](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/#generate) or [`wrangler init --site`](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/#init). Usage: <WranglerConfig> ```toml [site] bucket = "./public" entry-point = "workers-site" ``` </WranglerConfig> * `bucket` required * The directory containing your static assets. It must be a path relative to your Wrangler file. Example: `bucket = "./public"` * `entry-point` optional * The location of your Worker script. The default location is `workers-site`. Example: `entry-point = "./workers-site"` * `include` optional * An exclusive list of `.gitignore`-style patterns that match file or directory names from your `bucket` location. Only matched items will be uploaded. Example: `include = ["upload_dir"]` * `exclude` optional * A list of `.gitignore`-style patterns that match files or directories in your `bucket` that should be excluded from uploads. Example: `exclude = ["ignore_dir"]` You can also define your `site` using an [alternative TOML syntax](https://github.com/toml-lang/toml/blob/master/toml.md#user-content-inline-table). #### Storage Limits For exceptionally large pages, Workers Sites may not be ideal. There is a 25 MiB limit per page or file. 
Additionally, Wrangler will create an asset manifest for your files that will count towards your script’s size limit. If you have too many files, you may not be able to use Workers Sites. #### Exclusively including files/directories If you want to include only a certain set of files or directories in your `bucket`, add an `include` field to your `[site]` section of your Wrangler file: <WranglerConfig> ```toml [site] bucket = "./public" entry-point = "workers-site" include = ["included_dir"] # must be an array. ``` </WranglerConfig> Wrangler will only upload files or directories matching the patterns in the `include` array. #### Excluding files/directories If you want to exclude files or directories in your `bucket`, add an `exclude` field to your `[site]` section of your Wrangler file: <WranglerConfig> ```toml [site] bucket = "./public" entry-point = "workers-site" exclude = ["excluded_dir"] # must be an array. ``` </WranglerConfig> Wrangler will ignore files or directories matching the patterns in the `exclude` array when uploading assets to Workers KV. #### Include > Exclude If you provide both `include` and `exclude` fields, the `include` field will be used and the `exclude` field will be ignored. #### Default ignored entries Wrangler will always ignore: * `node_modules` * Hidden files and directories * Symlinks #### More about include/exclude patterns Refer to the [gitignore documentation](https://git-scm.com/docs/gitignore) to learn more about the standard matching patterns. #### Customizing your Sites Build Workers Sites projects use webpack by default. Though you can [bring your own webpack configuration](/workers/wrangler/migration/v1-to-v2/eject-webpack/), be aware of your `entry` and `context` settings. You can also use the `[build]` section with Workers Sites, as long as your build step will resolve dependencies in `node_modules`. Refer to the [custom builds](#build) section for more information. 
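For example, a Sites project that compiles its own assets before upload might combine a `[site]` section with a custom `[build]` step. This is a sketch; the directory names, exclude pattern, and build command are illustrative:

<WranglerConfig>

```toml
[site]
bucket = "./public"          # directory produced by the build step below
entry-point = "workers-site"
exclude = ["*.map"]          # skip source maps when uploading to Workers KV

[build]
command = "npm install && npm run build"
```

</WranglerConfig>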
### triggers A set of cron triggers used to call a Worker on a schedule. Usage: <WranglerConfig> ```toml [triggers] crons = ["0 0 * JAN-JUN FRI", "0 0 LW JUL-DEC *"] ``` </WranglerConfig> * `crons` optional * A set of [cron expressions](https://crontab.guru/), where each expression is a separate schedule to run the Worker on. ### dev Arguments for `wrangler dev` can be configured here so you do not have to repeatedly pass them. Usage: <WranglerConfig> ```toml [dev] port = 9000 local_protocol = "https" ``` </WranglerConfig> * `ip` optional * IP address for the local `wrangler dev` server to listen on, defaults to `127.0.0.1`. * `port` optional * Port for the local `wrangler dev` server to listen on, defaults to `8787`. * `local_protocol` optional * Protocol that the local `wrangler dev` server listens to requests on, defaults to `http`. * `upstream_protocol` optional * Protocol that `wrangler dev` forwards requests on, defaults to `https`. ### build A custom build command for your project. There are two configurations based on the format of your Worker: `service-worker` and `modules`. #### Service Workers This section is for customizing Workers with the `service-worker` format. These Workers use `addEventListener` and look like the following: ```js addEventListener("fetch", (event) => { event.respondWith(new Response("I'm a service Worker!")); }); ``` Usage: <WranglerConfig> ```toml [build] command = "npm install && npm run build" [build.upload] format = "service-worker" ``` </WranglerConfig> ##### `[build]` * `command` optional * The command used to build your Worker. On Linux and macOS, the command is executed in the `sh` shell, and in the `cmd` shell on Windows. The `&&` and `||` shell operators may be used. * `cwd` optional * The working directory for commands, defaults to the project root directory. * `watch_dir` optional * The directory to watch for changes while using `wrangler dev`, defaults to `src` relative to the project root directory. 
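For example, a `[build]` section that uses all three keys might look like the following sketch; the `client` directory is illustrative:

<WranglerConfig>

```toml
[build]
command = "npm install && npm run build"  # executed in sh (or cmd on Windows)
cwd = "client"                            # run the build from the client/ subdirectory
watch_dir = "client/src"                  # rebuild when files in client/src change
```

</WranglerConfig>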
##### `[build.upload]` * `format` required * The format of the Worker script, must be `"service-worker"`. :::note Ensure the `main` field in your `package.json` references the Worker you want to publish. ::: #### Modules Workers now supports the ES Modules syntax. This format allows you to export a collection of files and/or modules, unlike the Service Worker format, which requires a single file to be uploaded. Module Workers `export` their event handlers instead of using `addEventListener` calls. Modules receive all bindings (KV Namespaces, Environment Variables, and Secrets) as arguments to the exported handlers. With the Service Worker format, these bindings are available as global variables. :::note Refer to the [`fetch()` handler documentation](/workers/runtime-apis/handlers/fetch) to learn more about the differences between the Service Worker and Module Worker formats. ::: An uploaded module may `import` other uploaded ES Modules. If using the CommonJS format, you may `require` other uploaded CommonJS modules. ```js import html from "./index.html"; export default { // * request is the same as `event.request` from the service worker format // * waitUntil() and passThroughOnException() are accessible from `ctx` instead of `event` from the service worker format // * env is where bindings like KV namespaces, Durable Object namespaces, Config variables, and Secrets // are exposed, instead of them being placed in global scope. async fetch(request, env, ctx) { const headers = { "Content-Type": "text/html;charset=UTF-8" }; return new Response(html, { headers }); }, }; ``` To create a Workers project using Wrangler and Modules, add a `[build]` section: <WranglerConfig> ```toml [build] command = "npm install && npm run build" [build.upload] format = "modules" main = "./worker.mjs" ``` </WranglerConfig> ##### `[build]` * `command` optional * The command used to build your Worker.
On Linux and macOS, the command is executed in the `sh` shell; on Windows, in the `cmd` shell. The `&&` and `||` shell operators may be used. * `cwd` optional * The working directory for commands, defaults to the project root directory. * `watch_dir` optional * The directory to watch for changes while using `wrangler dev`, defaults to `src` relative to the project root directory. ##### `[build.upload]` * `format` required * The format of the Worker script, must be `"modules"`. * `dir` optional * The directory you wish to upload your modules from, defaults to `dist` relative to the project root directory. * `main` required * The relative path of the main module from `dir`, including the `./` prefix. The main module must be an ES module. For projects with a build script, this usually refers to the output of your JavaScript bundler. :::note If your project is written using CommonJS modules, you will need to re-export your handlers and Durable Object classes using an ES module shim. Refer to the [modules-webpack-commonjs](https://github.com/cloudflare/modules-webpack-commonjs) template as an example. ::: * `rules` optional * An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use Text, Data, and CompiledWasm modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`. Defaults: <WranglerConfig> ```toml [build.upload] format = "modules" main = "./worker.mjs" # You do not need to include these default rules in your Wrangler configuration file; they are implicit. # The default rules are treated as the last two rules in the list.
[[build.upload.rules]] type = "ESModule" globs = ["**/*.mjs"] [[build.upload.rules]] type = "CommonJS" globs = ["**/*.js", "**/*.cjs"] ``` </WranglerConfig> * `type` required * The module type; one of `ESModule`, `CommonJS`, `Text`, `Data`, or `CompiledWasm`. * `globs` required * UNIX-style [glob rules](https://docs.rs/globset/0.4.6/globset/#syntax) that are used to determine the module type to use for a given file in `dir`. Globs are matched against the module's relative path from `build.upload.dir` without the `./` prefix. Rules are evaluated in order, starting at the top. * `fallthrough` optional * If set to `true`, further rules for this module type will be considered. If not specified or set to `false`, further rules for this module type will be ignored. *** ## Example To illustrate how these levels are applied, here is a Wrangler file using multiple environments: <WranglerConfig> ```toml # top level configuration type = "javascript" name = "my-worker-dev" account_id = "12345678901234567890" zone_id = "09876543210987654321" route = "dev.example.com/*" usage_model = "unbound" kv_namespaces = [ { binding = "FOO", id = "b941aabb520e61dcaaeaa64b4d8f8358", preview_id = "03c8c8dd3b032b0528f6547d0e1a83f3" }, { binding = "BAR", id = "90e6f6abd5b4f981c748c532844461ae", preview_id = "e5011a026c5032c09af62c55ecc3f438" } ] [build] command = "webpack" [build.upload] format = "service-worker" [site] bucket = "./public" entry-point = "workers-site" [dev] ip = "0.0.0.0" port = 9000 local_protocol = "http" upstream_protocol = "https" # environment configuration [env.staging] name = "my-worker-staging" route = "staging.example.com/*" kv_namespaces = [ { binding = "FOO", id = "0f2ac74b498b48028cb68387c421e279" }, { binding = "BAR", id = "068c101e168d03c65bddf4ba75150fb0" } ] # environment configuration [env.production] workers_dev = true kv_namespaces = [ { binding = "FOO", id = "0d2ac74b498b48028cb68387c421e233" }, { binding = "BAR", id = "0d8c101e168d03c65bddf4ba75150f33" } ] ```
</WranglerConfig> --- # Commands URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/ import { Render, Type, MetaInfo, WranglerConfig } from "~/components"; <Render file="wrangler-v1-deprecation" /> Complete list of all commands available for [`wrangler`](https://github.com/cloudflare/wrangler-legacy), the Workers CLI. --- ## generate Scaffold a Cloudflare Workers project from a public GitHub repository. ```sh wrangler generate [$NAME] [$TEMPLATE] [--type=$TYPE] [--site] ``` Default values indicated by =value. - `$NAME` =worker optional - The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/) file. - `$TEMPLATE` =[https://github.com/cloudflare/worker-template](https://github.com/cloudflare/worker-template) optional - The GitHub URL of the [repository to use as the template](https://github.com/cloudflare/worker-template) for generating the project. - `--type=$TYPE` =webpack optional - The type of project; one of `webpack`, `javascript`, or `rust`. - `--site` optional - When defined, the default `$TEMPLATE` value is changed to [`cloudflare/workers-sdk/templates/worker-sites`](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-sites). This scaffolds a [Workers Site](/workers/configuration/sites/start-from-scratch) project. --- ## init Create a skeleton [Wrangler configuration file](/workers/wrangler/configuration/) in an existing directory. This command can be used as an alternative to `generate` if you prefer to clone a template repository yourself or you already have a JavaScript project and would like to use Wrangler. ```sh wrangler init [$NAME] [--type=$TYPE] [--site] ``` Default values indicated by =value. - `$NAME` =(Name of working directory) optional - The name of the Workers project. 
This is both the directory name and `name` property in the generated [Wrangler configuration](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/) file. - `--type=$TYPE` =webpack optional - The type of project; one of `webpack`, `javascript`, or `rust`. - `--site` optional - When defined, the default `$TEMPLATE` value is changed to [`cloudflare/workers-sdk/templates/worker-sites`](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-sites). This scaffolds a [Workers Site](/workers/configuration/sites/start-from-scratch) project. --- ## build Build your project (if applicable). This command looks at your Wrangler file and reacts to the [`"type"` value](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#keys) specified. When using `type = "webpack"`, Wrangler will build the Worker using its internal webpack installation. When using `type = "javascript"`, the [`build.command`](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#build-1), if defined, will run. ```sh wrangler build [--env $ENVIRONMENT_NAME] ``` - `--env` optional - If defined, Wrangler will load the matching environment's configuration before building. Refer to [Environments](/workers/wrangler/environments/) for more information. --- ## login Authorize Wrangler with your Cloudflare account. This will open a login page in your browser and request your account access permissions. This command is the alternative to `wrangler config` and it uses OAuth tokens. ```sh wrangler login [--scopes-list] [--scopes $SCOPES] ``` All of the arguments and flags to this command are optional: - `--scopes-list` optional - List all the available OAuth scopes with descriptions. - `--scopes $SCOPES` optional - Allows you to choose your set of OAuth scopes. The set of scopes must be entered in a whitespace-separated list, for example, `wrangler login --scopes account:read user:read`. `wrangler login` uses all the available scopes by default if no flags are provided.
--- ## logout Remove Wrangler's authorization for accessing your account. This command will invalidate your current OAuth token and delete the configuration file, if present. ```sh wrangler logout ``` This command only invalidates OAuth tokens acquired through the `wrangler login` command. However, it will try to delete the configuration file regardless of your authorization method. If you wish to delete your API token, log in to the Cloudflare dashboard, go to **Overview** > **Get your API token** in the right-side menu, select the three-dot menu on your Wrangler token, and select **Delete**. --- ## config Configure Wrangler so that it may acquire a Cloudflare API token or Global API key, instead of OAuth tokens, in order to access and manage account resources. ```sh wrangler config [--api-key] ``` - `--api-key` optional - To provide your email and global API key instead of a token. (This is not recommended for security reasons.) You can also use environment variables to authenticate, or `wrangler login` to authorize with OAuth tokens. --- ## publish Publish your Worker to Cloudflare. Several keys in your Wrangler file determine whether you are publishing to a `*.workers.dev` subdomain or a custom domain. However, custom domains must be proxied (orange-clouded) through Cloudflare. Refer to the [Get started guide](/workers/configuration/routing/custom-domains/) for more information. ```sh wrangler publish [--env $ENVIRONMENT_NAME] ``` - `--env` optional - If defined, Wrangler will load the matching environment's configuration before building and deploying. Refer to [Environments](/workers/wrangler/environments/) for more information. To use this command, the following fields are required in your Wrangler file: - `name` string - The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/) file.
- `type` string - The type of project; one of `webpack`, `javascript`, or `rust`. - `account_id` string - The Cloudflare account ID. This can be found in the Cloudflare dashboard, for example, `account_id = "a655bacaf2b4cad0e2b51c5236a6b974"`. You can publish to [\<your-worker>.\<your-subdomain>.workers.dev](https://workers.dev) or to a custom domain. When you publish changes to an existing Worker script, all new requests will automatically route to the updated version of the Worker without downtime. Any inflight requests will continue running on the previous version until completion. Once all inflight requests have completed, the previous Worker version will be purged and will no longer handle requests. ### Publishing to workers.dev To publish to [`*.workers.dev`](https://workers.dev), you will first need to have a subdomain registered. You can register a subdomain by executing the [`wrangler subdomain`](#subdomain) command. After you have registered a subdomain, add `workers_dev` to your Wrangler file. - `workers_dev` bool - When `true`, indicates that the Worker should be deployed to a `*.workers.dev` domain. ### Publishing to your own domain To publish to your own domain, specify these three fields in your Wrangler file. - `zone_id` string - The Cloudflare zone ID, for example, `zone_id = "b6558acaf2b4cad1f2b51c5236a6b972"`, which can be found in the [Cloudflare dashboard](https://dash.cloudflare.com). - `route` string - The route you would like to publish to, for example, `route = "example.com/my-worker/*"`. - `routes` Array - The routes you would like to publish to, for example, `routes = ["example.com/foo/*", "example.com/bar/*"]`. :::note Make sure to use only `route` or `routes`, not both. ::: ### Publishing the same code to multiple domains To publish your code to multiple domains, refer to the [documentation for environments](/workers/wrangler/environments/).
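As a sketch of that pattern, here is a hypothetical configuration (zone IDs and routes are illustrative) that publishes the same code to two domains by overriding `zone_id` and `route` per environment:

<WranglerConfig>

```toml
name = "my-worker"
type = "javascript"
account_id = "<account id here>"
zone_id = "<zone id for example.com>"
route = "example.com/my-worker/*"

# `wrangler publish --env eu` deploys the same code to the second domain.
[env.eu]
zone_id = "<zone id for example.eu>"
route = "example.eu/my-worker/*"
```

</WranglerConfig>

A plain `wrangler publish` targets the top-level route, while passing `--env eu` targets the second domain.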
--- ## dev `wrangler dev` is a command that establishes a connection between `localhost` and a global network server that operates your Worker in development. A `cloudflared` tunnel forwards all requests to the global network server, which continuously updates as your Worker code changes. This allows full access to Workers KV, Durable Objects and other Cloudflare developer platform products. The `dev` command is a way to test your Worker while developing. ```sh wrangler dev [--env $ENVIRONMENT_NAME] [--ip <ip>] [--port <port>] [--host <host>] [--local-protocol <http|https>] [--upstream-protocol <http|https>] ``` - `--env` optional - If defined, Wrangler will load the matching environment's configuration. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--ip` optional - The IP to listen on, defaults to `127.0.0.1`. - `--port` optional - The port to listen on, defaults to `8787`. - `--host` optional - The host to forward requests to, defaults to the zone of the project or to `tutorial.cloudflareworkers.com` if unauthenticated. - `--local-protocol` optional - The protocol to listen to requests on, defaults to `http`. - `--upstream-protocol` optional - The protocol to forward requests to host on, defaults to `https`. These arguments can also be set in your Wrangler file. Refer to the [`wrangler dev` configuration](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#dev) documentation for more information. ### Usage You should run `wrangler dev` from your Worker directory. Wrangler will run a local server that accepts requests, executes your Worker, and forwards them to a host. If you want to use a host other than your zone or `tutorial.cloudflareworkers.com`, you can specify it with `--host example.com`. ```sh wrangler dev ``` ```sh output 💠 JavaScript project found. Skipping unnecessary build!
💠 watching "./" 👂 Listening on http://127.0.0.1:8787 ``` With `wrangler dev` running, you can send HTTP requests to `localhost:8787` and your Worker should execute as expected. You will also see `console.log` messages and exceptions appearing in your terminal. If either of these things does not happen, or you think the output is incorrect, [file an issue](https://github.com/cloudflare/wrangler-legacy). --- ## tail Start a session to livestream logs from a deployed Worker. ```sh wrangler tail [--format $FORMAT] [--status $STATUS] [OPTIONS] ``` - `--format $FORMAT` json|pretty - The format of the log entries. - `--status $STATUS` - Filter by invocation status \[possible values: `ok`, `error`, `canceled`]. - `--header $HEADER` - Filter by HTTP header. - `--method $METHOD` - Filter by HTTP method. - `--sampling-rate $RATE` - Add a sampling rate, the percentage of requests to log. - `--search $SEARCH` - Filter by a text match in `console.log` messages. After starting `wrangler tail` in a directory with a project, you will receive a live feed of console and exception logs for each request your Worker receives. Like all Wrangler commands, run `wrangler tail` from your Worker’s root directory (the directory with your Wrangler file). :::caution[Legacy issues with existing cloudflared configuration] `wrangler tail` versions older than version 1.19.0 use `cloudflared` to run. Update to the [latest Wrangler version](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/install-update/) to avoid any issues. ::: --- ## preview Preview your project using the [Cloudflare Workers preview service](https://cloudflareworkers.com/). ```sh wrangler preview [--watch] [--env $ENVIRONMENT_NAME] [--url $URL] [$METHOD] [$BODY] ``` Default values indicated by =value. - `--env $ENVIRONMENT_NAME` optional - If defined, Wrangler will load the matching environment's configuration. Refer to [Environments](/workers/wrangler/environments/) for more information.
- `--watch` recommended - When enabled, any changes to the Worker project will continually update the preview service with the newest version of your project. By default, `wrangler preview` will only bundle your project a single time. - `$METHOD` ="GET" optional - The type of request to preview your Worker with (`GET`, `POST`). - `$BODY` ="Null" optional - The body string to post to your preview Worker request. For example, `wrangler preview post hello=hello`. ### kv_namespaces If you are using [kv_namespaces](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#kv_namespaces) with `wrangler preview`, you will need to specify a `preview_id` in your Wrangler file before you can start the session. This is so that you do not accidentally write changes to your production namespace while you are developing. You may make `preview_id` equal to `id` if you would like to preview with your production namespace, but you should ensure that you are not writing values to KV that would break your production Worker. To create a `preview_id` run: ```sh wrangler kv:namespace create --preview "NAMESPACE" ``` ### Previewing on Windows Subsystem for Linux (WSL 1/2) #### Setting $BROWSER to your browser binary WSL is a Linux environment, so Wrangler attempts to invoke `xdg-open` to open your browser. To make `wrangler preview` work with WSL, you should set your `$BROWSER` to the path of your browser binary: ```sh export BROWSER="/mnt/c/tools/firefox.exe" wrangler preview ``` Spaces in filepaths are not common in Linux, and some programs like `xdg-open` will break on [paths with spaces](https://github.com/microsoft/WSL/issues/3632#issuecomment-432821522). 
You can work around this by linking the binary to your `/usr/local/bin`: ```sh ln -s "/mnt/c/Program Files/Mozilla Firefox/firefox.exe" firefox export BROWSER=firefox ``` #### Setting $BROWSER to `wsl-open` Another option is to install [wsl-open](https://github.com/4U6U57/wsl-open#standalone) and set the `$BROWSER` env variable to `wsl-open` via `wsl-open -w`. This ensures that `xdg-open` uses `wsl-open` when it attempts to open your browser. If you are using WSL 2, you will need to install `wsl-open` following their [standalone method](https://github.com/4U6U57/wsl-open#standalone) rather than through `npm`. This is because their npm package has not yet been updated with WSL 2 support. --- ## `route` List or delete a route associated with a domain: ```sh wrangler route list [--env $ENVIRONMENT_NAME] ``` Default values indicated by =value. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. This command will forward the JSON response from the [List Routes API](/api/resources/workers/subresources/routes/methods/list/). Each object within the JSON list will include the route id, route pattern, and the assigned Worker name for the route. Piping this through a tool such as `jq` will render the output nicely. ```sh wrangler route delete $ID [--env $ENVIRONMENT_NAME] ``` Default values indicated by =value. - `$ID` required - The hash of the route ID to delete. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. --- ## subdomain Create or change your [`*.workers.dev`](https://workers.dev) subdomain. ```sh wrangler subdomain <name> ``` --- ## secret Interact with your secrets. ### `put` Create or replace a secret. 
```sh wrangler secret put <name> --env ENVIRONMENT_NAME Enter the secret text you would like assigned to the variable name on the Worker named my-worker-ENVIRONMENT_NAME: ``` You will be prompted to input the secret's value. This command can receive piped input, so the following example is also possible: ```sh echo "-----BEGIN PRIVATE KEY-----\nM...==\n-----END PRIVATE KEY-----\n" | wrangler secret put PRIVATE_KEY ``` - `name` - The variable name to be accessible in the script. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. ### `delete` Delete a secret from a specific script. ```sh wrangler secret delete <name> --env ENVIRONMENT_NAME ``` - `name` - The variable name to be accessible in the script. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. ### `list` List all the secret names bound to a specific script. ```sh wrangler secret list --env ENVIRONMENT_NAME ``` - `--env $ENVIRONMENT_NAME` optional - If defined, only the specified environment's secrets will be listed. Refer to [Environments](/workers/wrangler/environments/) for more information. --- ## kv The `kv` subcommand allows you to store application data in the Cloudflare network to be accessed from Workers using [Workers KV](https://www.cloudflare.com/products/workers-kv/). KV operations are scoped to your account, so in order to use any of these commands, you must: - configure an `account_id` in your project's Wrangler file. - run all `wrangler kv:<command>` operations in your terminal from the project's root directory. ### Getting started To use Workers KV with your Worker, the first thing you must do is create a KV namespace. This is done with the `kv:namespace` subcommand.
The `kv:namespace` subcommand takes a new binding name as its argument. A Workers KV namespace will be created using a concatenation of your Worker’s name (from your Wrangler file) and the binding name you provide: ```sh wrangler kv:namespace create "MY_KV" ``` ```sh output 🌀 Creating namespace with title "my-site-MY_KV" ✨ Success! Add the following to your configuration file: kv_namespaces = [ { binding = "MY_KV", id = "e29b263ab50e42ce9b637fa8370175e8" } ] ``` Successful operations will print a new configuration block that should be copied into your Wrangler file. Add the output to the existing `kv_namespaces` configuration if already present. You can now access the binding from within a Worker: ```js let value = await MY_KV.get("my-key"); ``` To write a value to your KV namespace using Wrangler, run the `wrangler kv:key put` subcommand. ```sh wrangler kv:key put --binding=MY_KV "key" "value" ``` ```sh output ✨ Success ``` Instead of `--binding`, you may use `--namespace-id` to specify which KV namespace should receive the operation: ```sh wrangler kv:key put --namespace-id=e29b263ab50e42ce9b637fa8370175e8 "key" "value" ``` ```sh output ✨ Success ``` Additionally, KV namespaces can be used with environments. This is useful when you have code that refers to a KV binding like `MY_KV`, and you want to be able to have these bindings point to different namespaces (like one for staging and one for production). A Wrangler file with two environments: <WranglerConfig> ```toml [env.staging] kv_namespaces = [ { binding = "MY_KV", id = "e29b263ab50e42ce9b637fa8370175e8" } ] [env.production] kv_namespaces = [ { binding = "MY_KV", id = "a825455ce00f4f7282403da85269f8ea" } ] ``` </WranglerConfig> To insert a value into a specific KV namespace, you can use: ```sh wrangler kv:key put --env=staging --binding=MY_KV "key" "value" ``` ```sh output ✨ Success ``` Since `--namespace-id` is always unique (unlike binding names), you do not need to specify an `--env` argument.
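Inside a Worker, the binding exposes an asynchronous get/put interface. The following sketch uses a hypothetical in-memory stand-in for `MY_KV` (in a deployed service-worker format Worker, `MY_KV` is injected as a global by the runtime) so the flow can run anywhere:

```javascript
// Hypothetical in-memory stand-in that mimics the KV binding's async API.
// In a deployed Worker, MY_KV is provided by the runtime instead.
class MemoryKV {
  constructor() {
    this.store = new Map();
  }
  async put(key, value) {
    this.store.set(key, String(value));
  }
  async get(key) {
    // Like the real binding, a missing key resolves to null.
    return this.store.has(key) ? this.store.get(key) : null;
  }
}

const MY_KV = new MemoryKV();

async function demo() {
  await MY_KV.put("my-key", "hello");
  const value = await MY_KV.get("my-key"); // "hello"
  const missing = await MY_KV.get("absent"); // null
  return { value, missing };
}
```

Values written with `wrangler kv:key put` are read back in Worker code with exactly the `MY_KV.get("my-key")` call shown above.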
### Concepts Most `kv` commands require you to specify a namespace. A namespace can be specified in two ways: 1. With a `--binding`: ```sh wrangler kv:key get --binding=MY_KV "my key" ``` - This can be combined with `--preview` flag to interact with a preview namespace instead of a production namespace. 2. With a `--namespace-id`: ```sh wrangler kv:key get --namespace-id=06779da6940b431db6e566b4846d64db "my key" ``` Most `kv` subcommands also allow you to specify an environment with the optional `--env` flag. This allows you to publish Workers running the same code but with different namespaces. For example, you could use separate staging and production namespaces for KV data in your Wrangler file: <WranglerConfig> ```toml type = "webpack" name = "my-worker" account_id = "<account id here>" route = "staging.example.com/*" workers_dev = false kv_namespaces = [ { binding = "MY_KV", id = "06779da6940b431db6e566b4846d64db" } ] [env.production] route = "example.com/*" kv_namespaces = [ { binding = "MY_KV", id = "07bc1f3d1f2a4fd8a45a7e026e2681c6" } ] ``` </WranglerConfig> With the Wrangler file above, you can specify `--env production` when you want to perform a KV action on the namespace `MY_KV` under `env.production`. For example, with the Wrangler file above, you can get a value out of a production KV instance with: ```sh wrangler kv:key get --binding "MY_KV" --env=production "my key" ``` To learn more about environments, refer to [Environments](/workers/wrangler/environments/). ### `kv:namespace` #### `create` Create a new namespace. ```sh wrangler kv:namespace create $NAME [--env=$ENVIRONMENT_NAME] [--preview] ``` - `$NAME` - The name of the new namespace. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--preview` optional - Interact with a preview namespace (the `preview_id` value) instead of production. 
##### Usage ```sh wrangler kv:namespace create "MY_KV" 🌀 Creating namespace with title "worker-MY_KV" ✨ Add the following to your wrangler.toml: kv_namespaces = [ { binding = "MY_KV", id = "e29b263ab50e42ce9b637fa8370175e8" } ] ``` ```sh wrangler kv:namespace create "MY_KV" --preview 🌀 Creating namespace with title "my-site-MY_KV_preview" ✨ Success! Add the following to your wrangler.toml: kv_namespaces = [ { binding = "MY_KV", preview_id = "15137f8edf6c09742227e99b08aaf273" } ] ``` #### `list` List all KV namespaces associated with an account ID. ```sh wrangler kv:namespace list ``` ##### Usage This example passes the Wrangler command through the `jq` command: ```sh wrangler kv:namespace list | jq "." [ { "id": "06779da6940b431db6e566b4846d64db", "title": "TEST_NAMESPACE" }, { "id": "32ac1b3c2ed34ed3b397268817dea9ea", "title": "STATIC_CONTENT" } ] ``` #### `delete` Delete a given namespace. ```sh wrangler kv:namespace delete --binding= [--namespace-id=] ``` - `--binding` required (if no <code>--namespace-id</code>) - The name of the namespace to delete. - `--namespace-id` required (if no <code>--binding</code>) - The ID of the namespace to delete. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--preview` optional - Interact with a preview namespace instead of production. ##### Usage ```sh wrangler kv:namespace delete --binding=MY_KV Are you sure you want to delete namespace f7b02e7fc70443149ac906dd81ec1791? [y/n] yes 🌀 Deleting namespace f7b02e7fc70443149ac906dd81ec1791 ✨ Success ``` ```sh wrangler kv:namespace delete --binding=MY_KV --preview Are you sure you want to delete namespace 15137f8edf6c09742227e99b08aaf273? [y/n] yes 🌀 Deleting namespace 15137f8edf6c09742227e99b08aaf273 ✨ Success ``` ### `kv:key` #### `put` Write a single key-value pair to a particular namespace. 
```sh wrangler kv:key put --binding= [--namespace-id=] $KEY $VALUE ✨ Success ``` - `$KEY` required - The key to write to. - `$VALUE` required - The value to write. - `--binding` required (if no <code>--namespace-id</code>) - The name of the namespace to write to. - `--namespace-id` required (if no <code>--binding</code>) - The ID of the namespace to write to. - `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--preview` optional - Interact with a preview namespace instead of production. Pass this to use the Wrangler file’s `kv_namespaces.preview_id` instead of `kv_namespaces.id`. - `--ttl` optional - The lifetime (in number of seconds) the document should exist before expiring. Must be at least `60` seconds. This option takes precedence over the `expiration` option. - `--expiration` optional - The timestamp, in UNIX seconds, indicating when the key-value pair should expire. - `--path` optional - When defined, Wrangler reads the `--path` file location to upload its contents as KV documents. This is ideal for security-sensitive operations because it avoids saving keys and values into your terminal history. ##### Usage ```sh wrangler kv:key put --binding=MY_KV "key" "value" ✨ Success ``` ```sh wrangler kv:key put --binding=MY_KV --preview "key" "value" ✨ Success ``` ```sh wrangler kv:key put --binding=MY_KV "key" "value" --ttl=10000 ✨ Success ``` ```sh wrangler kv:key put --binding=MY_KV "key" value.txt --path ✨ Success ``` #### `list` Output a list of all keys in a given namespace. ```sh wrangler kv:key list --binding= [--namespace-id=] [--prefix] [--env] ``` - `--binding` required (if no <code>--namespace-id</code>) - The name of the namespace to list. - `--namespace-id` required (if no <code>--binding</code>) - The ID of the namespace to list.
- `--env $ENVIRONMENT_NAME` optional - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--prefix` optional - A prefix to filter listed keys. ##### Usage This example passes the Wrangler command through the `jq` command: ```sh wrangler kv:key list --binding=MY_KV --prefix="public" | jq "." [ { "name": "public_key" }, { "name": "public_key_with_expiration", "expiration": "2019-09-10T23:18:58Z" } ] ``` #### `get` Read a single value by key from the given namespace. ```sh wrangler kv:key get --binding= [--env=] [--preview] [--namespace-id=] "$KEY" ``` - `$KEY` required - The key value to get. - `--binding` required (if no <code>--namespace-id</code>) - The name of the namespace to get from. - `--namespace-id` required (if no <code>--binding</code>) - The ID of the namespace to get from. - `--env $ENVIRONMENT_NAME` optional - If defined, the operation will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--preview` optional - Interact with a preview namespace instead of production. Pass this to use your Wrangler file’s `kv_namespaces.preview_id` instead of `kv_namespaces.id`. ##### Usage ```sh wrangler kv:key get --binding=MY_KV "key" value ``` #### `delete` Remove a single key-value pair from the given namespace. ```sh wrangler kv:key delete --binding= [--env=] [--preview] [--namespace-id=] "$KEY" ``` - `$KEY` required - The key value to delete. - `--binding` required (if no <code>--namespace-id</code>) - The name of the namespace to delete from. - `--namespace-id` required (if no <code>--binding</code>) - The ID of the namespace to delete from. - `--env $ENVIRONMENT_NAME` optional - If defined, the operation will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information. - `--preview` optional - Interact with a preview namespace instead of production.
Pass this to use your Wrangler configuration file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

##### Usage

```sh
wrangler kv:key delete --binding=MY_KV "key"
Are you sure you want to delete key "key"? [y/n]
yes
🌀 Deleting key "key"
✨ Success
```

### `kv:bulk`

#### `put`

Write a file full of key-value pairs to the given namespace.

```sh
wrangler kv:bulk put --binding= [--env=] [--preview] [--namespace-id=] $FILENAME
```

- `$FILENAME` required
  - The file to write to the namespace.
- `--binding` required (if no <code>--namespace-id</code>)
  - The name of the namespace to put to.
- `--namespace-id` required (if no <code>--binding</code>)
  - The ID of the namespace to put to.
- `--env $ENVIRONMENT_NAME` optional
  - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information.
- `--preview` optional
  - Interact with a preview namespace instead of production. Pass this to use your Wrangler file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

This command takes a JSON file as an argument, with a list of key-value pairs to upload. An example of JSON input:

```json
[
  {
    "key": "test_key",
    "value": "test_value",
    "expiration_ttl": 3600
  }
]
```

To store JSON data, serialize the `value` as a string:

```json
[
  {
    "key": "test_key",
    "value": "{\"name\": \"test_value\"}",
    "expiration_ttl": 3600
  }
]
```

The schema below is the full schema for key-value entries uploaded via the bulk API:

- `key` <Type text="string" /> <MetaInfo text="required" />
  - The key's name. The name may be at most 512 bytes. All printable, non-whitespace characters are valid.
- `value` <Type text="string" /> <MetaInfo text="required" />
  - The UTF-8 encoded string to be stored, up to 25 MB in length.
- `expiration` int optional
  - The time, measured in number of seconds since the UNIX epoch, at which the key should expire.
- `expiration_ttl` int optional
  - The number of seconds the document should exist before expiring. Must be at least `60` seconds.
- `base64` bool optional
  - When `true`, the server will decode the value as base64 before storing it. This is useful for writing values that would otherwise be invalid JSON strings, such as images. Defaults to `false`.

If both `expiration` and `expiration_ttl` are specified for a given key, the API will prefer `expiration_ttl`.

##### Usage

```sh
wrangler kv:bulk put --binding=MY_KV allthethingsupload.json
🌀 uploading 1 key value pairs
✨ Success
```

#### `delete`

Delete all specified keys within a given namespace.

```sh
wrangler kv:bulk delete --binding= [--env=] [--preview] [--namespace-id=] $FILENAME
```

- `$FILENAME` required
  - The file with key-value pairs to delete.
- `--binding` required (if no <code>--namespace-id</code>)
  - The name of the namespace to delete from.
- `--namespace-id` required (if no <code>--binding</code>)
  - The ID of the namespace to delete from.
- `--env $ENVIRONMENT_NAME` optional
  - If defined, the changes will only apply to the specified environment. Refer to [Environments](/workers/wrangler/environments/) for more information.
- `--preview` optional
  - Interact with a preview namespace instead of production. Pass this to use your Wrangler file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

This command takes a JSON file as an argument, with a list of key-value pairs to delete. An example of JSON input:

```json
[
  {
    "key": "test_key",
    "value": ""
  }
]
```

- `key` <Type text="string" /> <MetaInfo text="required" />
  - The key's name. The name may be at most 512 bytes. All printable, non-whitespace characters are valid.
- `value` <Type text="string" /> <MetaInfo text="required" />
  - This field must be specified for deserialization purposes, but is unused because the provided keys are being deleted, not written.
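One way to produce such a file is to generate it from an existing namespace. As a sketch (assuming `jq` is installed; the `MY_KV` binding and `temp_` prefix are placeholder names), pipe `kv:key list` through `jq` to reshape the listing into the `key`/`value` entries the delete schema requires:

```sh
# Reshape each {"name": ...} entry from `kv:key list` into the
# {"key": ..., "value": ""} form required by `kv:bulk delete`.
wrangler kv:key list --binding=MY_KV --prefix="temp_" \
  | jq 'map({key: .name, value: ""})' > to-delete.json

wrangler kv:bulk delete --binding=MY_KV to-delete.json
```

Because `value` is ignored on delete, an empty string satisfies the schema.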
##### Usage

```sh
wrangler kv:bulk delete --binding=MY_KV allthethingsdelete.json
```

```sh output
Are you sure you want to delete all keys in allthethingsdelete.json? [y/n]
y
🌀 deleting 1 key value pairs
✨ Success
```

---

## Environment variables

Wrangler supports any [Wrangler configuration file](/workers/wrangler/configuration/) keys passed in as environment variables. This works by passing in `CF_` + any uppercased TOML key. For example:

`CF_NAME=my-worker CF_ACCOUNT_ID=1234 wrangler dev`

---

## --help

```sh
wrangler --help
```

```sh output
👷 ✨  wrangler 1.12.3
The Wrangler Team <wrangler@cloudflare.com>

USAGE:
    wrangler [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    kv:namespace    🗂️  Interact with your Workers KV Namespaces
    kv:key          🔑  Individually manage Workers KV key-value pairs
    kv:bulk         💪  Interact with multiple Workers KV key-value pairs at once
    route           ➡️  List or delete worker routes.
    secret          🤫  Generate a secret that can be referenced in the worker script
    generate        👯  Generate a new worker project
    init            📥  Create a wrangler.toml for an existing project
    build           🦀  Build your worker
    preview         🔬  Preview your code temporarily on cloudflareworkers.com
    dev             👂  Start a local server for developing your worker
    publish         🆙  Publish your worker to the orange cloud
    config          🕵️  Authenticate Wrangler with a Cloudflare API Token or Global API Key
    subdomain       👷  Configure your workers.dev subdomain
    whoami          🕵️  Retrieve your user info and test your auth config
    tail            🦚  Aggregate logs from production worker
    login           🔓  Authorize Wrangler with your Cloudflare username and password
    logout          ⚙️  Remove authorization from Wrangler.
    help            Prints this message or the help of the given subcommand(s)
```

---

# Wrangler v1 (legacy)

URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/

import { DirectoryListing, Render } from "~/components"

The following documentation applies to Wrangler v1 usage.
<Render file="wrangler-v1-deprecation" />

<DirectoryListing />

---

# Install / Update

URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/install-update/

import { Render } from "~/components";

<Render file="wrangler-v1-deprecation" />

## Install

### Install with `npm`

```sh
npm i @cloudflare/wrangler -g
```

:::note[EACCES error]
An `EACCES` error may be thrown while installing Wrangler with npm. This is related to how some systems install the npm binary. It is recommended that you reinstall npm using a Node version manager like [nvm](https://github.com/nvm-sh/nvm#installing-and-updating) or [Volta](https://volta.sh/).
:::

### Install with `cargo`

Assuming you have Rust's package manager, [Cargo](https://github.com/rust-lang/cargo), installed, run:

```sh
cargo install wrangler
```

Otherwise, to install Cargo, you must first install rustup. On Linux and macOS systems, `rustup` can be installed as follows:

```sh
curl https://sh.rustup.rs -sSf | sh
```

Additional installation methods are available [on the Rust site](https://forge.rust-lang.org/other-installation-methods.html). Windows users will need to install Perl as a dependency for `openssl-sys` — [Strawberry Perl](https://www.perl.org/get.html) is recommended.

After Cargo is installed, install Wrangler:

```sh
cargo install wrangler
```

:::note[Customize OpenSSL]
By default, a copy of OpenSSL is included to make things easier during installation, but this can make the binary size larger. If you want to use your system's OpenSSL installation, provide the `sys-openssl` feature flag when running install:

```sh
cargo install wrangler --features sys-openssl
```

:::

### Manual install

1. Download the binary tarball for your platform from the [releases page](https://github.com/cloudflare/wrangler-legacy/releases). You do not need the `wranglerjs-*.tar.gz` download – Wrangler will install that for you.
2. Unpack the tarball and place the Wrangler binary somewhere on your `PATH`, preferably `/usr/local/bin` for Linux/macOS or `Program Files` for Windows.

## Update

To update [Wrangler](https://github.com/cloudflare/wrangler-legacy), run one of the following:

### Update with `npm`

```sh
npm update -g @cloudflare/wrangler
```

### Update with `cargo`

```sh
cargo install wrangler --force
```

---

# Webpack

URL: https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/webpack/

import { Render, WranglerConfig } from "~/components"

<Render file="wrangler-v1-deprecation" />

Wrangler allows you to develop modern ES6 applications with support for modules. This support is possible because of Wrangler's [webpack](https://webpack.js.org/) integration. This document describes how Wrangler uses webpack to build your Workers and how you can bring your own configuration.

:::note[Configuration and webpack version]
Wrangler includes `webpack@4`. If you want to use `webpack@5`, or another bundler like esbuild or Rollup, you must set up [custom builds](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#build) in your Wrangler file.

You must set `type = "webpack"` in your Wrangler file to use Wrangler's webpack integration. If you are encountering warnings about specifying `webpack_config`, refer to [backwards compatibility](#backwards-compatibility).
:::

## Sensible defaults

This is the default webpack configuration that Wrangler uses to build your Worker:

```js
module.exports = {
  target: "webworker",
  entry: "./index.js", // inferred from "main" in package.json
};
```

The `"main"` field in the `package.json` file determines the `entry` configuration value. When undefined or missing, `"main"` defaults to `index.js`, meaning that `entry` also defaults to `index.js`.

The default configuration sets `target` to `webworker`.
This is the correct value because Cloudflare Workers are built to match the [Service Worker API](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API). Refer to the [webpack documentation](https://webpack.js.org/concepts/targets/) for an explanation of this `target` value.

## Bring your own configuration

You can tell Wrangler to use a custom webpack configuration file by setting `webpack_config` in your Wrangler file. Always set `target` to `webworker`.

### Example

```js
// webpack.config.js
module.exports = {
  target: 'webworker',
  entry: './index.js',
  mode: 'production',
};
```

<WranglerConfig>

```toml
type = "webpack"
name = "my-worker"
account_id = "12345678901234567890"
workers_dev = true
webpack_config = "webpack.config.js"
```

</WranglerConfig>

### Example with multiple environments

It is possible to use different webpack configuration files within different [Wrangler environments](/workers/wrangler/environments/). For example, the `"webpack.development.js"` configuration file is used during `wrangler dev` for development, while other, more production-ready configurations are used when building for the staging or production environments:

<WranglerConfig>

```toml
type = "webpack"
name = "my-worker-dev"
account_id = "12345678901234567890"
workers_dev = true
webpack_config = "webpack.development.js"

[env.staging]
name = "my-worker-staging"
webpack_config = "webpack.staging.js"

[env.production]
name = "my-worker-production"
webpack_config = "webpack.production.js"
```

</WranglerConfig>

```js
// webpack.development.js
module.exports = {
  target: 'webworker',
  devtool: 'cheap-module-source-map', // avoid "eval": Workers environment doesn’t allow it
  entry: './index.js',
  mode: 'development',
};
```

```js
// webpack.production.js
module.exports = {
  target: 'webworker',
  entry: './index.js',
  mode: 'production',
};
```

### Using with Workers Sites

Wrangler commands are run from the project root. Ensure your `entry` and `context` are set appropriately. For a project with structure:

```txt
.
├── public
│   ├── 404.html
│   └── index.html
├── workers-site
│   ├── index.js
│   ├── package-lock.json
│   ├── package.json
│   └── webpack.config.js
└── wrangler.toml
```

The corresponding `webpack.config.js` file should look like this:

```js
module.exports = {
  context: __dirname,
  target: 'webworker',
  entry: './index.js',
  mode: 'production',
};
```

## Shimming globals

When you want to bring your own implementation of an existing global API, you may [shim](https://webpack.js.org/guides/shimming/#shimming-globals) a third-party module in its place via a webpack plugin.

For example, you may want to replace the `URL` global class with the `url-polyfill` npm package. After defining the package as a dependency in your `package.json` file and installing it, add a plugin entry to your webpack configuration.

### Example with webpack plugin

```js null {1,7,8,9,10,11}
const webpack = require('webpack');
module.exports = {
  target: 'webworker',
  entry: './index.js',
  mode: 'production',
  plugins: [
    new webpack.ProvidePlugin({
      URL: 'url-polyfill',
    }),
  ],
};
```

## Backwards compatibility

If you are using `wrangler@1.6.0` or earlier, a `webpack.config.js` file at the root of your project is loaded automatically. This is not always obvious, which is why versions of Wrangler after `wrangler@1.6.0` require you to specify a `webpack_config` value in your Wrangler file.

When [upgrading from `wrangler@1.6.0`](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/install-update/), you may encounter webpack configuration warnings. To resolve this, add `webpack_config = "webpack.config.js"` to your Wrangler file.

---

# React

URL: https://developers.cloudflare.com/workers/frameworks/framework-guides/react/

import {
  Badge,
  Description,
  InlineBadge,
  Render,
  PackageManagers,
} from "~/components";

In this guide, you will create a new [React](https://react.dev/) application and deploy it to Cloudflare Workers (with the new [<InlineBadge preset="beta" /> Workers Assets](/workers/static-assets/)).

## 1. Set up a new project

Use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, use code from the official React template, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new React project with <InlineBadge preset="beta" /> Workers Assets, run the following command:

<PackageManagers
  type="create"
  pkg="cloudflare@latest my-react-app"
  args={"--framework=react --platform=workers"}
/>

<Render
  file="c3-post-run-steps"
  product="workers"
  params={{
    category: "web-framework",
    framework: "React",
  }}
/>

After setting up your project, change your directory by running the following command:

```sh
cd my-react-app
```

## 2. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

<PackageManagers type="run" args={"dev"} />

## 3. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/).

The following command will build and deploy your project. If you are using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

<PackageManagers type="run" args={"deploy"} />

---

## Static assets

You can serve static assets in your React application by [placing them in the `./public/` directory](https://vite.dev/guide/assets#the-public-directory). This can be useful for resource files such as images, stylesheets, fonts, and manifests.

<Render file="workers-assets-routing-summary" />

---