
Configure wrangler.toml

Wrangler optionally uses a wrangler.toml configuration file to customize the development and deployment setup for a Worker.

It is best practice to treat wrangler.toml as the source of truth for configuring a Worker.

​​ Sample wrangler.toml configuration

wrangler.toml
# Top-level configuration
name = "my-worker"
main = "src/index.js"
compatibility_date = "2022-07-12"

workers_dev = false
route = { pattern = "example.org/*", zone_name = "example.org" }

kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<KV_ID>" }
]

[env.staging]
name = "my-worker-staging"
route = { pattern = "staging.example.org/*", zone_name = "example.org" }

kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<STAGING_KV_ID>" }
]

​​ Environments

The configuration for a Worker can become complex when you define multiple environments, each with its own configuration. There is a default (top-level) environment, and named environments provide environment-specific configuration.

Named environments are defined under [env.<name>] keys, such as [env.staging], which you can then preview or deploy with the -e / --env flag in Wrangler commands, for example npx wrangler deploy --env staging.

The majority of keys are inheritable, meaning that top-level configuration can be used in environments. Bindings, such as vars or kv_namespaces, are not inheritable and need to be defined explicitly.
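For example, a minimal sketch of how this inheritance plays out (values are illustrative): the staging environment inherits main and compatibility_date from the top level, but must redefine its own vars because bindings are not inherited.

```toml
name = "my-worker"
main = "src/index.js"
compatibility_date = "2022-07-12"

[vars]
API_HOST = "api.example.com"

# Inherits main and compatibility_date; overrides name.
[env.staging]
name = "my-worker-staging"

# Bindings such as vars are not inherited, so staging must restate them.
[env.staging.vars]
API_HOST = "staging-api.example.com"
```

With npx wrangler deploy --env staging, the staging Worker deploys with the overridden name and its own vars.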

Further, there are a few keys that can only appear at the top-level.

​​ Top-level only keys

Top-level keys apply to the Worker as a whole (and therefore all environments). They cannot be defined within named environments.

  • keep_vars boolean optional

    • Whether Wrangler should keep variables configured in the dashboard on deploy. Refer to source of truth.
  • send_metrics boolean optional

    • Whether Wrangler should send usage metrics to Cloudflare for this project.
  • site object optional

    • See the Workers Sites section below for more information. Cloudflare Pages is preferred over this approach.

​​ Inheritable keys

Inheritable keys are configurable at the top-level, and can be inherited (or overridden) by environment-specific configuration.

  • name string required

    • The name of your Worker. Alphanumeric characters (a,b,c, etc.) and dashes (-) only. Do not use underscores (_).
  • main string required

    • The path to the entrypoint of your Worker that will be executed. For example: ./src/index.ts.
  • compatibility_date string required

    • A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used. Refer to Compatibility dates.
  • account_id string optional

    • This is the ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the zone/route you provide, if you provide one. It can also be specified through the CLOUDFLARE_ACCOUNT_ID environment variable.
  • compatibility_flags string[] optional

    • A list of flags that enable features from upcoming releases of the Workers runtime, usually used together with compatibility_date. Refer to compatibility dates.
  • workers_dev boolean optional

    • Enables use of *.workers.dev subdomain to test and deploy your Worker. If you have a Worker that is only for scheduled events, you can set this to false. Defaults to true.
  • route Route optional

    • A route that your Worker should be deployed to. Only one of routes or route is required. Refer to types of routes.
  • routes Route[] optional

    • An array of routes that your Worker should be deployed to. Only one of routes or route is required. Refer to types of routes.
  • tsconfig string optional

    • Path to a custom tsconfig.
  • triggers object optional

    • Cron definitions to trigger a Worker’s scheduled function. Refer to triggers.
  • rules Rule optional

    • An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use Text, Data and CompiledWasm modules, or when you wish to have a .js file be treated as an ESModule instead of CommonJS.
  • build Build optional

    • Configures a custom build step to be run by Wrangler when building your Worker. Refer to Custom builds.
  • no_bundle boolean optional

    • Skip internal build steps and directly deploy your Worker script. You must have a plain JavaScript Worker with no dependencies.
  • minify boolean optional

    • Minify the Worker script before uploading.
  • node_compat boolean optional

    • Add polyfills for a subset of Node.js APIs to your Worker. Refer to Node compatibility.
  • preserve_file_names boolean optional

    • Determines whether Wrangler will preserve the file names of additional modules bundled with the Worker. The default is to prepend filenames with a content hash. For example, 34de60b44167af5c5a709e62a4e20c4f18c9e3b6-favicon.ico.
  • logpush boolean optional

    • Enables Workers Trace Events Logpush for a Worker. Any scripts with this property will automatically get picked up by the Workers Logpush job configured for your account. Defaults to false.
  • limits Limits optional

    • Configures limits to be imposed on execution at runtime. Refer to Limits.

​​ Usage model

As of March 1, 2024, the usage model configured in your Worker’s wrangler.toml will be ignored. The Standard usage model applies.

Some Workers Enterprise customers maintain the ability to change usage models. Your usage model must be configured through the Cloudflare dashboard by going to Workers & Pages > select your Worker > Settings > Usage Model.

​​ Non-inheritable keys

Non-inheritable keys are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.

  • define Record<string, string> optional

    • A map of values to substitute when deploying your Worker.
  • vars object optional

    • A map of environment variables to set when deploying your Worker. Refer to Environment variables.
  • durable_objects object optional

    • A list of Durable Objects that your Worker should be bound to. Refer to Durable Objects.
  • kv_namespaces object optional

    • A list of KV namespaces that your Worker should be bound to. Refer to KV namespaces.
  • r2_buckets object optional

    • A list of R2 buckets that your Worker should be bound to. Refer to R2 buckets.
  • vectorize object optional

    • A list of Vectorize indexes that your Worker should be bound to. Refer to Vectorize indexes.
  • services object optional

    • A list of service bindings that your Worker should be bound to. Refer to service bindings.
  • tail_consumers object optional

    • A list of the Tail Workers your Worker sends data to. Refer to Tail Workers.
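As a sketch of the define key above (the name DEBUG_BUILD is illustrative, not from the source): each entry maps an identifier in your code to a value substituted at build time.

```toml
[define]
# Occurrences of the identifier DEBUG_BUILD in your Worker code
# are replaced with the literal false when Wrangler builds the script.
DEBUG_BUILD = "false"
```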

​​ Types of routes

There are three types of routes: Custom Domains, routes, and workers.dev.

​​ Custom Domains

Custom Domains allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management.

  • pattern string required

    • The pattern that your Worker should be run on, for example, "example.com".
  • custom_domain boolean optional

    • Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to false.

Example:

wrangler.toml
route = { pattern = "example.com", custom_domain = true }
# or
routes = [
  { pattern = "shop.example.com", custom_domain = true }
]

​​ Routes

Routes allow users to map a URL pattern to a Worker. A route can be configured as a zone ID route, a zone name route, or a simple route.

​​ Zone ID route

  • pattern string required

    • The pattern that your Worker should be run on, for example, "example.com/*".
  • zone_id string required

    • The ID of the zone that your pattern is associated with.

Example:

wrangler.toml
routes = [
  { pattern = "subdomain.example.com/*", zone_id = "<YOUR_ZONE_ID>" }
]

​​ Zone name route

  • pattern string required

    • The pattern that your Worker should be run on, for example, "example.com/*".
  • zone_name string required

    • The name of the zone that your pattern is associated with. If you are using API tokens, this will require the Account scope.

Example:

wrangler.toml
routes = [
  { pattern = "subdomain.example.com/*", zone_name = "example.com" }
]

​​ Simple route

This is a simple route that only requires a pattern.

Example:

wrangler.toml
route = "example.com/*"

​​ workers.dev

Cloudflare Workers accounts come with a workers.dev subdomain that is configurable in the Cloudflare dashboard.

  • workers_dev boolean optional

    • Whether the Worker should be deployed to your *.workers.dev subdomain. Defaults to true.

Example:

wrangler.toml
workers_dev = false

​​ Triggers

Triggers allow you to define the cron expression to invoke your Worker’s scheduled function. Refer to Supported cron expressions.

  • crons string[] required

    • An array of cron expressions.
    • To disable a Cron Trigger, set crons = []. Commenting out the crons key will not disable a Cron Trigger.

Example:

wrangler.toml
[triggers]
crons = ["* * * * *"]

​​ Custom builds

You can configure a custom build step that will be run before your Worker is deployed. Refer to Custom builds.

  • command string optional

    • The command used to build your Worker. On Linux and macOS, the command is executed in the sh shell; on Windows, it is executed in the cmd shell. The && and || shell operators may be used.
  • cwd string optional

    • The directory in which the command is executed.
  • watch_dir string | string[] optional

    • The directory to watch for changes while using wrangler dev. Defaults to the current working directory.

Example:

wrangler.toml
[build]
command = "npm run build"
cwd = "build_cwd"
watch_dir = "build_watch_dir"

​​ Limits

You can impose limits on your Worker’s behavior at runtime. Limits are only supported for the Standard Usage Model. Limits are only enforced when deployed to Cloudflare’s network, not in local development. The CPU limit can be set to a maximum of 30,000 milliseconds (30 seconds).

Each isolate has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured.

  • cpu_ms number optional

    • The maximum CPU time allowed per invocation, in milliseconds.

Example:

wrangler.toml
[limits]
cpu_ms = 100

​​ Bindings

​​ Browser Rendering

The Workers Browser Rendering API allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.

A browser binding will provide your Worker with an authenticated endpoint to interact with a dedicated Chromium browser instance.

  • binding string required

    • The binding name used to refer to the browser instance.

Example:

wrangler.toml
browser = { binding = "<BINDING_NAME>" }
# or
[browser]
binding = "<BINDING_NAME>"

​​ D1 databases

D1 is Cloudflare’s serverless SQL database. A Worker can query a D1 database (or databases) by creating a binding to each database for D1’s client API.

To bind D1 databases to your Worker, assign an array of the below object to the [[d1_databases]] key.

  • binding string required

    • The binding name used to refer to the D1 database in your Worker code.
  • database_name string required

    • The name of the database. This is a human-readable name that allows you to distinguish between different databases, and is set when you first create the database.
  • database_id string required

    • The ID of the database. The database ID is available when you first use wrangler d1 create or when you call wrangler d1 list, and uniquely identifies your database.
  • preview_database_id string optional

    • The preview ID of this D1 database. If provided, wrangler dev will use this ID. Otherwise, it will use database_id. This option is required when using wrangler dev --remote.

Example:

wrangler.toml
d1_databases = [
  { binding = "<BINDING_NAME>", database_name = "<DATABASE_NAME>", database_id = "<DATABASE_ID>" }
]
# or
[[d1_databases]]
binding = "<BINDING_NAME>"
database_name = "<DATABASE_NAME>"
database_id = "<DATABASE_ID>"
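If you also develop against remote resources with wrangler dev --remote, the optional preview_database_id described above can sit alongside the production ID (all IDs are placeholders):

```toml
[[d1_databases]]
binding = "<BINDING_NAME>"
database_name = "<DATABASE_NAME>"
database_id = "<DATABASE_ID>"
preview_database_id = "<PREVIEW_DATABASE_ID>"
```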

​​ Dispatch namespace bindings (Workers for Platforms)

Dispatch namespace bindings allow for communication between a dynamic dispatch Worker and a dispatch namespace. Dispatch namespace bindings are used in Workers for Platforms. Workers for Platforms helps you deploy serverless functions programmatically on behalf of your customers.

wrangler.toml
[[dispatch_namespaces]]
binding = "<BINDING_NAME>"
namespace = "<NAMESPACE_NAME>"
outbound = { service = "<WORKER_NAME>", parameters = ["params_object"] }

​​ Durable Objects

Durable Objects provide low-latency coordination and consistent storage for the Workers platform.

To bind Durable Objects to your Worker, assign an array of the below object to the durable_objects.bindings key.

  • name string required

    • The name of the binding used to refer to the Durable Object.
  • class_name string required

    • The exported class name of the Durable Object.
  • script_name string optional

    • The name of the Worker where the Durable Object is defined, if it is external to this Worker. This option can be used both in local and remote development. In local development, you must run the external Worker in a separate process (via wrangler dev). In remote development, the appropriate remote binding must be used.
  • environment string optional

    • The environment of the script_name to bind to.

Example:

wrangler.toml
durable_objects.bindings = [
  { name = "<BINDING_NAME>", class_name = "<CLASS_NAME>" }
]
# or
[[durable_objects.bindings]]
name = "<BINDING_NAME>"
class_name = "<CLASS_NAME>"
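To bind a Durable Object defined in another Worker, add the optional script_name key described above (names are placeholders):

```toml
[[durable_objects.bindings]]
name = "<BINDING_NAME>"
class_name = "<CLASS_NAME>"
# The external Worker that exports this Durable Object class.
script_name = "<EXTERNAL_WORKER_NAME>"
```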

​​ Migrations

When making changes to your Durable Object classes, you must perform a migration. Refer to Durable Object migrations.

  • tag string required

    • A unique identifier for this migration.
  • new_classes string[] optional

    • The new Durable Objects being defined.
  • renamed_classes {from: string, to: string}[] optional

    • The Durable Objects being renamed.
  • deleted_classes string[] optional

    • The Durable Objects being removed.

Example:

wrangler.toml
[[migrations]]
tag = "v1" # Should be unique for each entry
new_classes = ["DurableObjectExample"] # Array of new classes
[[migrations]]
tag = "v2"
renamed_classes = [{from = "DurableObjectExample", to = "UpdatedName" }] # Array of rename directives
deleted_classes = ["DeprecatedClass"] # Array of deleted class names

​​ Email bindings

You can send an email about your Worker’s activity from your Worker to an email address verified on Email Routing. This is useful when you want to be notified of certain types of events being triggered, for example.

Before you can bind an email address to your Worker, you need to enable Email Routing and have at least one verified email address. Then, assign an array to the object send_email with the type of email binding you need.

You can add one or more types of bindings to your wrangler.toml file. However, each attribute must be on its own line:

wrangler.toml
send_email = [
  { name = "<NAME_FOR_BINDING1>" },
  { name = "<NAME_FOR_BINDING2>", destination_address = "<YOUR_EMAIL>@example.com" },
  { name = "<NAME_FOR_BINDING3>", allowed_destination_addresses = ["<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com"] },
]

​​ Environment variables

Environment variables are a type of binding that allow you to attach text strings or JSON values to your Worker.

Example:

wrangler.toml
name = "my-worker-dev"
[vars]
API_HOST = "example.com"
API_ACCOUNT_ID = "example_user"
SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 }

​​ Hyperdrive

Hyperdrive bindings allow you to interact with and query any Postgres database from within a Worker.

  • binding string required

    • The binding name.
  • id string required

    • The ID of the Hyperdrive configuration.

Example:

wrangler.toml
node_compat = true # required for database drivers to function
[[hyperdrive]]
binding = "<BINDING_NAME>"
id = "<ID>"

​​ KV namespaces

Workers KV is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.

To bind KV namespaces to your Worker, assign an array of the below object to the kv_namespaces key.

  • binding string required

    • The binding name used to refer to the KV namespace.
  • id string required

    • The ID of the KV namespace.
  • preview_id string optional

    • The preview ID of this KV namespace. If provided, wrangler dev will use this ID. Otherwise, it will use id. This option is required when using wrangler dev --remote to develop against remote resources; when developing locally (without --remote), it is optional.

Example:

wrangler.toml
kv_namespaces = [
  { binding = "<BINDING_NAME1>", id = "<NAMESPACE_ID1>" },
  { binding = "<BINDING_NAME2>", id = "<NAMESPACE_ID2>" }
]
# or
[[kv_namespaces]]
binding = "<BINDING_NAME1>"
id = "<NAMESPACE_ID1>"
[[kv_namespaces]]
binding = "<BINDING_NAME2>"
id = "<NAMESPACE_ID2>"
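When developing with wrangler dev --remote, the optional preview_id described above can sit alongside the production ID (IDs are placeholders):

```toml
[[kv_namespaces]]
binding = "<BINDING_NAME1>"
id = "<NAMESPACE_ID1>"
preview_id = "<PREVIEW_NAMESPACE_ID1>"
```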

​​ Queues

Queues is Cloudflare’s global message queueing service, providing guaranteed delivery and message batching. To interact with a queue from Workers, you need a producer Worker to send messages to the queue and a consumer Worker to pull batches of messages out of the queue. A single Worker can produce to and consume from multiple queues.

To bind Queues to your producer Worker, assign an array of the below object to the [[queues.producers]] key.

  • queue string required

    • The name of the queue, used on the Cloudflare dashboard.
  • binding string required

    • The binding name used to refer to the queue in your Worker code.
  • delivery_delay number optional

    • The number of seconds to delay messages sent to this queue before they can be delivered to a consumer.

Example:

wrangler.toml
[[queues.producers]]
binding = "<BINDING_NAME>"
queue = "<QUEUE_NAME>"
delivery_delay = 60 # Delay messages by 60 seconds before they are delivered to a consumer

To bind Queues to your consumer Worker, assign an array of the below object to the [[queues.consumers]] key.

  • queue string required

    • The name of the queue, used on the Cloudflare dashboard.
  • max_batch_size number optional

    • The maximum number of messages allowed in each batch.
  • max_batch_timeout number optional

    • The maximum number of seconds to wait for messages to fill a batch before the batch is sent to the consumer Worker.
  • max_retries number optional

    • The maximum number of retries for a message, if it fails or retryAll() is invoked.
  • dead_letter_queue string optional

    • The name of another queue to send a message if it fails processing at least max_retries times.
    • If a dead_letter_queue is not defined, messages that repeatedly fail processing will be discarded.
    • If there is no queue with the specified name, it will be created automatically.
  • max_concurrency number optional

    • The maximum number of concurrent consumers allowed to run at once. Leaving this unset will mean that the number of invocations will scale to the currently supported maximum.
    • Refer to Consumer concurrency for more information on how consumers autoscale, particularly when messages are retried.
  • retry_delay number optional

    • The number of seconds to delay retried messages before re-attempting delivery.

Example:

wrangler.toml
[[queues.consumers]]
queue = "my-queue"
max_batch_size = 10
max_batch_timeout = 30
max_retries = 10
dead_letter_queue = "my-queue-dlq"
max_concurrency = 5
retry_delay = 120 # Delay retried messages by 2 minutes before re-attempting delivery

​​ R2 buckets

Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

To bind R2 buckets to your Worker, assign an array of the below object to the r2_buckets key.

  • binding string required

    • The binding name used to refer to the R2 bucket.
  • bucket_name string required

    • The name of this R2 bucket.
  • jurisdiction string optional

    • The jurisdiction of this R2 bucket, if a jurisdiction was specified when the bucket was created.

  • preview_bucket_name string optional

    • The preview name of this R2 bucket. If provided, wrangler dev will use this name for the R2 bucket. Otherwise, it will use bucket_name. This option is required when using wrangler dev --remote.

Example:

wrangler.toml
r2_buckets = [
  { binding = "<BINDING_NAME1>", bucket_name = "<BUCKET_NAME1>" },
  { binding = "<BINDING_NAME2>", bucket_name = "<BUCKET_NAME2>" }
]
# or
[[r2_buckets]]
binding = "<BINDING_NAME1>"
bucket_name = "<BUCKET_NAME1>"
[[r2_buckets]]
binding = "<BINDING_NAME2>"
bucket_name = "<BUCKET_NAME2>"
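When developing with wrangler dev --remote, the optional preview_bucket_name described above can point dev traffic at a separate bucket (names are placeholders):

```toml
[[r2_buckets]]
binding = "<BINDING_NAME1>"
bucket_name = "<BUCKET_NAME1>"
preview_bucket_name = "<PREVIEW_BUCKET_NAME1>"
```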

​​ Vectorize indexes

A Vectorize index allows you to insert and query vector embeddings for semantic search, classification, and other vector search use cases.

To bind Vectorize indexes to your Worker, assign an array of the below object to the vectorize key.

  • binding string required

    • The binding name used to refer to the bound index from your Worker code.
  • index_name string required

    • The name of the index to bind.

Example:

wrangler.toml
vectorize = [
  { binding = "<BINDING_NAME>", index_name = "<INDEX_NAME>" }
]
# or
[[vectorize]]
binding = "<BINDING_NAME>"
index_name = "<INDEX_NAME>"

​​ Service bindings

A service binding allows you to send HTTP requests to another Worker without those requests going over the Internet. The request immediately invokes the downstream Worker, reducing latency as compared to a request to a third-party service. Refer to About Service Bindings.

To bind other Workers to your Worker, assign an array of the below object to the services key.

  • binding string required

    • The binding name used to refer to the bound Worker.
  • service string required

    • The name of the Worker.
  • entrypoint string optional

    • The name of the entrypoint to bind to. If you do not specify an entrypoint, the default export of the Worker will be used.

Example:

wrangler.toml
services = [
  { binding = "<BINDING_NAME>", service = "<WORKER_NAME>", entrypoint = "<ENTRYPOINT_NAME>" }
]
# or
[[services]]
binding = "<BINDING_NAME>"
service = "<WORKER_NAME>"
entrypoint = "<ENTRYPOINT_NAME>"

​​ Analytics Engine Datasets

Workers Analytics Engine provides analytics, observability and data logging from Workers. Write data points from your Worker via the binding, then query the data using the SQL API.

To bind Analytics Engine datasets to your Worker, assign an array of the below object to the analytics_engine_datasets key.

  • binding string required

    • The binding name used to refer to the dataset.
  • dataset string optional

    • The dataset name to write to. This will default to the same name as the binding if it is not supplied.

Example:

wrangler.toml
analytics_engine_datasets = [{ binding = "<BINDING_NAME>", dataset = "<DATASET_NAME>" }]
# or
[[analytics_engine_datasets]]
binding = "<BINDING_NAME>"
dataset = "<DATASET_NAME>"

​​ mTLS Certificates

To communicate with origins that require client authentication, a Worker can present a certificate for mTLS in subrequests. Wrangler provides the mtls-certificate command to upload and manage these certificates.

To create a binding to an mTLS certificate for your Worker, assign an array of objects with the following shape to the mtls_certificates key.

  • binding string required

    • The binding name used to refer to the certificate.
  • certificate_id string required

    • The ID of the certificate. Wrangler displays this via the mtls-certificate upload and mtls-certificate list commands.

Example of a wrangler.toml configuration that includes an mTLS certificate binding:

wrangler.toml
mtls_certificates = [
  { binding = "<BINDING_NAME1>", certificate_id = "<CERTIFICATE_ID1>" },
  { binding = "<BINDING_NAME2>", certificate_id = "<CERTIFICATE_ID2>" }
]
# or
[[mtls_certificates]]
binding = "<BINDING_NAME1>"
certificate_id = "<CERTIFICATE_ID1>"
[[mtls_certificates]]
binding = "<BINDING_NAME2>"
certificate_id = "<CERTIFICATE_ID2>"

mTLS certificate bindings can then be used at runtime to communicate with secured origins via their fetch method.

​​ Workers AI

Workers AI allows you to run machine learning models, on the Cloudflare network, from your own code – whether that be from Workers, Pages, or anywhere via REST API.

Unlike other bindings, you are limited to one AI binding per Worker project.

  • binding string required

    • The binding name.

Example:

wrangler.toml
ai = { binding = "<AI>" }
# or
[ai]
binding = "AI" # available in your Worker code on `env.AI`

​​ Bundling

You can bundle assets into your Worker using the rules key, making these assets available to be imported when your Worker is invoked. The rules key will be an array of the below object.

  • type string required

    • The type of asset. Must be one of: ESModule, CommonJS, CompiledWasm, Text or Data.
  • globs string[] required

    • The glob patterns that determine which files the rule applies to, for example ["**/*.md"].
  • fallthrough boolean optional

    • When set to true on a rule, this allows you to have multiple rules for the same type.

Example:

wrangler.toml
rules = [
  { type = "Text", globs = ["**/*.md"], fallthrough = true }
]

​​ Importing assets within a Worker

You can import and refer to these assets within your Worker, like so:

index.js
import markdown from './example.md'

export default {
  async fetch() {
    return new Response(markdown)
  }
}

​​ Local development settings

You can configure various aspects of local development, such as the local protocol or port.

  • ip string optional

    • IP address for the local dev server to listen on. Defaults to localhost.
  • port number optional

    • Port for the local dev server to listen on. Defaults to 8787.
  • local_protocol string optional

    • Protocol that local dev server listens to requests on. Defaults to http.
  • upstream_protocol string optional

    • Protocol that the local dev server forwards requests on. Defaults to https.
  • host string optional

    • Host to forward requests to, defaults to the host of the first route of the Worker.

Example:

wrangler.toml
[dev]
ip = "192.168.1.1"
port = 8080
local_protocol = "http"

​​ Secrets

Secrets are a type of binding that allow you to attach encrypted text values to your Worker.

When developing your Worker or Pages Function, create a .dev.vars file in the root of your project to define secrets that will be used when running wrangler dev or wrangler pages dev, as opposed to using environment variables in wrangler.toml. This works both in local and remote development modes.

The .dev.vars file should be formatted like a dotenv file, such as KEY="VALUE":

.dev.vars
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"

​​ Node compatibility

If you depend on Node.js APIs, either directly in your own code or via a library you depend on, you can either use a subset of Node.js APIs available directly in the Workers runtime, or add polyfills for a subset of Node.js APIs to your own code.

​​ Use runtime APIs directly

A growing subset of Node.js APIs are available directly as Runtime APIs, with no need to add polyfills to your own code. To enable these APIs in your Worker, add the nodejs_compat compatibility flag to your wrangler.toml:

wrangler.toml
compatibility_flags = [ "nodejs_compat" ]

​​ Add polyfills using Wrangler

Add polyfills for a subset of Node.js APIs to your Worker by adding the node_compat key to your wrangler.toml or by passing the --node-compat flag to wrangler.

wrangler.toml
node_compat = true

It is not possible to polyfill all Node APIs or behaviors, but it is possible to polyfill some of them.

This is currently powered by @esbuild-plugins/node-globals-polyfill which in itself is powered by rollup-plugin-node-polyfills.

​​ Source maps

Source maps translate compiled and minified code back to the original code that you wrote. They are combined with the stack trace returned by the JavaScript runtime to present you with a readable stack trace.

Example:

wrangler.toml
upload_source_maps = true

​​ Workers Sites

Workers Sites allows you to host static websites, or dynamic websites using frameworks like Vue or React, on Workers.

  • bucket string required

    • The directory containing your static assets. It must be a path relative to your wrangler.toml file.
  • include string[] optional

    • An exclusive list of .gitignore-style patterns that match file or directory names from your bucket location. Only matched items will be uploaded.
  • exclude string[] optional

    • A list of .gitignore-style patterns that match files or directories in your bucket that should be excluded from uploads.

Example:

wrangler.toml
[site]
bucket = "./public"
include = ["upload_dir"]
exclude = ["ignore_dir"]

​​ Proxy support

Corporate networks often have proxies, which can sometimes cause connectivity issues. To configure Wrangler with the appropriate proxy details, use the below environment variables:

  • https_proxy
  • HTTPS_PROXY
  • http_proxy
  • HTTP_PROXY

To configure this on macOS, add HTTP_PROXY=http://<YOUR_PROXY_HOST>:<YOUR_PROXY_PORT> before your Wrangler commands.

Example:

$ HTTP_PROXY=http://localhost:8080 wrangler dev

If your IT team has configured your computer’s proxy settings, be aware that the first non-empty environment variable in this list will be used when Wrangler makes outgoing requests.

For example, if both https_proxy and http_proxy are set, Wrangler will only use https_proxy for outgoing requests.

​​ Source of truth

We recommend treating your wrangler.toml file as the source of truth for your Worker configuration, and to avoid making changes to your Worker via the Cloudflare dashboard if you are using Wrangler.

If you need to make changes to your Worker from the Cloudflare dashboard, the dashboard will generate a TOML snippet for you to copy into your wrangler.toml file, which will help ensure your wrangler.toml file is always up to date.

If you change your environment variables in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behavior, add keep_vars = true to your wrangler.toml.
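That setting is a single top-level key in your wrangler.toml:

```toml
keep_vars = true
```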

If you change your routes in the dashboard, Wrangler will override them in the next deploy with the routes you have set in your wrangler.toml. To manage routes via the Cloudflare dashboard only, remove any route and routes keys from your wrangler.toml file. Then add workers_dev = false to your wrangler.toml file. For more information, refer to Deprecations.

Wrangler will not delete your secrets (encrypted environment variables) unless you run wrangler secret delete <key>.