Commands
Wrangler offers a number of commands to manage your Cloudflare Workers.
- docs - Open this page in your default browser.
- init - Create a new project from a variety of web frameworks and templates.
- generate - Create a Wrangler project using an existing Workers template ↗.
- d1 - Interact with D1.
- vectorize - Interact with Vectorize indexes.
- hyperdrive - Manage your Hyperdrives.
- deploy - Deploy your Worker to Cloudflare.
- dev - Start a local server for developing your Worker.
- publish - Publish your Worker to Cloudflare.
- delete - Delete your Worker from Cloudflare.
- kv namespace - Manage Workers KV namespaces.
- kv key - Manage key-value pairs within a Workers KV namespace.
- kv bulk - Manage multiple key-value pairs within a Workers KV namespace in batches.
- r2 bucket - Manage Workers R2 buckets.
- r2 object - Manage Workers R2 objects.
- secret - Manage the secret variables for a Worker.
- secret:bulk - Manage multiple secret variables for a Worker.
- workflows - Manage and configure Workflows.
- tail - Start a session to livestream logs from a deployed Worker.
- pages - Configure Cloudflare Pages.
- queues - Configure Workers Queues.
- login - Authorize Wrangler with your Cloudflare account using OAuth.
- logout - Remove Wrangler’s authorization for accessing your account.
- whoami - Retrieve your user information and test your authentication configuration.
- versions - Retrieve details for recent versions.
- deployments - Retrieve details for recent deployments.
- rollback - Roll back to a recent deployment.
- dispatch-namespace - Interact with a dispatch namespace.
- mtls-certificate - Manage certificates used for mTLS connections.
- types - Generate types from bindings and module rules in configuration.
This page provides a reference for Wrangler commands.
Since Cloudflare recommends installing Wrangler locally in your project (rather than globally), the way you run Wrangler will depend on your specific setup and package manager.
You can add Wrangler commands that you use often as scripts in your project’s package.json file:
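For example, a minimal setup might register the most common commands as scripts (the script names here are illustrative):
{
  "scripts": {
    "deploy": "wrangler deploy",
    "dev": "wrangler dev"
  }
}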
You can then run them using your package manager of choice:
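Depending on your package manager, running the deploy script above might look like one of the following:
npm run deploy
yarn deploy
pnpm run deploy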
Open the Cloudflare developer documentation in your default browser.
COMMAND
string optional- The Wrangler command you want to learn more about. This opens your default browser to the section of the documentation that describes the command.
Create a new project via the create-cloudflare-cli (C3) tool. A variety of web frameworks are available to choose from as well as templates. Dependencies are installed by default, with the option to deploy your project immediately.
NAME
string optional (default: name of working directory) - The name of the Workers project. This is both the directory name and the name property in the generated wrangler.toml configuration file.
--yes
boolean optional- Answer yes to any prompts for new projects.
--from-dash
string optional - Fetch a Worker initialized from the dashboard. This is done by passing the flag and the Worker name: wrangler init --from-dash <WORKER_NAME>. The --from-dash command will not automatically sync changes made to the dashboard after the command is used. Therefore, it is recommended that you continue using the CLI.
Create a new project using an existing Workers template ↗.
NAME
string optional (default: name of working directory) - The name of the Workers project. This is both the directory name and the name property in the generated wrangler.toml configuration file.
TEMPLATE
string optional- The URL of a GitHub template, with a default worker-template ↗. Browse a list of available templates on the cloudflare/workers-sdk ↗ repository.
Interact with Cloudflare’s D1 service.
Creates a new D1 database, and provides the binding and UUID that you will put in your wrangler.toml
file.
DATABASE_NAME
string required- The name of the new D1 database.
--location
string optional- Provide an optional location hint for your database leader.
- Available options include weur (Western Europe), eeur (Eastern Europe), apac (Asia Pacific), oc (Oceania), wnam (Western North America), and enam (Eastern North America).
Get information about a D1 database, including the current database size and state.
DATABASE_NAME
string required- The name of the D1 database to get information about.
--json
boolean optional- Return output as JSON rather than a table.
List all D1 databases in your account.
--json
boolean optional- Return output as JSON rather than a table.
Delete a D1 database.
DATABASE_NAME
string required- The name of the D1 database to delete.
-y, --skip-confirmation
boolean optional- Skip deletion confirmation prompt.
Execute a query on a D1 database.
DATABASE_NAME
string required- The name of the D1 database to execute a query on.
--command
string optional- The SQL query you wish to execute.
--file
string optional- Path to the SQL file you wish to execute.
-y, --yes
boolean optional - Answer yes to any prompts.
--local
boolean (default: true) optional- Execute commands/files against a local database for use with wrangler dev.
--remote
boolean (default: false) optional- Execute commands/files against a remote D1 database for use with wrangler dev --remote.
--persist-to
string optional - Specify directory to use for local persistence (for use in combination with --local).
--json
boolean optional- Return output as JSON rather than a table.
--preview
boolean optional - Execute commands/files against a preview D1 database (as defined by preview_database_id in wrangler.toml).
--batch-size
number optional- Number of queries to send in a single batch.
Export a D1 database or table’s schema and/or content to a .sql
file.
DATABASE_NAME
string required- The name of the D1 database to export.
--remote
boolean (default: false) optional- Execute commands/files against a remote D1 database for use with wrangler dev --remote.
--output
string optional- Path to the SQL file for your export.
--table
string optional- The name of the table within a D1 database to export.
--no-data
boolean (default: false) optional - Controls whether the exported SQL file contains database data. Note that --no-data=true is not recommended due to a known Wrangler limitation that interprets the value as false.
--no-schema
boolean (default: false) optional - Controls whether the exported SQL file contains the database schema. Note that --no-schema=true is not recommended due to a known Wrangler limitation that interprets the value as false.
Restore a database to a specific point-in-time using Time Travel.
DATABASE_NAME
string required- The name of the D1 database to execute a query on.
--bookmark
string optional- A D1 bookmark representing the state of a database at a specific point in time.
--timestamp
string optional - A UNIX timestamp or JavaScript date-time string within the last 30 days.
--json
boolean optional- Return output as JSON rather than a table.
Inspect the current state of a database for a specific point-in-time using Time Travel.
DATABASE_NAME
string required- The name of the D1 database to execute a query on.
--timestamp
string optional - A UNIX timestamp or JavaScript date-time string within the last 30 days.
--json
boolean optional- Return output as JSON rather than a table.
Initiate a D1 backup.
DATABASE_NAME
string required- The name of the D1 database to backup.
List all available backups.
DATABASE_NAME
string required- The name of the D1 database to list the backups of.
Restore a backup into a D1 database.
DATABASE_NAME
string required- The name of the D1 database to restore the backup into.
BACKUP_ID
string required- The ID of the backup you wish to restore.
Download existing data to your local machine.
DATABASE_NAME
string required- The name of the D1 database you wish to download the backup of.
BACKUP_ID
string required- The ID of the backup you wish to download.
--output
string optional - The .sqlite3 file to write to (defaults to '<DB_NAME>.<SHORT_BACKUP_ID>.sqlite3').
Create a new migration.
This will generate a new versioned file inside the migrations folder. Name your migration file as a description of your change. This will make it easier for you to find your migration in the migrations folder. An example filename looks like:
0000_create_user_table.sql
The filename will include a version number and the migration name you specify below.
DATABASE_NAME
string required- The name of the D1 database you wish to create a migration for.
MIGRATION_NAME
string required- A descriptive name for the migration you wish to create.
View a list of unapplied migration files.
DATABASE_NAME
string required- The name of the D1 database you wish to list unapplied migrations for.
--local
boolean optional- Show the list of unapplied migration files on your locally persisted D1 database.
--remote
boolean (default: false) optional- Show the list of unapplied migration files on your remote D1 database.
--persist-to
string optional - Specify directory to use for local persistence (for use in combination with --local).
--preview
boolean optional - Show the list of unapplied migration files on your preview D1 database (as defined by preview_database_id in wrangler.toml).
Apply any unapplied migrations.
This command will prompt you to confirm the migrations you are about to apply. Confirm that you would like to proceed. After, a backup will be captured.
The progress of each migration will be printed in the console.
When running the apply command in a CI/CD environment or another non-interactive command line, the confirmation step will be skipped, but the backup will still be captured.
If applying a migration results in an error, this migration will be rolled back, and the previous successful migration will remain applied.
DATABASE_NAME
string required- The name of the D1 database you wish to apply your migrations on.
--env
string optional- Specify which environment configuration to use for the D1 binding.
--local
boolean (default: true) optional- Execute any unapplied migrations on your locally persisted D1 database.
--remote
boolean (default: false) optional- Execute any unapplied migrations on your remote D1 database.
--persist-to
string optional - Specify directory to use for local persistence (for use in combination with --local).
--preview
boolean optional - Execute any unapplied migrations on your preview D1 database (as defined by preview_database_id in wrangler.toml).
--batch-size
number optional- Number of queries to send in a single batch.
Manage Hyperdrive database configurations.
Create a new Hyperdrive configuration.
CONFIG_NAME
string required- The name of the Hyperdrive configuration to create.
--connection-string
string optional - The database connection string in the form postgres://user:password@hostname:port/database.
--origin-host
string optional- The hostname or IP address Hyperdrive should connect to.
--origin-port
number optional- The database port to connect to.
--origin-scheme
string optional- The scheme used to connect to the origin database, for example, postgresql or postgres.
--database
string optional- The database (name) to connect to. For example, Postgres or defaultdb.
--origin-user
string optional- The username used to authenticate to the database.
--origin-password
string optional- The password used to authenticate to the database.
--access-client-id
string optional - The Client ID of the Access token to use when connecting to the origin database. Must be set with a Client Access Secret. Mutually exclusive with origin-port.
--access-client-secret
string optional - The Client Secret of the Access token to use when connecting to the origin database. Must be set with a Client Access ID. Mutually exclusive with origin-port.
--caching-disabled
boolean optional- Disables the caching of SQL responses.
--max-age
number optional- Specifies max duration for which items should persist in the cache, cannot be set when caching is disabled.
--swr
number optional- Stale While Revalidate - Indicates the number of seconds cache may serve the response after it becomes stale, cannot be set when caching is disabled.
Update an existing Hyperdrive configuration.
ID
string required- The ID of the Hyperdrive configuration to update.
--name
string optional- The new name of the Hyperdrive configuration.
--connection-string
string optional - The database connection string in the form postgres://user:password@hostname:port/database.
--origin-host
string optional- The new database hostname or IP address Hyperdrive should connect to.
--origin-port
string optional- The new database port to connect to.
--origin-scheme
string optional- The scheme used to connect to the origin database, for example, postgresql or postgres.
--database
string optional- The new database (name) to connect to. For example, Postgres or defaultdb.
--origin-user
string optional- The new username used to authenticate to the database.
--origin-password
string optional- The new password used to authenticate to the database.
--access-client-id
string optional - The Client ID of the Access token to use when connecting to the origin database. Must be set with a Client Access Secret. Mutually exclusive with origin-port.
--access-client-secret
string optional - The Client Secret of the Access token to use when connecting to the origin database. Must be set with a Client Access ID. Mutually exclusive with origin-port.
--caching-disabled
boolean optional- Disables the caching of SQL responses.
--max-age
number optional- Specifies max duration for which items should persist in the cache, cannot be set when caching is disabled.
--swr
number optional- Stale While Revalidate - Indicates the number of seconds cache may serve the response after it becomes stale, cannot be set when caching is disabled.
List all Hyperdrive configurations.
Delete an existing Hyperdrive configuration.
ID
string required- The name of the Hyperdrive configuration to delete.
Get an existing Hyperdrive configuration.
ID
string required- The name of the Hyperdrive configuration to get.
Interact with a Vectorize vector database.
Creates a new vector index, and provides the binding and name that you will put in your wrangler.toml
file.
INDEX_NAME
string required- The name of the new index to create. Must be unique for an account and cannot be changed after creation or reused after the deletion of an index.
--dimensions
number required- The vector dimension width to configure the index for. Cannot be changed after creation.
--metric
string required - The distance metric to use for calculating vector distance. Must be one of cosine, euclidean, or dot-product.
--description
string optional- A description for your index.
--deprecated-v1
boolean optional- Create a legacy Vectorize index. Please note that legacy Vectorize indexes are on a deprecation path.
List all Vectorize indexes in your account, including the configured dimensions and distance metric.
--deprecated-v1
boolean optional- List legacy Vectorize indexes. Please note that legacy Vectorize indexes are on a deprecation path.
Get details about an individual index, including its configuration.
INDEX_NAME
string required- The name of the index to fetch details for.
--deprecated-v1
boolean optional- Get a legacy Vectorize index. Please note that legacy Vectorize indexes are on a deprecation path.
Get some additional information about an individual index, including the vector count and details about the last processed mutation.
INDEX_NAME
string required- The name of the index to fetch details for.
Delete a Vectorize index.
INDEX_NAME
string required- The name of the Vectorize index to delete.
--force
boolean optional- Skip confirmation when deleting the index (Note: This is not a recoverable operation).
--deprecated-v1
boolean optional- Delete a legacy Vectorize index. Please note that legacy Vectorize indexes are on a deprecation path.
Insert vectors into an index.
INDEX_NAME
string required- The name of the Vectorize index to upsert vectors in.
--file
string required- A file containing the vectors to insert in newline-delimited JSON (NDJSON) format.
--batch-size
number optional - The number of vectors to insert at a time (default: 1000).
--deprecated-v1
boolean optional- Insert into a legacy Vectorize index. Please note that legacy Vectorize indexes are on a deprecation path.
Upsert vectors into an index. Existing vectors in the index would be overwritten.
INDEX_NAME
string required- The name of the Vectorize index to upsert vectors in.
--file
string required- A file containing the vectors to insert in newline-delimited JSON (NDJSON) format.
--batch-size
number optional - The number of vectors to insert at a time (default: 5000).
Query a Vectorize index for similar vectors.
INDEX_NAME
string required- The name of the Vectorize index to query.
--vector
array optional - Vector against which the Vectorize index is queried. Either this or the vector-id param must be provided.
--vector-id
string optional - Identifier for a vector that is already present in the index against which the index is queried. Either this or the vector param must be provided.
--top-k
number optional - The number of vectors to query (default: 5).
--return-values
boolean optional - Enable to return vector values in the response (default: false).
--return-metadata
string optional - Enable to return vector metadata in the response. Must be one of none, indexed, or all (default: none).
--namespace
string optional- Query response to only include vectors from this namespace.
--filter
string optional - Filter vectors based on this metadata filter. Example: '{ "p1": "abc", "p2": { "$ne": true }, "p3": 10, "p4": false, "nested.p5": "abcd" }'
Fetch vectors from a Vectorize index using the provided ids.
INDEX_NAME
string required- The name of the Vectorize index from which vectors need to be fetched.
--ids
array required- List of ids for which vectors must be fetched.
Delete vectors in a Vectorize index using the provided ids.
INDEX_NAME
string required- The name of the Vectorize index from which vectors need to be deleted.
--ids
array required- List of ids corresponding to the vectors that must be deleted.
Enable metadata filtering on the specified property.
INDEX_NAME
string required- The name of the Vectorize index for which metadata index needs to be created.
--property-name
string required- Metadata property for which metadata filtering should be enabled.
--type
string required - Data type of the property. Must be one of string, number, or boolean.
List metadata properties on which metadata filtering is enabled.
INDEX_NAME
string required- The name of the Vectorize index for which metadata indexes need to be fetched.
Disable metadata filtering on the specified property.
INDEX_NAME
string required- The name of the Vectorize index for which metadata index needs to be disabled.
--property-name
string required- Metadata property for which metadata filtering should be disabled.
Start a local server for developing your Worker.
SCRIPT
string - The path to an entry point for your Worker. Only required if your wrangler.toml does not include a main key (for example, main = "index.js").
--name
string optional- Name of the Worker.
--no-bundle
boolean (default: false) optional- Skip Wrangler’s build steps. Particularly useful when using custom builds. Refer to Bundling ↗ for more information.
--env
string optional- Perform on a specific environment.
--compatibility-date
string optional- A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used.
--compatibility-flags
,--compatibility-flag
string[] optional- Flags to use for compatibility checks.
--latest
boolean (default: true) optional- Use the latest version of the Workers runtime.
--ip
string optional - IP address to listen on, defaults to localhost.
--port
number optional- Port to listen on.
--inspector-port
number optional- Port for devtools to connect to.
--routes
,--route
string[] optional- Routes to upload. For example: --route example.com/*.
--host
string optional- Host to forward requests to, defaults to the zone of project.
--local-protocol
'http'|'https' (default: http) optional- Protocol to listen to requests on.
--https-key-path
string optional- Path to a custom certificate key.
--https-cert-path
string optional- Path to a custom certificate.
--local-upstream
string optional - Host to act as origin in local mode, defaults to dev.host or route.
--assets
string optional beta- Folder of static assets to be served. Replaces Workers Sites. Visit assets for more information.
--legacy-assets
string optional deprecated, use `--assets`- Folder of static assets to be served.
--site
string optional deprecated, use `--assets`- Folder of static assets for Workers Sites.
--site-include
string[] optional deprecated - Array of .gitignore-style patterns that match file or directory names from the sites directory. Only matched items will be uploaded.
--site-exclude
string[] optional deprecated - Array of .gitignore-style patterns that match file or directory names from the sites directory. Matched items will not be uploaded.
--upstream-protocol
'http'|'https' (default: https) optional- Protocol to forward requests to host on.
--var
key:value[] optional - Array of key:value pairs to inject as variables into your code. The value will always be passed as a string to your Worker. For example, --var git_hash:$(git rev-parse HEAD) test:123 makes the git_hash and test variables available in your Worker’s env. This flag is an alternative to defining vars in your wrangler.toml. If defined in both places, this flag’s values will be used.
--define
key:value[] optional - Array of key:value pairs to replace global identifiers in your code. For example, --define GIT_HASH:$(git rev-parse HEAD) will replace all uses of GIT_HASH with the actual value at build time. This flag is an alternative to defining define in your wrangler.toml. If defined in both places, this flag’s values will be used.
--tsconfig
string optional - Path to a custom tsconfig.json file.
--minify
boolean optional- Minify the Worker.
--node-compat
boolean optional- Enable Node.js compatibility.
--persist-to
string optional- Specify directory to use for local persistence.
--remote
boolean (default: false) optional- Develop against remote resources and data stored on Cloudflare’s network.
--test-scheduled
boolean (default: false) optional - Exposes a /__scheduled fetch route which will trigger a scheduled event (Cron Trigger) for testing during development. To simulate different cron patterns, a cron query parameter can be passed in: /__scheduled?cron=*+*+*+*+*.
--log-level
'debug'|'info'|'log'|'warn'|'error'|'none' (default: log) optional- Specify Wrangler’s logging level.
--show-interactive-dev-session
boolean (default: true if the terminal supports interactivity) optional- Show the interactive dev session.
--alias
Array<string>
- Specify modules to alias using module aliasing.
wrangler dev is a way to locally test your Worker while developing. With wrangler dev running, send HTTP requests to localhost:8787 and your Worker should execute as expected. You will also see console.log messages and exceptions appearing in your terminal.
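A minimal session might look like the following (the route and port assume the defaults described above):
npx wrangler dev
curl http://localhost:8787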
Deploy your Worker to Cloudflare.
SCRIPT
string - The path to an entry point for your Worker. Only required if your wrangler.toml does not include a main key (for example, main = "index.js").
--name
string optional- Name of the Worker.
--no-bundle
boolean (default: false) optional- Skip Wrangler’s build steps. Particularly useful when using custom builds. Refer to Bundling ↗ for more information.
--env
string optional- Perform on a specific environment.
--outdir
string optional- Path to directory where Wrangler will write the bundled Worker files.
--compatibility-date
string optional- A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used.
--compatibility-flags
,--compatibility-flag
string[] optional- Flags to use for compatibility checks.
--latest
boolean (default: true) optional- Use the latest version of the Workers runtime.
--assets
string optional beta- Folder of static assets to be served. Replaces Workers Sites. Visit assets for more information.
--legacy-assets
string optional deprecated, use `--assets`- Folder of static assets to be served.
--site
string optional deprecated, use `--assets`- Folder of static assets for Workers Sites.
--site-include
string[] optional deprecated - Array of .gitignore-style patterns that match file or directory names from the sites directory. Only matched items will be uploaded.
--site-exclude
string[] optional deprecated - Array of .gitignore-style patterns that match file or directory names from the sites directory. Matched items will not be uploaded.
--var
key:value[] optional - Array of key:value pairs to inject as variables into your code. The value will always be passed as a string to your Worker. For example, --var git_hash:$(git rev-parse HEAD) test:123 makes the git_hash and test variables available in your Worker’s env. This flag is an alternative to defining vars in your wrangler.toml. If defined in both places, this flag’s values will be used.
--define
key:value[] optional - Array of key:value pairs to replace global identifiers in your code. For example, --define GIT_HASH:$(git rev-parse HEAD) will replace all uses of GIT_HASH with the actual value at build time. This flag is an alternative to defining define in your wrangler.toml. If defined in both places, this flag’s values will be used.
--triggers
,--schedule
,--schedules
string[] optional- Cron schedules to attach to the deployed Worker. Refer to Cron Trigger Examples.
--routes
,--route
string[] optional- Routes where this Worker will be deployed. For example: --route example.com/*.
--tsconfig
string optional - Path to a custom tsconfig.json file.
--minify
boolean optional- Minify the bundled Worker before deploying.
--node-compat
boolean optional- Enable Node.js compatibility.
--dry-run
boolean (default: false) optional - Compile a project without actually deploying to live servers. Combined with --outdir, this is also useful for testing the output of npx wrangler deploy. It also gives developers a chance to upload the generated sourcemap to a service like Sentry, so that errors from the Worker can be mapped against source code before the service goes live.
--keep-vars
boolean (default: false) optional- It is recommended best practice to treat your Wrangler developer environment as a source of truth for your Worker configuration, and avoid making changes via the Cloudflare dashboard.
- If you change your environment variables or bindings in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behaviour, set keep-vars to true.
--dispatch-namespace
string optional- Specify the Workers for Platforms dispatch namespace to upload this Worker to.
Publish your Worker to Cloudflare.
Delete your Worker and all associated Cloudflare developer platform resources.
SCRIPT
string - The path to an entry point for your Worker. Only required if your wrangler.toml does not include a main key (for example, main = "index.js").
--name
string optional- Name of the Worker.
--env
string optional- Perform on a specific environment.
--dry-run
boolean (default: false) optional - Do not actually delete the Worker. This is useful for testing the output of wrangler delete.
Manage Workers KV namespaces.
Create a new namespace.
NAMESPACE
string required- The name of the new namespace.
--env
string optional- Perform on a specific environment.
--preview
boolean optional - Interact with a preview namespace (the preview_id value).
The following is an example of using the create command to create a KV namespace called MY_KV.
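Assuming a project-local Wrangler install run through npx, the invocation might look like:
npx wrangler kv namespace create "MY_KV"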
The following is an example of using the create command to create a preview KV namespace called MY_KV.
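Using the same setup, pass the --preview flag:
npx wrangler kv namespace create "MY_KV" --preview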
List all KV namespaces associated with the current account ID.
The following is an example that passes the Wrangler command through the jq command:
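For example (assuming jq is installed locally):
npx wrangler kv namespace list | jq "."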
Delete a given namespace.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to delete.
--namespace-id
string- The ID of the namespace to delete.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
The following is an example of deleting a KV namespace called MY_KV.
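One way to do this, referencing the namespace by its binding:
npx wrangler kv namespace delete --binding=MY_KV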
The following is an example of deleting a preview KV namespace called MY_KV.
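Add the --preview flag to target the preview namespace:
npx wrangler kv namespace delete --binding=MY_KV --preview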
Manage key-value pairs within a Workers KV namespace.
Write a single key-value pair to a particular namespace.
KEY
string required- The key to write to.
VALUE
string optional- The value to write.
--path
optional - When defined, the value is loaded from the file at --path rather than reading it from the VALUE argument. This is ideal for security-sensitive operations because it avoids saving keys and values into your terminal history.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to write to.
--namespace-id
string- The ID of the namespace to write to.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
--ttl
number optional - The lifetime (in number of seconds) that the key-value pair should exist before expiring. Must be at least 60 seconds. This option takes precedence over the expiration option.
--expiration
number optional- The timestamp, in UNIX seconds, indicating when the key-value pair should expire.
--metadata
string optional- Any (escaped) JSON serialized arbitrary object to a maximum of 1024 bytes.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
The following is an example that puts a key-value into the namespace with binding name of MY_KV.
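A representative invocation (the key and value are placeholders):
npx wrangler kv key put --binding=MY_KV "my-key" "some-value"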
The following is an example that puts a key-value into the preview namespace with binding name of MY_KV.
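The same command against the preview namespace:
npx wrangler kv key put --binding=MY_KV "my-key" "some-value" --preview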
The following is an example that puts a key-value into a namespace, with a time-to-live value of 10000 seconds.
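For example:
npx wrangler kv key put --binding=MY_KV "my-key" "some-value" --ttl=10000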
The following is an example that puts a key-value into a namespace, where the value is read from the value.txt file.
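For example:
npx wrangler kv key put --binding=MY_KV "my-key" --path=value.txt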
Output a list of all keys in a given namespace.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to list from.
--namespace-id
string- The ID of the namespace to list from.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
--prefix
string optional- Only list keys that begin with the given prefix.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
Below is an example that passes the Wrangler command through the jq command:
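Again assuming jq is available locally:
npx wrangler kv key list --binding=MY_KV | jq "."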
Read a single value by key from the given namespace.
KEY
string required- The key value to get.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to get from.
--namespace-id
string- The ID of the namespace to get from.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
--text
boolean optional- Decode the returned value as a UTF-8 string.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
The following is an example that gets the value of the "my-key" key from the KV namespace with binding name MY_KV.
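For example:
npx wrangler kv key get --binding=MY_KV "my-key"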
Remove a single key value pair from the given namespace.
KEY
string required- The key value to delete.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to delete from.
--namespace-id
string- The ID of the namespace to delete from.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
The following is an example that deletes the key-value pair with key "my-key" from the KV namespace with binding name MY_KV.
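For example:
npx wrangler kv key delete --binding=MY_KV "my-key"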
Manage multiple key-value pairs within a Workers KV namespace in batches.
Write a JSON file containing an array of key-value pairs to the given namespace.
FILENAME
string required- The JSON file containing an array of key-value pairs to write to the namespace.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to write to.
--namespace-id
string- The ID of the namespace to write to.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
This command takes a JSON file as an argument with a list of key-value pairs to upload. An example of JSON input:
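A minimal sketch of the expected shape (the keys and values are illustrative):
[
  {
    "key": "test_key",
    "value": "test_value",
    "expiration_ttl": 3600
  }
]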
KV namespace values can only store strings. In order to save a complex value, stringify it to JSON:
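For example, an entry whose value is itself JSON might be written as:
[
  {
    "key": "test_key",
    "value": "{\"name\": \"test_value\", \"price\": 10}"
  }
]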
Refer to the full schema for key-value entries uploaded via the bulk API:
key
string required- The key’s name. The name may be 512 bytes maximum. All printable, non-whitespace characters are valid.
value
string required- The UTF-8 encoded string to be stored, up to 25 MB in length.
metadata
object optional- Any arbitrary object (must serialize to JSON) to a maximum of 1,024 bytes.
expiration
number optional- The time, measured in number of seconds since the UNIX epoch, at which the key should expire.
expiration_ttl
number optional - The number of seconds the document should exist before expiring. Must be at least 60 seconds.
base64
boolean optional - When true, the server will decode the value as base64 before storing it. This is useful for writing values that would otherwise be invalid JSON strings, such as images. Defaults to false.
The following is an example of writing all the key-value pairs found in the allthethingsupload.json file.
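For example:
npx wrangler kv bulk put allthethingsupload.json --binding=MY_KV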
Delete all keys read from a JSON file within a given namespace.
FILENAME
string required- The JSON file containing an array of keys to delete from the namespace.
--binding
string - The binding name of the namespace, as stored in the wrangler.toml file, to delete from.
--namespace-id
string- The ID of the namespace to delete from.
--env
string optional- Perform on a specific environment.
--preview
boolean optional- Interact with a preview namespace instead of production.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
This command takes a JSON file as an argument containing an array of keys to delete.
The following is an example of the JSON input:
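A sketch of the expected shape (the key names are illustrative):
["test_key_1", "test_key_2"]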
The following is an example of deleting all the keys found in the allthethingsdelete.json file.
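For example:
npx wrangler kv bulk delete allthethingsdelete.json --binding=MY_KV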
Interact with buckets in an R2 store.
Create a new R2 bucket.
NAME
string required- The name of the new R2 bucket.
--location
string optional- The optional location hint that determines geographic placement of the R2 bucket.
--storage-class
'Standard|InfrequentAccess' optional- The default storage class for objects uploaded to the bucket.
--jurisdiction
string optional- The jurisdiction where the R2 bucket is created. Refer to jurisdictional restrictions.
Delete an R2 bucket.
NAME
string required- The name of the R2 bucket to delete.
List R2 buckets in the current account.
Connect a custom domain to an R2 bucket.
NAME
string required- The name of the R2 bucket to connect a custom domain to.
--domain
string required- The custom domain to connect to the R2 bucket.
--zone-id
string required- The zone ID associated with the custom domain.
--min-tls
'1.0'|'1.1'|'1.2'|'1.3' optional- Set the minimum TLS version for the custom domain (defaults to 1.0 if not set).
--jurisdiction
string optional- The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to jurisdictional restrictions.
--force
boolean optional- Skip confirmation when adding the custom domain.
Remove a custom domain from an R2 bucket.
NAME
string required- The name of the R2 bucket to remove the custom domain from.
--domain
string required- The custom domain to remove from the R2 bucket.
--jurisdiction
string optional- The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to jurisdictional restrictions.
--force
boolean optional- Skip confirmation when removing the custom domain.
Update settings for a custom domain connected to an R2 bucket.
NAME
string required- The name of the R2 bucket associated with the custom domain to update.
--domain
string required- The custom domain whose settings will be updated.
--min-tls
'1.0'|'1.1'|'1.2'|'1.3' optional- Update the minimum TLS version for the custom domain.
--jurisdiction
string optional- The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to jurisdictional restrictions.
List custom domains for an R2 bucket.
NAME
string required- The name of the R2 bucket whose connected custom domains will be listed.
--jurisdiction
string optional- The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to jurisdictional restrictions.
Create an event notification rule for an R2 bucket.
NAME
string required- The name of the R2 bucket to create an event notification rule for.
--event-type
'object-create'|'object-delete'[] required- The type of event(s) that will trigger event notifications.
--queue
string required- The name of the queue that will receive event notification messages.
--prefix
string optional- The prefix that an object must match to emit event notifications (note: regular expressions are not supported).
--suffix
string optional- The suffix that an object must match to emit event notifications (note: regular expressions are not supported).
--description
string optional- A description that can be used to identify the event notification rule after creation.
Remove an event notification rule from a bucket’s event notification configuration.
NAME
string required- The name of the R2 bucket to delete an event notification rule for.
--queue
string required - The name of the queue that corresponds to the event notification rule. If no rule is provided, all event notification rules associated with the queue will be deleted.
--rule
string optional- The ID of the event notification rule to delete.
List the event notification rules for a bucket.
NAME
string required- The name of the R2 bucket to get event notification rules for.
Enable Sippy incremental migration for a bucket.
NAME
string required- The name of the R2 bucket to enable Sippy.
--provider
'AWS'|'GCS' required- The provider of your source object storage bucket.
--bucket
string required- The name of your source object storage bucket.
--r2-key-id
string required- Your R2 Access Key ID. Requires read and write access.
--r2-secret-access-key
string required- Your R2 Secret Access Key. Requires read and write access.
--jurisdiction
string optional- The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to jurisdictional restrictions.
- AWS S3 provider-specific options:
--key-id
string optional- Your AWS Access Key ID. Requires read and list access.
--secret-access-key
string optional- Your AWS Secret Access Key. Requires read and list access.
--region
string optional - The AWS region where your S3 bucket is located. For example: us-west-2.
- Google Cloud Storage provider-specific options:
--service-account-key-file
string optional - The path to your Google Cloud service account key JSON file. This will read the service account key file and populate the client_email and private_key options. Requires read and list access.
--client-email
string optional- The client email for your Google Cloud service account key. Requires read and list access.
--private-key
string optional- The private key for your Google Cloud service account key. Requires read and list access.
- Note that you must provide either service-account-key-file or client_email and private_key for this command to run successfully.
Disable Sippy incremental migration for a bucket.
NAME
string required- The name of the R2 bucket to disable Sippy.
Get the status of Sippy incremental migration for a bucket.
NAME
string required- The name of the R2 bucket to get the status of Sippy.
Interact with R2 objects.
Fetch an object from an R2 bucket.
OBJECT_PATH
string required - The source object path in the form of {bucket}/{key}.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
Create an object in an R2 bucket.
OBJECT_PATH
string required - The destination object path in the form of {bucket}/{key}.
--file
string optional - The path of the file to upload. Note you must provide either --file or --pipe.
--pipe
boolean optional - Enables the file to be piped in, rather than specified with the --file option. Note you must provide either --file or --pipe.
--content-type
string optional- A standard MIME type describing the format of the object data.
--content-disposition
string optional- Specifies presentational information for the object.
--content-encoding
string optional - Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
--content-language
string optional- The language the content is in.
--cache-control
string optional- Specifies caching behavior along the request/reply chain.
--expires
string optional- The date and time at which the object is no longer cacheable.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
Delete an object in an R2 bucket.
OBJECT_PATH
string required - The destination object path in the form of {bucket}/{key}.
--local
boolean optional- Interact with locally persisted data.
--persist-to
string optional- Specify directory for locally persisted data.
Manage the secret variables for a Worker.
This action creates a new version of the Worker and deploys it immediately. To only create a new version of the Worker, use the wrangler versions secret
commands.
Create or replace a secret for a Worker.
KEY
string required- The variable name for this secret to be accessed in the Worker.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
When running this command, you will be prompted to input the secret’s value:
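For example (the secret name is a placeholder):
npx wrangler secret put SECRET_NAME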
The put command can also receive piped input. For example:
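One possible form, piping the value from another command:
echo "my secret value" | npx wrangler secret put SECRET_NAME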
Delete a secret for a Worker.
KEY
string required- The variable name for this secret to be accessed in the Worker.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
List the names of all the secrets for a Worker.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
The following is an example of listing the secrets for the current Worker.
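For example:
npx wrangler secret list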
Upload multiple secrets for a Worker at once.
FILENAME
string optional- The JSON file containing key-value pairs to upload as secrets, in the form {"SECRET_NAME": "secret value", ...}. If omitted, Wrangler expects to receive input from stdin rather than a file.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
The following is an example of uploading secrets from a JSON file redirected to stdin. When complete, the output summary will show the number of secrets uploaded and the number of secrets that failed to upload.
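One possible form, using the secret:bulk name listed above (the filename is illustrative):
npx wrangler secret:bulk < secrets.json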
Manage and configure Workflows.
Lists the registered Workflows for this account.
--page
number optional- Show a specific page from the listing. You can configure page size using “per-page”.
--per-page
number optional- Configure the maximum number of Workflows to show per page.
Manage and interact with specific instances of a Workflow.
List Workflow instances.
WORKFLOW_NAME
string required- The name of a registered Workflow.
Describe a specific instance of a Workflow, including its current status, any persisted state, and per-step outputs.
WORKFLOW_NAME
string required- The name of a registered Workflow.
ID
string required- The ID of a Workflow instance. You can optionally provide latest to refer to the most recently created instance of a Workflow.
Terminate (permanently stop) a Workflow instance.
WORKFLOW_NAME
string required- The name of a registered Workflow.
ID
string required- The ID of a Workflow instance.
Pause (until resumed) a Workflow instance.
WORKFLOW_NAME
string required- The name of a registered Workflow.
ID
string required- The ID of a Workflow instance.
Resume a paused Workflow instance.
WORKFLOW_NAME
string required- The name of a registered Workflow.
ID
string required- The ID of a Workflow instance.
WORKFLOW_NAME
string required- The name of a registered Workflow.
Trigger (create) a Workflow instance.
WORKFLOW_NAME
string required- The name of a registered Workflow.
PARAMS
string optional- The parameters to pass to the Workflow as an event. Must be a JSON-encoded string.
Start a session to livestream logs from a deployed Worker.
WORKER
string required- The name of your Worker or the route the Worker is running on.
--format
'json'|'pretty' optional- The format of the log entries.
--status
'ok'|'error'|'canceled' optional- Filter by invocation status.
--header
string optional- Filter by HTTP header.
--method
string optional- Filter by HTTP method.
--sampling-rate
number optional- Add a fraction of requests to log sampling rate (between 0 and 1).
--search
string optional- Filter by a text match in console.log messages.
--ip
(string|'self')[] optional- Filter by the IP address the request originates from. Use "self" to show only messages from your own IP.
--version-id
string optional- Filter by Worker version.
After starting wrangler tail, you will receive a live feed of console and exception logs for each request your Worker receives.
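For example (substituting your Worker’s name):
npx wrangler tail <WORKER_NAME> --format pretty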
If your Worker has a high volume of traffic, the tail might enter sampling mode. This will cause some of your messages to be dropped and a warning to appear in your tail logs. To prevent messages from being dropped, add the options listed above to filter the volume of tail messages.
If sampling persists after using options to filter messages, consider using instant logs ↗.
Configure Cloudflare Pages.
Develop your full-stack Pages application locally.
DIRECTORY
string optional- The directory of static assets to serve.
--local
boolean optional (default: true)- Run on your local machine.
--ip
string optional- IP address to listen on, defaults to localhost.
--port
number optional (default: 8788)- The port to listen on (serve from).
--binding
string[] optional- Bind an environment variable or secret (for example, --binding <VARIABLE_NAME>=<VALUE>).
--kv
string[] optional- Binding name of KV namespace to bind (for example, --kv <BINDING_NAME>).
--r2
string[] optional- Binding name of R2 bucket to bind (for example, --r2 <BINDING_NAME>).
--d1
string[] optional- Binding name of D1 database to bind (for example, --d1 <BINDING_NAME>).
--do
string[] optional- Binding name of Durable Object to bind (for example, --do <BINDING_NAME>=<CLASS>).
--live-reload
boolean optional (default: false)- Auto reload HTML pages when change is detected.
--compatibility-flag
string[] optional- Runtime compatibility flags to apply.
--compatibility-date
string optional- Runtime compatibility date to apply.
--show-interactive-dev-session
boolean optional (default: true if the terminal supports interactivity)- Show the interactive dev session.
--https-key-path
string optional- Path to a custom certificate key.
--https-cert-path
string optional- Path to a custom certificate.
Download your Pages project config as a wrangler.toml file.
List your Pages projects.
Create a new Cloudflare Pages project.
PROJECT_NAME
string required- The name of your Pages project.
--production-branch
string optional- The name of the production branch of your project.
Delete a Cloudflare Pages project.
PROJECT_NAME
string required- The name of the Pages project to delete.
--yes
boolean optional- Answer "yes" to the confirmation prompt.
List deployments in your Cloudflare Pages project.
--project-name
string optional- The name of the project you would like to list deployments for.
Start a session to livestream logs from your deployed Pages Functions.
DEPLOYMENT
string optional- ID or URL of the deployment to tail. Specify by environment if deployment ID is unknown.
--project-name
string optional- The name of the project you would like to tail.
--environment
'production'|'preview' optional- When not providing a specific deployment ID, specifying environment will grab the latest production or preview deployment.
--format
'json'|'pretty' optional- The format of the log entries.
--status
'ok'|'error'|'canceled' optional- Filter by invocation status.
--header
string optional- Filter by HTTP header.
--method
string optional- Filter by HTTP method.
--sampling-rate
number optional- Add a percentage of requests to log sampling rate.
--search
string optional- Filter by a text match in console.log messages.
--ip
(string|'self')[] optional- Filter by the IP address the request originates from. Use "self" to show only messages from your own IP.
After starting wrangler pages deployment tail, you will receive a live stream of console and exception logs for each request your Functions receive.
Deploy a directory of static assets as a Pages deployment.
BUILD_OUTPUT_DIRECTORY
string optional- The directory of static files to upload. As of Wrangler 3.45.0, this is only required when your Pages project does not have a wrangler.toml file. Refer to the Pages Functions configuration guide for more information.
--project-name
string optional- The name of the project you want to deploy to.
--branch
string optional- The name of the branch you want to deploy to.
--commit-hash
string optional- The SHA to attach to this deployment.
--commit-message
string optional- The commit message to attach to this deployment.
--commit-dirty
boolean optional- Whether or not the workspace should be considered dirty for this deployment.
Publish a directory of static assets as a Pages deployment.
Create or update a secret for a Pages project.
KEY
string required- The variable name for this secret to be accessed in the Pages project.
--project-name
string optional- The name of your Pages project.
Delete a secret from a Pages project.
KEY
string required- The variable name for this secret to be accessed in the Pages project.
--project-name
string optional- The name of your Pages project.
List the names of all the secrets for a Pages project.
--project-name
string optional- The name of your Pages project.
Upload multiple secrets for a Pages project at once.
FILENAME
string optional- The JSON file containing key-value pairs to upload as secrets, in the form {"SECRET_NAME": "secret value", ...}. If omitted, Wrangler expects to receive input from stdin rather than a file.
--project-name
string optional- The name of your Pages project.
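As a sketch, you can pass a JSON file or pipe the JSON over stdin; the file name, project name, and secret values below are placeholders:

```sh
# Upload secrets for a Pages project from a JSON file...
npx wrangler pages secret bulk secrets.json --project-name my-site

# ...or pipe the same JSON over stdin instead of passing a file.
echo '{"API_TOKEN": "example-value"}' | npx wrangler pages secret bulk --project-name my-site
```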
Manage your Workers Queues configurations.
Create a new Queue.
name
string required- The name of the queue to create.
--delivery-delay-secs
number optional- How long a published message should be delayed for, in seconds. Must be a positive integer.
Delete an existing queue.
name
string required- The name of the queue to delete.
List all queues in the current account.
Manage queue consumer configurations.
Add a Worker script as a queue consumer.
queue-name
string required- The name of the queue to add the consumer to.
script-name
string required- The name of the Workers script to add as a consumer of the named queue.
--batch-size
number optional- Maximum number of messages per batch. Must be a positive integer.
--batch-timeout
number optional- Maximum number of seconds to wait to fill a batch with messages. Must be a positive integer.
--message-retries
number optional- Maximum number of retries for each message. Must be a positive integer.
--max-concurrency
number optional- The maximum number of concurrent consumer invocations that will be scaled up to handle incoming message volume. Must be a positive integer.
--retry-delay-secs
number optional- How long a retried message should be delayed for, in seconds. Must be a positive integer.
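For example, a sketch that attaches a hypothetical Worker to a queue with explicit batching settings:

```sh
# Add the Worker "my-consumer-worker" as a consumer of the queue "my-queue".
# The batch settings shown are illustrative values.
npx wrangler queues consumer add my-queue my-consumer-worker --batch-size 10 --batch-timeout 30
```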
Remove a consumer from a queue.
queue-name
string required- The name of the queue to remove the consumer from.
script-name
string required- The name of the Workers script to remove as the consumer.
Authorize Wrangler with your Cloudflare account using OAuth. Wrangler will attempt to automatically open your web browser to login with your Cloudflare account.
If you prefer to use API tokens for authentication, such as in headless or continuous integration environments, refer to Running Wrangler in CI/CD.
--scopes-list
string optional- List all the available OAuth scopes with descriptions.
--scopes $SCOPES
string optional- Allows you to choose your set of OAuth scopes. The set of scopes must be entered as a whitespace-separated list, for example, npx wrangler login --scopes account:read user:read.
If Wrangler fails to open a browser, you can copy and paste the URL generated by wrangler login in your terminal into a browser and log in.
If you are using Wrangler from a remote machine, but run the login flow from your local browser, you will receive the following error message after logging in: This site can't be reached.
To finish the login flow, run wrangler login and go through the login flow in the browser:
- The browser login flow will redirect you to a localhost URL on your machine.
- Leave the login flow active. Open a second terminal session.
- In that second terminal session, use curl or an equivalent request library on the remote machine to fetch this localhost URL. Copy and paste the localhost URL that was generated during the wrangler login flow and run:
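The callback URL is generated per login attempt, so the sketch below uses a placeholder rather than a real URL:

```sh
# On the remote machine, fetch the localhost callback URL produced by the browser flow.
# Replace the placeholder with the full URL, including its query parameters.
curl "<LOCALHOST_URL>"
```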
Remove Wrangler’s authorization for accessing your account. This command will invalidate your current OAuth token.
If you are using CLOUDFLARE_API_TOKEN instead of OAuth, you can log out by deleting your API token in the Cloudflare dashboard:
- Log in to the Cloudflare dashboard ↗.
- Go to My Profile > API Tokens.
- Select the three-dot menu on your Wrangler token.
- Select Delete.
Retrieve your user information and test your authentication configuration.
Upload a new version of your Worker that is not deployed immediately.
--tag
string optional- Add a version tag. Accepts empty string.
--message
string optional- Add a version message. Accepts empty string.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
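For example, a sketch that uploads a new version with a tag and message (both values are illustrative):

```sh
# Upload a new version of the Worker without deploying it.
npx wrangler versions upload --tag v1.1.0 --message "Upload beta of new feature"
```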
Deploy a previously created version of your Worker all at once or create a gradual deployment to incrementally shift traffic to a new version by following an interactive prompt.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
Retrieve details for the 10 most recent versions. Details include Version ID, Created on, Author, Source, and optionally, Tag or Message.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
Create or replace a secret for a Worker. Creates a new version with modified secrets without deploying the Worker.
KEY
string required- The variable name for this secret to be accessed in the Worker.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
Delete a secret for a Worker. Creates a new version with modified secrets without deploying the Worker.
KEY
string required- The variable name for this secret to be accessed in the Worker.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
Upload multiple secrets for a Worker at once. Creates a new version with modified secrets without deploying the Worker.
FILENAME
string optional- The JSON file containing key-value pairs to upload as secrets, in the form {"SECRET_NAME": "secret value", ...}. If omitted, Wrangler expects to receive input from stdin rather than a file.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--env
string optional- Perform on a specific environment.
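As a sketch, assuming a local secrets.json file with placeholder names and values:

```sh
# Upload several Worker secrets at once from a JSON file...
npx wrangler secret:bulk secrets.json

# ...or pipe the JSON over stdin instead of passing a file.
echo '{"API_TOKEN": "example-value"}' | npx wrangler secret:bulk
```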
Apply changes to triggers (Routes or domains and Cron Triggers) when using wrangler versions upload.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
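A minimal sketch:

```sh
# Apply trigger changes (Routes or domains and Cron Triggers) from configuration.
npx wrangler triggers deploy
```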
Deployments track the version(s) of your Worker that are actively serving traffic.
Retrieve details for the 10 most recent deployments. Details include Created on, Author, Source, an optional Message, and metadata about the Version(s) in the deployment.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
Retrieve details for the most recent deployment. Details include Created on, Author, Source, an optional Message, and metadata about the Version(s) in the deployment.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
VERSION_ID
string optional- The ID of the version you wish to roll back to. If not supplied, the rollback command defaults to the version uploaded before the latest version.
--name
string optional- Perform on a specific Worker rather than inheriting from wrangler.toml.
--message
string optional- Add message for rollback. Accepts empty string. When specified, interactive prompts for rollback confirmation and message are skipped.
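For example, a sketch of a non-interactive rollback (the version ID is a placeholder):

```sh
# Roll back to a specific version; providing --message skips the interactive prompts.
npx wrangler rollback <VERSION_ID> --message "Rolling back a faulty release"
```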
List all dispatch namespaces.
Get information about a dispatch namespace.
NAME
string required- The name of the dispatch namespace to get details about.
Create a dispatch namespace.
NAME
string required- The name of the dispatch namespace to create.
Delete a dispatch namespace.
NAME
string required- The name of the dispatch namespace to delete.
Rename a dispatch namespace.
OLD_NAME
string required- The previous name of the dispatch namespace.
NEW_NAME
string required- The new name of the dispatch namespace.
Manage client certificates used for mTLS connections in subrequests.
These certificates can be used in mtls_certificate bindings, which allow a Worker to present the certificate when establishing a connection with an origin that requires client authentication (mTLS).
Upload a client certificate.
--cert
string required- A path to the TLS certificate to upload. Certificate chains are supported.
--key
string required- A path to the private key to upload.
--name
string optional- The name assigned to the mTLS certificate at upload.
The following is an example of using the upload command to upload an mTLS certificate.
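The flags below match the options listed above; the file paths and certificate name are placeholders:

```sh
npx wrangler mtls-certificate upload --cert cert.pem --key key.pem --name my-origin-cert
```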
You can then add this certificate as a binding in your wrangler.toml:
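A sketch of the binding, assuming a binding name of your choosing and using a placeholder for the certificate ID assigned at upload:

```toml
mtls_certificates = [
  { binding = "MY_CERT", certificate_id = "<CERTIFICATE_ID>" }
]
```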
Note that the certificate and private keys must be in separate (typically .pem) files when uploading.
List mTLS certificates associated with the current account ID.
The following is an example of using the list command to list the mTLS certificates associated with your account.
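The command takes no additional flags:

```sh
npx wrangler mtls-certificate list
```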
Delete a client certificate.
--id
string- The ID of the mTLS certificate.
--name
string- The name assigned to the mTLS certificate at upload.
The following is an example of using the delete command to delete an mTLS certificate.
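The certificate name below is a placeholder; you can pass --id instead of --name:

```sh
npx wrangler mtls-certificate delete --name my-origin-cert
```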
Generate types from bindings and module rules in configuration.
PATH
string (default: `./worker-configuration.d.ts`)- The path to where the Env types for your Worker will be written. The path must have a d.ts extension.
--env-interface
string (default: `Env`)- The name of the interface to generate for the environment object.
- Not valid if the Worker uses the Service Worker syntax.
--experimental-include-runtime
string optional (default: `./.wrangler/types/runtime.d.ts`)- The path to where the runtime types file will be written.
- Leave the path blank to use the default option, e.g. npx wrangler types --x-include-runtime.
- A custom path must be relative to the project root, e.g. ./my-runtime-types.d.ts.
- A custom path must have a d.ts extension.
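For example, a minimal sketch (the custom output path is illustrative):

```sh
# Write Env types to the default ./worker-configuration.d.ts...
npx wrangler types

# ...or write them to a custom path, which must end in .d.ts.
npx wrangler types ./types/worker-env.d.ts
```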