In workers-rs ↗, Rust panics were previously non-recoverable. A panic would put the Worker into an invalid state, and further function calls could result in memory overflows or exceptions.
Now, when a panic occurs, in-flight requests will fail with 500 errors, but the Worker automatically and instantly recovers for future requests.
This ensures more reliable deployments. Automatic panic recovery is enabled for all new workers-rs deployments as of version 0.6.5, with no configuration required.
Rust Workers are built with Wasm Bindgen, which treats panics as non-recoverable. After a panic, the entire Wasm application is considered to be in an invalid state.
We now attach a default panic handler in Rust:
```rust
std::panic::set_hook(Box::new(move |panic_info| {
    hook_impl(panic_info);
}));
```

This hook is registered by default in the JS initialization:

```js
import { setPanicHook } from "./index.js";

setPanicHook(function (err) {
  console.error("Panic handler!", err);
});
```

When a panic occurs, we reset the Wasm state to revert the Wasm application to how it was when the application started.
We worked upstream on the Wasm Bindgen project to implement a new `--experimental-reset-state-function` compilation option ↗, which outputs a new `__wbg_reset_state` function. This function clears all internal state related to the Wasm VM and updates all function bindings in place to reference the new WebAssembly instance.
One other necessary change was associating Wasm-created JS objects with an instance identity. If a JS object created by an earlier instance is later passed into a new instance, a dedicated "stale object" error is thrown when using this feature.
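The instance-identity bookkeeping can be pictured with a small sketch. This is hypothetical illustration code, not wasm-bindgen's actual implementation: each JS object is tagged with the generation of the Wasm instance that created it, and resetting the state bumps the generation so objects from earlier instances are rejected as stale.

```javascript
// Hypothetical sketch of instance-identity tracking, not wasm-bindgen's code.
let generation = 0; // identity of the current Wasm instance
const objectGeneration = new WeakMap();

// Called when the Wasm instance hands a JS object to user code
function track(obj) {
  objectGeneration.set(obj, generation);
  return obj;
}

// Called when a JS object is passed back into the Wasm instance
function assertFresh(obj) {
  if (objectGeneration.get(obj) !== generation) {
    throw new Error("stale object: created by an earlier Wasm instance");
  }
}

// Called by the reset routine: objects from earlier instances become stale
function resetState() {
  generation++;
}
```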
Building on this new Wasm Bindgen feature, layered with our new default panic handler, we also added a proxy wrapper to ensure all top-level exported class instantiations (such as for Rust Durable Objects) are tracked and fully reinitialized when resetting the Wasm instance. This was necessary because the workerd runtime will instantiate exported classes, which would then be associated with the Wasm instance.
This approach now provides full panic recovery for Rust Workers on subsequent requests.
Of course, we never want panics, but when they do happen they are isolated and can be investigated further from the error logs, avoiding broader service disruption.
In the future, full support for recoverable panics could be implemented without needing reinitialization at all, utilizing the WebAssembly Exception Handling ↗ proposal, part of the newly announced WebAssembly 3.0 ↗ specification. This would allow unwinding panics as normal JS errors, and concurrent requests would no longer fail.
We're making significant improvements to the reliability of Rust Workers ↗. Join us in `#rust-on-workers` on the Cloudflare Developers Discord ↗ to stay updated.
We recently increased the available disk space from 8 GB to 20 GB for all plans. Building on that improvement, we’re now doubling the CPU power available for paid plans — from 2 vCPU to 4 vCPU.
These changes continue our focus on making Workers Builds faster and more reliable.
| Metric | Free Plan | Paid Plans |
| ------ | --------- | ---------- |
| CPU    | 2 vCPU    | 4 vCPU     |

- Faster build times: Even single-threaded workloads benefit from having more vCPUs
- 2x faster multi-threaded builds: Tools like esbuild ↗ and webpack ↗ can now utilize additional cores, delivering near-linear performance scaling
All other build limits — including memory, build minutes, and timeout — remain unchanged.
To prevent the accidental exposure of applications, we've updated how Worker preview URLs (`<PREVIEW>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`) are handled. We made this change to ensure preview URLs are only active when intentionally configured, improving the default security posture of your Workers.

We performed a one-time update to disable preview URLs for existing Workers where the workers.dev subdomain was also disabled.
Because preview URLs were historically enabled by default, users who had intentionally disabled their workers.dev route may not have realized their Worker was still accessible at a separate preview URL. This update was performed to ensure that using a preview URL is always an intentional, opt-in choice.
If your Worker was affected, its preview URL (`<PREVIEW>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`) will now direct to an informational page explaining this change.

How to Re-enable Your Preview URL
If your preview URL was disabled, you can re-enable it via the Cloudflare dashboard by navigating to your Worker's Settings page and toggling on the Preview URL.
Alternatively, you can use Wrangler by adding the `preview_urls = true` setting to your Wrangler file and redeploying the Worker.

```jsonc
{
  "preview_urls": true
}
```

```toml
preview_urls = true
```

Note: You can set `preview_urls = true` with any Wrangler version that supports the preview URL flag (v3.91.0+). However, we recommend updating to v4.34.0 or newer, as this version defaults `preview_urls` to false, ensuring preview URLs are only enabled by explicit choice.
Three months ago we announced the public beta of remote bindings for local development. Now, we're excited to say that it's available for everyone in Wrangler, Vite, and Vitest without using an experimental flag!
With remote bindings, you can now connect to deployed resources like R2 buckets and D1 databases while running Worker code on your local machine. This means you can test your local code changes against real data and services, without the overhead of deploying for each iteration.
To enable remote bindings, add `"remote": true` to each binding that should use a remote resource running on Cloudflare:

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-04",
  "r2_buckets": [
    {
      "bucket_name": "screenshots-bucket",
      "binding": "screenshots_bucket",
      "remote": true
    }
  ]
}
```

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-04"

[[r2_buckets]]
bucket_name = "screenshots-bucket"
binding = "screenshots_bucket"
remote = true
```

When remote bindings are configured, your Worker still executes locally, but all binding calls are proxied to the deployed resource that runs on Cloudflare's network.
You can try out remote bindings for local development today with:
D1 now detects read-only queries and automatically attempts up to two retries when those queries fail with retryable errors. You can access the number of execution attempts in the returned response metadata property `total_attempts`.

At the moment, only read-only queries are retried, that is, queries containing only the following SQLite keywords: `SELECT`, `EXPLAIN`, `WITH`. Queries containing any SQLite keyword ↗ that leads to database writes are not retried.

The retry success ratio among read-only retryable errors varies from 5% all the way up to 95%, depending on the underlying error and its duration (such as network errors or other internal errors).
The retry success ratio among all retryable errors is lower, indicating that there are write queries that could be retried. We therefore recommend that D1 users continue applying retries in their own code for queries that are not read-only but are idempotent according to the application's business logic.
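An application-level retry for idempotent queries could look like the following minimal sketch. The `runQuery` callback and the transient-error check are illustrative assumptions, not a D1 API:

```javascript
// Minimal sketch of application-level retries for idempotent queries.
// `runQuery` and the retryable-error check are illustrative assumptions.
async function runWithRetries(runQuery, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await runQuery();
    } catch (err) {
      lastError = err;
      // Only retry errors you know are transient (e.g. network errors),
      // and only for queries that are idempotent in your business logic.
      if (!/network/i.test(String(err))) throw err;
    }
  }
  throw lastError;
}
```

Whether a write is safe to retry depends entirely on your application's semantics, so keep the retry predicate conservative.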

D1 ensures that retry attempts do not cause database writes, making automatic retries free of side effects even if a query that causes changes slips through the read-only detection. D1 achieves this by checking for modifications after every query execution; if any write occurred due to a retry attempt, the query is rolled back.
The read-only query detection heuristics are simple for now, and there is room to capture more queries that can be safely retried; this is just the beginning.
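As a rough illustration, a first-keyword heuristic (deliberately simplified, and not D1's actual detection logic) might look like this. Note that it misses cases such as a `WITH` clause that wraps a write, which is exactly why D1 also verifies after execution that no modification occurred:

```javascript
// Simplified, hypothetical read-only check, not D1's actual detection logic.
const READ_ONLY_KEYWORDS = new Set(["SELECT", "EXPLAIN", "WITH"]);

function isProbablyReadOnly(sql) {
  const first = sql.trim().split(/\s+/)[0];
  return first !== undefined && READ_ONLY_KEYWORDS.has(first.toUpperCase());
}
```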
The number of recent versions available for a Worker rollback has been increased from 10 to 100.
This allows you to:

- Promote any of the 100 most recent versions to be the active deployment.
- Split traffic using gradual deployments between your latest code and any of the 100 most recent versions.

You can do this through the Cloudflare dashboard or with Wrangler's rollback command.

Learn more about versioned deployments and rollbacks.
We've shipped a new release for the Agents SDK ↗ bringing full compatibility with AI SDK v5 ↗ and introducing automatic message migration that handles all legacy formats transparently.
This release includes improved streaming and tool support, tool confirmation detection (for "human in the loop" systems), enhanced React hooks with automatic tool resolution, improved error handling for streaming responses, and seamless migration utilities that work behind the scenes.
This makes it ideal for building production AI chat interfaces with Cloudflare Workers AI models, agent workflows, human-in-the-loop systems, or any application requiring reliable message handling across SDK versions — all while maintaining backward compatibility.
Additionally, we've updated workers-ai-provider v2.0.0, the official provider for Cloudflare Workers AI models, to be compatible with AI SDK v5.
The `useAgentChat` hook creates a new chat interface with enhanced v5 capabilities.
```typescript
// Basic chat setup
const { messages, sendMessage, addToolResult } = useAgentChat({
  agent,
  experimental_automaticToolResolution: true,
  tools,
});

// With custom tool confirmation
const chat = useAgentChat({
  agent,
  experimental_automaticToolResolution: true,
  toolsRequiringConfirmation: ["dangerousOperation"],
});
```

Tools are automatically categorized based on their configuration:
```typescript
const tools = {
  // Auto-executes (has execute function)
  getLocalTime: {
    description: "Get current local time",
    inputSchema: z.object({}),
    execute: async () => new Date().toLocaleString(),
  },
  // Requires confirmation (no execute function)
  deleteFile: {
    description: "Delete a file from the system",
    inputSchema: z.object({
      filename: z.string(),
    }),
  },
  // Server-executed (no client confirmation)
  analyzeData: {
    description: "Analyze dataset on server",
    inputSchema: z.object({ data: z.array(z.number()) }),
    serverExecuted: true,
  },
} satisfies Record<string, AITool>;
```

Send messages using the new v5 format with a parts array:
```typescript
// Text message
sendMessage({
  role: "user",
  parts: [{ type: "text", text: "Hello, assistant!" }],
});

// Multi-part message with file
sendMessage({
  role: "user",
  parts: [
    { type: "text", text: "Analyze this image:" },
    { type: "image", image: imageData },
  ],
});
```

Simplified logic for detecting pending tool confirmations:
```typescript
// Find the first tool part awaiting confirmation, if any
const pendingToolPart = messages
  .flatMap((m) => m.parts ?? [])
  .find((part) => isToolUIPart(part) && part.state === "input-available");

// Handle tool confirmation
if (pendingToolPart) {
  await addToolResult({
    toolCallId: pendingToolPart.toolCallId,
    tool: getToolName(pendingToolPart),
    output: "User approved the action",
  });
}
```

Seamlessly handle legacy message formats without code changes.
```typescript
// All these formats are automatically converted:

// Legacy v4 string content
const legacyMessage = {
  role: "user",
  content: "Hello world",
};

// Legacy v4 with tool calls
const legacyWithTools = {
  role: "assistant",
  content: "",
  toolInvocations: [
    {
      toolCallId: "123",
      toolName: "weather",
      args: { city: "SF" },
      state: "result",
      result: "Sunny, 72°F",
    },
  ],
};

// Automatically becomes v5 format:
// {
//   role: "assistant",
//   parts: [{
//     type: "tool-call",
//     toolCallId: "123",
//     toolName: "weather",
//     args: { city: "SF" },
//     state: "result",
//     result: "Sunny, 72°F"
//   }]
// }
```

Migrate tool definitions to use the new `inputSchema` property.

```typescript
// Before (AI SDK v4)
const tools = {
  weather: {
    description: "Get weather information",
    parameters: z.object({
      city: z.string(),
    }),
    execute: async (args) => {
      return await getWeather(args.city);
    },
  },
};

// After (AI SDK v5)
const tools = {
  weather: {
    description: "Get weather information",
    inputSchema: z.object({
      city: z.string(),
    }),
    execute: async (args) => {
      return await getWeather(args.city);
    },
  },
};
```

Seamless integration with Cloudflare Workers AI models through the updated workers-ai-provider v2.0.0.
Use Cloudflare Workers AI models directly in your agent workflows:
```typescript
import { createWorkersAI } from "workers-ai-provider";
import { useAgentChat } from "agents/ai-react";

// Create Workers AI model (v2.0.0 - same API, enhanced v5 internals)
const model = createWorkersAI({
  binding: env.AI,
})("@cf/meta/llama-3.2-3b-instruct");
```

Workers AI models now support v5 file handling with automatic conversion:
```typescript
// Send images and files to Workers AI models
sendMessage({
  role: "user",
  parts: [
    { type: "text", text: "Analyze this image:" },
    {
      type: "file",
      data: imageBuffer,
      mediaType: "image/jpeg",
    },
  ],
});
// Workers AI provider automatically converts to proper format
```

Enhanced streaming support with automatic warning detection:
```typescript
import { streamText } from "ai";

// Streaming with Workers AI models
const result = await streamText({
  model: createWorkersAI({ binding: env.AI })("@cf/meta/llama-3.2-3b-instruct"),
  messages,
  onChunk: (chunk) => {
    // Enhanced streaming with warning handling
    console.log(chunk);
  },
});
```

Update your imports to use the new v5 types:
```typescript
// Before (AI SDK v4)
import type { Message } from "ai";
import { useChat } from "ai/react";

// After (AI SDK v5)
import type { UIMessage } from "ai";
// or alias for compatibility
import type { UIMessage as Message } from "ai";
import { useChat } from "@ai-sdk/react";
```

- Migration Guide ↗ - Comprehensive migration documentation
- AI SDK v5 Documentation ↗ - Official AI SDK migration guide
- An Example PR showing the migration from AI SDK v4 to v5 ↗
- GitHub Issues ↗ - Report bugs or request features
We'd love your feedback! We're particularly interested in feedback on:
- Migration experience - How smooth was the upgrade process?
- Tool confirmation workflow - Does the new automatic detection work as expected?
- Message format handling - Any edge cases with legacy message conversion?
We've updated our "Built with Cloudflare" button to make it easier to share that you're building on Cloudflare with the world. Embed it in your project's README, blog post, or wherever you want to let people know.
Check out the documentation for usage information.
Deploying a static site to Workers is now easier. When you run `wrangler deploy [directory]` or `wrangler deploy --assets [directory]` without an existing configuration file, the Wrangler CLI now guides you through the deployment process with interactive prompts.

Before: required remembering multiple flags and parameters

```sh
wrangler deploy --assets ./dist --compatibility-date 2025-09-09 --name my-project
```

After: simple directory deployment with guided setup

```sh
wrangler deploy dist
# Interactive prompts handle the rest as shown in the example flow below
```

Interactive prompts for missing configuration:
- Wrangler detects when you're trying to deploy a directory of static assets
- Prompts you to confirm the deployment type
- Asks for a project name (with smart defaults)
- Automatically sets the compatibility date to today
Automatic configuration generation:
- Creates a `wrangler.jsonc` file with your deployment settings
- Stores your choices for future deployments
- Eliminates the need to remember complex command-line flags

```sh
# Deploy your built static site
wrangler deploy dist

# Wrangler will prompt:
# ✔ It looks like you are trying to deploy a directory of static assets only. Is this correct? … yes
# ✔ What do you want to name your project? … my-astro-site

# Automatically generates a wrangler.jsonc file and adds it to your project:
# {
#   "name": "my-astro-site",
#   "compatibility_date": "2025-09-09",
#   "assets": {
#     "directory": "dist"
#   }
# }

# Next time you run wrangler deploy, it will use the configuration in your
# newly generated wrangler.jsonc file
wrangler deploy
```

Note: You must use Wrangler version 4.24.4 or later in order to use this feature.
You can now upload up to 100,000 static assets per Worker version
- Paid and Workers for Platforms users can now upload up to 100,000 static assets per Worker version, a 5x increase from the previous limit of 20,000.
- Customers on the free plan still have the same limit as before — 20,000 static assets per Worker version.
- The individual file size limit of 25 MiB remains unchanged for all customers.
This increase allows you to build larger applications with more static assets without hitting limits.
To take advantage of the increased limits, you must use Wrangler version 4.34.0 or higher. Earlier versions of Wrangler will continue to enforce the previous 20,000 file limit.
For more information about Workers static assets, see the Static Assets documentation and Platform Limits.
You can now manage Workers, Versions, and Deployments as separate resources with a new, resource-oriented API (Beta).
This new API is supported in the Cloudflare Terraform provider ↗ and the Cloudflare Typescript SDK ↗, allowing platform teams to manage a Worker's infrastructure in Terraform, while development teams handle code deployments from a separate repository or workflow. We also designed this API with AI agents in mind, as a clear, predictable structure is essential for them to reliably build, test, and deploy applications.
- New beta API endpoints
- Cloudflare TypeScript SDK v5.0.0 ↗
- Cloudflare Go SDK v6.0.0 ↗
- Terraform provider v5.9.0 ↗:
  `cloudflare_worker` ↗, `cloudflare_worker_version` ↗, and `cloudflare_workers_deployments` ↗ resources.
- See full examples in our Infrastructure as Code (IaC) guide

The existing API was originally designed for simple, one-shot script uploads:

```sh
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/workers/scripts/$SCRIPT_NAME" \
  -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F 'metadata={"main_module": "worker.js","compatibility_date": "$today$"}' \
  -F "worker.js=@worker.js;type=application/javascript+module"
```

This API worked for creating a basic Worker, uploading all of its code, and deploying it immediately — but came with challenges:
- A Worker couldn't exist without code: To create a Worker, you had to upload its code in the same API request. This meant platform teams couldn't provision Workers with the proper settings and then hand them off to development teams to deploy the actual code.
- Several endpoints implicitly created deployments: Simple updates like adding a secret or changing a script's content would implicitly create a new version and immediately deploy it.
- Updating a setting was confusing: Configuration was scattered across eight endpoints with overlapping responsibilities. This ambiguity made it difficult for human developers (and even more so for AI agents) to reliably update a Worker via API.
- Scripts used names as primary identifiers: This meant simple renames could turn into a risky migration, requiring you to create a brand new Worker and update every reference. If you were using Terraform, this could inadvertently destroy your Worker altogether.

All endpoints now use simple JSON payloads, with script content embedded as base64-encoded strings -- a more consistent and reliable approach than the previous `multipart/form-data` format.

- Worker: The parent resource representing your application. It has a stable UUID and holds persistent settings like `name`, `tags`, and `logpush`. You can now create a Worker to establish its identity and settings before any code is uploaded.
- Version: An immutable snapshot of your code and its specific configuration, like bindings and `compatibility_date`. Creating a new version is a safe action that doesn't affect live traffic.
- Deployment: An explicit action that directs traffic to a specific version.
Workers are now standalone resources that can be created and configured without any code. Platform teams can provision Workers with the right settings, then hand them off to development teams for implementation.
```typescript
// Step 1: Platform team creates the Worker resource (no code needed)
const worker = await client.workers.beta.workers.create({
  name: "payment-service",
  account_id: "...",
  observability: {
    enabled: true,
  },
});

// Step 2: Development team adds code and creates a version later
const version = await client.workers.beta.workers.versions.create(worker.id, {
  account_id: "...",
  main_module: "worker.js",
  compatibility_date: "$today",
  bindings: [
    /*...*/
  ],
  modules: [
    {
      name: "worker.js",
      content_type: "application/javascript+module",
      content_base64: Buffer.from(scriptContent).toString("base64"),
    },
  ],
});

// Step 3: Deploy explicitly when ready
const deployment = await client.workers.scripts.deployments.create(
  worker.name,
  {
    account_id: "...",
    strategy: "percentage",
    versions: [
      {
        percentage: 100,
        version_id: version.id,
      },
    ],
  },
);
```

If you use Terraform, you can now declare the Worker in your Terraform configuration, manage configuration outside of Terraform in your Worker's `wrangler.jsonc` file, and deploy code changes using Wrangler.

```hcl
resource "cloudflare_worker" "my_worker" {
  account_id = "..."
  name       = "my-important-service"
}

# Manage Versions and Deployments here or outside of Terraform
# resource "cloudflare_worker_version" "my_worker_version" {}
# resource "cloudflare_workers_deployment" "my_worker_deployment" {}
```

Creating a version and deploying it are now always explicit, separate actions - never implicit side effects. To update version-specific settings (like bindings), you create a new version with those changes. The existing deployed version remains unchanged until you explicitly deploy the new one.
```
# Step 1: Create a new version with updated settings (doesn't affect live traffic)
POST /workers/workers/{id}/versions
{
  "compatibility_date": "$today",
  "bindings": [
    {
      "name": "MY_NEW_ENV_VAR",
      "text": "new_value",
      "type": "plain_text"
    }
  ],
  "modules": [...]
}

# Step 2: Explicitly deploy when ready (now affects live traffic)
POST /workers/scripts/{script_name}/deployments
{
  "strategy": "percentage",
  "versions": [
    {
      "percentage": 100,
      "version_id": "new_version_id"
    }
  ]
}
```

Configuration is now logically divided: Worker settings (like `name` and `tags`) persist across all versions, while Version settings (like `bindings` and `compatibility_date`) are specific to each code snapshot.

```
# Worker settings (the parent resource)
PUT /workers/workers/{id}
{
  "name": "payment-service",
  "tags": ["production"],
  "logpush": true
}
```

```
# Version settings (the "code")
POST /workers/workers/{id}/versions
{
  "compatibility_date": "$today",
  "bindings": [...],
  "modules": [...]
}
```

The `/workers/workers/` path now supports addressing a Worker by both its immutable UUID and its mutable name.

```
# Both work for the same Worker
GET /workers/workers/29494978e03748669e8effb243cf2515  # UUID (stable for automation)
GET /workers/workers/payment-service                   # Name (convenient for humans)
```

This dual approach means:
- Developers can use readable names for debugging.
- Automation can rely on stable UUIDs to prevent errors when Workers are renamed.
- Terraform can rename Workers without destroying and recreating them.
- The pre-existing Workers REST API remains fully supported. Once the new API exits beta, we'll provide a migration timeline with ample notice and comprehensive migration guides.
- Existing Terraform resources and SDK methods will continue to be fully supported through the current major version.
- While the Deployments API currently remains on the `/scripts/` endpoint, we plan to introduce a new Deployments endpoint under `/workers/` to match the new API structure.
JavaScript asset responses have been updated to use the `text/javascript` Content-Type header instead of `application/javascript`. While both MIME types are widely supported by browsers, the HTML Living Standard explicitly recommends `text/javascript` as the preferred type going forward.

This change improves:
- Standards alignment: Ensures consistency with the HTML spec and modern web platform guidance.
- Interoperability: Some developer tools, validators, and proxies expect `text/javascript` and may warn or behave inconsistently with `application/javascript`.
- Future-proofing: By following the spec-preferred MIME type, we reduce the risk of deprecation warnings or unexpected behavior in evolving browser environments.
- Consistency: Most frameworks, CDNs, and hosting providers now default to `text/javascript`, so this change matches common ecosystem practice.
Because all major browsers accept both MIME types, this update is backwards compatible and should not cause breakage.
Users will see this change on the next deployment of their assets.
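For instance, a simplified extension-to-MIME mapping following this guidance (a hypothetical sketch, not the actual asset-serving code) could look like:

```javascript
// Simplified sketch of spec-preferred MIME type selection for static assets.
const MIME_BY_EXTENSION = {
  ".js": "text/javascript", // preferred by the HTML Living Standard
  ".mjs": "text/javascript",
  ".css": "text/css",
  ".html": "text/html",
};

function contentTypeFor(path) {
  const dot = path.lastIndexOf(".");
  const ext = dot === -1 ? "" : path.slice(dot).toLowerCase();
  return MIME_BY_EXTENSION[ext] ?? "application/octet-stream";
}
```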
You can now build Workflows using Python. With Python Workflows, you get automatic retries, state persistence, and the ability to run multi-step operations that can span minutes, hours, or weeks using Python’s familiar syntax and the Python Workers runtime.
Python Workflows use the same step-based execution model as JavaScript Workflows, but with Python syntax and access to Python’s ecosystem. Python Workflows also enable DAG (Directed Acyclic Graph) workflows, where you can define complex dependencies between steps using the depends parameter.
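The ordering constraint that a DAG of step dependencies imposes can be sketched language-agnostically; here is a small JavaScript illustration with hypothetical step records (this is not the Workflows engine itself):

```javascript
// Hypothetical sketch of DAG step ordering, not the Workflows engine itself.
// Each step lists the step names it depends on; a step runs only after its deps.
function topologicalOrder(steps) {
  const order = [];
  const done = new Set();
  const visit = (name, path = new Set()) => {
    if (done.has(name)) return;
    if (path.has(name)) throw new Error(`dependency cycle at "${name}"`);
    path.add(name);
    for (const dep of steps[name].depends ?? []) visit(dep, path);
    done.add(name);
    order.push(name);
  };
  for (const name of Object.keys(steps)) visit(name);
  return order;
}
```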
Here’s a simple example:
```python
from workers import Response, WorkerEntrypoint, WorkflowEntrypoint

class PythonWorkflowStarter(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("my first step")
        async def my_first_step():
            # do some work
            return "Hello Python!"

        await my_first_step()

        await step.sleep("my-sleep-step", "10 seconds")

        @step.do("my second step")
        async def my_second_step():
            # do some more work
            return "Hello again!"

        await my_second_step()

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        await self.env.MY_WORKFLOW.create()
        return Response("Hello Workflow creation!")
```

Python Workflows support the same core capabilities as JavaScript Workflows, including sleep scheduling, event-driven workflows, and built-in error handling with configurable retry policies.
To learn more and get started, refer to Python Workflows documentation.
You can now create a client (a Durable Object stub) for a Durable Object with the new `getByName` method, removing the need to convert a Durable Object name to an ID before creating a stub.

```js
// Before: (1) translate the name to an ID, then (2) get a client
const objectId = env.MY_DURABLE_OBJECT.idFromName("foo"); // or .newUniqueId()
const stub = env.MY_DURABLE_OBJECT.get(objectId);

// Now: retrieve a client to the Durable Object directly via its name
const stubByName = env.MY_DURABLE_OBJECT.getByName("foo");

// Use the client to send a request to the remote Durable Object
const rpcResponse = await stubByName.sayHello();
```

Each Durable Object has a globally-unique name, which allows you to send requests to a specific object from anywhere in the world. Thus, a Durable Object can be used to coordinate between multiple clients that need to work together. You can have billions of Durable Objects, providing isolation between application tenants.
To learn more, visit the Durable Objects API Documentation or the getting started guide.
Wrangler's error screen has received several improvements to enhance your debugging experience!
The error screen now features a refreshed design thanks to youch ↗, with support for both light and dark themes, improved source map resolution logic that handles missing source files more reliably, and better error cause display.
(Screenshots: before, after in light theme, after in dark theme.)
Try it out now with `npx wrangler@latest dev` in your Workers project.
Implementations of the `node:fs` module ↗ and the Web File System API ↗ are now available in Workers.

The `node:fs` module provides access to a virtual file system in Workers. You can use it to read and write files, create directories, and perform other file system operations.

The virtual file system is ephemeral, with each individual request having its own isolated temporary file space. Files written to the file system will not persist across requests and will not be shared across requests or across different Workers.
Workers running with the `nodejs_compat` compatibility flag have access to the `node:fs` module by default when the compatibility date is set to `2025-09-01` or later. Support for the API can also be enabled using the `enable_nodejs_fs_module` compatibility flag together with the `nodejs_compat` flag. The `node:fs` module can be disabled using the `disable_nodejs_fs_module` compatibility flag.

```js
import fs from "node:fs";

const config = JSON.parse(fs.readFileSync("/bundle/config.json", "utf-8"));

export default {
  async fetch(request) {
    return new Response(`Config value: ${config.value}`);
  },
};
```

There are a number of initial limitations to the `node:fs` implementation:

- The glob APIs (e.g. `fs.globSync(...)`) are not implemented.
- The file watching APIs (e.g. `fs.watch(...)`) are not implemented.
- The file timestamps (modified time, access time, etc.) are only partially supported. For now, these will always return the Unix epoch.

Refer to the Node.js documentation ↗ for more information on the `node:fs` module and its APIs.

The Web File System API provides access to the same virtual file system as the `node:fs` module, but with a different API surface. The Web File System API is only available in Workers running with the `enable_web_file_system` compatibility flag. The `nodejs_compat` compatibility flag is not required to use the Web File System API.

```js
export default {
  async fetch(request) {
    const root = await navigator.storage.getDirectory();
    const tmp = await root.getDirectoryHandle("tmp", { create: true });
    const file = await tmp.getFileHandle("data.txt", { create: true });
    const writable = await file.createWritable();
    const writer = writable.getWriter();
    await writer.write("Hello, World!");
    await writer.close();
    return new Response("File written successfully!");
  },
};
```

As some parts of the Web File System API are not yet fully standardized, there may be differences between the Workers implementation and the implementations in browsers.
Static Assets: Fixed a bug in how redirect rules ↗ defined in your Worker's `_redirects` file are processed.

If you're serving Static Assets with a `_redirects` file containing a rule like `/ja/* /:splat`, paths with double slashes were previously misinterpreted as external URLs. For example, visiting `/ja//example.com` would incorrectly redirect to `https://example.com` instead of `/example.com` on your domain. This has been fixed, and double slashes now correctly resolve as local paths. Note: Cloudflare Pages was not affected by this issue.
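To make the corrected behavior concrete, here is a hypothetical sketch of the splat substitution for a `/ja/* /:splat` rule (illustration only, not the actual Workers implementation):

```javascript
// Hypothetical sketch of the fixed /ja/* -> /:splat substitution.
function applyJaSplatRule(pathname) {
  if (!pathname.startsWith("/ja/")) return null; // rule does not match
  const splat = pathname.slice("/ja/".length); // e.g. "/example.com"
  const target = "/" + splat; // ":splat" substituted -> "//example.com"
  // Collapse leading slashes so the target resolves as a local path instead
  // of being treated as a protocol-relative external URL.
  return target.replace(/^\/+/, "/");
}
```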
We've updated preview URLs for Cloudflare Workers to support long branch names.
Previously, branch and Worker names exceeding the 63-character DNS limit would cause alias generation to fail, leaving pull requests without aliased preview URLs. This particularly impacted teams relying on descriptive branch naming.
Now, Cloudflare automatically truncates long branch names and appends a unique hash, ensuring every pull request gets a working preview link.
- 63 characters or less: `<branch-name>-<worker-name>` → uses the actual branch name as-is
- 64 characters or more: `<truncated-branch-name>--<hash>-<worker-name>` → uses the truncated name with a 4-character hash
- Hash generation: The hash is derived from the full branch name to ensure uniqueness
- Stable URLs: The same branch always generates the same hash across all commits
- Wrangler 4.30.0 or later: This feature requires updating to wrangler@4.30.0+
- No configuration needed: Works automatically with existing preview URL setups
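The scheme can be sketched as follows. This is a hypothetical reconstruction: the exact truncation budget and hash function Cloudflare uses may differ.

```javascript
import { createHash } from "node:crypto";

// Hypothetical sketch of the preview alias scheme; the real hash function
// and truncation budget used by Cloudflare may differ.
function previewAlias(branch, worker) {
  const full = `${branch}-${worker}`;
  if (full.length <= 63) return full; // short names pass through unchanged
  const hash = createHash("sha256").update(branch).digest("hex").slice(0, 4);
  // Reserve room for "--", the 4-char hash, "-", and the worker name.
  const budget = 63 - worker.length - hash.length - 3;
  return `${branch.slice(0, budget)}--${hash}-${worker}`;
}
```

Because the hash is computed from the full branch name, the same branch always maps to the same alias, keeping preview URLs stable across commits.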
We are changing how Python Workers are structured by default. Previously, handlers were defined at the top level of a module as `on_fetch`, `on_scheduled`, etc.; now they live in an entrypoint class.

Here's an example of how to define a Worker with a fetch handler under the new structure:

```python
from workers import Response, WorkerEntrypoint

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello World!")
```

To keep using the old-style handlers, you can specify the `disable_python_no_global_handlers` compatibility flag in your Wrangler file:

```jsonc
{
  "compatibility_flags": ["disable_python_no_global_handlers"]
}
```

```toml
compatibility_flags = [ "disable_python_no_global_handlers" ]
```

Consult the Python Workers documentation for more details.
The recent Cloudflare Terraform Provider ↗ and SDK releases (such as cloudflare-typescript ↗) bring significant improvements to the Workers developer experience. These updates focus on reliability, performance, and adding Python Workers support.
Resolved several issues with the `cloudflare_workers_script` resource that resulted in unwarranted plan diffs, including:

- Using Durable Objects migrations
- Using some bindings, such as `secret_text`
- Using Smart Placement

A resource should never show a plan diff if there isn't an actual change. This fix reduces unnecessary noise in your Terraform plan and is available in Cloudflare Terraform Provider 5.8.0.
You can now specify `content_file` and `content_sha256` instead of `content`. This prevents the Workers script content from being stored in the state file, which greatly reduces plan diff size and noise. If your workflow syncs plans remotely, this should now happen much faster since there is less data to sync. This is available in Cloudflare Terraform Provider 5.7.0.

```hcl
resource "cloudflare_workers_script" "my_worker" {
  account_id     = "123456789"
  script_name    = "my_worker"
  main_module    = "worker.mjs"
  content_file   = "worker.mjs"
  content_sha256 = filesha256("worker.mjs")
}
```

Fixed the `cloudflare_workers_script` resource to properly support headers and redirects for Assets:

```hcl
resource "cloudflare_workers_script" "my_worker" {
  account_id     = "123456789"
  script_name    = "my_worker"
  main_module    = "worker.mjs"
  content_file   = "worker.mjs"
  content_sha256 = filesha256("worker.mjs")

  assets = {
    config = {
      headers   = file("_headers")
      redirects = file("_redirects")
    }
    # Completion JWT from:
    # https://developers.cloudflare.com/api/resources/workers/subresources/assets/subresources/upload/
    jwt = "jwt"
  }
}
```

Available in Cloudflare Terraform Provider 5.8.0.
Added support for uploading Python Workers (beta) in Terraform. You can now deploy Python Workers with:

```hcl
resource "cloudflare_workers_script" "my_worker" {
  account_id     = "123456789"
  script_name    = "my_worker"
  content_file   = "worker.py"
  content_sha256 = filesha256("worker.py")
  content_type   = "text/x-python"
}
```

Available in Cloudflare Terraform Provider 5.8.0.
Fixed an issue where Workers script versions in the SDK did not allow uploading files. This now works, and the files upload interface has been improved:

```javascript
const scriptContent = `
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!', { status: 200 });
  }
};
`;

client.workers.scripts.versions.create("my-worker", {
  account_id: "123456789",
  metadata: {
    main_module: "my-worker.mjs",
  },
  files: [
    await toFile(Buffer.from(scriptContent), "my-worker.mjs", {
      type: "application/javascript+module",
    }),
  ],
});
```

Will be available in cloudflare-typescript 4.6.0. A similar change will be available in cloudflare-python 4.4.0.
Previously, when creating a KV value like this:

```javascript
await cf.kv.namespaces.values.update("my-kv-namespace", "key1", {
  account_id: "123456789",
  metadata: "my metadata",
  value: JSON.stringify({ hello: "world" }),
});
```

...and recalling it in your Worker like this:

```typescript
const value = await c.env.KV.get<{ hello: string }>("key1", "json");
```

...you'd get back this:

```
{ metadata: 'my metadata', value: "{'hello':'world'}" }
```

instead of the correct value of:

```
{ hello: 'world' }
```

This is fixed in cloudflare-typescript 4.5.0 and will be fixed in cloudflare-python 4.4.0.
A minimal implementation of the MessageChannel API ↗ is now available in Workers. This means that you can use `MessageChannel` to send messages between different parts of your Worker, but not across different Workers.

The `MessageChannel` and `MessagePort` APIs will be available by default at the global scope in any Worker using a compatibility date of `2025-08-15` or later. They are also available using the `expose_global_message_channel` compatibility flag, or can be explicitly disabled using the `no_expose_global_message_channel` compatibility flag.

```javascript
const { port1, port2 } = new MessageChannel();

port2.onmessage = (event) => {
  console.log("Received message:", event.data);
};

// Messages posted on port1 are delivered to port2's handler
port1.postMessage("Hello from port1!");
```

Any value that can be used with the `structuredClone(...)` API can be sent over the port.

There are a number of key limitations to the `MessageChannel` API in Workers:

- Transfer lists are currently not supported. This means that you will not be able to transfer ownership of objects like `ArrayBuffer` or `MessagePort` between ports.
- The `MessagePort` is not yet serializable. This means that you cannot send a `MessagePort` object through the `postMessage` method or via JSRPC calls.
- The `'messageerror'` event is only partially supported. If the `'onmessage'` handler throws an error, the `'messageerror'` event will be triggered; however, it will not be triggered when there are errors serializing or deserializing the message data. Instead, the error will be thrown when the `postMessage` method is called on the sending port.
- The `'close'` event will be emitted on both ports when one of the ports is closed; however, it will not be emitted when the Worker is terminated or when one of the ports is garbage collected.
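The structuredClone constraint and the postMessage-time error behavior can be illustrated with Node.js's `MessageChannel`, which behaves the same way for this particular case:

```javascript
// Illustration of the serialization rule above, using Node.js's global
// MessageChannel: values must survive structuredClone, and non-cloneable
// values fail synchronously when postMessage() is called on the sender.
const { port1, port2 } = new MessageChannel();

port1.postMessage({ nested: { ok: true }, when: new Date(0) }); // cloneable: fine

let cloneError = null;
try {
  port1.postMessage(() => {}); // functions are not structured-cloneable
} catch (err) {
  cloneError = err; // DataCloneError, thrown on the sending side
}

port1.close();
port2.close();
```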
Now, you can use `.env` files to provide secrets and override environment variables on the `env` object during local development with Wrangler and the Cloudflare Vite plugin.

Previously in local development, if you wanted to provide secrets or environment variables, you had to use `.dev.vars` files. This is still supported, but you can now also use `.env` files, which are more familiar to many developers.

You can create a `.env` file in your project root to define environment variables that will be used when running `wrangler dev` or `vite dev`. The `.env` file should be formatted like a `dotenv` file, such as `KEY="VALUE"`:

```
TITLE="My Worker"
API_TOKEN="dev-token"
```

When you run `wrangler dev` or `vite dev`, the environment variables defined in the `.env` file will be available in your Worker code via the `env` object:

```javascript
export default {
  async fetch(request, env) {
    const title = env.TITLE; // "My Worker"
    const apiToken = env.API_TOKEN; // "dev-token"
    const response = await fetch(
      `https://api.example.com/data?token=${apiToken}`,
    );
    return new Response(`Title: ${title} - ` + (await response.text()));
  },
};
```

If your Worker defines multiple environments, you can set different variables for each environment (for example, production or staging) by creating files named `.env.<environment-name>`.

When you use `wrangler <command> --env <environment-name>` or `CLOUDFLARE_ENV=<environment-name> vite dev`, the corresponding environment-specific file will also be loaded and merged with the `.env` file.

For example, if you want to set different environment variables for the `staging` environment, you can create a file named `.env.staging`:

```
API_TOKEN="staging-token"
```

When you run `wrangler dev --env staging` or `CLOUDFLARE_ENV=staging vite dev`, the environment variables from `.env.staging` will be merged onto those from `.env`:

```javascript
export default {
  async fetch(request, env) {
    const title = env.TITLE; // "My Worker" (from `.env`)
    const apiToken = env.API_TOKEN; // "staging-token" (from `.env.staging`, overriding the value from `.env`)
    const response = await fetch(
      `https://api.example.com/data?token=${apiToken}`,
    );
    return new Response(`Title: ${title} - ` + (await response.text()));
  },
};
```

For more information on how to use `.env` files with Wrangler and the Cloudflare Vite plugin, see the Wrangler and Cloudflare Vite plugin documentation.
You can now import `waitUntil` from `cloudflare:workers` to extend your Worker's execution beyond the request lifecycle from anywhere in your code.

Previously, `waitUntil` could only be accessed through the execution context (`ctx`) parameter passed to your Worker's handler functions. This meant that if you needed to schedule background tasks from deeply nested functions or utility modules, you had to pass the `ctx` object through multiple function calls to access `waitUntil`.

Now, you can import `waitUntil` directly and use it anywhere in your Worker without needing to pass `ctx` as a parameter:

```javascript
import { waitUntil } from "cloudflare:workers";

export function trackAnalytics(eventData) {
  const analyticsPromise = fetch("https://analytics.example.com/track", {
    method: "POST",
    body: JSON.stringify(eventData),
  });
  // Extend execution to ensure analytics tracking completes
  waitUntil(analyticsPromise);
}
```

This is particularly useful when you want to:

- Schedule background tasks from utility functions or modules
- Extend execution for analytics, logging, or cleanup operations
- Avoid passing the execution context through multiple layers of function calls

```javascript
import { waitUntil } from "cloudflare:workers";

export default {
  async fetch(request, env, ctx) {
    // Background task that should complete even after the response is sent
    cleanupTempData(env.KV_NAMESPACE);
    return new Response("Hello, World!");
  },
};

function cleanupTempData(kvNamespace) {
  // This function can now use waitUntil without needing ctx
  const deletePromise = kvNamespace.delete("temp-key");
  waitUntil(deletePromise);
}
```

For more information, see the `waitUntil` documentation.
By setting the value of the `cache` property to `no-cache`, you can force Cloudflare's cache to revalidate its contents with the origin when making subrequests from Cloudflare Workers.

```javascript
export default {
  async fetch(req, env, ctx) {
    const request = new Request("https://cloudflare.com", {
      cache: "no-cache",
    });
    const response = await fetch(request);
    return response;
  },
};
```

```typescript
export default {
  async fetch(req, env, ctx): Promise<Response> {
    const request = new Request("https://cloudflare.com", { cache: "no-cache" });
    const response = await fetch(request);
    return response;
  },
} satisfies ExportedHandler<Environment>;
```

When `no-cache` is set, the Worker request will first look for a match in Cloudflare's cache, then:

- If there is a match, a conditional request is sent to the origin, regardless of whether the match is fresh or stale. If the resource has not changed, the cached version is returned. If the resource has changed, it will be downloaded from the origin, updated in the cache, and returned.
- If there is no match, Workers will make a standard request to the origin and cache the response.

This increases compatibility with NPM packages and JavaScript frameworks that rely on setting the `cache` property, which is a cross-platform standard part of the `Request` interface. Previously, if you set the `cache` property on `Request` to `'no-cache'`, the Workers runtime threw an exception.

- Learn how the Cache works with Cloudflare Workers
- Enable Node.js compatibility for your Cloudflare Worker
- Explore Runtime APIs and Bindings available in Cloudflare Workers
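The two-branch flow above can be sketched with a toy cache and origin. This is an illustration of the described semantics, not the Workers runtime; the `cache` and `origin` objects here are hypothetical stand-ins:

```javascript
// Illustrative sketch of the "no-cache" flow: a cache match is ALWAYS
// revalidated with a conditional request, even if the entry is still fresh.
function handleNoCache(cache, origin, url) {
  const cached = cache.get(url);
  if (!cached) {
    const fresh = origin.fetch(url); // no match: standard origin request
    cache.set(url, fresh); // ...and cache the response
    return fresh;
  }
  // Match found: revalidate against the origin using the stored validator
  const result = origin.fetchConditional(url, cached.etag);
  if (result.status === 304) return cached; // unchanged: serve the cached copy
  cache.set(url, result); // changed: update the cache and return the new body
  return result;
}
```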
The latest releases of @cloudflare/agents ↗ bring major improvements to MCP transport protocol support and agent connectivity. Key updates include:
MCP servers can now request user input during tool execution, enabling interactive workflows like confirmations, forms, and multi-step processes. This feature uses durable storage to preserve elicitation state even during agent hibernation, ensuring seamless user interactions across agent lifecycle events.
```typescript
// Request user confirmation via elicitation
const confirmation = await this.elicitInput({
  message: `Are you sure you want to increment the counter by ${amount}?`,
  requestedSchema: {
    type: "object",
    properties: {
      confirmed: {
        type: "boolean",
        title: "Confirm increment",
        description: "Check to confirm the increment",
      },
    },
    required: ["confirmed"],
  },
});
```

Check out our demo ↗ to see elicitation in action.
MCP now supports HTTP streamable transport, which is recommended over SSE. This transport type offers:
- **Better performance**: More efficient data streaming and reduced overhead
- **Improved reliability**: Enhanced connection stability and error recovery
- **Automatic fallback**: If streamable transport is not available, it gracefully falls back to SSE
```typescript
export default MyMCP.serve("/mcp", {
  binding: "MyMCP",
});
```

The SDK automatically selects the best available transport method, gracefully falling back from streamable-http to SSE when needed.
Significant improvements to MCP server connections and transport reliability:
- **Auto transport selection**: Automatically determines the best transport method, falling back from streamable-http to SSE as needed
- **Improved error handling**: Better connection state management and error reporting for MCP servers
- **Reliable prop updates**: Centralized agent property updates ensure consistency across different contexts
You can use `.queue()` to enqueue background work, which is ideal for tasks like processing user messages, sending notifications, etc.

```typescript
class MyAgent extends Agent {
  async doSomethingExpensive(payload) {
    // a long-running process that you want to run in the background
  }

  async queueSomething() {
    // This will NOT block further execution; it runs in the background
    await this.queue("doSomethingExpensive", somePayload);
    // The callback will NOT run until the previous callback is complete
    await this.queue("doSomethingExpensive", someOtherPayload);
    // ... call as many times as you want
  }
}
```

Want to try it yourself? Just define a method like `processMessage` in your agent, and you're ready to scale.
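The ordering guarantee can be pictured with a minimal promise-chain sketch. This illustrates the behavior described above; it is not the agents SDK's implementation:

```javascript
// Minimal sketch of the queue semantics: enqueueing returns immediately,
// but queued callbacks run strictly one at a time, in order.
function makeQueue() {
  let tail = Promise.resolve();
  const enqueue = (fn) => {
    tail = tail.then(fn); // each callback waits for the previous one to finish
  };
  const drained = () => tail; // resolves once everything queued so far is done
  return { enqueue, drained };
}
```

Even if the first callback is slow, the second one only starts after it settles, so results come out in enqueue order.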
Want to build an AI agent that can receive and respond to emails automatically? With the new email adapter and `onEmail` lifecycle method, now you can.
```typescript
export class EmailAgent extends Agent {
  async onEmail(email: AgentEmail) {
    const raw = await email.getRaw();
    const parsed = await PostalMime.parse(raw);
    // Create a response based on the email contents
    // and then send a reply
    await this.replyToEmail(email, {
      fromName: "Email Agent",
      body: `Thanks for your email! You've sent us "${parsed.subject}". We'll process it shortly.`,
    });
  }
}
```

You route incoming mail like this:

```typescript
export default {
  async email(email, env) {
    await routeAgentEmail(email, env, {
      resolver: createAddressBasedEmailResolver("EmailAgent"),
    });
  },
};
```

You can find a full example here ↗.
Custom methods are now automatically wrapped with the agent's context, so calling `getCurrentAgent()` works regardless of where in an agent's lifecycle it's called. Previously this would not work in RPC calls, but now it just works out of the box.

```typescript
export class MyAgent extends Agent {
  async suggestReply(message) {
    // getCurrentAgent() now works correctly, even when called inside an RPC method
    const { agent } = getCurrentAgent()!;
    return generateText({
      prompt: `Suggest a reply to: "${message}" from "${agent.name}"`,
      tools: [replyWithEmoji],
    });
  }
}
```

Try it out and tell us what you build!