
Changelog

New updates and improvements at Cloudflare.

Developer platform
  1. You can now capture a maximum of 256 KB of log events per Workers invocation, helping you gain better visibility into application behavior.

    All console.log() statements, exceptions, request metadata, and headers are automatically captured during the Worker invocation and emitted as a JSON object. Workers Logs deserializes this object before indexing the fields and storing them. You can also capture, transform, and export the JSON object in a Tail Worker.

    256 KB is a 2x increase from the previous 128 KB limit. Once you exceed this limit, further context associated with the request will not be recorded in your logs.

    This limit is automatically applied to all Workers.
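    The 256 KB cap applies to the serialized JSON object for the whole invocation. As a rough sketch (these helper names and the truncation check are illustrative, not part of the Workers API), you can estimate a payload's serialized size before logging it:

    ```javascript
    // Hypothetical helpers: estimate the serialized size of a log payload.
    // The 256 KB figure mirrors the per-invocation limit described above.
    const MAX_LOG_BYTES = 256 * 1024;

    function logSizeBytes(payload) {
      // Workers Logs stores the invocation log as serialized JSON, so the
      // JSON-encoded byte length is a reasonable proxy for log volume.
      return new TextEncoder().encode(JSON.stringify(payload)).length;
    }

    function fitsLogBudget(payload, bytesAlreadyUsed = 0) {
      return bytesAlreadyUsed + logSizeBytes(payload) <= MAX_LOG_BYTES;
    }
    ```

    A check like this can help decide whether to log a full object or a trimmed summary when an invocation emits large payloads.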

  1. Workflows is now Generally Available (or "GA"): in short, it's ready for production workloads. Alongside marking Workflows as GA, we've introduced a number of changes during the beta period, including:

    • A new waitForEvent API that allows a Workflow to wait for an event to occur before continuing execution.
    • Increased concurrency: you can run up to 4,500 Workflow instances concurrently — and this will continue to grow.
    • Improved observability, including new CPU time metrics that allow you to better understand which Workflow instances are consuming the most resources and/or contributing to your bill.
    • Support for vitest for testing Workflows locally and in CI/CD pipelines.

    Workflows also supports the new increased CPU limits that apply to Workers, allowing you to run more CPU-intensive tasks (up to 5 minutes of CPU time per instance), not including the time spent waiting on network calls, AI models, or other I/O bound tasks.

    Human-in-the-loop

    The new step.waitForEvent API allows a Workflow instance to wait on events and data, enabling human-in-the-loop interactions, such as approving or rejecting a request, directly handling webhooks from other systems, or pushing event data to a Workflow while it's running.

    Because Workflows are just code, you can conditionally execute code based on the result of a waitForEvent call, and/or call waitForEvent multiple times in a single Workflow based on what the Workflow needs.

    For example, if you wanted to implement a human-in-the-loop approval process, you could use waitForEvent to wait for a user to approve or reject a request, and then conditionally execute code based on the result.

    JavaScript
    import {
      WorkflowEntrypoint,
      WorkflowStep,
      WorkflowEvent,
    } from "cloudflare:workers";

    export class MyWorkflow extends WorkflowEntrypoint {
      async run(event, step) {
        // Other steps in your Workflow
        let stripeEvent = await step.waitForEvent(
          "receive invoice paid webhook from Stripe",
          { type: "stripe-webhook", timeout: "1 hour" },
        );
        // Rest of your Workflow
      }
    }
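    The event returned by the step is a plain object, so branching on it is ordinary JavaScript. A minimal, framework-free sketch of the decision logic (the approved field is a hypothetical payload shape, not part of the Workflows API):

    ```javascript
    // Hypothetical payload shape: the webhook sender decides what fields
    // the event carries; here we assume a boolean `approved` flag.
    function nextStepFor(event) {
      return event.payload?.approved ? "fulfill-order" : "notify-rejection";
    }
    ```

    Inside a Workflow, the same conditional would choose which subsequent steps to run after the waitForEvent call resolves.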

    You can then send a Workflow an event from an external service via HTTP or from within a Worker using the Workers API for Workflows:

    JavaScript
    export default {
      async fetch(req, env) {
        const instanceId = new URL(req.url).searchParams.get("instanceId");
        const webhookPayload = await req.json();

        let instance = await env.MY_WORKFLOW.get(instanceId);
        // Send our event, with `type` matching the event type defined in
        // our step.waitForEvent call
        await instance.sendEvent({
          type: "stripe-webhook",
          payload: webhookPayload,
        });

        return Response.json({
          status: await instance.status(),
        });
      },
    };

    Read the GA announcement blog to learn more about what landed as part of the Workflows GA.

  1. We're excited to share that you can now use Playwright's browser automation capabilities from Cloudflare Workers.

    Playwright is an open-source package developed by Microsoft for browser automation tasks; it's commonly used to write software tests, debug applications, create screenshots, and crawl pages. As we did with Puppeteer, we forked Playwright and modified it to be compatible with Cloudflare Workers and Browser Rendering.

    Below is an example of how to use Playwright with Browser Rendering to test a TODO application using assertions:

    Assertion example
    import { launch, type BrowserWorker } from "@cloudflare/playwright";
    import { expect } from "@cloudflare/playwright/test";

    interface Env {
      MYBROWSER: BrowserWorker;
    }

    export default {
      async fetch(request: Request, env: Env) {
        const browser = await launch(env.MYBROWSER);
        const page = await browser.newPage();

        await page.goto("https://demo.playwright.dev/todomvc");

        const TODO_ITEMS = [
          "buy some cheese",
          "feed the cat",
          "book a doctors appointment",
        ];

        const newTodo = page.getByPlaceholder("What needs to be done?");
        for (const item of TODO_ITEMS) {
          await newTodo.fill(item);
          await newTodo.press("Enter");
        }

        await expect(page.getByTestId("todo-title")).toHaveCount(TODO_ITEMS.length);
        await Promise.all(
          TODO_ITEMS.map((value, index) =>
            expect(page.getByTestId("todo-title").nth(index)).toHaveText(value),
          ),
        );

        await browser.close();
        return new Response("All assertions passed");
      },
    };

    Playwright is available as an npm package at @cloudflare/playwright, and the code is on GitHub.

    Learn more in our documentation.

  1. Queues now supports the ability to pause message delivery and/or purge (delete) messages on a queue. These operations can be useful when:

    • Your consumer has a bug or downtime, and you want to temporarily stop messages from being processed while you fix the bug
    • You have pushed invalid messages to a queue due to a code change during development, and you want to clean up the backlog
    • Your queue has a backlog that is stale and you want to clean it up to allow new messages to be consumed

    To pause a queue using Wrangler, run the pause-delivery command. Paused queues continue to receive messages, which are stored until delivery resumes. You can unpause a queue at any time with the resume-delivery command.

    Pause and resume a queue
    $ wrangler queues pause-delivery my-queue
    Pausing message delivery for queue my-queue.
    Paused message delivery for queue my-queue.
    $ wrangler queues resume-delivery my-queue
    Resuming message delivery for queue my-queue.
    Resumed message delivery for queue my-queue.

    Purging a queue permanently deletes all messages in the queue. Unlike pausing, purging is an irreversible operation:

    Purge a queue
    $ wrangler queues purge my-queue
    This operation will permanently delete all the messages in queue my-queue. Type my-queue to proceed.
    my-queue
    Purged queue 'my-queue'

    You can also perform these operations using the Queues REST API or from the dashboard page for a queue.

    Pause and purge using the dashboard

    This feature is available on all new and existing queues. Head over to the pause and purge documentation to learn more. And if you haven't used Cloudflare Queues before, get started with the Cloudflare Queues guide.

  1. You can now run a Worker for up to 5 minutes of CPU time for each request.

    Previously, each Workers request ran for a maximum of 30 seconds of CPU time — that is the time that a Worker is actually performing a task (we still allowed unlimited wall-clock time, in case you were waiting on slow resources). This meant that some compute-intensive tasks were impossible to do with a Worker. For instance, you might want to take the cryptographic hash of a large file from R2. If this computation ran for over 30 seconds, the Worker request would have timed out.
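    To make the hashing example concrete, a digest in a Worker uses the same Web Crypto API available in modern runtimes. A minimal sketch (reading the object from R2 is elided; here we hash an in-memory buffer):

    ```javascript
    // Compute a SHA-256 digest with the Web Crypto API (available in
    // Workers, and as globalThis.crypto in recent Node.js versions).
    async function sha256Hex(bytes) {
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }
    ```

    In a Worker you would pass the bytes of the R2 object (for example, from arrayBuffer() on the fetched object); for a very large file, this is exactly the kind of CPU-bound work the 5-minute limit now accommodates.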

    By default, Workers are still limited to 30 seconds of CPU time. This protects developers from incurring accidental cost due to buggy code.

    By changing the cpu_ms value in your Wrangler configuration, you can opt in to any value up to 300,000 (5 minutes).

    JSONC
    {
      // ...rest of your configuration...
      "limits": {
        "cpu_ms": 300000,
      },
      // ...rest of your configuration...
    }

    For more information on the updated limits, see the documentation on Wrangler configuration for cpu_ms and on Workers CPU time limits.

    For building long-running tasks on Cloudflare, we also recommend checking out Workflows and Queues.

  1. Source maps are now Generally Available (GA). They can now be uploaded with a maximum gzipped size of 15 MB; previously, the 15 MB limit applied to the uncompressed size.

    Source maps help map between the original source code and the transformed/minified code that gets deployed to production. By uploading your source map, you allow Cloudflare to map the stack trace from exceptions onto the original source code, making it easier to debug.

    Stack Trace without Source Map remapping

    With no source maps uploaded: notice how all the JavaScript has been minified into one file, so the stack trace is missing file names, shows incorrect line numbers, and incorrectly references js instead of ts.

    Stack Trace with Source Map remapping

    With source maps uploaded: all methods reference the correct files and line numbers.

    Uploading source maps and stack trace remapping happens out of band from the Worker execution, so source maps do not affect upload speed, bundle size, or cold starts. The remapped stack traces are accessible through Tail Workers, Workers Logs, and Workers Logpush.

    To enable source maps, add the following to your Pages Function's or Worker's wrangler configuration:

    JSONC
    {
      "upload_source_maps": true
    }
  1. Update: Mon Mar 24th, 11PM UTC: Next.js has made further changes to address a smaller vulnerability introduced in the patches made to its middleware handling. Users should upgrade to Next.js versions 15.2.4, 14.2.26, 13.5.10 or 12.3.6. If you are unable to immediately upgrade or are running an older version of Next.js, you can enable the WAF rule described in this changelog as a mitigation.

    Update: Mon Mar 24th, 8PM UTC: Next.js has now backported the patch for this vulnerability to cover Next.js v12 and v13. Users on those versions will need to patch to 13.5.9 and 12.3.5 (respectively) to mitigate the vulnerability.

    Update: Sat Mar 22nd, 4PM UTC: We have changed this WAF rule to opt-in only, as sites that use auth middleware with third-party auth vendors were observing failing requests.

    We strongly recommend updating your version of Next.js (if eligible) to the patched versions, as your app will otherwise be vulnerable to an authentication bypass attack regardless of auth provider.

    This rule is opt-in only for sites on the Pro plan or above in the WAF managed ruleset.

    To enable the rule:

    1. Head to Security > WAF > Managed rules in the Cloudflare dashboard for the zone (website) you want to protect.
    2. Click the three dots next to Cloudflare Managed Ruleset and choose Edit
    3. Scroll down and choose Browse Rules
    4. Search for CVE-2025-29927 (ruleId: 34583778093748cc83ff7b38f472013e)
    5. Change the Status to Enabled and the Action to Block. You can optionally set the rule to Log, to validate potential impact before enabling it. Log will not block requests.
    6. Click Next
    7. Scroll down and choose Save

    This will enable the WAF rule and block requests with the x-middleware-subrequest header regardless of Next.js version.

    Create a WAF rule (manual)

    For users on the Free plan, or who want to define a more specific rule, you can create a Custom WAF rule to block requests with the x-middleware-subrequest header regardless of Next.js version.

    To create a custom rule:

    1. Head to Security > WAF > Custom rules in the Cloudflare dashboard for the zone (website) you want to protect.
    2. Give the rule a name - e.g. next-js-CVE-2025-29927
    3. Set the rule's matching parameters to match any request where the x-middleware-subrequest header exists, per the rule expression below.
    Terminal window
    (len(http.request.headers["x-middleware-subrequest"]) > 0)
    4. Set the action to 'block'. If you want to observe the impact before blocking requests, set the action to 'log' (and edit the rule later).
    5. Deploy the rule.

    Next.js CVE-2025-29927

    We've made a WAF (Web Application Firewall) rule available to all sites on Cloudflare to protect against the Next.js authentication bypass vulnerability (CVE-2025-29927) published on March 21st, 2025.

    Note: This rule is not enabled by default as it blocked requests across sites for specific authentication middleware.

    • This managed rule protects sites using Next.js on Workers and Pages, as well as sites using Cloudflare to protect Next.js applications hosted elsewhere.
    • This rule has been made available (but not enabled by default) to all sites as part of our WAF Managed Ruleset and blocks requests that attempt to bypass authentication in Next.js applications.
    • The vulnerability affects almost all Next.js versions, and has been fully patched in Next.js 14.2.26 and 15.2.4. Earlier, interim releases did not fully patch this vulnerability.
    • Users on older versions of Next.js (11.1.4 to 13.5.6) did not originally have a patch available, but the patch for this vulnerability and a subsequent additional patch have been backported to Next.js versions 12.3.6 and 13.5.10 as of Monday, March 24th. Users on Next.js v11 will need to deploy the stated workaround or enable the WAF rule.

    The managed WAF rule mitigates this by blocking external user requests with the x-middleware-subrequest header regardless of Next.js version, but we recommend users using Next.js 14 and 15 upgrade to the patched versions of Next.js as an additional mitigation.
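    To sanity-check what the rule expression matches, the condition translates directly into code: any request carrying a non-empty x-middleware-subrequest header. A hypothetical JavaScript equivalent of the expression, useful for testing your own traffic:

    ```javascript
    // JavaScript equivalent of the WAF expression
    //   (len(http.request.headers["x-middleware-subrequest"]) > 0):
    // match any request with a non-empty x-middleware-subrequest header.
    function matchesCve202529927Rule(headers) {
      const value = headers.get("x-middleware-subrequest");
      return value !== null && value.length > 0;
    }
    ```

    Legitimate external traffic should never carry this header; Next.js uses it internally to mark middleware subrequests, which is why blocking it mitigates the bypass.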

  1. Smart Placement is a unique Cloudflare feature that can make decisions to move your Worker to run in a more optimal location (such as closer to a database). Instead of always running in the default location (the one closest to where the request is received), Smart Placement uses certain “heuristics” (rules and thresholds) to decide if a different location might be faster or more efficient.

    Previously, if these heuristics weren't consistently met, your Worker would revert to running in the default location—even after it had been optimally placed. This meant that if your Worker received minimal traffic for a period of time, the system would reset to the default location, rather than remaining in the optimal one.

    Now, once Smart Placement has identified and assigned an optimal location, temporarily dropping below the heuristic thresholds will not force a return to the default location. Under the previous algorithm, for example, a drop in requests for a few days might send a Worker back to its default location, and the heuristics would have to be met again before it was moved back. This was problematic for workloads that contact a geographically located resource only every few days or longer: such a Worker would never stay optimally placed. This is no longer the case.

  1. We are excited to announce that AI Gateway now supports real-time AI interactions with the new Realtime WebSockets API.

    This new capability allows developers to establish persistent, low-latency connections between their applications and AI models, enabling natural, real-time conversational AI experiences, including speech-to-speech interactions.

    The Realtime WebSockets API works with the OpenAI Realtime API and the Google Gemini Live API, and supports real-time text and speech interactions with models from Cartesia and ElevenLabs.

    Here's how you can connect AI Gateway to OpenAI's Realtime API using WebSockets:

    OpenAI Realtime API example
    import WebSocket from "ws";

    const url =
      "wss://gateway.ai.cloudflare.com/v1/<account_id>/<gateway>/openai?model=gpt-4o-realtime-preview-2024-12-17";

    const ws = new WebSocket(url, {
      headers: {
        "cf-aig-authorization": process.env.CLOUDFLARE_API_KEY,
        Authorization: "Bearer " + process.env.OPENAI_API_KEY,
        "OpenAI-Beta": "realtime=v1",
      },
    });

    ws.on("open", () => {
      console.log("Connected to server.");
      // Send once the socket is open; sending while still connecting throws in ws.
      ws.send(
        JSON.stringify({
          type: "response.create",
          response: { modalities: ["text"], instructions: "Tell me a joke" },
        }),
      );
    });
    ws.on("message", (message) => console.log(JSON.parse(message.toString())));

    Get started by checking out the Realtime WebSockets API documentation.

  1. In Cloudflare Terraform Provider versions 5.2.0 and above, dozens of resources now have proper drift detection. Before this fix, these resources would indicate they needed to be updated or replaced even when there was no real change. Now, you can rely on terraform plan to show only the resources that are actually expected to change.

    This issue affected resources related to these products and features:

    • API Shield
    • Argo Smart Routing
    • Argo Tiered Caching
    • Bot Management
    • BYOIP
    • D1
    • DNS
    • Email Routing
    • Hyperdrive
    • Observatory
    • Pages
    • R2
    • Rules
    • SSL/TLS
    • Waiting Room
    • Workers
    • Zero Trust
  1. In the Cloudflare Terraform Provider versions 5.2.0 and above, sensitive properties of resources are redacted in logs. Sensitive properties in Cloudflare's OpenAPI Schema are now annotated with x-sensitive: true. This results in proper auto-generation of the corresponding Terraform resources, and prevents sensitive values from being shown when you run Terraform commands.

    This issue affected resources related to these products and features:

    • Alerts and Audit Logs
    • Device API
    • DLP
    • DNS
    • Magic Visibility
    • Magic WAN
    • TLS Certs and Hostnames
    • Tunnels
    • Turnstile
    • Workers
    • Zaraz
  1. Document conversion plays an important role when designing and developing AI applications and agents. Workers AI now provides the toMarkdown utility method that developers can use to quickly and conveniently convert and summarize documents in multiple formats to Markdown.

    You can call this new tool using a binding by calling env.AI.toMarkdown(), or by using the REST API endpoint.

    In this example, we fetch a PDF document and an image from R2 and feed them both to env.AI.toMarkdown(). The result is a list of converted documents. Workers AI models are used automatically to detect and summarize the image.

    TypeScript
    import { Env } from "./env";

    export default {
      async fetch(request: Request, env: Env, ctx: ExecutionContext) {
        // https://pub-979cb28270cc461d94bc8a169d8f389d.r2.dev/somatosensory.pdf
        const pdf = await env.R2.get("somatosensory.pdf");

        // https://pub-979cb28270cc461d94bc8a169d8f389d.r2.dev/cat.jpeg
        const cat = await env.R2.get("cat.jpeg");

        return Response.json(
          await env.AI.toMarkdown([
            {
              name: "somatosensory.pdf",
              blob: new Blob([await pdf.arrayBuffer()], {
                type: "application/octet-stream",
              }),
            },
            {
              name: "cat.jpeg",
              blob: new Blob([await cat.arrayBuffer()], {
                type: "application/octet-stream",
              }),
            },
          ]),
        );
      },
    };

    This is the result:

    [
      {
        "name": "somatosensory.pdf",
        "mimeType": "application/pdf",
        "format": "markdown",
        "tokens": 0,
        "data": "# somatosensory.pdf\n## Metadata\n- PDFFormatVersion=1.4\n- IsLinearized=false\n- IsAcroFormPresent=false\n- IsXFAPresent=false\n- IsCollectionPresent=false\n- IsSignaturesPresent=false\n- Producer=Prince 20150210 (www.princexml.com)\n- Title=Anatomy of the Somatosensory System\n\n## Contents\n### Page 1\nThis is a sample document to showcase..."
      },
      {
        "name": "cat.jpeg",
        "mimeType": "image/jpeg",
        "format": "markdown",
        "tokens": 0,
        "data": "The image is a close-up photograph of Grumpy Cat, a cat with a distinctive grumpy expression and piercing blue eyes. The cat has a brown face with a white stripe down its nose, and its ears are pointed upright. Its fur is light brown and darker around the face, with a pink nose and mouth. The cat's eyes are blue and slanted downward, giving it a perpetually grumpy appearance. The background is blurred, but it appears to be a dark brown color. Overall, the image is a humorous and iconic representation of the popular internet meme character, Grumpy Cat. The cat's facial expression and posture convey a sense of displeasure or annoyance, making it a relatable and entertaining image for many people."
      }
    ]

    See Markdown Conversion for more information on supported formats, REST API and pricing.
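    Since each result carries its name and converted Markdown, stitching several documents into a single prompt or combined document is straightforward. A small hypothetical helper over the result shape shown above:

    ```javascript
    // Combine toMarkdown results ({ name, data, ... } per document, as in
    // the response above) into one Markdown string, one section per source.
    function combineMarkdown(results) {
      return results
        .map((r) => `<!-- source: ${r.name} -->\n${r.data}`)
        .join("\n\n");
    }
    ```

    The combined string can then be fed to a text-generation model as grounding context for RAG-style prompts.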

  1. npm i agents

    agents-sdk -> agents Updated

    📝 We've renamed the agents-sdk package to agents!

    If you've already been building with the Agents SDK, you can update your dependencies to use the new package name, and replace references to agents-sdk with agents:

    Terminal window
    # Install the new package
    npm i agents
    Terminal window
    # Remove the old (deprecated) package
    npm uninstall agents-sdk
    # Find instances of the old package name in your codebase
    grep -r 'agents-sdk' .
    # Replace instances of the old package name with the new one
    # (or use find-replace in your editor)
    sed -i 's/agents-sdk/agents/g' $(grep -rl 'agents-sdk' .)

    All future updates will be pushed to the new agents package, and the older package has been marked as deprecated.

    Agents SDK updates New

    We've added a number of big new features to the Agents SDK over the past few weeks, including:

    • You can now set cors: true when using routeAgentRequest to return permissive default CORS headers to Agent responses.
    • The regular client now syncs state on the agent (just like the React version).
    • useAgentChat bug fixes for passing headers/credentials, including properly clearing cache on unmount.
    • Experimental /schedule module with a prompt/schema for adding scheduling to your app (with evals!).
    • Changed the internal zod schema to be compatible with the limitations of Google's Gemini models by removing the discriminated union, allowing you to use Gemini models with the scheduling API.

    We've also fixed a number of bugs with state synchronization and the React hooks.

    JavaScript
    // via https://github.com/cloudflare/agents/tree/main/examples/cross-domain
    export default {
      async fetch(request, env) {
        return (
          // Set { cors: true } to enable CORS headers.
          (await routeAgentRequest(request, env, { cors: true })) ||
          new Response("Not found", { status: 404 })
        );
      },
    };

    Call Agent methods from your client code New

    We've added a new @unstable_callable() decorator for defining methods that can be called directly from clients. You can call these methods (with arguments) from your client code and get native JavaScript objects back.

    JavaScript
    // server.ts
    import { unstable_callable, Agent } from "agents";

    export class Rpc extends Agent {
      // Use the decorator to define a callable method
      @unstable_callable({
        description: "rpc test",
      })
      async getHistory() {
        return this.sql`SELECT * FROM history ORDER BY created_at DESC LIMIT 10`;
      }
    }

    agents-starter Updated

    We've fixed a number of small bugs in the agents-starter project — a real-time, chat-based example application with tool-calling & human-in-the-loop built using the Agents SDK. The starter has also been upgraded to use the latest wrangler v4 release.

    If you're new to Agents, you can install and run the agents-starter project in two commands:

    Terminal window
    # Install it
    $ npm create cloudflare@latest agents-starter -- --template="cloudflare/agents-starter"
    # Run it
    $ npm run start

    You can use the starter as a template for your own Agents projects: open up src/server.ts and src/client.tsx to see how the Agents SDK is used.

    More documentation Updated

    We've heard your feedback on the Agents SDK documentation, and we're shipping more API reference material and usage examples, including:

    • Expanded API reference documentation, covering the methods and properties exposed by the Agents SDK, as well as more usage examples.
    • More Client API documentation that documents useAgent, useAgentChat and the new @unstable_callable RPC decorator exposed by the SDK.
    • New documentation on how to route requests to agents and (optionally) authenticate clients before they connect to your Agents.

    Note that the Agents SDK is continually growing: the type definitions included in the SDK will always include the latest APIs exposed by the agents package.

    If you're still wondering what Agents are, read our blog on building AI Agents on Cloudflare and/or visit the Agents documentation to learn more.

  1. Workers AI is excited to add 4 new models to the catalog, including 2 brand-new classes of model: text-to-speech and reranker. Introducing:

    • @cf/baai/bge-m3 - a multi-lingual embeddings model that supports over 100 languages. It can also simultaneously perform dense retrieval, multi-vector retrieval, and sparse retrieval, with the ability to process inputs of different granularities.
    • @cf/baai/bge-reranker-base - our first reranker model! Rerankers are a type of text classification model that takes a query and context, and outputs a similarity score between the two. When used in RAG systems, you can use a reranker after the initial vector search to find the most relevant documents to return to a user by reranking the outputs.
    • @cf/openai/whisper-large-v3-turbo - a faster, more accurate speech-to-text model. This model was added earlier but is graduating out of beta with pricing included today.
    • @cf/myshell-ai/melotts - our first text-to-speech model that allows users to generate an MP3 with voice audio from inputted text.

    Pricing is available for each of these models on the Workers AI pricing page.

    This docs update also includes a few minor bug fixes to the model schemas for llama-guard and llama-3.2-1b, which you can review on the product changelog.

    Try it out and let us know what you think! Stay tuned for more models in the coming days.
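    As a sketch of how a reranker slots into a RAG flow (the request and response field names here are assumptions based on typical reranker schemas, not a documented contract), you can score candidate contexts against a query and sort by score:

    ```javascript
    // Hypothetical wrapper around a reranker binding. `ai.run` is injected
    // so the sorting logic can be exercised without a Workers AI environment.
    async function rerank(ai, model, query, contexts) {
      const result = await ai.run(model, {
        query,
        contexts: contexts.map((text) => ({ text })),
      });
      // Assumed response shape: [{ id, score }], where id indexes `contexts`.
      return result.response
        .slice()
        .sort((a, b) => b.score - a.score)
        .map(({ id }) => contexts[id]);
    }
    ```

    In a RAG system, you would run this after the initial vector search and return only the top-ranked documents to the user or the downstream model.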

  1. You can now access bindings from anywhere in your Worker by importing the env object from cloudflare:workers.

    Previously, env could only be accessed during a request. This meant that bindings could not be used in the top-level context of a Worker.

    Now, you can import env and access bindings such as secrets or environment variables in the initial setup for your Worker:

    JavaScript
    import { env } from "cloudflare:workers";
    import ApiClient from "example-api-client";

    // API_KEY and LOG_LEVEL now usable in top-level scope
    const apiClient = ApiClient.new({ apiKey: env.API_KEY });
    const LOG_LEVEL = env.LOG_LEVEL || "info";

    export default {
      fetch(req) {
        // you can use apiClient or LOG_LEVEL, configured before any request is handled
      },
    };

    Additionally, env was previously accessed as an argument to a Worker's entrypoint handler, such as fetch. This meant that if you needed to access a binding from a deeply nested function, you had to pass env as an argument through many functions to get it to the right spot. This could be cumbersome in complex codebases.

    Now, you can access the bindings from anywhere in your codebase without passing env as an argument:

    JavaScript
    // helpers.js
    import { env } from "cloudflare:workers";

    // env is *not* an argument to this function
    export async function getValue(key) {
      let prefix = env.KV_PREFIX;
      return await env.KV.get(`${prefix}-${key}`);
    }

    For more information, see documentation on accessing env.

  1. You can now retry your Cloudflare Pages and Workers builds directly from GitHub. No need to switch to the Cloudflare Dashboard for a simple retry!

    Let's say you push a commit, but your build fails due to a spurious error like a network timeout. Instead of going to the Cloudflare Dashboard to manually retry, you can now rerun the build with just a few clicks inside GitHub, keeping you inside your workflow.

    For Pages and Workers projects connected to a GitHub repository:

    1. When a build fails, go to your GitHub repository or pull request
    2. Select the failed Check Run for the build
    3. Select "Details" on the Check Run
    4. Select "Rerun" to trigger a retry build for that commit

    Learn more about Pages Builds and Workers Builds.

  1. We've released the next major version of Wrangler, the CLI for Cloudflare Workers — wrangler@4.0.0. Wrangler v4 is a major release focused on updates to underlying systems and dependencies, along with improvements to keep Wrangler commands consistent and clear.

    You can run the following command to install it in your projects:

    npm i wrangler@latest

    Unlike previous major versions of Wrangler, which were foundational rewrites and rearchitectures, Wrangler v4 includes a much smaller set of changes. If you use Wrangler today, your workflow is very unlikely to change.

    A detailed migration guide is available and if you find a bug or hit a roadblock when upgrading to Wrangler v4, open an issue on the cloudflare/workers-sdk repository on GitHub.

    Going forward, we'll continue supporting Wrangler v3 with bug fixes and security updates until Q1 2026, and with critical security updates until Q1 2027, at which point it will be out of support.

  1. You can now debug your Workers tests with our Vitest integration by running the following command:

    Terminal window
    vitest --inspect --no-file-parallelism

    Attach a debugger to port 9229 and you can start stepping through your Workers tests. This is available with @cloudflare/vitest-pool-workers v0.7.5 or later.

    Learn more in our documentation.

  1. We’re removing some of the restrictions in Email Routing so that AI Agents and task automation can better handle email workflows, including how Workers can reply to incoming emails.

    It's now possible to keep a threaded email conversation with an Email Worker script as long as:

    • The incoming email has valid DMARC.
    • The email is replied to at most once per EmailMessage event.
    • The recipient of the reply matches the incoming sender.
    • The outgoing sender domain matches the domain that received the email.
    • Every time an email passes through Email Routing or another MTA, an entry is added to the References list. We stop accepting replies to emails with more than 100 References entries to prevent abuse or accidental loops.

    Here's an example of a Worker responding to Emails using a Workers AI model:

    AI model responding to emails
    import PostalMime from "postal-mime";
    import { createMimeMessage } from "mimetext";
    import { EmailMessage } from "cloudflare:email";

    export default {
      async email(message, env, ctx) {
        const email = await PostalMime.parse(message.raw);

        const res = await env.AI.run("@cf/meta/llama-2-7b-chat-fp16", {
          messages: [
            {
              role: "user",
              content: email.text ?? "",
            },
          ],
        });

        // message-id is generated by mimetext
        const response = createMimeMessage();
        response.setHeader("In-Reply-To", message.headers.get("Message-ID")!);
        response.setSender("agent@example.com");
        response.setRecipient(message.from);
        response.setSubject("Llama response");
        response.addMessage({
          contentType: "text/plain",
          data:
            res instanceof ReadableStream
              ? await new Response(res).text()
              : res.response!,
        });

        const replyMessage = new EmailMessage(
          "<email>",
          message.from,
          response.asRaw(),
        );

        await message.reply(replyMessage);
      },
    } satisfies ExportedHandler<Env>;

    See Reply to emails from Workers for more information.

  1. You can now access environment variables and secrets on process.env when using the nodejs_compat compatibility flag.

    JavaScript
    const apiClient = ApiClient.new({ apiKey: process.env.API_KEY });
    const LOG_LEVEL = process.env.LOG_LEVEL || "info";

    In Node.js, environment variables are exposed via the global process.env object. Some libraries assume that this object will be populated, and many developers may be used to accessing variables in this way.

    Previously, the process.env object was always empty unless written to in Worker code. This could cause unexpected errors or friction when developing Workers using code previously written for Node.js.

    Now, environment variables, secrets, and version metadata can all be accessed on process.env.

    To opt in to the new process.env behavior now, add the nodejs_compat_populate_process_env compatibility flag to your wrangler.json configuration:

    JSONC
    {
      // Rest of your configuration
      // Add "nodejs_compat_populate_process_env" to your compatibility_flags array
      "compatibility_flags": ["nodejs_compat", "nodejs_compat_populate_process_env"],
      // Rest of your configuration
    }

    After April 1, 2025, populating process.env will be the default behavior for any Worker that has the nodejs_compat compatibility flag enabled and a compatibility_date of "2025-04-01" or later.

  1. Hyperdrive now pools database connections in one or more regions close to your database. This means that your uncached queries and new database connections have up to 90% less latency as measured from connection pools.

    Hyperdrive query latency decreases by 90% during Hyperdrive's gradual rollout of regional pooling.

    By improving placement of Hyperdrive database connection pools, Workers' Smart Placement is now more effective when used with Hyperdrive, ensuring that your Worker can be placed as close to your database as possible.

    With this update, Hyperdrive also uses Cloudflare's standard IP address ranges to connect to your database. This enables you to configure the firewall policies (IP access control lists) of your database to only allow access from Cloudflare and Hyperdrive.

    Refer to the documentation to learn how Hyperdrive makes connecting to regional databases from Cloudflare Workers fast.

    This improvement is enabled on all Hyperdrive configurations.

  1. You can now use bucket locks to set retention policies on your R2 buckets (or specific prefixes within your buckets) for a specified period — or indefinitely. This can help ensure compliance by protecting important data from accidental or malicious deletion.

    Locks give you a few ways to ensure your objects are retained (not deleted or overwritten). You can:

    • Lock objects for a specific duration, for example 90 days.
    • Lock objects until a certain date, for example January 1, 2030.
    • Lock objects indefinitely, until the lock is explicitly removed.

    Buckets can have up to 1,000 bucket lock rules. Each rule specifies which objects it covers (via prefix) and how long those objects must remain retained.
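    To make the rule model concrete, here is a hedged sketch of how such a rule could be evaluated. The field names (prefix, retentionDays, retainUntilMs, retentionIndefinite) are illustrative only, not R2's actual rule schema:

    ```javascript
    // Illustrative model of a bucket lock rule: a rule covers keys by
    // prefix and retains matching objects for a number of days, until a
    // date, or indefinitely. Field names here are hypothetical.
    function isRetained(rule, key, uploadedAtMs, nowMs) {
      if (!key.startsWith(rule.prefix ?? "")) return false; // rule doesn't cover this key
      if (rule.retentionIndefinite) return true; // locked until the rule is removed
      if (rule.retainUntilMs) return nowMs < rule.retainUntilMs; // locked until a date
      const expiresMs = uploadedAtMs + rule.retentionDays * 86_400_000; // days to ms
      return nowMs < expiresMs; // locked for a duration after upload
    }
    ```

    Under this model, an object under logs/ with a 90-day rule is retained on day 89 but no longer on day 91.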

    Here are a couple of examples showing how you can configure bucket lock rules using Wrangler:

    Ensure all objects in a bucket are retained for at least 180 days

    Terminal window
    npx wrangler r2 bucket lock add <bucket> --name 180-days-all --retention-days 180

    Prevent deletion or overwriting of all logs indefinitely (via prefix)

    Terminal window
    npx wrangler r2 bucket lock add <bucket> --name indefinite-logs --prefix logs/ --retention-indefinite

    For more information on bucket locks and how to set retention policies for objects in your R2 buckets, refer to our documentation.

  1. Today, we are thrilled to announce Media Transformations, a new service that brings the magic of Image Transformations to short-form video files, wherever they are stored!

    For customers with a huge volume of short video — generative AI output, e-commerce product videos, social media clips, or short marketing content — uploading those assets to Stream is not always practical. Sometimes, the greatest friction to getting started was the thought of all that migrating. Customers want a simpler solution that retains their current storage strategy to deliver small, optimized MP4 files. Now you can do that with Media Transformations.

    To transform a video or image, enable transformations for your zone, then make a simple request with a specially formatted URL. The result is an MP4 that can be used in an HTML video element without a player library. If your zone already has Image Transformations enabled, then it is ready to optimize videos with Media Transformations, too.

    URL format
    https://example.com/cdn-cgi/media/<OPTIONS>/<SOURCE-VIDEO>

    For example, we have a short video of the mobile in Austin's office. The original is nearly 30 megabytes and wider than necessary for this layout. Consider a simple width adjustment:

    Example URL
    https://example.com/cdn-cgi/media/width=640/<SOURCE-VIDEO>
    https://developers.cloudflare.com/cdn-cgi/media/width=640/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4

    The result is less than 3 megabytes, properly sized, and delivered dynamically so that customers do not have to manage the creation and storage of these transformed assets.
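    Building these URLs by hand is error-prone. A small helper like the following can assemble them; the helper itself is our sketch, while the URL shape and options such as width come from the documented format above:

    ```javascript
    // Builds a Media Transformations URL of the form
    //   https://<zone>/cdn-cgi/media/<OPTIONS>/<SOURCE-VIDEO>
    // where OPTIONS is a comma-separated list of key=value pairs.
    function mediaTransformURL(zone, options, sourceVideo) {
      const opts = Object.entries(options)
        .map(([key, value]) => `${key}=${value}`)
        .join(",");
      return `https://${zone}/cdn-cgi/media/${opts}/${sourceVideo}`;
    }

    console.log(
      mediaTransformURL("example.com", { width: 640 }, "https://example.com/demo.mp4"),
    );
    // → https://example.com/cdn-cgi/media/width=640/https://example.com/demo.mp4
    ```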

    For more information, learn about Transforming Videos.

  1. We've released a release candidate of the next major version of Wrangler, the CLI for Cloudflare Workers — wrangler@4.0.0-rc.0.

    You can run the following command to install it and be one of the first to try it out:

    Terminal window
    npm i wrangler@v4-rc

    Unlike previous major versions of Wrangler, which were foundational rewrites and rearchitectures, version 4 includes a much smaller set of changes. If you use Wrangler today, your workflow is very unlikely to change. Before Wrangler v4 advances past the release candidate stage, we'll share a detailed migration guide in the Workers developer docs. For the vast majority of cases, though, you won't need to do anything to migrate: things will just work as they do today. We are sharing this release candidate ahead of the official v4 release so that you can try it out early and share feedback.

    New JavaScript language features that you can now use with Wrangler v4

    Version 4 of Wrangler updates the version of esbuild that Wrangler uses internally, allowing you to use modern JavaScript language features, including:

    The using keyword from Explicit Resource Management

    The using keyword from the Explicit Resource Management standard makes it easier to work with the JavaScript-native RPC system built into Workers. This means that when you obtain a stub, you can ensure that it is automatically disposed when you exit scope it was created in:

    JavaScript
    async function sendEmail(id, message) {
      using user = await env.USER_SERVICE.findUser(id);
      await user.sendEmail(message);
      // user[Symbol.dispose]() is implicitly called at the end of the scope.
    }

    Import attributes

    Import attributes allow you to denote the type or other attributes of the module that your code imports. For example, you can import a JSON module, using the following syntax:

    JavaScript
    import data from "./data.json" with { type: "json" };

    Other changes

    --local is now the default for all CLI commands

    All commands that access resources (for example, wrangler kv, wrangler r2, wrangler d1) now access local datastores by default, ensuring consistent behavior.

    Clearer policy for the minimum version of Node.js required to run Wrangler

    Moving forward, Wrangler will officially support the active, maintenance, and current versions of Node.js. For Wrangler v4, this means the minimum officially supported version is Node.js v18. This policy mirrors how many other packages and CLIs handle older versions of Node.js, and ensures that as long as you are on a version of Node.js that the Node.js project itself still supports, Wrangler will support it as well.

    Features previously deprecated in Wrangler v3 are now removed in Wrangler v4

    All features deprecated in Wrangler v2 and Wrangler v3 have now been removed. In particular, the following features deprecated during the Wrangler v3 release are removed:

    • Legacy Assets (using wrangler dev/deploy --legacy-assets or the legacy_assets config file property). Instead, we recommend you migrate to Workers assets.
    • Legacy Node.js compatibility (using wrangler dev/deploy --node-compat or the node_compat config file property). Instead, use the nodejs_compat compatibility flag. This includes the functionality from legacy node_compat polyfills and natively implemented Node.js APIs.
    • wrangler version. Instead, use wrangler --version to check the current version of Wrangler.
    • getBindingsProxy() (via import { getBindingsProxy } from "wrangler"). Instead, use the getPlatformProxy() API, which takes exactly the same arguments.
    • usage_model. This no longer has any effect, after the rollout of Workers Standard Pricing.

    We'd love your feedback! If you find a bug or hit a roadblock when upgrading to Wrangler v4, open an issue on the cloudflare/workers-sdk repository on GitHub.

  1. We've released a new REST API for Browser Rendering in open beta, making interacting with browsers easier than ever. This new API provides endpoints for common browser actions, with more to be added in the future.

    With the REST API you can:

    • Capture screenshots – Use /screenshot to take a screenshot of a webpage from a provided URL or HTML.
    • Generate PDFs – Use /pdf to convert web pages into PDFs.
    • Extract HTML content – Use /content to retrieve the full HTML from a page.
    • Snapshot (HTML + screenshot) – Use /snapshot to capture both the page's HTML and a screenshot in one request.
    • Scrape web elements – Use /scrape to extract specific elements from a page.

    For example, to capture a screenshot:

    Screenshot example
    curl -X POST 'https://api.cloudflare.com/client/v4/accounts/<accountId>/browser-rendering/screenshot' \
      -H 'Authorization: Bearer <apiToken>' \
      -H 'Content-Type: application/json' \
      -d '{
        "html": "Hello World!",
        "screenshotOptions": {
          "type": "webp",
          "omitBackground": true
        }
      }' \
      --output "screenshot.webp"
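    For JavaScript callers, the same call can be expressed with the standard Request API. This is a sketch mirroring the curl example above; accountId and apiToken are placeholders you must supply:

    ```javascript
    // Builds the same screenshot request as the curl example, using the
    // standard Request API (available in Workers and Node.js 18+).
    function screenshotRequest(accountId, apiToken, html) {
      return new Request(
        `https://api.cloudflare.com/client/v4/accounts/${accountId}/browser-rendering/screenshot`,
        {
          method: "POST",
          headers: {
            Authorization: `Bearer ${apiToken}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            html,
            screenshotOptions: { type: "webp", omitBackground: true },
          }),
        },
      );
    }
    ```

    Passing the result to fetch() sends the request; the response body is the binary WebP image.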

    Learn more in our documentation.