
Changelog

New updates and improvements at Cloudflare.

  1. The latest release of the Agents SDK rewrites observability from scratch with diagnostics_channel, adds keepAlive() to prevent Durable Object eviction during long-running work, and introduces waitForMcpConnections so MCP tools are always available when onChatMessage runs.

    Observability rewrite

    The previous observability system used console.log() with a custom Observability.emit() interface. v0.7.0 replaces it with structured events published to diagnostics channels — silent by default, zero overhead when nobody is listening.

    Every event has a type, payload, and timestamp. Events are routed to seven named channels:

    | Channel | Event types |
    | --- | --- |
    | agents:state | state:update |
    | agents:rpc | rpc, rpc:error |
    | agents:message | message:request, message:response, message:clear, message:cancel, message:error, tool:result, tool:approval |
    | agents:schedule | schedule:create, schedule:execute, schedule:cancel, schedule:retry, schedule:error, queue:retry, queue:error |
    | agents:lifecycle | connect, destroy |
    | agents:workflow | workflow:start, workflow:event, workflow:approved, workflow:rejected, workflow:terminated, workflow:paused, workflow:resumed, workflow:restarted |
    | agents:mcp | mcp:client:preconnect, mcp:client:connect, mcp:client:authorize, mcp:client:discover |
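    Under the hood these are standard Node.js diagnostics channels, which is why publishing is nearly free when nothing is subscribed. A minimal sketch of the underlying mechanism (illustrative only — the SDK publishes these events for you; the channel name and event shape here follow the table above):

    ```javascript
    import { channel } from "node:diagnostics_channel";

    // Grab a named channel — the same object is returned for the same name.
    const stateChannel = channel("agents:state");

    // Publishing with no subscribers is a cheap no-op.
    console.log(stateChannel.hasSubscribers); // false

    const seen = [];
    stateChannel.subscribe((event) => seen.push(event));

    // Subscribers are invoked synchronously with the published event.
    stateChannel.publish({
      type: "state:update",
      payload: { counter: 1 },
      timestamp: Date.now(),
    });
    ```
    
    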

    Use the typed subscribe() helper from agents/observability for type-safe access:

    JavaScript
    import { subscribe } from "agents/observability";

    const unsub = subscribe("rpc", (event) => {
      if (event.type === "rpc") {
        console.log(`RPC call: ${event.payload.method}`);
      }
      if (event.type === "rpc:error") {
        console.error(
          `RPC failed: ${event.payload.method}: ${event.payload.error}`,
        );
      }
    });

    // Clean up when done
    unsub();

    In production, all diagnostics channel messages are automatically forwarded to Tail Workers — no subscription code needed in the agent itself:

    JavaScript
    export default {
      async tail(events) {
        for (const event of events) {
          for (const msg of event.diagnosticsChannelEvents) {
            // msg.channel is "agents:rpc", "agents:workflow", etc.
            console.log(msg.timestamp, msg.channel, msg.message);
          }
        }
      },
    };

    The custom Observability override interface is still supported for users who need to filter or forward events to external services.

    For the full event reference, refer to the Observability documentation.

    keepAlive() and keepAliveWhile()

    Durable Objects are evicted after a period of inactivity (typically 70-140 seconds with no incoming requests, WebSocket messages, or alarms). During long-running operations — streaming LLM responses, waiting on external APIs, running multi-step computations — the agent can be evicted mid-flight.

    keepAlive() prevents this by creating a 30-second heartbeat schedule. The alarm firing resets the inactivity timer. Returns a disposer function that cancels the heartbeat when called.

    JavaScript
    const dispose = await this.keepAlive();
    try {
      const result = await longRunningComputation();
      await sendResults(result);
    } finally {
      dispose();
    }

    keepAliveWhile() wraps an async function with automatic cleanup — the heartbeat starts before the function runs and stops when it completes:

    JavaScript
    const result = await this.keepAliveWhile(async () => {
      const data = await longRunningComputation();
      return data;
    });

    Key details:

    • Multiple concurrent callers — Each keepAlive() call returns an independent disposer. Disposing one does not affect others.
    • AIChatAgent built-in — AIChatAgent automatically calls keepAlive() during streaming responses. You do not need to add it yourself.
    • Uses the scheduling system — The heartbeat does not conflict with your own schedules. It shows up in getSchedules() if you need to inspect it.

    For the full API reference and when-to-use guidance, refer to Schedule tasks — Keeping the agent alive.

    waitForMcpConnections

    AIChatAgent now waits for MCP server connections to settle before calling onChatMessage. This ensures this.mcp.getAITools() returns the full set of tools, especially after Durable Object hibernation when connections are being restored in the background.

    JavaScript
    export class ChatAgent extends AIChatAgent {
      // Default — waits up to 10 seconds
      // waitForMcpConnections = { timeout: 10_000 };

      // Wait forever
      waitForMcpConnections = true;

      // Disable waiting
      waitForMcpConnections = false;
    }
    | Value | Behavior |
    | --- | --- |
    | { timeout: 10_000 } | Wait up to 10 seconds (default) |
    | { timeout: N } | Wait up to N milliseconds |
    | true | Wait indefinitely until all connections are ready |
    | false | Do not wait (old behavior before 0.2.0) |

    For lower-level control, call this.mcp.waitForConnections() directly inside onChatMessage instead.

    Other improvements

    • MCP deduplication by name and URL — addMcpServer with HTTP transport now deduplicates on both server name and URL. Calling it with the same name but a different URL creates a new connection. URLs are normalized before comparison (trailing slashes, default ports, hostname case).
    • callbackHost optional for non-OAuth servers — addMcpServer no longer requires callbackHost when connecting to MCP servers that do not use OAuth.
    • MCP URL security — Server URLs are validated before connection to prevent SSRF. Private IP ranges, loopback addresses, link-local addresses, and cloud metadata endpoints are blocked.
    • Custom denial messages — addToolOutput now supports state: "output-error" with errorText for custom denial messages in human-in-the-loop tool approval flows.
    • requestId in chat options — onChatMessage options now include a requestId for logging and correlating events.
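    The URL normalization used for deduplication can be pictured with a short sketch (illustrative only — not the SDK's actual implementation; the WHATWG URL parser already lowercases hostnames and drops default ports, so only the trailing slash needs explicit handling):

    ```javascript
    // Illustrative sketch: build a comparison key for an MCP server URL.
    function normalizeMcpUrl(raw) {
      const url = new URL(raw); // lowercases hostname, drops default ports
      if (url.pathname.endsWith("/") && url.pathname !== "/") {
        url.pathname = url.pathname.slice(0, -1); // strip trailing slash
      }
      return url.toString();
    }

    // These normalize to the same connection key:
    normalizeMcpUrl("https://Example.com:443/mcp/"); // "https://example.com/mcp"
    normalizeMcpUrl("https://example.com/mcp");      // "https://example.com/mcp"
    ```
    
    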

    Upgrade

    To update to the latest version:

    Terminal window
    npm i agents@latest @cloudflare/ai-chat@latest
  1. You can now start using AI Gateway with a single API call — no setup required. Use default as your gateway ID, and AI Gateway creates one for you automatically on the first request.

    To try it out, create an API token with AI Gateway - Read, AI Gateway - Edit, and Workers AI - Read permissions, then run:

    Terminal window
    curl -X POST https://gateway.ai.cloudflare.com/v1/$CLOUDFLARE_ACCOUNT_ID/default/compat/chat/completions \
      --header "cf-aig-authorization: Bearer $CLOUDFLARE_API_TOKEN" \
      --header 'Content-Type: application/json' \
      --data '{
        "model": "workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast",
        "messages": [
          {
            "role": "user",
            "content": "What is Cloudflare?"
          }
        ]
      }'

    AI Gateway gives you logging, caching, rate limiting, and access to multiple AI providers through a single endpoint. For more information, refer to Get started.

  1. You can now copy Cloudflare One resources as JSON or as a ready-to-use API POST request directly from the dashboard. This makes it simple to transition workflows into API calls, automation scripts, or infrastructure-as-code pipelines.

    To use this feature, click the overflow menu (⋮) on any supported resource and select Copy as JSON or Copy as POST request. The copied output includes only the fields present on your resource, giving you a clean and minimal starting point for your own API calls.

    Initially supported resources:

    • Access applications
    • Access policies
    • Gateway policies
    • Resolver policies
    • Service tokens
    • Identity providers

    We will continue to add support for more resources throughout 2026.

  1. This week's release introduces new detections for vulnerabilities in SmarterTools SmarterMail (CVE-2025-52691 and CVE-2026-23760), alongside improvements to an existing Command Injection (nslookup) detection to enhance coverage.

    Key Findings

    • CVE-2025-52691: SmarterTools SmarterMail mail server is vulnerable to Arbitrary File Upload, allowing an unauthenticated attacker to upload files to any location on the mail server, potentially enabling remote code execution.
    • CVE-2026-23760: SmarterTools SmarterMail versions prior to build 9511 contain an authentication bypass vulnerability in the password reset API, permitting unauthenticated attackers to reset system administrator account passwords without verifying the existing password or a reset token.

    Impact

    Successful exploitation of these SmarterMail vulnerabilities could lead to full system compromise or unauthorized administrative access to mail servers. Administrators are strongly encouraged to apply vendor patches without delay.

    | Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | Cloudflare Managed Ruleset | | N/A | SmarterMail - Arbitrary File Upload - CVE-2025-52691 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | SmarterMail - Authentication Bypass - CVE-2026-23760 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | Command Injection - Nslookup - Beta | Log | Block | This rule is merged into the original rule "Command Injection - Nslookup" (ID: ) |
  1. | Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
     | --- | --- | --- | --- | --- | --- | --- |
     | 2026-03-02 | 2026-03-09 | Log | N/A | | Ivanti EPMM - Code Injection - CVE:CVE-2026-1281 CVE:CVE-2026-1340 | This is a new detection. |
  1. Gateway Protocol Detection now supports seven additional protocols in beta:

    | Protocol | Notes |
    | --- | --- |
    | IMAP | Internet Message Access Protocol — email retrieval |
    | POP3 | Post Office Protocol v3 — email retrieval |
    | SMTP | Simple Mail Transfer Protocol — email sending |
    | MYSQL | MySQL database wire protocol |
    | RSYNC-DAEMON | rsync daemon protocol |
    | LDAP | Lightweight Directory Access Protocol |
    | NTP | Network Time Protocol |

    These protocols join the existing set of detected protocols (HTTP, HTTP2, SSH, TLS, DCERPC, MQTT, and TPKT) and can be used with the Detected Protocol selector in Network policies to identify and filter traffic based on the application-layer protocol, without relying on port-based identification.

    If protocol detection is enabled on your account, these protocols will automatically be logged when detected in your Gateway network traffic.

    For more information on using Protocol Detection, refer to the Protocol detection documentation.

  1. Radar now tracks post-quantum encryption support on origin servers, provides a tool to test any host for post-quantum compatibility, and introduces a Key Transparency dashboard for monitoring end-to-end encrypted messaging audit logs.

    Post-quantum origin support

    The data is available through the new Post-Quantum API.

    The new Post-Quantum Encryption page shows the share of customer origins supporting X25519MLKEM768, derived from daily automated TLS scans of TLS 1.3-compatible origins. The scanner tests for algorithm support rather than the origin server's configured preference.

    Screenshot of the origin post-quantum support graph on Radar

    A host test tool allows checking any publicly accessible website for post-quantum encryption compatibility. Enter a hostname and optional port to see whether the server negotiates a post-quantum key exchange algorithm.

    Screenshot of the post-quantum host test tool on Radar

    Key Transparency

    A new Key Transparency section displays the audit status of Key Transparency logs for end-to-end encrypted messaging services. The page launches with two monitored logs: WhatsApp and Facebook Messenger Transport.

    Each log card shows the current status, last signed epoch, last verified epoch, and the root hash of the Auditable Key Directory tree. The data is also available through the Key Transparency Auditor API.

    Screenshot of the Key Transparency dashboard on Radar

    Learn more about these features in our blog post and check out the Post-Quantum Encryption and Key Transparency pages to explore the data.

  1. Cloudflare's stale-while-revalidate support is now fully asynchronous. Previously, the first request for a stale (expired) asset in cache had to wait for an origin response, after which that visitor received a REVALIDATED or EXPIRED status. Now, the first request after the asset expires triggers revalidation in the background and immediately receives stale content with an UPDATING status. All following requests also receive stale content with an UPDATING status until the origin responds, after which subsequent requests receive fresh content with a HIT status.

    stale-while-revalidate is a Cache-Control directive set by your origin server that allows Cloudflare to serve an expired cached asset while a fresh copy is fetched from the origin.
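    For example, an origin response header like the following (values illustrative) lets Cloudflare serve the asset fresh for 10 minutes, then serve it stale for up to a day while revalidating in the background:

    ```
    Cache-Control: max-age=600, stale-while-revalidate=86400
    ```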

    Asynchronous revalidation brings:

    • Lower latency: No visitor is waiting for the origin when the asset is already in cache. Every request is served from cache during revalidation.
    • Consistent experience: All visitors receive the same cached response during revalidation.
    • Reduced error exposure: The first request is no longer vulnerable to origin timeouts or errors. All visitors receive a cached response while revalidation happens in the background.

    Availability

    This change is live for all Free, Pro, and Business zones. Approximately 75% of Enterprise zones have been migrated, with the remaining zones rolling out throughout the quarter.

    Get started

    To use this feature, make sure your origin includes the stale-while-revalidate directive in the Cache-Control header. Refer to the Cache-Control documentation for details.

  1. Cloudflare now returns structured Markdown responses for Cloudflare-generated 1xxx errors when clients send Accept: text/markdown.

    Each response includes YAML frontmatter plus guidance sections (What happened / What you should do) so agents can make deterministic retry and escalation decisions without parsing HTML.

    Across 1,015 measured comparisons, Markdown reduced payload size and token footprint by over 98% compared with HTML.

    Included frontmatter fields:

    • error_code, error_name, error_category, http_status
    • ray_id, timestamp, zone
    • cloudflare_error, retryable, retry_after (when applicable), owner_action_required
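    Put together, a Markdown error response takes roughly the following shape (shape only — field values here are illustrative, not actual Cloudflare output):

    ```
    ---
    error_code: 1015
    error_name: Rate limited
    http_status: 429
    cloudflare_error: true
    retryable: true
    retry_after: 30
    owner_action_required: false
    ---

    ## What happened
    ...

    ## What you should do
    ...
    ```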

    Default behavior is unchanged: clients that do not explicitly request Markdown continue to receive HTML error pages.

    Negotiation behavior

    Cloudflare uses standard HTTP content negotiation on the Accept header.

    • Accept: text/markdown -> Markdown
    • Accept: text/markdown, text/html;q=0.9 -> Markdown
    • Accept: text/* -> Markdown
    • Accept: */* -> HTML (default browser behavior)

    When multiple values are present, Cloudflare selects the highest-priority supported media type using q values. If Markdown is not explicitly preferred, HTML is returned.
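    The selection rule can be sketched as a small function (illustrative only — not Cloudflare's implementation; it treats text/markdown and text/* as Markdown candidates and */* as an HTML default, matching the list above):

    ```javascript
    // Illustrative sketch: pick Markdown only when the Accept header
    // explicitly prefers it by q value.
    function negotiate(accept) {
      const entries = accept.split(",").map((part) => {
        const [type, ...params] = part.trim().split(";");
        const qParam = params.find((p) => p.trim().startsWith("q="));
        const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
        return { type: type.trim().toLowerCase(), q };
      });
      const qFor = (matches) =>
        Math.max(0, ...entries.filter((e) => matches(e.type)).map((e) => e.q));
      // "text/markdown" and "text/*" count toward Markdown; "*/*" stays HTML.
      const markdown = qFor((t) => t === "text/markdown" || t === "text/*");
      const html = qFor((t) => t === "text/html" || t === "*/*");
      return markdown > 0 && markdown >= html ? "text/markdown" : "text/html";
    }
    ```
    
    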

    Availability

    Available now for Cloudflare-generated 1xxx errors.

    Get started

    Terminal window
    curl -H "Accept: text/markdown" https://<your-domain>/cdn-cgi/error/1015

    Reference: Cloudflare 1xxx error documentation

  1. The latest release of the Agents SDK lets you define an Agent and an McpAgent in the same Worker and connect them over RPC — no HTTP, no network overhead. It also makes OAuth opt-in for simple MCP connections, hardens the schema converter for production workloads, and ships a batch of @cloudflare/ai-chat reliability fixes.

    RPC transport for MCP

    You can now connect an Agent to an McpAgent in the same Worker using a Durable Object binding instead of an HTTP URL. The connection stays entirely within the Cloudflare runtime — no network round-trips, no serialization overhead.

    Pass the Durable Object namespace directly to addMcpServer:

    JavaScript
    import { Agent } from "agents";

    export class MyAgent extends Agent {
      async onStart() {
        // Connect via DO binding — no HTTP, no network overhead
        await this.addMcpServer("counter", this.env.MY_MCP);

        // With props for per-user context
        await this.addMcpServer("counter", this.env.MY_MCP, {
          props: { userId: "user-123", role: "admin" },
        });
      }
    }

    The addMcpServer method now accepts string | DurableObjectNamespace as the second parameter with full TypeScript overloads, so HTTP and RPC paths are type-safe and cannot be mixed.
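    For the binding itself, the McpAgent class is exposed as an ordinary Durable Object namespace in your Worker configuration. A minimal wrangler.jsonc sketch, assuming a hypothetical class named MyMcp (the binding name MY_MCP matches the example above):

    ```jsonc
    {
      "durable_objects": {
        "bindings": [{ "name": "MY_MCP", "class_name": "MyMcp" }]
      },
      "migrations": [{ "tag": "v1", "new_sqlite_classes": ["MyMcp"] }]
    }
    ```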

    Key capabilities:

    • Hibernation support — RPC connections survive Durable Object hibernation automatically. The binding name and props are persisted to storage and restored on wake-up, matching the behavior of HTTP MCP connections.
    • Deduplication — Calling addMcpServer with the same server name returns the existing connection instead of creating duplicates. Connection IDs are stable across hibernation restore.
    • Smaller surface area — The RPC transport internals have been rewritten and reduced from 609 lines to 245 lines. RPCServerTransport now uses JSONRPCMessageSchema from the MCP SDK for validation instead of hand-written checks.

    Optional OAuth for MCP connections

    addMcpServer() no longer eagerly creates an OAuth provider for every connection. For servers that do not require authentication, a simple call is all you need:

    JavaScript
    // No callbackHost, no OAuth config — just works
    await this.addMcpServer("my-server", "https://mcp.example.com");

    If the server responds with a 401, the SDK throws a clear error: "This MCP server requires OAuth authentication. Provide callbackHost in addMcpServer options to enable the OAuth flow." The restore-from-storage flow also handles missing callback URLs gracefully, skipping auth provider creation for non-OAuth servers.

    Hardened JSON Schema to TypeScript converter

    The schema converter used by generateTypes() and getAITools() now handles edge cases that previously caused crashes in production:

    • Depth and circular reference guards — Prevents stack overflows on recursive or deeply nested schemas
    • $ref resolution — Supports internal JSON Pointers (#/definitions/..., #/$defs/..., #)
    • Tuple support — prefixItems (JSON Schema 2020-12) and array items (draft-07)
    • OpenAPI 3.0 nullable: true — Supported across all schema branches
    • Per-tool error isolation — One malformed schema cannot crash the full pipeline in generateTypes() or getAITools()
    • Missing inputSchema fallback — getAITools() falls back to { type: "object" } instead of throwing
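    Internal $ref resolution can be illustrated with a short sketch (not the SDK's converter — a minimal JSON Pointer walk supporting the forms listed above):

    ```javascript
    // Illustrative sketch: resolve an internal JSON Pointer $ref such as
    // "#/definitions/Address" or "#/$defs/Id" against the root schema.
    function resolveRef(root, ref) {
      if (ref === "#") return root;
      if (!ref.startsWith("#/")) throw new Error(`unsupported $ref: ${ref}`);
      return ref
        .slice(2)
        .split("/")
        // RFC 6901 unescaping: "~1" -> "/" first, then "~0" -> "~"
        .map((seg) => seg.replace(/~1/g, "/").replace(/~0/g, "~"))
        .reduce((node, seg) => {
          if (node == null || !(seg in node)) {
            throw new Error(`unresolved $ref: ${ref}`);
          }
          return node[seg];
        }, root);
    }
    ```
    
    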

    @cloudflare/ai-chat fixes

    • Tool denial flow — Denied tool approvals (approved: false) now transition to output-denied with a tool_result, fixing Anthropic provider compatibility. Custom denial messages are supported via state: "output-error" and errorText.
    • Abort/cancel support — Streaming responses now properly cancel the reader loop when the abort signal fires and send a done signal to the client.
    • Duplicate message persistence — persistMessages() now reconciles assistant messages by content and order, preventing duplicate rows when clients resend full history.
    • requestId in OnChatMessageOptions — Handlers can now send properly-tagged error responses for pre-stream failures.
    • redacted_thinking preservation — The message sanitizer no longer strips Anthropic redacted_thinking blocks.
    • /get-messages reliability — Endpoint handling moved from a prototype onRequest() override to a constructor wrapper, so it works even when users override onRequest without calling super.onRequest().
    • Client tool APIs undeprecated — createToolsFromClientSchemas, clientTools, AITool, extractClientToolSchemas, and the tools option on useAgentChat are restored for SDK use cases where tools are defined dynamically at runtime.
    • jsonSchema initialization — Fixed jsonSchema not initialized error when calling getAITools() in onChatMessage.

    Upgrade

    To update to the latest version:

    Terminal window
    npm i agents@latest @cloudflare/ai-chat@latest
  1. You can now run more Containers concurrently with significantly higher limits on memory, vCPU, and disk.

    | Limit | Previous Limit | New Limit |
    | --- | --- | --- |
    | Memory for concurrent live Container instances | 400 GiB | 6 TiB |
    | vCPU for concurrent live Container instances | 100 | 1,500 |
    | Disk for concurrent live Container instances | 2 TB | 30 TB |

    This 15x increase enables larger-scale workloads on Containers. You can now run 15,000 instances of the lite instance type, 6,000 instances of basic, over 1,500 instances of standard-1, or over 1,000 instances of standard-2 concurrently.

    Refer to Limits for more details on the available instance types and limits.

  1. Radar now includes Autonomous System Provider Authorization (ASPA) deployment insights, providing visibility into the adoption and verification of ASPA objects across the global routing ecosystem.

    New API endpoints

    The data is available through the new ASPA API.

    New Radar widgets

    The global routing page now shows the ASPA deployment trend over time by counting daily ASPA objects.

    Screenshot of the ASPA deployment trend chart

    The global routing page also displays the most recent ASPA objects, searchable by ASN or AS name.

    Screenshot of the ASPA objects table

    On country and region routing pages, a new widget shows the ASPA deployment rate for ASNs registered in the selected country or region.

    Screenshot of the ASPA deployment trend chart for Germany

    On AS routing pages, the connectivity table now includes checkmarks for ASPA-verified upstreams. All ASPA upstreams are listed in a dedicated table, and a timeline shows ASPA changes at daily granularity.

    Screenshot of the ASPA changes timeline on an AS routing page

    Check out the Radar routing page to explore the data.

  1. Pywrangler, the CLI tool for managing Python Workers and packages, now supports Windows, allowing you to develop and deploy Python Workers from Windows environments. Previously, Pywrangler was only available on macOS and Linux.

    You can install and use Pywrangler on Windows the same way you would on other platforms. Specify your Worker's Python dependencies in your pyproject.toml file, then use the following commands to develop and deploy:

    Terminal window
    uvx --from workers-py pywrangler dev
    uvx --from workers-py pywrangler deploy

    All existing Pywrangler functionality, including package management, local development, and deployment, works on Windows without any additional configuration.

    Requirements

    This feature requires the following minimum versions:

    • wrangler >= 4.64.0
    • workers-py >= 1.72.0
    • uv >= 0.29.8

    To upgrade workers-py (which includes Pywrangler) in your project, run:

    Terminal window
    uv tool upgrade workers-py

    To upgrade wrangler, run:

    Terminal window
    npm install -g wrangler@latest

    To upgrade uv, run:

    Terminal window
    uv self update

    To get started with Python Workers on Windows, refer to the Python packages documentation for full details on Pywrangler.

  1. Workers Observability now includes a query language that lets you write structured queries directly in the search bar to filter your logs and traces. The search bar doubles as a free text search box — type any term to search across all metadata and attributes, or write field-level queries for precise filtering.

    Workers Observability search bar with autocomplete suggestions and Query Builder sidebar filters

    Queries written in the search bar sync with the Query Builder sidebar, so you can write a query by hand and then refine it visually, or build filters in the Query Builder and see the corresponding query syntax. The search bar provides autocomplete suggestions for metadata fields and operators as you type.

    The query language supports:

    • Free text search — search everywhere with a keyword like error, or match an exact phrase with "exact phrase"
    • Field queries — filter by specific fields using comparison operators (for example, status = 500 or $workers.wallTimeMs > 100)
    • Operators — =, !=, >, >=, <, <=, and : (contains)
    • Functions — contains(field, value), startsWith(field, prefix), regex(field, pattern), and exists(field)
    • Boolean logic — add conditions with AND, OR, and NOT
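    For example, a single query can combine a field filter, a comparison operator, and a free-text exclusion (field names taken from the examples above):

    ```
    status = 500 AND $workers.wallTimeMs > 100 AND NOT "health check"
    ```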

    Select the help icon next to the search bar to view the full syntax reference, including all supported operators, functions, and keyboard shortcuts.

    Go to the Workers Observability dashboard to try the query language.

  1. You can now deploy any existing project to Cloudflare Workers — even without a Wrangler configuration file — and wrangler deploy will just work.

    Starting with Wrangler 4.68.0, running wrangler deploy automatically configures your project by detecting your framework, installing required adapters, and deploying it to Cloudflare Workers.

    Using Wrangler locally

    Terminal window
    npx wrangler deploy

    When you run wrangler deploy in a project without a configuration file, Wrangler:

    1. Detects your framework from package.json
    2. Prompts you to confirm the detected settings
    3. Installs any required adapters
    4. Generates a wrangler.jsonc configuration file
    5. Deploys your project to Cloudflare Workers

    You can also use wrangler setup to configure without deploying, or pass --yes to skip prompts.

    Using the Cloudflare dashboard

    Automatic configuration pull request created by Workers Builds

    When you connect a repository through the Workers dashboard, a pull request is generated for you with all necessary files, and a preview deployment to check before merging.

    Background

    In December 2025, we introduced automatic configuration as an experimental feature. It is now generally available and the default behavior.

    If you have questions or run into issues, join the GitHub discussion.

  1. A new GA release for the Windows WARP client is now available on the stable releases downloads page.

    This release contains minor fixes, improvements, and new features.

    Changes and improvements

    • Improvements to multi-user mode. Fixed an issue where when switching from a pre-login registration to a user registration, Mobile Device Management (MDM) configuration association could be lost.
    • Added a new feature to manage NetBIOS over TCP/IP functionality on the Windows client. NetBIOS over TCP/IP on the Windows client is now disabled by default and can be enabled in device profile settings.
    • Fixed an issue causing failure of the local network exclusion feature when configured with a timeout of 0.
    • Improvement for the Windows client certificate posture check to ensure logged results are from checks that run once users log in.
    • Improvement for more accurate reporting of device colocation information in the Cloudflare One dashboard.
    • Fixed an issue where misconfigured DEX HTTP tests prevented new registrations.
    • Fixed an issue causing DNS requests to fail with clients in Traffic and DNS mode.
    • Improved service shutdown behavior in cases where the daemon is unresponsive.

    Known issues

    • For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends that users experiencing these issues update to Windows 11 24H2 KB5062553 or later.

    • Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.

    • DNS resolution may be broken when the following conditions are all true:

      • WARP is in Secure Web Gateway without DNS filtering (tunnel-only) mode.
      • A custom DNS server address is configured on the primary network adapter.
      • The custom DNS server address on the primary network adapter is changed while WARP is connected.

      To work around this issue, reconnect the WARP client by toggling off and back on.

  1. A new GA release for the macOS WARP client is now available on the stable releases downloads page.

    This release contains minor fixes and improvements.

    Changes and improvements

    • Fixed an issue causing failure of the local network exclusion feature when configured with a timeout of 0.
    • Improvement for more accurate reporting of device colocation information in the Cloudflare One dashboard.
    • Fixed an issue with DNS server configuration failures that caused tunnel connection delays.
    • Fixed an issue where misconfigured DEX HTTP tests prevented new registrations.
    • Fixed an issue causing DNS requests to fail with clients in Traffic and DNS mode.
  1. A new GA release for the Linux WARP client is now available on the stable releases downloads page.

    This release contains minor fixes and improvements.

    WARP client version 2025.8.779.0 introduced an updated public key for Linux packages. The public key must be updated if it was installed before September 12, 2025 to ensure the repository remains functional after December 4, 2025. Instructions to make this update are available at pkg.cloudflareclient.com.

    Changes and improvements

    • Fixed an issue causing failure of the local network exclusion feature when configured with a timeout of 0.
    • Improvement for more accurate reporting of device colocation information in the Cloudflare One dashboard.
    • Fixed an issue where misconfigured DEX HTTP tests prevented new registrations.
    • Fixed issues causing DNS requests to fail with clients in Traffic and DNS mode or DNS only mode.
  1. deleteAll() now deletes a Durable Object alarm in addition to stored data for Workers with a compatibility date of 2026-02-24 or later. This change simplifies clearing a Durable Object's storage with a single API call.

    Previously, deleteAll() only deleted user-stored data for an object. Alarm usage stores metadata in an object's storage, which required a separate deleteAlarm() call to fully clean up all storage for an object. The deleteAll() change applies to both KV-backed and SQLite-backed Durable Objects.

    JavaScript
    // Before: two API calls required to clear all storage
    await this.ctx.storage.deleteAlarm();
    await this.ctx.storage.deleteAll();

    // Now: a single call clears both data and the alarm
    await this.ctx.storage.deleteAll();
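    Because the new behavior is gated on the compatibility date, opting in is a one-line change in your Worker configuration (wrangler.jsonc shown):

    ```jsonc
    {
      "compatibility_date": "2026-02-24"
    }
    ```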

    For more information, refer to the Storage API documentation.

  1. Cloudflare Pipelines ingests streaming data via Workers or HTTP endpoints, transforms it with SQL, and writes it to R2 as Apache Iceberg tables. Today we're shipping three improvements to help you understand why streaming events get dropped, catch data quality issues early, and set up Pipelines faster.

    Dropped event metrics

    When stream events don't match the expected schema, Pipelines accepts them during ingestion but drops them when attempting to deliver them to the sink. To help you identify the root cause of these issues, we are introducing a new dashboard and metrics that surface dropped events with detailed error messages.

    The Errors tab in the Cloudflare dashboard showing deserialization errors grouped by type with individual error details

    Dropped events can also be queried programmatically via the new pipelinesUserErrorsAdaptiveGroups GraphQL dataset. The dataset breaks down failures by specific error type (missing_field, type_mismatch, parse_failure, or null_value) so you can trace issues back to the source.

    GraphQL
    query GetPipelineUserErrors(
      $accountTag: String!
      $pipelineId: String!
      $datetimeStart: Time!
      $datetimeEnd: Time!
    ) {
      viewer {
        accounts(filter: { accountTag: $accountTag }) {
          pipelinesUserErrorsAdaptiveGroups(
            limit: 100
            filter: {
              pipelineId: $pipelineId
              datetime_geq: $datetimeStart
              datetime_leq: $datetimeEnd
            }
            orderBy: [count_DESC]
          ) {
            count
            dimensions {
              errorFamily
              errorType
            }
          }
        }
      }
    }

    For the full list of dimensions, error types, and additional query examples, refer to User error metrics.

    Typed Pipelines bindings

    Sending data to a Pipeline from a Worker previously used a generic Pipeline<PipelineRecord> type, which meant schema mismatches (wrong field names, incorrect types) were only caught at runtime as dropped events.

    Running wrangler types now generates schema-specific TypeScript types for your Pipeline bindings. TypeScript catches missing required fields and incorrect field types at compile time, before your code is deployed.

    TypeScript
    declare namespace Cloudflare {
      type EcommerceStreamRecord = {
        user_id: string;
        event_type: string;
        product_id?: string;
        amount?: number;
      };
      interface Env {
        STREAM: import("cloudflare:pipelines").Pipeline<Cloudflare.EcommerceStreamRecord>;
      }
    }
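    To make the compile-time check concrete, here is a standalone sketch with the binding shape stubbed out locally. The inlined types and the stub stand in for the generated declarations and the real binding, which actually ships records to the Pipeline:

    ```typescript
    // Standalone sketch: a stub standing in for env.STREAM so the typing is
    // visible outside a Worker. Types are inlined here for illustration; in a
    // real project they come from `wrangler types`.
    type EcommerceStreamRecord = {
      user_id: string;
      event_type: string;
      product_id?: string;
      amount?: number;
    };
    interface Pipeline<T> {
      send(records: T[]): Promise<void>;
    }
    const sentBatches: EcommerceStreamRecord[][] = [];
    const STREAM: Pipeline<EcommerceStreamRecord> = {
      async send(records) {
        sentBatches.push(records); // a real binding delivers records to the Pipeline
      },
    };
    (async () => {
      await STREAM.send([{ user_id: "u_123", event_type: "purchase", amount: 42.5 }]);
    })();
    // A malformed record now fails at compile time rather than as a dropped event:
    // await STREAM.send([{ event_type: "purchase" }]); // error: user_id missing
    ```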

    For more information, refer to Typed Pipeline bindings.

    Improved Pipelines setup

    Setting up a new Pipeline previously required multiple manual steps: creating an R2 bucket, enabling R2 Data Catalog, generating an API token, and configuring format, compression, and rolling policies individually.

    The wrangler pipelines setup command now offers a Simple setup mode that applies recommended defaults and automatically creates the R2 bucket and enables R2 Data Catalog if they do not already exist. Validation errors during setup prompt you to retry inline rather than restarting the entire process.

    For a full walkthrough, refer to the Getting started guide.

  1. You can now disable a live input to reject incoming RTMPS and SRT connections. When a live input is disabled, any broadcast attempts will fail to connect.

    This gives you more control over your live inputs:

    • Temporarily pause an input without deleting it
    • Programmatically end creator broadcasts
    • Prevent new broadcasts from starting on a specific input

    To disable a live input via the API, set the enabled property to false:

    Terminal window
    curl --request PUT \
      https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs/{input_id} \
      --header "Authorization: Bearer <API_TOKEN>" \
      --data '{"enabled": false}'

    You can also disable or enable a live input from the Live inputs list page or the live input detail page in the Dashboard.

    All existing live inputs remain enabled by default. For more information, refer to Start a live stream.

  1. Sandboxes now support createBackup() and restoreBackup() methods for creating and restoring point-in-time snapshots of directories.

    This allows you to restore environments quickly. For instance, developing in a sandbox may require cloning a user's codebase and running a build step. Unfortunately, git clone and npm install can take minutes, and you don't want to repeat these steps every time the user starts their sandbox.

    Now, after the initial setup, you can just call createBackup(), then restoreBackup() the next time this environment is needed. This makes it practical to pick up exactly where a user left off, even after days of inactivity, without repeating expensive setup steps.

    TypeScript
    const sandbox = getSandbox(env.Sandbox, "my-sandbox");
    // Make non-trivial changes to the file system
    await sandbox.gitCheckout(endUserRepo, { targetDir: "/workspace" });
    await sandbox.exec("npm install", { cwd: "/workspace" });
    // Create a point-in-time backup of the directory
    const backup = await sandbox.createBackup({ dir: "/workspace" });
    // Store the handle for later use
    await env.KV.put(`backup:${userId}`, JSON.stringify(backup));
    // ... in a future session...
    // Restore instead of re-cloning and reinstalling
    await sandbox.restoreBackup(backup);

    Backups are stored in R2 and can take advantage of R2 object lifecycle rules to ensure they do not persist forever.

    Key capabilities:

    • Persist and reuse across sandbox sessions — Easily store backup handles in KV, D1, or Durable Object storage for use in subsequent sessions
    • Usable across multiple instances — Fork a backup across many sandboxes for parallel work
    • Named backups — Provide optional human-readable labels for easier management
    • TTLs — Set time-to-live durations so backups are automatically removed from storage once they are no longer needed

    To get started, refer to the backup and restore guide for setup instructions and usage patterns, or the Backups API reference for full method documentation.

  1. Hyperdrive now treats queries containing PostgreSQL STABLE functions as uncacheable, in addition to VOLATILE functions.

    Previously, only functions that PostgreSQL categorizes as VOLATILE (for example, RANDOM(), LASTVAL()) were detected as uncacheable. STABLE functions (for example, NOW(), CURRENT_TIMESTAMP, CURRENT_DATE) were incorrectly allowed to be cached.

    Because STABLE functions can return different results across different SQL statements within the same transaction, caching their results could serve stale or incorrect data. This change aligns Hyperdrive's caching behavior with PostgreSQL's function volatility semantics.

    If your queries use STABLE functions, and you were relying on them being cached, move the function call to your application code and pass the result as a query parameter. For example, instead of WHERE created_at > NOW(), compute the timestamp in your Worker and pass it as WHERE created_at > $1.
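    As a sketch of that workaround, with a node-postgres-style client and an orders table assumed for illustration:

    ```javascript
    // Instead of `WHERE created_at > NOW()` (now treated as uncacheable),
    // compute the timestamp in the Worker and bind it as a parameter, so the
    // query text contains no STABLE function and remains cacheable.
    const cutoff = new Date().toISOString();
    const query = {
      text: "SELECT id, created_at FROM orders WHERE created_at > $1",
      values: [cutoff],
    };
    // With a node-postgres client connected through Hyperdrive (assumed):
    // const { rows } = await client.query(query.text, query.values);
    ```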

    Note that Hyperdrive uses text-based pattern matching to detect uncacheable functions, so references to function names like NOW() in SQL comments will also cause a query to be marked as uncacheable.

    For more information, refer to Query caching and Troubleshoot and debug.

  1. TL;DR: You can now create and save custom configurations of the Threat Events dashboard, allowing you to instantly return to specific filtered views — such as industry-specific attacks or regional Sankey flows — without manual reconfiguration.

    Why this matters

    Threat intelligence is most effective when it is personalized. Previously, analysts had to manually re-apply complex filters (like combining specific industry datasets with geographic origins) every time they logged in. This update changes that in two ways:

    • Analysts can jump straight into "Known Ransomware Infrastructure" or "Retail Sector Targets" views with a single click, eliminating repetitive setup
    • Teams can standardize on saved views so everyone examines the same data subsets, reducing the risk of missing critical patterns due to inconsistent filtering

    Cloudforce One subscribers can start saving their custom views now in Application Security > Threat Intelligence > Threat Events.

  1. The @cloudflare/codemode package has been rewritten into a modular, runtime-agnostic SDK.

    Code Mode enables LLMs to write and execute code that orchestrates your tools, instead of calling them one at a time. This can (and does) yield significant token savings, reduce context window pressure, and improve overall model performance on a task.

    The new Executor interface is runtime agnostic and comes with a prebuilt DynamicWorkerExecutor to run generated code in a Dynamic Worker Loader.

    Breaking changes

    • Removed experimental_codemode() and CodeModeProxy — the package no longer owns an LLM call or model choice
    • New import path: createCodeTool() is now exported from @cloudflare/codemode/ai

    New features

    • createCodeTool() — Returns a standard AI SDK Tool to use in your AI agents.
    • Executor interface — Minimal execute(code, fns) contract. Implement for any code sandboxing primitive or runtime.
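    To make the contract concrete, here is a toy, eval-based executor sketch. It is illustrative only (no isolation; use DynamicWorkerExecutor for real workloads), and the exact Executor type may differ, so treat this shape as an assumption:

    ```javascript
    // Toy Executor sketch implementing the execute(code, fns) contract.
    // It runs generated code with `new Function`, exposing each tool in `fns`
    // as a named function. Not safe for untrusted code.
    const toyExecutor = {
      async execute(code, fns) {
        const run = new Function(
          ...Object.keys(fns),
          `return (async () => { ${code} })();`,
        );
        return await run(...Object.values(fns));
      },
    };
    toyExecutor
      .execute("return add(2, 3);", { add: (a, b) => a + b })
      .then((result) => console.log(result)); // logs 5
    ```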

    DynamicWorkerExecutor

    Runs code in a Dynamic Worker. It comes with the following features:

    • Network isolation — fetch() and connect() are blocked by default (globalOutbound: null) when using DynamicWorkerExecutor
    • Console capture — console.log/warn/error output is captured and returned in ExecuteResult.logs
    • Execution timeout — Configurable via timeout option (default 30s)

    Usage

    JavaScript
    import { createCodeTool } from "@cloudflare/codemode/ai";
    import { DynamicWorkerExecutor } from "@cloudflare/codemode";
    import { streamText } from "ai";
    const executor = new DynamicWorkerExecutor({ loader: env.LOADER });
    const codemode = createCodeTool({ tools: myTools, executor });
    const result = streamText({
      model,
      tools: { codemode },
      messages,
    });

    Wrangler configuration

    {
      "worker_loaders": [{ "binding": "LOADER" }]
    }

    See the Code Mode documentation for full API reference and examples.

    Upgrade

    Terminal window
    npm i @cloudflare/codemode@latest