
Changelog

New updates and improvements at Cloudflare.

Developer platform
  1. New AI Search instances created after today come with built-in storage and a vector index, so you can upload a file, have it indexed immediately, and search it right away.

    Additionally, new Workers bindings are now available to use with AI Search. The new namespace binding lets you create and manage instances at runtime, and the cross-instance search API lets you query across multiple instances in one call.

    Built-in storage and vector index

    All new instances now come with built-in storage, which lets you upload files directly using the Items API or the dashboard. No R2 buckets to set up, no external data sources to connect first.

    TypeScript
    const instance = env.AI_SEARCH.get("my-instance");
    // upload and wait for indexing to complete
    const item = await instance.items.uploadAndPoll("faq.md", content);
    // search immediately after indexing
    const results = await instance.search({
      messages: [{ role: "user", content: "onboarding guide" }],
    });

    Namespace binding

    The new ai_search_namespaces binding replaces the previous env.AI.autorag() API provided through the AI binding. It gives your Worker access to all instances within a namespace and lets you create, update, and delete instances at runtime without redeploying.

    JSONC
    // wrangler.jsonc
    {
      "ai_search_namespaces": [
        {
          "binding": "AI_SEARCH",
          "namespace": "default",
        },
      ],
    }
    TypeScript
    // create an instance at runtime
    const instance = await env.AI_SEARCH.create({
      id: "my-instance",
    });

    For migration details, refer to Workers binding migration. For more on namespaces, refer to Namespaces.

    Within the new AI Search binding, you now have access to a Search and Chat API at the namespace level. Pass an array of instance IDs and get one ranked list of results back.

    TypeScript
    const results = await env.AI_SEARCH.search({
      messages: [{ role: "user", content: "What is Cloudflare?" }],
      ai_search_options: {
        instance_ids: ["product-docs", "customer-abc123"],
      },
    });

    Refer to Namespace-level search for details.

  1. AI Search now supports hybrid search and relevance boosting, giving you more control over how results are found and ranked.

    Hybrid search combines vector (semantic) search with BM25 keyword search in a single query. Vector search finds chunks with similar meaning, even when the exact words differ. Keyword search matches chunks that contain your query terms exactly. When you enable hybrid search, both run in parallel and the results are fused into a single ranked list.

    You can configure the tokenizer (porter for natural language, trigram for code), keyword match mode (and for precision, or for recall), and fusion method (rrf or max) per instance:

    TypeScript
    const instance = await env.AI_SEARCH.create({
      id: "my-instance",
      index_method: { vector: true, keyword: true },
      fusion_method: "rrf",
      indexing_options: { keyword_tokenizer: "porter" },
      retrieval_options: { keyword_match_mode: "and" },
    });

    Refer to Search modes for an overview and Hybrid search for configuration details.
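    Conceptually, the rrf fusion method (reciprocal rank fusion) merges the vector and keyword rankings by summing reciprocal ranks, so a chunk ranked well by both lists rises to the top. A runnable sketch, illustrative only (the constant 60 is the conventional RRF default, not a documented AI Search value):

```typescript
// Reciprocal rank fusion: score(doc) = sum over lists of 1 / (k + rank).
// An illustration of the idea, not AI Search's implementation.
function rrfFuse(lists: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of lists) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}

const fused = rrfFuse([
  ["a", "b", "c"], // vector (semantic) ranking
  ["b", "d", "a"], // BM25 keyword ranking
]);
// "b" and "a" appear in both lists, so they outrank "c" and "d".
```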

    Relevance boosting

    Relevance boosting lets you nudge search rankings based on document metadata. For example, you can prioritize recent documents by boosting on timestamp, or surface high-priority content by boosting on a custom metadata field like priority.

    Configure up to 3 boost fields per instance or override them per request:

    TypeScript
    const results = await env.AI_SEARCH.get("my-instance").search({
      messages: [{ role: "user", content: "deployment guide" }],
      ai_search_options: {
        retrieval: {
          boost_by: [
            { field: "timestamp", direction: "desc" },
            { field: "priority", direction: "desc" },
          ],
        },
      },
    });

    Refer to Relevance boosting for configuration details.

  1. Artifacts is now in private beta. Artifacts is Git-compatible storage built for scale: create tens of millions of repos, fork from any remote, and hand off a URL to any Git client. It provides a versioned filesystem for storing and exchanging file trees across Workers, the REST API, and any Git client, running locally or within an agent.

    You can read the announcement blog to learn more about what Artifacts does, how it works, and how to create repositories for your agents to use.

    Artifacts has three API surfaces:

    • Workers bindings (for creating and managing repositories)
    • REST API (for creating and managing repos from any other compute platform)
    • Git protocol (for interacting with repos)

    As an example, you can use the Workers binding to create a repo and read back its remote URL:

    TypeScript
    // Create a thousand, a million, or ten million repos: one for every
    // agent, for every upstream branch, or every user.
    const created = await env.PROD_ARTIFACTS.create("agent-007");
    const remote = (await created.repo.info())?.remote;

    Or, use the REST API to create a repo inside a namespace from your agent(s) running on any platform:

    Terminal window
    curl --request POST "https://artifacts.cloudflare.net/v1/api/namespaces/some-namespace/repos" \
      --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
      --header "Content-Type: application/json" \
      --data '{"name":"agent-007"}'

    Any Git client that speaks smart HTTP can use the returned remote URL:

    Terminal window
    # Agents know git.
    # Every repository can act as a git repo, allowing agents to interact with Artifacts the way they know best: using the git CLI.
    git clone https://x:${REPO_TOKEN}@artifacts.cloudflare.net/some-namespace/agent-007.git

    To learn more, refer to Get started, Workers binding, and Git protocol.

  1. Workflows limits have been raised to the following:

    Limit | Previous | New
    Concurrent instances (running in parallel) | 10,000 | 50,000
    Instance creation rate | 100/second per account | 300/second per account, 100/second per workflow
    Queued instances per Workflow [1] | 1 million | 2 million

    These increases apply to all users on the Workers Paid plan. Refer to the Workflows limits documentation for more details.

    Footnotes

    1. Queued instances are instances that have been created or awoken and are waiting for a concurrency slot.
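    The two creation-rate limits compose: in any one-second window, a create must fit under both the 300/second account limit and the 100/second per-workflow limit. A simplified illustration (not the actual admission logic; in practice, creates that do not fit wait in the queue):

```typescript
// One-second admission window per limit (illustrative only).
class RateWindow {
  count = 0;
  constructor(readonly limit: number) {}
}

// A create is admitted only if it fits under both windows.
function admitCreate(account: RateWindow, workflow: RateWindow): boolean {
  if (account.count >= account.limit || workflow.count >= workflow.limit) {
    return false;
  }
  account.count++;
  workflow.count++;
  return true;
}

const account = new RateWindow(300);
const workflow = new RateWindow(100);
let admitted = 0;
for (let i = 0; i < 150; i++) {
  if (admitCreate(account, workflow)) admitted++;
}
// Only 100 of 150 creates to a single workflow fit: its limit binds first.
```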

  1. We are renaming Browser Rendering to Browser Run. The name Browser Rendering never fully captured what the product does. Browser Run lets you run full browser sessions on Cloudflare's global network, drive them with code or AI, record and replay sessions, crawl pages for content, debug in real time, and let humans intervene when your agent needs help.

    Along with the rename, we have increased limits for Workers Paid plans and redesigned the Browser Run dashboard.

    We have 4x-ed concurrency limits for Workers Paid plan users:

    • Concurrent browsers per account: 30 → 120
    • New browser instances: 30 per minute → 1 per second
    • REST API rate limits: recently increased from 3 to 10 requests per second

    Rate limits across the limits page are now expressed in per-second terms, matching how they are enforced. No action is needed to benefit from the higher limits.

    The redesigned dashboard now shows every request in a single Runs tab, not just browser sessions but also quick actions like screenshots, PDFs, markdown, and crawls. Filter by endpoint, view target URLs, status, and duration, and expand any row for more detail.

    Browser Run dashboard Runs tab with browser sessions and quick actions visible in one list, and an expanded crawl job showing its progress

    We are also shipping several new features:

    • Live View, Human in the Loop, and Session Recordings - See what your agent is doing in real time, let humans step in when automation hits a wall, and replay any session after it ends.
    • WebMCP - Websites can expose structured tools for AI agents to discover and call directly, replacing slow screenshot-analyze-click loops.

    For the full story, read our Agents Week blog Browser Run: Give your agents a browser.

  1. When browser automation fails or behaves unexpectedly, it can be hard to understand what happened. We are shipping three new features in Browser Run (formerly Browser Rendering) to help:

    Live View

    Live View lets you see what your agent is doing in real time. The page, DOM, console, and network requests are all visible for any active browser session. Access Live View from the Cloudflare dashboard, via the hosted UI at live.browser.run, or using native Chrome DevTools.

    Human in the Loop

    When your agent hits a snag like a login page or unexpected edge case, it can hand off to a human instead of failing. With Human in the Loop, a human steps into the live browser session through Live View, resolves the issue, and hands control back to the script.

    Today, you can step in by opening the Live View URL for any active session. Next, we are adding a handoff flow where the agent can signal that it needs help, notify a human to step in, then hand control back to the agent once the issue is resolved.

    Browser Run Human in the Loop demo where an AI agent searches Amazon, selects a product, and requests human help when authentication is needed to buy

    Session Recordings

    Session Recordings records DOM state so you can replay any session after it ends. Enable recordings by passing recording: true when launching a browser. After the session closes, view the recording in the Cloudflare dashboard under Browser Run > Runs, or retrieve it through the API using the session ID. Next, we are adding the ability to inspect DOM state and console output at any point during the recording.

    Browser Run session recording showing an automated browser navigating the Sentry Shop and adding a bomber jacket to the cart

    To get started, refer to the documentation for Live View, Human in the Loop, and Session Recording.

  1. Browser Run (formerly Browser Rendering) now supports WebMCP (Web Model Context Protocol), a new browser API from the Google Chrome team.

    The Internet was built for humans, so navigating as an AI agent today is unreliable. WebMCP lets websites expose structured tools for AI agents to discover and call directly. Instead of slow screenshot-analyze-click loops, agents can call website functions like searchFlights() or bookTicket() with typed parameters, making browser automation faster, more reliable, and less fragile.

    Browser Run lab session showing WebMCP tools being discovered and executed in the Chrome DevTools console to book a hotel

    With WebMCP, you can:

    • Discover website tools - Use navigator.modelContextTesting.listTools() to see available actions on any WebMCP-enabled site
    • Execute tools directly - Call navigator.modelContextTesting.executeTool() with typed parameters
    • Handle human-in-the-loop interactions - Some tools pause for user confirmation before completing sensitive actions

    WebMCP requires Chrome beta features. We have an experimental pool with browser instances running Chrome beta so you can test emerging browser features before they reach stable Chrome. To start a WebMCP session, add lab=true to your /devtools/browser request:

    Terminal window
    curl -X POST "https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering/devtools/browser?lab=true&keep_alive=300000" \
      -H "Authorization: Bearer {api_token}"

    Combined with the recently launched CDP endpoint, AI agents can also use WebMCP. Connect an MCP client to Browser Run via CDP, and your agent can discover and call website tools directly. Here's the same hotel booking demo, this time driven by an AI agent through OpenCode:

    Browser Run Live View showing an AI agent navigating a hotel booking site in real time

    For a step-by-step guide, refer to the WebMCP documentation.

  1. Agent Lee adds Write Operations and Generative UI

    We are excited to announce two major capability upgrades for Agent Lee, the AI co-pilot built directly into the Cloudflare dashboard. Agent Lee is designed to understand your specific account configuration, and with this release, it moves from a passive advisor to an active assistant that can help you manage your infrastructure and visualize your data through natural language.

    Take action with Write Operations

    Agent Lee can now perform changes on your behalf across your Cloudflare account. Whether you need to update DNS records, modify SSL/TLS settings, or configure Workers routes, you can simply ask.

    To ensure security and accuracy, every write operation requires explicit user approval. Before any change is committed, Agent Lee will present a summary of the proposed action in plain language. No action is taken until you select Confirm, and this approval requirement is enforced at the infrastructure level to prevent unauthorized changes.

    Example requests:

    • "Add an A record for blog.example.com pointing to 192.0.2.10."
    • "Enable Always Use HTTPS on my zone."
    • "Set the SSL mode for example.com to Full (strict)."

    Visualize data with Generative UI

    Understanding your traffic and security trends is now as easy as asking a question. Agent Lee now features Generative UI, allowing it to render inline charts and structured data visualizations directly within the chat interface using your actual account telemetry.

    Example requests:

    • "Show me a chart of my traffic over the last 7 days."
    • "What does my error rate look like for the past 24 hours?"
    • "Graph my cache hit rate for example.com this week."

    Availability

    These features are currently available in Beta for all users on the Free plan. To get started, log in to the Cloudflare dashboard and select Ask AI in the upper right corner.

    To learn more about how to interact with your account using AI, refer to the Agent Lee documentation.

  1. You can now specify placement constraints to control where your Containers run.

    Constraint | Values | Use case
    regions | ENAM, WNAM, EEUR, WEUR | Geographic placement
    jurisdiction | eu, fedramp | Compliance boundaries

    Use regions to limit placement to specific geographic areas. Use jurisdiction to restrict containers to compliance boundaries — eu maps to European regions (EEUR, WEUR) and fedramp maps to North American regions (ENAM, WNAM).
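    The jurisdiction-to-region mapping described above can be sketched as a lookup (illustrative; the platform resolves this for you):

```typescript
// eu and fedramp each expand to the regions listed above.
const JURISDICTION_REGIONS: Record<string, string[]> = {
  eu: ["EEUR", "WEUR"],
  fedramp: ["ENAM", "WNAM"],
};

function regionsFor(jurisdiction: string): string[] {
  const regions = JURISDICTION_REGIONS[jurisdiction];
  if (!regions) throw new Error(`unknown jurisdiction: ${jurisdiction}`);
  return regions;
}
```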

    Refer to Containers architecture for more details on placement.

  1. Privacy Proxy metrics are now queryable through Cloudflare's GraphQL Analytics API, the new default method for accessing Privacy Proxy observability data. All metrics are available through a single endpoint:

    Terminal window
    curl https://api.cloudflare.com/client/v4/graphql \
      --header "Authorization: Bearer <API_TOKEN>" \
      --header "Content-Type: application/json" \
      --data '{
        "query": "query ($accountTag: string, $startDate: Date, $endDate: Date) { viewer { accounts(filter: { accountTag: $accountTag }) { privacyProxyRequestMetricsAdaptiveGroups(filter: { date_geq: $startDate, date_leq: $endDate }, limit: 10000, orderBy: [date_ASC]) { count dimensions { date } } } } }",
        "variables": {
          "accountTag": "<YOUR_ACCOUNT_TAG>",
          "startDate": "2026-04-04",
          "endDate": "2026-04-06"
        }
      }'

    Available nodes

    Four GraphQL nodes are now live, providing aggregate metrics across all key dimensions of your Privacy Proxy deployment:

    • privacyProxyRequestMetricsAdaptiveGroups — Request volume, error rates, status codes, and proxy status breakdowns.
    • privacyProxyIngressConnMetricsAdaptiveGroups — Client-to-proxy connection counts, bytes transferred, and latency percentiles.
    • privacyProxyEgressConnMetricsAdaptiveGroups — Proxy-to-origin connection counts, bytes transferred, and latency percentiles.
    • privacyProxyAuthMetricsAdaptiveGroups — Authentication attempt counts by method and result.

    All nodes support filtering by time, data center (coloCode), and endpoint, with additional node-specific dimensions such as transport protocol and authentication method.

    What this means for existing OpenTelemetry users

    OpenTelemetry-based metrics export remains available. The GraphQL Analytics API is now the recommended default: it is plug-and-play and requires no collector infrastructure to run or maintain.

  1. Browser Rendering now supports wrangler browser commands, letting you create, manage, and view browser sessions directly from your terminal, streamlining your workflow. Since Wrangler handles authentication, you do not need to pass API tokens in your commands.

    The following commands are available:

    Command | Description
    wrangler browser create | Create a new browser session
    wrangler browser close | Close a session
    wrangler browser list | List active sessions
    wrangler browser view | View a live browser session

    The create command spins up a browser instance on Cloudflare's network and returns a session URL. Once created, you can connect to the session using any CDP-compatible client like Puppeteer, Playwright, or MCP clients to automate browsing, scrape content, or debug remotely.

    Terminal window
    wrangler browser create

    Use --keepAlive to set the session keep-alive duration (60-600 seconds):

    Terminal window
    wrangler browser create --keepAlive 300

    The view command auto-selects when only one session exists, or prompts for selection when multiple sessions are available.

    All commands support --json for structured output, and because these are CLI commands, you can incorporate them into scripts to automate session management.

    For full usage details, refer to the Wrangler commands documentation.

  1. VPC Network bindings now give your Workers access to any service in your private network without pre-registering individual hosts or ports. This complements existing VPC Service bindings, which scope each binding to a specific host and port.

    You can bind to a Cloudflare Tunnel by tunnel_id to reach any service on the network where that tunnel is running, or bind to your Cloudflare Mesh network using cf1:network to reach any Mesh node, client device, or subnet route in your account:

    JSONC
    {
      "vpc_networks": [
        {
          "binding": "MESH",
          "network_id": "cf1:network",
          "remote": true
        }
      ]
    }

    At runtime, fetch() routes through the network to reach the service at the IP and port you specify:

    JavaScript
    const response = await env.MESH.fetch("http://10.0.1.50:8080/api/data");

    For configuration options and examples, refer to VPC Networks and Connect Workers to Cloudflare Mesh.

  1. Cloudflare Containers and Sandboxes are now generally available.

    Containers let you run more workloads on the Workers platform, including resource-intensive applications, different languages, and CLI tools that need full Linux environments.

    Since the initial launch of Containers, there have been significant improvements to Containers' performance, stability, and feature set. Some highlights include:

    The Sandbox SDK provides isolated environments for running untrusted code securely, with a simple TypeScript API for executing commands, managing files, and exposing services. This makes it easier to secure and manage your agents at scale. Some additions since launch include:

    For more information, refer to Containers and Sandbox SDK documentation.

  1. Outbound Workers for Sandboxes and Containers now support zero-trust credential injection, TLS interception, allow/deny lists, and dynamic per-instance egress policies. These features give platforms running agentic workloads full control over what leaves the sandbox, without exposing secrets to untrusted workloads, like user-generated code or coding agents.

    Credential injection

    Because outbound handlers run in the Workers runtime, outside the sandbox, they can hold secrets the sandbox never sees. A sandboxed workload can make a plain request, and credentials are transparently attached before a request is forwarded upstream.

    For instance, you could run an agent in a sandbox and ensure that any requests it makes to GitHub are authenticated, while the agent itself never has access to the credentials:

    TypeScript
    export class MySandbox extends Sandbox {}
    MySandbox.outboundByHost = {
      "github.com": (request: Request, env: Env, ctx: OutboundHandlerContext) => {
        const requestWithAuth = new Request(request);
        requestWithAuth.headers.set("x-auth-token", env.SECRET);
        return fetch(requestWithAuth);
      },
    };

    You can easily inject unique credentials for different instances by using ctx.containerId:

    TypeScript
    MySandbox.outboundByHost = {
      "my-internal-vcs.dev": async (
        request: Request,
        env: Env,
        ctx: OutboundHandlerContext,
      ) => {
        const authKey = await env.KEYS.get(ctx.containerId);
        const requestWithAuth = new Request(request);
        requestWithAuth.headers.set("x-auth-token", authKey);
        return fetch(requestWithAuth);
      },
    };

    No token is ever passed into the sandbox. You can rotate secrets in the Worker environment and every request will pick them up immediately.

    TLS interception

    Outbound Workers now intercept HTTPS traffic. A unique ephemeral certificate authority (CA) and private key are created for each sandbox instance. The CA is placed into the sandbox and trusted by default. The ephemeral private key never leaves the container runtime sidecar process and is never shared across instances.

    With TLS interception active, outbound Workers can act as a transparent proxy for both HTTP and HTTPS traffic.

    Allow and deny hosts

    Easily filter outbound traffic with allowedHosts and deniedHosts. When allowedHosts is set, it becomes a deny-by-default allowlist. Both properties support glob patterns.

    TypeScript
    export class MySandbox extends Sandbox {
      allowedHosts = ["github.com", "npmjs.org"];
    }

    Dynamic outbound handlers

    Define named outbound handlers then apply or remove them at runtime using setOutboundHandler() or setOutboundByHost(). This lets you change egress policy for a running sandbox without restarting it.

    TypeScript
    export class MySandbox extends Sandbox {}
    MySandbox.outboundHandlers = {
      allowHosts: async (req: Request, env: Env, ctx: OutboundHandlerContext) => {
        const url = new URL(req.url);
        if (ctx.params.allowedHostnames.includes(url.hostname)) {
          return fetch(req);
        }
        return new Response(null, { status: 403 });
      },
      noHttp: async () => {
        return new Response(null, { status: 403 });
      },
    };

    Apply handlers programmatically from your Worker:

    TypeScript
    const sandbox = getSandbox(env.Sandbox, userId);
    // Open network for setup
    await sandbox.setOutboundHandler("allowHosts", {
      allowedHostnames: ["github.com", "npmjs.org"],
    });
    await sandbox.exec("npm install");
    // Lock down after setup
    await sandbox.setOutboundHandler("noHttp");

    Handlers accept params, so you can customize behavior per instance without defining separate handler functions.

    Get started

    Upgrade to @cloudflare/containers@0.3.0 or @cloudflare/sandbox@0.8.9 to use these features.

    For more details, refer to Sandbox outbound traffic and Container outbound traffic.

  1. Browser Rendering now exposes the Chrome DevTools Protocol (CDP), the low-level protocol that powers browser automation. The growing ecosystem of CDP-based agent tools, along with existing CDP automation scripts, can now use Browser Rendering directly.

    Any CDP-compatible client, including Puppeteer and Playwright, can connect from any environment, whether that is Cloudflare Workers, your local machine, or a cloud environment. All you need is your Cloudflare API token.

    For any existing CDP script, switching to Browser Rendering is a one-line change:

    JavaScript
    const puppeteer = require("puppeteer-core");
    const browser = await puppeteer.connect({
      browserWSEndpoint: `wss://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/browser-rendering/devtools/browser?keep_alive=600000`,
      headers: { Authorization: `Bearer ${API_TOKEN}` },
    });
    const page = await browser.newPage();
    await page.goto("https://example.com");
    console.log(await page.title());
    await browser.close();

    Additionally, MCP clients like Claude Desktop, Claude Code, Cursor, and OpenCode can now use Browser Rendering as their remote browser via the chrome-devtools-mcp package.

    Here is an example of how to configure Browser Rendering for Claude Desktop:

    {
      "mcpServers": {
        "browser-rendering": {
          "command": "npx",
          "args": [
            "-y",
            "chrome-devtools-mcp@latest",
            "--wsEndpoint=wss://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/browser-rendering/devtools/browser?keep_alive=600000",
            "--wsHeaders={\"Authorization\":\"Bearer <API_TOKEN>\"}"
          ]
        }
      }
    }

    To get started, refer to the CDP documentation.

  1. The simultaneous open connections limit has been relaxed. Previously, each Worker invocation was limited to six open connections at a time for the entire lifetime of each connection, including while reading the response body. Now, a connection is freed as soon as response headers arrive, so the six-connection limit only constrains how many connections can be in the initial "waiting for headers" phase simultaneously.

    Before: New connections are blocked until an earlier connection fully completes

    A 7th fetch is queued until an earlier connection fully completes, including reading its entire response body

    After: New connections can start as soon as response headers arrive

    A 7th fetch starts as soon as any earlier connection receives its response headers

    This means Workers can now have many more connections open at the same time without queueing, as long as no more than six are waiting for their initial response. This eliminates the Response closed due to connection limit exception that could previously occur when the runtime canceled stalled connections to prevent deadlocks.

    Previously, the runtime used a deadlock avoidance algorithm that watched each open connection for I/O activity. If all six connections appeared idle — even momentarily — the runtime would cancel the least-recently-used connection to make room for new requests. In practice, this heuristic was fragile. For example, when a response used Content-Encoding: gzip, the runtime's internal decompression created brief gaps between read and write operations. During these gaps, the connection appeared stalled despite being actively read by the Worker. If multiple connections hit these gaps at the same time, the runtime could spuriously cancel a connection that was working correctly. By only counting connections during the waiting-for-headers phase — where the runtime is fully in control and there is no ambiguity about whether the connection is active — this class of bug is eliminated entirely.

    Before: Connections could be canceled during brief internal pauses

    A connection with gaps from gzip decompression appears idle and is canceled by the runtime

    After: Connections complete normally regardless of internal pauses

    The same connection completes normally because the body phase is no longer counted against the limit
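    The new accounting can be modeled as a counter that holds a slot only during the waiting-for-headers phase. A minimal sketch (a conceptual model, not the runtime's code):

```typescript
// A slot is occupied from connection start until headers arrive, not until
// the body is fully read.
class HeaderPhaseLimiter {
  inHeaderPhase = 0;
  queued = 0;
  constructor(readonly limit = 6) {}

  // Try to begin the waiting-for-headers phase; queue if all slots are busy.
  start(): boolean {
    if (this.inHeaderPhase >= this.limit) {
      this.queued++;
      return false;
    }
    this.inHeaderPhase++;
    return true;
  }

  // Headers arrived: free the slot even though the body may still stream,
  // and promote one queued request into the freed slot.
  headersReceived(): void {
    this.inHeaderPhase--;
    if (this.queued > 0) {
      this.queued--;
      this.inHeaderPhase++;
    }
  }
}

const limiter = new HeaderPhaseLimiter(6);
for (let i = 0; i < 6; i++) limiter.start(); // six fetches begin
limiter.start(); // the 7th queues
limiter.headersReceived(); // headers arrive on one request; the 7th starts
```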
  1. AI Search now supports CSS content selectors for website data sources. You can now define which parts of a crawled page are extracted and indexed by specifying CSS selectors paired with URL glob patterns.

    Content selectors solve the problem of indexing only relevant content while ignoring navigation, sidebars, footers, and other boilerplate. When a page URL matches a glob pattern, only elements matching the corresponding CSS selector are extracted and converted to Markdown for indexing.

    Configure content selectors via the dashboard or API:

    Terminal window
    curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/ai-search/instances" \
      -H "Authorization: Bearer {api_token}" \
      -H "Content-Type: application/json" \
      -d '{
        "id": "my-ai-search",
        "source": "https://example.com",
        "type": "web-crawler",
        "source_params": {
          "web_crawler": {
            "parse_options": {
              "content_selector": [
                {
                  "path": "**/blog/**",
                  "selector": "article .post-body"
                }
              ]
            }
          }
        }
      }'

    Selectors are evaluated in order, and the first matching pattern wins. You can define up to 10 content selector entries per instance.
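    First-match-wins resolution can be sketched as an ordered scan (illustrative; the crawler does this server-side, and its glob semantics may differ from this minimal ** and * support):

```typescript
type ContentSelector = { path: string; selector: string };

// Minimal glob support: "**" matches across "/", "*" matches within a segment.
function globToRegExp(glob: string): RegExp {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  const pattern = escaped
    .replace(/\*\*/g, "\u0000")
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp("^" + pattern + "$");
}

// Selectors are evaluated in order; the first matching pattern wins.
function selectorFor(url: string, selectors: ContentSelector[]): string | undefined {
  const path = new URL(url).pathname;
  return selectors.find((s) => globToRegExp(s.path).test(path))?.selector;
}

const selectors: ContentSelector[] = [
  { path: "**/blog/**", selector: "article .post-body" },
  { path: "**", selector: "main" },
];
```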

    For configuration details and examples, refer to the content selectors documentation.

  1. AI Search now supports four additional Workers AI models across text generation and embedding.

    Text generation

    Model | Context window (tokens)
    @cf/zai-org/glm-4.7-flash | 131,072
    @cf/qwen/qwen3-30b-a3b-fp8 | 32,000

    GLM-4.7-Flash is a lightweight model from Zhipu AI with a 131,072 token context window, suitable for long-document summarization and retrieval tasks. Qwen3-30B-A3B is a mixture-of-experts model from Alibaba that activates only 3 billion parameters per forward pass, keeping inference fast while maintaining strong response quality.

    Embedding

    Model | Vector dims | Input tokens | Metric
    @cf/qwen/qwen3-embedding-0.6b | 1,024 | 4,096 | cosine
    @cf/google/embeddinggemma-300m | 768 | 512 | cosine

    Qwen3-Embedding-0.6B supports up to 4,096 input tokens, making it a good fit for indexing longer text chunks. EmbeddingGemma-300M from Google produces 768-dimension vectors and is optimized for low-latency embedding workloads.

    All four models are available without additional provider keys since they run on Workers AI. Select them when creating or updating an AI Search instance in the dashboard or through the API.

    For the full list of supported models, refer to Supported models.

  1. The Workers runtime now automatically sends a reciprocal Close frame when it receives a Close frame from the peer. The readyState transitions to CLOSED before the close event fires. This matches the WebSocket specification and standard browser behavior.

    This change is enabled by default for Workers using compatibility dates on or after 2026-04-07 (via the web_socket_auto_reply_to_close compatibility flag). Existing code that manually calls close() inside the close event handler will continue to work — the call is silently ignored when the WebSocket is already closed.

    JavaScript
    const [client, server] = Object.values(new WebSocketPair());
    server.accept();
    server.addEventListener("close", (event) => {
      // readyState is already CLOSED — no need to call server.close().
      console.log(server.readyState); // WebSocket.CLOSED
      console.log(event.code); // 1000
      console.log(event.wasClean); // true
    });

    Half-open mode for WebSocket proxying

    The automatic close behavior can interfere with WebSocket proxying, where a Worker sits between a client and a backend and needs to coordinate the close on both sides independently. To support this use case, pass { allowHalfOpen: true } to accept():

    JavaScript
    const [client, server] = Object.values(new WebSocketPair());
    server.accept({ allowHalfOpen: true });
    server.addEventListener("close", (event) => {
      // readyState is still CLOSING here, giving you time
      // to coordinate the close on the other side.
      console.log(server.readyState); // WebSocket.CLOSING
      // Manually close when ready.
      server.close(event.code, "done");
    });

    For more information, refer to WebSockets Close behavior.

  1. We are partnering with Google to bring @cf/google/gemma-4-26b-a4b-it to Workers AI. Gemma 4 26B A4B is a Mixture-of-Experts (MoE) model built from Gemini 3 research, with 26B total parameters and only 4B active per forward pass. By activating a small subset of parameters during inference, the model runs almost as fast as a 4B-parameter model while delivering the quality of a much larger one.

    Gemma 4 is Google's most capable family of open models, designed to maximize intelligence-per-parameter.

    Key capabilities

    • Mixture-of-Experts architecture with 8 active experts out of 128 total (plus 1 shared expert), delivering frontier-level performance at a fraction of the compute cost of dense models
    • 256,000 token context window for retaining full conversation history, tool definitions, and long documents across extended sessions
    • Built-in thinking mode that lets the model reason step-by-step before answering, improving accuracy on complex tasks
    • Vision understanding for object detection, document and PDF parsing, screen and UI understanding, chart comprehension, OCR (including multilingual), and handwriting recognition, with support for variable aspect ratios and resolutions
    • Function calling with native support for structured tool use, enabling agentic workflows and multi-step planning
    • Multilingual with out-of-the-box support for 35+ languages, pre-trained on 140+ languages
    • Coding for code generation, completion, and correction

    Use Gemma 4 26B A4B through the Workers AI binding (env.AI.run()), the REST API at /run or /v1/chat/completions, or the OpenAI-compatible endpoint.
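
    As a hedged sketch of the request shape, the helper below assembles a chat-completions body for this model. The function name, the max_tokens default, and the prompt are illustrative assumptions; only the model ID @cf/google/gemma-4-26b-a4b-it comes from this entry:

```javascript
// Hypothetical helper: assemble a chat-completions request body for
// Gemma 4 26B A4B. The function name and options shown here are
// illustrative, not part of the Workers AI API surface.
function buildGemmaRequest(userMessage, { maxTokens = 512 } = {}) {
  return {
    model: "@cf/google/gemma-4-26b-a4b-it",
    max_tokens: maxTokens,
    messages: [{ role: "user", content: userMessage }],
  };
}

const body = buildGemmaRequest("Summarize the attached report.");
console.log(body.model); // @cf/google/gemma-4-26b-a4b-it
```

    The same body could then be sent to any of the entry points listed above, for example as the JSON payload of a POST to the OpenAI-compatible endpoint.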

    For more information, refer to the Gemma 4 26B A4B model page.

  1. AI Gateway now supports automatic retries at the gateway level. When an upstream provider returns an error, your gateway retries the request based on the retry policy you configure, without requiring any client-side changes.

    You can configure the retry count (up to 5 attempts), the delay between retries (from 100ms to 5 seconds), and the backoff strategy (Constant, Linear, or Exponential). These defaults apply to all requests through the gateway, and per-request headers can override them.
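
    To make the three backoff strategies concrete, the sketch below computes the wait before each retry attempt. It is illustrative only: the gateway applies this logic server-side, and the exact formulas here are an assumption, not documented behavior:

```javascript
// Illustrative only: how Constant, Linear, and Exponential backoff
// typically space out retries, given a base delay in milliseconds.
function retryDelay(strategy, baseMs, attempt) {
  switch (strategy) {
    case "constant":
      return baseMs; // same wait before every retry
    case "linear":
      return baseMs * attempt; // wait grows by baseMs each attempt
    case "exponential":
      return baseMs * 2 ** (attempt - 1); // wait doubles each attempt
    default:
      throw new Error(`unknown strategy: ${strategy}`);
  }
}

// With a 100ms base delay and the maximum of 5 attempts:
const delays = [1, 2, 3, 4, 5].map((n) => retryDelay("exponential", 100, n));
console.log(delays); // [ 100, 200, 400, 800, 1600 ]
```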

    Retry Requests settings in the AI Gateway dashboard

    This is particularly useful when you do not control the client making the request and cannot implement retry logic on the caller side. For more complex failover scenarios — such as failing across different providers — use Dynamic Routing.

    For more information, refer to Manage gateways.

  1. All wrangler workflows commands now accept a --local flag to target a Workflow running in a local wrangler dev session instead of the production API.

    You can now manage the full Workflow lifecycle locally, including triggering Workflows, listing instances, pausing, resuming, restarting, terminating, and sending events:

    Terminal window
    npx wrangler workflows list --local
    npx wrangler workflows trigger my-workflow --local
    npx wrangler workflows instances list my-workflow --local
    npx wrangler workflows instances pause my-workflow <INSTANCE_ID> --local
    npx wrangler workflows instances send-event my-workflow <INSTANCE_ID> --type my-event --local

    All commands also accept --port to target a specific wrangler dev session (defaults to 8787).

    For more information, refer to Workflows local development.

  1. AI Search now supports a wrangler ai-search command namespace for managing instances from the command line.

    The following commands are available:

    Command                        Description
    wrangler ai-search create      Create a new instance with an interactive wizard
    wrangler ai-search list        List all instances in your account
    wrangler ai-search get         Get details of a specific instance
    wrangler ai-search update      Update the configuration of an instance
    wrangler ai-search delete      Delete an instance
    wrangler ai-search search      Run a search query against an instance
    wrangler ai-search stats       Get usage statistics for an instance

    The create command guides you through setup: choosing a name, a source type (r2 or web), and a data source. You can also pass all options as flags for non-interactive use:

    Terminal window
    wrangler ai-search create my-instance --type r2 --source my-bucket

    Use wrangler ai-search search to query an instance directly from the CLI:

    Terminal window
    wrangler ai-search search my-instance --query "how do I configure caching?"

    All commands support --json for structured output that scripts and AI agents can parse directly.

    For full usage details, refer to the Wrangler commands documentation.

  1. Workers Builds now supports Deploy Hooks — trigger builds from your headless CMS, a Cron Trigger, a Slack bot, or any system that can send an HTTP request.

    Each Deploy Hook is a unique URL tied to a specific branch. Send it a POST and your Worker builds and deploys.

    Terminal window
    curl -X POST "https://api.cloudflare.com/client/v4/workers/builds/deploy_hooks/<DEPLOY_HOOK_ID>"

    To create one, go to Workers & Pages > your Worker > Settings > Builds > Deploy Hooks.

    Since a Deploy Hook is a URL, you can also call it from another Worker. For example, a Worker with a Cron Trigger can rebuild your project on a schedule:

    JavaScript
    export default {
      async scheduled(event, env, ctx) {
        ctx.waitUntil(fetch(env.DEPLOY_HOOK_URL, { method: "POST" }));
      },
    };

    You can also use Deploy Hooks to rebuild when your CMS publishes new content or deploy from a Slack slash command.

    Built-in optimizations

    • Automatic deduplication: If a Deploy Hook fires multiple times before the first build starts running, redundant builds are automatically skipped. This keeps your build queue clean when webhooks retry or CMS events arrive in bursts.
    • Last triggered: The dashboard shows when each hook was last triggered.
    • Build source: Your Worker's build history shows which Deploy Hook started each build by name.

    Deploy Hooks are rate limited to 10 builds per minute per Worker and 100 builds per minute per account. For all limits, refer to Limits & pricing.

    To get started, read the Deploy Hooks documentation.

  1. Three new properties on request.cf in Workers now expose Layer 4 transport telemetry from the client connection. These properties let your Worker act on real-time connection quality signals, such as round-trip time and data delivery rate, without requiring any client-side changes.

    Previously, this telemetry was only available via the Server-Timing: cfL4 response header. These new properties surface the same data directly in the Workers runtime, so you can use it for routing, logging, or response customization.

    New properties

    Property         Type                 Description
    clientTcpRtt     number | undefined   The smoothed TCP round-trip time (RTT) between Cloudflare and the client in milliseconds. Only present for TCP connections (HTTP/1, HTTP/2). For example, 22.
    clientQuicRtt    number | undefined   The smoothed QUIC round-trip time (RTT) between Cloudflare and the client in milliseconds. Only present for QUIC connections (HTTP/3). For example, 42.
    edgeL4           Object | undefined   Layer 4 transport statistics. Contains deliveryRate (number) — the most recent data delivery rate estimate for the connection, in bytes per second. For example, 123456.

    Example: Log connection quality metrics

    JavaScript
    export default {
      async fetch(request) {
        const cf = request.cf;
        const rtt = cf.clientTcpRtt ?? cf.clientQuicRtt ?? 0;
        const deliveryRate = cf.edgeL4?.deliveryRate ?? 0;
        // Check for presence, not truthiness, so a 0ms TCP RTT still counts as TCP.
        const transport = cf.clientTcpRtt !== undefined ? "TCP" : "QUIC";
        console.log(`Transport: ${transport}, RTT: ${rtt}ms, Delivery rate: ${deliveryRate} B/s`);
        const headers = new Headers(request.headers);
        headers.set("X-Client-RTT", String(rtt));
        headers.set("X-Delivery-Rate", String(deliveryRate));
        return fetch(new Request(request, { headers }));
      },
    };
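
    As a further illustrative sketch, the same signals could drive an adaptive routing decision. The helper name and the thresholds below are assumptions for demonstration, not Cloudflare guidance:

```javascript
// Hypothetical helper: pick an asset variant from connection-quality
// signals on request.cf. Thresholds are illustrative only.
function chooseVariant(cf) {
  const rtt = cf.clientTcpRtt ?? cf.clientQuicRtt;
  const rate = cf.edgeL4?.deliveryRate;
  if (rtt === undefined || rate === undefined) return "default";
  // Slow or low-throughput connections get the lightweight variant.
  if (rtt > 200 || rate < 50_000) return "lite";
  return "full";
}

console.log(chooseVariant({ clientTcpRtt: 22, edgeL4: { deliveryRate: 500_000 } })); // full
console.log(chooseVariant({ clientQuicRtt: 350, edgeL4: { deliveryRate: 20_000 } })); // lite
```

    A Worker could use the returned variant to rewrite the request URL or select a response body before forwarding to origin.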

    For more information, refer to Workers Runtime APIs: Request.