Changelog

New updates and improvements at Cloudflare.

  1. The @cloudflare/codemode package has been rewritten into a modular, runtime-agnostic SDK.

    Code Mode enables LLMs to write and execute TypeScript that orchestrates your tools, instead of calling them one at a time. This can (and does) yield significant token savings, reduce context window pressure, and improve overall model performance on a task.

    The new Executor interface is runtime agnostic and comes with a prebuilt DynamicWorkerExecutor to run generated code in a Dynamic Worker Loader.

    Breaking changes

    • Removed experimental_codemode() and CodeModeProxy — the package no longer owns an LLM call or model choice
    • New import path: createCodeTool() is now exported from @cloudflare/codemode/ai

    New features

    • createCodeTool() — Returns a standard AI SDK Tool to use in your AI agents.
    • Executor interface — Minimal execute(code, fns) contract. Implement for any code sandboxing primitive or runtime.

    DynamicWorkerExecutor

    Runs code in a Dynamic Worker. It comes with the following features:

    • Network isolation — fetch() and connect() blocked by default (globalOutbound: null) when using DynamicWorkerExecutor
    • Console capture — console.log/warn/error captured and returned in ExecuteResult.logs
    • Execution timeout — Configurable via timeout option (default 30s)

    Usage

    JavaScript
    import { createCodeTool } from "@cloudflare/codemode/ai";
    import { DynamicWorkerExecutor } from "@cloudflare/codemode";
    import { streamText } from "ai";
    const executor = new DynamicWorkerExecutor({ loader: env.LOADER });
    const codemode = createCodeTool({ tools: myTools, executor });
    const result = streamText({
      model,
      tools: { codemode },
      messages,
    });

    Wrangler configuration

    {
      "worker_loaders": [{ "binding": "LOADER" }],
    }

    See the Code Mode documentation for full API reference and examples.

    Upgrade

    Terminal window
    npm i @cloudflare/codemode@latest
  1. Cloudflare Tunnel is now available in the main Cloudflare Dashboard at Networking > Tunnels, bringing first-class Tunnel management to developers using Tunnel for securing origin servers.

    Manage Tunnels in the Core Dashboard

    This new experience provides everything you need to manage Tunnels for public applications, including:

    Choose the right dashboard for your use case

    Core Dashboard: Navigate to Networking > Tunnels to manage Tunnels for:

    Cloudflare One Dashboard: Navigate to Zero Trust > Networks > Connectors to manage Tunnels for:

    Both dashboards provide complete Tunnel management capabilities — choose based on your primary workflow.

    Get started

    New to Tunnel? Learn how to get started with Cloudflare Tunnel or explore advanced use cases like securing SSH servers or running Tunnels in Kubernetes.

  1. We have introduced dynamic visualizations to the Threat Events dashboard to help you better understand the threat landscape and identify emerging patterns at a glance.

    What's new:

    • Sankey Diagrams: Trace the flow of attacks from country of origin to target country to identify which regions are being hit hardest and where the threat infrastructure resides.
    Sankey Diagram
    • Dataset Distribution over time: Instantly pivot your view to understand if a specific campaign is targeting your sector or if it is a broad-spectrum commodity attack.
    Events over time
    • Enhanced Filtering: Use these visual tools to filter and drill down into specific attack vectors directly from the charts.

    Cloudforce One subscribers can explore these new views now in Application Security > Threat Intelligence > Threat Events.

  1. A new Allow clientless access setting makes it easier to enable access to private self-hosted applications without a device client.

    Allow clientless access setting in the Cloudflare One dashboard

    Previously, to provide clientless access to a private hostname or IP, you had to create a separate bookmark application pointing to a prefixed Clientless Web Isolation URL (for example, https://<your-teamname>.cloudflareaccess.com/browser/https://10.0.0.1/). This bookmark was visible to all users in the App Launcher, regardless of whether they had access to the underlying application.

    Now, you can manage clientless access directly within your private self-hosted application. When Allow clientless access is turned on, users who pass your Access application policies will see a tile in their App Launcher pointing to the prefixed URL. Users must have remote browser permissions to open the link.

  1. You can now assign Access policies to bookmark applications. This lets you control which users see a bookmark in the App Launcher based on identity, device posture, and other policy rules.

    Previously, bookmark applications were visible to all users in your organization. With policy support, you can now:

    • Tailor the App Launcher to each user — Users only see the applications they have access to, reducing clutter and preventing accidental clicks on irrelevant resources.
    • Restrict visibility of sensitive bookmarks — Limit who can view bookmarks to internal tools or partner resources based on group membership, identity provider, or device posture.

    Bookmarks support all Access policy configurations except purpose justification, temporary authentication, and application isolation. If no policy is assigned, the bookmark remains visible to all users (maintaining backwards compatibility).

    For more information, refer to Add bookmarks.

  1. The latest release of the Agents SDK adds built-in retry utilities, per-connection protocol message control, and a fully rewritten @cloudflare/ai-chat with data parts, tool approval persistence, and zero breaking changes.

    Retry utilities

    A new this.retry() method lets you retry any async operation with exponential backoff and jitter. You can pass an optional shouldRetry predicate to bail early on non-retryable errors.

    JavaScript
    class MyAgent extends Agent {
      async onRequest(request) {
        const data = await this.retry(() => callUnreliableService(), {
          maxAttempts: 4,
          shouldRetry: (err) => !(err instanceof PermanentError),
        });
        return Response.json(data);
      }
    }

    Retry options are also available per-task on queue(), schedule(), scheduleEvery(), and addMcpServer():

    JavaScript
    // Per-task retry configuration, persisted in SQLite alongside the task
    await this.schedule(
      Date.now() + 60_000,
      "sendReport",
      { userId: "abc" },
      {
        retry: { maxAttempts: 5 },
      },
    );

    // Class-level retry defaults
    class MyAgent extends Agent {
      static options = {
        retry: { maxAttempts: 3 },
      };
    }

    Retry options are validated eagerly at enqueue/schedule time, and invalid values throw immediately. Internal retries have also been added for workflow operations (terminateWorkflow, pauseWorkflow, and others) with Durable Object-aware error detection.

    Per-connection protocol message control

    Agents automatically send JSON text frames (identity, state, MCP server lists) to every WebSocket connection. You can now suppress these per-connection for clients that cannot handle them — binary-only devices, MQTT clients, or lightweight embedded systems.

    JavaScript
    class MyAgent extends Agent {
      shouldSendProtocolMessages(connection, ctx) {
        // Suppress protocol messages for MQTT clients
        const subprotocol = ctx.request.headers.get("Sec-WebSocket-Protocol");
        return subprotocol !== "mqtt";
      }
    }

    Connections with protocol messages disabled still fully participate in RPC and regular messaging. Use isConnectionProtocolEnabled(connection) to check a connection's status at any time. The flag persists across Durable Object hibernation.
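
    For example, you could skip JSON status frames for clients that have protocol messages disabled. A minimal sketch, assuming isConnectionProtocolEnabled is exposed as a method on the Agent instance:

    JavaScript
    import { Agent } from "agents";

    class MyAgent extends Agent {
      broadcastStatus(status) {
        for (const connection of this.getConnections()) {
          // Only send JSON frames to clients that accept protocol messages
          if (this.isConnectionProtocolEnabled(connection)) {
            connection.send(JSON.stringify({ type: "status", status }));
          }
        }
      }
    }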

    See Protocol messages for full documentation.

    @cloudflare/ai-chat v0.1.0

    The first stable release of @cloudflare/ai-chat ships alongside this release with a major refactor of AIChatAgent internals — new ResumableStream class, WebSocket ChatTransport, and simplified SSE parsing — with zero breaking changes. Existing code using AIChatAgent and useAgentChat works as-is.

    Key new features:

    • Data parts — Attach typed JSON blobs (data-*) to messages alongside text. Supports reconciliation (type+id updates in-place), append, and transient parts (ephemeral via onData callback). See Data parts.
    • Tool approval persistence — The needsApproval approval UI now survives page refresh and DO hibernation. The streaming message is persisted to SQLite when a tool enters approval-requested state.
    • maxPersistedMessages — Cap SQLite message storage with automatic oldest-message deletion.
    • body option on useAgentChat — Send custom data with every request (static or dynamic).
    • Incremental persistence — Hash-based cache to skip redundant SQL writes.
    • Row size guard — Automatic two-pass compaction when messages approach the SQLite 2 MB limit.
    • autoContinueAfterToolResult defaults to true — Client-side tool results and tool approvals now automatically trigger a server continuation, matching server-executed tool behavior. Set autoContinueAfterToolResult: false in useAgentChat to restore the previous behavior.
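
    A minimal sketch of the two client-side options above, assuming the existing useAgent and useAgentChat hooks (which, per the notes above, continue to work as-is); the agent name and workspaceId field are made up for illustration:

    JavaScript
    import { useAgent } from "agents/react";
    import { useAgentChat } from "agents/ai-react";

    function Chat() {
      const agent = useAgent({ agent: "my-chat-agent" });
      const chat = useAgentChat({
        agent,
        // Sent with every request (can also be a function for dynamic values)
        body: { workspaceId: "example-workspace" },
        // Opt out of the new default of auto-continuing after client tool results
        autoContinueAfterToolResult: false,
      });
      return null; // render the returned chat state in your UI
    }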

    Notable bug fixes:

    • Resolved stream resumption race conditions
    • Resolved an issue where setMessages functional updater sent empty arrays
    • Resolved an issue where client tool schemas were lost after DO hibernation
    • Resolved InvalidPromptError after tool approval (approval.id was dropped)
    • Resolved an issue where message metadata was not propagated on broadcast/resume paths
    • Resolved an issue where clearAll() did not clear in-memory chunk buffers
    • Resolved an issue where reasoning-delta silently dropped data when reasoning-start was missed during stream resumption

    Synchronous queue and schedule getters

    getQueue(), getQueues(), getSchedule(), dequeue(), dequeueAll(), and dequeueAllByCallback() were unnecessarily async despite only performing synchronous SQL operations. They now return values directly instead of wrapping them in Promises. This is backward compatible — existing code using await on these methods will continue to work.

    Other improvements

    • Fix TypeScript "excessively deep" error — A depth counter on CanSerialize and IsSerializableParam types bails out to true after 10 levels of recursion, preventing the "Type instantiation is excessively deep" error with deeply nested types like AI SDK CoreMessage[].
    • POST SSE keepalive — The POST SSE handler now sends event: ping every 30 seconds to keep the connection alive, matching the existing GET SSE handler behavior. This prevents POST response streams from being silently dropped by proxies during long-running tool calls.
    • Widened peer dependency ranges — Peer dependency ranges across packages have been widened to prevent cascading major bumps during 0.x minor releases. @cloudflare/ai-chat and @cloudflare/codemode are now marked as optional peer dependencies.

    Upgrade

    To update to the latest version:

    Terminal window
    npm i agents@latest @cloudflare/ai-chat@latest
  1. We are updating naming related to some of our Networking products to better clarify their place in the Zero Trust and Secure Access Service Edge (SASE) journey.

    We are retiring some older brand names in favor of names that describe exactly what the products do within your network. We are doing this to help customers build better, clearer mental models for comprehensive SASE architecture delivered on Cloudflare.

    What's changing

    • Magic WANCloudflare WAN
    • Magic WAN IPsecCloudflare IPsec
    • Magic WAN GRECloudflare GRE
    • Magic WAN ConnectorCloudflare One Appliance
    • Magic FirewallCloudflare Network Firewall
    • Magic Network MonitoringNetwork Flow
    • Magic Cloud NetworkingCloudflare One Multi-cloud Networking

    No action is required by you — all functionality, existing configurations, and billing will remain exactly the same.

    For more information, visit the Cloudflare One documentation.

  1. Sandboxes and Containers now support running Docker for "Docker-in-Docker" setups. This is particularly useful when your end users or agents want to run a full sandboxed development environment.

    This allows you to:

    • Develop containerized applications with your Sandbox
    • Run isolated test environments for images
    • Build container images as part of CI/CD workflows
    • Deploy arbitrary images supplied at runtime within a container

    For Sandbox SDK users, see the Docker-in-Docker guide for instructions on combining Docker with the Sandbox SDK. For general Containers usage, see the Containers FAQ.

  1. When AI systems request pages from any website that uses Cloudflare and has Markdown for Agents enabled, they can express the preference for text/markdown in the request: our network will automatically and efficiently convert the HTML to markdown, when possible, on the fly.

    This release adds the following improvements:

    • The origin response limit was raised from 1 MB to 2 MB (2,097,152 bytes).
    • We no longer require the origin to send the content-length header.
    • We now support content encoded responses from the origin.
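
    As an illustration, a client can opt in with a standard Accept header. A minimal fetch sketch (the URL is a placeholder):

    JavaScript
    // Ask for markdown; the HTML is converted on the fly when possible
    const response = await fetch("https://example.com/some-page", {
      headers: { Accept: "text/markdown" },
    });
    const markdown = await response.text();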

    If you haven’t enabled automatic Markdown conversion yet, visit the AI Crawl Control section of the Cloudflare dashboard and enable Markdown for Agents.

    Refer to our developer documentation for more details.

  1. This week’s release introduces new detections for CVE-2025-68645 and CVE-2025-31125.

    Key Findings

    • CVE-2025-68645: A Local File Inclusion (LFI) vulnerability in the Webmail Classic UI of Zimbra Collaboration Suite (ZCS) 10.0 and 10.1 allows unauthenticated remote attackers to craft requests to the /h/rest endpoint, improperly influence internal dispatching, and include arbitrary files from the WebRoot directory.
    • CVE-2025-31125: Vite, the JavaScript frontend tooling framework, exposes content of non-allowed files via ?inline&import when its development server is network-exposed, enabling unauthorized attackers to read arbitrary files and potentially leak sensitive information.
    | Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | Cloudflare Managed Ruleset | | N/A | Zimbra - Local File Inclusion - CVE:CVE-2025-68645 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | Vite - WASM Import Path Traversal - CVE:CVE-2025-31125 | Log | Block | This is a new detection. |
  1. A new Workers Best Practices guide provides opinionated recommendations for building fast, reliable, observable, and secure Workers. The guide draws on production patterns, Cloudflare internal usage, and best practices observed from developers building on Workers.

    Key guidance includes:

    • Keep your compatibility date current and enable nodejs_compat — Ensure you have access to the latest runtime features and Node.js built-in modules.
    {
      "name": "my-worker",
      "main": "src/index.ts",
      // Set this to today's date
      "compatibility_date": "2026-02-20",
      "compatibility_flags": ["nodejs_compat"],
    }
    • Generate binding types with wrangler types — Never hand-write your Env interface. Let Wrangler generate it from your actual configuration to catch mismatches at compile time.
    • Stream request and response bodies — Avoid buffering large payloads in memory. Use TransformStream and pipeTo to stay within the 128 MB memory limit and improve time-to-first-byte.
    • Use bindings, not REST APIs — Bindings to KV, R2, D1, Queues, and other Cloudflare services are direct, in-process references with no network hop and no authentication overhead.
    • Use Queues and Workflows for background work — Move long-running or retriable tasks out of the critical request path. Use Queues for simple fan-out and buffering, and Workflows for multi-step durable processes.
    • Enable Workers Logs and Traces — Configure observability before deploying to production so you have data when you need to debug.
    • Avoid global mutable state — Workers reuse isolates across requests. Storing request-scoped data in module-level variables causes cross-request data leaks.
    • Always await or waitUntil your Promises — Floating promises cause silent bugs and dropped work.
    • Use Web Crypto for secure token generation — Never use Math.random() for security-sensitive operations.
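
    As a quick illustration of the last two points, here is a minimal sketch (the TOKENS KV binding is made up for the example):

    JavaScript
    export default {
      async fetch(request, env, ctx) {
        // Web Crypto (not Math.random()) for security-sensitive values
        const token = crypto.randomUUID();

        // Defer non-critical work so it does not block the response
        ctx.waitUntil(env.TOKENS.put(`token:${token}`, "issued"));

        return new Response(token);
      },
    };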

    To learn more, refer to Workers Best Practices.

  1. Disclaimer: Please note that v5.0.0-beta.1 is in Beta and we are still testing it for stability.

    Full Changelog: v4.3.1...v5.0.0-beta.1

    In this release, you'll see a large number of breaking changes. This is primarily due to changes in the OpenAPI definitions that our libraries are based on, and to updates in the codegen we rely on to read those OpenAPI definitions and produce our SDK libraries. As the codegen evolves and improves, so do our code bases.

    There may be changes that are not captured in this changelog. Feel free to open an issue to report any inaccuracies, and we will make sure they are captured in the changelog before the v5.0.0 release.

    Most of the breaking changes below are caused by improvements to the accuracy of the base OpenAPI schemas, which sometimes translate to breaking changes in downstream clients that depend on those schemas.

    Please read through the list of changes below and the migration guide before moving to this version - this will help you understand any downstream or upstream issues it may cause in your environments.

    Breaking Changes

    The following resources have breaking changes. See the v5 Migration Guide for detailed migration instructions.

    • abusereports
    • acm.totaltls
    • apigateway.configurations
    • cloudforceone.threatevents
    • d1.database
    • intel.indicatorfeeds
    • logpush.edge
    • origintlsclientauth.hostnames
    • queues.consumers
    • radar.bgp
    • rulesets.rules
    • schemavalidation.schemas
    • snippets
    • zerotrust.dlp
    • zerotrust.networks

    Features

    New API Resources

    • abusereports - Abuse report management
    • abusereports.mitigations - Abuse report mitigation actions
    • ai.tomarkdown - AI-powered markdown conversion
    • aigateway.dynamicrouting - AI Gateway dynamic routing configuration
    • aigateway.providerconfigs - AI Gateway provider configurations
    • aisearch - AI-powered search functionality
    • aisearch.instances - AI Search instance management
    • aisearch.tokens - AI Search authentication tokens
    • alerting.silences - Alert silence management
    • brandprotection.logomatches - Brand protection logo match detection
    • brandprotection.logos - Brand protection logo management
    • brandprotection.matches - Brand protection match results
    • brandprotection.queries - Brand protection query management
    • cloudforceone.binarystorage - CloudForce One binary storage
    • connectivity.directory - Connectivity directory services
    • d1.database - D1 database management
    • diagnostics.endpointhealthchecks - Endpoint health check diagnostics
    • fraud - Fraud detection and prevention
    • iam.sso - IAM Single Sign-On configuration
    • loadbalancers.monitorgroups - Load balancer monitor groups
    • organizations - Organization management
    • organizations.organizationprofile - Organization profile settings
    • origintlsclientauth.hostnamecertificates - Origin TLS client auth hostname certificates
    • origintlsclientauth.hostnames - Origin TLS client auth hostnames
    • origintlsclientauth.zonecertificates - Origin TLS client auth zone certificates
    • pipelines - Data pipeline management
    • pipelines.sinks - Pipeline sink configurations
    • pipelines.streams - Pipeline stream configurations
    • queues.subscriptions - Queue subscription management
    • r2datacatalog - R2 Data Catalog integration
    • r2datacatalog.credentials - R2 Data Catalog credentials
    • r2datacatalog.maintenanceconfigs - R2 Data Catalog maintenance configurations
    • r2datacatalog.namespaces - R2 Data Catalog namespaces
    • radar.bots - Radar bot analytics
    • radar.ct - Radar certificate transparency data
    • radar.geolocations - Radar geolocation data
    • realtimekit.activesession - Real-time Kit active session management
    • realtimekit.analytics - Real-time Kit analytics
    • realtimekit.apps - Real-time Kit application management
    • realtimekit.livestreams - Real-time Kit live streaming
    • realtimekit.meetings - Real-time Kit meeting management
    • realtimekit.presets - Real-time Kit preset configurations
    • realtimekit.recordings - Real-time Kit recording management
    • realtimekit.sessions - Real-time Kit session management
    • realtimekit.webhooks - Real-time Kit webhook configurations
    • tokenvalidation.configuration - Token validation configuration
    • tokenvalidation.rules - Token validation rules
    • workers.beta - Workers beta features

    New Endpoints (Existing Resources)

    acm.totaltls

    • edit()
    • update()

    cloudforceone.threatevents

    • list()

    contentscanning

    • create()
    • get()
    • update()

    dns.records

    • scan_list()
    • scan_review()
    • scan_trigger()

    intel.indicatorfeeds

    • create()
    • delete()
    • list()

    leakedcredentialchecks.detections

    • get()

    queues.consumers

    • list()

    radar.ai

    • summary()
    • timeseries()
    • timeseries_groups()

    radar.bgp

    • changes()
    • snapshot()

    workers.subdomains

    • delete()

    zerotrust.networks

    • create()
    • delete()
    • edit()
    • get()
    • list()

    General Fixes and Improvements

    Type System & Compatibility

    • Type inference improvements: Allow Pyright to properly infer TypedDict types within SequenceNotStr
    • Type completeness: Add missing types to method arguments and response models
    • Pydantic compatibility: Ensure compatibility with Pydantic versions prior to 2.8.0 when using additional fields

    Request/Response Handling

    • Multipart form data: Correctly handle sending multipart/form-data requests with JSON data
    • Header handling: Do not send headers with default values set to omit
    • GET request headers: Don't send Content-Type header on GET requests
    • Response body model accuracy: Broad improvements to the correctness of models

    Parsing & Data Processing

    • Discriminated unions: Correctly handle nested discriminated unions in response parsing
    • Extra field types: Parse extra field types correctly
    • Empty metadata: Ignore empty metadata fields during parsing
    • Singularization rules: Update resource name singularization rules for better consistency
  1. We're excited to announce GLM-4.7-Flash on Workers AI, a fast and efficient text generation model optimized for multilingual dialogue and instruction-following tasks, along with the brand-new @cloudflare/tanstack-ai package and workers-ai-provider v3.1.1.

    You can now run AI agents entirely on Cloudflare. With GLM-4.7-Flash's multi-turn tool calling support, plus full compatibility with TanStack AI and the Vercel AI SDK, you have everything you need to build agentic applications that run completely at the edge.

    GLM-4.7-Flash — Multilingual Text Generation Model

    @cf/zai-org/glm-4.7-flash is a multilingual model with a 131,072 token context window, making it ideal for long-form content generation, complex reasoning tasks, and multilingual applications.

    Key Features and Use Cases:

    • Multi-turn Tool Calling for Agents: Build AI agents that can call functions and tools across multiple conversation turns
    • Multilingual Support: Built to handle content generation in multiple languages effectively
    • Large Context Window: 131,072 tokens for long-form writing, complex reasoning, and processing long documents
    • Fast Inference: Optimized for low-latency responses in chatbots and virtual assistants
    • Instruction Following: Excellent at following complex instructions for code generation and structured tasks

    Use GLM-4.7-Flash through the Workers AI binding (env.AI.run()), the REST API at /run or /v1/chat/completions, AI Gateway, or via workers-ai-provider for the Vercel AI SDK.
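
    For example, a Worker with an AI binding named AI can call the model like this (a minimal sketch):

    JavaScript
    export default {
      async fetch(request, env) {
        const result = await env.AI.run("@cf/zai-org/glm-4.7-flash", {
          messages: [
            { role: "system", content: "You are a concise assistant." },
            { role: "user", content: "Explain Workers AI in one sentence." },
          ],
        });
        return Response.json(result);
      },
    };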

    Pricing is available on the model page or pricing page.

    @cloudflare/tanstack-ai v0.1.1 — TanStack AI adapters for Workers AI and AI Gateway

    We've released @cloudflare/tanstack-ai, a new package that brings Workers AI and AI Gateway support to TanStack AI. This provides a framework-agnostic alternative for developers who prefer TanStack's approach to building AI applications.

    Workers AI adapters support four configuration modes — plain binding (env.AI), plain REST, AI Gateway binding (env.AI.gateway(id)), and AI Gateway REST — across all capabilities:

    • Chat (createWorkersAiChat) — Streaming chat completions with tool calling, structured output, and reasoning text streaming.
    • Image generation (createWorkersAiImage) — Text-to-image models.
    • Transcription (createWorkersAiTranscription) — Speech-to-text.
    • Text-to-speech (createWorkersAiTts) — Audio generation.
    • Summarization (createWorkersAiSummarize) — Text summarization.

    AI Gateway adapters route requests from third-party providers — OpenAI, Anthropic, Gemini, Grok, and OpenRouter — through Cloudflare AI Gateway for caching, rate limiting, and unified billing.

    To get started:

    Terminal window
    npm install @cloudflare/tanstack-ai @tanstack/ai

    workers-ai-provider v3.1.1 — transcription, speech, reranking, and reliability

    The Workers AI provider for the Vercel AI SDK now supports three new capabilities beyond chat and image generation:

    • Transcription (provider.transcription(model)) — Speech-to-text with automatic handling of model-specific input formats across binding and REST paths.
    • Text-to-speech (provider.speech(model)) — Audio generation with support for voice and speed options.
    • Reranking (provider.reranking(model)) — Document reranking for RAG pipelines and search result ordering.
    TypeScript
    import { createWorkersAI } from "workers-ai-provider";
    import {
      experimental_transcribe,
      experimental_generateSpeech,
      rerank,
    } from "ai";

    const workersai = createWorkersAI({ binding: env.AI });

    const transcript = await experimental_transcribe({
      model: workersai.transcription("@cf/openai/whisper-large-v3-turbo"),
      audio: audioData,
      mediaType: "audio/wav",
    });

    const speech = await experimental_generateSpeech({
      model: workersai.speech("@cf/deepgram/aura-1"),
      text: "Hello world",
      voice: "asteria",
    });

    const ranked = await rerank({
      model: workersai.reranking("@cf/baai/bge-reranker-base"),
      query: "What is machine learning?",
      documents: ["ML is a branch of AI.", "The weather is sunny."],
    });

    This release also includes a comprehensive reliability overhaul (v3.0.5):

    • Fixed streaming — Responses now stream token-by-token instead of buffering all chunks, using a proper TransformStream pipeline with backpressure.
    • Fixed tool calling — Resolved issues with tool call ID sanitization, conversation history preservation, and a heuristic that silently fell back to non-streaming mode when tools were defined.
    • Premature stream termination detection — Streams that end unexpectedly now report finishReason: "error" instead of silently reporting "stop".
    • AI Search support — Added createAISearch as the canonical export (renamed from AutoRAG). createAutoRAG still works with a deprecation warning.

    To upgrade:

    Terminal window
    npm install workers-ai-provider@latest ai

    Resources

  1. Workers VPC now supports Cloudflare Origin CA certificates when connecting to your private services over HTTPS. Previously, Workers VPC only trusted certificates issued by publicly trusted certificate authorities (for example, Let's Encrypt, DigiCert).

    With this change, you can use free Cloudflare Origin CA certificates on your origin servers within private networks and connect to them from Workers VPC using the https scheme. This is useful for encrypting traffic between the tunnel and your service without needing to provision certificates from a public CA.

    For more information, refer to Supported TLS certificates.

  1. Cloudflare's network now supports real-time content conversion at the source, for enabled zones using content negotiation headers. When AI systems request pages from any website that uses Cloudflare and has Markdown for Agents enabled, they can express the preference for text/markdown in the request: our network will automatically and efficiently convert the HTML to markdown, when possible, on the fly.

    Here is a curl example with the Accept negotiation header requesting this page from our developer documentation:

    Terminal window
    curl https://developers.cloudflare.com/fundamentals/reference/markdown-for-agents/ \
    -H "Accept: text/markdown"

    The response to this request is now formatted in markdown:

    HTTP/2 200
    date: Wed, 11 Feb 2026 11:44:48 GMT
    content-type: text/markdown; charset=utf-8
    content-length: 2899
    vary: accept
    x-markdown-tokens: 725
    content-signal: ai-train=yes, search=yes, ai-input=yes
    ---
    title: Markdown for Agents · Cloudflare Agents docs
    ---
    ## What is Markdown for Agents
    Markdown has quickly become the lingua franca for agents and AI systems
    as a whole. The format’s explicit structure makes it ideal for AI processing,
    ultimately resulting in better results while minimizing token waste.
    ...

    Refer to our developer documentation and our blog announcement for more details.

  1. Radar now includes content type insights for AI bot and crawler traffic. The new content_type dimension and filter shows the distribution of content types returned to AI crawlers, grouped by MIME type category.

    The content type dimension and filter are available via the following API endpoints:

    Content type categories:

    • HTML - Web pages (text/html)
    • Images - All image formats (image/*)
    • JSON - JSON data and API responses (application/json, *+json)
    • JavaScript - Scripts (application/javascript, text/javascript)
    • CSS - Stylesheets (text/css)
    • Plain Text - Unformatted text (text/plain)
    • Fonts - Web fonts (font/*, application/font-*)
    • XML - XML documents and feeds (text/xml, application/xml, application/rss+xml, application/atom+xml)
    • YAML - Configuration files (text/yaml, application/yaml)
    • Video - Video content and streaming (video/*, application/ogg, *mpegurl)
    • Audio - Audio content (audio/*)
    • Markdown - Markdown documents (text/markdown)
    • Documents - PDFs, Office documents, ePub, CSV (application/pdf, application/msword, text/csv)
    • Binary - Executables, archives, WebAssembly (application/octet-stream, application/zip, application/wasm)
    • Serialization - Binary API formats (application/protobuf, application/grpc, application/msgpack)
    • Other - All other content types

    Additionally, individual bot information pages now display content type distribution for AI crawlers that exist in both the Verified Bots and AI Bots datasets.

    Screenshot of the Content Type Distribution chart on the AI Insights page

    Check out the AI Insights page to explore the data.

    We have significantly upgraded our Logo Matching capabilities within Brand Protection. While matching was previously limited to near-exact (approximately 100%) matches, users can now detect a wider range of brand assets through a redesigned matching model and UI.

    What's new

    • Configurable match thresholds: Users can set a minimum match score (starting at 75%) when creating a logo query to capture subtle variations or high-quality impersonations.
    • Visual match scores: See the exact match percentage directly in the results table, highlighted with color-coded lozenges to indicate severity.
    • Direct logo previews: Available in the Cloudflare dashboard — similar to string matches — to verify infringements at a glance.

    Key benefits

    • Expose sophisticated impersonators who use slightly altered logos to bypass basic detection filters.
    • Faster triage of the most relevant threats immediately using visual indicators, reducing the time spent manually reviewing matches.

    Ready to protect your visual identity? Learn more in our Brand Protection documentation.

  1. In January 2025, we announced the launch of the new Terraform v5 Provider. We greatly appreciate the proactive engagement and valuable feedback from the Cloudflare community following the v5 release. In response, we have established a consistent and rapid 2-3 week cadence for releasing targeted improvements, demonstrating our commitment to stability and reliability.

    With the help of the community, we have marked a growing number of resources as stable, and that list continues to grow with every release. The most used resources are on track to be stable by the end of March 2026, when we will also be releasing a new migration tool to help you migrate from v4 to v5 with ease.

    This release brings new capabilities for AI Search, enhanced Workers Script placement controls, and numerous bug fixes based on community feedback. We have also begun laying foundational work to improve the v4 to v5 migration process. Stay tuned for more details as we approach the March 2026 release timeline.

    Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.

    Features

    • ai_search_instance: add data source for querying AI Search instances
    • ai_search_token: add data source for querying AI Search tokens
    • account: add support for tenant unit management with new unit field
    • account: add automatic mapping from managed_by.parent_org_id to unit.id
    • authenticated_origin_pulls_certificate: add data source for querying authenticated origin pull certificates
    • authenticated_origin_pulls_hostname_certificate: add data source for querying hostname-specific authenticated origin pull certificates
    • authenticated_origin_pulls_settings: add data source for querying authenticated origin pull settings
    • workers_kv: add value field to data source to retrieve KV values directly
    • workers_script: add script field to data source to retrieve script content
    • workers_script: add support for simple rate limit binding
    • workers_script: add support for targeted placement mode with placement.target array for specifying placement targets (region, hostname, host)
    • workers_script: add placement_mode and placement_status computed fields
    • zero_trust_dex_test: add data source with filter support for finding specific tests
    • zero_trust_dlp_predefined_profile: add enabled_entries field for flexible entry management

    Bug Fixes

    • account: map managed_by.parent_org_id to unit.id in unmarshall and add acceptance tests
    • authenticated_origin_pulls_certificate: add certificate normalization to prevent drift
    • authenticated_origin_pulls: handle array response and implement full lifecycle
    • authenticated_origin_pulls_hostname_certificate: fix resource and tests
    • cloudforce_one_request_message: use correct request_id field instead of id in API calls
    • dns_zone_transfers_incoming: use correct zone_id field instead of id in API calls
    • dns_zone_transfers_outgoing: use correct zone_id field instead of id in API calls
    • email_routing_settings: use correct zone_id field instead of id in API calls
    • hyperdrive_config: add proper handling for write-only fields to prevent state drift
    • hyperdrive_config: add normalization for empty mtls objects to prevent unnecessary diffs
    • magic_network_monitoring_rule: use correct account_id field instead of id in API calls
    • mtls_certificates: fix resource and test
    • pages_project: revert build_config to computed optional
    • stream_key: use correct account_id field instead of id in API calls
    • total_tls: use upsert pattern for singleton zone setting
    • waiting_room_rules: use correct waiting_room_id field instead of id in API calls
    • workers_script: add support for placement mode/status
    • zero_trust_access_application: update v4 version on migration tests
    • zero_trust_device_posture_rule: update tests to match API
    • zero_trust_dlp_integration_entry: use correct entry_id field instead of id in API calls
    • zero_trust_dlp_predefined_entry: use correct entry_id field instead of id in API calls
    • zero_trust_organization: fix plan issues

    Chores

    • add state upgraders to 95+ resources to lay the foundation for replacing Grit (still under active development)
    • certificate_pack: add state migration handler for SDKv2 to Framework conversion
    • custom_hostname_fallback_origin: add comprehensive lifecycle test and migration support
    • dns_record: add state migration handler for SDKv2 to Framework conversion
    • leaked_credential_check: add import functionality and tests
    • load_balancer_pool: add state migration handler with detection for v4 vs v5 format
    • pages_project: add state migration handlers
    • tiered_cache: add state migration handlers
    • zero_trust_dlp_predefined_profile: deprecate entries field in favor of enabled_entries

    For more information

  1. Workers no longer have a limit of 1000 subrequests per invocation, allowing you to make more fetch() calls or requests to Cloudflare services on every incoming request. This is especially important for long-running Workers requests, such as open websockets on Durable Objects or long-running Workflows, as these could often exceed this limit and error.

    By default, Workers on paid plans are now limited to 10,000 subrequests per invocation, but this limit can be increased up to 10 million by setting the new subrequests limit in your Wrangler configuration file.

    {
      "limits": {
        "subrequests": 50000,
      },
    }

    Workers on the free plan remain limited to 50 external subrequests and 1000 subrequests to Cloudflare services per invocation.

    To protect against runaway code or unexpected costs, you can also set a lower limit for both subrequests and CPU usage.

    {
      "limits": {
        "subrequests": 10,
        "cpu_ms": 1000,
      },
    }

    For more information, refer to the Wrangler configuration documentation for limits and subrequest limits.

    The Cloudflare Vite plugin now integrates seamlessly with @vitejs/plugin-rsc, the official Vite plugin for React Server Components.

    A childEnvironments option has been added to the plugin config to enable using multiple environments within a single Worker. The parent environment can then import modules from a child environment in order to access a separate module graph. For a typical RSC use case, the plugin might be configured as in the following example:

    vite.config.ts
    import { defineConfig } from "vite";
    import { cloudflare } from "@cloudflare/vite-plugin";

    export default defineConfig({
      plugins: [
        cloudflare({
          viteEnvironment: {
            name: "rsc",
            childEnvironments: ["ssr"],
          },
        }),
      ],
    });

    @vitejs/plugin-rsc provides the lower level functionality that frameworks, such as React Router, build upon. The GitHub repository includes a basic Cloudflare example.

  1. This week’s release changes the rule action from BLOCK to Disabled for Anomaly:Header:User-Agent - Fake Google Bot.

    | Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | Cloudflare Managed Ruleset | | N/A | Anomaly:Header:User-Agent - Fake Google Bot | Enabled | Disabled | We are changing the action for this rule from BLOCK to Disabled |
  1. | Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | 2026-02-10 | 2026-02-16 | Log | N/A | | Zimbra - Local File Inclusion - CVE:CVE-2025-68645 | This is a new detection. |
    | 2026-02-10 | 2026-02-16 | Log | N/A | | Vite - WASM Import Path Traversal - CVE:CVE-2025-31125 | This is a new detection. |
  1. The latest release of the Agents SDK brings readonly connections, MCP protocol and security improvements, x402 payment protocol v2 migration, and the ability to customize OAuth for MCP server connections.

    Readonly connections

    Agents can now restrict WebSocket clients to read-only access, preventing them from modifying agent state. This is useful for dashboards, spectator views, or any scenario where clients should observe but not mutate.

    New hooks: shouldConnectionBeReadonly, setConnectionReadonly, isConnectionReadonly. Readonly connections block both client-side setState() and mutating @callable() methods, and the readonly flag survives hibernation.

    JavaScript
    class MyAgent extends Agent {
      shouldConnectionBeReadonly(connection) {
        // Make spectators readonly
        return connection.url.includes("spectator");
      }
    }
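
    The other two hooks can be used imperatively. A minimal sketch, assuming setConnectionReadonly takes the connection and a boolean (the spectator check here is only an example):

    JavaScript
    import { Agent } from "agents";

    class MyAgent extends Agent {
      async onConnect(connection, ctx) {
        // Example criterion: mark ?spectator connections as readonly
        const spectator = new URL(ctx.request.url).searchParams.has("spectator");
        this.setConnectionReadonly(connection, spectator);

        // The flag can be checked at any time
        console.log("readonly?", this.isConnectionReadonly(connection));
      }
    }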

    Custom MCP OAuth providers

    The new createMcpOAuthProvider method on the Agent class allows subclasses to override the default OAuth provider used when connecting to MCP servers. This enables custom authentication strategies such as pre-registered client credentials or mTLS, beyond the built-in dynamic client registration.

    JavaScript
    class MyAgent extends Agent {
      createMcpOAuthProvider(callbackUrl) {
        return new MyCustomOAuthProvider(this.ctx.storage, this.name, callbackUrl);
      }
    }

    MCP SDK upgrade to 1.26.0

    Upgraded the MCP SDK to 1.26.0 to prevent cross-client response leakage. Stateless MCP servers should now create a new McpServer instance per request instead of sharing a single instance. This version of the MCP SDK adds a guard that prevents connecting to a Server instance that has already been connected to a transport, so developers who declare their McpServer instance as a global variable will need to modify their code.

    MCP OAuth callback URL security fix

    Added callbackPath option to addMcpServer to prevent instance name leakage in MCP OAuth callback URLs. When sendIdentityOnConnect is false, callbackPath is now required — the default callback URL would expose the instance name, undermining the security intent. Also fixes callback request detection to match via the state parameter instead of a loose /callback URL substring check, enabling custom callback paths.

    Deprecate onStateUpdate in favor of onStateChanged

    onStateChanged is a drop-in rename of onStateUpdate (same signature, same behavior). onStateUpdate still works but emits a one-time console warning per class. validateStateChange rejections now propagate a CF_AGENT_STATE_ERROR message back to the client.
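
    A minimal sketch, assuming the same (state, source) signature as onStateUpdate, where source is either "server" or the originating client connection:

    JavaScript
    import { Agent } from "agents";

    class MyAgent extends Agent {
      onStateChanged(state, source) {
        // Same behavior as onStateUpdate, without the deprecation warning
        const origin = source === "server" ? "the server" : "a client";
        console.log(`State changed by ${origin}`, state);
      }
    }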

    x402 v2 migration

    Migrated the x402 MCP payment integration from the legacy x402 package to @x402/core and @x402/evm v2.

    Breaking changes for x402 users:

    • Peer dependencies changed: replace x402 with @x402/core and @x402/evm
    • PaymentRequirements type now uses v2 fields (e.g. amount instead of maxAmountRequired)
    • X402ClientConfig.account type changed from viem.Account to ClientEvmSigner (structurally compatible with privateKeyToAccount())
    Terminal window
    npm uninstall x402
    npm install @x402/core @x402/evm

    Network identifiers now accept both legacy names and CAIP-2 format:

    TypeScript
    // Legacy name (auto-converted)
    {
      network: "base-sepolia",
    }

    // CAIP-2 format (preferred)
    {
      network: "eip155:84532",
    }

    Other x402 changes:

    • X402ClientConfig.network is now optional — the client auto-selects from available payment requirements
    • Server-side lazy initialization: facilitator connection is deferred until the first paid tool invocation
    • Payment tokens support both v2 (PAYMENT-SIGNATURE) and v1 (X-PAYMENT) HTTP headers
    • Added normalizeNetwork export for converting legacy network names to CAIP-2 format
    • Re-exports PaymentRequirements, PaymentRequired, Network, FacilitatorConfig, and ClientEvmSigner from agents/x402

    Other improvements

    • Fix useAgent and AgentClient crashing when using basePath routing
    • CORS handling delegated to partyserver's native support (simpler, more reliable)
    • Client-side onStateUpdateError callback for handling rejected state updates

    Upgrade

    To update to the latest version:

    Terminal window
    npm i agents@latest
  1. The Sandbox SDK now supports PTY (pseudo-terminal) passthrough, enabling browser-based terminal UIs to connect to sandbox shells via WebSocket.

    sandbox.terminal(request)

    The new terminal() method proxies a WebSocket upgrade to the container's PTY endpoint, with output buffering for replay on reconnect.

    JavaScript
    // Worker: proxy WebSocket to container terminal
    return sandbox.terminal(request, { cols: 80, rows: 24 });

    Multiple terminals per sandbox

    Each session can have its own terminal with an isolated working directory and environment, so users can run separate shells side-by-side in the same container.

    JavaScript
    // Multiple isolated terminals in the same sandbox
    const dev = await sandbox.getSession("dev");
    return dev.terminal(request);

    xterm.js addon

    The new @cloudflare/sandbox/xterm export provides a SandboxAddon for xterm.js with automatic reconnection (exponential backoff + jitter), buffered output replay, and resize forwarding.

    JavaScript
    import { SandboxAddon } from "@cloudflare/sandbox/xterm";
    const addon = new SandboxAddon({
      getWebSocketUrl: ({ sandboxId, origin }) =>
        `${origin}/ws/terminal?id=${sandboxId}`,
      onStateChange: (state, error) => updateUI(state),
    });

    terminal.loadAddon(addon);
    addon.connect({ sandboxId: "my-sandbox" });

    Upgrade

    To update to the latest version:

    Terminal window
    npm i @cloudflare/sandbox@latest
  1. AI Crawl Control metrics have been enhanced with new views, improved filtering, and better data visualization.

    AI Crawl Control path patterns

    Path pattern grouping

    • In the Metrics tab > Most popular paths table, use the new Patterns tab that groups requests by URI pattern (/blog/*, /api/v1/*, /docs/*) to identify which site areas crawlers target most. Refer to the screenshot above.

    Enhanced referral analytics

    • Destination patterns show which site areas receive AI-driven referral traffic.
    • In the Metrics tab, a new Referrals over time chart shows trends by operator or source.

    Data transfer metrics

    • In the Metrics tab > Allowed requests over time chart, toggle Bytes to show bandwidth consumption.
    • In the Crawlers tab, a new Bytes Transferred column shows bandwidth per crawler.

    Image exports

    • Export charts and tables as images for reports and presentations.

    Learn more about analyzing AI traffic.

  1. Get your content updates into AI Search faster and avoid a full rescan when you do not need it.

    Reindex individual files without a full sync

    Updated a file or need to retry one that errored? When you know exactly which file changed, you can now reindex it directly instead of rescanning your entire data source.

    Go to Overview > Indexed Items and select the sync icon next to any file to reindex it immediately.

    Sync individual files from Indexed Items

    Crawl only the sitemap you need

    By default, AI Search crawls all sitemaps listed in your robots.txt, up to the maximum files per index limit. If your site has multiple sitemaps but you only want to index a specific set, you can now specify a single sitemap URL to limit what the crawler visits.

    For example, if your robots.txt lists both blog-sitemap.xml and docs-sitemap.xml, you can specify just https://example.com/docs-sitemap.xml to index only your documentation.

    Configure your selection anytime in Settings > Parsing options > Specific sitemaps, then trigger a sync to apply the changes.

    Specify a sitemap in Parsing options

    Learn more about indexing controls and website crawling configuration.

  1. R2 SQL now supports five approximate aggregation functions for fast analysis of large datasets. These functions trade minor precision for improved performance on high-cardinality data.

    New functions

    • APPROX_PERCENTILE_CONT(column, percentile) — Returns the approximate value at a given percentile (0.0 to 1.0). Works on integer and decimal columns.
    • APPROX_PERCENTILE_CONT_WITH_WEIGHT(column, weight, percentile) — Weighted percentile calculation where each row contributes proportionally to its weight column value.
    • APPROX_MEDIAN(column) — Returns the approximate median. Equivalent to APPROX_PERCENTILE_CONT(column, 0.5).
    • APPROX_DISTINCT(column) — Returns the approximate number of distinct values. Works on any column type.
    • APPROX_TOP_K(column, k) — Returns the k most frequent values with their counts as a JSON array.

    All functions support WHERE filters. All except APPROX_TOP_K support GROUP BY.

    Examples

    -- Percentile analysis on revenue data
    SELECT approx_percentile_cont(total_amount, 0.25),
           approx_percentile_cont(total_amount, 0.5),
           approx_percentile_cont(total_amount, 0.75)
    FROM my_namespace.sales_data

    -- Median per department
    SELECT department, approx_median(total_amount)
    FROM my_namespace.sales_data
    GROUP BY department

    -- Approximate distinct customers by region
    SELECT region, approx_distinct(customer_id)
    FROM my_namespace.sales_data
    GROUP BY region

    -- Top 5 most frequent departments
    SELECT approx_top_k(department, 5)
    FROM my_namespace.sales_data

    -- Combine approximate and standard aggregations
    SELECT COUNT(*),
           AVG(total_amount),
           approx_percentile_cont(total_amount, 0.5),
           approx_distinct(customer_id)
    FROM my_namespace.sales_data
    WHERE region = 'North'

    For the full syntax and additional examples, refer to the SQL reference.

  1. The Workers Observability dashboard has some major updates to make it easier to debug your application's issues and share findings with your team.

    Workers Observability dashboard showing events view with event details and share options

    You can now:

    • Create visualizations — Build charts from your Worker data directly in a Worker's Observability tab
    • Export data as JSON or CSV — Download logs and traces for offline analysis or to share with teammates
    • Share events and traces — Generate direct URLs to specific events, invocations, and traces that open standalone pages with full context
    • Customize table columns — Improved field picker to add, remove, and reorder columns in the events table
    • Expandable event details — Expand events inline to view full details without leaving the table
    • Keyboard shortcuts — Navigate the dashboard with hotkey support
    Workers Observability dashboard showing a P99 CPU time visualization grouped by outcome

    These updates are now live in the Cloudflare dashboard, both in a Worker's Observability tab and in the account-level Observability dashboard for a unified experience. To get started, go to Workers & Pages > select your Worker > Observability.

  1. New reference documentation is now available for AI Crawl Control:

    • GraphQL API reference — Query examples for crawler requests, top paths, referral traffic, and data transfer. Includes key filters for detection IDs, user agents, and referrer domains.
    • Bot reference — Detection IDs and user agents for major AI crawlers from OpenAI, Anthropic, Google, Meta, and others.
    • Worker templates — Deploy the x402 Payment-Gated Proxy to monetize crawler access or charge bots while letting humans through free.
  1. Cloudflare Queues is now part of the Workers free plan, offering guaranteed message delivery across up to 10,000 queues to either Cloudflare Workers or HTTP pull consumers. Every Cloudflare account now includes 10,000 operations per day across reads, writes, and deletes. For more details on how each operation is defined, refer to Queues pricing.

    All features of the existing Queues functionality are available on the free plan, including unlimited event subscriptions. Note, however, that the maximum retention period on the free tier is 24 hours rather than 14 days.
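
    As a quick illustration, a minimal producer and consumer in a single Worker might look like the sketch below (the MY_QUEUE binding and queue name are placeholders that assume a matching queues section in your Wrangler configuration):

    JavaScript
    export default {
      // Producer: send a message via the queue binding
      async fetch(request, env) {
        await env.MY_QUEUE.send({ url: request.url, receivedAt: Date.now() });
        return new Response("Enqueued", { status: 202 });
      },

      // Consumer: invoked with batches of messages from the same queue
      async queue(batch, env) {
        for (const message of batch.messages) {
          console.log("Processing", message.body);
          message.ack();
        }
      },
    };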

    If you are new to Cloudflare Queues, follow this guide or try one of our tutorials to get started.

    Cloudflare Workflows now automatically generates visual diagrams from your code.

    Your Workflow is parsed to provide a visual map of the Workflow structure, allowing you to:

    • Understand how steps connect and execute
    • Visualize loops and nested logic
    • Follow branching paths for conditional logic
    Example diagram

    You can collapse loops and nested logic to see the high-level flow, or expand them to see every step.

    Workflow diagrams are available in beta for all JavaScript and TypeScript Workflows. Find your Workflows in the Cloudflare dashboard to see their diagrams.

  1. The latest release of the Agents SDK brings first-class support for Cloudflare Workflows, synchronous state management, and new scheduling capabilities.

    Cloudflare Workflows integration

    Agents excel at real-time communication and state management. Workflows excel at durable execution. Together, they enable powerful patterns where Agents handle WebSocket connections while Workflows handle long-running tasks, retries, and human-in-the-loop flows.

    Use the new AgentWorkflow class to define workflows with typed access to your Agent:

    JavaScript
    import { AgentWorkflow } from "agents/workflows";

    export class ProcessingWorkflow extends AgentWorkflow {
      async run(event, step) {
        // Call Agent methods via RPC
        await this.agent.updateStatus(event.payload.taskId, "processing");

        // Non-durable: progress reporting to clients
        await this.reportProgress({ step: "process", percent: 0.5 });
        this.broadcastToClients({ type: "update", taskId: event.payload.taskId });

        // Durable via step: idempotent, won't repeat on retry
        await step.mergeAgentState({ taskProgress: 0.5 });
        const result = await step.do("process", async () => {
          return processData(event.payload.data);
        });

        await step.reportComplete(result);
        return result;
      }
    }

    Start workflows from your Agent with runWorkflow() and handle lifecycle events:

    JavaScript
    export class MyAgent extends Agent {
      async startTask(taskId, data) {
        const instanceId = await this.runWorkflow("PROCESSING_WORKFLOW", {
          taskId,
          data,
        });
        return { instanceId };
      }

      async onWorkflowProgress(workflowName, instanceId, progress) {
        this.broadcast(JSON.stringify({ type: "progress", progress }));
      }

      async onWorkflowComplete(workflowName, instanceId, result) {
        console.log(`Workflow ${instanceId} completed`);
      }

      async onWorkflowError(workflowName, instanceId, error) {
        console.error(`Workflow ${instanceId} failed:`, error);
      }
    }

    Key workflow methods on your Agent:

    • runWorkflow(workflowName, params, options?) — Start a workflow with optional metadata
    • getWorkflow(workflowId) / getWorkflows(criteria?) — Query workflows with cursor-based pagination
    • approveWorkflow(workflowId) / rejectWorkflow(workflowId) — Human-in-the-loop approval flows
    • pauseWorkflow(), resumeWorkflow(), terminateWorkflow() — Workflow control
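
    For example, a human-in-the-loop flow might resume a waiting workflow once a reviewer signs off (a minimal sketch; the onReviewerApproved method name is made up):

    JavaScript
    export class MyAgent extends Agent {
      async onReviewerApproved(workflowId) {
        // Signal approval for a workflow waiting on a human decision
        await this.approveWorkflow(workflowId);
      }
    }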

    Synchronous setState()

    State updates are now synchronous with a new validateStateChange() validation hook:

    JavaScript
    export class MyAgent extends Agent {
      validateStateChange(oldState, newState) {
        // Return false to reject the change
        if (newState.count < 0) return false;
        // Return modified state to transform
        return { ...newState, lastUpdated: Date.now() };
      }
    }

    scheduleEvery() for recurring tasks

    The new scheduleEvery() method enables fixed-interval recurring tasks with built-in overlap prevention:

    JavaScript
    // Run every 5 minutes
    await this.scheduleEvery("syncData", 5 * 60 * 1000, { source: "api" });

    Callable system improvements

    • Client-side RPC timeout — Set timeouts on callable method invocations
    • StreamingResponse.error(message) — Graceful stream error signaling
    • getCallableMethods() — Introspection API for discovering callable methods
    • Connection close handling — Pending calls are automatically rejected on disconnect
    JavaScript
    await agent.call("method", [args], {
      timeout: 5000,
      stream: { onChunk, onDone, onError },
    });

    Email and routing enhancements

    Secure email reply routing — Email replies are now secured with HMAC-SHA256 signed headers, preventing unauthorized routing of emails to agent instances.

    Routing improvements:

    • basePath option to bypass default URL construction for custom routing
    • Server-sent identity — Agents send name and agent type on connect
    • New onIdentity and onIdentityChange callbacks on the client
    JavaScript
    const agent = useAgent({
      basePath: "user",
      onIdentity: (name, agentType) => console.log(`Connected to ${name}`),
    });

    Upgrade

    To update to the latest version:

    Terminal window
    npm i agents@latest

    For the complete Workflows API reference and patterns, see Run Workflows.

  1. Local Uploads is now available in open beta. Enable it on your R2 bucket to improve upload performance when clients upload data from a different region than your bucket. With Local Uploads enabled, object data is written to storage infrastructure near the client, then asynchronously replicated to your bucket. The object is immediately accessible and remains strongly consistent throughout. Refer to How R2 works for details on how data is written to your bucket.

    In our tests, we observed up to 75% reduction in Time to Last Byte (TTLB) for upload requests when Local Uploads is enabled.

    Local Uploads latency comparison showing p50 TTLB dropping from around 2 seconds to 500ms after enabling Local Uploads

    This feature is ideal when:

    • Your users are globally distributed
    • Upload performance and reliability is critical to your application
    • You want to optimize write performance without changing your bucket's primary location

    To enable Local Uploads on your bucket, find Local Uploads in your bucket settings in the Cloudflare Dashboard, or run:

    Terminal window
    npx wrangler r2 bucket local-uploads enable <BUCKET_NAME>

    Enabling Local Uploads on a bucket is seamless: existing uploads will complete as expected and there’s no interruption to traffic. There is no additional cost to enable Local Uploads. Upload requests incur the standard Class A operation costs, the same as upload requests made without Local Uploads.
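
    Your upload code does not change. As a minimal sketch (the R2_BUCKET binding name and object key are illustrative), an object written with put() through an R2 binding is immediately readable afterwards, since objects remain strongly consistent:

    JavaScript
    // With Local Uploads enabled, the data is first written to storage near the
    // client and then replicated to the bucket asynchronously.
    await env.R2_BUCKET.put("uploads/photo.jpg", request.body);
    // The object is immediately accessible and strongly consistent.
    const object = await env.R2_BUCKET.get("uploads/photo.jpg");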

    For more information, refer to Local Uploads.

  1. Identifying threat actors can be challenging, because naming conventions often vary across the security industry. To simplify your research, Cloudflare Threat Events now include an Also known as field, providing a list of common aliases and industry-standard names for the groups we track.

    This new field is available in both the Cloudflare dashboard and via the API. In the dashboard, you can view these aliases by expanding the event details side panel (under the Attacker field) or by adding it as a column in your configurable table view.

    Key benefits

    • Easily map Cloudflare-tracked actors to the naming conventions used by other vendors without manual cross-referencing.
    • Quickly identify if a detected threat actor matches a group your team is already monitoring via other intelligence feeds.

    For more information on how to access this data, refer to the Threat Events API documentation.

  1. We have updated the Monitoring page to provide a more streamlined and insightful experience for administrators, improving both data visualization and dashboard accessibility.

    • Enhanced Visual Layout: Improved contrast and new stacked bar charts for clearer data visualization and trend analysis.
    • Improved Accessibility & Usability:
      • Widget Search: Added search functionality to multiple widgets, including Policies, Submitters, and Impersonation.
      • Actionable UI: All available actions are now accessible via dedicated buttons.
      • State Indicators: Improved UI states to clearly communicate loading, empty datasets, and error conditions.
    • Granular Data Breakdowns: New views for dispositions by month, malicious email details, link actions, and impersonations.

    This applies to all Email Security packages:

    • Advantage
    • Enterprise
    • Enterprise + PhishGuard
  1. This week’s release introduces new detections for CVE-2025-64459 and CVE-2025-24893.

    Key Findings

    • CVE-2025-64459: Django versions prior to 5.1.14, 5.2.8, and 4.2.26 are vulnerable to SQL injection via crafted dictionaries passed to QuerySet methods and the Q() class.
    • CVE-2025-24893: XWiki allows unauthenticated remote code execution through crafted requests to the SolrSearch endpoint, affecting the entire installation.
    | Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | Cloudflare Managed Ruleset | | N/A | XWiki - Remote Code Execution - CVE:CVE-2025-24893 2 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | Django SQLI - CVE:CVE-2025-64459 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | NoSQL, MongoDB - SQLi - Comparison - 2 | Block | Block | Rule metadata description refined. Detection unchanged. |
  1. The minimum cacheTtl parameter for Workers KV has been reduced from 60 seconds to 30 seconds. This change applies to both get() and getWithMetadata() methods.

    This reduction allows you to maintain more up-to-date cached data and have finer-grained control over cache behavior. Applications requiring faster data refresh rates can now configure cache durations as low as 30 seconds instead of the previous 60-second minimum.

    The cacheTtl parameter defines how long a KV result is cached at the global network location it is accessed from:

    JavaScript
    // Read with custom cache TTL
    const value = await env.NAMESPACE.get("my-key", {
      cacheTtl: 30, // Cache for minimum 30 seconds (previously 60)
    });
    // getWithMetadata also supports the reduced cache TTL
    const valueWithMetadata = await env.NAMESPACE.getWithMetadata("my-key", {
      cacheTtl: 30, // Cache for minimum 30 seconds
    });

    The default cache TTL remains unchanged at 60 seconds. Upgrade to the latest version of Wrangler to use a cacheTtl of 30 seconds.

    This change affects all KV read operations using the binding API. For more information, consult the Workers KV cache TTL documentation.

  1. Magic WAN and Magic Transit customers can use the Cloudflare dashboard to configure and manage BGP peering between their networks and their Magic routing table when using IPsec and GRE tunnel on-ramps (beta).

    Using BGP peering allows customers to:

    • Automate the process of adding or removing networks and subnets.
    • Take advantage of failure detection and session recovery features.

    With this functionality, customers can:

    • Establish an eBGP session between their devices and the Magic WAN / Magic Transit service when connected via IPsec and GRE tunnel on-ramps.
    • Secure the session with MD5 authentication to prevent misconfigurations.
    • Exchange routes dynamically between their devices and their Magic routing table.

    For configuration details, refer to:

  1. We have partnered with Black Forest Labs (BFL) again to bring their optimized FLUX.2 [klein] 9B model to Workers AI. This distilled model offers enhanced quality compared to the 4B variant, while maintaining cost-effective pricing. With a fixed 4-step inference process, Klein 9B is ideal for rapid prototyping and real-time applications where both speed and quality matter.

    Read the BFL blog to learn more about the model itself, or try it out yourself on our multimodal playground.

    Pricing documentation is available on the model page or pricing page.

    Workers AI platform specifics

    The model hosted on Workers AI is optimized for speed with a fixed 4-step inference process and supports up to 4 image inputs. Since this is a distilled model, the steps parameter is fixed at 4 and cannot be adjusted. Like FLUX.2 [dev] and FLUX.2 [klein] 4B, this image model uses multipart form data inputs, even if you just have a prompt.

    With the REST API, the multipart form data input looks like this:

    Terminal window
    curl --request POST \
    --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-klein-9b' \
    --header 'Authorization: Bearer {TOKEN}' \
    --header 'Content-Type: multipart/form-data' \
    --form 'prompt=a sunset at the alps' \
    --form width=1024 \
    --form height=1024

    With the Workers AI binding, you can use it as such:

    JavaScript
    const form = new FormData();
    form.append("prompt", "a sunset with a dog");
    form.append("width", "1024");
    form.append("height", "1024");
    // FormData doesn't expose its serialized body or boundary. Passing it to a
    // Request (or Response) constructor serializes it and generates the Content-Type
    // header with the boundary, which is required for the server to parse the multipart fields.
    const formResponse = new Response(form);
    const formStream = formResponse.body;
    const formContentType = formResponse.headers.get("content-type");
    const resp = await env.AI.run("@cf/black-forest-labs/flux-2-klein-9b", {
      multipart: {
        body: formStream,
        contentType: formContentType,
      },
    });

    The parameters you can send to the model are detailed below:

    Required Parameters

    • prompt (string) - Text description of the image to generate

    Optional Parameters

    • input_image_0 (string) - Binary image
    • input_image_1 (string) - Binary image
    • input_image_2 (string) - Binary image
    • input_image_3 (string) - Binary image
    • guidance (float) - Guidance scale for generation. Higher values follow the prompt more closely
    • width (integer) - Width of the image. Default: 1024. Range: 256-1920
    • height (integer) - Height of the image. Default: 768. Range: 256-1920
    • seed (integer) - Seed for reproducibility

    Note: Since this is a distilled model, the steps parameter is fixed at 4 and cannot be adjusted.

    Multi-reference images

    The FLUX.2 klein-9b model supports generating images based on reference images, just like FLUX.2 [dev] and FLUX.2 [klein] 4B. You can use this feature to apply the style of one image to another, add a new character to an image, or iterate on past generated images. You would use it with the same multipart form data structure, with the input images in binary. The model supports up to 4 input images.

    For the prompt, you can reference the images by index, like "take the subject of image 1 and style it like image 0", or use natural language like "place the dog beside the woman".

    You must name the input parameters input_image_0, input_image_1, input_image_2, and input_image_3 for them to work correctly. All input images must be smaller than 512x512.

    Terminal window
    curl --request POST \
    --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-klein-9b' \
    --header 'Authorization: Bearer {TOKEN}' \
    --header 'Content-Type: multipart/form-data' \
    --form 'prompt=take the subject of image 1 and style it like image 0' \
    --form input_image_0=@/Users/johndoe/Desktop/icedoutkeanu.png \
    --form input_image_1=@/Users/johndoe/Desktop/me.png \
    --form width=1024 \
    --form height=1024

    With the Workers AI binding:

    JavaScript
    // Helper function to convert a ReadableStream to a Blob
    async function streamToBlob(stream, contentType) {
      const reader = stream.getReader();
      const chunks = [];
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        chunks.push(value);
      }
      return new Blob(chunks, { type: contentType });
    }
    const image0 = await fetch("http://image-url");
    const image1 = await fetch("http://image-url");
    const form = new FormData();
    const image_blob0 = await streamToBlob(image0.body, "image/png");
    const image_blob1 = await streamToBlob(image1.body, "image/png");
    form.append("input_image_0", image_blob0);
    form.append("input_image_1", image_blob1);
    form.append("prompt", "take the subject of image 1 and style it like image 0");
    // FormData doesn't expose its serialized body or boundary. Passing it to a
    // Request (or Response) constructor serializes it and generates the Content-Type
    // header with the boundary, which is required for the server to parse the multipart fields.
    const formResponse = new Response(form);
    const formStream = formResponse.body;
    const formContentType = formResponse.headers.get("content-type");
    const resp = await env.AI.run("@cf/black-forest-labs/flux-2-klein-9b", {
      multipart: {
        body: formStream,
        contentType: formContentType,
      },
    });
  1. A new Beta release for the Windows WARP client is now available on the beta releases downloads page.

    This release contains minor fixes, improvements, and new features.

    Changes and improvements

    • Improvements to multi-user mode. Fixed an issue where, when switching from a pre-login registration to a user registration, the Mobile Device Management (MDM) configuration association could be lost.
    • Added a new feature to manage NetBIOS over TCP/IP functionality on the Windows client. NetBIOS over TCP/IP on the Windows client is now disabled by default and can be enabled in device profile settings.
    • Fixed an issue causing failure of the local network exclusion feature when configured with a timeout of 0.
    • Improved the Windows client certificate posture check to ensure logged results come from checks that run after users log in.
    • Improved the accuracy of device colocation information reported in the Cloudflare One dashboard.

    Known issues

    • For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends that users experiencing these issues upgrade to Windows 11 24H2 KB5062553 or later to resolve them.

    • Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.

    • DNS resolution may be broken when the following conditions are all true:

      • WARP is in Secure Web Gateway without DNS filtering (tunnel-only) mode.
      • A custom DNS server address is configured on the primary network adapter.
      • The custom DNS server address on the primary network adapter is changed while WARP is connected.

      To work around this issue, reconnect the WARP client by toggling off and back on.

  1. A new Beta release for the macOS WARP client is now available on the beta releases downloads page.

    This release contains minor fixes and improvements.

    Changes and improvements

    • Fixed an issue causing failure of the local network exclusion feature when configured with a timeout of 0.
    • Improved the accuracy of device colocation information reported in the Cloudflare One dashboard.
  1. Cloudflare source IPs are the IP addresses used by Cloudflare services (such as Load Balancing, Gateway, and Browser Isolation) when sending traffic to your private networks.

    For customers using legacy mode routing, traffic to private networks is sourced from public Cloudflare IPs, which may cause IP conflicts. For customers using Unified Routing mode (beta), traffic to private networks is sourced from a dedicated, non-Internet-routable private IPv4 range to ensure:

    • Symmetric routing over private network connections
    • Proper firewall state preservation
    • Private traffic stays on secure paths

    Key details:

    • IPv4: Sourced from 100.64.0.0/12 by default, configurable to any /12 CIDR
    • IPv6: Sourced from 2606:4700:cf1:5000::/64 (not configurable)
    • Affected connectors: GRE, IPsec, CNI, WARP Connector, and WARP Client (Cloudflare Tunnel is not affected)

    Configuring Cloudflare source IPs requires Unified Routing (beta) and the Cloudflare One Networks Write permission.

    For configuration details, refer to Configure Cloudflare source IPs.

    You can now set the timezone in the Cloudflare dashboard to Coordinated Universal Time (UTC) or to your browser or system's timezone.

    What's New

    Unless otherwise specified in the user interface, all dates and times in the Cloudflare dashboard are now displayed in the selected timezone.

    You can change the timezone setting from the user profile dropdown.

    Timezone preference dropdown

    The page will reload to apply the new timezone setting.

  1. You can now control how Cloudflare buffers HTTP request and response bodies using two new settings in Configuration Rules.

    Request body buffering

    Controls how Cloudflare buffers HTTP request bodies before forwarding them to your origin server:

    | Mode | Behavior |
    | --- | --- |
    | Standard (default) | Cloudflare can inspect a prefix of the request body for enabled functionality such as WAF and Bot Management. |
    | Full | Buffers the entire request body before sending to origin. |
    | None | No buffering — the request body streams directly to origin without inspection. |

    Response body buffering

    Controls how Cloudflare buffers HTTP response bodies before forwarding them to the client:

    | Mode | Behavior |
    | --- | --- |
    | Standard (default) | Cloudflare can inspect a prefix of the response body for enabled functionality. |
    | None | No buffering — the response body streams directly to the client without inspection. |

    API example

    {
      "action": "set_config",
      "action_parameters": {
        "request_body_buffering": "standard",
        "response_body_buffering": "none"
      }
    }
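
    As a fuller sketch of deploying such a rule through the Rulesets API (the zone ID, API token, path expression, and the http_config_settings phase entry point used here are assumptions to adapt to your own setup):

    Terminal window
    curl --request PUT \
      --url 'https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/rulesets/phases/http_config_settings/entrypoint' \
      --header 'Authorization: Bearer {TOKEN}' \
      --header 'Content-Type: application/json' \
      --data '{
        "rules": [
          {
            "description": "Stream upload bodies without buffering",
            "expression": "starts_with(http.request.uri.path, \"/upload\")",
            "action": "set_config",
            "action_parameters": {
              "request_body_buffering": "none",
              "response_body_buffering": "none"
            }
          }
        ]
      }'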

    For more information, refer to Configuration Rules.

    This week’s release introduces new detections for denial-of-service attempts targeting React (CVE-2026-23864).

    Key Findings

    • CVE-2026-23864 affects the react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack packages.
    • Attackers can send crafted HTTP requests to Server Function endpoints, causing server crashes, out-of-memory exceptions, or excessive CPU usage.
    | Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | Cloudflare Managed Ruleset | | N/A | React Server - DOS - CVE:CVE-2026-23864 - 1 | N/A | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | React Server - DOS - CVE:CVE-2026-23864 - 2 | N/A | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | React Server - DOS - CVE:CVE-2026-23864 - 3 | N/A | Block | This is a new detection. |
  1. Screenshot of new 2FA enrollment experience

    To improve overall user security, users who do not have 2FA enabled will be prompted upon login to enroll in email 2FA, strengthening their security posture while minimizing friction. Enrolling in 2FA remains optional, but it is strongly encouraged as the best way to prevent account takeovers.

    We also made changes to existing 2FA screens to improve the user experience. Now we have distinct experiences for each 2FA factor type, reflective of the way that factor works.

    For more information

  1. Paid plans can now have up to 100,000 files per Pages site, increased from the previous limit of 20,000 files.

    To enable this increased limit, set the environment variable PAGES_WRANGLER_MAJOR_VERSION=4 in your Pages project settings.

    The Free plan remains at 20,000 files per site.

    For more details, refer to the Pages limits documentation.

  1. You can now store up to 10 million vectors in a single Vectorize index, doubling the previous limit of 5 million vectors. This enables larger-scale semantic search, recommendation systems, and retrieval-augmented generation (RAG) applications without splitting data across multiple indexes.

    Vectorize continues to support indexes with up to 1,536 dimensions per vector at 32-bit precision. Refer to the Vectorize limits documentation for complete details.

  1. Cloudflare Rulesets now includes encode_base64() and sha256() functions, enabling you to generate signed request headers directly in rule expressions. These functions support common patterns like constructing a canonical string from request attributes, computing a SHA256 digest, and Base64-encoding the result.


    New functions

    | Function | Description | Availability |
    | --- | --- | --- |
    | encode_base64(input, flags) | Encodes a string to Base64 format. Optional flags parameter: u for URL-safe encoding, p for padding (adds = characters to make the output length a multiple of 4, as required by some systems). By default, output is standard Base64 without padding. | All plans (in header transform rules) |
    | sha256(input) | Computes a SHA256 hash of the input string. | Requires enablement |

    Examples

    Encode a string to Base64 format:

    encode_base64("hello world")

    Returns: aGVsbG8gd29ybGQ

    Encode a string to Base64 format with padding:

    encode_base64("hello world", "p")

    Returns: aGVsbG8gd29ybGQ=

    Perform a URL-safe Base64 encoding of a string:

    encode_base64("hello world", "u")

    Returns: aGVsbG8gd29ybGQ

    Compute the SHA256 hash of a secret token:

    sha256("my-token")

    Returns a hash that your origin can validate to authenticate requests.

    Compute the SHA256 hash of a string and encode the result to Base64 format:

    encode_base64(sha256("my-token"))

    Combines hashing and encoding for systems that expect Base64-encoded signatures.
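
    For instance, a request header transform rule could derive a signature header from a canonical string built out of request attributes (the path-plus-secret canonical string below is purely illustrative, and concat() refers to the existing Rules language concatenation function):

    encode_base64(sha256(concat(http.request.uri.path, "my-shared-secret")))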

    For more information, refer to the Functions reference.

    You can now configure Workers to run close to infrastructure hosted in legacy cloud regions, minimizing latency to existing services and databases. This is most useful when your Worker makes multiple round trips to that infrastructure.

    To set a placement hint, set the placement.region property in your Wrangler configuration file:

    {
      "placement": {
        "region": "aws:us-east-1",
      },
    }

    Placement hints support Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure region identifiers. Workers run in the Cloudflare data center with the lowest latency to the specified cloud region.

    If your existing infrastructure is not in these cloud providers, expose it to placement probes with placement.host for layer 4 checks or placement.hostname for layer 7 checks. These probes are designed to locate single-homed infrastructure and are not suitable for anycast or multicast resources.

    {
      "placement": {
        "host": "my_database_host.com:5432",
      },
    }
    {
      "placement": {
        "hostname": "my_api_server.com",
      },
    }

    This is an extension of Smart Placement, which automatically places your Workers closer to back-end APIs based on measured latency. When you do not know the location of your back-end APIs or have multiple back-end APIs, set mode: "smart":

    {
      "placement": {
        "mode": "smart",
      },
    }
