Cloudflare Docs

Changelog

New updates and improvements at Cloudflare.

Developer platform
  1. Disclaimer: Please note that v6.0.0-beta.1 is in Beta and we are still testing it for stability.

    Full Changelog: v5.2.0...v6.0.0-beta.1

    In this release, you'll see a large number of breaking changes. This is primarily due to a change in OpenAPI definitions, which our libraries are based off of, and codegen updates that we rely on to read those OpenAPI definitions and produce our SDK libraries. As the codegen is always evolving and improving, so are our code bases.

    Some breaking changes were introduced due to bug fixes, also listed below.

    Please ensure you read through the list of changes below before moving to this version - this will help you understand any downstream or upstream issues it may cause in your environments.


    Breaking Changes

    Addressing - Parameter Requirements Changed

    • BGPPrefixCreateParams.cidr: optional → required
    • PrefixCreateParams.asn: number | null → number
    • PrefixCreateParams.loa_document_id: required → optional
    • ServiceBindingCreateParams.cidr: optional → required
    • ServiceBindingCreateParams.service_id: optional → required

    API Gateway

    • ConfigurationUpdateResponse removed
    • PublicSchema → OldPublicSchema
    • SchemaUpload → UserSchemaCreateResponse
    • ConfigurationUpdateParams.properties removed; use normalize

    CloudforceOne - Response Type Changes

    • ThreatEventBulkCreateResponse: number → complex object with counts and errors

    D1 Database - Query Parameters

    • DatabaseQueryParams: simple interface → union type (D1SingleQuery | MultipleQueries)
    • DatabaseRawParams: same change
    • Supports batch queries via batch array
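
    As a sketch of what the union change means in practice (these type definitions are illustrative, not the SDK's generated names, which come from the OpenAPI codegen):

    ```typescript
    // Illustrative shapes for the two variants of the new union.
    type D1SingleQuery = {
      account_id: string;
      sql: string;
      params?: string[];
    };

    type MultipleQueries = {
      account_id: string;
      // Batch queries are submitted via a `batch` array.
      batch: { sql: string; params?: string[] }[];
    };

    type DatabaseQueryParams = D1SingleQuery | MultipleQueries;

    // Narrowing on the presence of `batch` distinguishes the two variants:
    function describe(q: DatabaseQueryParams): string {
      return "batch" in q ? `batch of ${q.batch.length}` : "single query";
    }
    ```

    Code that previously constructed a single flat params object may need an explicit narrowing step like this before accessing variant-specific fields.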

    DNS Records - Type Renames (21 types)

    All record type interfaces renamed from *Record to short names:

    • RecordResponse.ARecord → RecordResponse.A
    • RecordResponse.AAAARecord → RecordResponse.AAAA
    • RecordResponse.CNAMERecord → RecordResponse.CNAME
    • RecordResponse.MXRecord → RecordResponse.MX
    • RecordResponse.NSRecord → RecordResponse.NS
    • RecordResponse.PTRRecord → RecordResponse.PTR
    • RecordResponse.TXTRecord → RecordResponse.TXT
    • RecordResponse.CAARecord → RecordResponse.CAA
    • RecordResponse.CERTRecord → RecordResponse.CERT
    • RecordResponse.DNSKEYRecord → RecordResponse.DNSKEY
    • RecordResponse.DSRecord → RecordResponse.DS
    • RecordResponse.HTTPSRecord → RecordResponse.HTTPS
    • RecordResponse.LOCRecord → RecordResponse.LOC
    • RecordResponse.NAPTRRecord → RecordResponse.NAPTR
    • RecordResponse.SMIMEARecord → RecordResponse.SMIMEA
    • RecordResponse.SRVRecord → RecordResponse.SRV
    • RecordResponse.SSHFPRecord → RecordResponse.SSHFP
    • RecordResponse.SVCBRecord → RecordResponse.SVCB
    • RecordResponse.TLSARecord → RecordResponse.TLSA
    • RecordResponse.URIRecord → RecordResponse.URI
    • RecordResponse.OpenpgpkeyRecord → RecordResponse.Openpgpkey

    IAM Resource Groups

    • ResourceGroupCreateResponse.scope: optional single → required array
    • ResourceGroupCreateResponse.id: optional → required

    Origin CA Certificates - Parameter Requirements Changed

    • OriginCACertificateCreateParams.csr: optional → required
    • OriginCACertificateCreateParams.hostnames: optional → required
    • OriginCACertificateCreateParams.request_type: optional → required

    Pages

    • Renamed: DeploymentsSinglePage → DeploymentListResponsesV4PagePaginationArray
    • Domain response fields: many optional → required

    Pipelines - v0 to v1 Migration

    • Entire v0 API deprecated; use v1 methods (createV1, listV1, etc.)
    • New sub-resources: Sinks, Streams

    R2

    • EventNotificationUpdateParams.rules: optional → required
    • Super Slurper: bucket, secret now required in source params

    Radar

    • dataSource: string → typed enum (23 values)
    • eventType: string → typed enum (6 values)
    • V2 methods require dimension parameter (breaking signature change)

    Resource Sharing

    • Removed: status_message field from all recipient response types

    Schema Validation

    • Consolidated SchemaCreateResponse, SchemaListResponse, SchemaEditResponse, SchemaGetResponse → PublicSchema
    • Renamed: SchemaListResponsesV4PagePaginationArray → PublicSchemasV4PagePaginationArray

    Spectrum

    • Renamed union members: AppListResponse.UnionMember0 → SpectrumConfigAppConfig
    • Renamed union members: AppListResponse.UnionMember1 → SpectrumConfigPaygoAppConfig

    Workers

    • Removed: WorkersBindingKindTailConsumer type (all occurrences)
    • Renamed: ScriptsSinglePage → ScriptListResponsesSinglePage
    • Removed: DeploymentsSinglePage

    Zero-Trust DLP

    • datasets.create(), update(), get() return types changed
    • PredefinedGetResponse union members renamed to UnionMember0-5

    Zero-Trust Tunnels

    • Removed: CloudflaredCreateResponse, CloudflaredListResponse, CloudflaredDeleteResponse, CloudflaredEditResponse, CloudflaredGetResponse
    • Removed: CloudflaredListResponsesV4PagePaginationArray

    Features

    Abuse Reports (client.abuseReports)

    • Reports: create, list, get
    • Mitigations: sub-resource for abuse mitigations

    AI Search (client.aisearch)

    • Instances: create, update, list, delete, read, stats
    • Items: list, get
    • Jobs: create, list, get, logs
    • Tokens: create, update, list, delete, read

    Connectivity (client.connectivity)

    • Directory Services: create, update, list, delete, get
    • Supports IPv4, IPv6, dual-stack, and hostname configurations

    Organizations (client.organizations)

    • Organizations: create, update, list, delete, get
    • OrganizationProfile: update, get
    • Hierarchical organization support with parent/child relationships

    R2 Data Catalog (client.r2DataCatalog)

    • Catalog: list, enable, disable, get
    • Credentials: create
    • MaintenanceConfigs: update, get
    • Namespaces: list
    • Tables: list, maintenance config management
    • Apache Iceberg integration

    Realtime Kit (client.realtimeKit)

    • Apps: get, post
    • Meetings: create, get, participant management
    • Livestreams: 10+ methods for streaming
    • Recordings: start, pause, stop, get
    • Sessions: transcripts, summaries, chat
    • Webhooks: full CRUD
    • ActiveSession: polls, kick participants
    • Analytics: organization analytics

    Token Validation (client.tokenValidation)

    • Configuration: create, list, delete, edit, get
    • Credentials: update
    • Rules: create, list, delete, bulkCreate, bulkEdit, edit, get
    • JWT validation with RS256/384/512, PS256/384/512, ES256, ES384

    Alerting Silences (client.alerting.silences)

    • create, update, list, delete, get

    IAM SSO (client.iam.sso)

    • create, update, list, delete, get, beginVerification

    Pipelines v1 (client.pipelines)

    • Sinks: create, list, delete, get
    • Streams: create, update, list, delete, get

    Zero-Trust AI Controls / MCP (client.zeroTrust.access.aiControls.mcp)

    • Portals: create, update, list, delete, read
    • Servers: create, update, list, delete, read, sync

    Accounts

    • managed_by field with parent_org_id, parent_org_name

    Addressing LOA Documents

    • auto_generated field on LOADocumentCreateResponse

    Addressing Prefixes

    • delegate_loa_creation, irr_validation_state, ownership_validation_state, ownership_validation_token, rpki_validation_state

    AI

    • Added toMarkdown.supported() method to get all supported conversion formats

    AI Gateway

    • zdr field added to all responses and params

    Alerting

    • New alert type: abuse_report_alert
    • type field added to PolicyFilter

    Browser Rendering

    • ContentCreateParams: refined to discriminated union (Variant0 | Variant1)
    • Split into URL-based and HTML-based parameter variants for better type safety

    Client Certificates

    • reactivate parameter in edit

    CloudforceOne

    • ThreatEventCreateParams.indicatorType: required → optional
    • hasChildren field added to all threat event response types
    • datasetIds query parameter on AttackerListParams, CategoryListParams, TargetIndustryListParams
    • categoryUuid field on TagCreateResponse
    • indicators array for multi-indicator support per event
    • uuid and preserveUuid fields for UUID preservation in bulk create
    • format query parameter ('json' | 'stix2') on ThreatEventListParams
    • createdAt, datasetId fields on ThreatEventEditParams

    Content Scanning

    • Added create(), update(), get() methods

    Custom Pages

    • New page types: basic_challenge, under_attack, waf_challenge

    D1

    • served_by_colo - colo that handled query
    • jurisdiction - 'eu' | 'fedramp'
    • Time Travel (client.d1.database.timeTravel): getBookmark(), restore() - point-in-time recovery

    Email Security

    • New fields on InvestigateListResponse/InvestigateGetResponse: envelope_from, envelope_to, postfix_id_outbound, replyto
    • New detection classification: 'outbound_ndr'
    • Enhanced Finding interface with attachment, detection, field, portion, reason, score
    • Added cursor query parameter to InvestigateListParams

    Gateway Lists

    • New list types: CATEGORY, LOCATION, DEVICE

    Intel

    • New issue type: 'configuration_suggestion'
    • payload field: unknown → typed Payload interface with detection_method, zone_tag

    Leaked Credential Checks

    • Added detections.get() method

    Logpush

    • New datasets: dex_application_tests, dex_device_state_events, ipsec_logs, warp_config_changes, warp_toggle_changes

    Load Balancers

    • Monitor.port: number → number | null
    • Pool.load_shedding: LoadShedding → LoadShedding | null
    • Pool.origin_steering: OriginSteering → OriginSteering | null

    Magic Transit

    • license_key field on connectors
    • provision_license parameter for auto-provisioning
    • IPSec: custom_remote_identities with FQDN support
    • Snapshots: Bond interface, probed_mtu field

    Pages

    • New response types: ProjectCreateResponse, ProjectListResponse, ProjectEditResponse, ProjectGetResponse
    • Deployment methods return specific response types instead of generic Deployment

    Queues

    • Added subscriptions.get() method
    • Enhanced SubscriptionGetResponse with typed event source interfaces
    • New event source types: Images, KV, R2, Vectorize, Workers AI, Workers Builds, Workflows

    R2

    • Sippy: new provider s3 (S3-compatible endpoints)
    • Sippy: bucketUrl field for S3-compatible sources
    • Super Slurper: keys field on source response schemas (specify specific keys to migrate)
    • Super Slurper: pathPrefix field on source schemas
    • Super Slurper: region field on S3 source params

    Radar

    • Added geolocations.list(), geolocations.get() methods
    • Added V2 dimension-based methods (summaryV2, timeseriesGroupsV2) to radar sub-resources

    Resource Sharing

    • Added terminal boolean field to Resource Error interfaces

    Rules

    • Added id field to ItemDeleteParams.Item

    Rulesets

    • New buffering fields on SetConfigRule: request_body_buffering, response_body_buffering

    Secrets Store

    • New scopes: 'dex', 'access' (in addition to 'workers', 'ai_gateway')

    SSL Certificate Packs

    • Response types now proper interfaces (was unknown)
    • Fields now required: id, certificates, hosts, status, type

    Security Center

    • payload field: unknown → typed Payload interface with detection_method, zone_tag

    Shared Types

    • Added: CloudflareTunnelsV4PagePaginationArray pagination class

    Workers

    • Added subdomains.delete() method
    • Worker.references - track external dependencies (domains, Durable Objects, queues)
    • Worker.startup_time_ms - startup timing
    • Script.observability - observability settings with logging
    • Script.tag, Script.tags - immutable ID and tags
    • Placement: support for region, hostname, host-based placement
    • tags, tail_consumers now accept | null
    • Telemetry: traces field, $containers event info, durableObjectId, transactionName, abr_level fields

    Workers for Platforms

    • ScriptUpdateResponse: new fields entry_point, observability, tag, tags
    • placement field now union of 4 variants (smart mode, region, hostname, host)
    • tags, tail_consumers now nullable
    • TagUpdateParams.body now accepts null

    Workflows

    • instance_retention: unknown → typed InstanceRetention interface with error_retention, success_retention
    • New status option: 'restart' added to StatusEditParams.status

    Zero-Trust Devices

    • External emergency disconnect settings (4 new fields)
    • antivirus device posture check type
    • os_version_extra documentation improvements

    Zones

    • New response types: SubscriptionCreateResponse, SubscriptionUpdateResponse, SubscriptionGetResponse

    Zero-Trust Access Applications

    • New ApplicationType values: 'mcp', 'mcp_portal', 'proxy_endpoint'
    • New destination type: ViaMcpServerPortalDestination for MCP server access

    Zero-Trust Gateway

    • Added rules.listTenant() method

    Zero-Trust Gateway - Proxy Endpoints

    • ProxyEndpoint: interface → discriminated union (ZeroTrustGatewayProxyEndpointIP | ZeroTrustGatewayProxyEndpointIdentity)
    • ProxyEndpointCreateParams: interface → union type
    • Added kind field: 'ip' | 'identity'

    Zero-Trust Tunnels

    • WARPConnector*Response: union type → interface

    Deprecations

    • API Gateway: UserSchemas, Settings, SchemaValidation resources
    • Audit Logs: auditLogId.not (use id.not)
    • CloudforceOne: ThreatEvents.get(), IndicatorTypes.list()
    • Devices: public_ip field (use DEX API)
    • Email Security: item_count field in Move responses
    • Pipelines: v0 methods (use v1)
    • Radar: old summary() and timeseriesGroups() methods (use V2)
    • Rulesets: disable_apps, mirage fields
    • WARP Connector: connections field
    • Workers: environment parameter in Domains
    • Zones: ResponseBuffering page rule

    Bug Fixes

    • mcp: correct code tool API endpoint (599703c)
    • mcp: return correct lines on typescript errors (5d6f999)
    • organization_profile: fix bad reference (d84ea77)
    • schema_validation: correctly reflect model to openapi mapping (bb86151)
    • workers: fix tests (2ee37f7)

    Documentation

    • Added deprecation notices with migration paths
    • api_gateway: deprecate API Shield Schema Validation resources (8a4b20f)
    • Improved JSDoc examples across all resources
    • workers: expose subdomain delete documentation (4f7cc1f)
  1. In January 2025, we announced the launch of the new Terraform v5 Provider. We greatly appreciate the proactive engagement and valuable feedback from the Cloudflare community following the v5 release. In response, we've established a consistent and rapid 2-3 week cadence for releasing targeted improvements, demonstrating our commitment to stability and reliability.

    With the help of the community, we have a growing number of resources that we have marked as stable, with that list continuing to grow with every release. The most used resources are on track to be stable by the end of March 2026, when we will also be releasing a new migration tool to help you migrate from v4 to v5 with ease.

    Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.

    This release includes bug fixes, the stabilization of even more popular resources, and more.

    Features

    • custom_pages: add "waf_challenge" as new supported error page type identifier in both resource and data source schemas
    • list: enhance CIDR validator to check for normalized CIDR notation requiring network address for IPv4 and IPv6
    • magic_wan_gre_tunnel: add automatic_return_routing attribute for automatic routing control
    • magic_wan_gre_tunnel: add BGP configuration support with new BGP model attribute
    • magic_wan_gre_tunnel: add bgp_status computed attribute for BGP connection status information
    • magic_wan_gre_tunnel: enhance schema with BGP-related attributes and validators
    • magic_wan_ipsec_tunnel: add automatic_return_routing attribute for automatic routing control
    • magic_wan_ipsec_tunnel: add BGP configuration support with new BGP model attribute
    • magic_wan_ipsec_tunnel: add bgp_status computed attribute for BGP connection status information
    • magic_wan_ipsec_tunnel: add custom_remote_identities attribute for custom identity configuration
    • magic_wan_ipsec_tunnel: enhance schema with BGP and identity-related attributes
    • ruleset: add request body buffering support
    • ruleset: enhance ruleset data source with additional configuration options
    • workers_script: add observability logs attributes to list data source model
    • workers_script: enhance list data source schema with additional configuration options

    Bug Fixes

    • account_member: fix resource importability issues
    • dns_record: remove unnecessary fmt.Sprintf wrapper around LoadTestCase call in test configuration helper function
    • load_balancer: fix session_affinity_ttl type expectations to match Float64 in initial creation and Int64 after migration
    • workers_kv: handle special characters correctly in URL encoding

    Documentation

    • account_subscription: update schema description for rate_plan.sets attribute to clarify it returns an array of strings
    • api_shield: add resource-level description for API Shield management of auth ID characteristics
    • api_shield: enhance auth_id_characteristics.name attribute description to include JWT token configuration format requirements
    • api_shield: specify JSONPath expression format for JWT claim locations
    • hyperdrive_config: add description attribute to name attribute explaining its purpose in dashboard and API identification
    • hyperdrive_config: apply description improvements across resource, data source, and list data source schemas
    • hyperdrive_config: improve schema descriptions for cache settings to clarify default values
    • hyperdrive_config: update port description to clarify defaults for different database types

    For more information

  1. Auxiliary Workers are now fully supported when using full-stack frameworks, such as React Router and TanStack Start, that integrate with the Cloudflare Vite plugin. They are included alongside the framework's build output in the build output directory. Note that this feature requires Vite 7 or above.

    Auxiliary Workers are additional Workers that can be called via service bindings from your main (entry) Worker. They are defined in the plugin config, as in the example below:

    vite.config.ts
    import { defineConfig } from "vite";
    import { tanstackStart } from "@tanstack/react-start/plugin/vite";
    import { cloudflare } from "@cloudflare/vite-plugin";

    export default defineConfig({
      plugins: [
        tanstackStart(),
        cloudflare({
          viteEnvironment: { name: "ssr" },
          auxiliaryWorkers: [{ configPath: "./wrangler.aux.jsonc" }],
        }),
      ],
    });

    See the Vite plugin API docs for more info.

    The .sql file extension is now automatically configured to be importable in your Worker code when using Wrangler or the Cloudflare Vite plugin. This is particularly useful for importing migrations in Durable Objects and means you no longer need to configure custom rules when using Drizzle.

    SQL files are imported as JavaScript strings:

    TypeScript
    // `example` will be a JavaScript string
    import example from "./example.sql";
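
    Since the import is just a string, you can feed it to a SQL API directly or split it into individual statements first. A minimal sketch (the inline `migrations` string stands in for an actual `import migrations from "./example.sql";`, and `splitStatements` is an illustrative helper, not part of the platform):

    ```typescript
    // Stand-in for the string a `.sql` import would produce.
    const migrations = `
    CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE INDEX IF NOT EXISTS idx_users_name ON users(name);
    `;

    // Split a migration file into individual statements, e.g. for running
    // them one at a time against a Durable Object's SQLite storage.
    function splitStatements(sql: string): string[] {
      return sql
        .split(";")
        .map((s) => s.trim())
        .filter((s) => s.length > 0);
    }
    ```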
  1. We have made it easier to validate connectivity when deploying WARP Connector as part of your software-defined private network.

    You can now ping the WARP Connector host directly on its LAN IP address immediately after installation. This provides a fast, familiar way to confirm that the Connector is online and reachable within your network before testing access to downstream services.

    Starting with version 2025.10.186.0, WARP Connector responds to traffic addressed to its own LAN IP, giving you immediate visibility into Connector reachability.

    Learn more about deploying WARP Connector and building private network connectivity with Cloudflare One.

  1. We've partnered with Black Forest Labs (BFL) again to bring their optimized FLUX.2 [klein] 4B model to Workers AI! This distilled model offers faster generation and cost-effective pricing, while maintaining great output quality. With a fixed 4-step inference process, Klein 4B is ideal for rapid prototyping and real-time applications where speed matters.

    Read the BFL blog to learn more about the model itself, or try it out yourself on our multimodal playground.

    Pricing documentation is available on the model page or pricing page.

    Workers AI Platform specifics

    The model hosted on Workers AI is optimized for speed with a fixed 4-step inference process and supports up to 4 image inputs. Since this is a distilled model, the steps parameter is fixed at 4 and cannot be adjusted. Like FLUX.2 [dev], this image model uses multipart form data inputs, even if you just have a prompt.

    With the REST API, the multipart form data input looks like this:

    Terminal window
    curl --request POST \
      --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-klein-4b' \
      --header 'Authorization: Bearer {TOKEN}' \
      --header 'Content-Type: multipart/form-data' \
      --form 'prompt=a sunset at the alps' \
      --form width=1024 \
      --form height=1024

    With the Workers AI binding, you can use it as such:

    JavaScript
    const form = new FormData();
    form.append("prompt", "a sunset with a dog");
    form.append("width", "1024");
    form.append("height", "1024");

    // FormData doesn't expose its serialized body or boundary. Passing it to a
    // Request (or Response) constructor serializes it and generates the Content-Type
    // header with the boundary, which is required for the server to parse the multipart fields.
    const formResponse = new Response(form);
    const formStream = formResponse.body;
    const formContentType = formResponse.headers.get("content-type");

    const resp = await env.AI.run("@cf/black-forest-labs/flux-2-klein-4b", {
      multipart: {
        body: formStream,
        contentType: formContentType,
      },
    });

    The parameters you can send to the model are detailed below.

    Required Parameters

    • prompt (string) - Text description of the image to generate

    Optional Parameters

    • input_image_0 (string) - Binary image
    • input_image_1 (string) - Binary image
    • input_image_2 (string) - Binary image
    • input_image_3 (string) - Binary image
    • guidance (float) - Guidance scale for generation. Higher values follow the prompt more closely
    • width (integer) - Width of the image. Default 1024, range 256-1920
    • height (integer) - Height of the image. Default 768, range 256-1920
    • seed (integer) - Seed for reproducibility

    Note: Since this is a distilled model, the steps parameter is fixed at 4 and cannot be adjusted.

    Multi-Reference Images

    The FLUX.2 [klein] 4B model supports generating images based on reference images, just like FLUX.2 [dev]. You can use this feature to apply the style of one image to another, add a new character to an image, or iterate on previously generated images. Use the same multipart form data structure, with the input images sent as binary. The model supports up to 4 input images.

    In the prompt, you can reference the images by index, like `take the subject of image 1 and style it like image 0`, or use natural language like `place the dog beside the woman`.

    Note: you must name the input parameters `input_image_0`, `input_image_1`, `input_image_2`, and `input_image_3` for them to be recognized. All input images must be smaller than 512x512.

    Terminal window
    curl --request POST \
      --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-klein-4b' \
      --header 'Authorization: Bearer {TOKEN}' \
      --header 'Content-Type: multipart/form-data' \
      --form 'prompt=take the subject of image 1 and style it like image 0' \
      --form input_image_0=@/Users/johndoe/Desktop/icedoutkeanu.png \
      --form input_image_1=@/Users/johndoe/Desktop/me.png \
      --form width=1024 \
      --form height=1024

    Through Workers AI Binding:

    TypeScript
    // Helper function to convert a ReadableStream to a Blob
    async function streamToBlob(stream: ReadableStream, contentType: string): Promise<Blob> {
      const reader = stream.getReader();
      const chunks = [];
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        chunks.push(value);
      }
      return new Blob(chunks, { type: contentType });
    }

    const image0 = await fetch("http://image-url");
    const image1 = await fetch("http://image-url");

    const form = new FormData();
    const image_blob0 = await streamToBlob(image0.body, "image/png");
    const image_blob1 = await streamToBlob(image1.body, "image/png");
    form.append("input_image_0", image_blob0);
    form.append("input_image_1", image_blob1);
    form.append("prompt", "take the subject of image 1 and style it like image 0");

    // FormData doesn't expose its serialized body or boundary. Passing it to a
    // Request (or Response) constructor serializes it and generates the Content-Type
    // header with the boundary, which is required for the server to parse the multipart fields.
    const formResponse = new Response(form);
    const formStream = formResponse.body;
    const formContentType = formResponse.headers.get("content-type");

    const resp = await env.AI.run("@cf/black-forest-labs/flux-2-klein-4b", {
      multipart: {
        body: formStream,
        contentType: formContentType,
      },
    });
  1. The wrangler types command now generates TypeScript types for bindings from all environments defined in your Wrangler configuration file by default.

    Previously, wrangler types only generated types for bindings in the top-level configuration (or a single environment when using the --env flag). This meant that if you had environment-specific bindings — for example, a KV namespace only in production or an R2 bucket only in staging — those bindings would be missing from your generated types, causing TypeScript errors when accessing them.

    Now, running wrangler types collects bindings from all environments and includes them in the generated Env type. This ensures your types are complete regardless of which environment you deploy to.

    Generating types for a specific environment

    If you want the previous behavior of generating types for only a specific environment, you can use the --env flag:

    Terminal window
    wrangler types --env production

    Learn more about generating types for your Worker in the Wrangler documentation.

  1. Wrangler now supports a --check flag for the wrangler types command. This flag validates that your generated types are up to date without writing any changes to disk.

    This is useful in CI/CD pipelines where you want to ensure that developers have regenerated their types after making changes to their Wrangler configuration. If the types are out of date, the command will exit with a non-zero status code.

    Terminal window
    npx wrangler types --check

    If your types are up to date, the command will succeed silently. If they are out of date, you'll see an error message indicating which files need to be regenerated.

    For more information, see the Wrangler types documentation.

  1. You can now receive notifications when your Workers' builds start, succeed, fail, or get cancelled using Event Subscriptions.

    Workers Builds publishes events to a Queue that your Worker can read messages from, and then send notifications wherever you need — Slack, Discord, email, or any webhook endpoint.

    You can deploy this Worker to your own Cloudflare account to send build notifications to Slack:

    Deploy to Cloudflare

    The template includes:

    • Build status with Preview/Live URLs for successful deployments
    • Inline error messages for failed builds
    • Branch, commit hash, and author name
    Slack notifications showing build events

    For setup instructions, refer to the template README or the Event Subscriptions documentation.
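
    A consumer along these lines can be sketched as follows. This is illustrative only: the event shape, the local `MessageBatch` shim, and the `WEBHOOK_URL` binding are assumptions, not the template's actual code.

    ```typescript
    // Illustrative event shape; the actual Workers Builds event schema may differ.
    interface BuildEvent {
      status: "started" | "succeeded" | "failed" | "cancelled";
      branch?: string;
      commit?: string;
    }

    // Minimal stand-in for the Workers runtime's MessageBatch type, so the
    // sketch is self-contained outside the runtime.
    type MessageBatch<T> = { messages: { body: T; ack(): void }[] };

    export function formatMessage(event: BuildEvent): string {
      const where = event.branch ? ` on ${event.branch}` : "";
      const commit = event.commit ? ` (${event.commit.slice(0, 7)})` : "";
      return `Build ${event.status}${where}${commit}`;
    }

    export default {
      // Queue consumer: forward each build event to a webhook, then ack it.
      async queue(batch: MessageBatch<BuildEvent>, env: { WEBHOOK_URL: string }) {
        for (const msg of batch.messages) {
          await fetch(env.WEBHOOK_URL, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ text: formatMessage(msg.body) }),
          });
          msg.ack();
        }
      },
    };
    ```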

  1. Wrangler now includes built-in shell tab completion support, making it faster and easier to navigate commands without memorizing every option. Press Tab as you type to autocomplete commands, subcommands, flags, and even option values like log levels.

    Tab completions are supported for Bash, Zsh, Fish, and PowerShell.

    Setup

    Generate the completion script for your shell and add it to your configuration file:

    Terminal window
    # Bash
    wrangler complete bash >> ~/.bashrc
    # Zsh
    wrangler complete zsh >> ~/.zshrc
    # Fish
    wrangler complete fish >> ~/.config/fish/config.fish
    # PowerShell
    wrangler complete powershell >> $PROFILE

    After adding the script, restart your terminal or source your configuration file for the changes to take effect. Then you can simply press Tab to see available completions:

    Terminal window
    wrangler d<TAB> # completes to 'deploy', 'dev', 'd1', etc.
    wrangler kv <TAB> # shows subcommands: namespace, key, bulk

    Tab completions are dynamically generated from Wrangler's command registry, so they stay up-to-date as new commands and options are added. This feature is powered by @bomb.sh/tab.

    See the wrangler complete documentation for more details.

  1. You can now use the HAVING clause and LIKE pattern matching operators in Workers Analytics Engine.

    Workers Analytics Engine allows you to ingest and store high-cardinality data at scale and query your data through a simple SQL API.

    Filtering using HAVING

    The HAVING clause complements the WHERE clause by enabling you to filter groups based on aggregate values. While WHERE filters rows before aggregation, HAVING filters groups after aggregation is complete.

    You can use HAVING to filter groups where the average exceeds a threshold:

    SELECT
      blob1 AS probe_name,
      avg(double1) AS average_temp
    FROM temperature_readings
    GROUP BY probe_name
    HAVING average_temp > 10

    You can also filter groups based on aggregates such as the number of items in the group:

    SELECT
      blob1 AS probe_name,
      count() AS num_readings
    FROM temperature_readings
    GROUP BY probe_name
    HAVING num_readings > 100

    Pattern matching using LIKE

    The new pattern matching operators enable you to search for strings that match specific patterns using wildcard characters:

    • LIKE - case-sensitive pattern matching
    • NOT LIKE - case-sensitive pattern exclusion
    • ILIKE - case-insensitive pattern matching
    • NOT ILIKE - case-insensitive pattern exclusion

    Pattern matching supports two wildcard characters: % (matches zero or more characters) and _ (matches exactly one character).

    You can match strings starting with a prefix:

    SELECT *
    FROM logs
    WHERE blob1 LIKE 'error%'

    You can also match file extensions (case-insensitive):

    SELECT *
    FROM requests
    WHERE blob2 ILIKE '%.jpg'

    Another example is excluding strings containing specific text:

    SELECT *
    FROM events
    WHERE blob3 NOT ILIKE '%debug%'

    Ready to get started?

    Learn more about the HAVING clause or pattern matching operators in the Workers Analytics Engine SQL reference documentation.

  1. Custom instance types are now enabled for all Cloudflare Containers users. You can now specify specific vCPU, memory, and disk amounts, rather than being limited to pre-defined instance types. Previously, only select Enterprise customers were able to customize their instance type.

    To use a custom instance type, specify the instance_type property as an object with vcpu, memory_mib, and disk_mb fields in your Wrangler configuration:

    [[containers]]
    image = "./Dockerfile"
    instance_type = { vcpu = 2, memory_mib = 6144, disk_mb = 12000 }
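If you use JSON configuration instead of TOML, the same `instance_type` object can be expressed in `wrangler.jsonc`; the field names below mirror the TOML example above and are assumed equivalent, so check the Containers configuration reference for your Wrangler version:

```jsonc
{
  "containers": [
    {
      "image": "./Dockerfile",
      "instance_type": { "vcpu": 2, "memory_mib": 6144, "disk_mb": 12000 }
    }
  ]
}
```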

    Individual limits for custom instance types are based on the standard-4 instance type (4 vCPU, 12 GiB memory, 20 GB disk). You must allocate at least 1 vCPU for custom instance types. For workloads requiring less than 1 vCPU, use the predefined instance types like lite or basic.

See the limits documentation for the full list of constraints on custom instance types. See the getting started guide to deploy your first Container.

  1. You can now deploy microfrontends to Cloudflare, splitting a single application into smaller, independently deployable units that render as one cohesive application. This lets different teams using different frameworks develop, test, and deploy each microfrontend without coordinating releases.

    Microfrontends solve several challenges for large-scale applications:

    • Independent deployments: Teams deploy updates on their own schedule without redeploying the entire application
    • Framework flexibility: Build multi-framework applications (for example, Astro, Remix, and Next.js in one app)
    • Gradual migration: Migrate from a monolith to a distributed architecture incrementally

    Create a microfrontend project:

    Deploy to Cloudflare

    This template automatically creates a router worker with pre-configured routing logic, and lets you configure Service bindings to Workers you have already deployed to your Cloudflare account. The router Worker analyzes incoming requests, matches them against configured routes, and forwards requests to the appropriate microfrontend via service bindings. The router automatically rewrites HTML, CSS, and headers to ensure assets load correctly from each microfrontend's mount path. The router includes advanced features like preloading for faster navigation between microfrontends, smooth page transitions using the View Transitions API, and automatic path rewriting for assets, redirects, and cookies.

    Each microfrontend can be a full-framework application, a static site with Workers Static Assets, or any other Worker-based application.
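The routing behavior described above can be sketched as longest-prefix matching over configured mount paths; the `routes` table and `pickMicrofrontend` helper here are illustrative, not the template's actual API:

```typescript
// Sketch: match an incoming request path to a microfrontend by mount path.
// The longest matching prefix wins, so "/shop/cart" routes to the "/shop"
// app rather than the root app.
type Route = { mountPath: string; binding: string };

const routes: Route[] = [
  { mountPath: "/", binding: "HOME_APP" },
  { mountPath: "/shop", binding: "SHOP_APP" },
  { mountPath: "/docs", binding: "DOCS_APP" },
];

function pickMicrofrontend(pathname: string, table: Route[]): Route {
  const matches = table.filter((r) => {
    if (r.mountPath === "/") return true; // root app catches everything
    const prefix = r.mountPath.replace(/\/$/, "");
    return pathname === prefix || pathname.startsWith(prefix + "/");
  });
  // Prefer the most specific (longest) mount path
  matches.sort((a, b) => b.mountPath.length - a.mountPath.length);
  return matches[0];
}

console.log(pickMicrofrontend("/shop/cart", routes).binding); // "SHOP_APP"
console.log(pickMicrofrontend("/about", routes).binding);     // "HOME_APP"
```

In the deployed template, the selected route's service binding would then receive the forwarded request.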

    Get started with the microfrontends template, or read the microfrontends documentation for implementation details.

We've shipped Agents SDK v0.3.0, bringing full compatibility with AI SDK v6 and introducing the unified tool pattern, dynamic tool approval, and enhanced React hooks with improved tool handling.

    This release includes improved streaming and tool support, dynamic tool approval (for "human in the loop" systems), enhanced React hooks with onToolCall callback, improved error handling for streaming responses, and seamless migration from v5 patterns.

    This makes it ideal for building production AI chat interfaces with Cloudflare Workers AI models, agent workflows, human-in-the-loop systems, or any application requiring reliable tool execution and approval workflows.

    Additionally, we've updated workers-ai-provider v3.0.0, the official provider for Cloudflare Workers AI models, and ai-gateway-provider v3.0.0, the provider for Cloudflare AI Gateway, to be compatible with AI SDK v6.

    Agents SDK v0.3.0

    Unified Tool Pattern

    AI SDK v6 introduces a unified tool pattern where all tools are defined on the server using the tool() function. This replaces the previous client-side AITool pattern.

    Server-Side Tool Definition

    TypeScript
import { tool } from "ai";
import { z } from "zod";

// Server: Define ALL tools on the server
const tools = {
  // Server-executed tool
  getWeather: tool({
    description: "Get weather for a city",
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => fetchWeather(city)
  }),
  // Client-executed tool (no execute = client handles via onToolCall)
  getLocation: tool({
    description: "Get user location from browser",
    inputSchema: z.object({})
    // No execute function
  }),
  // Tool requiring approval (dynamic based on input)
  processPayment: tool({
    description: "Process a payment",
    inputSchema: z.object({ amount: z.number() }),
    needsApproval: async ({ amount }) => amount > 100,
    execute: async ({ amount }) => charge(amount)
  })
};

    Client-Side Tool Handling

    TypeScript
// Client: Handle client-side tools via onToolCall callback
import { useAgentChat } from "agents/ai-react";

const { messages, sendMessage, addToolOutput } = useAgentChat({
  agent,
  onToolCall: async ({ toolCall, addToolOutput }) => {
    if (toolCall.toolName === "getLocation") {
      const position = await new Promise((resolve, reject) => {
        navigator.geolocation.getCurrentPosition(resolve, reject);
      });
      addToolOutput({
        toolCallId: toolCall.toolCallId,
        output: {
          lat: position.coords.latitude,
          lng: position.coords.longitude
        }
      });
    }
  }
});

    Key benefits of the unified tool pattern:

    • Server-defined tools: All tools are defined in one place on the server
    • Dynamic approval: Use needsApproval to conditionally require user confirmation
    • Cleaner client code: Use onToolCall callback instead of managing tool configs
    • Type safety: Full TypeScript support with proper tool typing

    useAgentChat(options)

    Creates a new chat interface with enhanced v6 capabilities.

    TypeScript
// Basic chat setup with onToolCall
const { messages, sendMessage, addToolOutput } = useAgentChat({
  agent,
  onToolCall: async ({ toolCall, addToolOutput }) => {
    // Handle client-side tool execution
    await addToolOutput({
      toolCallId: toolCall.toolCallId,
      output: { result: "success" }
    });
  }
});

    Dynamic Tool Approval

    Use needsApproval on server tools to conditionally require user confirmation:

    TypeScript
const paymentTool = tool({
  description: "Process a payment",
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string()
  }),
  needsApproval: async ({ amount }) => amount > 1000,
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  }
});

    Tool Confirmation Detection

    The isToolUIPart and getToolName functions now check both static and dynamic tool parts:

    TypeScript
import { isToolUIPart, getToolName } from "ai";

// Find a tool part that is awaiting confirmation
const pendingToolPart = messages
  .flatMap((m) => m.parts ?? [])
  .find((part) => isToolUIPart(part) && part.state === "input-available");

// Handle tool confirmation
if (pendingToolPart) {
  await addToolOutput({
    toolCallId: pendingToolPart.toolCallId,
    output: "User approved the action"
  });
}

    If you need the v5 behavior (static-only checks), use the new functions:

    TypeScript
    import { isStaticToolUIPart, getStaticToolName } from "ai";

    convertToModelMessages() is now async

    The convertToModelMessages() function is now asynchronous. Update all calls to await the result:

    TypeScript
import { convertToModelMessages } from "ai";

const result = streamText({
  messages: await convertToModelMessages(this.messages),
  model: openai("gpt-4o")
});

    ModelMessage type

    The CoreMessage type has been removed. Use ModelMessage instead:

    TypeScript
    import { convertToModelMessages, type ModelMessage } from "ai";
    const modelMessages: ModelMessage[] = await convertToModelMessages(messages);

    generateObject mode option removed

    The mode option for generateObject has been removed:

    TypeScript
// Before (v5)
const result = await generateObject({
  mode: "json",
  model,
  schema,
  prompt
});

// After (v6)
const result = await generateObject({
  model,
  schema,
  prompt
});

    Structured Output with generateText

    While generateObject and streamObject are still functional, the recommended approach is to use generateText/streamText with the Output.object() helper:

    TypeScript
import { generateText, Output, stepCountIs } from "ai";

const { output } = await generateText({
  model: openai("gpt-4"),
  output: Output.object({
    schema: z.object({ name: z.string() })
  }),
  stopWhen: stepCountIs(2),
  prompt: "Generate a name"
});

    Note: When using structured output with generateText, you must configure multiple steps with stopWhen because generating the structured output is itself a step.

    workers-ai-provider v3.0.0

    Seamless integration with Cloudflare Workers AI models through the updated workers-ai-provider v3.0.0 with AI SDK v6 support.

    Model Setup with Workers AI

    Use Cloudflare Workers AI models directly in your agent workflows:

    TypeScript
import { createWorkersAI } from "workers-ai-provider";
import { useAgentChat } from "agents/ai-react";

// Create Workers AI model (v3.0.0 - enhanced v6 internals)
const model = createWorkersAI({
  binding: env.AI,
})("@cf/meta/llama-3.2-3b-instruct");

    Enhanced File and Image Support

    Workers AI models now support v6 file handling with automatic conversion:

    TypeScript
// Send images and files to Workers AI models
sendMessage({
  role: "user",
  parts: [
    { type: "text", text: "Analyze this image:" },
    {
      type: "file",
      data: imageBuffer,
      mediaType: "image/jpeg",
    },
  ],
});
// Workers AI provider automatically converts to proper format

    Streaming with Workers AI

    Enhanced streaming support with automatic warning detection:

    TypeScript
// Streaming with Workers AI models
const result = await streamText({
  model: createWorkersAI({ binding: env.AI })("@cf/meta/llama-3.2-3b-instruct"),
  messages: await convertToModelMessages(messages),
  onChunk: (chunk) => {
    // Enhanced streaming with warning handling
    console.log(chunk);
  },
});

    ai-gateway-provider v3.0.0

    The ai-gateway-provider v3.0.0 now supports AI SDK v6, enabling you to use Cloudflare AI Gateway with multiple AI providers including Anthropic, Azure, AWS Bedrock, Google Vertex, and Perplexity.

    AI Gateway Setup

    Use Cloudflare AI Gateway to add analytics, caching, and rate limiting to your AI applications:

    TypeScript
import { createAIGateway } from "ai-gateway-provider";

// Create AI Gateway provider (v3.0.0 - enhanced v6 internals)
const model = createAIGateway({
  gatewayUrl: "https://gateway.ai.cloudflare.com/v1/your-account-id/gateway",
  headers: {
    Authorization: `Bearer ${env.AI_GATEWAY_TOKEN}`
  }
})({
  provider: "openai",
  model: "gpt-4o"
});

    Migration from v5

    Deprecated APIs

    The following APIs are deprecated in favor of the unified tool pattern:

Deprecated → Replacement

• AITool type → Use AI SDK's tool() function on the server
• extractClientToolSchemas() → Define tools on the server; no client schemas needed
• createToolsFromClientSchemas() → Define tools on the server with tool()
• toolsRequiringConfirmation option → Use needsApproval on server tools
• experimental_automaticToolResolution → Use onToolCall callback
• tools option in useAgentChat → Use onToolCall for client-side execution
• addToolResult() → Use addToolOutput()

    Breaking Changes Summary

    1. Unified Tool Pattern: All tools must be defined on the server using tool()
    2. convertToModelMessages() is async: Add await to all calls
    3. CoreMessage removed: Use ModelMessage instead
    4. generateObject mode removed: Remove mode option
    5. isToolUIPart behavior changed: Now checks both static and dynamic tool parts

    Installation

    Update your dependencies to use the latest versions:

    Terminal window
    npm install agents@^0.3.0 workers-ai-provider@^3.0.0 ai-gateway-provider@^3.0.0 ai@^6.0.0 @ai-sdk/react@^3.0.0 @ai-sdk/openai@^3.0.0

    Resources

    Feedback Welcome

    We'd love your feedback! We're particularly interested in feedback on:

    • Migration experience - How smooth was the upgrade from v5 to v6?
    • Unified tool pattern - How does the new server-defined tool pattern work for you?
    • Dynamic tool approval - Does the needsApproval feature meet your needs?
    • AI Gateway integration - How well does the new provider work with your setup?
Earlier this year, we announced the launch of the new Terraform v5 provider. We are aware of the high number of issues the Cloudflare community has reported against the v5 release. We have committed to releasing improvements on a 2-3 week cadence to ensure its stability and reliability, including the v5.15 release. We have also pivoted from an issue-by-issue approach to a resource-by-resource approach: we focus on specific resources to not only stabilize each resource but also ensure it is migration-friendly for those moving from v4 to v5.

    Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.

    This release includes bug fixes, the stabilization of even more popular resources, and more.

    Features

    • ai_search: Add AI Search endpoints (6f02adb)
    • certificate_pack: Ensure proper Terraform resource ID handling for path parameters in API calls (081f32a)
    • worker_version: Support startup_time_ms (286ab55)
    • zero_trust_dlp_custom_entry: Support upload_status (7dc0fe3)
    • zero_trust_dlp_entry: Support upload_status (7dc0fe3)
    • zero_trust_dlp_integration_entry: Support upload_status (7dc0fe3)
    • zero_trust_dlp_predefined_entry: Support upload_status (7dc0fe3)
    • zero_trust_gateway_policy: Support forensic_copy (5741fd0)
    • zero_trust_list: Support additional types (category, location, device) (5741fd0)

    Bug fixes

    • access_rules: Add validation to prevent state drift. Ideally, we'd use Semantic Equality but since that isn't an option, this will remove a foot-gun. (4457791)
    • cloudflare_pages_project: Addressing drift issues (6edffcf) (3db318e)
    • cloudflare_worker: Can be cleanly imported (4859b52)
    • cloudflare_worker: Ensure clean imports (5b525bc)
    • list_items: Add validation for IP List items to avoid inconsistent state (b6733dc)
    • zero_trust_access_application: Remove all conditions from sweeper (3197f1a)
    • spectrum_application: Map missing fields during spectrum resource import (#6495) (ddb4e72)

    Upgrade to newer version

We suggest waiting to migrate to v5 while we work on stabilization. This helps you avoid blocking issues while the Terraform resources are actively being stabilized. We will release a new migration tool in March 2026 to support v4 to v5 transitions for our most popular resources.

    For more information

  1. TanStack Start apps can now prerender routes to static HTML at build time with access to build time environment variables and bindings, and serve them as static assets. To enable prerendering, configure the prerender option of the TanStack Start plugin in your Vite config:

    vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";

export default defineConfig({
  plugins: [
    cloudflare({ viteEnvironment: { name: "ssr" } }),
    tanstackStart({
      prerender: {
        enabled: true,
      },
    }),
  ],
});

    This feature requires @tanstack/react-start v1.138.0 or later. See the TanStack Start framework guide for more details.

  1. R2 Data Catalog now supports automatic snapshot expiration for Apache Iceberg tables.

In Apache Iceberg, a snapshot is metadata that represents the state of a table at a given point in time. Every mutation creates a new snapshot. Snapshots enable powerful features like time travel queries and rollback, but they accumulate over time.

    Without regular cleanup, these accumulated snapshots can lead to:

    • Metadata overhead
    • Slower table operations
    • Increased storage costs

    Snapshot expiration in R2 Data Catalog automatically removes old table snapshots based on your configured retention policy, improving performance and storage costs.

    Terminal window
    # Enable catalog-level snapshot expiration
    # Expire snapshots older than 7 days, always retain at least 10 recent snapshots
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket \
  --older-than-days 7 \
  --retain-last 10

    Snapshot expiration uses two parameters to determine which snapshots to remove:

    • --older-than-days: age threshold in days
    • --retain-last: minimum snapshot count to retain

    Both conditions must be met before a snapshot is expired, ensuring you always retain recent snapshots even if they exceed the age threshold.
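The two-condition rule can be sketched as follows; `expireSnapshots` is an illustrative model of the policy, not the catalog's implementation:

```typescript
// Sketch: a snapshot expires only if it is BOTH older than the age threshold
// AND outside the most recent `retainLast` snapshots.
type Snapshot = { id: number; ageDays: number };

function expireSnapshots(
  snapshots: Snapshot[],
  olderThanDays: number,
  retainLast: number
): number[] {
  // Sort newest first so the first `retainLast` entries are always kept
  const sorted = [...snapshots].sort((a, b) => a.ageDays - b.ageDays);
  return sorted
    .slice(retainLast)                        // keep at least `retainLast` recent snapshots
    .filter((s) => s.ageDays > olderThanDays) // of the rest, expire only those past the threshold
    .map((s) => s.id);
}

const snapshots: Snapshot[] = [
  { id: 1, ageDays: 30 },
  { id: 2, ageDays: 9 },
  { id: 3, ageDays: 8 },
  { id: 4, ageDays: 2 },
];

// With --older-than-days 7 --retain-last 2: snapshot 3 is past the age
// threshold but is among the 2 most recent, so only 2 and 1 expire.
console.log(expireSnapshots(snapshots, 7, 2)); // [ 2, 1 ]
```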

    This feature complements automatic compaction, which optimizes query performance by combining small data files into larger ones. Together, these automatic maintenance operations keep your Iceberg tables performant and cost-efficient without manual intervention.

    To learn more about snapshot expiration and how to configure it, visit our table maintenance documentation or see how to manage catalogs.

  1. We've published build image policies for Workers Builds and Cloudflare Pages, which establish:

    • Minor version updates: We typically update preinstalled software to the latest available minor version without notice. For tools that don't follow semantic versioning (e.g., Bun or Hugo), we provide 3 months’ notice.
    • Major version updates: Before preinstalled software reaches end-of-life, we update to the next stable LTS version with 3 months’ notice.
    • Build image version deprecation (Pages only): We provide 6 months’ notice before deprecation. Projects on v1 or v2 will be automatically moved to v3 on their specified deprecation dates.

    To prepare for updates, monitor the Cloudflare Changelog, dashboard notifications, and email. You can also override default versions to maintain specific versions.

  1. Wrangler now includes a new wrangler auth token command that retrieves your current authentication token or credentials for use with other tools and scripts.

    Terminal window
    wrangler auth token

    The command returns whichever authentication method is currently configured, in priority order: API token from CLOUDFLARE_API_TOKEN, or OAuth token from wrangler login (automatically refreshed if expired).

    Use the --json flag to get structured output including the token type:

    Terminal window
    wrangler auth token --json

    The JSON output includes the authentication type:

    // API token
    { "type": "api_token", "token": "..." }
    // OAuth token
    { "type": "oauth", "token": "..." }
    // API key/email (only available with --json)
    { "type": "api_key", "key": "...", "email": "..." }

    API key/email credentials from CLOUDFLARE_API_KEY and CLOUDFLARE_EMAIL require the --json flag since this method uses two values instead of a single token.
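A script consuming the --json output could branch on the type field to build the right request headers; this sketch assumes only the three output shapes shown above, and the header names are Cloudflare's standard token and key/email authentication headers:

```typescript
// Sketch: derive Cloudflare API auth headers from `wrangler auth token --json` output
type WranglerAuth =
  | { type: "api_token"; token: string }
  | { type: "oauth"; token: string }
  | { type: "api_key"; key: string; email: string };

function authHeaders(auth: WranglerAuth): Record<string, string> {
  switch (auth.type) {
    case "api_token":
    case "oauth":
      // Both token types are sent as a bearer token
      return { Authorization: `Bearer ${auth.token}` };
    case "api_key":
      // Legacy key/email auth uses two headers instead of a single token
      return { "X-Auth-Key": auth.key, "X-Auth-Email": auth.email };
  }
}

// Example: parse captured JSON output and build headers from it
const auth = JSON.parse('{"type":"api_token","token":"abc123"}') as WranglerAuth;
console.log(authHeaders(auth)); // logs the Authorization header derived from the token
```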

  1. Workers for Platforms lets you build multi-tenant platforms on Cloudflare Workers, allowing your end users to deploy and run their own code on your platform. It's designed for anyone building an AI vibe coding platform, e-commerce platform, website builder, or any product that needs to securely execute user-generated code at scale.

    Previously, setting up Workers for Platforms required using the API. Now, the Workers for Platforms UI supports namespace creation, dispatch worker templates, and tag management, making it easier for Workers for Platforms customers to build and manage multi-tenant platforms directly from the Cloudflare dashboard.

    Workers for Platforms Dashboard Improvements

    Key improvements

    • Namespace Management: You can now create and configure dispatch namespaces directly within the dashboard to start a new platform setup.
    • Dispatch Worker Templates: New Dispatch Worker templates allow you to quickly define how traffic is routed to individual Workers within your namespace. Refer to the Dynamic Dispatch documentation for more examples.
    • Tag Management: You can now set and update tags on User Workers, making it easier to group and manage your Workers.
    • Binding Visibility: Bindings attached to User Workers are now visible directly within the User Worker view.
    • Deploy Vibe Coding Platform in one click: Deploy a reference implementation of an AI vibe coding platform directly from the dashboard. Powered by Cloudflare's VibeSDK, this starter kit integrates with Workers for Platforms to handle the deployment of AI-generated projects at scale.

    To get started, go to Workers for Platforms under Compute & AI in the Cloudflare dashboard.

  1. The @cloudflare/vitest-pool-workers package now supports the ctx.exports API, allowing you to access your Worker's top-level exports during tests.

    You can access ctx.exports in unit tests by calling createExecutionContext():

    TypeScript
import { createExecutionContext } from "cloudflare:test";
import { it, expect } from "vitest";

it("can access ctx.exports", async () => {
  const ctx = createExecutionContext();
  const result = await ctx.exports.MyEntryPoint.myMethod();
  expect(result).toBe("expected value");
});

    Alternatively, you can import exports directly from cloudflare:workers:

    TypeScript
import { exports } from "cloudflare:workers";
import { it, expect } from "vitest";

it("can access imported exports", async () => {
  const result = await exports.MyEntryPoint.myMethod();
  expect(result).toBe("expected value");
});

    See the context-exports fixture for a complete example.

  1. Wrangler now supports automatic configuration for popular web frameworks in experimental mode, making it even easier to deploy to Cloudflare Workers.

    Previously, if you wanted to deploy an application using a popular web framework like Next.js or Astro, you had to follow tutorials to set up your application for deployment to Cloudflare Workers. This usually involved creating a Wrangler file, installing adapters, or changing configuration options.

    Now wrangler deploy does this for you. Starting with Wrangler 4.55, you can use npx wrangler deploy --x-autoconfig in the directory of any web application using one of the supported frameworks. Wrangler will then proceed to configure and deploy it to your Cloudflare account.

    You can also configure your application without deploying it by using the new npx wrangler setup command. This enables you to easily review what changes we are making so your application is ready for Cloudflare Workers.

    The following application frameworks are supported starting today:

    • Next.js
    • Astro
    • Nuxt
    • TanStack Start
    • SolidStart
    • React Router
    • SvelteKit
    • Docusaurus
    • Qwik
    • Analog

    Automatic configuration also supports static sites by detecting the assets directory and build command. From a single index.html file to the output of a generator like Jekyll or Hugo, you can just run npx wrangler deploy --x-autoconfig to upload to Cloudflare.

We're really excited to bring you automatic configuration so you can do more with Workers. Please let us know if you run into challenges while using this experimental feature. We’ve opened a GitHub discussion and would love to hear your feedback.

  1. A new Rules of Durable Objects guide is now available, providing opinionated best practices for building effective Durable Objects applications. This guide covers design patterns, storage strategies, concurrency, and common anti-patterns to avoid.

    Key guidance includes:

    • Design around your "atom" of coordination — Create one Durable Object per logical unit (chat room, game session, user) instead of a global singleton that becomes a bottleneck.
    • Use SQLite storage with RPC methods — SQLite-backed Durable Objects with typed RPC methods provide the best developer experience and performance.
    • Understand input and output gates — Learn how Cloudflare's runtime prevents data races by default, how write coalescing works, and when to use blockConcurrencyWhile().
    • Leverage Hibernatable WebSockets — Reduce costs for real-time applications by allowing Durable Objects to sleep while maintaining WebSocket connections.

    The testing documentation has also been updated with modern patterns using @cloudflare/vitest-pool-workers, including examples for testing SQLite storage, alarms, and direct instance access:

    test/counter.test.js
import { env, runDurableObjectAlarm } from "cloudflare:test";
import { it, expect } from "vitest";

it("can test Durable Objects with isolated storage", async () => {
  const stub = env.COUNTER.getByName("test");

  // Call RPC methods directly on the stub
  await stub.increment();
  expect(await stub.getCount()).toBe(1);

  // Trigger alarms immediately without waiting
  await runDurableObjectAlarm(stub);
});
  1. Storage billing for SQLite-backed Durable Objects will be enabled in January 2026, with a target date of January 7, 2026 (no earlier).

To view your SQLite storage usage, go to the Durable Objects page in the Cloudflare dashboard:

    Go to Durable Objects

    If you do not want to incur costs, please take action such as optimizing queries or deleting unnecessary stored data in order to reduce your SQLite storage usage ahead of the January 7th target. Only usage on and after the billing target date will incur charges.

Developers on the Workers Paid plan whose Durable Objects SQLite storage usage exceeds the included limits will incur charges according to the SQLite storage pricing announced in September 2024 with the public beta. Developers on the Workers Free plan will not be charged.

    Compute billing for SQLite-backed Durable Objects has been enabled since the initial public beta. SQLite-backed Durable Objects currently incur charges for requests and duration, and no changes are being made to compute billing.

    For more information about SQLite storage pricing and limits, refer to the Durable Objects pricing documentation.

R2 SQL now supports aggregation functions, GROUP BY, and HAVING, along with schema discovery commands that make it easy to explore your data catalog.

    Aggregation Functions

    You can now perform aggregations on Apache Iceberg tables in R2 Data Catalog using standard SQL functions including COUNT(*), SUM(), AVG(), MIN(), and MAX(). Combine these with GROUP BY to analyze data across dimensions, and use HAVING to filter aggregated results.

    -- Calculate average transaction amounts by department
    SELECT department, COUNT(*), AVG(total_amount)
    FROM my_namespace.sales_data
    WHERE region = 'North'
    GROUP BY department
    HAVING COUNT(*) > 50
    ORDER BY AVG(total_amount) DESC
    -- Find high-value departments
    SELECT department, SUM(total_amount)
    FROM my_namespace.sales_data
    GROUP BY department
    HAVING SUM(total_amount) > 50000

    Schema Discovery

    New metadata commands make it easy to explore your data catalog and understand table structures:

    • SHOW DATABASES or SHOW NAMESPACES - List all available namespaces
    • SHOW TABLES IN namespace_name - List tables within a namespace
    • DESCRIBE namespace_name.table_name - View table schema and column types
    Terminal window
    npx wrangler r2 sql query "{ACCOUNT_ID}_{BUCKET_NAME}" "DESCRIBE default.sales_data;"
⛅️ wrangler 4.54.0
─────────────────────────────────────────────
┌──────────────────┬────────────────┬──────────┬─────────────────┬───────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────┐
│ column_name      │ type           │ required │ initial_default │ write_default │ doc                                                                                               │
├──────────────────┼────────────────┼──────────┼─────────────────┼───────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────┤
│ sale_id          │ BIGINT         │ false    │                 │               │ Unique identifier for each sales transaction                                                      │
│ sale_timestamp   │ TIMESTAMPTZ    │ false    │                 │               │ Exact date and time when the sale occurred (used for partitioning)                                │
│ department       │ TEXT           │ false    │                 │               │ Product department (8 categories: Electronics, Beauty, Home, Toys, Sports, Food, Clothing, Books) │
│ category         │ TEXT           │ false    │                 │               │ Product category grouping (4 categories: Premium, Standard, Budget, Clearance)                    │
│ region           │ TEXT           │ false    │                 │               │ Geographic sales region (5 regions: North, South, East, West, Central)                            │
│ product_id       │ INT            │ false    │                 │               │ Unique identifier for the product sold                                                            │
│ quantity         │ INT            │ false    │                 │               │ Number of units sold in this transaction (range: 1-50)                                            │
│ unit_price       │ DECIMAL(10, 2) │ false    │                 │               │ Price per unit in dollars (range: $5.00-$500.00)                                                  │
│ total_amount     │ DECIMAL(10, 2) │ false    │                 │               │ Total sale amount before tax (quantity × unit_price with discounts applied)                       │
│ discount_percent │ INT            │ false    │                 │               │ Discount percentage applied to this sale (0-50%)                                                  │
│ tax_amount       │ DECIMAL(10, 2) │ false    │                 │               │ Tax amount collected on this sale                                                                 │
│ profit_margin    │ DECIMAL(10, 2) │ false    │                 │               │ Profit margin on this sale as a decimal percentage                                                │
│ customer_id      │ INT            │ false    │                 │               │ Unique identifier for the customer who made the purchase                                          │
│ is_online_sale   │ BOOLEAN        │ false    │                 │               │ Boolean flag indicating if sale was made online (true) or in-store (false)                        │
│ sale_date        │ DATE           │ false    │                 │               │ Calendar date of the sale (extracted from sale_timestamp)                                         │
└──────────────────┴────────────────┴──────────┴─────────────────┴───────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────┘
Read 0 B across 0 files from R2
On average, 0 B / s

    To learn more about the new aggregation capabilities and schema discovery commands, check out the SQL reference. If you're new to R2 SQL, visit our getting started guide to begin querying your data.