The latest release of the Agents SDK ↗ adds built-in retry utilities, per-connection protocol message control, and a fully rewritten `@cloudflare/ai-chat` with data parts, tool approval persistence, and zero breaking changes.
A new `this.retry()` method lets you retry any async operation with exponential backoff and jitter. You can pass an optional `shouldRetry` predicate to bail early on non-retryable errors.

```ts
class MyAgent extends Agent {
  async onRequest(request: Request) {
    const data = await this.retry(() => callUnreliableService(), {
      maxAttempts: 4,
      shouldRetry: (err) => !(err instanceof PermanentError),
    });
    return Response.json(data);
  }
}
```

Retry options are also available per-task on `queue()`, `schedule()`, `scheduleEvery()`, and `addMcpServer()`:

```ts
// Per-task retry configuration, persisted in SQLite alongside the task
await this.schedule(
  Date.now() + 60_000,
  "sendReport",
  { userId: "abc" },
  {
    retry: { maxAttempts: 5 },
  },
);

// Class-level retry defaults
class MyAgent extends Agent {
  static options = {
    retry: { maxAttempts: 3 },
  };
}
```

Retry options are validated eagerly at enqueue/schedule time, and invalid values throw immediately. Internal retries have also been added for workflow operations (`terminateWorkflow`, `pauseWorkflow`, and others) with Durable Object-aware error detection.

Agents automatically send JSON text frames (identity, state, MCP server lists) to every WebSocket connection. You can now suppress these per-connection for clients that cannot handle them — binary-only devices, MQTT clients, or lightweight embedded systems.
```ts
class MyAgent extends Agent {
  shouldSendProtocolMessages(connection: Connection, ctx: ConnectionContext) {
    // Suppress protocol messages for MQTT clients
    const subprotocol = ctx.request.headers.get("Sec-WebSocket-Protocol");
    return subprotocol !== "mqtt";
  }
}
```

Connections with protocol messages disabled still fully participate in RPC and regular messaging. Use `isConnectionProtocolEnabled(connection)` to check a connection's status at any time. The flag persists across Durable Object hibernation.

See Protocol messages for full documentation.
The first stable release of `@cloudflare/ai-chat` ships alongside this release with a major refactor of `AIChatAgent` internals — a new `ResumableStream` class, `WebSocketChatTransport`, and simplified SSE parsing — with zero breaking changes. Existing code using `AIChatAgent` and `useAgentChat` works as-is.

Key new features:
- Data parts — Attach typed JSON blobs (`data-*`) to messages alongside text. Supports reconciliation (type+id updates in-place), append, and transient parts (ephemeral via the `onData` callback). See Data parts.
- Tool approval persistence — The `needsApproval` approval UI now survives page refresh and DO hibernation. The streaming message is persisted to SQLite when a tool enters the `approval-requested` state.
- `maxPersistedMessages` — Cap SQLite message storage with automatic oldest-message deletion.
- `body` option on `useAgentChat` — Send custom data with every request (static or dynamic); see the sketch after this list.
- Incremental persistence — Hash-based cache to skip redundant SQL writes.
- Row size guard — Automatic two-pass compaction when messages approach the SQLite 2 MB limit.
- `autoContinueAfterToolResult` defaults to `true` — Client-side tool results and tool approvals now automatically trigger a server continuation, matching server-executed tool behavior. Set `autoContinueAfterToolResult: false` in `useAgentChat` to restore the previous behavior.
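As a rough illustration, a `useAgentChat` call combining the new client-side options might look like the following sketch. The option shapes are assumptions based on the descriptions above, not verified against the released API:

```ts
import { useAgent } from "agents/react";
import { useAgentChat } from "agents/ai-react";

// Sketch: combining the new client-side options described above.
const agent = useAgent({ agent: "my-chat-agent" });
const { messages, sendMessage } = useAgentChat({
  agent,
  // Send custom data with every request (a static value or a function)
  body: { workspaceId: "demo" },
  // Restore the previous behavior of not auto-continuing after tool results
  autoContinueAfterToolResult: false,
});
```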
Notable bug fixes:
- Resolved stream resumption race conditions
- Resolved an issue where the `setMessages` functional updater sent empty arrays
- Resolved an issue where client tool schemas were lost after DO hibernation
- Resolved `InvalidPromptError` after tool approval (`approval.id` was dropped)
- Resolved an issue where message metadata was not propagated on broadcast/resume paths
- Resolved an issue where `clearAll()` did not clear in-memory chunk buffers
- Resolved an issue where `reasoning-delta` silently dropped data when `reasoning-start` was missed during stream resumption
- `getQueue()`, `getQueues()`, `getSchedule()`, `dequeue()`, `dequeueAll()`, and `dequeueAllByCallback()` were unnecessarily `async` despite only performing synchronous SQL operations. They now return values directly instead of wrapping them in Promises. This is backward compatible — existing code using `await` on these methods will continue to work.
- Fix TypeScript "excessively deep" error — A depth counter on the `CanSerialize` and `IsSerializableParam` types bails out to `true` after 10 levels of recursion, preventing the "Type instantiation is excessively deep" error with deeply nested types like the AI SDK's `CoreMessage[]`.
- POST SSE keepalive — The POST SSE handler now sends `event: ping` every 30 seconds to keep the connection alive, matching the existing GET SSE handler behavior. This prevents POST response streams from being silently dropped by proxies during long-running tool calls.
- Widened peer dependency ranges — Peer dependency ranges across packages have been widened to prevent cascading major bumps during 0.x minor releases. `@cloudflare/ai-chat` and `@cloudflare/codemode` are now marked as optional peer dependencies.
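For example, under the synchronous accessor change above, a schedule can be read back without `await`. This is a minimal sketch; the `getSchedule` usage is assumed to match the existing Agents API:

```ts
class MyAgent extends Agent {
  async onRequest(request: Request) {
    const { id } = await this.schedule(60, "sendReport", { userId: "abc" });
    // No await needed: getSchedule() now returns the value directly,
    // though awaiting it still works for backward compatibility.
    const schedule = this.getSchedule(id);
    return Response.json(schedule);
  }
}
```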
To update to the latest version:
```sh
npm i agents@latest @cloudflare/ai-chat@latest
```
Cloudflare has deprecated the Workers Quick Editor dev tools inspector and replaced it with a lightweight log viewer.
This aligns the Quick Editor's logging with `wrangler tail` and lets us focus on bringing the benefits of our observability investment to the editor, which would not be possible with the old inspector.

Based on your feedback, we have improved the log viewer so that you can log object and array types (class instances are not included) and easily clear the list of logs. Limitations are documented in the Workers Playground docs.
If you do need to develop your Worker with a remote inspector, you can still do so using Wrangler locally. You can clone a project from the Quick Editor to your computer for local development with the `wrangler init --from-dash` command. For more information, refer to Wrangler commands.
A new Workers Best Practices guide provides opinionated recommendations for building fast, reliable, observable, and secure Workers. The guide draws on production patterns, Cloudflare internal usage, and best practices observed from developers building on Workers.
Key guidance includes:
- Keep your compatibility date current and enable `nodejs_compat` — Ensure you have access to the latest runtime features and Node.js built-in modules.

  ```jsonc
  {
    "name": "my-worker",
    "main": "src/index.ts",
    // Set this to today's date
    "compatibility_date": "2026-04-29",
    "compatibility_flags": ["nodejs_compat"],
  }
  ```

  ```toml
  name = "my-worker"
  main = "src/index.ts"
  # Set this to today's date
  compatibility_date = "2026-04-29"
  compatibility_flags = [ "nodejs_compat" ]
  ```

- Generate binding types with `wrangler types` — Never hand-write your `Env` interface. Let Wrangler generate it from your actual configuration to catch mismatches at compile time.
- Stream request and response bodies — Avoid buffering large payloads in memory. Use `TransformStream` and `pipeTo` to stay within the 128 MB memory limit and improve time-to-first-byte (see the sketch after this list).
- Use bindings, not REST APIs — Bindings to KV, R2, D1, Queues, and other Cloudflare services are direct, in-process references with no network hop and no authentication overhead.
- Use Queues and Workflows for background work — Move long-running or retriable tasks out of the critical request path. Use Queues for simple fan-out and buffering, and Workflows for multi-step durable processes.
- Enable Workers Logs and Traces — Configure observability before deploying to production so you have data when you need to debug.
- Avoid global mutable state — Workers reuse isolates across requests. Storing request-scoped data in module-level variables causes cross-request data leaks.
- Always `await` or `waitUntil` your Promises — Floating promises cause silent bugs and dropped work.
- Use Web Crypto for secure token generation — Never use `Math.random()` for security-sensitive operations.
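As a minimal illustration of the streaming guidance above, the sketch below pipes an upstream response through a `TransformStream` instead of buffering it. The upstream URL is a placeholder:

```ts
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    // Placeholder upstream; swap in your own origin or binding.
    const upstream = await fetch("https://example.com/large-file");

    // Pass chunks through without buffering the whole body in memory.
    const { readable, writable } = new TransformStream();
    // Keep the copy running past the response return without blocking it.
    ctx.waitUntil(upstream.body!.pipeTo(writable));

    return new Response(readable, {
      status: upstream.status,
      headers: upstream.headers,
    });
  },
} satisfies ExportedHandler;
```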
To learn more, refer to Workers Best Practices.
We're excited to announce GLM-4.7-Flash on Workers AI, a fast and efficient text generation model optimized for multilingual dialogue and instruction-following tasks, along with the brand-new @cloudflare/tanstack-ai ↗ package and workers-ai-provider v3.1.1 ↗.
You can now run AI agents entirely on Cloudflare. With GLM-4.7-Flash's multi-turn tool calling support, plus full compatibility with TanStack AI and the Vercel AI SDK, you have everything you need to build agentic applications that run completely at the edge.
`@cf/zai-org/glm-4.7-flash` is a multilingual model with a 131,072-token context window, making it ideal for long-form content generation, complex reasoning tasks, and multilingual applications.

Key Features and Use Cases:
- Multi-turn Tool Calling for Agents: Build AI agents that can call functions and tools across multiple conversation turns
- Multilingual Support: Built to handle content generation in multiple languages effectively
- Large Context Window: 131,072 tokens for long-form writing, complex reasoning, and processing long documents
- Fast Inference: Optimized for low-latency responses in chatbots and virtual assistants
- Instruction Following: Excellent at following complex instructions for code generation and structured tasks
Use GLM-4.7-Flash through the Workers AI binding (`env.AI.run()`), the REST API at `/run` or `/v1/chat/completions`, AI Gateway, or via workers-ai-provider for the Vercel AI SDK. A minimal binding example is shown below. Pricing is available on the model page or pricing page.
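For instance, a minimal Worker calling the model through the binding might look like this sketch. The exact response shape depends on the model, so treat the direct JSON return as illustrative:

```ts
export default {
  async fetch(request: Request, env: { AI: Ai }): Promise<Response> {
    // Run a simple multilingual chat completion against GLM-4.7-Flash.
    const result = await env.AI.run("@cf/zai-org/glm-4.7-flash", {
      messages: [
        { role: "system", content: "You are a concise assistant." },
        { role: "user", content: "Summarize what Workers AI is in one sentence." },
      ],
    });
    return Response.json(result);
  },
};
```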
We've released `@cloudflare/tanstack-ai`, a new package that brings Workers AI and AI Gateway support to TanStack AI ↗. This provides a framework-agnostic alternative for developers who prefer TanStack's approach to building AI applications.

Workers AI adapters support four configuration modes — plain binding (`env.AI`), plain REST, AI Gateway binding (`env.AI.gateway(id)`), and AI Gateway REST — across all capabilities:

- Chat (`createWorkersAiChat`) — Streaming chat completions with tool calling, structured output, and reasoning text streaming.
- Image generation (`createWorkersAiImage`) — Text-to-image models.
- Transcription (`createWorkersAiTranscription`) — Speech-to-text.
- Text-to-speech (`createWorkersAiTts`) — Audio generation.
- Summarization (`createWorkersAiSummarize`) — Text summarization.
AI Gateway adapters route requests from third-party providers — OpenAI, Anthropic, Gemini, Grok, and OpenRouter — through Cloudflare AI Gateway for caching, rate limiting, and unified billing.
To get started:
```sh
npm install @cloudflare/tanstack-ai @tanstack/ai
```

The Workers AI provider for the Vercel AI SDK ↗ now supports three new capabilities beyond chat and image generation:
- Transcription (`provider.transcription(model)`) — Speech-to-text with automatic handling of model-specific input formats across binding and REST paths.
- Text-to-speech (`provider.speech(model)`) — Audio generation with support for voice and speed options.
- Reranking (`provider.reranking(model)`) — Document reranking for RAG pipelines and search result ordering.
```ts
import { createWorkersAI } from "workers-ai-provider";
import {
  experimental_transcribe,
  experimental_generateSpeech,
  rerank,
} from "ai";

const workersai = createWorkersAI({ binding: env.AI });

const transcript = await experimental_transcribe({
  model: workersai.transcription("@cf/openai/whisper-large-v3-turbo"),
  audio: audioData,
  mediaType: "audio/wav",
});

const speech = await experimental_generateSpeech({
  model: workersai.speech("@cf/deepgram/aura-1"),
  text: "Hello world",
  voice: "asteria",
});

const ranked = await rerank({
  model: workersai.reranking("@cf/baai/bge-reranker-base"),
  query: "What is machine learning?",
  documents: ["ML is a branch of AI.", "The weather is sunny."],
});
```

This release also includes a comprehensive reliability overhaul (v3.0.5):
- Fixed streaming — Responses now stream token-by-token instead of buffering all chunks, using a proper `TransformStream` pipeline with backpressure.
- Fixed tool calling — Resolved issues with tool call ID sanitization, conversation history preservation, and a heuristic that silently fell back to non-streaming mode when tools were defined.
- Premature stream termination detection — Streams that end unexpectedly now report `finishReason: "error"` instead of silently reporting `"stop"`.
- AI Search support — Added `createAISearch` as the canonical export (renamed from AutoRAG). `createAutoRAG` still works with a deprecation warning.
To upgrade:
```sh
npm install workers-ai-provider@latest ai
```
Workers no longer have a limit of 1,000 subrequests per invocation, allowing you to make more `fetch()` calls or requests to Cloudflare services on every incoming request. This is especially important for long-running Workers requests, such as open WebSockets on Durable Objects or long-running Workflows, which could often exceed this limit and error.

By default, Workers on paid plans are now limited to 10,000 subrequests per invocation, but this limit can be increased up to 10 million by setting the new `subrequests` limit in your Wrangler configuration file.

```jsonc
{
  "limits": {
    "subrequests": 50000,
  },
}
```

```toml
[limits]
subrequests = 50_000
```

Workers on the free plan remain limited to 50 external subrequests and 1,000 subrequests to Cloudflare services per invocation.
To protect against runaway code or unexpected costs, you can also set a lower limit for both subrequests and CPU usage.
JSONC {"limits": {"subrequests": 10,"cpu_ms": 1000,},}TOML [limits]subrequests = 10cpu_ms = 1_000For more information, refer to the Wrangler configuration documentation for limits and subrequest limits.
The Cloudflare Vite plugin now integrates seamlessly with @vitejs/plugin-rsc ↗, the official Vite plugin for React Server Components ↗.
A `childEnvironments` option has been added to the plugin config to enable using multiple environments within a single Worker. The parent environment can then import modules from a child environment in order to access a separate module graph. For a typical RSC use case, the plugin might be configured as in the following example:

```ts
// vite.config.ts
export default defineConfig({
  plugins: [
    cloudflare({
      viteEnvironment: {
        name: "rsc",
        childEnvironments: ["ssr"],
      },
    }),
  ],
});
```

`@vitejs/plugin-rsc` provides the lower-level functionality that frameworks, such as React Router ↗, build upon. The GitHub repository includes a basic Cloudflare example ↗.
The latest release of the Agents SDK ↗ brings readonly connections, MCP protocol and security improvements, x402 payment protocol v2 migration, and the ability to customize OAuth for MCP server connections.
Agents can now restrict WebSocket clients to read-only access, preventing them from modifying agent state. This is useful for dashboards, spectator views, or any scenario where clients should observe but not mutate.
New hooks: `shouldConnectionBeReadonly`, `setConnectionReadonly`, `isConnectionReadonly`. Readonly connections block both client-side `setState()` and mutating `@callable()` methods, and the readonly flag survives hibernation.

```ts
class MyAgent extends Agent {
  shouldConnectionBeReadonly(connection) {
    // Make spectators readonly
    return connection.url.includes("spectator");
  }
}
```

The new `createMcpOAuthProvider` method on the `Agent` class allows subclasses to override the default OAuth provider used when connecting to MCP servers. This enables custom authentication strategies such as pre-registered client credentials or mTLS, beyond the built-in dynamic client registration.

```ts
class MyAgent extends Agent {
  createMcpOAuthProvider(callbackUrl: string): AgentMcpOAuthProvider {
    return new MyCustomOAuthProvider(this.ctx.storage, this.name, callbackUrl);
  }
}
```

Upgraded the MCP SDK to 1.26.0 to prevent cross-client response leakage. Stateless MCP servers should now create a new `McpServer` instance per request instead of sharing a single instance. This version of the MCP SDK adds a guard that prevents connecting to a server instance that has already been connected to a transport. Developers will need to modify their code if they declare their `McpServer` instance as a global variable.

Added a `callbackPath` option to `addMcpServer` to prevent instance name leakage in MCP OAuth callback URLs. When `sendIdentityOnConnect` is `false`, `callbackPath` is now required — the default callback URL would expose the instance name, undermining the security intent. Also fixes callback request detection to match via the `state` parameter instead of a loose `/callback` URL substring check, enabling custom callback paths.

`onStateChanged` is a drop-in rename of `onStateUpdate` (same signature, same behavior). `onStateUpdate` still works but emits a one-time console warning per class. `validateStateChange` rejections now propagate a `CF_AGENT_STATE_ERROR` message back to the client.

Migrated the x402 MCP payment integration from the legacy `x402` package to `@x402/core` and `@x402/evm` v2.

Breaking changes for x402 users:
- Peer dependencies changed: replace `x402` with `@x402/core` and `@x402/evm`
- `PaymentRequirements` type now uses v2 fields (e.g. `amount` instead of `maxAmountRequired`)
- `X402ClientConfig.account` type changed from `viem.Account` to `ClientEvmSigner` (structurally compatible with `privateKeyToAccount()`)

```sh
npm uninstall x402
npm install @x402/core @x402/evm
```

Network identifiers now accept both legacy names and CAIP-2 format:
```ts
// Legacy name (auto-converted)
{
  network: "base-sepolia",
}

// CAIP-2 format (preferred)
{
  network: "eip155:84532",
}
```

Other x402 changes:
- `X402ClientConfig.network` is now optional — the client auto-selects from available payment requirements
- Server-side lazy initialization: facilitator connection is deferred until the first paid tool invocation
- Payment tokens support both v2 (`PAYMENT-SIGNATURE`) and v1 (`X-PAYMENT`) HTTP headers
- Added a `normalizeNetwork` export for converting legacy network names to CAIP-2 format (see the sketch after this list)
- Re-exports `PaymentRequirements`, `PaymentRequired`, `Network`, `FacilitatorConfig`, and `ClientEvmSigner` from `agents/x402`
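As a rough sketch of how the `normalizeNetwork` helper could be used (the return value shown follows the legacy-to-CAIP-2 mapping in the example above and is not verified against the implementation):

```ts
import { normalizeNetwork } from "agents/x402";

// Convert a legacy network name to its CAIP-2 identifier.
const network = normalizeNetwork("base-sepolia");
console.log(network); // expected: "eip155:84532"
```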
- Fix `useAgent` and `AgentClient` crashing when using `basePath` routing
- CORS handling delegated to partyserver's native support (simpler, more reliable)
- Client-side `onStateUpdateError` callback for handling rejected state updates
To update to the latest version:
```sh
npm i agents@latest
```
The Workers Observability dashboard ↗ has some major updates to make it easier to debug your application's issues and share findings with your team.

You can now:
- Create visualizations — Build charts from your Worker data directly in a Worker's Observability tab
- Export data as JSON or CSV — Download logs and traces for offline analysis or to share with teammates
- Share events and traces — Generate direct URLs to specific events, invocations, and traces that open standalone pages with full context
- Customize table columns — Improved field picker to add, remove, and reorder columns in the events table
- Expandable event details — Expand events inline to view full details without leaving the table
- Keyboard shortcuts — Navigate the dashboard with hotkey support

These updates are now live in the Cloudflare dashboard, both in a Worker's Observability tab and in the account-level Observability dashboard for a unified experience. To get started, go to Workers & Pages > select your Worker > Observability.
Cloudflare Workflows now automatically generates visual diagrams from your code.
Your Workflow is parsed to provide a visual map of the Workflow structure, allowing you to:
- Understand how steps connect and execute
- Visualize loops and nested logic
- Follow branching paths for conditional logic

You can collapse loops and nested logic to see the high-level flow, or expand them to see every step.
Workflow diagrams are available in beta for all JavaScript and TypeScript Workflows. Find your Workflows in the Cloudflare dashboard ↗ to see their diagrams.
You can now configure Workers to run close to infrastructure in legacy cloud regions to minimize latency to existing services and databases. This is most useful when your Worker makes multiple round trips.
To set a placement hint, set the `placement.region` property in your Wrangler configuration file:

```jsonc
{
  "placement": {
    "region": "aws:us-east-1",
  },
}
```

```toml
[placement]
region = "aws:us-east-1"
```

Placement hints support Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure region identifiers. Workers run in the Cloudflare data center ↗ with the lowest latency to the specified cloud region.
If your existing infrastructure is not in these cloud providers, expose it to placement probes with `placement.host` for layer 4 checks or `placement.hostname` for layer 7 checks. These probes are designed to locate single-homed infrastructure and are not suitable for anycasted or multicasted resources.

```jsonc
{
  "placement": {
    "host": "my_database_host.com:5432",
  },
}
```

```toml
[placement]
host = "my_database_host.com:5432"
```

```jsonc
{
  "placement": {
    "hostname": "my_api_server.com",
  },
}
```

```toml
[placement]
hostname = "my_api_server.com"
```

This is an extension of Smart Placement, which automatically places your Workers closer to back-end APIs based on measured latency. When you do not know the location of your back-end APIs, or have multiple back-end APIs, set `mode: "smart"`:

```jsonc
{
  "placement": {
    "mode": "smart",
  },
}
```

```toml
[placement]
mode = "smart"
```
Auxiliary Workers are now fully supported when using full-stack frameworks, such as React Router and TanStack Start, that integrate with the Cloudflare Vite plugin. They are included alongside the framework's build output in the build output directory. Note that this feature requires Vite 7 or above.
Auxiliary Workers are additional Workers that can be called via service bindings from your main (entry) Worker. They are defined in the plugin config, as in the example below:
```ts
// vite.config.ts
import { defineConfig } from "vite";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    tanstackStart(),
    cloudflare({
      viteEnvironment: { name: "ssr" },
      auxiliaryWorkers: [{ configPath: "./wrangler.aux.jsonc" }],
    }),
  ],
});
```

See the Vite plugin API docs for more info.
The `.sql` file extension is now automatically configured to be importable in your Worker code when using Wrangler or the Cloudflare Vite plugin. This is particularly useful for importing migrations in Durable Objects and means you no longer need to configure custom rules when using Drizzle ↗.

SQL files are imported as JavaScript strings:
```ts
// `example` will be a JavaScript string
import example from "./example.sql";
```
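For example, a SQLite-backed Durable Object could apply an imported migration on startup. This is a minimal sketch; the file name, class name, and table contents are placeholders:

```ts
import { DurableObject } from "cloudflare:workers";
// Imported as a string containing the SQL statements in the file.
import migration from "./migrations/0001_init.sql";

export class Counter extends DurableObject {
  constructor(ctx: DurableObjectState, env: unknown) {
    super(ctx, env);
    // Apply the migration against the Durable Object's SQLite storage
    // (assumes the class uses the SQLite storage backend).
    this.ctx.storage.sql.exec(migration);
  }
}
```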
The `wrangler types` command now generates TypeScript types for bindings from all environments defined in your Wrangler configuration file by default.
Previously, `wrangler types` only generated types for bindings in the top-level configuration (or a single environment when using the `--env` flag). This meant that if you had environment-specific bindings — for example, a KV namespace only in production or an R2 bucket only in staging — those bindings would be missing from your generated types, causing TypeScript errors when accessing them.

Now, running `wrangler types` collects bindings from all environments and includes them in the generated `Env` type. This ensures your types are complete regardless of which environment you deploy to.

If you want the previous behavior of generating types for only a specific environment, you can use the `--env` flag:

```sh
wrangler types --env production
```

Learn more about generating types for your Worker in the Wrangler documentation.
Wrangler now supports a `--check` flag for the `wrangler types` command. This flag validates that your generated types are up to date without writing any changes to disk.

This is useful in CI/CD pipelines where you want to ensure that developers have regenerated their types after making changes to their Wrangler configuration. If the types are out of date, the command will exit with a non-zero status code.
```sh
npx wrangler types --check
```

If your types are up to date, the command will succeed silently. If they are out of date, you'll see an error message indicating which files need to be regenerated.
For more information, see the Wrangler types documentation.
You can now receive notifications when your Workers' builds start, succeed, fail, or get cancelled using Event Subscriptions.
Workers Builds publishes events to a Queue; your Worker can read those messages and then send notifications wherever you need — Slack, Discord, email, or any webhook endpoint.
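As a rough sketch of what such a consumer Worker could look like (the event payload fields and the `SLACK_WEBHOOK_URL` secret are assumptions for illustration; refer to the template and the Event Subscriptions documentation for the actual schema):

```ts
interface Env {
  // Hypothetical secret holding a Slack incoming-webhook URL.
  SLACK_WEBHOOK_URL: string;
}

export default {
  // Consume build events from the Queue the subscription publishes to.
  async queue(batch: MessageBatch<unknown>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      // Forward the raw event body; field names depend on the event schema.
      await fetch(env.SLACK_WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          text: `Build event: ${JSON.stringify(message.body)}`,
        }),
      });
      message.ack();
    }
  },
};
```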
You can deploy this Worker ↗ to your own Cloudflare account to send build notifications to Slack:
The template includes:
- Build status with Preview/Live URLs for successful deployments
- Inline error messages for failed builds
- Branch, commit hash, and author name

For setup instructions, refer to the template README ↗ or the Event Subscriptions documentation.
Wrangler now includes built-in shell tab completion support, making it faster and easier to navigate commands without memorizing every option. Press Tab as you type to autocomplete commands, subcommands, flags, and even option values like log levels.
Tab completions are supported for Bash, Zsh, Fish, and PowerShell.
Generate the completion script for your shell and add it to your configuration file:
```sh
# Bash
wrangler complete bash >> ~/.bashrc

# Zsh
wrangler complete zsh >> ~/.zshrc

# Fish
wrangler complete fish >> ~/.config/fish/config.fish

# PowerShell
wrangler complete powershell >> $PROFILE
```

After adding the script, restart your terminal or source your configuration file for the changes to take effect. Then you can simply press Tab to see available completions:
```sh
wrangler d<TAB>   # completes to 'deploy', 'dev', 'd1', etc.
wrangler kv <TAB> # shows subcommands: namespace, key, bulk
```

Tab completions are dynamically generated from Wrangler's command registry, so they stay up to date as new commands and options are added. This feature is powered by `@bomb.sh/tab` ↗.

See the `wrangler complete` documentation for more details.
You can now use the `HAVING` clause and `LIKE` pattern matching operators in Workers Analytics Engine ↗.

Workers Analytics Engine allows you to ingest and store high-cardinality data at scale and query your data through a simple SQL API.
The `HAVING` clause complements the `WHERE` clause by enabling you to filter groups based on aggregate values. While `WHERE` filters rows before aggregation, `HAVING` filters groups after aggregation is complete.

You can use `HAVING` to filter groups where the average exceeds a threshold:

```sql
SELECT
  blob1 AS probe_name,
  avg(double1) AS average_temp
FROM temperature_readings
GROUP BY probe_name
HAVING average_temp > 10
```

You can also filter groups based on aggregates such as the number of items in the group:
```sql
SELECT
  blob1 AS probe_name,
  count() AS num_readings
FROM temperature_readings
GROUP BY probe_name
HAVING num_readings > 100
```

The new pattern matching operators enable you to search for strings that match specific patterns using wildcard characters:
- `LIKE` - case-sensitive pattern matching
- `NOT LIKE` - case-sensitive pattern exclusion
- `ILIKE` - case-insensitive pattern matching
- `NOT ILIKE` - case-insensitive pattern exclusion
Pattern matching supports two wildcard characters: `%` (matches zero or more characters) and `_` (matches exactly one character).

You can match strings starting with a prefix:
```sql
SELECT *
FROM logs
WHERE blob1 LIKE 'error%'
```

You can also match file extensions (case-insensitive):
```sql
SELECT *
FROM requests
WHERE blob2 ILIKE '%.jpg'
```

Another example is excluding strings containing specific text:
```sql
SELECT *
FROM events
WHERE blob3 NOT ILIKE '%debug%'
```

Learn more about the `HAVING` clause or pattern matching operators in the Workers Analytics Engine SQL reference documentation.
You can now deploy microfrontends to Cloudflare, splitting a single application into smaller, independently deployable units that render as one cohesive application. This lets different teams using different frameworks develop, test, and deploy each microfrontend without coordinating releases.
Microfrontends solve several challenges for large-scale applications:
- Independent deployments: Teams deploy updates on their own schedule without redeploying the entire application
- Framework flexibility: Build multi-framework applications (for example, Astro, Remix, and Next.js in one app)
- Gradual migration: Migrate from a monolith to a distributed architecture incrementally
Create a microfrontend project from the microfrontends template (linked below). The template automatically creates a router Worker with pre-configured routing logic, and lets you configure Service bindings to Workers you have already deployed to your Cloudflare account.

The router Worker analyzes incoming requests, matches them against configured routes, and forwards them to the appropriate microfrontend via service bindings. It automatically rewrites HTML, CSS, and headers so that assets load correctly from each microfrontend's mount path, and includes advanced features like preloading for faster navigation between microfrontends, smooth page transitions using the View Transitions API, and automatic path rewriting for assets, redirects, and cookies.
Each microfrontend can be a full-framework application, a static site with Workers Static Assets, or any other Worker-based application.
Get started with the microfrontends template ↗, or read the microfrontends documentation for implementation details.
Agents SDK v0.3.0, workers-ai-provider v3.0.0, and ai-gateway-provider v3.0.0 with AI SDK v6 support
We've shipped a new release of the Agents SDK ↗, v0.3.0, bringing full compatibility with AI SDK v6 ↗ and introducing the unified tool pattern, dynamic tool approval, and enhanced React hooks with improved tool handling.
This release includes improved streaming and tool support, dynamic tool approval (for "human in the loop" systems), enhanced React hooks with an `onToolCall` callback, improved error handling for streaming responses, and seamless migration from v5 patterns.

This makes it ideal for building production AI chat interfaces with Cloudflare Workers AI models, agent workflows, human-in-the-loop systems, or any application requiring reliable tool execution and approval workflows.
Additionally, we've updated workers-ai-provider v3.0.0, the official provider for Cloudflare Workers AI models, and ai-gateway-provider v3.0.0, the provider for Cloudflare AI Gateway, to be compatible with AI SDK v6.
AI SDK v6 introduces a unified tool pattern where all tools are defined on the server using the `tool()` function. This replaces the previous client-side `AITool` pattern.

```ts
import { tool } from "ai";
import { z } from "zod";

// Server: Define ALL tools on the server
const tools = {
  // Server-executed tool
  getWeather: tool({
    description: "Get weather for a city",
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => fetchWeather(city),
  }),

  // Client-executed tool (no execute = client handles via onToolCall)
  getLocation: tool({
    description: "Get user location from browser",
    inputSchema: z.object({}),
    // No execute function
  }),

  // Tool requiring approval (dynamic based on input)
  processPayment: tool({
    description: "Process a payment",
    inputSchema: z.object({ amount: z.number() }),
    needsApproval: async ({ amount }) => amount > 100,
    execute: async ({ amount }) => charge(amount),
  }),
};
```

```ts
// Client: Handle client-side tools via onToolCall callback
import { useAgentChat } from "agents/ai-react";

const { messages, sendMessage, addToolOutput } = useAgentChat({
  agent,
  onToolCall: async ({ toolCall, addToolOutput }) => {
    if (toolCall.toolName === "getLocation") {
      const position = await new Promise((resolve, reject) => {
        navigator.geolocation.getCurrentPosition(resolve, reject);
      });
      addToolOutput({
        toolCallId: toolCall.toolCallId,
        output: {
          lat: position.coords.latitude,
          lng: position.coords.longitude,
        },
      });
    }
  },
});
```

Key benefits of the unified tool pattern:
- Server-defined tools: All tools are defined in one place on the server
- Dynamic approval: Use `needsApproval` to conditionally require user confirmation
- Cleaner client code: Use the `onToolCall` callback instead of managing tool configs
- Type safety: Full TypeScript support with proper tool typing
The `useAgentChat` hook creates a new chat interface with enhanced v6 capabilities.
```ts
// Basic chat setup with onToolCall
const { messages, sendMessage, addToolOutput } = useAgentChat({
  agent,
  onToolCall: async ({ toolCall, addToolOutput }) => {
    // Handle client-side tool execution
    await addToolOutput({
      toolCallId: toolCall.toolCallId,
      output: { result: "success" },
    });
  },
});
```

Use `needsApproval` on server tools to conditionally require user confirmation:

```ts
const paymentTool = tool({
  description: "Process a payment",
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string(),
  }),
  needsApproval: async ({ amount }) => amount > 1000,
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  },
});
```

The `isToolUIPart` and `getToolName` functions now check both static and dynamic tool parts:

```ts
import { isToolUIPart, getToolName } from "ai";

const pendingToolCallConfirmation = messages.some((m) =>
  m.parts?.some(
    (part) => isToolUIPart(part) && part.state === "input-available",
  ),
);

// Handle tool confirmation
if (pendingToolCallConfirmation) {
  await addToolOutput({
    toolCallId: part.toolCallId,
    output: "User approved the action",
  });
}
```

If you need the v5 behavior (static-only checks), use the new functions:
```ts
import { isStaticToolUIPart, getStaticToolName } from "ai";
```

The `convertToModelMessages()` function is now asynchronous. Update all calls to await the result:

```ts
import { convertToModelMessages } from "ai";

const result = streamText({
  messages: await convertToModelMessages(this.messages),
  model: openai("gpt-4o"),
});
```

The `CoreMessage` type has been removed. Use `ModelMessage` instead:

```ts
import { convertToModelMessages, type ModelMessage } from "ai";

const modelMessages: ModelMessage[] = await convertToModelMessages(messages);
```

The `mode` option for `generateObject` has been removed:

```ts
// Before (v5)
const result = await generateObject({ mode: "json", model, schema, prompt });

// After (v6)
const result = await generateObject({ model, schema, prompt });
```

While `generateObject` and `streamObject` are still functional, the recommended approach is to use `generateText`/`streamText` with the `Output.object()` helper:

```ts
import { generateText, Output, stepCountIs } from "ai";

const { output } = await generateText({
  model: openai("gpt-4"),
  output: Output.object({
    schema: z.object({ name: z.string() }),
  }),
  stopWhen: stepCountIs(2),
  prompt: "Generate a name",
});
```

Note: When using structured output with `generateText`, you must configure multiple steps with `stopWhen` because generating the structured output is itself a step.

Seamless integration with Cloudflare Workers AI models is available through the updated workers-ai-provider v3.0.0 with AI SDK v6 support.
Use Cloudflare Workers AI models directly in your agent workflows:
```ts
import { createWorkersAI } from "workers-ai-provider";
import { useAgentChat } from "agents/ai-react";

// Create Workers AI model (v3.0.0 - enhanced v6 internals)
const model = createWorkersAI({
  binding: env.AI,
})("@cf/meta/llama-3.2-3b-instruct");
```

Workers AI models now support v6 file handling with automatic conversion:
```ts
// Send images and files to Workers AI models
sendMessage({
  role: "user",
  parts: [
    { type: "text", text: "Analyze this image:" },
    {
      type: "file",
      data: imageBuffer,
      mediaType: "image/jpeg",
    },
  ],
});
// Workers AI provider automatically converts to proper format
```

Enhanced streaming support with automatic warning detection:
```ts
// Streaming with Workers AI models
const result = await streamText({
  model: createWorkersAI({ binding: env.AI })("@cf/meta/llama-3.2-3b-instruct"),
  messages: await convertToModelMessages(messages),
  onChunk: (chunk) => {
    // Enhanced streaming with warning handling
    console.log(chunk);
  },
});
```

The ai-gateway-provider v3.0.0 now supports AI SDK v6, enabling you to use Cloudflare AI Gateway with multiple AI providers including Anthropic, Azure, AWS Bedrock, Google Vertex, and Perplexity.
Use Cloudflare AI Gateway to add analytics, caching, and rate limiting to your AI applications:
```ts
import { createAIGateway } from "ai-gateway-provider";

// Create AI Gateway provider (v3.0.0 - enhanced v6 internals)
const model = createAIGateway({
  gatewayUrl: "https://gateway.ai.cloudflare.com/v1/your-account-id/gateway",
  headers: {
    Authorization: `Bearer ${env.AI_GATEWAY_TOKEN}`,
  },
})({
  provider: "openai",
  model: "gpt-4o",
});
```

The following APIs are deprecated in favor of the unified tool pattern:
| Deprecated | Replacement |
| --- | --- |
| `AITool` type | Use AI SDK's `tool()` function on the server |
| `extractClientToolSchemas()` | Define tools on the server; no client schemas needed |
| `createToolsFromClientSchemas()` | Define tools on the server with `tool()` |
| `toolsRequiringConfirmation` option | Use `needsApproval` on server tools |
| `experimental_automaticToolResolution` | Use the `onToolCall` callback |
| `tools` option in `useAgentChat` | Use `onToolCall` for client-side execution |
| `addToolResult()` | Use `addToolOutput()` |

- Unified Tool Pattern: All tools must be defined on the server using `tool()`
- `convertToModelMessages()` is async: Add `await` to all calls
- `CoreMessage` removed: Use `ModelMessage` instead
- `generateObject` mode removed: Remove the `mode` option
- `isToolUIPart` behavior changed: Now checks both static and dynamic tool parts
Update your dependencies to use the latest versions:
```sh
npm install agents@^0.3.0 workers-ai-provider@^3.0.0 ai-gateway-provider@^3.0.0 ai@^6.0.0 @ai-sdk/react@^3.0.0 @ai-sdk/openai@^3.0.0
```

- Migration Guide ↗ - Comprehensive migration documentation from v5 to v6
- AI SDK v6 Documentation ↗ - Official AI SDK migration guide
- AI SDK v6 Announcement ↗ - Learn about new features in v6
- AI SDK Documentation ↗ - Complete AI SDK reference
- GitHub Issues ↗ - Report bugs or request features
We'd love your feedback! We're particularly interested in feedback on:
- Migration experience - How smooth was the upgrade from v5 to v6?
- Unified tool pattern - How does the new server-defined tool pattern work for you?
- Dynamic tool approval - Does the `needsApproval` feature meet your needs?
- AI Gateway integration - How well does the new provider work with your setup?
TanStack Start ↗ apps can now prerender routes to static HTML at build time, with access to build-time environment variables and bindings, and serve them as static assets. To enable prerendering, configure the `prerender` option of the TanStack Start plugin in your Vite config:

```ts
// vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";

export default defineConfig({
  plugins: [
    cloudflare({ viteEnvironment: { name: "ssr" } }),
    tanstackStart({
      prerender: {
        enabled: true,
      },
    }),
  ],
});
```

This feature requires `@tanstack/react-start` v1.138.0 or later. See the TanStack Start framework guide for more details.
We've published build image policies for Workers Builds and Cloudflare Pages, which establish:
- Minor version updates: We typically update preinstalled software to the latest available minor version without notice. For tools that don't follow semantic versioning (e.g., Bun or Hugo), we provide 3 months’ notice.
- Major version updates: Before preinstalled software reaches end-of-life, we update to the next stable LTS version with 3 months’ notice.
- Build image version deprecation (Pages only): We provide 6 months’ notice before deprecation. Projects on v1 or v2 will be automatically moved to v3 on their specified deprecation dates.
To prepare for updates, monitor the Cloudflare Changelog ↗, dashboard notifications, and email. You can also override default versions to maintain specific versions.
Wrangler now includes a new `wrangler auth token` command that retrieves your current authentication token or credentials for use with other tools and scripts.

```sh
wrangler auth token
```

The command returns whichever authentication method is currently configured, in priority order: the API token from `CLOUDFLARE_API_TOKEN`, or the OAuth token from `wrangler login` (automatically refreshed if expired).

Use the `--json` flag to get structured output including the token type:

```sh
wrangler auth token --json
```

The JSON output includes the authentication type:
```jsonc
// API token
{ "type": "api_token", "token": "..." }

// OAuth token
{ "type": "oauth", "token": "..." }

// API key/email (only available with --json)
{ "type": "api_key", "key": "...", "email": "..." }
```

API key/email credentials from `CLOUDFLARE_API_KEY` and `CLOUDFLARE_EMAIL` require the `--json` flag since this method uses two values instead of a single token.
The `@cloudflare/vitest-pool-workers` package now supports the `ctx.exports` API, allowing you to access your Worker's top-level exports during tests.

You can access `ctx.exports` in unit tests by calling `createExecutionContext()`:

```ts
import { createExecutionContext } from "cloudflare:test";
import { it, expect } from "vitest";

it("can access ctx.exports", async () => {
  const ctx = createExecutionContext();
  const result = await ctx.exports.MyEntryPoint.myMethod();
  expect(result).toBe("expected value");
});
```

Alternatively, you can import `exports` directly from `cloudflare:workers`:

```ts
import { exports } from "cloudflare:workers";
import { it, expect } from "vitest";

it("can access imported exports", async () => {
  const result = await exports.MyEntryPoint.myMethod();
  expect(result).toBe("expected value");
});
```

See the context-exports fixture ↗ for a complete example.
Wrangler now supports automatic configuration for popular web frameworks in experimental mode, making it even easier to deploy to Cloudflare Workers.
Previously, if you wanted to deploy an application using a popular web framework like Next.js or Astro, you had to follow tutorials to set up your application for deployment to Cloudflare Workers. This usually involved creating a Wrangler file, installing adapters, or changing configuration options.
Now `wrangler deploy` does this for you. Starting with Wrangler 4.55, you can use `npx wrangler deploy --x-autoconfig` in the directory of any web application using one of the supported frameworks. Wrangler will then configure it and deploy it to your Cloudflare account.

You can also configure your application without deploying it by using the new `npx wrangler setup` command. This lets you easily review the changes we make to get your application ready for Cloudflare Workers.

The following application frameworks are supported starting today:
- Next.js
- Astro
- Nuxt
- TanStack Start
- SolidStart
- React Router
- SvelteKit
- Docusaurus
- Qwik
- Analog
Automatic configuration also supports static sites by detecting the assets directory and build command. From a single `index.html` file to the output of a generator like Jekyll or Hugo, you can just run `npx wrangler deploy --x-autoconfig` to upload to Cloudflare.

We're really excited to bring you automatic configuration so you can do more with Workers. Please let us know if you run into challenges while using this experimental feature. We've opened a GitHub discussion ↗ and would love to hear your feedback.
A new Rules of Durable Objects guide is now available, providing opinionated best practices for building effective Durable Objects applications. This guide covers design patterns, storage strategies, concurrency, and common anti-patterns to avoid.
Key guidance includes:
- Design around your "atom" of coordination — Create one Durable Object per logical unit (chat room, game session, user) instead of a global singleton that becomes a bottleneck.
- Use SQLite storage with RPC methods — SQLite-backed Durable Objects with typed RPC methods provide the best developer experience and performance.
- Understand input and output gates — Learn how Cloudflare's runtime prevents data races by default, how write coalescing works, and when to use `blockConcurrencyWhile()`.
- Leverage Hibernatable WebSockets — Reduce costs for real-time applications by allowing Durable Objects to sleep while maintaining WebSocket connections (see the sketch after this list).
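For reference, a minimal Hibernatable WebSockets setup might look like the following sketch; the class name and echo logic are illustrative:

```ts
import { DurableObject } from "cloudflare:workers";

export class Room extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    const { 0: client, 1: server } = new WebSocketPair();
    // Let the runtime manage the socket so the object can hibernate
    // between messages instead of staying pinned in memory.
    this.ctx.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }

  // Invoked when a message arrives, waking the object if it was hibernated.
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    ws.send(`echo: ${message}`);
  }
}
```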
The testing documentation has also been updated with modern patterns using `@cloudflare/vitest-pool-workers`, including examples for testing SQLite storage, alarms, and direct instance access:

```ts
// test/counter.test.ts
import { env, runDurableObjectAlarm } from "cloudflare:test";
import { it, expect } from "vitest";

it("can test Durable Objects with isolated storage", async () => {
  const stub = env.COUNTER.getByName("test");

  // Call RPC methods directly on the stub
  await stub.increment();
  expect(await stub.getCount()).toBe(1);

  // Trigger alarms immediately without waiting
  await runDurableObjectAlarm(stub);
});
```