Changelog

2026-02-17

Chat Completions API support for gpt-oss models and tool calling improvements
  • @cf/openai/gpt-oss-120b and @cf/openai/gpt-oss-20b now support Chat Completions API format. Use /v1/chat/completions with a messages array, or use /ai/run which dynamically detects your input format and accepts Chat Completions (messages), legacy Completions (prompt), or Responses API (input).
  • [Bug fix] Fixed a bug in the schema for multiple text generation models where the content field in message objects only accepted string values. The field now properly accepts both string content and array content (structured content parts for multi-modal inputs). This fix applies to all affected chat models including GPT-OSS models, Llama 3.x, Mistral, Qwen, and others.
  • [Bug fix] Tool call round-trips now work correctly. The binding no longer rejects tool_call_id values that it generated itself, fixing issues with multi-turn tool calling conversations.
  • [Bug fix] Assistant messages with content: null and tool_calls are now accepted in both the Workers AI binding and REST API (/v1/chat/completions), fixing tool call round-trip failures.
  • [Bug fix] Streaming responses now correctly report finish_reason only on the usage chunk, matching OpenAI's streaming behavior and preventing duplicate finish events.
  • [Bug fix] /v1/chat/completions now preserves original tool call IDs from models instead of regenerating them. Previously, the endpoint was generating new IDs which broke multi-turn tool calling because AI SDK clients could not match tool results to their original calls.
  • [Bug fix] /v1/chat/completions now correctly reports finish_reason: "tool_calls" in the final usage chunk when tools are used. Previously, it was hardcoding finish_reason: "stop" which caused AI SDK clients to think the conversation was complete instead of executing tool calls.
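Taken together, the tool-calling fixes above mean a multi-turn round trip like the following now works end to end. This is an illustrative sketch (the tool name, arguments, and id are made up); in a real Worker the follow-up request would go through env.AI.run or /v1/chat/completions:

```javascript
// Sketch of a multi-turn tool-calling exchange. The key points from the
// fixes above: the assistant turn may carry `content: null` alongside
// `tool_calls`, and the follow-up `tool` message must echo the model's
// original `tool_call_id` unchanged (the endpoint no longer regenerates it).
const toolCallId = "call_abc123"; // returned by the model on the first turn

const followUpMessages = [
  { role: "user", content: "What's the weather in Lisbon?" },
  {
    role: "assistant",
    content: null, // accepted now that assistant turns may omit text
    tool_calls: [
      {
        id: toolCallId,
        type: "function",
        function: { name: "get_weather", arguments: '{"city":"Lisbon"}' },
      },
    ],
  },
  {
    role: "tool",
    tool_call_id: toolCallId, // must match the id the model generated
    content: '{"temp_c": 21}',
  },
];

// Second round trip (sketch):
// await env.AI.run("@cf/openai/gpt-oss-120b", { messages: followUpMessages, tools });
```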

2026-02-13

GLM-4.7-Flash, @cloudflare/tanstack-ai, and workers-ai-provider v3.1.1

2026-01-28

Black Forest Labs FLUX.2 [klein] 9B now available

2026-01-15

Black Forest Labs FLUX.2 [klein] 4B now available

2025-12-03

Deepgram Flux promotional period ends on Dec 8, 2025 - pricing now applies

2025-11-25

Black Forest Labs FLUX.2 dev now available

2025-11-13

Qwen3 LLM and Embeddings available on Workers AI

2025-10-21

New voice and LLM models on Workers AI

2025-10-02

Deepgram Flux now available on Workers AI
  • We're excited to be a launch partner with Deepgram and offer their new Speech Recognition model built specifically for enabling voice agents. Check out Deepgram's blog for more details on the release.
  • Access the model through @cf/deepgram/flux and check out the changelog for in-depth examples.

2025-09-24

New local models available on Workers AI
  • We've added support for several regional models on Workers AI to support local AI labs and AI sovereignty. Check out the full blog post here.
  • @cf/pfnet/plamo-embedding-1b creates embeddings from Japanese text.
  • @cf/aisingapore/gemma-sea-lion-v4-27b-it is a fine-tuned model that supports multiple South East Asian languages, including Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai, and Vietnamese.
  • @cf/ai4bharat/indictrans2-en-indic-1B is a translation model that can translate between 22 indic languages, including Bengali, Gujarati, Hindi, Tamil, Sanskrit and even traditionally low-resourced languages like Kashmiri, Manipuri and Sindhi.
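As a rough sketch, generating embeddings from Japanese text with the new model might look like the following inside a Worker. The `{ text: [...] }` input shape is the common Workers AI embedding format; check the model page for the exact schema:

```javascript
// Sketch of embedding Japanese text with @cf/pfnet/plamo-embedding-1b.
// In a real project this object would be the Worker's `export default`.
const worker = {
  async fetch(request, env) {
    // `env.AI` is the Workers AI binding configured in wrangler.
    const { data } = await env.AI.run("@cf/pfnet/plamo-embedding-1b", {
      text: ["吾輩は猫である", "今日はいい天気です"],
    });
    // `data` holds one embedding vector per input string.
    return Response.json({ count: data.length });
  },
};
```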

2025-09-23

New document formats supported by Markdown conversion utility

2025-09-18

Model Catalog updates (types, EmbeddingGemma, model deprecation)
  • Workers AI types have been updated in the upcoming wrangler release; run npm i -D wrangler@latest to update your packages.
  • EmbeddingGemma model accuracy has been improved; we recommend re-indexing your data to take advantage of the improvement.
  • Some older Workers AI models are being deprecated on October 1st, 2025. We recommend moving to newer models such as Llama 4 and gpt-oss. The following models are being deprecated:
    • @hf/thebloke/zephyr-7b-beta-awq
    • @hf/thebloke/mistral-7b-instruct-v0.1-awq
    • @hf/thebloke/llama-2-13b-chat-awq
    • @hf/thebloke/openhermes-2.5-mistral-7b-awq
    • @hf/thebloke/neural-chat-7b-v3-1-awq
    • @hf/thebloke/llamaguard-7b-awq
    • @hf/thebloke/deepseek-coder-6.7b-base-awq
    • @hf/thebloke/deepseek-coder-6.7b-instruct-awq
    • @cf/deepseek-ai/deepseek-math-7b-instruct
    • @cf/openchat/openchat-3.5-0106
    • @cf/tiiuae/falcon-7b-instruct
    • @cf/thebloke/discolm-german-7b-v1-awq
    • @cf/qwen/qwen1.5-0.5b-chat
    • @cf/qwen/qwen1.5-7b-chat-awq
    • @cf/qwen/qwen1.5-14b-chat-awq
    • @cf/tinyllama/tinyllama-1.1b-chat-v1.0
    • @cf/qwen/qwen1.5-1.8b-chat
    • @hf/nexusflow/starling-lm-7b-beta
    • @cf/fblgit/una-cybertron-7b-v2-bf16

2025-09-05

Introducing EmbeddingGemma from Google
  • We’re excited to be a launch partner alongside Google to bring their newest embedding model to Workers AI. EmbeddingGemma delivers best-in-class performance for its size, enabling RAG and semantic search use cases. Take a look at @cf/google/embeddinggemma-300m for more details. It is now available for embedding in AI Search too.

2025-08-27

Introducing Partner models to the Workers AI catalog
  • Read the blog for more details
  • @cf/deepgram/aura-1 is a text-to-speech model that allows you to input text and have it come to life in a customizable voice
  • @cf/deepgram/nova-3 is a speech-to-text model that transcribes multilingual audio at blazing speed
  • @cf/pipecat-ai/smart-turn-v2 helps you detect when someone is done speaking
  • @cf/leonardo/lucid-origin is a text-to-image model that generates images with sharp graphic design, stunning full-HD renders, or highly specific creative direction
  • @cf/leonardo/phoenix-1.0 is a text-to-image model with exceptional prompt adherence and coherent text
  • WebSocket support added for audio models like @cf/deepgram/aura-1, @cf/deepgram/nova-3, @cf/pipecat-ai/smart-turn-v2

2025-08-05

Adding gpt-oss models to our catalog
  • Check out the blog for more details about the new models
  • Take a look at the gpt-oss-120b and gpt-oss-20b model pages for more information about schemas, pricing, and context windows

2025-04-09

Pricing correction for @cf/myshell-ai/melotts
  • We've updated our documentation to reflect the correct pricing for melotts: $0.0002 per audio minute, which is cheaper than initially stated. The previous documentation incorrectly said users would be charged based on input tokens.

2025-03-17

Minor updates to the model schema for llama-3.2-1b-instruct, whisper-large-v3-turbo, llama-guard

2025-02-21

Workers AI bug fixes
  • We fixed a bug where max_tokens defaults were not being respected - max_tokens now correctly defaults to 256, as displayed on the model pages. Users relying on the previous behavior may observe this as a breaking change. If you want to generate more tokens, set the max_tokens parameter to what you need.
  • We updated model pages to show context windows, defined as the tokens used in the prompt plus the tokens used in the response. If your prompt + response tokens exceed the context window, the request will error. Set max_tokens according to your prompt length and the model's context window to ensure a successful response.
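For example, with a hypothetical 8,192-token context window, you could budget max_tokens like this (the helper below is illustrative, not part of the API):

```javascript
// Illustrative helper: cap the requested max_tokens so that
// prompt tokens + response tokens stay inside the context window.
function budgetMaxTokens(promptTokens, requested, contextWindow) {
  const remaining = contextWindow - promptTokens;
  if (remaining <= 0) {
    throw new Error("Prompt alone exceeds the context window");
  }
  return Math.min(requested, remaining);
}

// With a hypothetical 8192-token window and a 7000-token prompt,
// only 1192 tokens are left for the response:
const maxTokens = budgetMaxTokens(7000, 2048, 8192);
// Pass the result as the `max_tokens` request parameter.
```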

2024-09-26

Workers AI Birthday Week 2024 announcements
  • Meta Llama 3.2 1B, 3B, and 11B vision is now available on Workers AI
  • @cf/black-forest-labs/flux-1-schnell is now available on Workers AI
  • Workers AI is fast! Powered by new GPUs and optimizations, you can expect faster inference on Llama 3.1, Llama 3.2, and FLUX models.
  • No more neurons. Workers AI is moving towards unit-based pricing
  • Model pages get a refresh with better documentation on parameters, pricing, and model capabilities
  • Closed beta for our Run Any* Model feature, sign up here
  • Check out the product announcements blog post for more information
  • And the technical blog post if you want to learn about how we made Workers AI fast

2024-07-23

Meta Llama 3.1 now available on Workers AI

Workers AI now supports Meta Llama 3.1.

2024-06-27

Introducing embedded function calling

2024-06-19

Added support for traditional function calling
  • Function calling is now supported on enabled models
  • Properties added on models page to show which models support function calling
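A traditional function-calling request passes a `tools` array alongside the messages. The get_weather tool below is a made-up example, and the simplified tool shape shown is one of the formats Workers AI accepts (check the function-calling docs for the exact schema):

```javascript
// Sketch of a function-calling request body. Supported models return a
// tool call instead of plain text when they decide the tool should run.
const requestBody = {
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools: [
    {
      name: "get_weather",
      description: "Look up the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" },
        },
        required: ["city"],
      },
    },
  ],
};
// Sketch of the call: await env.AI.run(modelId, requestBody)
```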

2024-06-18

Native support for AI Gateways

Workers AI now natively supports AI Gateway.

2024-06-11

Deprecation announcement for `@cf/meta/llama-2-7b-chat-int8`

We will be deprecating @cf/meta/llama-2-7b-chat-int8 on 2024-06-30.

Replace the model ID in your code with a new model of your choice:
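As an illustrative sketch, the migration only changes the model ID string; the helper below is hypothetical, not part of any API:

```javascript
// Before: const resp = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", inputs);
// After:  const resp = await env.AI.run("@cf/meta/llama-3-8b-instruct-awq", inputs);
const DEPRECATED_MODEL = "@cf/meta/llama-2-7b-chat-int8";
const REPLACEMENT_MODEL = "@cf/meta/llama-3-8b-instruct-awq";

// Illustrative helper: swap the deprecated ID for its replacement,
// leaving every other model ID untouched.
function migrateModelId(modelId) {
  return modelId === DEPRECATED_MODEL ? REPLACEMENT_MODEL : modelId;
}
```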

If you do not switch to a different model by June 30th, we will automatically start returning inference from @cf/meta/llama-3-8b-instruct-awq.

2024-05-29

Add new public LoRAs and note on LoRA routing
  • Added documentation on new public LoRAs.
  • Noted that you can now run LoRA inference with the base model rather than explicitly calling the -lora version.
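A rough sketch of what LoRA inference against the base model might look like, assuming a `lora` input field selects the adapter; the adapter name and model ID below are placeholders (see the public LoRAs documentation for the real list and exact schema):

```javascript
// Sketch of a LoRA request: the base model ID plus a `lora` field naming
// the adapter, instead of calling a dedicated -lora model variant.
const loraRequest = {
  messages: [{ role: "user", content: "Summarize the article for me." }],
  lora: "public-lora-adapter-name", // placeholder adapter name
};
// Sketch: await env.AI.run("@cf/meta/llama-3-8b-instruct", loraRequest)
```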

2024-05-17

Add OpenAI compatible API endpoints

Added OpenAI compatible API endpoints for /v1/chat/completions and /v1/embeddings. For more details, refer to Configurations.
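For instance, a plain fetch against the compatible chat completions endpoint might be built like this; the account ID and token are placeholders, and the base URL follows the documented REST pattern (verify it against the Configurations page):

```javascript
// Sketch of calling the OpenAI-compatible chat completions endpoint.
// ACCOUNT_ID and API_TOKEN are placeholders for your own values.
const ACCOUNT_ID = "your-account-id";
const API_TOKEN = "your-api-token";

const baseUrl = `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/v1`;

const requestInit = {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "@cf/meta/llama-3-8b-instruct",
    messages: [{ role: "user", content: "Hello!" }],
  }),
};

// Sketch: const resp = await fetch(`${baseUrl}/chat/completions`, requestInit);
```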

2024-04-11

Add AI native binding
  • Added a new AI native binding; you can now run models with const resp = await env.AI.run(modelName, inputs)
  • Deprecated the @cloudflare/ai npm package. Existing solutions using the @cloudflare/ai package will continue to work, but no new Workers AI features will be supported. Moving to the native AI binding is highly recommended.
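Migrating from the deprecated package to the native binding is mostly a one-line change; the sketch below uses an example model ID, with the old package usage shown in comments:

```javascript
// Before (deprecated @cloudflare/ai package):
//   import { Ai } from "@cloudflare/ai";
//   const ai = new Ai(env.AI);
//   const resp = await ai.run(modelName, inputs);

// After (native binding, no package needed). In a real project this
// object would be the Worker's `export default`.
const worker = {
  async fetch(request, env) {
    const inputs = { prompt: "Tell me a joke" };
    const resp = await env.AI.run("@cf/meta/llama-3-8b-instruct", inputs);
    return Response.json(resp);
  },
};
```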