
Changelog

New updates and improvements at Cloudflare.


New Workers AI models for text generation and embedding in AI Search

AI Search now supports four additional Workers AI models across text generation and embedding.

Text generation

Model                         Context window (tokens)
@cf/zai-org/glm-4.7-flash     131,072
@cf/qwen/qwen3-30b-a3b-fp8    32,000

GLM-4.7-Flash is a lightweight model from Zhipu AI with a 131,072 token context window, suitable for long-document summarization and retrieval tasks. Qwen3-30B-A3B is a mixture-of-experts model from Alibaba that activates only 3 billion parameters per forward pass, keeping inference fast while maintaining strong response quality.
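A large context window still has to be respected when feeding long documents to a model. As a rough sketch of how a document might be pre-chunked to fit a context budget — noting that real token counts are tokenizer-specific, and the word-based budget here is only a loose stand-in for tokens:

```typescript
// Rough sketch: split a long document into word-based chunks so each
// fits within a model's context budget. Actual token counts depend on
// the model's tokenizer; wordsPerChunk is a loose approximation.
function chunkByWords(text: string, wordsPerChunk: number): string[] {
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += wordsPerChunk) {
    chunks.push(words.slice(i, i + wordsPerChunk).join(" "));
  }
  return chunks;
}
```

A retrieval pipeline would typically summarize or embed each chunk separately, then combine the results.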

Embedding

Model                           Vector dims   Input tokens   Metric
@cf/qwen/qwen3-embedding-0.6b   1,024         4,096          cosine
@cf/google/embeddinggemma-300m  768           512            cosine

Qwen3-Embedding-0.6B supports up to 4,096 input tokens, making it a good fit for indexing longer text chunks. EmbeddingGemma-300M from Google produces 768-dimension vectors and is optimized for low-latency embedding workloads.
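Both models use cosine as the similarity metric. A minimal sketch of how cosine similarity is computed between two embedding vectors (note that vectors from different models have different dimensions — 1,024 vs 768 — and cannot be compared with each other):

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Returns a value in [-1, 1]; 1 means the vectors point the same way.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("vectors must have the same dimension");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In an AI Search index, queries and stored chunks are embedded with the same model, so their vectors share a dimension and this comparison is well-defined.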

Because all four models run on Workers AI, no additional provider API keys are required. Select them when creating or updating an AI Search instance in the dashboard or through the API.
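The model IDs in the tables above are the identifiers you pass when configuring AI Search or calling Workers AI. As a general illustration — setting AI Search configuration specifics aside — Workers AI models are reachable over REST at the standard inference route, `/accounts/{account_id}/ai/run/{model}`. A sketch that builds that URL (the account ID below is a placeholder):

```typescript
// Build the Workers AI REST "run" URL for a given model ID.
// /accounts/{account_id}/ai/run/{model} is the standard Workers AI
// inference route; "ACCOUNT_ID" is a placeholder, not a real account.
const API_BASE = "https://api.cloudflare.com/client/v4";

function workersAiRunUrl(accountId: string, model: string): string {
  return `${API_BASE}/accounts/${accountId}/ai/run/${model}`;
}

// Example: the GLM text-generation model announced above.
const url = workersAiRunUrl("ACCOUNT_ID", "@cf/zai-org/glm-4.7-flash");
```

A request to that URL is authenticated with an API token in the `Authorization` header, which is why no separate model-provider key is needed.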

For the full list of supported models, refer to Supported models.