We've partnered with Black Forest Labs (BFL) to bring their latest FLUX.2 [dev] model to Workers AI! This model excels at generating high-fidelity images with physical world grounding, multi-language support, and digital asset creation. You can also exercise granular control over generated images with features like JSON prompting.
Read the BFL blog ↗ to learn more about the model itself. Read our Cloudflare blog ↗ to see the model in action, or try it out yourself on our multimodal playground ↗.
Pricing documentation is available on the model page or pricing page. Note: we expect to lower pricing in the next few days as we iterate on model performance.
The model hosted on Workers AI supports up to 4 image inputs (512x512 per input image). Note that this is one of the most powerful image models in the catalog and is expected to be slower than the other image models we currently support. One catch to look out for: this model takes multipart form data inputs, even if you only send a prompt.
With the REST API, the multipart form data input looks like this:
```bash
curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-dev' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: multipart/form-data' \
  --form 'prompt=a sunset at the alps' \
  --form steps=25 \
  --form width=1024 \
  --form height=1024
```

With the Workers AI binding, you can use it as such:
```js
const form = new FormData();
form.append('prompt', 'a sunset with a dog');
form.append('width', '1024');
form.append('height', '1024');

// This dummy request is a temporary hack;
// we're pushing a change to address this soon.
const formRequest = new Request('http://dummy', {
  method: 'POST',
  body: form
});
const formStream = formRequest.body;
const formContentType = formRequest.headers.get('content-type') || 'multipart/form-data';

const resp = await env.AI.run("@cf/black-forest-labs/flux-2-dev", {
  multipart: {
    body: formStream,
    contentType: formContentType
  }
});
```

The parameters you can send to the model are detailed here:
- `prompt` (string) - Text description of the image to generate

Optional parameters:

- `input_image_0` (string) - Binary image
- `input_image_1` (string) - Binary image
- `input_image_2` (string) - Binary image
- `input_image_3` (string) - Binary image
- `steps` (integer) - Number of inference steps. Higher values may improve quality but increase generation time
- `guidance` (float) - Guidance scale for generation. Higher values follow the prompt more closely
- `width` (integer) - Width of the image. Default: 1024. Range: 256-1920
- `height` (integer) - Height of the image. Default: 768. Range: 256-1920
- `seed` (integer) - Seed for reproducibility

## Multi-Reference Images
The FLUX.2 model is great at generating images based on reference images. You can use this feature to apply the style of one image to another, add a new character to an image, or iterate on previously generated images. You use the same multipart form data structure, with the input images in binary.
For the prompt, you can reference the images based on the index, like `take the subject of image 1 and style it like image 0` or even use natural language like `place the dog beside the woman`.
Note: you must name the input parameters `input_image_0`, `input_image_1`, `input_image_2` for them to work correctly. All input images must be smaller than 512x512.
```bash
curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-dev' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: multipart/form-data' \
  --form 'prompt=take the subject of image 1 and style it like image 0' \
  --form input_image_0=@/Users/johndoe/Desktop/icedoutkeanu.png \
  --form input_image_1=@/Users/johndoe/Desktop/me.png \
  --form steps=25 \
  --form width=1024 \
  --form height=1024
```

Through the Workers AI binding:
```ts
// Helper function to convert a ReadableStream to a Blob
async function streamToBlob(stream: ReadableStream, contentType: string): Promise<Blob> {
  const reader = stream.getReader();
  const chunks = [];

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }

  return new Blob(chunks, { type: contentType });
}

const image0 = await fetch("http://image-url");
const image1 = await fetch("http://image-url");
const form = new FormData();

const image_blob0 = await streamToBlob(image0.body, "image/png");
const image_blob1 = await streamToBlob(image1.body, "image/png");
form.append('input_image_0', image_blob0);
form.append('input_image_1', image_blob1);
form.append('prompt', 'take the subject of image 1 and style it like image 0');

// This dummy request is a temporary hack;
// we're pushing a change to address this soon.
const formRequest = new Request('http://dummy', {
  method: 'POST',
  body: form
});
const formStream = formRequest.body;
const formContentType = formRequest.headers.get('content-type') || 'multipart/form-data';

const resp = await env.AI.run("@cf/black-forest-labs/flux-2-dev", {
  multipart: {
    body: formStream,
    contentType: formContentType
  }
});
```

The model supports prompting in JSON to get more granular control over images. You pass the JSON as the value of the `prompt` field in the multipart form data. See the JSON schema below for the base parameters you can pass to the model.
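For illustration, a JSON prompt conforming to this schema might look like the following (all values here are invented, not taken from the model documentation):

```json
{
  "scene": "alpine meadow at dusk",
  "subjects": [
    {
      "type": "dog",
      "description": "golden retriever with a red bandana",
      "pose": "sitting and looking at the horizon",
      "position": "foreground"
    }
  ],
  "style": "photorealistic",
  "color_palette": ["amber", "teal", "lavender"],
  "lighting": "low golden-hour sun",
  "mood": "peaceful and dreamy",
  "composition": "rule of thirds",
  "camera": { "angle": "low angle", "distance": "medium shot", "lens": "35mm" }
}
```

You would send this entire object, serialized as a string, as the `prompt` form field.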
```json
{
  "type": "object",
  "properties": {
    "scene": {
      "type": "string",
      "description": "Overall scene setting or location"
    },
    "subjects": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "type": { "type": "string", "description": "Type of subject (e.g., desert nomad, blacksmith, DJ, falcon)" },
          "description": { "type": "string", "description": "Physical attributes, clothing, accessories" },
          "pose": { "type": "string", "description": "Action or stance" },
          "position": { "type": "string", "enum": ["foreground", "midground", "background"], "description": "Depth placement in scene" }
        },
        "required": ["type", "description", "pose", "position"]
      }
    },
    "style": { "type": "string", "description": "Artistic rendering style (e.g., digital painting, photorealistic, pixel art, noir sci-fi, lifestyle photo, wabi-sabi photo)" },
    "color_palette": {
      "type": "array",
      "items": { "type": "string" },
      "minItems": 3,
      "maxItems": 3,
      "description": "Exactly 3 main colors for the scene (e.g., ['navy', 'neon yellow', 'magenta'])"
    },
    "lighting": { "type": "string", "description": "Lighting condition and direction (e.g., fog-filtered sun, moonlight with star glints, dappled sunlight)" },
    "mood": { "type": "string", "description": "Emotional atmosphere (e.g., harsh and determined, playful and modern, peaceful and dreamy)" },
    "background": { "type": "string", "description": "Background environment details" },
    "composition": {
      "type": "string",
      "enum": ["rule of thirds", "circular arrangement", "framed by foreground", "minimalist negative space", "S-curve", "vanishing point center", "dynamic off-center", "leading lines", "golden spiral", "diagonal energy", "strong verticals", "triangular arrangement"],
      "description": "Compositional technique"
    },
    "camera": {
      "type": "object",
      "properties": {
        "angle": { "type": "string", "enum": ["eye level", "low angle", "slightly low", "bird's-eye", "worm's-eye", "over-the-shoulder", "isometric"], "description": "Camera perspective" },
        "distance": { "type": "string", "enum": ["close-up", "medium close-up", "medium shot", "medium wide", "wide shot", "extreme wide"], "description": "Framing distance" },
        "focus": { "type": "string", "enum": ["deep focus", "macro focus", "selective focus", "sharp on subject", "soft background"], "description": "Focus type" },
        "lens": { "type": "string", "enum": ["14mm", "24mm", "35mm", "50mm", "70mm", "85mm"], "description": "Focal length (wide to telephoto)" },
        "f-number": { "type": "string", "description": "Aperture (e.g., f/2.8; the smaller the number, the blurrier the background)" },
        "ISO": { "type": "number", "description": "Light sensitivity value (comfortable range between 100 and 6400, lower = less sensitivity)" }
      }
    },
    "effects": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Post-processing effects (e.g., 'lens flare small', 'subtle film grain', 'soft bloom', 'god rays', 'chromatic aberration mild')"
    }
  },
  "required": ["scene", "subjects"]
}
```

Radar introduces HTTP Origins insights, providing visibility into the status of traffic between Cloudflare's global network and cloud-based origin infrastructure.
The new Origins API provides the following endpoints:
- `/origins` - Lists all origins (cloud providers and associated regions).
- `/origins/{origin}` - Retrieves information about a specific origin (cloud provider).
- `/origins/timeseries` - Retrieves normalized time series data for a specific origin, including the following metrics:
  - `REQUESTS`: Number of requests
  - `CONNECTION_FAILURES`: Number of connection failures
  - `RESPONSE_HEADER_RECEIVE_DURATION`: Duration of the response header receive
  - `TCP_HANDSHAKE_DURATION`: Duration of the TCP handshake
  - `TCP_RTT`: TCP round trip time
  - `TLS_HANDSHAKE_DURATION`: Duration of the TLS handshake
- `/origins/summary` - Retrieves HTTP requests to origins summarized by a dimension.
- `/origins/timeseries_groups` - Retrieves time series data for HTTP requests to origins grouped by a dimension.

The following dimensions are available for the summary and timeseries_groups endpoints:

- `region`: Origin region
- `success_rate`: Success rate of requests (2XX versus 5XX response codes)
- `percentile`: Percentiles of the metrics listed above

Additionally, the Annotations and Traffic Anomalies APIs have been extended to support origin outages and anomalies, enabling automated detection and alerting for origin infrastructure issues.

Check out the new Radar page ↗.
This week highlights enhancements to detection signatures improving coverage for vulnerabilities in FortiWeb, linked to CVE-2025-64446, alongside new detection logic expanding protection against PHP Wrapper Injection techniques.
Key Findings
This vulnerability enables an unauthenticated attacker to bypass access controls by abusing the CGIINFO header. The latest update strengthens detection logic to ensure a reliable identification of crafted requests attempting to exploit this flaw.
Impact
FortiWeb (CVE-2025-64446): An unauthenticated attacker can bypass access controls by sending a crafted CGIINFO header to FortiWeb's backend CGI handler. Successful exploitation grants unintended access to restricted administrative functionality, potentially enabling configuration tampering or system-level actions.

| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | N/A | | FortiWeb - Authentication Bypass via CGIINFO Header - CVE:CVE-2025-64446 | Log | Block | This is a new detection |
| Cloudflare Managed Ruleset | N/A | | PHP Wrapper Injection - Body - Beta | Log | Disabled | This rule has been merged into the original rule "PHP Wrapper Injection - Body" (ID: |
| Cloudflare Managed Ruleset | N/A | | PHP Wrapper Injection - URI - Beta | Log | Disabled | This rule has been merged into the original rule "PHP Wrapper Injection - URI" (ID: |
| Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
|---|---|---|---|---|---|---|
| 2025-11-24 | 2025-12-01 | Log | N/A | | Monsta FTP - Remote Code Execution - CVE:CVE-2025-34299 | This is a new detection |
| 2025-11-24 | 2025-12-01 | Log | N/A | | XSS - JS Context Escape - Beta | This is a beta detection and will replace the action on original detection "XSS - JS Context Escape" (ID: |
Containers now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes, allowing applications to interact with R2 using standard filesystem operations.
Common use cases include:
FUSE adapters like tigrisfs ↗, s3fs ↗, and gcsfuse ↗ can be installed in your container image and configured to mount buckets at startup.
```dockerfile
FROM alpine:3.20

# Install FUSE and dependencies
RUN apk update && \
    apk add --no-cache ca-certificates fuse curl bash

# Install tigrisfs
RUN ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \
    if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \
    VERSION=$(curl -s https://api.github.com/repos/tigrisdata/tigrisfs/releases/latest | grep -o '"tag_name": "[^"]*' | cut -d'"' -f4) && \
    curl -L "https://github.com/tigrisdata/tigrisfs/releases/download/${VERSION}/tigrisfs_${VERSION#v}_linux_${ARCH}.tar.gz" -o /tmp/tigrisfs.tar.gz && \
    tar -xzf /tmp/tigrisfs.tar.gz -C /usr/local/bin/ && \
    rm /tmp/tigrisfs.tar.gz && \
    chmod +x /usr/local/bin/tigrisfs

# Create startup script that mounts the bucket
RUN printf '#!/bin/sh\n\
set -e\n\
mkdir -p /mnt/r2\n\
R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"\n\
/usr/local/bin/tigrisfs --endpoint "${R2_ENDPOINT}" -f "${BUCKET_NAME}" /mnt/r2 &\n\
sleep 3\n\
ls -lah /mnt/r2\n\
' > /startup.sh && chmod +x /startup.sh

CMD ["/startup.sh"]
```

See the Mount R2 buckets with FUSE example for a complete guide on mounting R2 buckets and/or other S3-compatible storage buckets within your containers.
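Once mounted, the bucket behaves like an ordinary directory, so standard filesystem APIs just work. A minimal Node.js sketch (in the container the mount point would be `/mnt/r2` from the startup script above; here it is configurable so the snippet runs anywhere):

```typescript
import { readFileSync, writeFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// In the container this would be /mnt/r2 (the FUSE mount point);
// we fall back to a temp directory so the sketch runs outside the container too.
const mountPoint = process.env.R2_MOUNT ?? mkdtempSync(join(tmpdir(), "r2-"));

// Writes land in the bucket; reads come back through the FUSE adapter.
writeFileSync(join(mountPoint, "hello.txt"), "stored in R2 via FUSE");
const contents = readFileSync(join(mountPoint, "hello.txt"), "utf8");
console.log(contents); // "stored in R2 via FUSE"
```

Keep in mind that each filesystem call translates to object storage operations, so latency is higher than a local disk.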
Containers and Sandboxes pricing for CPU time is now based on active usage only, instead of provisioned resources.
This means that you now pay less for Containers and Sandboxes.
Imagine running the standard-2 instance type for one hour, which can use up to 1 vCPU, but on average you use only 20% of your CPU capacity.
CPU-time is priced at $0.00002 per vCPU-second.
Previously, you would be charged for the CPU allocated to the instance multiplied by the time it was active, in this case 1 hour.
CPU cost would have been: $0.072 — 1 vCPU * 3600 seconds * $0.00002
Now, since you are only using 20% of your CPU capacity, your CPU cost is cut to 20% of the previous amount.
CPU cost is now: $0.0144 — 1 vCPU * 3600 seconds * $0.00002 * 20% utilization
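The arithmetic above can be sketched in a few lines (the function name is mine, not an official API; the price and utilization figures come from the example):

```typescript
const PRICE_PER_VCPU_SECOND = 0.00002;

// CPU cost now scales with average utilization rather than provisioned capacity.
function cpuCost(vcpus: number, seconds: number, utilization: number): number {
  return vcpus * seconds * PRICE_PER_VCPU_SECOND * utilization;
}

const oldCost = cpuCost(1, 3600, 1.0); // billed as if fully utilized: ~$0.072
const newCost = cpuCost(1, 3600, 0.2); // billed on 20% average utilization: ~$0.0144
console.log(oldCost, newCost);
```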
This can significantly reduce costs for Containers and Sandboxes.
See the documentation to learn more about Containers, Sandboxes, and associated pricing.
This week’s release introduces a critical detection for CVE-2025-61757, a vulnerability in the Oracle Identity Manager REST WebServices component.
Key Findings
This flaw allows unauthenticated attackers with network access over HTTP to fully compromise the Identity Manager, potentially leading to a complete takeover.
Impact
Oracle Identity Manager (CVE-2025-61757): Exploitation could allow an unauthenticated remote attacker to bypass security checks by sending specially crafted requests to the application's message processor. This enables the creation of arbitrary employee accounts, which can be leveraged to modify system configurations and achieve full system compromise.
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | N/A | | Oracle Identity Manager - Pre-Auth RCE - CVE:CVE-2025-61757 | N/A | Block | This is a new detection. |
Workers Builds now supports up to 64 environment variables, and each environment variable can be up to 5 KB in size. The previous limit was 5 KB total across all environment variables.
This change enables better support for complex build configurations, larger application settings, and more flexible CI/CD workflows.
For more details, refer to the build limits documentation.
Until now, if a Worker had previously been deployed via the Cloudflare Dashboard ↗, a subsequent deployment via the Cloudflare Workers CLI, Wrangler (through the deploy command), would let the user override the Worker's dashboard settings without providing details on which dashboard settings would be lost.
Now instead, wrangler deploy presents a helpful representation of the differences between the local configuration and the remote dashboard settings, and offers to update your local configuration file for you.
See example below showing a before and after for wrangler deploy when a local configuration is expected to override a Worker's dashboard settings:
Before

After

Also, if Wrangler detects that a deployment would override remote dashboard settings in a purely additive way, without modifying or removing any of them, it will simply proceed with the deployment without requesting any user interaction.
Update to Wrangler v4.50.0 or greater to take advantage of this improved deploy flow.
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues reported by the Cloudflare community related to the v5 release. We have committed to releasing improvements on a 2-3 week cadence ↗ to ensure its stability and reliability, including the v5.13 release. We have also pivoted from an issue-to-issue approach to a resource-per-resource approach ↗ - we will be focusing on specific resources to not only stabilize the resource but also ensure it is migration-friendly for those migrating from v4 to v5.
Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.
This release includes new features, new resources and data sources, bug fixes, updates to our Developer Documentation, and more.
Please be aware that there are breaking changes for the cloudflare_api_token and cloudflare_account_token resources. These changes eliminate configuration drift caused by policy ordering differences in the Cloudflare API.
For more specific information about the changes or the actions required, please see the detailed Repository changelog ↗.
We suggest holding off on migrating to v5 while we work on stabilization. This will help you avoid blocking issues while the Terraform resources are actively being stabilized. We will release a new migration tool in March 2026 to support v4 to v5 transitions for our most popular resources.
AI Search now supports custom HTTP headers for website crawling, solving a common problem where valuable content behind authentication or access controls could not be indexed.
Previously, AI Search could only crawl publicly accessible pages, leaving knowledge bases, documentation, and other protected content out of your search results. With custom headers support, you can now include authentication credentials that allow the crawler to access this protected content.
This is particularly useful for indexing content like:
To add custom headers when creating an AI Search instance, select Parse options. In the Extra headers section, you can add up to five custom headers per Website data source.

For example, to crawl a site protected by Cloudflare Access, you can add service token credentials as custom headers:
```txt
CF-Access-Client-Id: your-token-id.access
CF-Access-Client-Secret: your-token-secret
```

The crawler will automatically include these headers in all requests, allowing it to access protected pages that would otherwise be blocked.
Learn more about configuring custom headers for website crawling in AI Search.
To facilitate significant enhancements to our submission processes, the Final Disposition column of the Team Submissions > Reclassifications page inside the Email Security Zero Trust application will be temporarily removed.
The column displaying the final disposition status for submitted email misses will no longer be visible on the specified page.
This temporary change is required as we revamp and integrate a more powerful backend infrastructure for processing these security-critical submissions. This update is designed to make even more effective use of the data you provide to improve our detection capabilities. We assure you that your submissions are continuing to be addressed at an even greater rate than before, fueling faster and more accurate security improvements.
Rest assured, the ability to submit email misses and the underlying analysis work remain fully operational. We are committed to reintroducing a refined, more valuable status update feature once the new infrastructure is completed.
The Zero Trust dashboard and navigation are receiving significant and exciting updates. The dashboard is being restructured to better support common tasks and workflows, and various pages have been moved and consolidated.
There is a new guided experience on login detailing the changes, and you can use the Zero Trust dashboard search to find product pages by both their new and old names, as well as your created resources. To replay the guided experience, you can find it in Overview > Get Started.

Notable changes

No changes to our API endpoint structure or to any backend services have been made as part of this effort.
This week highlights enhancements to detection signatures improving coverage for vulnerabilities in DELMIA Apriso, linked to CVE-2025-6205.
Key Findings
This vulnerability allows unauthenticated attackers to gain privileged access to the application. The latest update provides enhanced detection logic for resilient protection against exploitation attempts.
Impact
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | N/A | | DELMIA Apriso - Auth Bypass - CVE:CVE-2025-6205 | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | N/A | | PHP Wrapper Injection - Body | N/A | Disabled | Rule metadata description refined. Detection unchanged. |
| Cloudflare Managed Ruleset | N/A | | PHP Wrapper Injection - URI | N/A | Disabled | Rule metadata description refined. Detection unchanged. |
You can now stay on top of your SaaS security posture with the new CASB Weekly Digest notification. This opt-in email digest is delivered to your inbox every Monday morning and provides a high-level summary of your organization's Cloudflare API CASB findings from the previous week.
This allows security teams and IT administrators to get proactive, at-a-glance visibility into new risks and integration health without having to log in to the dashboard.
To opt in, navigate to Manage Account > Notifications in the Cloudflare dashboard to configure the CASB Weekly Digest alert type.
The CASB Weekly Digest notification is available to all Cloudflare users today.
We've resolved a bug in Log Explorer that caused inconsistencies between the custom SQL date field filters and the date picker dropdown. Previously, users attempting to filter logs based on a custom date field via a SQL query sometimes encountered unexpected results or mismatching dates when using the interactive date picker.
This fix ensures that the custom SQL date field filters now align correctly with the selection made in the date picker dropdown, providing a reliable and predictable filtering experience for your log data. This is particularly important for users creating custom log views based on time-sensitive fields.
We've significantly enhanced Log Explorer by adding support for 14 additional Cloudflare product datasets.
This expansion enables Operations and Security Engineers to gain deeper visibility and telemetry across a wider range of Cloudflare services. By integrating these new datasets, users can now access full context to efficiently investigate security incidents, troubleshoot application performance issues, and correlate logged events across different layers (like application and network) within a single interface. This capability is crucial for a complete and cohesive understanding of event flows across your Cloudflare environment.
The newly supported datasets include:
- Dns_logs
- Nel_reports
- Page_shield_events
- Spectrum_events
- Zaraz_events
- Audit Logs
- Audit_logs_v2
- Biso_user_actions
- DNS firewall logs
- Email_security_alerts
- Magic Firewall IDS
- Network Analytics
- Sinkhole HTTP
- ipsec_logs

You can now use Log Explorer to query and filter each of these datasets. For example, you can identify an IP address exhibiting suspicious behavior in the FW_event logs, and then instantly pivot to the Network Analytics logs or Access logs to see its network-level traffic profile or whether it bypassed a corporate policy.
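As a sketch of that workflow, a first query might surface the suspicious IP. The dataset and field names below are illustrative, not the exact Log Explorer schema; consult the dataset reference for the real column names:

```sql
SELECT ClientIP, COUNT(*) AS hits
FROM firewall_events
WHERE Datetime > NOW() - INTERVAL '1' DAY
GROUP BY ClientIP
ORDER BY hits DESC
LIMIT 10
```

You would then filter the Network Analytics or Access datasets on the same IP to complete the investigation.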
To learn more and get started, refer to the Log Explorer documentation and the Cloudflare Logs documentation.
Digital Experience Monitoring (DEX) provides visibility into WARP device metrics, connectivity, and network performance across your Cloudflare SASE deployment.
We've released four new WARP and DEX device data sets that can be exported via Cloudflare Logpush. These Logpush data sets can be exported to R2, a cloud bucket, or a SIEM to build a customized logging and analytics experience.
To create a new DEX or WARP Logpush job, customers can go to the account level of the Cloudflare dashboard > Analytics & Logs > Logpush to get started.

You can now perform more powerful queries directly in Workers Analytics Engine ↗ with a major expansion of our SQL function library.
Workers Analytics Engine allows you to ingest and store high-cardinality data at scale (such as custom analytics) and query your data through a simple SQL API.
Today, we've expanded Workers Analytics Engine's SQL capabilities with several new functions:
- `countIf()` - count the number of rows which satisfy a provided condition
- `sumIf()` - calculate a sum from rows which satisfy a provided condition
- `avgIf()` - calculate an average from rows which satisfy a provided condition

New date and time functions: ↗

- `toYear()`
- `toMonth()`
- `toDayOfMonth()`
- `toDayOfWeek()`
- `toHour()`
- `toMinute()`
- `toSecond()`
- `toStartOfYear()`
- `toStartOfMonth()`
- `toStartOfWeek()`
- `toStartOfDay()`
- `toStartOfHour()`
- `toStartOfFifteenMinutes()`
- `toStartOfTenMinutes()`
- `toStartOfFiveMinutes()`
- `toStartOfMinute()`
- `today()`
- `toYYYYMM()`

Whether you're building usage-based billing systems, customer analytics dashboards, or other custom analytics, these functions let you get the most out of your data. Get started with Workers Analytics Engine and explore all available functions in our SQL reference documentation.
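Combined, the new conditional and time functions enable queries like the following. The dataset name and column mapping are hypothetical (Analytics Engine stores data in generic `blob1`-`blob20` and `double1`-`double20` columns, so the semantics depend on how you write your data points):

```sql
SELECT
  toStartOfHour(timestamp) AS hour,
  countIf(double1 >= 500) AS error_responses,
  sumIf(double2, blob1 = 'api') AS api_bytes,
  avgIf(double3, blob1 = 'api') AS avg_api_latency
FROM my_dataset
WHERE timestamp > NOW() - INTERVAL '1' DAY
GROUP BY hour
ORDER BY hour
```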
A new GA release for the Windows WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
Changes and improvements
Known issues
For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends users experiencing these issues upgrade to a minimum Windows 11 24H2 KB5062553 or higher for resolution.
Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.
Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.
DNS resolution may be broken when the following conditions are all true:
To work around this issue, reconnect the WARP client by toggling off and back on.
A new GA release for the macOS WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
Changes and improvements
Known issues
A new GA release for the Linux WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
WARP client version 2025.8.779.0 introduced an updated public key for Linux packages. The public key must be updated if it was installed before September 12, 2025 to ensure the repository remains functional after December 4, 2025. Instructions to make this update are available at pkg.cloudflareclient.com.
Changes and improvements
Starting February 2, 2026, the cloudflared proxy-dns command will be removed from all new cloudflared releases.
This change is being made to enhance security and address a potential vulnerability in an underlying DNS library. This vulnerability is specific to the proxy-dns command and does not affect any other cloudflared features, such as the core Cloudflare Tunnel service.
The proxy-dns command, which runs a client-side DNS-over-HTTPS (DoH) proxy, has been an officially undocumented feature for several years. This functionality is fully and securely supported by our actively developed products.
Versions of cloudflared released before this date will not be affected and will continue to operate. However, note that our official support policy for any cloudflared release is one year from its release date.
We strongly advise users of this undocumented feature to migrate to one of the following officially supported solutions before February 2, 2026, to continue benefiting from secure DNS-over-HTTPS.
The preferred method for enabling DNS-over-HTTPS on user devices is the Cloudflare WARP client. The WARP client automatically secures and proxies all DNS traffic from your device, integrating it with your organization's Zero Trust policies and posture checks.
For scenarios where installing a client on every device is not possible (such as servers, routers, or IoT devices), we recommend using the WARP Connector.
Instead of running cloudflared proxy-dns on a machine, you can install the WARP Connector on a single Linux host within your private network. This connector will act as a gateway, securely routing all DNS and network traffic from your entire subnet to Cloudflare for filtering and logging.
We’re excited to introduce Logpush Health Dashboards, giving customers real-time visibility into the status, reliability, and performance of their Logpush jobs. Health dashboards make it easier to detect delivery issues, monitor job stability, and track performance across destinations. The dashboards are divided into two sections:
- Upload Health: See how much data was successfully uploaded, where drops occurred, and how your jobs are performing overall. This includes data completeness, success rate, and upload volume.
- Upload Reliability: Diagnose issues impacting stability, retries, or latency, and monitor key metrics such as retry counts, upload duration, and destination availability.

Health Dashboards can be accessed from the Logpush page in the Cloudflare dashboard at the account or zone level, under the Health tab. For more details, refer to our Logpush Health Dashboards documentation, which includes a comprehensive troubleshooting guide to help interpret and resolve common issues.
We're excited to announce a quality-of-life improvement for Log Explorer users. You can now resize the custom SQL query window to accommodate longer and more complex queries.
Previously, if you were writing a long custom SQL query, the fixed-size window required excessive scrolling to view the full query. This update allows you to easily drag the bottom edge of the query window to make it taller. This means you can view your entire custom query at once, improving the efficiency and experience of writing and debugging complex queries.
To learn more and get started, refer to the Log Explorer documentation.
AI Crawl Control now supports per-crawler drilldowns with an extended actions menu and status code analytics. Drill down into Metrics, Cloudflare Radar, and Security Analytics, or export crawler data for use in WAF custom rules, Redirect Rules, and robots.txt files.
The Metrics tab includes a status code distribution chart showing HTTP response codes (2xx, 3xx, 4xx, 5xx) over time. Filter by individual crawler, category, operator, or time range to analyze how specific crawlers interact with your site.

Each crawler row includes a three-dot menu with per-crawler actions:

Learn more about AI Crawl Control.
This week’s release introduces new detections for Prototype Pollution across three common vectors: URI, Body, and Header/Form.
Key Findings
Impact
Exploitation may allow attackers to change internal logic or cause unexpected behavior in applications using JavaScript or Node.js frameworks. Developers should sanitize input keys and avoid merging untrusted data structures.
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | | N/A | Generic Rules - Prototype Pollution - URI | Log | Disabled | This is a new detection |
| Cloudflare Managed Ruleset | | N/A | Generic Rules - Prototype Pollution - Body | Log | Disabled | This is a new detection |
| Cloudflare Managed Ruleset | | N/A | Generic Rules - Prototype Pollution - Header - Form | Log | Disabled | This is a new detection |
Enable automatic tracing on your Workers, giving you detailed metadata and timing information for every operation your Worker performs.

Tracing helps you identify performance bottlenecks, resolve errors, and understand how your Worker interacts with other services on the Workers platform. You can now answer questions like:
You can now:
```json
{
  "observability": {
    "tracing": {
      "enabled": true
    }
  }
}
```

We have previously added new application categories to better reflect their content and improve HTTP traffic management: refer to Changelog. While the new categories are live now, we want to ensure you have ample time to review and adjust any existing rules you have configured against old categories. The remapping of existing applications into these new categories will be completed by January 30, 2026. This timeline allows you a dedicated period to:
Applications being remapped
| Application Name | Existing Category | New Category |
|---|---|---|
| Google Photos | File Sharing | Photography & Graphic Design |
| Flickr | File Sharing | Photography & Graphic Design |
| ADP | Human Resources | Business |
| Greenhouse | Human Resources | Business |
| myCigna | Human Resources | Health & Fitness |
| UnitedHealthcare | Human Resources | Health & Fitness |
| ZipRecruiter | Human Resources | Business |
| Amazon Business | Human Resources | Business |
| Jobcenter | Human Resources | Business |
| Jobsuche | Human Resources | Business |
| Zenjob | Human Resources | Business |
| DocuSign | Legal | Business |
| Postident | Legal | Business |
| Adobe Creative Cloud | Productivity | Photography & Graphic Design |
| Airtable | Productivity | Development |
| Autodesk Fusion360 | Productivity | IT Management |
| Coursera | Productivity | Education |
| Microsoft Power BI | Productivity | Business |
| Tableau | Productivity | Business |
| Duolingo | Productivity | Education |
| Adobe Reader | Productivity | Business |
| AnpiReport | Productivity | Travel |
| ビズリーチ | Productivity | Business |
| doda (デューダ) | Productivity | Business |
| 求人ボックス | Productivity | Business |
| マイナビ2026 | Productivity | Business |
| Power Apps | Productivity | Business |
| RECRUIT AGENT | Productivity | Business |
| シフトボード | Productivity | Business |
| スタンバイ | Productivity | Business |
| Doctolib | Productivity | Health & Fitness |
| Miro | Productivity | Photography & Graphic Design |
| MyFitnessPal | Productivity | Health & Fitness |
| Sentry Mobile | Productivity | Travel |
| Slido | Productivity | Photography & Graphic Design |
| Arista Networks | Productivity | IT Management |
| Atlassian | Productivity | Business |
| CoderPad | Productivity | Business |
| eAgreements | Productivity | Business |
| Vmware | Productivity | IT Management |
| Vmware Vcenter | Productivity | IT Management |
| AWS Skill Builder | Productivity | Education |
| Microsoft Office 365 (GCC) | Productivity | Business |
| Microsoft Exchange Online (GCC) | Productivity | Business |
| Canva | Sales & Marketing | Photography & Graphic Design |
| Instacart | Shopping | Food & Drink |
| Wawa | Shopping | Food & Drink |
| McDonald's | Shopping | Food & Drink |
| Vrbo | Shopping | Travel |
| American Airlines | Shopping | Travel |
| Booking.com | Shopping | Travel |
| Ticketmaster | Shopping | Entertainment & Events |
| Airbnb | Shopping | Travel |
| DoorDash | Shopping | Food & Drink |
| Expedia | Shopping | Travel |
| EasyPark | Shopping | Travel |
| UEFA Tickets | Shopping | Entertainment & Events |
| DHL Express | Shopping | Business |
| UPS | Shopping | Business |
For more information on creating HTTP policies, refer to Applications and app types.
You can now set a jurisdiction when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include eu and fedramp.
A jurisdiction can only be set at database creation time, via Wrangler, the REST API, or the dashboard, and cannot be added or updated after the database exists.
Using Wrangler:

```sh
npx wrangler@latest d1 create db-with-jurisdiction --jurisdiction eu
```

Or with the REST API:

```sh
curl -X POST "https://api.cloudflare.com/client/v4/accounts/<account_id>/d1/database" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name": "db-with-jurisdiction", "jurisdiction": "eu"}'
```

To learn more, visit D1's data location documentation.
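As a sketch of the REST flow, you could validate the jurisdiction client-side before calling the API. The helper below is an illustrative assumption, not part of the D1 API; it only accepts the jurisdictions named in this announcement:

```javascript
// Illustrative helper (not part of the D1 API): build the JSON body for the
// create-database call, accepting only the supported jurisdictions.
const SUPPORTED_JURISDICTIONS = ["eu", "fedramp"];

function createDatabasePayload(name, jurisdiction) {
  if (!SUPPORTED_JURISDICTIONS.includes(jurisdiction)) {
    throw new Error(`Unsupported jurisdiction: ${jurisdiction}`);
  }
  return JSON.stringify({ name, jurisdiction });
}

console.log(createDatabasePayload("db-with-jurisdiction", "eu"));
// {"name":"db-with-jurisdiction","jurisdiction":"eu"}
```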
Permissions for managing Logpush jobs related to Zero Trust datasets (Access, Gateway, and DEX) have been updated to improve data security and enforce appropriate access controls.
To view, create, update, or delete Logpush jobs for Zero Trust datasets, users must now have both of the following permissions:
Workers VPC Services is now available, enabling your Workers to securely access resources in your private networks, without having to expose them on the public Internet.
```js
export default {
  async fetch(request, env, ctx) {
    // Perform application logic in Workers here

    // Sample call to an internal API running on ECS in AWS using the binding
    const response = await env.AWS_VPC_ECS_API.fetch("https://internal-host.example.com");

    // Additional application logic in Workers
    return new Response();
  },
};
```

Set up a Cloudflare Tunnel, create a VPC Service, add service bindings to your Worker, and access private resources securely. Refer to the documentation to get started.
We're excited to announce that Log Explorer users can now cancel queries that are currently running.
This new feature addresses a common pain point: waiting for a long, unintended, or misconfigured query to complete before you can submit a new, correct one. With query cancellation, you can immediately stop the execution of any undesirable query, allowing you to quickly craft and submit a new query, significantly improving your investigative workflow and productivity within Log Explorer.
We're excited to announce a new feature in Log Explorer that significantly enhances how you analyze query results: the Query results distribution chart.
This new chart provides a graphical distribution of your results over the time window of the query. Immediately after running a query, you will see the distribution chart above your result table. This visualization allows Log Explorer users to quickly spot trends, identify anomalies, and understand the temporal concentration of log events that match their criteria. For example, you can visually confirm if a spike in traffic or errors occurred at a specific time, allowing you to focus your investigation efforts more effectively. This feature makes it faster and easier to extract meaningful insights from your vast log data.
The chart will dynamically update to reflect the logs matching your current query.
The Brand Protection logo query dashboard now includes a Report to Cloudflare button that lets you submit an Abuse report directly from the logo queries dashboard. While you could previously report new domains impersonating your brand, you can now do the same for websites found to be using your logo without your permission. The abuse report will be prefilled; you only need to validate a few fields before clicking submit, after which our team will process your request.
Ready to start? Check out the Brand Protection docs.
Workers, including those using Durable Objects and Browser Rendering, may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.
This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome Devtools Protocol messages.
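A minimal sketch of guarding outgoing messages against the new cap follows; the function name and structure are illustrative assumptions, not a platform API:

```javascript
// Sketch: check a payload against the 32 MiB WebSocket message cap
// before sending it from a Worker or Durable Object.
const MAX_MESSAGE_BYTES = 32 * 1024 * 1024; // 32 MiB

function fitsMessageLimit(payload) {
  const bytes =
    typeof payload === "string"
      ? new TextEncoder().encode(payload).length // UTF-8 size of string payloads
      : payload.byteLength; // binary payloads (ArrayBuffer, typed arrays)
  return bytes <= MAX_MESSAGE_BYTES;
}

console.log(fitsMessageLimit("hello")); // true
```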
For more information, please see the Durable Objects startup limits.
We've raised the Cloudflare Workflows account-level limits for all accounts on the Workers paid plan:
These increases mean you can create new instances up to 10x faster and run more workflow instances concurrently. To learn more and get started with Workflows, refer to the getting started guide.
If your application requires a higher limit, fill out the Limit Increase Request Form or contact your account team. Please refer to Workflows pricing for more information.
Two-factor authentication (2FA) is one of the best ways to protect your account from the risk of account takeover. Cloudflare has offered phishing-resistant 2FA options, including hardware-based keys (for example, a YubiKey) and app-based TOTP (time-based one-time password) options that use apps like Google Authenticator or Microsoft Authenticator. Unfortunately, while these solutions are very secure, they can be lost if you misplace the hardware key or lose the phone with the authenticator app. The result is that users sometimes get locked out of their accounts and need to contact support.
Today, we are announcing the addition of email as a 2FA factor for all Cloudflare accounts. Email 2FA is in wide use across the industry as a least common denominator for 2FA because it is low friction, loss resistant, and still improves security over username/password login only. We also know that most commercial email providers already require 2FA, so your email address is usually well protected already.
You can now enable email 2FA on the Cloudflare dashboard:
Cloudflare is critical infrastructure, and you should protect it as such. Review the following best practices and make sure you are doing your part to secure your account:
As Cloudflare's platform has grown, so has the need for precise, role-based access control. We’ve redesigned the Member Management experience in the Dashboard to help administrators more easily discover, assign, and refine permissions for specific principals.
Refreshed member invite flow
We overhauled the Invite Members UI to simplify inviting users and assigning permissions.

Refreshed Members Overview Page
We've updated the Members Overview Page to clearly display:

New Member Permission Policies Details View
We've created a new member details screen that shows all permission policies associated with a member, including policies inherited from group associations, making it easier to understand the effective permissions a member has.

Improved Member Permission Workflow
We redesigned the permission management experience to make it faster and easier for administrators to review roles and grant access.

Account-scoped Policies Restrictions Relaxed
Previously, customers could only associate a single account-scoped policy with a member. We've relaxed this restriction: administrators can now assign multiple account-scoped policies to the same member, bringing policy assignment behavior in line with user groups and providing greater flexibility in managing member permissions.
Cloudflare now provides two new request fields in the Ruleset engine that let you make decisions based on whether a request used TCP and the measured TCP round-trip time between the client and Cloudflare. These fields help you understand protocol usage across your traffic and build policies that respond to network performance. For example, you can distinguish TCP from QUIC traffic or route high latency requests to alternative origins when needed.
| Field | Type | Description |
|---|---|---|
| `cf.edge.client_tcp` | Boolean | Indicates whether the request used TCP. A value of `true` means the client connected using TCP instead of QUIC. |
| `cf.timings.client_tcp_rtt_msec` | Number | Reports the smoothed TCP round-trip time between the client and Cloudflare in milliseconds. For example, a value of `20` indicates roughly twenty milliseconds of RTT. |
Example filter expression:
```txt
cf.edge.client_tcp && cf.timings.client_tcp_rtt_msec < 100
```

More information can be found in the Rules language fields reference.
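To make the semantics of the example concrete, here is an illustrative translation into plain JavaScript. This is only a sketch of the boolean logic; in production the expression runs in the Rules language, not in a Worker:

```javascript
// Illustrative only: mimic the example filter expression in plain JavaScript.
// Matches TCP requests whose smoothed RTT is under 100 ms.
function matchesFilter(fields) {
  return fields["cf.edge.client_tcp"] &&
    fields["cf.timings.client_tcp_rtt_msec"] < 100;
}

// A TCP request with 20 ms RTT matches; a QUIC request does not.
console.log(matchesFilter({ "cf.edge.client_tcp": true, "cf.timings.client_tcp_rtt_msec": 20 })); // true
```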
You can now access preview URLs directly from the build details page, making it easier to test your changes when reviewing builds in the dashboard.

What's new
Cloudflare Access for private hostname applications can now secure traffic on all ports and protocols.
Previously, applying Zero Trust policies to private applications required the application to use HTTPS on port 443 and support Server Name Indication (SNI).
This update removes that limitation. As long as the application is reachable via a Cloudflare off-ramp, you can now enforce your critical security controls — like single sign-on (SSO), MFA, device posture, and variable session lengths — to any private application. This allows you to extend Zero Trust security to services like SSH, RDP, internal databases, and other non-HTTPS applications.

For example, you can now create a self-hosted application in Access for ssh.testapp.local running on port 22. You can then build a policy that only allows engineers in your organization to connect after they pass an SSO/MFA check and are using a corporate device.
This feature is generally available across all plans.
AI Search now supports reranking for improved retrieval quality and allows you to set the system prompt directly in your API requests.
You can now enable reranking to reorder retrieved documents based on their semantic relevance to the user’s query. Reranking helps improve accuracy, especially for large or noisy datasets where vector similarity alone may not produce the optimal ordering.
You can enable and configure reranking in the dashboard or directly in your API requests:
```js
const answer = await env.AI.autorag("my-autorag").aiSearch({
  query: "How do I train a llama to deliver coffee?",
  model: "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
  reranking: {
    enabled: true,
    model: "@cf/baai/bge-reranker-base",
  },
});
```

Previously, system prompts could only be configured in the dashboard. You can now define them directly in your API requests, giving you per-query control over behavior. For example:
```js
// Dynamically set query and system prompt in AI Search
async function getAnswer(query, tone) {
  const systemPrompt = `You are a ${tone} assistant.`;

  const response = await env.AI.autorag("my-autorag").aiSearch({
    query: query,
    system_prompt: systemPrompt,
  });

  return response;
}

// Example usage
const query = "What is Cloudflare?";
const tone = "friendly";

const answer = await getAnswer(query, tone);
console.log(answer);
```

Learn more about Reranking and System Prompt in AI Search.
Cloudflare CASB (Cloud Access Security Broker) now supports two new granular roles to provide more precise access control for your security teams:
These new roles help you better enforce the principle of least privilege. You can now grant specific members access to CASB security findings without assigning them broader permissions, such as the Super Administrator or Administrator roles.
To enable Data Loss Prevention (DLP) scans in CASB, account members will need the Cloudflare Zero Trust role.
You can find these new roles when inviting members or creating API tokens in the Cloudflare dashboard under Manage Account > Members.
To learn more about managing roles and permissions, refer to the Manage account members and roles documentation.
To give you precision and flexibility while creating policies to block unwanted traffic, we are introducing new, more granular application categories in the Gateway product.
We have added the following categories to provide more precise organization and allow for finer-grained policy creation, designed around how users interact with different types of applications:
The new categories are live now, but we are providing a transition period for existing applications to be fully remapped to these new categories.
The full remapping will be completed by January 30, 2026.
We encourage you to use this time to:
For more information on creating HTTP policies, refer to Applications and app types.
Logpush now supports integration with Microsoft Sentinel ↗. The new Azure Sentinel Connector, built on Microsoft's Codeless Connector Framework (CCF), is now available. This solution replaces the previous Azure Functions-based connector, offering significant improvements in security, data control, and ease of use. Logpush customers can send logs to Azure Blob Storage and configure the new Sentinel Connector to ingest those logs directly into Microsoft Sentinel.
This upgrade significantly streamlines log ingestion, improves security, and provides greater control:
Find the new solution here ↗ and refer to Cloudflare's developer documentation ↗ for more information on the connector, including setup steps, supported logs, and Microsoft's resources.
Radar now introduces Top-Level Domain (TLD) insights, providing visibility into popularity based on the DNS magnitude metric; detailed TLD information, including type, manager, DNSSEC support, RDAP support, and WHOIS data; and trends such as DNS query volume and geographic distribution observed by the 1.1.1.1 DNS resolver.
The following dimensions were added to the Radar DNS API, specifically, to the /dns/summary/{dimension} and /dns/timeseries_groups/{dimension} endpoints:
- `tld`: Top-level domain extracted from DNS queries; can also be used as a filter.
- `tld_dns_magnitude`: Top-level domain ranking by DNS magnitude.

And the following endpoints were added:
- `/tlds` - Lists all TLDs.
- `/tlds/{tld}` - Retrieves information about a specific TLD.
Learn more about the new Radar DNS insights in our blog post ↗, and check out the new Radar page ↗.
The Requests for Information (RFI) dashboard now shows users the number of tokens used by each submitted RFI, helping you understand token usage and how it relates to each request.

What’s new:
- Strategic Threat Research request type.

Cloudforce One subscribers can try it now in Application Security > Threat Intelligence > Requests for Information ↗.
Previously, if you wanted to develop or deploy a worker with attached resources, you'd have to first manually create the desired resources. Now, if your Wrangler configuration file includes a KV namespace, D1 database, or R2 bucket that does not yet exist on your account, you can develop locally and deploy your application seamlessly, without having to run additional commands.
Automatic provisioning is launching as an open beta, and we'd love to hear your feedback to help us make improvements! It currently works for KV, R2, and D1 bindings. You can disable the feature using the --no-x-provision flag.
To use this feature, update to wrangler@4.45.0 and add bindings to your config file without resource IDs, for example:
```json
{
  "kv_namespaces": [{ "binding": "MY_KV" }],
  "d1_databases": [{ "binding": "MY_DB" }],
  "r2_buckets": [{ "binding": "MY_R2" }]
}
```

`wrangler dev` will then automatically create these resources for you locally, and on your next run of `wrangler deploy`, Wrangler will call the Cloudflare API to create the requested resources and link them to your Worker.
Though resource IDs will be automatically written back to your Wrangler config file after resource creation, resources will stay linked across future deploys even without adding the resource IDs to the config file. This is especially useful for shared templates, which now no longer need to include account-specific resource IDs when adding a binding.
Developers can now programmatically retrieve a list of all file formats supported by the Markdown Conversion utility in Workers AI.
You can use the env.AI binding:
```js
await env.AI.toMarkdown().supported()
```

Or call the REST API:
```sh
curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/tomarkdown/supported \
  -H 'Authorization: Bearer {API_TOKEN}'
```

Both return a list of file formats that users can convert into Markdown:
```json
[
  {
    "extension": ".pdf",
    "mimeType": "application/pdf"
  },
  {
    "extension": ".jpeg",
    "mimeType": "image/jpeg"
  },
  ...
]
```

Learn more about our Markdown Conversion utility.
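As an illustrative follow-on, you might use the returned list to check client-side whether a file can be converted before uploading it. The sample array below merely mirrors the response shape shown above (it is not the full list), and the helper name is an assumption:

```javascript
// Illustrative: given the array returned by supported(), decide whether a
// filename can be converted to Markdown. Sample entries mirror the response
// shape above and are not the full list.
const supported = [
  { extension: ".pdf", mimeType: "application/pdf" },
  { extension: ".jpeg", mimeType: "image/jpeg" },
];

function canConvert(filename) {
  const lower = filename.toLowerCase();
  return supported.some((format) => lower.endsWith(format.extension));
}

console.log(canConvert("report.pdf")); // true
console.log(canConvert("notes.txt")); // false
```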