The new Network session analytics dashboard is now available in Cloudflare One. This dashboard provides visibility into your network traffic patterns, helping you understand how traffic flows through your Cloudflare One infrastructure.

- Analyze geographic distribution: View a world map showing where your network traffic originates, with a list of top locations by session count.
- Monitor key metrics: Track session count, total bytes transferred, and unique users.
- Identify connection issues: Analyze connection close reasons to troubleshoot network problems.
- Review protocol usage: See which network protocols (TCP, UDP, ICMP) are most used.
- Summary metrics: Session count, total bytes transferred, and unique users
- Traffic by location: World map visualization and location list with top traffic sources
- Top protocols: Breakdown of TCP, UDP, ICMP, and ICMPv6 traffic
- Connection close reasons: Insights into why sessions terminated (client closed, origin closed, timeouts, errors)
- Log in to Cloudflare One ↗.
- Go to Zero Trust > Insights > Dashboards.
- Select Network session analytics.
For more information, refer to the Network session analytics documentation.
Logpush has traditionally been great at delivering Cloudflare logs to a variety of destinations in JSON format. While JSON is flexible and easily readable, it can be inefficient to store and query at scale.
With this release, you can now send your logs directly to Pipelines to ingest, transform, and store them in R2 as Parquet files or Apache Iceberg tables managed by R2 Data Catalog. This shrinks your data footprint and lets you query your logs instantly with R2 SQL or any other query engine that supports Apache Iceberg or Parquet.
Pipelines SQL runs on each log record in-flight, so you can reshape your data before it is written. For example, you can drop noisy fields, redact sensitive values, or derive new columns:
```sql
INSERT INTO http_logs_sink
SELECT
  ClientIP,
  EdgeResponseStatus,
  to_timestamp_micros(EdgeStartTimestamp) AS event_time,
  upper(ClientRequestMethod) AS method,
  sha256(ClientIP) AS hashed_ip
FROM http_logs_stream
WHERE EdgeResponseStatus >= 400;
```

Pipelines SQL supports string functions, regex, hashing, JSON extraction, timestamp conversion, conditional expressions, and more. For the full list, refer to the Pipelines SQL reference.
To configure Pipelines as a Logpush destination, refer to Enable Cloudflare Pipelines.
R2 SQL is Cloudflare's serverless, distributed analytics query engine for querying Apache Iceberg ↗ tables stored in R2 Data Catalog.
R2 SQL now supports functions for querying JSON data stored in Apache Iceberg tables, an easier way to parse query plans with `EXPLAIN FORMAT JSON`, and querying tables without partition keys stored in R2 Data Catalog.

JSON functions extract and manipulate JSON values directly in SQL without client-side processing:
```sql
SELECT
  json_get_str(doc, 'name') AS name,
  json_get_int(doc, 'user', 'profile', 'level') AS level,
  json_get_bool(doc, 'active') AS is_active
FROM my_namespace.sales_data
WHERE json_contains(doc, 'email')
```

For a full list of available functions, refer to JSON functions.
`EXPLAIN FORMAT JSON` returns query execution plans as structured JSON for programmatic analysis and observability integrations:

```sh
npx wrangler r2 sql query "${WAREHOUSE}" "EXPLAIN FORMAT JSON SELECT * FROM logpush.requests LIMIT 10;"
```

The query returns a single `plan` column containing the structured plan:

```json
{
  "name": "CoalescePartitionsExec",
  "output_partitions": 1,
  "rows": 10,
  "size_approx": "310B",
  "children": [
    {
      "name": "DataSourceExec",
      "output_partitions": 4,
      "rows": 28951,
      "size_approx": "900.0KB",
      "table": "logpush.requests",
      "files": 7,
      "bytes": 900019,
      "projection": [
        "__ingest_ts",
        "CPUTimeMs",
        "DispatchNamespace",
        "Entrypoint",
        "Event",
        "EventTimestampMs",
        "EventType",
        "Exceptions",
        "Logs",
        "Outcome",
        "ScriptName",
        "ScriptTags",
        "ScriptVersion",
        "WallTimeMs"
      ],
      "limit": 10
    }
  ]
}
```

For more details, refer to EXPLAIN.
Unpartitioned Iceberg tables can now be queried directly, which is useful for smaller datasets or data without natural time dimensions. For tables with more than 1000 files, partitioning is still recommended for better performance.
Refer to Limitations and best practices for the latest guidance on using R2 SQL.
`@cf/moonshotai/kimi-k2.6` is now available on Workers AI, in partnership with Moonshot AI for Day 0 support. Kimi K2.6 is a native multimodal agentic model from Moonshot AI that advances practical capabilities in long-horizon coding, coding-driven design, proactive autonomous execution, and swarm-based task orchestration.

Built on a Mixture-of-Experts architecture with 1T total parameters and 32B active per token, Kimi K2.6 delivers frontier-scale intelligence with efficient inference. It scores competitively against GPT-5.4 and Claude Opus 4.6 on agentic and coding benchmarks, including BrowseComp (83.2), SWE-Bench Verified (80.2), and Terminal-Bench 2.0 (66.7).
- 262.1k token context window for retaining full conversation history, tool definitions, and codebases across long-running agent sessions
- Long-horizon coding with significant improvements on complex, end-to-end coding tasks across languages including Rust, Go, and Python
- Coding-driven design that transforms simple prompts and visual inputs into production-ready interfaces and full-stack workflows
- Agent swarm orchestration scaling horizontally to 300 sub-agents executing 4,000 coordinated steps for complex autonomous tasks
- Vision inputs for processing images alongside text
- Thinking mode with configurable reasoning depth
- Multi-turn tool calling for building agents that invoke tools across multiple conversation turns
If you are migrating from Kimi K2.5, note the following API changes:
- K2.6 uses `chat_template_kwargs.thinking` to control reasoning, replacing `chat_template_kwargs.enable_thinking`
- K2.6 returns reasoning content in the `reasoning` field, replacing `reasoning_content`
Use Kimi K2.6 through the Workers AI binding (`env.AI.run()`), the REST API at `/ai/run`, or the OpenAI-compatible endpoint at `/v1/chat/completions`. You can also use AI Gateway with any of these endpoints.

For more information, refer to the Kimi K2.6 model page and pricing.
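As a sketch of the migration notes above, the helper below builds an OpenAI-compatible chat request body for Kimi K2.6. The model ID, the `chat_template_kwargs.thinking` flag, the `reasoning` response field, and the `/v1/chat/completions` path come from this announcement; the helper name and overall shape are illustrative, not a confirmed client API.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Illustrative helper: assemble a request body for the
// OpenAI-compatible endpoint, using the K2.6 flag names.
function buildKimiRequest(messages: ChatMessage[], thinking: boolean) {
  return {
    model: "@cf/moonshotai/kimi-k2.6",
    messages,
    // K2.6: `thinking` replaces the former `enable_thinking` flag
    chat_template_kwargs: { thinking },
  };
}

const body = buildKimiRequest(
  [{ role: "user", content: "Summarize this repository" }],
  true,
);

// Example wiring (ACCOUNT_ID and API_TOKEN are placeholders; the
// response's reasoning content arrives in the `reasoning` field,
// not the former `reasoning_content`):
// await fetch(
//   `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/v1/chat/completions`,
//   {
//     method: "POST",
//     headers: {
//       Authorization: `Bearer ${API_TOKEN}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify(body),
//   },
// );
```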
Cloudflare's network now supports redirecting verified AI training crawlers to canonical URLs when they request deprecated or duplicate pages. When enabled via AI Crawl Control > Quick Actions, AI training crawlers that request a page with a canonical tag pointing elsewhere receive a 301 redirect to the canonical version. Humans, search engine crawlers, and AI Search agents continue to see the original page normally.
This feature leverages your existing `<link rel="canonical">` tags. No additional configuration is required beyond enabling the toggle. Available on Pro, Business, and Enterprise plans at no additional cost.

Refer to the Redirects for AI Training documentation for details.
AI Crawl Control now includes new tools to help you prepare your site for the agentic Internet—a web where AI agents are first-class citizens that discover and interact with content differently than human visitors.
The Metrics tab now includes a Content Format chart showing what content types AI systems request versus what your origin serves. Understanding these patterns helps you optimize content delivery for both human and agent consumption.
The Robots.txt tab has been renamed to Directives and now includes a link to check your site's Agent Readiness ↗ score.
Refer to our blog post on preparing for the agentic Internet ↗ for more on why these capabilities matter.
You can now achieve higher cache HIT rates and reduce origin load for origins hosted on public cloud providers with Smart Tiered Cache. By setting a cloud region hint for your origin, Cloudflare selects the optimal upper-tier data center for that cloud region, funneling all cache MISSes through a single location close to your origin.
Previously, Smart Tiered Cache could not reliably select an optimal upper tier for origins behind anycast or regional unicast networks commonly used by cloud providers. Origins on AWS, GCP, Azure, and Oracle Cloud would fall back to a multi-upper-tier topology, resulting in lower cache HIT rates and more requests reaching your origin.
Set a cloud region hint (for example, `aws/us-east-1` or `gcp/europe-west1`) for your origin IP or hostname. Smart Tiered Cache uses this hint along with real-time latency data to select a primary upper tier close to your cloud region, plus a fallback in a different location for resilience.

- Supported providers: AWS, GCP, Azure, and Oracle Cloud.
- All plans: Available on Free, Pro, Business, and Enterprise plans at no additional cost.
- Dashboard and API: Configure from Caching > Tiered Cache > Origin Configuration, or use the API and Terraform.
To get started, enable Smart Tiered Cache and set a cloud region hint for your origin in the Tiered Cache settings.
Radar adds three new features to the AI Insights ↗ page, expanding visibility into how AI bots, crawlers, and agents interact with the web.
The AI Insights page now includes an adoption of AI agent standards ↗ widget that tracks how websites adopt agent-facing standards. The data is filterable by domain category and updated weekly on Mondays. This data is also available through the Agent Readiness API reference.

URL Scanner ↗ reports now include an Agent readiness tab that evaluates a scanned URL against the criteria used by the Agent Readiness score tool ↗.

For more details, refer to the Agent Readiness blog post ↗.
A new savings gauge ↗ shows the median response-size reduction when serving Markdown instead of HTML to AI bots and crawlers. This highlights the bandwidth and token savings that Markdown for Agents provides.

For more details, refer to the Markdown for Agents API reference.
The new response status widget ↗ displays the distribution of HTTP response status codes returned to AI bots and crawlers. Results are groupable by individual status code (200, 403, 404) or by category (2xx, 3xx, 4xx, 5xx).
The same widget is available on each verified bot's detail page (only available for AI bots), for example Google ↗.

Explore all three features on the Cloudflare Radar AI Insights ↗ page.
New AI Search instances created after today will work differently. New instances come with built-in storage and a vector index, so you can upload a file, have it indexed immediately, and search it right away.
Additionally, new Workers bindings are now available for AI Search. The new namespace binding lets you create and manage instances at runtime, and the cross-instance search API lets you query across multiple instances in one call.
All new instances now come with built-in storage, which allows you to upload files directly using the Items API or the dashboard. No R2 buckets to set up, no external data sources to connect first.
```typescript
const instance = env.AI_SEARCH.get("my-instance");

// upload and wait for indexing to complete
const item = await instance.items.uploadAndPoll("faq.md", content);

// search immediately after indexing
const results = await instance.search({
  messages: [{ role: "user", content: "onboarding guide" }],
});
```

The new `ai_search_namespaces` binding replaces the previous `env.AI.autorag()` API provided through the `AI` binding. It gives your Worker access to all instances within a namespace and lets you create, update, and delete instances at runtime without redeploying.

```jsonc
// wrangler.jsonc
{
  "ai_search_namespaces": [
    {
      "binding": "AI_SEARCH",
      "namespace": "default",
    },
  ],
}
```

```typescript
// create an instance at runtime
const instance = await env.AI_SEARCH.create({
  id: "my-instance",
});
```

For migration details, refer to Workers binding migration. For more on namespaces, refer to Namespaces.
Within the new AI Search binding, you now have access to a Search and Chat API on the namespace level. Pass an array of instance IDs and get one ranked list of results back.
```typescript
const results = await env.AI_SEARCH.search({
  messages: [{ role: "user", content: "What is Cloudflare?" }],
  ai_search_options: {
    instance_ids: ["product-docs", "customer-abc123"],
  },
});
```

Refer to Namespace-level search for details.
AI Search now supports hybrid search and relevance boosting, giving you more control over how results are found and ranked.
Hybrid search combines vector (semantic) search with BM25 keyword search in a single query. Vector search finds chunks with similar meaning, even when the exact words differ. Keyword search matches chunks that contain your query terms exactly. When you enable hybrid search, both run in parallel and the results are fused into a single ranked list.
You can configure the tokenizer (`porter` for natural language, `trigram` for code), keyword match mode (`and` for precision, `or` for recall), and fusion method (`rrf` or `max`) per instance:

```typescript
const instance = await env.AI_SEARCH.create({
  id: "my-instance",
  index_method: { vector: true, keyword: true },
  fusion_method: "rrf",
  indexing_options: { keyword_tokenizer: "porter" },
  retrieval_options: { keyword_match_mode: "and" },
});
```

Refer to Search modes for an overview and Hybrid search for configuration details.
Relevance boosting lets you nudge search rankings based on document metadata. For example, you can prioritize recent documents by boosting on `timestamp`, or surface high-priority content by boosting on a custom metadata field like `priority`.

Configure up to 3 boost fields per instance or override them per request:
```typescript
const results = await env.AI_SEARCH.get("my-instance").search({
  messages: [{ role: "user", content: "deployment guide" }],
  ai_search_options: {
    retrieval: {
      boost_by: [
        { field: "timestamp", direction: "desc" },
        { field: "priority", direction: "desc" },
      ],
    },
  },
});
```

Refer to Relevance boosting for configuration details.
Artifacts is now in private beta. Artifacts is Git-compatible storage built for scale: create tens of millions of repos, fork from any remote, and hand off a URL to any Git client. It provides a versioned filesystem for storing and exchanging file trees across Workers, the REST API, and any Git client, running locally or within an agent.
You can read the announcement blog ↗ to learn more about what Artifacts does, how it works, and how to create repositories for your agents to use.
Artifacts has three API surfaces:
- Workers bindings (for creating and managing repositories)
- REST API (for creating and managing repos from any other compute platform)
- Git protocol (for interacting with repos)
As an example: you can use the Workers binding to create a repo and read back its remote URL:
```typescript
// Create a thousand, a million, or ten million repos: one for every agent,
// for every upstream branch, or every user.
const created = await env.PROD_ARTIFACTS.create("agent-007");
const remote = (await created.repo.info())?.remote;
```

Or, use the REST API to create a repo inside a namespace from your agent(s) running on any platform:
```sh
curl --request POST "https://artifacts.cloudflare.net/v1/api/namespaces/some-namespace/repos" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"name":"agent-007"}'
```

Any Git client that speaks smart HTTP can use the returned remote URL:
```sh
# Agents know git.
# Every repository can act as a git repo, allowing agents to interact with
# Artifacts the way they know best: using the git CLI.
git clone https://x:${REPO_TOKEN}@artifacts.cloudflare.net/some-namespace/agent-007.git
```

To learn more, refer to Get started, Workers binding, and Git protocol.
Workflows limits have been raised to the following:
| Limit | Previous | New |
| --- | --- | --- |
| Concurrent instances (running in parallel) | 10,000 | 50,000 |
| Instance creation rate | 100/second per account | 300/second per account, 100/second per workflow |
| Queued instances per Workflow¹ | 1 million | 2 million |

These increases apply to all users on the Workers Paid plan. Refer to the Workflows limits documentation for more details.

¹ Queued instances are instances that have been created or awoken and are waiting for a concurrency slot.
We are renaming Browser Rendering to Browser Run. The name Browser Rendering never fully captured what the product does. Browser Run lets you run full browser sessions on Cloudflare's global network, drive them with code or AI, record and replay sessions, crawl pages for content, debug in real time, and let humans intervene when your agent needs help.
Along with the rename, we have increased limits for Workers Paid plans and redesigned the Browser Run dashboard.
We have 4x-ed concurrency limits for Workers Paid plan users:
- Concurrent browsers: 30 → 120 per account
- New browser instances: 30 per minute → 1 per second
- REST API rate limits: recently increased from 3 to 10 requests per second
Rate limits across the limits page are now expressed in per-second terms, matching how they are enforced. No action is needed to benefit from the higher limits.
The redesigned dashboard ↗ now shows every request in a single Runs tab, not just browser sessions but also quick actions like screenshots, PDFs, markdown, and crawls. Filter by endpoint, view target URLs, status, and duration, and expand any row for more detail.

We are also shipping several new features:
- Live View, Human in the Loop, and Session Recordings - See what your agent is doing in real time, let humans step in when automation hits a wall, and replay any session after it ends.
- WebMCP - Websites can expose structured tools for AI agents to discover and call directly, replacing slow screenshot-analyze-click loops.
For the full story, read our Agents Week blog Browser Run: Give your agents a browser ↗.
When browser automation fails or behaves unexpectedly, it can be hard to understand what happened. We are shipping three new features in Browser Run (formerly Browser Rendering) to help:
- Live View for real-time visibility
- Human in the Loop for human intervention
- Session Recordings for replaying sessions after they end
Live View lets you see what your agent is doing in real time. The page, DOM, console, and network requests are all visible for any active browser session. Access Live View from the Cloudflare dashboard, via the hosted UI at `live.browser.run`, or using native Chrome DevTools.

When your agent hits a snag like a login page or unexpected edge case, it can hand off to a human instead of failing. With Human in the Loop, a human steps into the live browser session through Live View, resolves the issue, and hands control back to the script.
Today, you can step in by opening the Live View URL for any active session. Next, we are adding a handoff flow where the agent can signal that it needs help, notify a human to step in, then hand control back to the agent once the issue is resolved.

Session Recordings capture DOM state so you can replay any session after it ends. Enable recordings by passing `recording: true` when launching a browser. After the session closes, view the recording in the Cloudflare dashboard under Browser Run > Runs, or retrieve it via the API using the session ID. Next, we are adding the ability to inspect DOM state and console output at any point during the recording.
To get started, refer to the documentation for Live View, Human in the Loop, and Session Recording.
Browser Run (formerly Browser Rendering) now supports WebMCP ↗ (Web Model Context Protocol), a new browser API from the Google Chrome team.
The Internet was built for humans, so navigating as an AI agent today is unreliable. WebMCP lets websites expose structured tools for AI agents to discover and call directly. Instead of slow screenshot-analyze-click loops, agents can call website functions like `searchFlights()` or `bookTicket()` with typed parameters, making browser automation faster, more reliable, and less fragile.
With WebMCP, you can:
- Discover website tools - Use `navigator.modelContextTesting.listTools()` to see available actions on any WebMCP-enabled site
- Execute tools directly - Call `navigator.modelContextTesting.executeTool()` with typed parameters
- Handle human-in-the-loop interactions - Some tools pause for user confirmation before completing sensitive actions
WebMCP requires Chrome beta features. We have an experimental pool with browser instances running Chrome beta so you can test emerging browser features before they reach stable Chrome. To start a WebMCP session, add `lab=true` to your `/devtools/browser` request:

```sh
curl -X POST "https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering/devtools/browser?lab=true&keep_alive=300000" \
  -H "Authorization: Bearer {api_token}"
```

Combined with the recently launched CDP endpoint, AI agents can also use WebMCP. Connect an MCP client to Browser Run via CDP, and your agent can discover and call website tools directly. Here's the same hotel booking demo, this time driven by an AI agent through OpenCode:

For a step-by-step guide, refer to the WebMCP documentation.
Cloudflare Access now supports independent multi-factor authentication (MFA), allowing you to enforce MFA requirements without relying on your identity provider (IdP). With per-application and per-policy configuration, you can enforce stricter authentication methods like hardware security keys on sensitive applications without requiring them across your entire organization. This reduces the risk of MFA fatigue for your broader user population while adding additional security where it matters most.
This feature also addresses common gaps in IdP-based MFA, such as inconsistent MFA policies across different identity providers or the need for additional security layers beyond what the IdP provides.
Independent MFA supports the following authenticator types:
- Authenticator application — Time-based one-time passwords (TOTP) using apps like Google Authenticator, Microsoft Authenticator, or Authy.
- Security key — Hardware security keys such as YubiKeys.
- Biometrics — Built-in device authenticators including Apple Touch ID, Apple Face ID, and Windows Hello.
You can configure MFA requirements at three levels:
| Level | Description |
| --- | --- |
| Organization | Enforce MFA by default for all applications in your account. |
| Application | Require or turn off MFA for a specific application. |
| Policy | Require or turn off MFA for users who match a specific policy. |

Settings at lower levels (policy) override settings at higher levels (organization), giving you granular control over MFA enforcement.
Users enroll their authenticators through the App Launcher. To help with onboarding, administrators can share a direct enrollment link: `<your-team-name>.cloudflareaccess.com/AddMfaDevice`.

To get started with Independent MFA, refer to Independent MFA.
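Because the enrollment link follows a predictable pattern, tooling can generate it per team. A tiny illustrative sketch (the function name is ours, and the `https` scheme is an assumption; only the hostname pattern and `/AddMfaDevice` path come from this announcement):

```typescript
// Build the Independent MFA enrollment link for a Zero Trust team name.
// Path "/AddMfaDevice" is from the announcement; "https://" is assumed.
function mfaEnrollmentLink(teamName: string): string {
  return `https://${teamName}.cloudflareaccess.com/AddMfaDevice`;
}
```

For example, `mfaEnrollmentLink("your-team-name")` yields `https://your-team-name.cloudflareaccess.com/AddMfaDevice`.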
We are excited to announce two major capability upgrades for Agent Lee, the AI co-pilot built directly into the Cloudflare dashboard. Agent Lee is designed to understand your specific account configuration, and with this release, it moves from a passive advisor to an active assistant that can help you manage your infrastructure and visualize your data through natural language.
Agent Lee can now perform changes on your behalf across your Cloudflare account. Whether you need to update DNS records, modify SSL/TLS settings, or configure Workers routes, you can simply ask.
To ensure security and accuracy, every write operation requires explicit user approval. Before any change is committed, Agent Lee will present a summary of the proposed action in plain language. No action is taken until you select Confirm, and this approval requirement is enforced at the infrastructure level to prevent unauthorized changes.
Example requests:
- "Add an A record for blog.example.com pointing to 192.0.2.10."
- "Enable Always Use HTTPS on my zone."
- "Set the SSL mode for example.com to Full (strict)."
Understanding your traffic and security trends is now as easy as asking a question. Agent Lee now features Generative UI, allowing it to render inline charts and structured data visualizations directly within the chat interface using your actual account telemetry.
Example requests:
- "Show me a chart of my traffic over the last 7 days."
- "What does my error rate look like for the past 24 hours?"
- "Graph my cache hit rate for example.com this week."
These features are currently available in Beta for all users on the Free plan. To get started, log in to the Cloudflare dashboard ↗ and select Ask AI in the upper right corner.
To learn more about how to interact with your account using AI, refer to the Agent Lee documentation.
The Cloudflare One dashboard now features redesigned builders for two core workflows: creating Gateway policies and configuring self-hosted Access applications.
The Gateway rule builder now features a redesigned user experience, bringing it in line with the Access policy builder experience. Improvements include:
- Streamlined UX with clearer states and improved user interactions
- Wirefilter editing for viewing and editing Gateway rules directly from wirefilter expressions
- Preview state to review the impact of your policy in a simple graphic

For more information, refer to Traffic policies.
The self-hosted Access application builder now offers a simplified creation workflow with fewer steps from setup to save. Improvements include:
- New application selection experience that makes it easier to choose the right application type before you begin
- Streamlined creation flow with fewer clicks to build and save an application
- Inline policy creation for building Access policies directly within the application creation flow
- Preview state to understand how your policies enforce user access before saving

For more information, refer to self-hosted applications.
You can now specify placement constraints to control where your Containers run.
| Constraint | Values | Use case |
| --- | --- | --- |
| `regions` | `ENAM`, `WNAM`, `EEUR`, `WEUR` | Geographic placement |
| `jurisdiction` | `eu`, `fedramp` | Compliance boundaries |

Use `regions` to limit placement to specific geographic areas. Use `jurisdiction` to restrict containers to compliance boundaries: `eu` maps to European regions (EEUR, WEUR) and `fedramp` maps to North American regions (ENAM, WNAM).

Refer to Containers architecture for more details on placement.
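In wrangler configuration, this might look like the sketch below. The region and jurisdiction values come from the table above, but the `constraints` field name and its placement inside the container definition are assumptions, not confirmed syntax — check the Containers configuration reference for the exact shape.

```jsonc
// wrangler.jsonc — illustrative only; "constraints" and its placement
// are assumptions, while the values come from the table above.
{
  "containers": [
    {
      "class_name": "MyContainer",
      "image": "./Dockerfile",
      "constraints": {
        // Geographic placement: only run in Eastern/Western Europe...
        "regions": ["EEUR", "WEUR"],
        // ...or, alternatively, a compliance boundary:
        // "jurisdiction": "eu",
      },
    },
  ],
}
```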
The last seen timestamp for Cloudflare One Client devices is now more consistent across the dashboard. IT teams will see more consistent information about the most recent client event between a device and Cloudflare's network.
Cloudflare has added new fields to multiple Logpush datasets:
The following Gateway and Zero Trust datasets now include a `TenantID` field:

- Gateway DNS: Identifies the tenant ID of the DNS request, if it exists.
- Gateway HTTP: Identifies the tenant ID of the HTTP request, if it exists.
- Gateway Network: Identifies the tenant ID of the network session, if it exists.
- Zero Trust Network Sessions: Identifies the tenant ID of the network session, if it exists.
The following datasets now include Firewall for AI fields:
- `FirewallForAIInjectionScore`: The score indicating the likelihood of a prompt injection attack in the request.
- `FirewallForAIPIICategories`: List of PII categories detected in the request.
- `FirewallForAITokenCount`: The number of tokens in the request.
- `FirewallForAIUnsafeTopicCategories`: List of unsafe topic categories detected in the request.
For the complete field definitions for each dataset, refer to Logpush datasets.
Privacy Proxy metrics are now queryable through Cloudflare's GraphQL Analytics API, the new default method for accessing Privacy Proxy observability data. All metrics are available through a single endpoint:
```sh
curl https://api.cloudflare.com/client/v4/graphql \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{
    "query": "{ viewer { accounts(filter: { accountTag: $accountTag }) { privacyProxyRequestMetricsAdaptiveGroups(filter: { date_geq: $startDate, date_leq: $endDate }, limit: 10000, orderBy: [date_ASC]) { count dimensions { date } } } } }",
    "variables": {
      "accountTag": "<YOUR_ACCOUNT_TAG>",
      "startDate": "2026-04-04",
      "endDate": "2026-04-06"
    }
  }'
```

Four GraphQL nodes are now live, providing aggregate metrics across all key dimensions of your Privacy Proxy deployment:
- `privacyProxyRequestMetricsAdaptiveGroups`: Request volume, error rates, status codes, and proxy status breakdowns.
- `privacyProxyIngressConnMetricsAdaptiveGroups`: Client-to-proxy connection counts, bytes transferred, and latency percentiles.
- `privacyProxyEgressConnMetricsAdaptiveGroups`: Proxy-to-origin connection counts, bytes transferred, and latency percentiles.
- `privacyProxyAuthMetricsAdaptiveGroups`: Authentication attempt counts by method and result.
All nodes support filtering by time, data center (`coloCode`), and endpoint, with additional node-specific dimensions such as transport protocol and authentication method.

OpenTelemetry-based metrics export remains available, but the GraphQL Analytics API is now the recommended default: a plug-and-play method that requires no collector infrastructure, saving engineering overhead.
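The same request can be issued from a Worker or script. The sketch below reuses the query from the curl example above; the helper name and the fetch wiring are illustrative, not an official client:

```typescript
// Reuses the GraphQL query from the curl example; helper name is ours.
const PRIVACY_PROXY_QUERY =
  "{ viewer { accounts(filter: { accountTag: $accountTag }) { privacyProxyRequestMetricsAdaptiveGroups(filter: { date_geq: $startDate, date_leq: $endDate }, limit: 10000, orderBy: [date_ASC]) { count dimensions { date } } } } }";

function buildGraphqlBody(
  accountTag: string,
  startDate: string,
  endDate: string,
): string {
  return JSON.stringify({
    query: PRIVACY_PROXY_QUERY,
    variables: { accountTag, startDate, endDate },
  });
}

// Example wiring (API_TOKEN and the account tag are placeholders):
// const res = await fetch("https://api.cloudflare.com/client/v4/graphql", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${API_TOKEN}`,
//     "Content-Type": "application/json",
//   },
//   body: buildGraphqlBody("<YOUR_ACCOUNT_TAG>", "2026-04-04", "2026-04-06"),
// });
```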
This week's release introduces a new detection for a critical Remote Code Execution (RCE) vulnerability in Mesop (CVE-2026-33057), alongside protections for high-impact vulnerabilities in Cisco Secure Firewall Management Center (CVE-2026-20079) and FortiClient EMS (CVE-2026-21643). Additionally, this release includes an update to our existing React Server DoS coverage to address recently identified resource exhaustion vectors (CVE-2026-23869).
Key Findings
-
Cisco Secure FMC (CVE-2026-20079): A vulnerability in the web-based management interface of Cisco Secure Firewall Management Center (FMC) that allows an unauthenticated, remote attacker to execute arbitrary commands or bypass security filters.
-
FortiClient EMS (CVE-2026-21643): A critical vulnerability in the FortiClient EMS permitting unauthorized access or administrative configuration manipulation via crafted HTTP requests.
-
Mesop (CVE-2026-33057): A vulnerability in the Mesop Python-based UI framework where unauthenticated attackers can execute arbitrary code by sending specially crafted, Base64-encoded payloads in the request body.
Impact
Successful exploitation of these vulnerabilities could allow unauthenticated attackers to execute arbitrary code, gain administrative control over network management infrastructure, or trigger server-side resource exhaustion. Administrators are strongly encouraged to apply official vendor updates.
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| Cloudflare Managed Ruleset | | N/A | Cisco Secure FMC - RCE via upgradeReadinessCall - CVE:CVE-2026-20079 | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | | N/A | FortiClient EMS - Pre-Auth SQL Injection - CVE:CVE-2026-21643 | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | | N/A | Mesop - Remote Code Execution - Base64 Payload - CVE:CVE-2026-33057 | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | | N/A | React Server - DOS - CVE:CVE-2026-23864 - 1 - Beta | Log | Block | This rule has been merged into the original rule "React Server - DOS - CVE:CVE-2026-23864 - 1" (ID: ) |
| Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - Link Tag - URI (beta) | N/A | Disabled | This is a new detection. |
| Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - Embed Tag - URI (beta) | N/A | Disabled | This is a new detection. |

| Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| 2026-04-15 | 2026-04-20 | Log | N/A | | Command Injection - Generic 8 - uri - Beta | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Disabled | N/A | | Command Injection - Generic 8 - body - Beta | This is a new detection. This rule will be merged into the original rule "Command Injection - Generic 8" (ID: ) |
| 2026-04-15 | 2026-04-20 | Log | N/A | | MySQL - SQLi - Executable Comment - Beta | This is a new detection. This rule will be merged into the original rule "MySQL - SQLi - Executable Comment" (ID: ) |
| 2026-04-15 | 2026-04-20 | Log | N/A | | MySQL - SQLi - Executable Comment - Headers | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | MySQL - SQLi - Executable Comment - URI | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | Magento 2 - Unrestricted file upload - 2 | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | Apache ActiveMQ - Remote Code Execution - CVE:CVE-2026-34197 | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | SQLi - Probing - uri - Beta | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | SQLi - Probing - header - Beta | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Disabled | N/A | | SQLi - Probing - body - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - Probing" (ID: ) |
| 2026-04-15 | 2026-04-20 | Log | N/A | | SQLi - Sleep Function - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - Sleep Function" (ID: ) |
| 2026-04-15 | 2026-04-20 | Log | N/A | | SQLi - Sleep Function - Headers | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | SQLi - Sleep Function - URI | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | XSS, HTML Injection - Embed Tag - Headers (beta) | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | XSS, HTML Injection - IFrame Tag - Src and Srcdoc Attributes - Headers (beta) | This is a new detection. |
| 2026-04-15 | 2026-04-20 | Log | N/A | | XSS, HTML Injection - Link Tag - Headers (beta) | This is a new detection. |
Account-level DLP settings are now available in Cloudflare One. You can now configure advanced DLP settings at the account level, including OCR, AI context analysis, and payload masking. This provides consistent enforcement across all DLP profiles and simplifies configuration management.
Key changes:
- Consistent enforcement: Settings configured at the account level apply to all DLP profiles
- Simplified migration: Settings enabled on any profile are automatically migrated to account level
- Deprecation notice: Profile-level advanced settings will be deprecated in a future release
Migration details:
During the migration period, if a setting is enabled on any profile, it will automatically be enabled at the account level. This means profiles that previously had a setting disabled may now have it enabled if another profile in the account had it enabled.
Settings are evaluated using OR logic - a setting is enabled if it is turned on at either the account level or the profile level. However, profile-level settings cannot be enabled when the account-level setting is off.
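The migration rule above can be sketched as a one-line reduction; the function name is illustrative, not a real API:

```typescript
// Migration sketch: a setting becomes enabled at the account level if
// any profile in the account had it enabled. Function name is ours.
function migrateToAccountLevel(profileSettings: boolean[]): boolean {
  return profileSettings.some((enabled) => enabled);
}

// migrateToAccountLevel([false, true, false]) → true
// migrateToAccountLevel([false, false]) → false
```

This is why a profile that previously had a setting disabled may see it enabled after migration: one other profile with the setting on is enough to enable it account-wide.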
For more details, refer to the DLP advanced settings documentation.