
Changelog

New updates and improvements at Cloudflare.

Core platform
  1. Redesigned "Get Help" Portal for faster, personalized help

    Cloudflare has officially launched a redesigned "Get Help" Support Portal to eliminate friction and get you to a resolution faster. Previously, navigating support meant clicking through multiple tiles, categorizing your own technical issues across 50+ conditional fields, and translating your problem into Cloudflare's internal taxonomy.

    The new experience replaces that complexity with a personalized front door built around your specific account plan. Whether you are under a DDoS attack or have a simple billing question, the portal now presents a single, clean page that surfaces the direct paths available to you — such as "Ask AI", "Chat with a human", or "Community" — without the manual triage.

    What's New

    • One Page, Clear Choices: No more navigating a grid of overlapping categories. The portal now uses action cards tailored to your plan (Free, Pro, Business, or Enterprise), ensuring you only see the support channels you can actually use.
    • A Radically Simpler Support Form: We've reduced the ticket submission process from four+ screens and 50+ fields to a single screen with five critical inputs. You describe the issue in your own words, and our backend handles the categorization.
    • AI-Driven Triage: Using Cloudflare Workers AI and Vectorize, the portal now automatically generates case subjects and predicts product categories.

    Moving complexity to the backend

    Behind the scenes, we've moved the complexity from the user to our own developer stack. When you describe an issue, we use semantic embeddings to capture intent rather than just keywords.

    By leveraging case-based reasoning, our system compares your request against millions of resolved cases to route your inquiry to the specialist best equipped to help. This ensures that while the front-end experience is simpler for you, the back-end routing is more accurate than ever.
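The routing described above rests on embedding similarity. As a rough illustration only (this is not Cloudflare's implementation), cosine similarity over embedding vectors can rank previously resolved cases by closeness to a new request; the `nearest_case` helper and the case records below are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest_case(query_vec, resolved_cases):
    """Return the resolved case whose embedding is closest to the query."""
    return max(resolved_cases, key=lambda c: cosine_similarity(query_vec, c["embedding"]))
```

In a production system the resolved-case embeddings would live in a vector index (such as Vectorize) rather than a Python list.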

    To learn more, refer to the Support documentation or select Get Help directly in the Cloudflare Dashboard.

  1. We're announcing the public beta of Organizations for enterprise customers: a new top-level Cloudflare container that lets you manage multiple accounts, members, analytics, and shared policies from one centralized location.

    What's New

    Organizations [BETA]: Organizations are a new top-level container for centrally managing multiple accounts. Each Organization supports up to 500 accounts and 500 zones, giving larger teams a single place to administer resources at scale.

    Self-serve onboarding: Enterprise customers can create an Organization in the dashboard and assign accounts where they are already Super Administrators.

    Centralized Account Management: At launch, every Organization member has the Organization Super Admin role. Organization Super Admins can invite other users and implicitly manage any child account under the Organization.

    Implicit access: Members of an Organization automatically receive Super Administrator permissions across child accounts, removing the need for explicit membership on each account. Additional Org-level roles will be available over the course of the year.

    Unified analytics: View, filter, and download aggregate HTTP analytics across all Organization child accounts from a single dashboard for centralized visibility into traffic patterns and security events.

    Terraform provider support: Manage Organizations with infrastructure as code from day one. Provision organizations, assign accounts, and configure settings programmatically with the Cloudflare Terraform provider.

    Shared policies: Share WAF or Gateway policies across multiple accounts within your Organization to simplify centralized policy management.


  1. Cloudflare has added a new field to the Gateway DNS Logpush dataset:

    • ResponseTimeMs: Total response time of the DNS request in milliseconds.

    For the complete field definitions, refer to Gateway DNS dataset.

  1. Cloudflare Logpush now supports BigQuery as a native destination.

    Logs from Cloudflare can be sent to Google Cloud BigQuery via Logpush. The destination can be configured through the Logpush UI in the Cloudflare dashboard or by using the Logpush API.

    For more information, refer to the Destination Configuration documentation.

  1. Two new fields are now available in rule expressions that surface Layer 4 transport telemetry from the client connection. Together with the existing cf.timings.client_tcp_rtt_msec field, these fields give you a complete picture of connection quality for both TCP and QUIC traffic — enabling transport-aware rules without requiring any client-side changes.

    Previously, QUIC RTT and delivery rate data was only available via the Server-Timing: cfL4 response header. These new fields make the same data available directly in rule expressions, so you can use them in Transform Rules, WAF Custom Rules, and other phases that support dynamic fields.

    New fields

    | Field | Type | Description |
    | --- | --- | --- |
    | cf.timings.client_quic_rtt_msec | Integer | The smoothed QUIC round-trip time (RTT) between Cloudflare and the client in milliseconds. Only populated for QUIC (HTTP/3) connections. Returns 0 for TCP connections. |
    | cf.edge.l4.delivery_rate | Integer | The most recent data delivery rate estimate for the client connection, in bytes per second. Returns 0 when L4 statistics are not available for the request. |

    Example: Route slow connections to a lightweight origin

    Use a request header transform rule to tag requests from high-latency connections, so your origin can serve a lighter page variant:

    Rule expression:

    cf.timings.client_tcp_rtt_msec > 200 or cf.timings.client_quic_rtt_msec > 200

    Header modifications:

    | Operation | Header name | Value |
    | --- | --- | --- |
    | Set | X-High-Latency | true |

    Example: Match low-bandwidth connections

    cf.edge.l4.delivery_rate > 0 and cf.edge.l4.delivery_rate < 100000

    For more information, refer to Request Header Transform Rules and the fields reference.

  1. Logpush now supports higher-precision timestamp formats for log output. You can configure jobs to output timestamps at millisecond or nanosecond precision. This is available in both the Logpush UI in the Cloudflare dashboard and the Logpush API.

    To use the new formats, set timestamp_format in your Logpush job's output_options:

    • rfc3339ms — example: 2024-02-17T23:52:01.123Z
    • rfc3339ns — example: 2024-02-17T23:52:01.123456789Z

    Default timestamp formats apply unless explicitly set. The dashboard defaults to rfc3339 and the API defaults to unixnano.
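For illustration, the two formats can be reproduced in Python (a sketch of the output shape only; Logpush performs the actual formatting, and the `extra_ns` parameter is a workaround for `datetime` carrying only microseconds):

```python
from datetime import datetime, timezone

def rfc3339ms(ts: datetime) -> str:
    """RFC 3339 with millisecond precision, e.g. 2024-02-17T23:52:01.123Z."""
    return ts.strftime("%Y-%m-%dT%H:%M:%S.") + f"{ts.microsecond // 1000:03d}Z"

def rfc3339ns(ts: datetime, extra_ns: int = 0) -> str:
    """RFC 3339 with nanosecond precision; the sub-microsecond digits are
    supplied separately because datetime stores only microseconds."""
    return ts.strftime("%Y-%m-%dT%H:%M:%S.") + f"{ts.microsecond:06d}{extra_ns:03d}Z"
```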

    For more information, refer to the Log output options documentation.

  1. Cloudflare now exposes four new fields in the Transform Rules phase that encode client certificate data in RFC 9440 format. Previously, forwarding client certificate information to your origin required custom parsing of PEM-encoded fields or non-standard HTTP header formats. These new fields produce output in the standardized Client-Cert and Client-Cert-Chain header format defined by RFC 9440, so your origin can consume them directly without any additional decoding logic.

    Each certificate is DER-encoded, Base64-encoded, and wrapped in colons. For example, :MIIDsT...Vw==:. A chain of intermediates is expressed as a comma-separated list of such values.
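As a sketch of that encoding, assuming you already hold the DER bytes of each certificate (helper names are illustrative):

```python
import base64

def rfc9440_value(der: bytes) -> str:
    """Encode one DER certificate as an RFC 9440 byte sequence:
    Base64-encoded and wrapped in colons."""
    return ":" + base64.b64encode(der).decode("ascii") + ":"

def rfc9440_chain(ders: list) -> str:
    """Encode an intermediate chain as a comma-separated list, leaf-most first."""
    return ", ".join(rfc9440_value(d) for d in ders)
```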

    New fields

    | Field | Type | Description |
    | --- | --- | --- |
    | cf.tls_client_auth.cert_rfc9440 | String | The client leaf certificate in RFC 9440 format. Empty if no client certificate was presented. |
    | cf.tls_client_auth.cert_rfc9440_too_large | Boolean | true if the leaf certificate exceeded 10 KB and was omitted. In practice this will almost always be false. |
    | cf.tls_client_auth.cert_chain_rfc9440 | String | The intermediate certificate chain in RFC 9440 format as a comma-separated list. Empty if no intermediate certificates were sent or if the chain exceeded 16 KB. |
    | cf.tls_client_auth.cert_chain_rfc9440_too_large | Boolean | true if the intermediate chain exceeded 16 KB and was omitted. |

    The chain encoding follows the same ordering as the TLS handshake: the certificate closest to the leaf appears first, working up toward the trust anchor. The root certificate is not included.

    Example: Forwarding client certificate headers to your origin server

    Add a request header transform rule to set the Client-Cert and Client-Cert-Chain headers on requests forwarded to your origin server. For example, to forward headers for verified, non-revoked certificates:

    Rule expression:

    cf.tls_client_auth.cert_verified and not cf.tls_client_auth.cert_revoked

    Header modifications:

    | Operation | Header name | Value |
    | --- | --- | --- |
    | Set | Client-Cert | cf.tls_client_auth.cert_rfc9440 |
    | Set | Client-Cert-Chain | cf.tls_client_auth.cert_chain_rfc9440 |

    To get the most out of these fields, upload your client CA certificate to Cloudflare so that Cloudflare validates the client certificate at the edge and populates cf.tls_client_auth.cert_verified and cf.tls_client_auth.cert_revoked.

    For more information, refer to Mutual TLS authentication, Request Header Transform Rules, and the fields reference.

  1. AI Crawl Control now supports extending the underlying WAF rule with custom modifications. Any changes you make directly in the WAF custom rules editor — such as adding path-based exceptions, extra user agents, or additional expression clauses — are preserved when you update crawler actions in AI Crawl Control.

    If the WAF rule expression has been modified in a way AI Crawl Control cannot parse, a warning banner appears on the Crawlers page with a link to view the rule directly in WAF.

    For more information, refer to WAF rule management.

  1. In the Cloudflare One dashboard, the overview page for a specific Cloudflare Tunnel now shows all replicas of that tunnel and supports streaming logs from multiple replicas at once.

    View replicas and stream logs from multiple connectors

    Previously, you could only stream logs from one replica at a time. With this update:

    • Replicas on the tunnel overview — All active replicas for the selected tunnel now appear on that tunnel's overview page under Connectors. Select any replica to stream its logs.
    • Multi-connector log streaming — Stream logs from multiple replicas simultaneously, making it easier to correlate events across your infrastructure during debugging or incident response.

    To try it out, log in to Cloudflare One and go to Networks > Connectors > Cloudflare Tunnels. Select View logs next to the tunnel you want to monitor.

    For more information, refer to Tunnel log streams and Deploy replicas.

  1. Service Key authentication for the Cloudflare API is deprecated. Service Keys will stop working on September 30, 2026.

    API Tokens replace Service Keys with fine-grained permissions, expiration, and revocation.

    What you need to do

    Replace any use of the X-Auth-User-Service-Key header with an API Token scoped to the permissions your integration requires.
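In practice this is a one-header swap. A minimal sketch (the token value here is a placeholder, not a real credential):

```python
def auth_headers(api_token: str) -> dict:
    """Build Cloudflare API request headers using an API Token instead of
    the deprecated X-Auth-User-Service-Key header."""
    # Before (deprecated): {"X-Auth-User-Service-Key": service_key}
    # After: a scoped API Token sent as a Bearer credential.
    return {"Authorization": f"Bearer {api_token}"}
```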

    If you use cloudflared, update to a version from November 2022 or later. These versions already use API Tokens.

    If you use origin-ca-issuer, update to a version that supports API Token authentication.

    For more information, refer to API deprecations.

  1. You can now manage Cloudflare Tunnels directly from Wrangler, the CLI for the Cloudflare Developer Platform. The new wrangler tunnel commands let you create, run, and manage tunnels without leaving your terminal.


    Available commands:

    • wrangler tunnel create — Create a new remotely managed tunnel.
    • wrangler tunnel list — List all tunnels in your account.
    • wrangler tunnel info — Display details about a specific tunnel.
    • wrangler tunnel delete — Delete a tunnel.
    • wrangler tunnel run — Run a tunnel using the cloudflared daemon.
    • wrangler tunnel quick-start — Start a free, temporary tunnel without an account using Quick Tunnels.

    Wrangler handles downloading and managing the cloudflared binary automatically. On first use, you will be prompted to download cloudflared to a local cache directory.

    These commands are currently experimental and may change without notice.

    To get started, refer to the Wrangler tunnel commands documentation.

  1. Cloudflare dashboard SCIM provisioning now supports Authentik as an identity provider, joining Okta and Microsoft Entra ID as explicitly supported providers.

    Customers can now sync user and group information from Authentik to Cloudflare, apply Permission Policies to those groups, and manage the lifecycle of users and groups directly from Authentik.


  1. Cloudflare dashboard SCIM provisioning operations are now captured in Audit Logs v2, giving you visibility into user and group changes made by your identity provider.

    SCIM audit logging

    Logged actions:

    | Action Type | Description |
    | --- | --- |
    | Create SCIM User | User provisioned from IdP |
    | Replace SCIM User | User fully replaced (PUT) |
    | Update SCIM User | User attributes modified (PATCH) |
    | Delete SCIM User | Member deprovisioned |
    | Create SCIM Group | Group provisioned from IdP |
    | Update SCIM Group | Group membership or attributes modified |
    | Delete SCIM Group | Group deprovisioned |

    For more details, refer to the Audit Logs v2 documentation.

  1. The cf.timings.worker_msec field is now available in the Ruleset Engine. This field reports the wall-clock time that a Cloudflare Worker spent handling a request, measured in milliseconds.

    You can use this field to identify slow Worker executions, detect performance regressions, or build rules that respond differently based on Worker processing time, such as logging requests that exceed a latency threshold.

    Field details

    | Field | Type | Description |
    | --- | --- | --- |
    | cf.timings.worker_msec | Integer | The time spent executing a Cloudflare Worker in milliseconds. Returns 0 if no Worker was invoked. |

    Example filter expression:

    cf.timings.worker_msec > 500

    For more information, refer to the Fields reference.

  1. Cloudflare-generated 1xxx error responses now include a standard Retry-After HTTP header when the error is retryable. Agents and HTTP clients can read the recommended wait time from response headers alone — no body parsing required.

    Changes

    Seven retryable error codes now emit Retry-After:

    | Error code | Retry-After (seconds) | Error name |
    | --- | --- | --- |
    | 1004 | 120 | DNS resolution error |
    | 1005 | 120 | Banned zone |
    | 1015 | 30 | Rate limited |
    | 1033 | 120 | Argo Tunnel error |
    | 1038 | 60 | HTTP headers limit exceeded |
    | 1200 | 60 | Cache connection limit |
    | 1205 | 5 | Too many redirects |

    The header value matches the existing retry_after body field in JSON and Markdown responses.

    If a WAF rate limiting rule has already set a dynamic Retry-After value on the response, that value takes precedence.
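A client can act on the header without touching the body. A minimal sketch of delta-seconds handling (HTTP-date values, which the Retry-After spec also allows, simply fall back to the default here):

```python
def retry_after_seconds(headers: dict, default: int = 60) -> int:
    """Read Retry-After as delta-seconds, falling back to a default
    when the header is absent or not a plain integer."""
    value = headers.get("Retry-After", "")
    return int(value) if value.isdigit() else default
```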

    Availability

    Available for all zones on all plans.

    Verify

    Check for the header on any retryable error:

    curl -s --compressed -D - -o /dev/null -H "Accept: application/json" -A "TestAgent/1.0" -H "Accept-Encoding: gzip, deflate" "<YOUR_DOMAIN>/cdn-cgi/error/1015" | grep -i retry-after


  1. Cloudflare-generated 1xxx errors now return structured JSON when clients send Accept: application/json or Accept: application/problem+json. JSON responses follow RFC 9457 (Problem Details for HTTP APIs), so any HTTP client that understands Problem Details can parse the base members without Cloudflare-specific code.

    Breaking change

    The Markdown frontmatter field http_status has been renamed to status. Agents consuming Markdown frontmatter should update parsers accordingly.

    Changes

    JSON format. Clients sending Accept: application/json or Accept: application/problem+json now receive a structured JSON object with the same operational fields as Markdown frontmatter, plus RFC 9457 standard members.

    RFC 9457 standard members (JSON only):

    • type — URI pointing to Cloudflare documentation for the specific error code
    • status — HTTP status code (matching the response status)
    • title — short, human-readable summary
    • detail — human-readable explanation specific to this occurrence
    • instance — Ray ID identifying this specific error occurrence

    Field renames:

    • http_status -> status (JSON and Markdown)
    • what_happened -> detail (JSON only — Markdown prose sections are unchanged)

    Content-Type mirroring. Clients sending Accept: application/problem+json receive Content-Type: application/problem+json; charset=utf-8 back; Accept: application/json receives application/json; charset=utf-8. Same body in both cases.
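Because the JSON follows RFC 9457, a generic Problem Details parser is sufficient. A sketch (the sample body in the test is illustrative, not a captured Cloudflare response):

```python
import json

def parse_problem(body: str) -> dict:
    """Extract the RFC 9457 base members from a problem details body,
    ignoring any extension members."""
    doc = json.loads(body)
    return {k: doc.get(k) for k in ("type", "status", "title", "detail", "instance")}
```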

    Negotiation behavior

    | Request header sent | Response format |
    | --- | --- |
    | Accept: application/json | JSON (application/json content type) |
    | Accept: application/problem+json | JSON (application/problem+json content type) |
    | Accept: application/json, text/markdown;q=0.9 | JSON |
    | Accept: text/markdown | Markdown |
    | Accept: text/markdown, application/json | Markdown (equal q, first-listed wins) |
    | Accept: */* | HTML (default) |

    Availability

    Available now for Cloudflare-generated 1xxx errors.

    Get started

    curl -s --compressed -H "Accept: application/json" -A "TestAgent/1.0" -H "Accept-Encoding: gzip, deflate" "<YOUR_DOMAIN>/cdn-cgi/error/1015" | jq .
    curl -s --compressed -H "Accept: application/problem+json" -A "TestAgent/1.0" -H "Accept-Encoding: gzip, deflate" "<YOUR_DOMAIN>/cdn-cgi/error/1015" | jq .


  1. Cloudflare Log Explorer now allows you to customize exactly which data fields are ingested and stored when enabling or managing log datasets.

    Previously, ingesting logs often meant taking an "all or nothing" approach to data fields. With Ingest Field Selection, you can now choose from a list of available and recommended fields for each dataset. This allows you to reduce noise, focus on the metrics that matter most to your security and performance analysis, and manage your data footprint more effectively.

    Key capabilities

    • Granular control: Select only the specific fields you need when enabling a new dataset.
    • Dynamic updates: Update fields for existing, already enabled logstreams at any time.
    • Historical consistency: Even if you disable a field later, you can still query and receive results for that field for the period it was captured.
    • Data integrity: Core fields, such as Timestamp, are automatically retained to ensure your logs remain searchable and chronologically accurate.

    Example configuration

    When configuring a dataset via the dashboard or API, you can define a specific set of fields. The Timestamp field remains mandatory to ensure data indexability.

    {
      "dataset": "firewall_events",
      "enabled": true,
      "fields": [
        "Timestamp",
        "ClientRequestHost",
        "ClientIP",
        "Action",
        "EdgeResponseStatus",
        "OriginResponseStatus"
      ]
    }

    For more information, refer to the Log Explorer documentation.

  1. Audit Logs v2 is now generally available to all Cloudflare customers.

    Audit Logs v2 GA

    Audit Logs v2 provides a unified and standardized system for tracking and recording all user and system actions across Cloudflare products. Because the system is built on Cloudflare's API Shield / OpenAPI gateway, logs are generated automatically without manual instrumentation from individual product teams, ensuring consistency across ~95% of Cloudflare products.

    What's available at GA:

    • Standardized logging — Audit logs follow a consistent format across all Cloudflare products, making it easier to search, filter, and investigate activity.
    • Expanded product coverage — ~95% of Cloudflare products covered, up from ~75% in v1.
    • Granular filtering — Filter by actor, action type, action result, resource, raw HTTP method, zone, and more. Over 20 filter parameters available via the API.
    • Enhanced context — Each log entry includes authentication method, interface (API or dashboard), Cloudflare Ray ID, and actor token details.
    • 18-month retention — Logs are retained for 18 months. Full history is accessible via the API or Logpush.

    Access:

    • Dashboard: Go to Manage Account > Audit Logs. Audit Logs v2 is shown by default.
    • API: GET https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/audit
    • Logpush: Available via the audit_logs_v2 account-scoped dataset.

    Important notes:

    • Approximately 30 days of logs from the Beta period (back to ~February 8, 2026) are available at GA. These Beta logs will expire on ~April 9, 2026. Logs generated after GA will be retained for the full 18 months. Older logs remain available in Audit Logs v1.
    • The UI query window is limited to 90 days for performance reasons. Use the API or Logpush for access to the full 18-month history.
    • GET requests (view actions) and 4xx error responses are not logged at GA. GET logging will be selectively re-enabled for sensitive read operations in a future release.
    • Audit Logs v1 continues to run in parallel. A deprecation timeline will be communicated separately.
    • Before and after values — the ability to see what a value changed from and to — is a highly requested feature and is on our roadmap for a post-GA release. In the meantime, we recommend using Audit Logs v1 for before and after values. Audit Logs v1 will continue to run in parallel until this feature is available in v2.

    For more details, refer to the Audit Logs v2 documentation.

  1. Cloudflare has added new fields across multiple Logpush datasets:

    New dataset

    • MCP Portal Logs: A new dataset with fields including ClientCountry, ClientIP, ColoCode, Datetime, Error, Method, PortalAUD, PortalID, PromptGetName, ResourceReadURI, ServerAUD, ServerID, ServerResponseDurationMs, ServerURL, SessionID, Success, ToolCallName, UserEmail, and UserID.

    New fields in existing datasets

    • DEX Application Tests: HTTPRedirectEndMs, HTTPRedirectStartMs, HTTPResponseBody, and HTTPResponseHeaders.
    • DEX Device State Events: ExperimentalExtra.
    • Firewall Events: FraudUserID.
    • Gateway HTTP: AppControlInfo and ApplicationStatuses.
    • Gateway DNS: InternalDNSDurationMs.
    • HTTP Requests: FraudEmailRisk, FraudUserID, and PayPerCrawlStatus.
    • Network Analytics Logs: DNSQueryName, DNSQueryType, and PFPCustomTag.
    • WARP Toggle Changes: UserEmail.
    • WARP Config Changes: UserEmail.
    • Zero Trust Network Session Logs: SNI.

    For the complete field definitions for each dataset, refer to Logpush datasets.

  1. Cloudflare now returns structured Markdown responses for Cloudflare-generated 1xxx errors when clients send Accept: text/markdown.

    Each response includes YAML frontmatter plus guidance sections (What happened / What you should do) so agents can make deterministic retry and escalation decisions without parsing HTML.

    In measured comparisons for error 1015, Markdown reduced payload size and token footprint by over 98% versus HTML.

    Included frontmatter fields:

    • error_code, error_name, error_category, http_status
    • ray_id, timestamp, zone
    • cloudflare_error, retryable, retry_after (when applicable), owner_action_required

    Default behavior is unchanged: clients that do not explicitly request Markdown continue to receive HTML error pages.

    Negotiation behavior

    Cloudflare uses standard HTTP content negotiation on the Accept header.

    • Accept: text/markdown -> Markdown
    • Accept: text/markdown, text/html;q=0.9 -> Markdown
    • Accept: text/* -> Markdown
    • Accept: */* -> HTML (default browser behavior)

    When multiple values are present, Cloudflare selects the highest-priority supported media type using q values. If Markdown is not explicitly preferred, HTML is returned.
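A rough model of that selection logic (a simplified sketch, not Cloudflare's parser; it ignores media-type parameters other than q and does no wildcard matching beyond text/*):

```python
def negotiate(accept: str) -> str:
    """Pick 'markdown' or 'html' from an Accept header: highest q wins,
    earlier listing breaks ties; anything other than text/markdown or
    text/* falls back to HTML, mirroring the behavior described above."""
    ranked = []
    for pos, part in enumerate(accept.split(",")):
        fields = [f.strip() for f in part.split(";")]
        media, q = fields[0], 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        ranked.append((-q, pos, media))
    ranked.sort()
    top = ranked[0][2] if ranked else "*/*"
    return "markdown" if top in ("text/markdown", "text/*") else "html"
```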

    Availability

    Available now for Cloudflare-generated 1xxx errors.

    Get started

    curl -H "Accept: text/markdown" https://<your-domain>/cdn-cgi/error/1015

    Reference: Cloudflare 1xxx error documentation

  1. Cloudflare Tunnel is now available in the main Cloudflare Dashboard at Networking > Tunnels, bringing first-class Tunnel management to developers using Tunnel for securing origin servers.

    Manage Tunnels in the Core Dashboard

    This new experience provides everything you need to manage Tunnels for public applications.

    Choose the right dashboard for your use case

    Core Dashboard: Navigate to Networking > Tunnels to manage Tunnels for public applications and origin security.

    Cloudflare One Dashboard: Navigate to Zero Trust > Networks > Connectors to manage Tunnels as part of your Zero Trust deployment.

    Both dashboards provide complete Tunnel management capabilities — choose based on your primary workflow.

    Get started

    New to Tunnel? Learn how to get started with Cloudflare Tunnel or explore advanced use cases like securing SSH servers or running Tunnels in Kubernetes.

  1. The Server-Timing header now includes a new cfWorker metric that measures time spent executing Cloudflare Workers, including any subrequests performed by the Worker. This helps developers accurately identify whether high Time to First Byte (TTFB) is caused by Worker processing or slow upstream dependencies.

    Previously, Worker execution time was included in the edge metric, making it harder to identify true edge performance. The new cfWorker metric provides this visibility:

    | Metric | Description |
    | --- | --- |
    | edge | Total time spent on the Cloudflare edge, including Worker execution |
    | origin | Time spent fetching from the origin server |
    | cfWorker | Time spent in Worker execution, including subrequests but excluding origin fetch time |

    Example response

    Server-Timing: cdn-cache; desc=DYNAMIC, edge; dur=20, origin; dur=100, cfWorker; dur=7

    In this example, the edge took 20ms, the origin took 100ms, and the Worker added just 7ms of processing time.
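Pulling the durations out client-side is straightforward. A minimal sketch for headers shaped like the example above (it reads only the dur parameter and skips entries without one):

```python
def parse_server_timing(value: str) -> dict:
    """Map each Server-Timing metric name to its dur value in milliseconds."""
    durations = {}
    for entry in value.split(","):
        fields = [f.strip() for f in entry.split(";")]
        name = fields[0]
        for f in fields[1:]:
            if f.startswith("dur="):
                durations[name] = float(f[4:])
    return durations
```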

    Availability

    The cfWorker metric is enabled by default if you have Real User Monitoring (RUM) enabled. Otherwise, you can enable it using Rules.

    This metric is particularly useful for:

    • Performance debugging: Quickly determine if latency is caused by Worker code, external API calls within Workers, or slow origins.
    • Optimization targeting: Identify which component of your request path needs optimization.
    • Real User Monitoring (RUM): Access detailed timing breakdowns directly from response headers for client-side analytics.

    For more information about Server-Timing headers, refer to the W3C Server Timing specification.

  1. When AI systems request pages from a website that uses Cloudflare and has Markdown for Agents enabled, they can express a preference for text/markdown in the request, and our network will automatically and efficiently convert the HTML to Markdown on the fly, when possible.

    This release adds the following improvements:

    • The origin response limit was raised from 1 MB to 2 MB (2,097,152 bytes).
    • We no longer require the origin to send the content-length header.
    • We now support content encoded responses from the origin.

    If you haven’t enabled automatic Markdown conversion yet, visit the AI Crawl Control section of the Cloudflare dashboard and enable Markdown for Agents.

    Refer to our developer documentation for more details.

  1. Fine-grained permissions for Access policies and Access service tokens are available. These new resource-scoped roles expand the existing RBAC model, enabling administrators to grant permissions scoped to individual resources.

    New roles

    • Cloudflare Access policy admin: Can edit a specific Access policy in an account.
    • Cloudflare Access service token admin: Can edit a specific Access service token in an account.

    These roles complement the existing resource-scoped roles for Access applications, identity providers, and infrastructure targets.


  1. Disclaimer: Please note that v5.0.0-beta.1 is in Beta and we are still testing it for stability.

    Full Changelog: v4.3.1...v5.0.0-beta.1

    In this release, you'll see a large number of breaking changes. This is primarily due to a change in the OpenAPI definitions on which our libraries are based, and to updates in the codegen we rely on to read those definitions and produce our SDK libraries. As the codegen is always evolving and improving, so are our code bases.

    There may be changes that are not captured in this changelog. Feel free to open an issue to report any inaccuracies, and we will make sure it gets into the changelog before the v5.0.0 release.

    Most of the breaking changes below are caused by improvements to the accuracy of the base OpenAPI schemas, which sometimes translates to breaking changes in downstream clients that depend on those schemas.

    Please read through the list of changes below and the migration guide before moving to this version; this will help you understand any downstream or upstream issues it may cause in your environments.

    Breaking Changes

    The following resources have breaking changes. See the v5 Migration Guide for detailed migration instructions.

    • abusereports
    • acm.totaltls
    • apigateway.configurations
    • cloudforceone.threatevents
    • d1.database
    • intel.indicatorfeeds
    • logpush.edge
    • origintlsclientauth.hostnames
    • queues.consumers
    • radar.bgp
    • rulesets.rules
    • schemavalidation.schemas
    • snippets
    • zerotrust.dlp
    • zerotrust.networks

    Features

    New API Resources

    • abusereports - Abuse report management
    • abusereports.mitigations - Abuse report mitigation actions
    • ai.tomarkdown - AI-powered markdown conversion
    • aigateway.dynamicrouting - AI Gateway dynamic routing configuration
    • aigateway.providerconfigs - AI Gateway provider configurations
    • aisearch - AI-powered search functionality
    • aisearch.instances - AI Search instance management
    • aisearch.tokens - AI Search authentication tokens
    • alerting.silences - Alert silence management
    • brandprotection.logomatches - Brand protection logo match detection
    • brandprotection.logos - Brand protection logo management
    • brandprotection.matches - Brand protection match results
    • brandprotection.queries - Brand protection query management
    • cloudforceone.binarystorage - CloudForce One binary storage
    • connectivity.directory - Connectivity directory services
    • d1.database - D1 database management
    • diagnostics.endpointhealthchecks - Endpoint health check diagnostics
    • fraud - Fraud detection and prevention
    • iam.sso - IAM Single Sign-On configuration
    • loadbalancers.monitorgroups - Load balancer monitor groups
    • organizations - Organization management
    • organizations.organizationprofile - Organization profile settings
    • origintlsclientauth.hostnamecertificates - Origin TLS client auth hostname certificates
    • origintlsclientauth.hostnames - Origin TLS client auth hostnames
    • origintlsclientauth.zonecertificates - Origin TLS client auth zone certificates
    • pipelines - Data pipeline management
    • pipelines.sinks - Pipeline sink configurations
    • pipelines.streams - Pipeline stream configurations
    • queues.subscriptions - Queue subscription management
    • r2datacatalog - R2 Data Catalog integration
    • r2datacatalog.credentials - R2 Data Catalog credentials
    • r2datacatalog.maintenanceconfigs - R2 Data Catalog maintenance configurations
    • r2datacatalog.namespaces - R2 Data Catalog namespaces
    • radar.bots - Radar bot analytics
    • radar.ct - Radar certificate transparency data
    • radar.geolocations - Radar geolocation data
    • realtimekit.activesession - Real-time Kit active session management
    • realtimekit.analytics - Real-time Kit analytics
    • realtimekit.apps - Real-time Kit application management
    • realtimekit.livestreams - Real-time Kit live streaming
    • realtimekit.meetings - Real-time Kit meeting management
    • realtimekit.presets - Real-time Kit preset configurations
    • realtimekit.recordings - Real-time Kit recording management
    • realtimekit.sessions - Real-time Kit session management
    • realtimekit.webhooks - Real-time Kit webhook configurations
    • tokenvalidation.configuration - Token validation configuration
    • tokenvalidation.rules - Token validation rules
    • workers.beta - Workers beta features

    New Endpoints (Existing Resources)

    acm.totaltls

    • edit()
    • update()

    cloudforceone.threatevents

    • list()

    contentscanning

    • create()
    • get()
    • update()

    dns.records

    • scan_list()
    • scan_review()
    • scan_trigger()

    intel.indicatorfeeds

    • create()
    • delete()
    • list()

    leakedcredentialchecks.detections

    • get()

    queues.consumers

    • list()

    radar.ai

    • summary()
    • timeseries()
    • timeseries_groups()

    radar.bgp

    • changes()
    • snapshot()

    workers.subdomains

    • delete()

    zerotrust.networks

    • create()
    • delete()
    • edit()
    • get()
    • list()

    General Fixes and Improvements

    Type System & Compatibility

    • Type inference improvements: Allow Pyright to properly infer TypedDict types within SequenceNotStr
    • Type completeness: Add missing types to method arguments and response models
    • Pydantic compatibility: Ensure compatibility with Pydantic versions prior to 2.8.0 when using additional fields

    Request/Response Handling

    • Multipart form data: Correctly handle sending multipart/form-data requests with JSON data
    • Header handling: Do not send headers with default values set to omit
    • GET request headers: Don't send Content-Type header on GET requests
    • Response body model accuracy: Broad improvements to the correctness of models

    Parsing & Data Processing

    • Discriminated unions: Correctly handle nested discriminated unions in response parsing
    • Extra field types: Parse extra field types correctly
    • Empty metadata: Ignore empty metadata fields during parsing
    • Singularization rules: Update resource name singularization rules for better consistency