The Overview tab is now the default view in AI Crawl Control. The previous default view with controls for individual AI crawlers is available in the Crawlers tab.
- Executive summary — Monitor total requests, volume change, most common status code, most popular path, and high-volume activity
- Operator grouping — Track crawlers by their operating companies (OpenAI, Microsoft, Google, ByteDance, Anthropic, Meta)
- Customizable filters — Filter your snapshot by date range, crawler, operator, hostname, or path

- Log in to the Cloudflare dashboard and select your account and domain.
- Go to AI Crawl Control, where the Overview tab opens by default with your activity snapshot.
- Use filters to customize your view by date range, crawler, operator, hostname, or path.
- Navigate to the Crawlers tab to manage controls for individual crawlers.
Learn more about analyzing AI traffic and managing AI crawlers.
The cached/uncached classification logic used in Zone Overview analytics has been updated to improve accuracy.
Previously, requests were classified as "cached" based on an overly broad condition that included blocked 403 responses, Snippets requests, and other non-cache request types. This caused inflated cache hit ratios — in some cases showing near-100% cached — and affected approximately 15% of requests classified as cached in rollups.
The condition has been removed from the Zone Overview page. Cached/uncached classification now aligns with the heuristics used in HTTP Analytics, so only requests genuinely served from cache are counted as cached.
What changed:
- Zone Overview — Cache ratios now reflect actual cache performance.
- HTTP Analytics — No change. HTTP Analytics already used the correct classification logic.
- Historical data — This fix applies to new requests only. Previously logged data is not retroactively updated.
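If you compute cache ratios yourself from raw logs, the sketch below shows the stricter style of classification using the `CacheCacheStatus` field from Logpush HTTP request logs. Which status values count as "cached" is an assumption here, not the exact heuristic used by HTTP Analytics.

```ts
// Sketch: derive a cache hit ratio from Logpush HTTP request log lines.
// The set of CacheCacheStatus values treated as "cached" is an assumption;
// adjust it to match your own definition of a cache hit.
interface HttpRequestLog {
  CacheCacheStatus: string; // e.g. "hit", "miss", "expired", "dynamic", "unknown"
  EdgeResponseStatus: number;
}

const CACHED_STATUSES = new Set(["hit", "stale", "revalidated"]);

function cacheHitRatio(logs: HttpRequestLog[]): number {
  if (logs.length === 0) return 0;
  const cached = logs.filter((log) => CACHED_STATUSES.has(log.CacheCacheStatus)).length;
  return cached / logs.length;
}

// A blocked 403 with a non-cache status no longer inflates the ratio.
console.log(
  cacheHitRatio([
    { CacheCacheStatus: "hit", EdgeResponseStatus: 200 },
    { CacheCacheStatus: "unknown", EdgeResponseStatus: 403 },
  ]),
); // 0.5
```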
Cloudflare Logpush now supports SentinelOne as a native destination.
Logs from Cloudflare can be sent to SentinelOne AI SIEM ↗ via Logpush. The destination can be configured through the Logpush UI in the Cloudflare dashboard or by using the Logpush API.
For more information, refer to the Destination Configuration documentation.
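As a rough sketch, a Logpush job for this destination can also be created through the API. The `destination_conf` value below is a placeholder; take the exact SentinelOne connection string and parameters from the Destination Configuration documentation.

```ts
// Sketch: create a Logpush job via the API. Replace the placeholders with
// your zone ID, API token, and the SentinelOne destination string from the
// Destination Configuration documentation.
const zoneId = "<ZONE_ID>";
const apiToken = "<API_TOKEN>";

const response = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${zoneId}/logpush/jobs`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "http-requests-to-sentinelone",
      dataset: "http_requests",
      destination_conf: "<SENTINELONE_DESTINATION_FROM_DOCS>", // placeholder
      enabled: true,
    }),
  },
);
console.log(await response.json());
```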
Pay Per Crawl is introducing enhancements for both AI crawler operators and site owners, focusing on programmatic discovery, flexible pricing models, and granular configuration control.
A new authenticated API endpoint allows verified crawlers to programmatically discover domains participating in Pay Per Crawl. Crawlers can use this to build optimized crawl queues, cache domain lists, and identify new participating sites. This eliminates the need to discover payable content through trial requests.
The API endpoint is `GET https://crawlers-api.ai-audit.cfdata.org/charged_zones` and requires Web Bot Auth authentication. Refer to Discover payable content for authentication steps, request parameters, and response schema.

Payment headers (`crawler-exact-price` or `crawler-max-price`) must now be included in the Web Bot Auth `signature-input` header components. This security enhancement prevents payment header tampering, ensures authenticated payment intent, validates crawler identity with payment commitment, and protects against replay attacks with modified pricing. Crawlers must add their payment header to the list of signed components when constructing the `signature-input` header.

Pay Per Crawl error responses now include a new `crawler-error` header with 11 specific error codes for programmatic handling. Error response bodies remain unchanged for compatibility. These codes enable robust error handling, automated retry logic, and accurate spending tracking.

Site owners can now offer free access to specific pages like homepages, navigation, or discovery pages while charging for other content. Create a Configuration Rule in Rules > Configuration Rules, set your URI pattern using wildcard, exact, or prefix matching on the URI Full field, and enable the Disable Pay Per Crawl setting. When disabled for a URI pattern, crawler requests pass through without blocking or charging.

Some paths are always free to crawl: `/robots.txt`, `/sitemap.xml`, `/security.txt`, `/.well-known/security.txt`, and `/crawlers.json`.

AI crawler operators: Discover payable content | Crawl pages

Site owners: Advanced configuration
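Putting the crawler-side changes above together, the sketch below shows how an operator might use the discovery endpoint, sign a payment header, and branch on `crawler-error`. The `signWebBotAuth` helper and the price value are illustrative placeholders, not part of the API; refer to Discover payable content for the actual signing steps.

```ts
// Sketch for crawler operators. signWebBotAuth stands in for your real
// Web Bot Auth implementation; it must include any payment header in the
// signature-input components, as described above.
function signWebBotAuth(opts: {
  method: string;
  url: string;
  signedHeaders: Record<string, string>;
}): Record<string, string> {
  // Placeholder: produce the signature, signature-input, and signature-agent
  // headers here using your registered key.
  throw new Error("implement Web Bot Auth signing");
}

// 1. Discover participating domains to build a crawl queue.
const discoveryUrl = "https://crawlers-api.ai-audit.cfdata.org/charged_zones";
const discovery = await fetch(discoveryUrl, {
  headers: signWebBotAuth({ method: "GET", url: discoveryUrl, signedHeaders: {} }),
});
console.log(await discovery.json());

// 2. Crawl a page, committing to a maximum price via a signed payment header.
const pageUrl = "https://example.com/article";
const payment = { "crawler-max-price": "0.01" }; // illustrative value
const crawl = await fetch(pageUrl, {
  headers: {
    ...payment,
    ...signWebBotAuth({ method: "GET", url: pageUrl, signedHeaders: payment }),
  },
});

// 3. On failure, branch on the machine-readable crawler-error code.
if (!crawl.ok) {
  console.error("Pay Per Crawl error:", crawl.headers.get("crawler-error"));
}
```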
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues reported by the Cloudflare community related to the v5 release. We have committed to releasing improvements on a 2-3 week cadence ↗ to ensure its stability and reliability, including the v5.14 release. We have also pivoted from an issue-to-issue approach to a resource-per-resource approach ↗ - we will be focusing on specific resources to not only stabilize the resource but also ensure it is migration-friendly for those migrating from v4 to v5.
Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.
This release includes bug fixes, the stabilization of even more popular resources, and more.
Resource affected:
`api_shield_discovery_operation`

Cloudflare continuously discovers and updates API endpoints and web assets of your web applications. To improve the maintainability of these dynamic resources, we are working on reducing the need to actively engage with discovered operations.
The corresponding public API endpoint of discovered operations ↗ is not affected and will continue to be supported.
- pages_project: Add v4 -> v5 migration tests (#6506 ↗)
- account_members: Makes member policies a set (#6488 ↗)
- pages_project: Ensures non empty refresh plans (#6515 ↗)
- R2: Improves sweeper (#6512 ↗)
- workers_kv: Ignores value import state for verify (#6521 ↗)
- workers_script: No longer treats the migrations attribute as WriteOnly (#6489 ↗)
- workers_script: Resolves resource drift when worker has unmanaged secret (#6504 ↗)
- zero_trust_device_posture_rule: Preserves input.version and other fields (#6500 ↗) and (#6503 ↗)
- zero_trust_dlp_custom_profile: Adds sweepers for `dlp_custom_profile`
- zone_subscription|account_subscription: Adds `partners_ent` as valid enum for `rate_plan.id` (#6505 ↗)
- zone: Ensures datasource model schema parity (#6487 ↗)
- subscription: Updates import signature to accept account_id/subscription_id to import account subscription (#6510 ↗)
We suggest waiting to migrate to v5 while we work on stabilization. This helps you avoid blocking issues while the Terraform resources are actively being stabilized ↗. We will be releasing a new migration tool in March 2026 to help support v4 to v5 transitions for our most popular resources.
Earlier this year, we announced the launch of the new Terraform v5 Provider. We are aware of the high number of issues reported by the Cloudflare community related to the v5 release. We have committed to releasing improvements on a 2-3 week cadence ↗ to ensure its stability and reliability, including the v5.13 release. We have also pivoted from an issue-to-issue approach to a resource-per-resource approach ↗ - we will be focusing on specific resources to not only stabilize the resource but also ensure it is migration-friendly for those migrating from v4 to v5.
Thank you for continuing to raise issues. They make our provider stronger and help us build products that reflect your needs.
This release includes new features, new resources and data sources, bug fixes, updates to our Developer Documentation, and more.
Please be aware that there are breaking changes for the `cloudflare_api_token` and `cloudflare_account_token` resources. These changes eliminate configuration drift caused by policy ordering differences in the Cloudflare API.

For more specific information about the changes or the actions required, please see the detailed Repository changelog ↗.
- New resources and data sources added
- cloudflare_connectivity_directory
- cloudflare_sso_connector
- cloudflare_universal_ssl_setting
- api_token+account_tokens: state upgrader and schema bump (#6472 ↗)
- docs: make docs explicit when a resource does not have import support
- magic_transit_connector: support self-serve license key (#6398 ↗)
- worker_version: add content_base64 support
- worker_version: boolean support for run_worker_first (#6407 ↗)
- workers_script_subdomains: add import support (#6375 ↗)
- zero_trust_access_application: add proxy_endpoint for ZT Access Application (#6453 ↗)
- zero_trust_dlp_predefined_profile: Switch DLP Predefined Profile endpoints, introduce enabled_entries attribute
- account_token: token policy order and nested resources (#6440 ↗)
- allow r2_bucket_event_notification to be applied twice without failing (#6419 ↗)
- cloudflare_worker+cloudflare_worker_version: import for the resources (#6357 ↗)
- dns_record: inconsistent apply error (#6452 ↗)
- pages_domain: resource tests (#6338 ↗)
- pages_project: unintended resource state drift (#6377 ↗)
- queue_consumer: id population (#6181 ↗)
- workers_kv: multipart request (#6367 ↗)
- workers_kv: updating workers metadata attribute to be read from endpoint (#6386 ↗)
- workers_script_subdomain: add note to cloudflare_workers_script_subdomain about redundancy with cloudflare_worker (#6383 ↗)
- workers_script: allow config.run_worker_first to accept list input
- zero_trust_device_custom_profile_local_domain_fallback: drift issues (#6365 ↗)
- zero_trust_device_custom_profile: resolve drift issues (#6364 ↗)
- zero_trust_dex_test: correct configurability for 'targeted' attribute to fix drift
- zero_trust_tunnel_cloudflared_config: remove warp_routing from cloudflared_config (#6471 ↗)
We suggest holding off on migration to v5 while we work on stabilization. This helps you avoid blocking issues while the Terraform resources are actively being stabilized. We will be releasing a new migration tool in March 2026 to help support v4 to v5 transitions for our most popular resources.
We've resolved a bug in Log Explorer that caused inconsistencies between the custom SQL date field filters and the date picker dropdown. Previously, users attempting to filter logs based on a custom date field via a SQL query sometimes encountered unexpected results or mismatching dates when using the interactive date picker.
This fix ensures that the custom SQL date field filters now align correctly with the selection made in the date picker dropdown, providing a reliable and predictable filtering experience for your log data. This is particularly important for users creating custom log views based on time-sensitive fields.
We've significantly enhanced Log Explorer by adding support for 14 additional Cloudflare product datasets.
This expansion enables Operations and Security Engineers to gain deeper visibility and telemetry across a wider range of Cloudflare services. By integrating these new datasets, users can now access full context to efficiently investigate security incidents, troubleshoot application performance issues, and correlate logged events across different layers (like application and network) within a single interface. This capability is crucial for a complete and cohesive understanding of event flows across your Cloudflare environment.
The newly supported datasets include:
- Dns_logs
- Nel_reports
- Page_shield_events
- Spectrum_events
- Zaraz_events
- Audit Logs
- Audit_logs_v2
- Biso_user_actions
- DNS firewall logs
- Email_security_alerts
- Magic Firewall IDS
- Network Analytics
- Sinkhole HTTP
- ipsec_logs
You can now use Log Explorer to query and filter with each of these datasets. For example, you can identify an IP address exhibiting suspicious behavior in the `FW_event` logs, and then instantly pivot to the `Network Analytics` logs or `Access` logs to see its network-level traffic profile or whether it bypassed a corporate policy.

To learn more and get started, refer to the Log Explorer documentation and the Cloudflare Logs documentation.
We're excited to announce a quality-of-life improvement for Log Explorer users. You can now resize the custom SQL query window to accommodate longer and more complex queries.
Previously, if you were writing a long custom SQL query, the fixed-size window required excessive scrolling to view the full query. This update allows you to easily drag the bottom edge of the query window to make it taller. This means you can view your entire custom query at once, improving the efficiency and experience of writing and debugging complex queries.
To learn more and get started, refer to the Log Explorer documentation.
We’re excited to introduce Logpush Health Dashboards, giving customers real-time visibility into the status, reliability, and performance of their Logpush jobs. Health dashboards make it easier to detect delivery issues, monitor job stability, and track performance across destinations. The dashboards are divided into two sections:
- Upload Health: See how much data was successfully uploaded, where drops occurred, and how your jobs are performing overall. This includes data completeness, success rate, and upload volume.
- Upload Reliability: Diagnose issues impacting stability, retries, or latency, and monitor key metrics such as retry counts, upload duration, and destination availability.

Health Dashboards can be accessed from the Logpush page in the Cloudflare dashboard at the account or zone level, under the Health tab. For more details, refer to our Logpush Health Dashboards documentation, which includes a comprehensive troubleshooting guide to help interpret and resolve common issues.
Starting February 2, 2026, the `cloudflared proxy-dns` command will be removed from all new `cloudflared` releases.

This change is being made to enhance security and address a potential vulnerability in an underlying DNS library. This vulnerability is specific to the `proxy-dns` command and does not affect any other `cloudflared` features, such as the core Cloudflare Tunnel service.

The `proxy-dns` command, which runs a client-side DNS-over-HTTPS (DoH) proxy, has been an officially undocumented feature for several years. This functionality is fully and securely supported by our actively developed products.

Versions of `cloudflared` released before this date will not be affected and will continue to operate. However, note that our official support policy for any `cloudflared` release is one year from its release date.

We strongly advise users of this undocumented feature to migrate to one of the following officially supported solutions before February 2, 2026, to continue benefiting from secure DNS-over-HTTPS.
The preferred method for enabling DNS-over-HTTPS on user devices is the Cloudflare WARP client. The WARP client automatically secures and proxies all DNS traffic from your device, integrating it with your organization's Zero Trust policies and posture checks.
For scenarios where installing a client on every device is not possible (such as servers, routers, or IoT devices), we recommend using the WARP Connector.
Instead of running `cloudflared proxy-dns` on a machine, you can install the WARP Connector on a single Linux host within your private network. This connector will act as a gateway, securely routing all DNS and network traffic from your entire subnet to Cloudflare for filtering and logging.
AI Crawl Control now supports per-crawler drilldowns with an extended actions menu and status code analytics. Drill down into Metrics, Cloudflare Radar, and Security Analytics, or export crawler data for use in WAF custom rules, Redirect Rules, and robots.txt files.
The Metrics tab includes a status code distribution chart showing HTTP response codes (2xx, 3xx, 4xx, 5xx) over time. Filter by individual crawler, category, operator, or time range to analyze how specific crawlers interact with your site.

Each crawler row includes a three-dot menu with per-crawler actions:
- View Metrics — Filter the AI Crawl Control Metrics page to the selected crawler.
- View on Cloudflare Radar — Access verified crawler details on Cloudflare Radar.
- Copy User Agent — Copy user agent strings for use in WAF custom rules, Redirect Rules, or robots.txt files.
- View in Security Analytics — Filter Security Analytics by detection IDs (Bot Management customers).
- Copy Detection ID — Copy detection IDs for use in WAF custom rules (Bot Management customers).

- Log in to the Cloudflare dashboard, and select your account and domain.
- Go to AI Crawl Control > Metrics to access the status code distribution chart.
- Go to AI Crawl Control > Crawlers and select the three-dot menu for any crawler to access per-crawler actions.
- Select multiple crawlers to use bulk copy buttons for user agents or detection IDs.
Learn more about AI Crawl Control.
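For example, a copied user agent can be dropped into a custom rule deployed through the Rulesets API. This is a sketch, not the only workflow: the same expression can be pasted into the dashboard, and the `GPTBot` substring is just an example of a copied value.

```ts
// Sketch: block a specific crawler by user agent with a WAF custom rule.
// Note that PUT on the phase entrypoint replaces its rules, so include any
// existing rules you want to keep.
const zoneId = "<ZONE_ID>";
const apiToken = "<API_TOKEN>";

await fetch(
  `https://api.cloudflare.com/client/v4/zones/${zoneId}/rulesets/phases/http_request_firewall_custom/entrypoint`,
  {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      rules: [
        {
          description: "Block an AI crawler by copied user agent",
          expression: 'http.user_agent contains "GPTBot"',
          action: "block",
        },
      ],
    }),
  },
);
```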
Permissions for managing Logpush jobs related to Zero Trust datasets (Access, Gateway, and DEX) have been updated to improve data security and enforce appropriate access controls.
To view, create, update, or delete Logpush jobs for Zero Trust datasets, users must now have both of the following permissions:
- Logs Edit
- Zero Trust: PII Read
We're excited to announce that Log Explorer users can now cancel queries that are currently running.
This new feature addresses a common pain point: waiting for a long, unintended, or misconfigured query to complete before you can submit a new, correct one. With query cancellation, you can immediately stop the execution of any undesirable query, allowing you to quickly craft and submit a new query, significantly improving your investigative workflow and productivity within Log Explorer.
We're excited to announce a new feature in Log Explorer that significantly enhances how you analyze query results: the Query results distribution chart.
This new chart provides a graphical distribution of your results over the time window of the query. Immediately after running a query, you will see the distribution chart above your result table. This visualization allows Log Explorer users to quickly spot trends, identify anomalies, and understand the temporal concentration of log events that match their criteria. For example, you can visually confirm if a spike in traffic or errors occurred at a specific time, allowing you to focus your investigation efforts more effectively. This feature makes it faster and easier to extract meaningful insights from your vast log data.
The chart will dynamically update to reflect the logs matching your current query.
Two-factor authentication (2FA) is one of the best ways to protect your account from the risk of account takeover. Cloudflare has offered phishing-resistant 2FA options, including hardware-based keys (for example, a YubiKey) and app-based TOTP (time-based one-time password) options that use apps like Google Authenticator or Microsoft Authenticator. Unfortunately, while these solutions are very secure, they can be lost if you misplace the hardware key or lose the phone with the app. The result is that users sometimes get locked out of their accounts and need to contact support.
Today, we are announcing the addition of email as a 2FA factor for all Cloudflare accounts. Email 2FA is in wide use across the industry as a least common denominator for 2FA because it is low friction, loss resistant, and still improves security over username/password login only. We also know that most commercial email providers already require 2FA, so your email address is usually well protected already.
You can now enable email 2FA on the Cloudflare dashboard:
- Go to Profile at the top right corner.
- Select Authentication.
- Under Two-Factor Authentication, select Set up.
Cloudflare is critical infrastructure, and you should protect it as such. Review the following best practices and make sure you are doing your part to secure your account:
- Use a unique password for every website, including Cloudflare, and store it in a password manager like 1Password or Keeper. These services are cross-platform and simplify the process of managing secure passwords.
- Use 2FA to make it harder for an attacker to get into your account in the event your password is leaked.
- Store your backup codes securely. A password manager is the best place since it keeps the backup codes encrypted, but you can also print them and put them somewhere safe in your home.
- If you use an app to manage your 2FA keys, enable cloud backup, so that you don't lose your keys in the event you lose your phone.
- If you use a custom email domain to sign in, configure SSO.
- If you use a public email domain like Gmail or Hotmail, you can also use social login with Apple, GitHub, or Google to sign in.
- If you manage a Cloudflare account for work:
- Have at least two administrators in case one of them unexpectedly leaves your company.
- Use SCIM to automate permissions management for members in your Cloudflare account.
As Cloudflare's platform has grown, so has the need for precise, role-based access control. We’ve redesigned the Member Management experience in the Dashboard to help administrators more easily discover, assign, and refine permissions for specific principals.
Refreshed member invite flow
We overhauled the Invite Members UI to simplify inviting users and assigning permissions.

Refreshed Members Overview Page
We've updated the Members Overview Page to clearly display:
- Member 2FA status
- Which members hold Super Admin privileges
- API access settings per member
- Member onboarding state (accepted vs pending invite)

New Member Permission Policies Details View
We've created a new member details screen that shows all permission policies associated with a member, including policies inherited from group associations, making it easier to understand a member's effective permissions.

Improved Member Permission Workflow
We redesigned the permission management experience to make it faster and easier for administrators to review roles and grant access.

Account-scoped Policies Restrictions Relaxed
Previously, customers could only associate a single account-scoped policy with a member. We've relaxed this restriction: administrators can now assign multiple account-scoped policies to the same member, bringing policy assignment behavior in line with user groups and providing greater flexibility in managing member permissions.
Cloudflare now provides two new request fields in the Ruleset engine that let you make decisions based on whether a request used TCP and the measured TCP round-trip time between the client and Cloudflare. These fields help you understand protocol usage across your traffic and build policies that respond to network performance. For example, you can distinguish TCP from QUIC traffic or route high latency requests to alternative origins when needed.
| Field | Type | Description |
| --- | --- | --- |
| `cf.edge.client_tcp` | Boolean | Indicates whether the request used TCP. A value of `true` means the client connected using TCP instead of QUIC. |
| `cf.timings.client_tcp_rtt_msec` | Number | Reports the smoothed TCP round-trip time between the client and Cloudflare in milliseconds. For example, a value of `20` indicates roughly twenty milliseconds of RTT. |

Example filter expression:

`cf.edge.client_tcp && cf.timings.client_tcp_rtt_msec < 100`

More information can be found in the Rules language fields reference.
Logpush now supports integration with Microsoft Sentinel ↗. The new Azure Sentinel Connector, built on Microsoft's Codeless Connector Framework (CCF), is now available. This solution replaces the previous Azure Functions-based connector, offering significant improvements in security, data control, and ease of use for customers. Logpush customers can send logs to Azure Blob Storage and configure this new Sentinel Connector to ingest those logs directly into Microsoft Sentinel.
This upgrade significantly streamlines log ingestion, improves security, and provides greater control:
- Simplified Implementation: Easier for engineering teams to set up and maintain.
- Cost Control: New support for Data Collection Rules (DCRs) allows you to filter and transform logs at ingestion time, offering potential cost savings.
- Enhanced Security: CCF provides a higher level of security compared to the older Azure Functions connector.
- Data Lake Integration: Includes native integration with Data Lake.
Find the new solution here ↗ and refer to Cloudflare's developer documentation ↗ for more information on the connector, including setup steps, supported logs, and Microsoft's resources.
AI Crawl Control now includes a Robots.txt tab that provides insights into how AI crawlers interact with your `robots.txt` files.

The Robots.txt tab allows you to:

- Monitor the health status of `robots.txt` files across all your hostnames, including HTTP status codes, and identify hostnames that need a `robots.txt` file.
- Track the total number of requests to each `robots.txt` file, with breakdowns of successful versus unsuccessful requests.
- Check whether your `robots.txt` files contain Content Signals ↗ directives for AI training, search, and AI input.
- Identify crawlers that request paths explicitly disallowed by your `robots.txt` directives, including the crawler name, operator, violated path, specific directive, and violation count.
- Filter `robots.txt` request data by crawler, operator, category, and custom time ranges.
When you identify non-compliant crawlers, you can:
- Block the crawler in the Crawlers tab
- Create custom WAF rules for path-specific security
- Use Redirect Rules to guide crawlers to appropriate areas of your site
To get started, go to AI Crawl Control > Robots.txt in the Cloudflare dashboard. Learn more in the Track robots.txt documentation.
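Outside the dashboard, you can run a quick spot check that mirrors the tab's health view. The sketch below fetches each hostname's `robots.txt` and reports its status code; the hostnames are placeholders, and the `Content-Signal` directive name is an assumption based on the Content Signals documentation.

```ts
// Sketch: report robots.txt health per hostname. Hostnames are placeholders.
const hostnames = ["example.com", "blog.example.com"];

for (const host of hostnames) {
  const res = await fetch(`https://${host}/robots.txt`, { redirect: "follow" });
  const body = res.ok ? await res.text() : "";
  // Assumes Content Signals are expressed as "Content-Signal:" lines.
  const hasContentSignals = /^\s*content-signal\s*:/im.test(body);
  console.log(`${host}: HTTP ${res.status}, content signals: ${hasContentSignals}`);
}
```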
We're excited to announce a significant increase in the maximum header size supported by Cloudflare's Content Delivery Network (CDN). Cloudflare now supports up to 128 KB for both request and response headers.
Previously, customers were limited to a total of 32 KB for request or response headers, with a maximum of 16 KB per individual header. Larger headers could cause requests to fail with `HTTP 413` (Request Header Fields Too Large) errors.
- Support for large headers: You can now utilize much larger headers, whether as a single large header up to 128 KB or split over multiple headers.
- Reduces `413` and `520` HTTP errors: This change drastically reduces the likelihood of customers encountering `HTTP 413` errors from large request headers or `HTTP 520` errors caused by oversized response headers, improving the overall reliability of your web applications.
- Enhanced functionality: This is especially beneficial for applications that rely on:
- A large number of cookies.
- Large Content-Security-Policy (CSP) response headers.
- Advanced use cases with Cloudflare Workers that generate large response headers.
This enhancement improves compatibility with Cloudflare's CDN, enabling more use cases that previously failed due to header size limits.
To learn more and get started, refer to the Cloudflare Fundamentals documentation.
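As one illustration, a Worker can now safely attach a response header that would have exceeded the old 16 KB per-header limit. This is a hedged sketch; the generated allowlist is deliberately oversized just to demonstrate the new ceiling.

```ts
// Sketch: a Worker that sets a roughly 27 KB Content-Security-Policy header,
// larger than the old 16 KB per-header limit but well under 128 KB.
export default {
  async fetch(request: Request): Promise<Response> {
    const sources = Array.from({ length: 1000 }, (_, i) => `https://cdn${i}.example.com`);
    const csp = `default-src 'self'; script-src 'self' ${sources.join(" ")}`;

    const origin = await fetch(request);
    const response = new Response(origin.body, origin);
    response.headers.set("Content-Security-Policy", csp);
    return response;
  },
};
```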
AI Crawl Control now provides enhanced metrics and CSV data exports to help you better understand AI crawler activity across your sites.
Visualize crawler activity patterns over time, and group data by different dimensions:
- By Crawler — Track activity from individual AI crawlers (GPTBot, ClaudeBot, Bytespider)
- By Category — Analyze crawler purpose or type
- By Operator — Discover which companies (OpenAI, Anthropic, ByteDance) are crawling your site
- By Host — Break down activity across multiple subdomains
- By Status Code — Monitor HTTP response codes to crawlers (200s, 300s, 400s, 500s)

Interactive chart showing crawler requests over time with filterable dimensions

Identify traffic sources with referrer analytics:
- View top referrers driving traffic to your site
- Understand discovery patterns and content popularity from AI operators

Bar chart showing top referrers and their respective traffic volumes

Download your filtered view as a CSV:
- Includes all applied filters and groupings
- Useful for custom reporting and deeper analysis
- Log in to the Cloudflare dashboard, and select your account and domain.
- Go to AI Crawl Control > Metrics.
- Use the grouping tabs to explore different views of your data.
- Apply filters to focus on specific crawlers, time ranges, or response codes.
- Select Download CSV to export your filtered data for further analysis.
Learn more about AI Crawl Control.
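Once exported, the CSV is easy to post-process. The sketch below totals requests per operator; the column names are assumptions, since the exact header row depends on the grouping and filters you applied, so check your own export and adjust.

```ts
// Sketch: total exported requests per operator. Column names ("operator",
// "requests") are assumptions; read them from your export's header row.
// Note: this naive split does not handle quoted fields containing commas.
import { readFileSync } from "node:fs";

const rows = readFileSync("ai-crawl-control-export.csv", "utf8")
  .trim()
  .split("\n")
  .map((line) => line.split(","));

const [header, ...data] = rows;
const operatorIdx = header.indexOf("operator");
const requestsIdx = header.indexOf("requests");

const totals = new Map<string, number>();
for (const row of data) {
  const operator = row[operatorIdx];
  totals.set(operator, (totals.get(operator) ?? 0) + Number(row[requestsIdx]));
}
console.log(totals);
```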

During Birthday Week, we announced that single sign-on (SSO) is available for free ↗ to everyone who signs in with a custom email domain and maintains a compatible identity provider ↗. SSO minimizes user friction around login and provides the strongest security posture available. At the time, this could only be configured using the API.
Today, we are launching a new user experience which allows users to manage their SSO configuration from within the Cloudflare dashboard. You can access this by going to Manage account > Members > Settings.
The most common reason users contact Cloudflare support is lost two-factor authentication (2FA) credentials. Cloudflare supports both app-based and hardware keys for 2FA, but you can lose access to your account if you lose them. Over the past few weeks, we have been rolling out email and in-product reminders to download backup codes (sometimes called recovery keys), which can get you back into your account in the event you lose your 2FA credentials. Download your backup codes now by logging in to Cloudflare, then navigating to Profile > Security & Authentication > Backup codes.
Cloudflare is critical infrastructure, and you should protect it as such. Please review the following best practices and make sure you are doing your part to secure your account.
- Use a unique password for every website, including Cloudflare, and store it in a password manager like 1Password or Keeper. These services are cross-platform and simplify the process of managing secure passwords.
- Use 2FA to make it harder for an attacker to get into your account in the event your password is leaked
- Store your backup codes securely. A password manager is the best place since it keeps the backup codes encrypted, but you can also print them and put them somewhere safe in your home.
- If you use an app to manage your 2FA keys, enable cloud backup, so that you don't lose your keys in the event you lose your phone.
- If you use a custom email domain to sign in, configure SSO ↗.
- If you use a public email domain like Gmail or Hotmail, you can also use social login with Apple, GitHub, or Google to sign in.
- If you manage a Cloudflare account for work:
- Have at least two administrators in case one of them unexpectedly leaves your company
- Use SCIM to automate permissions management for members in your Cloudflare account
Fine-grained permissions for Access Applications, Identity Providers (IdPs), and Targets are now available in Public Beta. This expands our RBAC model beyond account- and zone-scoped roles, enabling administrators to grant permissions scoped to individual resources.
- Access Applications ↗: Grant admin permissions to specific Access Applications.
- Identity Providers ↗: Grant admin permissions to individual Identity Providers.
- Targets ↗: Grant admin rights to specific Targets

For more info: