API configuration
The table below summarizes the job operations available for both Logpush and Edge Log Delivery jobs. Make sure that account-scoped datasets use /accounts/{account_id} and zone-scoped datasets use /zones/{zone_id}. For more information, refer to the Datasets page.
You can locate the {zone_id} and {account_id} arguments by following the Find zone and account IDs page.
The {job_id} argument is numeric, like 123456.
The {dataset_id} argument indicates the log category (such as http_requests or audit_logs).
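For illustration, here is a sketch of how the two base paths differ (the dataset names in the comments are example assignments; check the Datasets page for each dataset's actual scope):

```txt
# Zone-scoped dataset (for example, http_requests)
https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs

# Account-scoped dataset (for example, audit_logs)
https://api.cloudflare.com/client/v4/accounts/{account_id}/logpush/jobs
```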
| Operation | Description | API |
|---|---|---|
| POST | Create job | Documentation |
| GET | Retrieve job details | Documentation |
| GET | Retrieve all jobs for all datasets | Documentation |
| GET | Retrieve all jobs for a dataset | Documentation |
| GET | Retrieve all available fields for a dataset | Documentation |
| PUT | Update job | Documentation |
| DELETE | Delete job | Documentation |
| POST | Check whether destination exists | Documentation |
| POST | Get ownership challenge | Documentation |
| POST | Validate ownership challenge | Documentation |
| POST | Validate log options | Documentation |
For concrete examples, refer to the tutorials in Logpush examples.
The Logpush API requires credentials like any other Cloudflare API.
Required API token permissions
At least one of the following token permissions is required:
- Logs Write
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \ --request GET \ --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"Before creating a new job, ownership of the destination must be proven.
To issue an ownership challenge token to your destination:
Required API token permissions
At least one of the following token permissions is required:
- Logs Write
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/ownership" \ --request POST \ --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \ --json '{ "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2" }'A challenge file will be written to the destination, and the filename will be in the response (the filename may be expressed as a path, if appropriate for your destination):
{ "errors": [], "messages": [], "result": { "valid": true, "message": "", "filename": "<PATH_TO_CHALLENGE_FILE>.txt" }, "success": true}You will need to provide the token contained in the file when creating a job.
You can specify your cloud service provider destination via the required destination_conf parameter.
The destination_conf parameter must follow this format:
```txt
<scheme>://<destination-address>
```

The supported schemes are `r2`, `gs`, `s3`, `sumo`, `https`, `azure`, `splunk`, `sentinelone`, and `datadog`. Most are tailored to a specific provider, such as R2 or S3, while `https` covers generic use cases.
The destination-address should generally be provided by the destination
provider. However, for certain providers, we require the destination-address
to follow a specific format:
- Cloudflare R2 (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
- AWS S3 (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
- Datadog (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
- Google Cloud Storage (scheme `gs`): bucket + optional directory; for example: `gs://bucket/[dir]`
- Microsoft Azure (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before the query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
- New Relic (use scheme `https`): New Relic endpoint URL, which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU + a license key + a format; for example, for US: `https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare` and for EU: `https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare`
- Splunk (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
- Sumo Logic (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`
- SentinelOne (scheme `sentinelone`): SentinelOne endpoint URL + SentinelOne sourcetype + SentinelOne authorization token; for example: `sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>`
For R2, S3, Google Cloud Storage, and Azure, you can organize logs into daily subdirectories by including the special placeholder {DATE} in the URL path. This placeholder will automatically be replaced with the date in the YYYYMMDD format (for example, 20180523).
For example:
```txt
s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256
azure://myblobcontainer/logs/{DATE}?[QueryString]
```
This approach is useful when you want your logs grouped by day.
For more information on the value for your cloud storage provider, consult the following conventions:
- AWS S3 CLI (S3Uri path argument type)
- Google Cloud Storage CLI (Syntax for accessing resources)
- Microsoft Azure Shared Access Signature
- Sumo Logic HTTP Source
To check if a destination is already in use:
Required API token permissions
At least one of the following token permissions is required:
- Logs Write
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/validate/destination/exists" \ --request POST \ --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \ --json '{ "destination_conf": "s3://foo" }'Response
{ "errors": [], "messages": [], "result": { "exists": false }, "success": true}A human-readable, optional job name that does not need to be unique. We recommend choosing a meaningful name, such as the domain name, to help you easily identify and manage your job. You can update the name later if needed.
The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs. For Logpush jobs, this parameter can be left empty or omitted. For Edge Log Delivery jobs, set "kind": "edge". Currently, Edge Log Delivery is only supported for the http_requests dataset.
Required API token permissions
At least one of the following token permissions is required:
- Logs Write
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \ --request POST \ --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \ --json '{ "name": "<DOMAIN_NAME>", "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2", "dataset": "http_requests", "output_options": { "field_names": [ "ClientIP", "ClientRequesrHost", "ClientRequestMethod", " ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID" ], "timestamp_format": "rfc3339" }, "kind": "edge" }'Logpull_options has been replaced with Custom Log Formatting output_options. Please refer to the Log Output Options documentation for instructions on configuring these options and updating your existing jobs to use these options.
If you are still using logpull_options, here are the options that you can customize:
- Fields (optional): Refer to Datasets for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default`. See the example request after this list.
- Timestamp format (optional): The format in which timestamp fields will be returned. Value options: `unixnano` (nanoseconds unit, default), `unix` (seconds unit), `rfc3339` (seconds unit).
- Redaction for CVE-2021-44228 (optional): This option will replace every occurrence of `${` with `x{`. To enable it, set `"CVE-2021-44228": true`.
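For example, the fields endpoint mentioned in the first item can be queried like any other read operation (this sketch simply plugs the http_requests dataset into the URL shown above):

```bash
# Retrieve all available fields for the http_requests dataset in a zone.
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/datasets/http_requests/fields" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```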
To check if the selected logpull_options are valid:
Required API token permissions
At least one of the following token permissions is required:
- Logs Write
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/validate/origin" \ --request POST \ --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \ --json '{ "logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp×tamps=rfc3339&CVE-2021-44228=true", "dataset": "http_requests" }'Response
{ "errors": [], "messages": [], "result": { "valid": true, "message": "" }, "success": true}Use filters to select the events to include and/or remove from your logs. For more information, refer to Filters.
The sampling rate value can range from 0.0 (exclusive) to 1.0 (inclusive). For example, sample=0.1 means return 10% (1 in 10) of all records. The default value is 1, meaning logs will be unsampled.
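For example, assuming sampling is set through the same logpull_options string used elsewhere on this page, keeping one in ten records could look like this (the other options are reused from the validation example above purely for illustration):

```json
{
  "logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&sample=0.1"
}
```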
These parameters control the size of each upload batch — not how quickly data is delivered. Use them to prevent overloading your destination with uploads that are too large or too small.
| Parameter | Description | Default |
|---|---|---|
| `max_upload_bytes` | Maximum uncompressed file size of a batch of logs. | Varies by destination |
| `max_upload_records` | Maximum number of log lines per batch. | 100,000 |
| `max_upload_interval_seconds` | Maximum time-span of log data per batch (used during catch-up scenarios). | Varies by destination |

- Reduce `max_upload_records` if your destination struggles with large payloads or runs out of memory processing big batches.
- Increase `max_upload_records` if you want fewer, larger files (for example, when pushing to object storage like R2 or S3).
- For destinations like Datadog that have strict payload limits, Logpush automatically uses smaller batch sizes (for example, 1,000 rows).
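As a sketch of how these values might be tuned, the update job operation from the table at the top of this page can carry them in its request body (the job ID and the value shown are arbitrary illustrations, not recommendations):

```bash
# Lower the per-batch record count for a destination that prefers smaller payloads.
# $JOB_ID is the numeric ID of an existing job; 10000 is an illustrative value.
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "max_upload_records": 10000
  }'
```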
You can add custom fields to your HTTP request log entries in the form of HTTP request headers, HTTP response headers, and cookies. Custom fields configuration applies to all the Logpush jobs in a zone that use the HTTP requests dataset. To learn more, refer to Custom fields.
The following Logpush actions are recorded in Cloudflare Audit Logs: create, update, and delete job.