API configuration

Endpoints

The table below summarizes the job operations available for both Logpush and Edge Log Delivery jobs. Account-scoped datasets use the /accounts/{account_id} endpoints, while zone-scoped datasets use /zones/{zone_id}. For more information, refer to the Log fields page.

You can locate the {zone_id} and {account_id} arguments by following the Find zone and account IDs page. The {job_id} argument is numeric, such as 123456. The {dataset_id} argument indicates the log category (such as http_requests or audit_logs).
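
For example, to retrieve the details of a single zone-scoped job, append the numeric {job_id} to the jobs path. A minimal sketch (authentication headers are explained in the Connecting section below):

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs/{job_id} \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>"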

Operation | Description | API
POST | Create job | Zone-scoped job / Account-scoped job
GET | Retrieve job details | Zone-scoped job / Account-scoped job
GET | Retrieve all jobs for all datasets | Zone-scoped jobs / Account-scoped jobs
GET | Retrieve all jobs for a dataset | Zone-scoped jobs / Account-scoped jobs
GET | Retrieve all available fields for a dataset | Zone-scoped fields / Account-scoped fields
PUT | Update job | Zone-scoped job / Account-scoped job
DELETE | Delete job | Zone-scoped job / Account-scoped job
POST | Check whether destination exists | Zone-scoped job / Account-scoped job
POST | Get ownership challenge | Zone-scoped job / Account-scoped job
POST | Validate ownership challenge | Zone-scoped job / Account-scoped job
POST | Validate log options | Zone-scoped job / Account-scoped job

For concrete examples, refer to the tutorials in Logpush examples.

Connecting

The Logpush API requires credentials like any other Cloudflare API.

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>"

Ownership

Before creating a new job, you must prove ownership of the destination.

To issue an ownership challenge token to your destination:

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/ownership \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
"destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"
}'

A challenge file will be written to the destination, and the filename will be in the response (the filename may be expressed as a path, if appropriate for your destination):

{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": "",
    "filename": "<PATH_TO_CHALLENGE_FILE>.txt"
  },
  "success": true
}

You will need to provide the token contained in the file when creating a job.
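
You can also verify the token before using it by calling the validate ownership challenge endpoint listed in the table above. A minimal sketch, where <OWNERSHIP_CHALLENGE_TOKEN> is the value read from the challenge file (confirm the exact path in the API reference):

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/ownership/validate \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
  "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
  "ownership_challenge": "<OWNERSHIP_CHALLENGE_TOKEN>"
}'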

Destination

You can specify your cloud service provider destination via the required destination_conf parameter.

The destination_conf parameter must follow this format:

<scheme>://<destination-address>

The following schemes are supported. Most are specific to a particular provider (such as R2 or S3), while https covers generic HTTPS destinations:

  • r2
  • gs
  • s3
  • sumo
  • https
  • azure
  • splunk
  • datadog

The destination-address should generally be provided by the destination provider. However, for certain providers, we require the destination-address to follow a specific format:

  • Cloudflare R2 (scheme r2): bucket path + account ID + R2 access key ID + R2 secret access key; for example: r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>
  • AWS S3 (scheme s3): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: s3://bucket/[dir]?region=<REGION>[&sse=AES256]
  • Datadog (scheme datadog): Datadog endpoint URL + Datadog API key + optional parameters; for example: datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>
  • Google Cloud Storage (scheme gs): bucket + optional directory; for example: gs://bucket/[dir]
  • Microsoft Azure (scheme azure): service-level SAS URL with https replaced by azure + optional directory added before query string; for example: azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>
  • New Relic (use scheme https): the New Relic Logs endpoint URL, which is https://log-api.newrelic.com/log/v1 for US or https://log-api.eu.newrelic.com/log/v1 for EU, + a license key + a format; for example, for US: https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare and for EU: https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare
  • Splunk (scheme splunk): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>
  • Sumo Logic (scheme sumo): HTTP source address URL with https replaced by sumo; for example: sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>

For R2, S3, Google Cloud Storage, and Azure, you can organize logs into daily subdirectories by including the special placeholder {DATE} in the URL path. This placeholder will automatically be replaced with the date in the YYYYMMDD format (for example, 20180523).

For example:

  • s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256
  • azure://myblobcontainer/logs/{DATE}?[QueryString]

This approach is useful when you want your logs grouped by day.

To check if a destination is already in use:

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/validate/destination/exists \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
"destination_conf": "s3://foo"
}'

Response

{
  "errors": [],
  "messages": [],
  "result": {
    "exists": false
  },
  "success": true
}

Kind

The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs. For Logpush jobs, this parameter can be left empty or omitted. For Edge Log Delivery jobs, set "kind": "edge". Currently, Edge Log Delivery is only supported for the http_requests dataset.

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
"name": "<DOMAIN_NAME>",
"destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
"dataset": "http_requests",
"output_options": {
"field_names": ["ClientIP", "ClientRequesrHost", "ClientRequestMethod", " ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp","RayID"],
"timestamp_format": "rfc3339"
},
"kind": "edge"
}'

Options

The logpull_options parameter has been replaced by the custom log formatting output_options parameter. Refer to the Log Output Options documentation for instructions on configuring these options and on updating your existing jobs to use them.

If you are still using logpull_options, these are the options that you can customize (an example update request follows the list):

  1. Fields (optional): Refer to Log fields for the currently available fields. The list of fields is also accessible directly from the API: https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields. Default fields: https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default.
  2. Timestamp format (optional): The format in which timestamp fields will be returned. Value options: unixnano (default), unix, rfc3339.
  3. Redaction for CVE-2021-44228 (optional): This option will replace every occurrence of ${ with x{. To enable it, set CVE-2021-44228=true.
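
For example, these options can be applied to an existing job through the update endpoint; a minimal sketch that sets the fields, timestamp format, and CVE redaction in one logpull_options string:

Terminal window
curl --request PUT \
https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs/{job_id} \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
  "logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true"
}'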

To check if the selected logpull_options are valid:

Terminal window
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/validate/origin \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
"logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true",
"dataset": "http_requests"
}'

Response

{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": ""
  },
  "success": true
}

Filter

Use filters to select the events to include and/or remove from your logs. For more information, refer to Filters.
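
For example, a filter is supplied as a JSON string in the job's filter parameter. A minimal sketch of the general shape, keeping only requests for a single hostname (refer to the Filters page for the exact syntax and supported operators):

"filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}"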

Sampling rate

The sample value can range from 0.0 (exclusive) to 1.0 (inclusive). For example, sample=0.1 means that 10% (1 in 10) of all records are returned. The default value is 1, meaning logs are unsampled.
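
For example, sampling is set as part of a job's logpull_options string; a minimal sketch that keeps roughly 10% of records:

"logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&sample=0.1"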

Max upload parameters

These parameters can be used to control batch size when a destination has specific requirements. Files are sent based on whichever limit is hit first. If these options are not set, the system uses internal defaults of 30 seconds, 100,000 records, or the destination's globally defined limits. An example request that sets all three follows the list.

  1. max_upload_bytes (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
  2. max_upload_records (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
  3. max_upload_interval_seconds (optional): The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
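
For example, these limits can be set on an existing job through the update endpoint; a minimal sketch, with values chosen arbitrarily within the allowed ranges:

Terminal window
curl --request PUT \
https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs/{job_id} \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
  "max_upload_bytes": 5000000,
  "max_upload_records": 1000,
  "max_upload_interval_seconds": 30
}'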

Custom fields

You can add custom fields to your HTTP request log entries in the form of HTTP request headers, HTTP response headers, and cookies. Custom fields configuration applies to all the Logpush jobs in a zone that use the HTTP requests dataset. To learn more, refer to Custom fields.

Audit

The following Logpush actions are recorded in Cloudflare Audit Logs: create, update, and delete job.