Understanding the Logpush API

Endpoints

The table below summarizes the job operations available.

The <zone_id> argument is the zone ID (a hexadecimal string). The <job> argument is the numeric job ID. The <dataset> argument indicates the log category (either http_requests or spectrum_events).

Operation Description URL
POST Create job https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs
GET Retrieve job https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs/<job>
GET Retrieve all jobs for all data sets https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs
GET Retrieve all jobs for a data set https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/datasets/<dataset>/jobs
GET Retrieve all available fields for a data set https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/datasets/<dataset>/fields
GET Retrieve all default fields for a data set https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/datasets/<dataset>/fields/default
PUT Update job https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs/<job>
DELETE Delete job https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs/<job>
POST Check whether destination exists https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/validate/destination/exists
POST Get ownership challenge https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/ownership
POST Validate ownership challenge https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/ownership/validate
POST Validate log options https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/validate/origin

For concrete examples, see the tutorial Manage Logpush with cURL.


Connecting

The Logpush API requires credentials like any other Cloudflare API.

$ curl -s -H "X-Auth-Email: <REDACTED>" -H "X-Auth-Key: <REDACTED>" \
    'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs'

Ownership

Before creating a new job, you must prove that you own the destination.

To issue an ownership challenge token to your destination:

$ curl -s -XPOST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/ownership -d '{"destination_conf":"s3://<BUCKET_PATH>?region=us-west-2"}' | jq .

A challenge file will be written to the destination, and the filename will be in the response (the filename may be expressed as a path if appropriate for your destination):

{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": "",
    "filename": "<path-to-challenge-file>.txt"
  },
  "success": true
}

You will need to provide the token contained in the file when creating a job.
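
For example, a create-job request that supplies the challenge token might look like the following sketch (the body fields follow the job object schema; <OWNERSHIP_CHALLENGE> stands in for the token from your challenge file, and the field list is illustrative):

$ curl -s -XPOST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs -d '{"name":"<DOMAIN_NAME>","dataset":"http_requests","logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339","destination_conf":"s3://<BUCKET_PATH>?region=us-west-2","ownership_challenge":"<OWNERSHIP_CHALLENGE>"}' | jq .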

When using Sumo Logic, you may find it helpful to have Live Tail open to see the challenge file as soon as it’s uploaded.


Destination

You can specify your cloud service provider destination via the required destination_conf parameter.

  • AWS S3: bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: s3://bucket/[dir]?region=<region>[&sse=AES256]
  • Google Cloud Storage: bucket + optional directory; for example: gs://bucket/[dir]
  • Microsoft Azure: service-level SAS URL with https replaced by azure + optional directory added before query string; for example: azure://[BlobContainerPath]/[dir]?[QueryString]
  • Sumo Logic: HTTP source address URL with https replaced by sumo; for example: sumo://[SumoEndpoint]/receiver/v1/http/[UniqueHTTPCollectorCode]

For S3, Google Cloud Storage, and Azure, logs can be separated into daily subdirectories by using the special string {DATE} in the URL path; for example: s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256 or azure://myblobcontainer/logs/{DATE}?[QueryString]. It will be substituted with the date in YYYYMMDD format, like 20180523.

For more information on the destination_conf value for your cloud storage provider, consult that provider's documentation.

To check if a destination is already in use:

$ curl -s -XPOST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/destination/exists -d '{"destination_conf":"s3://foo"}' | jq .

Response
{
  "errors": [],
  "messages": [],
  "result": {
    "exists": false
  },
  "success": true
}

There can be only one job writing to each unique destination. For S3 and GCS, a destination is defined as bucket + path, which means two jobs can write to the same bucket but must write to different subdirectories in that bucket.
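
For instance, two jobs could use the following destinations (hypothetical bucket and paths), since the paths differ even though the bucket is shared:

s3://mybucket/logs/job-one?region=us-east-1
s3://mybucket/logs/job-two?region=us-east-1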


Job object

See a detailed description of the Logpush object JSON schema.
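
As a rough sketch of its shape, a job object returned by the API looks something like this (values are illustrative; the schema is authoritative):

{
  "id": <JOB_ID>,
  "dataset": "http_requests",
  "enabled": false,
  "name": "<DOMAIN_NAME>",
  "logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339",
  "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
  "last_complete": null,
  "last_error": null,
  "error_message": null
}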


Options

Logpush repeatedly pulls logs on your behalf and uploads them to your destination.

Log options, such as fields or sampling rate, are configured in the logpull_options job parameter (see Logpush job object schema). If you’re migrating from the Logpull API, logpull_options is simply the query string for the API call. For example, the following query gets data from the Logpull API:

curl -sv \
    -H'X-Auth-Email: <REDACTED>' \
    -H'X-Auth-Key: <REDACTED>' \
    "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logs/received?start=2018-08-02T10:00:00Z&end=2018-08-02T10:01:00Z&fields=RayID,EdgeStartTimestamp"

In Logpush, the Logpull options would be: "logpull_options": "fields=RayID,EdgeStartTimestamp". See Logpull API parameters for more info.

If you don’t change any options, you will receive logs with default fields that are unsampled (i.e., sample=1).

The three options that you can customize are:

  1. Fields; see Log fields for the currently available fields. The list of fields is also accessible directly from the API: https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/datasets/<dataset>/fields. Default fields: https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/datasets/<dataset>/fields/default.
  2. Sampling rate; value can range from 0.001 to 1.0 (inclusive). sample=0.1 means return 10% (1 in 10) of all records.
  3. Timestamp format; the format in which timestamp fields will be returned. Value options: unixnano (default), unix, rfc3339.
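
Putting the three together, a single logpull_options string can set fields, sampling rate, and timestamp format at once; for example (the field list here is illustrative):

"logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&sample=0.1&timestamps=rfc3339"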

To check if logpull_options is valid:

$ curl -s -XPOST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/origin -d '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339","dataset": "http_requests"}' | jq .

Response
{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": "",
  },
  "success": true
}
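
Once validated, the options can be applied to an existing job via the update endpoint; a sketch (substitute your numeric job ID for <JOB_ID>):

$ curl -s -XPUT https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID> -d '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339"}' | jq .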

Audit

The following actions are recorded in Cloudflare Audit Logs: creating, updating, and deleting a job.