Enable S3-compatible endpoints
Cloudflare Logpush supports pushing logs to S3-compatible destinations via the Cloudflare dashboard or via the API, including:
- Alibaba Cloud OSS ↗
- Backblaze B2 ↗
- DigitalOcean Spaces ↗
- IBM Cloud Object Storage ↗
- JD Cloud Object Storage Service ↗
- Linode Object Storage ↗
- Oracle Cloud Object Storage ↗
- On-premise Ceph Object Gateway ↗
For more information about Logpush and the current production APIs, refer to the Cloudflare Logpush documentation.
1. Log in to the Cloudflare dashboard ↗.
2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to account-scoped datasets or zone-scoped datasets, respectively.
3. Go to Analytics & Logs > Logpush.
4. Select Create a Logpush job.
5. In Select a destination, choose S3-Compatible.
6. Enter or select the following destination information:
   - Bucket - the S3-compatible bucket name
   - Path - the bucket location within the storage container
   - Organize logs into daily subfolders (recommended)
   - Endpoint URL - the URL without the bucket name or path, for example `sfo2.digitaloceanspaces.com`
   - Bucket region
   - Access Key ID
   - Secret Access Key

   When you are done entering the destination details, select Continue.
7. Select the dataset to push to the storage service.
8. In the next step, configure your Logpush job:
   - Enter the Job name.
   - Under If logs match, you can select the events to include and/or remove from your logs. Refer to Filters for more information. Not all datasets have this option available.
   - In Send the following fields, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
9. In Advanced Options, you can:
   - Choose the format of timestamp fields in your logs: `RFC3339` (default), `Unix`, or `UnixNano`. See the examples after these steps.
   - Select a sampling rate for your logs or push a randomly-sampled percentage of logs.
   - Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
10. Select Submit once you are done configuring your Logpush job.
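The three timestamp formats render the same instant differently. The values below are illustrative only (not taken from a real log), using the `EdgeStartTimestamp` field from the HTTP requests dataset as an example:

```txt
RFC3339:  "EdgeStartTimestamp": "2018-08-30T19:32:30Z"
Unix:     "EdgeStartTimestamp": 1535657550
UnixNano: "EdgeStartTimestamp": 1535657550000000000
```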
To set up S3-compatible endpoints:
- Create a job with the appropriate endpoint URL and authentication parameters.
- Enable the job to begin pushing logs.
Ensure Log Share permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the Roles section.
To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:
- `name` (optional) - Use your domain name as the job name.
- `destination_conf` - A log destination consisting of an endpoint name, bucket name, bucket path, `region`, `access-key-id`, and `secret-access-key` in the following string format (a filled-in example follows this list):
  `"s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>"`
- `dataset` - The category of logs you want to receive. Refer to Log fields for the full list of supported datasets.
- `output_options` (optional) - To configure fields, sample rate, and timestamp format, refer to Log Output Options.
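For illustration, a `destination_conf` pointing at a hypothetical DigitalOcean Spaces bucket could look like the following; the bucket name and path are placeholders, and the credentials remain as angle-bracket variables you must substitute:

```txt
"s3://my-logs-bucket/http_requests?region=sfo2&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=sfo2.digitaloceanspaces.com"
```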
Example request using cURL:
```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs \
  --header "X-Auth-Email: <EMAIL>" \
  --header "X-Auth-Key: <API_KEY>" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests"
  }'
```
Response:
{ "errors": [], "messages": [], "result": { "id": <JOB_ID>, "dataset": "http_requests", "enabled": false, "name": "<DOMAIN_NAME>", "output_options": { "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp","EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"], "timestamp_format": "rfc3339" }, "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>", "last_complete": null, "last_error": null, "error_message": null }, "success": true}
To enable a job, make a `PUT` request to the Logpush jobs endpoint. You will use the job ID returned from the previous step in the URL, and send `{"enabled": true}` in the request body.
Example request using cURL:
```bash
curl --request PUT \
  https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs/{job_id} \
  --header "X-Auth-Email: <EMAIL>" \
  --header "X-Auth-Key: <API_KEY>" \
  --header "Content-Type: application/json" \
  --data '{"enabled": true}'
```
Response:
{ "errors": [], "messages": [], "result": { "id": <JOB_ID>, "dataset": "http_requests", "enabled": true, "name": "<DOMAIN_NAME>", "output_options": { "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp","EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"], "timestamp_format": "rfc3339" }, "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>", "last_complete": null, "last_error": null, "error_message": null }, "success": true}