Enable S3-compatible endpoints
Cloudflare Logpush supports pushing logs to S3-compatible destinations via the Cloudflare dashboard or via API, including:
- Alibaba Cloud OSS ↗
- Backblaze B2 ↗
- DigitalOcean Spaces ↗
- IBM Cloud Object Storage ↗
- JD Cloud Object Storage Service ↗
- Linode Object Storage ↗
- Oracle Cloud Object Storage ↗
- On-premise Ceph Object Gateway ↗
For more information about Logpush and the current production APIs, refer to Cloudflare Logpush documentation.
1. In the Cloudflare dashboard, go to the Logpush page at the account or domain (also known as zone) level.
   - For account: Go to Logpush
   - For domain (also known as zone): Go to Logpush
   Depending on your choice, you have access to account-scoped datasets and zone-scoped datasets, respectively.
2. Select Create a Logpush job.
3. In Select a destination, choose S3-Compatible.
4. Enter or select the following destination information:
   - Bucket - S3-compatible bucket name
   - Path - bucket location within the storage container
   - Organize logs into daily subfolders (recommended)
   - Endpoint URL - the URL without the bucket name or path. Example: sfo2.digitaloceanspaces.com
   - Bucket region
   - Access Key ID
   - Secret Access Key

   When you are done entering the destination details, select Continue.
5. Select the dataset to push to the storage service.
6. In the next step, configure your Logpush job:
   - Enter the Job name.
   - Under If logs match, you can select the events to include and/or remove from your logs. Refer to Filters for more information. Not all datasets have this option available.
   - In Send the following fields, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
7. In Advanced Options, you can:
   - Choose the format of timestamp fields in your logs (RFC3339 (default), Unix, or UnixNano).
   - Select a sampling rate for your logs or push a randomly-sampled percentage of logs.
   - Enable redaction for CVE-2021-44228. This option will replace every occurrence of ${ with x{.
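The three timestamp options above render the same instant differently. As a minimal illustrative sketch (the exact field formatting in delivered logs may differ slightly):

```python
from datetime import datetime, timezone

# One instant, rendered in the three timestamp formats Logpush offers.
t = datetime(2023, 5, 1, 12, 30, 0, tzinfo=timezone.utc)

rfc3339 = t.strftime("%Y-%m-%dT%H:%M:%SZ")      # RFC3339 (default)
unix = int(t.timestamp())                        # Unix: seconds since epoch
unix_nano = int(t.timestamp()) * 1_000_000_000   # UnixNano: nanoseconds since epoch

print(rfc3339)    # 2023-05-01T12:30:00Z
print(unix)       # 1682944200
print(unix_nano)  # 1682944200000000000
```

RFC3339 is human-readable, while Unix and UnixNano sort and compare cheaply in downstream pipelines.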
8. Select Submit once you are done configuring your Logpush job.
To set up S3-compatible endpoints:
- Create a job with the appropriate endpoint URL and authentication parameters.
- Enable the job to begin pushing logs.
Ensure Log Share permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the Roles section.
To create a job, make a POST request to the Logpush jobs endpoint with the following fields:
- name (optional) - Use your domain name as the job name.
- destination_conf - A log destination consisting of an endpoint name, bucket name, bucket path, region, access-key-id, and secret-access-key in the following string format:
  `"s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>"`
- dataset - The category of logs you want to receive. Refer to Datasets for the full list of supported datasets.
- output_options (optional) - To configure fields, sample rate, and timestamp format, refer to Log Output Options.
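Because destination_conf packs several values into one query string, it can help to assemble it programmatically. A minimal sketch with a hypothetical helper; the bucket, path, region, key, and endpoint values below are placeholders, not real credentials:

```python
from urllib.parse import quote

def build_destination_conf(bucket, path, region, access_key_id,
                           secret_access_key, endpoint):
    """Assemble a destination_conf string for an S3-compatible Logpush job.

    Keys are percent-encoded in case they contain characters that are
    unsafe in a query string.
    """
    return (
        f"s3://{bucket}/{path}"
        f"?region={region}"
        f"&access-key-id={quote(access_key_id)}"
        f"&secret-access-key={quote(secret_access_key)}"
        f"&endpoint={endpoint}"
    )

# Placeholder values for illustration only.
conf = build_destination_conf(
    "my-logs", "cloudflare/http", "sfo2",
    "AKIAEXAMPLE", "secretEXAMPLE", "sfo2.digitaloceanspaces.com",
)
print(conf)
```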
Example request using cURL:
Required API token permissions

At least one of the following token permissions is required:
- Logs Write
```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>",
    "output_options": {
      "field_names": [
        "ClientIP",
        "ClientRequestHost",
        "ClientRequestMethod",
        "ClientRequestURI",
        "EdgeEndTimestamp",
        "EdgeResponseBytes",
        "EdgeResponseStatus",
        "EdgeStartTimestamp",
        "RayID"
      ],
      "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests",
    "enabled": true
  }'
```

Response:
```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "kind": "",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": [
        "ClientIP",
        "ClientRequestHost",
        "ClientRequestMethod",
        "ClientRequestURI",
        "EdgeEndTimestamp",
        "EdgeResponseBytes",
        "EdgeResponseStatus",
        "EdgeStartTimestamp",
        "RayID"
      ],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Refer to Manage Logpush with cURL to update a job (including enabling and disabling).
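As a hedged sketch of the update flow covered on that page: the jobs endpoint also accepts a PUT with a partial body to enable or disable an existing job. The snippet below only assembles the request pieces rather than sending them, since real credentials are required; the zone ID and job ID are placeholders:

```python
import json

# Placeholder identifiers for illustration only. In practice, use your
# zone ID and the job ID returned in the create-job response above.
zone_id = "<ZONE_ID>"
job_id = 146

# The per-job endpoint: the jobs collection URL plus the job ID.
url = f"https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs/{job_id}"

# A partial body toggling the job off; set True to re-enable it.
body = json.dumps({"enabled": False})

print(url)
print(body)
```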