
Enable Logpush to Amazon S3

Cloudflare Logpush supports pushing logs directly to Amazon S3 via the Cloudflare dashboard or via API. Customers that use AWS GovCloud locations should use our S3-compatible endpoint and not the Amazon S3 endpoint.

Manage via the Cloudflare dashboard

  1. Log in to the Cloudflare dashboard.

  2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to account-scoped datasets and zone-scoped datasets, respectively.

  3. Go to Analytics & Logs > Logpush.

  4. Select Create a Logpush job.

  5. In Select a destination, choose Amazon S3.

  6. Enter or select the destination information for your S3 bucket.

When you are done entering the destination details, select Continue.

  7. To prove ownership, Cloudflare will send a challenge file to your designated destination. To find the token, open the ownership challenge file (for example, with the Open button in the file's Overview tab in the S3 console), then enter the Ownership Token in the Cloudflare dashboard and select Continue to verify your access to the bucket.

  8. Select the dataset to push to the storage service.

  9. In the next step, configure your Logpush job:

    • Enter the Job name.
    • Under If logs match, you can select the events to include and/or remove from your logs. Refer to Filters for more information. Not all datasets have this option available.
    • In Send the following fields, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
  10. In Advanced Options, you can:

    • Choose the format of timestamp fields in your logs (RFC3339 (default), Unix, or UnixNano).
    • Select a sampling rate to push only a randomly-sampled percentage of your logs.
    • Enable redaction for CVE-2021-44228. This option will replace every occurrence of ${ with x{.
  11. Select Submit once you are done configuring your Logpush job.
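
As noted above, jobs can also be managed via the API instead of the dashboard. The sketch below is a minimal, illustrative example of creating an equivalent zone-scoped job with Python; the zone ID, API token, dataset, region, and job name are placeholder assumptions, and the ownership token must still be copied out of the challenge file that Cloudflare writes to your bucket.

# Minimal sketch: create a comparable Logpush job via the API.
# ZONE_ID, API_TOKEN, the region, and the job/dataset names are placeholders.
import requests

ZONE_ID = "your_zone_id"
API_TOKEN = "your_api_token"  # token with Logs Edit permission
DEST = "s3://burritobot/logs?region=us-west-2"  # example bucket from this page

API = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"}

# Ask Cloudflare to write an ownership-challenge file into the bucket.
requests.post(f"{API}/ownership", headers=HEADERS,
              json={"destination_conf": DEST}).raise_for_status()

# Read the token out of that file, then create the job.
job = {
    "name": "s3-http-requests",             # placeholder job name
    "dataset": "http_requests",             # placeholder dataset
    "destination_conf": DEST,
    "ownership_challenge": "token_from_challenge_file",
    "enabled": True,
}
resp = requests.post(f"{API}/jobs", headers=HEADERS, json=job)
resp.raise_for_status()
print(resp.json())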

Create and get access to an S3 bucket

Cloudflare uses Amazon Identity and Access Management (IAM) to gain access to your S3 bucket. The Cloudflare IAM user needs PutObject permission for the bucket.

Logs are written into that bucket as gzipped objects using the S3 Access Control List (ACL) Bucket-owner-full-control permission.

For illustrative purposes, imagine that you want to store logs in the bucket burritobot, in the logs directory. The S3 URL would then be s3://burritobot/logs.
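
Once the job is running, each pushed batch shows up under that prefix as a gzipped object containing newline-delimited JSON records (the default output format). A minimal sketch for listing and reading them with boto3, reusing the illustrative burritobot bucket and assuming AWS credentials with read access are configured locally:

# Minimal sketch: list and decode gzipped Logpush objects with boto3.
# Bucket and prefix reuse the illustrative burritobot example above.
import gzip
import json

import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="burritobot", Prefix="logs/")
for obj in resp.get("Contents", []):
    body = s3.get_object(Bucket="burritobot", Key=obj["Key"])["Body"].read()
    for line in gzip.decompress(body).splitlines():
        print(json.loads(line))  # each line is one log record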

Ensure Log Share permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the Roles section.

To enable Logpush to Amazon S3:

  1. Create an S3 bucket. Refer to instructions from Amazon.

  2. Edit and paste the policy below into S3 > Bucket > Permissions > Bucket Policy, replacing the Resource value with your own bucket path. The AWS Principal is owned by Cloudflare and should not be changed.

{
  "Id": "Policy1506627184792",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1506627150918",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::burritobot/logs/*",
      "Principal": {
        "AWS": ["arn:aws:iam::391854517948:user/cloudflare-logpush"]
      }
    }
  ]
}
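
If you would rather attach the policy from code than through the S3 console, the sketch below shows one way to do it with boto3; it assumes the edited policy is saved locally as bucket-policy.json and that your AWS credentials are allowed to call PutBucketPolicy on the bucket.

# Minimal sketch: attach the bucket policy with boto3 instead of the console.
# Assumes the edited policy above is saved locally as bucket-policy.json.
import boto3

with open("bucket-policy.json") as f:
    policy = f.read()

boto3.client("s3").put_bucket_policy(Bucket="burritobot", Policy=policy)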