Enable Cloudflare Pipelines
Cloudflare Pipelines ingests events, transforms them with SQL, and delivers them to R2 as Iceberg tables or as Parquet and JSON files. Logpush can write data to Pipelines as a native destination.
Instead of sending raw logs directly to a storage bucket as JSON, Logpush can route them to a Pipeline that filters, enriches, and transforms your data into Parquet files or Apache Iceberg tables managed by R2 Data Catalog. The resulting data is more compact and better optimized for analytics, such as querying with R2 SQL.
The Pipelines destination supports the following Logpush datasets:
| Scope | Datasets |
|---|---|
| Zone | http_requests, firewall_events, dns_logs |
| Account | workers_trace_events |
For a full list of fields available in each dataset, refer to Datasets.
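Once a dataset is delivered as an Iceberg table in R2 Data Catalog, you can query it with R2 SQL. The sketch below is a hypothetical example: the namespace `logs` and table `http_requests` are assumptions standing in for whatever catalog namespace and table name you configure, and the fields shown come from the `http_requests` dataset.

```sql
-- Hypothetical R2 SQL query against an Iceberg table created by this setup.
-- "logs.http_requests" is an assumed namespace.table; substitute your own.
SELECT EdgeStartTimestamp, ClientRequestHost, EdgeResponseStatus
FROM logs.http_requests
WHERE EdgeResponseStatus >= 500
LIMIT 100;
```

Because the table is columnar, a query like this scans only the columns it selects rather than every raw log line.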
- In the Cloudflare dashboard, go to the Logpush page at the account or domain (also known as zone) level.
- Select Create a Logpush job.
- Select Pipelines as the destination.
- In the Dataset step, select the dataset from the dropdown.
- A Pipeline name is auto-generated, but you can edit it.
- In the Destination step, configure the destination:
- Select an existing R2 bucket or type a new name to create one during setup.
- Choose the storage format: Parquet, JSON, or R2 Data Catalog (Apache Iceberg).
- If you select R2 Data Catalog, enter a catalog namespace and table name.
- Optionally, expand Delivery settings to configure roll size, roll interval, and other destination-specific settings. For more information about these settings, refer to the Pipelines Sinks documentation.
- In the Transform step, choose how to process logs before they are written to the Sink:
- Simple forwards all fields without modification (default).
- Custom SQL opens a SQL editor where you can filter, transform, and enrich your data. Refer to the Pipelines SQL reference for a complete list of SQL functions.
- In the Review step, verify your configuration and select Create. This automatically creates all required resources, including the Stream, Sink, R2 credentials or Data Catalog token, Pipeline, and the Logpush job.
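If you choose Custom SQL in the Transform step, the editor expects a statement that reads from the Pipeline's stream and writes to its sink. The sketch below is a minimal example under assumed names: `http_requests_stream` and `http_requests_sink` are placeholders for the stream and sink names generated for your Pipeline, and the fields are from the `http_requests` dataset.

```sql
-- Hypothetical Custom SQL transform: keep only error responses and
-- drop fields that are not needed downstream. Stream and sink names
-- are assumptions; the dashboard shows the names created for you.
INSERT INTO http_requests_sink
SELECT
  EdgeStartTimestamp,
  ClientRequestHost,
  ClientRequestURI,
  EdgeResponseStatus
FROM http_requests_stream
WHERE EdgeResponseStatus >= 400;
```

Filtering at this stage reduces both storage in R2 and the amount of data later queries have to scan. Refer to the Pipelines SQL reference for the functions and clauses the editor supports.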