---
title: Pipelines Changelog
image: https://developers.cloudflare.com/cf-twitter-card.png
---


# Changelog

New updates and improvements at Cloudflare.


Pipelines


Apr 20, 2026
1. ### [Cloudflare Pipelines as a Logpush destination](https://developers.cloudflare.com/changelog/post/2026-04-20-pipelines-logpush-destination/)  
[ Logs ](https://developers.cloudflare.com/logs/)[ Pipelines ](https://developers.cloudflare.com/pipelines/)  
Logpush has traditionally been great at delivering Cloudflare logs to a variety of destinations in JSON format. While JSON is flexible and easily readable, it can be inefficient to store and query at scale.  
With this release, you can now send your logs directly to [Pipelines](https://developers.cloudflare.com/pipelines/) to ingest, transform, and store them in [R2](https://developers.cloudflare.com/r2/) as Parquet files or Apache Iceberg tables managed by [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/). This shrinks your data footprint and lets you query your logs instantly with [R2 SQL](https://developers.cloudflare.com/r2-sql/) or any other query engine that supports Apache Iceberg or Parquet.  
#### Transform logs before storage  
Pipelines SQL runs on each log record in-flight, so you can reshape your data before it is written. For example, you can drop noisy fields, redact sensitive values, or derive new columns:  
```  
INSERT INTO http_logs_sink  
SELECT  
  ClientIP,  
  EdgeResponseStatus,  
  to_timestamp_micros(EdgeStartTimestamp) AS event_time,  
  upper(ClientRequestMethod) AS method,  
  sha256(ClientIP) AS hashed_ip  
FROM http_logs_stream  
WHERE EdgeResponseStatus >= 400;  
```  
Pipelines SQL supports string functions, regex, hashing, JSON extraction, timestamp conversion, conditional expressions, and more. For the full list, refer to the [Pipelines SQL reference](https://developers.cloudflare.com/pipelines/sql-reference/).  
#### Get started  
To configure Pipelines as a Logpush destination, refer to [Enable Cloudflare Pipelines](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/pipelines/).

Feb 24, 2026
1. ### [Dropped event metrics, typed Pipelines bindings, and improved setup](https://developers.cloudflare.com/changelog/post/2026-02-24-typed-bindings-setup-improvements-error-metrics/)  
[ Pipelines ](https://developers.cloudflare.com/pipelines/)[ Workers ](https://developers.cloudflare.com/workers/)  
[Cloudflare Pipelines](https://developers.cloudflare.com/pipelines/) ingests streaming data via [Workers](https://developers.cloudflare.com/workers/) or HTTP endpoints, transforms it with SQL, and writes it to [R2](https://developers.cloudflare.com/r2/) as Apache Iceberg tables. Today we're shipping three improvements to help you understand why streaming events get dropped, catch data quality issues early, and set up Pipelines faster.  
#### Dropped event metrics  
When [stream](https://developers.cloudflare.com/pipelines/streams/) events don't match the expected schema, Pipelines accepts them during ingestion but drops them when attempting to deliver them to the [sink](https://developers.cloudflare.com/pipelines/sinks/). To help you identify the root cause of these issues, we are introducing a new dashboard and metrics that surface dropped events with detailed error messages.  
![The Errors tab in the Cloudflare dashboard showing deserialization errors grouped by type with individual error details](https://developers.cloudflare.com/_astro/pipelines-error-log-dash.6JIa7r5d_Z1ILPxd.webp)  
Dropped events can also be queried programmatically via the new `pipelinesUserErrorsAdaptiveGroups` GraphQL dataset. The dataset breaks down failures by specific error type (`missing_field`, `type_mismatch`, `parse_failure`, or `null_value`) so you can trace issues back to the source.  
```  
query GetPipelineUserErrors(  
  $accountTag: String!  
  $pipelineId: String!  
  $datetimeStart: Time!  
  $datetimeEnd: Time!  
) {  
  viewer {  
    accounts(filter: { accountTag: $accountTag }) {  
      pipelinesUserErrorsAdaptiveGroups(  
        limit: 100  
        filter: {  
          pipelineId: $pipelineId  
          datetime_geq: $datetimeStart  
          datetime_leq: $datetimeEnd  
        }  
        orderBy: [count_DESC]  
      ) {  
        count  
        dimensions {  
          errorFamily  
          errorType  
        }  
      }  
    }  
  }  
}  
```  
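You can run this query against the Cloudflare GraphQL API with any HTTP client. Here is a minimal TypeScript sketch; the token, account tag, and pipeline ID are placeholders, and `query` holds the query above:  
TypeScript  
```  
// Run the GetPipelineUserErrors query against the Cloudflare GraphQL API.
// API_TOKEN, ACCOUNT_TAG, and PIPELINE_ID are placeholders for your own values.
const API_TOKEN = "<api-token>";
const ACCOUNT_TAG = "<account-id>";
const PIPELINE_ID = "<pipeline-id>";
const query = `query GetPipelineUserErrors(...) { ... }`; // paste the full query above

const response = await fetch("https://api.cloudflare.com/client/v4/graphql", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    query,
    variables: {
      accountTag: ACCOUNT_TAG,
      pipelineId: PIPELINE_ID,
      datetimeStart: "2026-02-23T00:00:00Z",
      datetimeEnd: "2026-02-24T00:00:00Z",
    },
  }),
});

const { errors, data } = await response.json();
if (errors) throw new Error(JSON.stringify(errors));
console.log(data.viewer.accounts[0].pipelinesUserErrorsAdaptiveGroups);
```  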
For the full list of dimensions, error types, and additional query examples, refer to [User error metrics](https://developers.cloudflare.com/pipelines/observability/metrics/#user-error-metrics).  
#### Typed Pipelines bindings  
Sending data to a Pipeline from a Worker previously used a generic `Pipeline<PipelineRecord>` type, which meant schema mismatches (wrong field names, incorrect types) were only caught at runtime as dropped events.  
Running `wrangler types` now generates schema-specific TypeScript types for your [Pipeline bindings](https://developers.cloudflare.com/pipelines/streams/writing-to-streams/#send-via-workers). TypeScript catches missing required fields and incorrect field types at compile time, before your code is deployed.  
TypeScript  
```  
declare namespace Cloudflare {  
  type EcommerceStreamRecord = {  
    user_id: string;  
    event_type: string;  
    product_id?: string;  
    amount?: number;  
  };  
  interface Env {  
    STREAM: import("cloudflare:pipelines").Pipeline<Cloudflare.EcommerceStreamRecord>;  
  }  
}  
```  
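With these types generated, a `send` call that omits a required field or mistypes one fails to compile. A minimal Worker sketch (the handler logic and event values are illustrative):  
TypeScript  
```  
export default {
  async fetch(request: Request, env: Cloudflare.Env): Promise<Response> {
    // Compiles: the record matches EcommerceStreamRecord.
    await env.STREAM.send([
      { user_id: "u_123", event_type: "purchase", product_id: "p_42", amount: 19.99 },
    ]);

    // Fails to compile: `amount` must be a number, not a string.
    // await env.STREAM.send([{ user_id: "u_123", event_type: "refund", amount: "19.99" }]);

    return new Response("ok");
  },
};
```  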
For more information, refer to [Typed Pipeline bindings](https://developers.cloudflare.com/pipelines/streams/writing-to-streams/#typed-pipeline-bindings).  
#### Improved Pipelines setup  
Setting up a new Pipeline previously required multiple manual steps: creating an R2 bucket, enabling R2 Data Catalog, generating an API token, and configuring format, compression, and rolling policies individually.  
The `wrangler pipelines setup` command now offers a **Simple** setup mode that applies recommended defaults and automatically creates the [R2 bucket](https://developers.cloudflare.com/r2/buckets/) and enables [R2 Data Catalog](https://developers.cloudflare.com/r2/data-catalog/) if they do not already exist. Validation errors during setup prompt you to retry inline rather than restarting the entire process.  
For a full walkthrough, refer to the [Getting started guide](https://developers.cloudflare.com/pipelines/getting-started/).

Sep 25, 2025
1. ### [Pipelines now supports SQL transformations and Apache Iceberg](https://developers.cloudflare.com/changelog/post/2025-09-25-pipelines-sql/)  
[ Pipelines ](https://developers.cloudflare.com/pipelines/)  
Today, we're launching the new [Cloudflare Pipelines](https://developers.cloudflare.com/pipelines/): a streaming data platform that ingests events, transforms them with [SQL](https://developers.cloudflare.com/pipelines/sql-reference/select-statements/), and writes to [R2](https://developers.cloudflare.com/r2/) as [Apache Iceberg ↗](https://iceberg.apache.org/) tables or Parquet files.  
Pipelines can receive events via [HTTP endpoints](https://developers.cloudflare.com/pipelines/streams/writing-to-streams/#send-via-http) or [Worker bindings](https://developers.cloudflare.com/pipelines/streams/writing-to-streams/#send-via-workers), transform them with SQL, and deliver them to R2 with exactly-once guarantees. This makes it easy to build analytics-ready warehouses for server logs, mobile application events, IoT telemetry, or clickstream data without managing streaming infrastructure.  
For example, here's a pipeline that ingests clickstream events and filters out bot traffic while extracting domain information:  
```  
INSERT INTO events_table  
SELECT  
  user_id,  
  lower(event) AS event_type,  
  to_timestamp_micros(ts_us) AS event_time,  
  regexp_match(url, '^https?://([^/]+)')[1] AS domain,  
  url,  
  referrer,  
  user_agent  
FROM events_json  
WHERE event = 'page_view'  
  AND NOT regexp_like(user_agent, '(?i)bot|spider');  
```  
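Upstream of the SQL, events reach the pipeline through its stream. For example, you might post a batch of clickstream events to the stream's HTTP endpoint as a JSON array (a sketch; the endpoint URL is a placeholder for the one shown when you create the stream):  
TypeScript  
```  
// Post a batch of events to the stream's HTTP ingest endpoint.
// STREAM_ENDPOINT is a placeholder; use the URL shown when you create the stream.
const STREAM_ENDPOINT = "https://<stream-endpoint>";

const events = [
  {
    user_id: "u_123",
    event: "page_view",
    ts_us: Date.now() * 1000, // microseconds, matching to_timestamp_micros(ts_us)
    url: "https://shop.example.com/products/42",
    referrer: "https://www.example.com/",
    user_agent: "Mozilla/5.0",
  },
];

const res = await fetch(STREAM_ENDPOINT, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(events),
});
if (!res.ok) throw new Error(`Ingest failed: ${res.status}`);
```  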
Get started by creating a pipeline in the dashboard or running a single command in [Wrangler](https://developers.cloudflare.com/workers/wrangler/):  
Terminal window  
```  
npx wrangler pipelines setup  
```  
Check out our [getting started guide](https://developers.cloudflare.com/pipelines/getting-started/) to learn how to create a pipeline that delivers events to an [Iceberg table](https://developers.cloudflare.com/r2/data-catalog/) you can query with R2 SQL. Read more about today's announcement in our [blog post ↗](https://blog.cloudflare.com/cloudflare-data-platform).

Apr 10, 2025
1. ### [Cloudflare Pipelines now available in beta](https://developers.cloudflare.com/changelog/post/2025-04-10-launching-pipelines/)  
[ Pipelines ](https://developers.cloudflare.com/pipelines/)[ R2 ](https://developers.cloudflare.com/r2/)[ Workers ](https://developers.cloudflare.com/workers/)  
[Cloudflare Pipelines](https://developers.cloudflare.com/pipelines) is now available in beta to all users with a [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing) plan.  
Pipelines let you ingest high volumes of real-time data without managing the underlying infrastructure. A single pipeline can ingest up to 100 MB of data per second, via HTTP or from a [Worker](https://developers.cloudflare.com/workers). Ingested data is automatically batched, written to output files, and delivered to an [R2 bucket](https://developers.cloudflare.com/r2) in your account. You can use Pipelines to build a data lake of clickstream data or to store events from a Worker.  
Create your first pipeline with a single command:  
Create a pipeline  
```  
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket  
🌀 Authorizing R2 bucket "my-bucket"  
🌀 Creating pipeline named "my-clickstream-pipeline"  
✅ Successfully created pipeline my-clickstream-pipeline  
Id:    0e00c5ff09b34d018152af98d06f5a1xvc  
Name:  my-clickstream-pipeline  
Sources:  
  HTTP:  
    Endpoint:        https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/  
    Authentication:  off  
    Format:          JSON  
  Worker:  
    Format:  JSON  
Destination:  
  Type:         R2  
  Bucket:       my-bucket  
  Format:       newline-delimited JSON  
  Compression:  GZIP  
Batch hints:  
  Max bytes:     100 MB  
  Max duration:  300 seconds  
  Max records:   100,000  
🎉 You can now send data to your pipeline!  
Send data to your pipeline's HTTP endpoint:  
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'  
To send data to your pipeline from a Worker, add the following configuration to your config file:  
{  
  "pipelines": [  
    {  
      "pipeline": "my-clickstream-pipeline",  
      "binding": "PIPELINE"  
    }  
  ]  
}  
```  
Head over to our [getting started guide](https://developers.cloudflare.com/pipelines/getting-started) for an in-depth tutorial on building with Pipelines.
