
List Sinks

client.pipelines.sinks.list(params: SinkListParams, options?: RequestOptions): V4PagePaginationArray<SinkListResponse>
GET /accounts/{account_id}/pipelines/v1/sinks

List and filter sinks in an account.

Security
API Token

The preferred authorization scheme for interacting with the Cloudflare API. Create a token.

Example:Authorization: Bearer Sn3lZJTBX6kkg7OdcBUAxOO963GEIyGQqnFTOFYY
API Email + API Key

The previous authorization scheme for interacting with the Cloudflare API, used in conjunction with a Global API key.

Example:X-Auth-Email: user@example.com

The previous authorization scheme for interacting with the Cloudflare API. When possible, use API tokens instead of Global API keys.

Example:X-Auth-Key: 144c9defac04969c7bfad8efaa8ea194
Accepted Permissions (at least one required)
Pipelines Write, Pipelines Read
Parameters
params: SinkListParams { account_id, page, per_page, pipeline_id }
account_id: string

Path param: Specifies the public ID of the account.

page?: number

Query param

per_page?: number

Query param

pipeline_id?: string

Query param
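The path and query parameters above combine into the request URL. The helper below is a hypothetical sketch (not part of the SDK) showing how `account_id` fills the path while `page`, `per_page`, and `pipeline_id` become query parameters:

```typescript
// Hypothetical helper illustrating how SinkListParams map onto
// GET /accounts/{account_id}/pipelines/v1/sinks. Not part of the SDK.
interface SinkListParams {
  account_id: string;   // path param
  page?: number;        // query param
  per_page?: number;    // query param
  pipeline_id?: string; // query param
}

function buildSinksUrl(params: SinkListParams): string {
  const query = new URLSearchParams();
  if (params.page !== undefined) query.set('page', String(params.page));
  if (params.per_page !== undefined) query.set('per_page', String(params.per_page));
  if (params.pipeline_id !== undefined) query.set('pipeline_id', params.pipeline_id);
  const qs = query.toString();
  return `/accounts/${params.account_id}/pipelines/v1/sinks${qs ? `?${qs}` : ''}`;
}

console.log(buildSinksUrl({ account_id: 'abc123', page: 2, per_page: 10 }));
// /accounts/abc123/pipelines/v1/sinks?page=2&per_page=10
```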

Returns
SinkListResponse { id, created_at, modified_at, 5 more }
id: string

A unique identifier for this sink.

created_at: string
format: date-time
modified_at: string
format: date-time
name: string

Defines the name of the Sink.

maxLength: 128
minLength: 1
type: "r2" | "r2_data_catalog"

Specifies the type of sink.

One of the following:
"r2"
"r2_data_catalog"
config?: CloudflarePipelinesR2TablePublic { account_id, bucket, file_naming, 4 more } | CloudflarePipelinesR2DataCatalogTablePublic { account_id, bucket, table_name, 2 more }

Defines the configuration of the R2 Sink.

One of the following:
CloudflarePipelinesR2TablePublic { account_id, bucket, file_naming, 4 more }

R2 Sink public configuration.

account_id: string

Cloudflare Account ID for the bucket

bucket: string

R2 Bucket to write to

file_naming?: FileNaming { prefix, strategy, suffix }

Controls filename prefix/suffix and strategy.

prefix?: string

The prefix to use in the file name, e.g. prefix-.parquet

strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"

Filename generation strategy.

One of the following:
"serial"
"uuid"
"uuid_v7"
"ulid"
suffix?: string

Overrides the default file suffix (e.g. .parquet); use with caution.
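How `prefix`, `strategy`, and `suffix` might combine into an output file name can be sketched as below. The exact naming scheme the service uses is an assumption here; `fileName` is a hypothetical helper, and `randomUUID()` stands in for the uuid / uuid_v7 / ulid strategies:

```typescript
// Hypothetical sketch of FileNaming semantics; not the service's implementation.
import { randomUUID } from 'node:crypto';

type Strategy = 'serial' | 'uuid' | 'uuid_v7' | 'ulid';

function fileName(
  strategy: Strategy,
  prefix = '',
  suffix = '.parquet', // assumed default suffix; overriding is allowed but risky
  serial = 0,
): string {
  // randomUUID() stands in for uuid/uuid_v7/ulid generation in this sketch.
  const body =
    strategy === 'serial' ? String(serial).padStart(6, '0') : randomUUID();
  return `${prefix}${body}${suffix}`;
}

console.log(fileName('serial', 'prefix-')); // prefix-000000.parquet
console.log(fileName('uuid', 'events-'));   // events-<random uuid>.parquet
```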

jurisdiction?: string

Jurisdiction this bucket is hosted in

partitioning?: Partitioning { time_pattern }

Data-layout partitioning for sinks.

time_pattern?: string

The date pattern used to build partition paths (e.g. year=%Y/month=%m/day=%d/hour=%H)
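The effect of `time_pattern` can be sketched by expanding the strftime-style specifiers against an event timestamp. This hypothetical helper handles only the specifiers from the example pattern above; full pattern support is an assumption:

```typescript
// Hypothetical sketch: expands a strftime-style time_pattern into a
// partition path. Handles only %Y, %m, %d, %H.
function partitionPath(timePattern: string, date: Date): string {
  const pad = (n: number) => String(n).padStart(2, '0');
  return timePattern
    .replace('%Y', String(date.getUTCFullYear()))
    .replace('%m', pad(date.getUTCMonth() + 1))
    .replace('%d', pad(date.getUTCDate()))
    .replace('%H', pad(date.getUTCHours()));
}

console.log(
  partitionPath('year=%Y/month=%m/day=%d/hour=%H', new Date('2019-12-27T18:11:19Z')),
);
// year=2019/month=12/day=27/hour=18
```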

path?: string

Subpath within the bucket to write to

rolling_policy?: RollingPolicy { file_size_bytes, inactivity_seconds, interval_seconds }

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes?: number

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds?: number

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds?: number

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
CloudflarePipelinesR2DataCatalogTablePublic { account_id, bucket, table_name, 2 more }

R2 Data Catalog Sink public configuration.

account_id: string

Cloudflare Account ID

format: uri
bucket: string

The R2 Bucket that hosts this catalog

table_name: string

Table name

namespace?: string

Table namespace

rolling_policy?: RollingPolicy { file_size_bytes, inactivity_seconds, interval_seconds }

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes?: number

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds?: number

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds?: number

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
format?: Json { type, decimal_encoding, timestamp_format, unstructured } | Parquet { type, compression, row_group_bytes }
One of the following:
Json { type, decimal_encoding, timestamp_format, unstructured }
type: "json"
decimal_encoding?: "number" | "string" | "bytes"
One of the following:
"number"
"string"
"bytes"
timestamp_format?: "rfc3339" | "unix_millis"
One of the following:
"rfc3339"
"unix_millis"
unstructured?: boolean
Parquet { type, compression, row_group_bytes }
type: "parquet"
compression?: "uncompressed" | "snappy" | "gzip" | 2 more
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes?: number | null
format: int64
minimum: 0
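What the JSON format's `timestamp_format` option means for emitted values can be sketched as follows: "rfc3339" writes an ISO-8601 string, "unix_millis" writes milliseconds since the epoch. The `encodeTimestamp` helper is illustrative, not the service's encoder:

```typescript
// Hypothetical sketch of the timestamp_format option's effect on output.
function encodeTimestamp(
  ts: Date,
  timestampFormat: 'rfc3339' | 'unix_millis' = 'rfc3339',
): string | number {
  return timestampFormat === 'rfc3339' ? ts.toISOString() : ts.getTime();
}

const t = new Date('2019-12-27T18:11:19.117Z');
console.log(encodeTimestamp(t, 'rfc3339'));     // 2019-12-27T18:11:19.117Z
console.log(encodeTimestamp(t, 'unix_millis')); // 1577470279117
```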
schema?: Schema { fields, format, inferred }
fields?: Array<Int32 { type, metadata_key, name, 2 more } | Int64 { type, metadata_key, name, 2 more } | Float32 { type, metadata_key, name, 2 more } | 8 more>
One of the following:
Int32 { type, metadata_key, name, 2 more }
type: "int32"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Int64 { type, metadata_key, name, 2 more }
type: "int64"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Float32 { type, metadata_key, name, 2 more }
type: "float32"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Float64 { type, metadata_key, name, 2 more }
type: "float64"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Bool { type, metadata_key, name, 2 more }
type: "bool"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
String { type, metadata_key, name, 2 more }
type: "string"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Binary { type, metadata_key, name, 2 more }
type: "binary"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Timestamp { type, metadata_key, name, 3 more }
type: "timestamp"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
unit?: "second" | "millisecond" | "microsecond" | "nanosecond"
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
Json { type, metadata_key, name, 2 more }
type: "json"
metadata_key?: string | null
name?: string
required?: boolean
sql_name?: string
Struct
List
format?: Json { type, decimal_encoding, timestamp_format, unstructured } | Parquet { type, compression, row_group_bytes }
One of the following:
Json { type, decimal_encoding, timestamp_format, unstructured }
type: "json"
decimal_encoding?: "number" | "string" | "bytes"
One of the following:
"number"
"string"
"bytes"
timestamp_format?: "rfc3339" | "unix_millis"
One of the following:
"rfc3339"
"unix_millis"
unstructured?: boolean
Parquet { type, compression, row_group_bytes }
type: "parquet"
compression?: "uncompressed" | "snappy" | "gzip" | 2 more
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes?: number | null
format: int64
minimum: 0
inferred?: boolean | null

List Sinks

import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

// Automatically fetches more pages as needed.
for await (const sinkListResponse of client.pipelines.sinks.list({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
})) {
  console.log(sinkListResponse.id);
}
{
  "result": [
    {
      "id": "01234567890123457689012345678901",
      "created_at": "2019-12-27T18:11:19.117Z",
      "modified_at": "2019-12-27T18:11:19.117Z",
      "name": "my_sink",
      "type": "r2",
      "config": {
        "account_id": "account_id",
        "bucket": "bucket",
        "file_naming": {
          "prefix": "prefix",
          "strategy": "serial",
          "suffix": "suffix"
        },
        "jurisdiction": "jurisdiction",
        "partitioning": {
          "time_pattern": "year=%Y/month=%m/day=%d/hour=%H"
        },
        "path": "path",
        "rolling_policy": {
          "file_size_bytes": 0,
          "inactivity_seconds": 1,
          "interval_seconds": 1
        }
      },
      "format": {
        "type": "json",
        "decimal_encoding": "number",
        "timestamp_format": "rfc3339",
        "unstructured": true
      },
      "schema": {
        "fields": [
          {
            "type": "int32",
            "metadata_key": "metadata_key",
            "name": "name",
            "required": true,
            "sql_name": "sql_name"
          }
        ],
        "format": {
          "type": "json",
          "decimal_encoding": "number",
          "timestamp_format": "rfc3339",
          "unstructured": true
        },
        "inferred": true
      }
    }
  ],
  "result_info": {
    "count": 1,
    "page": 0,
    "per_page": 10,
    "total_count": 1
  },
  "success": true
}